Monday, May 25, 2015
I love Twitter for many reasons, but most of all because, in my own experience and use of it, it has been an excellent way to connect with people and talk through ideas. I learn something new every single day. I tend to avoid fights, though I do frequently express my views on controversial subjects. For me, Twitter has been a space for networking with a range of people who share my interests in education technology, digital pedagogy, and higher education policy. It's also been a great space for thinking through and adding nuance to my views through dialogue with smart interlocutors.
Yesterday I wrote a blog post about the need for faculty to get up to date on education technology. Implicit in my post was the view that classroom technologies like student response systems make it much easier to implement best practices in our teaching. It's not that I want faculty using technology for the sake of technology; but rather, that education technology is a crucial tool for updating one's pedagogy to integrate current best practices. The growing array of ed tech tools--which seems to expand by the day--makes it easier for us to do our job well, but it requires faculty to be aware of what is out there and figure out how to use it.
To give just a few examples: student response systems are a fantastic way to check comprehension and identify which topics need clarification and which are already well understood, in large enrollment classes (and even in small ones). I use i>Clicker questions to tell me what I need to spend time on in class. Instead of just running through my prepared lecture, I figure out what students understood from their pre-class work, and what they didn't. I then spend the majority of my time on the parts they didn't understand. It requires a lot of improvisation, and has completely changed the way I teach large classes.
i>Clickers are especially helpful because they are anonymous and you get a response from all students, not just the few who raise their hands.
A second example: we know that practice and rapid feedback are crucial for learning. This can be entirely automated by creating a database of practice questions that include feedback for the students. These questions give students repeated practice at answering questions, and they let them know quickly where they need to do more review. In addition, well-written questions can push the students to make connections across the course. In all of my classes now, I have a database of automatically graded questions (MC, matching, ordering, etc.) for the students to practice on and build their mastery of the course content. This small change, which required a substantial initial investment of time, has done wonders for student learning. Most obviously, it has meant that students can now do much more higher-order thinking in class and in other activities.
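To make the idea concrete, here is a minimal sketch of what such a self-graded practice bank looks like under the hood. The class name, the sample question, and the feedback strings are my own illustration, not a feature of any particular LMS; the point is simply that each item carries its answer and per-choice feedback, so grading and review hints can be fully automated.

```python
import random

class PracticeItem:
    """A practice question that grades itself and returns targeted feedback."""

    def __init__(self, prompt, choices, answer, feedback):
        self.prompt = prompt        # the question text
        self.choices = choices      # possible responses
        self.answer = answer        # the correct response
        self.feedback = feedback    # per-choice feedback shown immediately

    def grade(self, response):
        # Return whether the response was correct, plus the feedback
        # the student sees right away.
        return response == self.answer, self.feedback[response]

# A tiny bank; a real course would have hundreds of items per module.
bank = [
    PracticeItem(
        prompt="Which assembly elected the consuls?",
        choices=["comitia centuriata", "comitia tributa", "senate"],
        answer="comitia centuriata",
        feedback={
            "comitia centuriata": "Correct: the centuriate assembly elected consuls and praetors.",
            "comitia tributa": "Not quite: the tribal assembly elected lower magistrates.",
            "senate": "No: the senate advised magistrates; it did not elect consuls.",
        },
    ),
]

# A student draws a random item and gets instant feedback on the attempt.
item = random.choice(bank)
correct, message = item.grade("comitia centuriata")
```

Because the feedback lives with the question, the review loop runs with no instructor in it, which is what makes the up-front investment of writing the bank pay off semester after semester.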
The incredible array of ed tech tools is all great if you are fortunate enough to teach at a university where your classrooms are "smart," i.e. equipped with the technologies you want to use, like lecture capture or even a good projector or Smart Board. As well, if you want students working online during class, the room needs to be able to support those wireless connections. In large enrollment courses, this can be a huge challenge. Even at UT Austin, we only have a few classrooms that can reliably support more than 100 or so wireless connections at one time. This should change when Google Fiber comes to campus; but UT is, in this respect, a very advanced campus. More typically, austerity has meant significant cuts to IT budgets, and this has had serious consequences for classrooms. At many non-elite, public universities, classrooms are seriously out of date.
And then there's the question of accessibility for students. As Megan Kelly gently reminded me this morning on Twitter, i>Clickers are an additional expense for students, one that many don't want to take on. In addition, especially in more rural areas, students don't necessarily have reliable access to a broadband connection. Even if our campus libraries and such provide that access, it may not be feasible for a commuter student who is working 30-40 hours/week to remain on campus to do their homework and studying. Even at UT Austin, this issue occasionally crops up. The first semester that I taught my Introduction to Ancient Rome as a blended class, I had a lot of students who lived in the campus dorms. They paid for broadband based on their usage at that time and many were upset to be using their broadband bytes for school instead of gaming or Netflix. Again, no matter how eager an instructor might be to integrate digital assets into their course, it may not be feasible if the students can't easily access the online content.
As Megan noted, in the classroom there are some ways around these limitations, using analog tools that let us implement the same pedagogies. I will say, when I teach on campus, I always opt for analog over digital--partly because we are still at a stage where things go wrong with the digital. So, for example, I give low stakes, weekly quizzes. It would be much, much easier to do these online but I can't assume that all students have a device to get online; nor can I assume that the wireless will support 200+ students at one time, especially on a timed quiz. Rather than deal with endless tech issues, I use scantrons. It's more work but it saves the hassle of constantly troubleshooting tech issues.
I am also a huge fan of student response systems in class. But there are a lot of ways to do this, including making laminated cards of different colors and having students hold these up. I ended up not using them, but I have a box of laminated cards, four colors held together by a ring. I was planning to hand these out to students at the start of the semester and then collect them at the end. These analog methods are not quite as easy as using an i>Clicker, but they accomplish the same goal. I also do a lot of "talk to your neighbor" with complex questions and then have the groups click in with their response. This could also be done with laminated cards.
One of the things that my conversation with Megan clarified for me was that, in pushing my faculty colleagues to get up to date on education technology, what I was really advocating was that they get up to date on the latest pedagogy, as they are informed by the learning sciences. It's always about the pedagogy, not the technology. Ed tech tools can make it a lot easier to integrate good pedagogical techniques into our classes--most especially in the area of providing immediate feedback and frequent practice. But ed tech does not replace pedagogy; and much of what ed tech does can be accomplished with analog tools.
At the same time, it is crucial that universities invest in keeping their IT and classroom technology up to date. This also seems like an area where foundations ought to be looking to provide financial assistance. In addition, libraries should be places where students can go to do any work that requires a quick and reliable broadband connection. This doesn't entirely solve the problem for our students, many of whom are commuting to campus and working long hours off campus; but it at least provides the option.
Foundations like the Gates Foundation, I have an idea for you: provide stipends for students who don't have access to broadband at home. Make it possible for them to sign up for broadband so long as they are registered students. Similarly, it must be possible for companies like i>Clicker to provide under-resourced public universities, especially in more rural areas, with used i>Clickers (e.g. first generation i>Clickers). These first generation clickers are perfectly functional for what most of us need and it would be a great service to students and instructors. Just a thought...
Sunday, May 24, 2015
Likewise, if one is in any kind of leadership role, with decision-making authority, it is simply inexcusable to be unaware of the vigorous conversation about the role of education technology in higher education. In particular, it is essential for our college/university leaders to grasp what is at stake when a decision is made about, for instance, outsourcing online course development to the private sector. Higher education, especially public education, has become deeply politicized. Debates about the role of education technology are central to the politics of higher education, in part because there is a lot of money at stake. As campus IT evolves into a complex (and expensive) part of the campus infrastructure, more and more resources are directed away from instructional budgets and towards IT departments (in various configurations). This includes the creation of entirely new units in such areas as Learning Sciences and Data Analytics. This isn't a bad thing; it has the potential to be quite a good thing. But it is a big change.
This reallocation of resources is largely invisible to most teaching faculty and even department chairs; yet an awareness of it is essential. For one thing, it might finally demonstrate to departments that our universities are in the midst of significantly redefining the way they accomplish their mission. Rank-and-file faculty remain ignorant of this important conversation at their own risk. Departments with graduate programs have an ethical obligation to ensure that their graduate students are trained in the latest best practices, including a strong knowledge of online course design and implementation; and a basic knowledge of important ed tech tools. The graduate students of today will be expected to have this skill set when they apply for jobs, and many of them will be asked to teach both in classrooms and online. It is inexcusable to not prepare those who pursue academic jobs for this reality.
For many decades, it was possible for faculty, and especially for our department leadership, to be disconnected from the larger, national discussions about higher education. It is surprising to me how few faculty have any real sense of the vigorous debates currently happening, especially around the issue of the private sector elbowing its way into both K-12 and higher education. It is simply impossible for departments (and Colleges within universities) to make wise decisions about where to direct resources if they don't understand this encroachment by the private sector--and the factors fueling it. Most faculty have little sense of where their students are taking classes. They don't realize that, over the past three years, the number of students taking courses outside of the university--either online, at community colleges, or even at other four-year institutions--has increased exponentially, to the point that a college like mine (Liberal Arts) has already outsourced an unacceptable amount of our General Education/Core Curriculum courses. We should be having vigorous conversations about how to reverse this trend, including how to integrate our own, high-quality online courses into our curriculum. It is frustrating to me to see that, if these conversations are happening, they are happening at very high levels and largely exclude faculty. It is equally frustrating, though, to see my faculty peers express so little interest in these important conversations.
I appreciate that faculty are busier than ever these days. It would be helpful if institutions did more to ensure that faculty are educated about policy conversations but also, that faculty are held accountable for using best practices in their classrooms. In any other profession, an individual who refused to remain up to date on basic technological developments would no longer be able to work in that field. This has become a real issue in medicine, as more and more sophisticated diagnostic tools come out (e.g. the da Vinci robot for abdominal surgeries). Physicians in procedure-intensive specialties regularly have to learn how to use new devices--otherwise they would lose patients and business. Likewise, it would be considered malpractice for a physician to be unaware of the latest research and medications for various conditions. Imagine a rheumatologist who ignored all the new studies about the dangers of long-term use of prednisone and continued to prescribe it at high doses to his patients!
Out of date or sub-par teaching won't kill anyone, to be sure. Eventually, though, students and their parents will become more savvy and less willing to tolerate poor learning experiences. The result won't be that tenured faculty are fired for poor teaching, however. Instead, the persistence of faculty resistance to ed tech and best practices will be used in higher education debates to persuade the leaders of our institutions that the teaching of, especially, first and second year courses can't be left to the luddite faculty who refuse to update their practices. This will (and, in some places, already has) lay the foundations for outsourcing these courses to private-sector companies; and to the continued redirection of resources away from departmental instructional budgets. If departments want to hire more positions, the best thing they can do is figure out how to reclaim the credit hours that we have lost--and will continue to lose in escalating numbers. We are approaching a tipping point. Soon it will be too late, if it isn't already too late.
One of the best historians of education technology is Audrey Watters. She regularly posts her public presentations to her site hackeducation.com (see The Golden Lasso of Education Technology for a recent example). I am also a huge fan of this collection of her essays, The Monsters of Education Technology (a bargain at $4.99 on Kindle; $9.99 for paperback). Another fantastic way to get caught up on the conversation about higher education and the politics of education technology? Twitter. Really.
5/26: See also The Ed Tech Curmudgeon: "If the educators that care about students don't find a way to respond, the future of education will belong to those who stand to profit economically or politically or both. Who will build the universities and colleges of the future - those who understand the history and share the values of the educational enterprise, or those who simply want to make a buck?"
Tuesday, May 19, 2015
With thanks to Laura Gibbs for sending me this perfect cartoon!
It had been a long and difficult two years, made much more challenging by severe health issues over this past year. Despite those issues, I felt like I owed it to my team to continue the project (thus letting them remain employed). Especially this spring, it was incredibly difficult to find the energy to do the work on the course. I motivated myself by thinking about the big picture: we needed the course to be finished this spring so that funding and staffing could be in place for the Fall 2015 semester, when the course would be in the Classics Department's possession. I had also worked hard with my PM and the Asst Dean who oversees our development studio (LAITS) to get the funding in place. This required a great deal of work and time and politics, especially on the part of Joe TenBarge, the Asst Dean. The department was totally uninvolved in getting the funding lines in place. But, when I finished the last of the development, it seemed like everything was finally coming together. It felt like it was worth all the horrible days of working on it this spring.
Because I had designed the course for others to run and teach, I also put in place a transition plan. It's not a typical "sage on the stage" MOOC-model course. There are no video lectures; I appear nowhere in the course. It focuses on active learning and, especially, the use of primary source materials. Students work through modules and create their own narrative through a kind of Socratic method of course design. We draw on documentaries, primary sources, and other things for content. But thinking and questioning are key features of the course. One of the reasons students like it, and why it produces high levels of student learning, is precisely because of this unusual model (unusual not because of the focus on active learning but because we do this at a scale of 1 instructor/100 students).
Let me be clear that my issue has nothing to do with wanting control or with wanting "my chosen successor" appointed. In fact, what I suggested in a document dated 9 March is that the department continue the current instructor's appointment. As I said, he couldn't succeed me in a job I never did--it's always been his job. But, as the course transitioned to the department, I was no longer able to appoint him. That became the responsibility of the department chair. The main motivation for my suggestion was an interest in ensuring a smooth transition. The plan I suggested would have had the current instructor be "course coordinator." One thing that is not clear to most people: this is one course, but with multiple sections, taught to somewhere between 300-500 students (depending on how many sections are run; the College is requiring at least three/semester to keep the funding for the position).
So, in truth, it's about running a small business, working with at least some grad students who aren't content experts and have no experience in online teaching but who will be "instructing" sections of 100 students. It's a model that can work, I think, but it needs for certain key components to be in place. One of those is a course coordinator/mentor/supervisor who knows the course inside and out and knows how to teach online. Of course, over the year, I expected that others would learn how to run the course through the experience of training under an experienced course coordinator. My goal, always, was sustainability. That requires many people with the ability to teach the course at any given time.
I am the only member of my department to have taught a course over 250 or so students. For two years I taught a 400 student Intro to Ancient Rome class. It was a huge production and logistical challenge. It's not just like teaching 200 student classes times two (which is the common assumption of those who have never done it). It's about ten times the work and complexity. The current instructor dealt with this in Fall 2014, when we had over 300 students in the course. It was his first experience of such a set-up, which included managing graders; it was a real challenge and he learned a lot by trial and error. My concern is that, now, the Classics Department is asking someone with no experience in any of the key components required to run the course to step in and a. run a huge, multi-section course; b. know how to manage, support, and train the instructors of the individual sections.
Thankfully, LAITS had a kind of back-up plan in place. They have hired the current instructor to teach the Extended Campus version of the course in the Summer and Fall 2015. In the Summer, he will be focused entirely on converting the 15 week course into a five week course while a graduate student from the department is the instructor. He will work closely with that grad student and mentor him as much as possible on the challenges and tricks of online instruction. Likewise, he will work with the course coordinator in the fall semester. I was relieved to learn of this plan and I think it will do a lot to keep things from being a total disaster in the fall. But it depends on the fact that the current instructor is extremely gracious--far more than I would be in a similar circumstance. It also leaves unaddressed the larger issue that the course coordinator is far from a content expert or online teaching expert, and yet is going to need to mentor several graduate instructors each semester. I would understand if there had been no other option but that just wasn't the case.
I'm a bit disappointed that the article's melodramatic (and oddly gendered) tone encourages comments about me instead of about the issues at stake here. This isn't about me or my course design. In fact, a key component of the project from the start was figuring out a sustainable model at scale--all without overly compromising on the quality. It was a huge challenge but we did it. The course is an excellent example of a quality online course that is "efficient"--or at least more efficient than our current campus version; and works at a 1/100 scale. It also, ideally, provides graduate students the opportunity to teach online under the supervision of an experienced course coordinator. Finally, it created two lecturer positions for my department for next year, with money from my college as well as LAITS.
In return for taking on this project, I received nothing. I was not paid a stipend. My time was paid for, including summer funding; but, for instance, the 20 hours/week this spring didn't come close to paying for the 60 hour weeks the project required. It's not at all clear that the creation of this course will "count" for anything professionally. After all, it's not a monograph. I get no royalties, which I would get if this were a published textbook. I didn't even get a thank you from anyone. In essence, I donated two potentially productive years of my career to a project that I cared deeply about and was intellectually challenging to me; but which benefited my career and bank account not at all. I'd have been better off to design courses for online programs for a negotiated stipend (if it was the money that I cared about).
The real lesson of my experience is not that your baby can be taken away (ugh!); or that instructors can be separated from their courses (duh!); but that, at the moment, institutions are still playing catch-up in terms of policy and infrastructure around the delivery of online courses to campus-based students. What I found, over and over, was that we were inventing policy and procedures as we went. A big part of the problem is the fact that the development of these courses is done in non-academic units, but then the courses are handed over to academic units to manage. Sometimes this is ok. But if the department isn't up to date on the latest best practices of online education; or if a chair decides to make unilateral decisions and not collaborate with the people who are familiar with the course and its design, it can lead to real problems.
The reason I have made such a sensitive issue public is to try to encourage more conversation (and action) on the policy front. If administrators are going to ask faculty to take risks, to sacrifice years of their time to projects like these, then there need to be clear and rational policies. For instance, despite the tradition that departments control all staffing decisions, with online courses (which are very expensive to develop and keep updated), this is probably not the best procedure, especially if the main concern is ensuring a quality course and learning experience for our students. I want to encourage my smart and talented colleagues to take on a project like this instead of focusing entirely on work that is great for their career but doesn't do much for the larger community (i.e. writing monographs and articles). At the same time, for this to happen on a larger scale, we need to get policies in place. Faculty want to take risks, they want to take on projects like this. I think most of us have no problem handing our courses over to others. What we want is some assurance that the course we spent two years building will be treated well and, hopefully, run as it was designed to run. This seems like a reasonable expectation to me vis-a-vis Online Rome, given that I have essentially donated two years of work to my department and have received no extra compensation.
As we venture forward into this brave new world, we also need to make sure that all decision-makers are educated in the areas about which they are making important decisions; and ensure that there is campus support for all online instructors/course coordinators. This brave new world of online course design and delivery is, potentially, a fun, stimulating, and surprising one. It is also one that doesn't quite work like our traditional campus education. Online teaching requires a different skill set than classroom teaching does. I'm confident that, eventually, all these pieces will get into place on our campuses. My aim in discussing my experience is entirely to provide a kind of "case study" for why it matters that we get these policies in place sooner rather than later.
Monday, May 18, 2015
One of the persistent challenges in teaching online is designing assessments that are credible, reliable, and resistant to academic dishonesty. In smaller courses, this can be done by avoiding exams and turning to papers, blogs, creative writing, and other kinds of activities that can be gathered into a final portfolio. As much as I would love to assess students using something other than quizzes and exams in Online Rome, it just isn't feasible given the large enrollments that we are required to maintain. We do make a point of making written work some part of the grade, in the form of essays and short answer exam questions. But, ultimately, about 50% of the final grade is determined by the students' performance on quizzes and exams.
I've been waiting for a reliable and affordable online proctoring service to come on the market. With today's announcement that Instructure (Canvas) has partnered with Verificient, an online proctoring service, it looks like we are making some progress on this front. This was an obvious next move for Instructure, now that their LMS is being used as a platform for online course delivery of all sorts, including MOOCs. The absence of a proctoring service was a serious gap that, it seems, is now being filled. We'll see how effective it is--I never underestimate the creativity of students and their ability to outsmart any monitoring system. But, at least, it is a good start. As online courses become more common, we need a way to protect the integrity of the grade. The system doesn't need to be perfect. After all, academic dishonesty happens all the time in face to face classes. But instructors need to feel reasonably confident that the grades they are awarding were, in fact, earned. This is especially true as universities move towards a policy of not distinguishing between online and classroom-based courses on transcripts.
The quizzes, which appear at the end of each module, are not proctored. We tried to disincentivize dishonesty by writing questions that require applying facts rather than regurgitating them (and are therefore difficult to Google or find in a textbook). The quizzes are timed and each question has three variations, so it is unlikely that students working together will have very many questions in common. The quizzes are also not worth all that much of the final grade. Ultimately, even if a student cheats on 1-2 questions/quiz, it is unlikely to make any difference in the final course grade.
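The variant-pooling idea is simple enough to sketch. This is my own illustration of the mechanism, not how Canvas implements question groups: each quiz slot holds several interchangeable variants, and each student's quiz draws one variant per slot, so with three variants per question two students share any given item only about a third of the time.

```python
import random

# Each slot in the quiz maps to a pool of interchangeable variants
# (hypothetical identifiers, standing in for full question objects).
question_pool = {
    "q1": ["q1-a", "q1-b", "q1-c"],
    "q2": ["q2-a", "q2-b", "q2-c"],
    "q3": ["q3-a", "q3-b", "q3-c"],
}

def build_quiz(pool, seed):
    """Assemble one student's quiz: one randomly chosen variant per slot.

    Seeding per student keeps each quiz reproducible for regrade disputes.
    """
    rng = random.Random(seed)
    return {slot: rng.choice(variants) for slot, variants in pool.items()}

quiz_for_student_1 = build_quiz(question_pool, seed=1)
quiz_for_student_2 = build_quiz(question_pool, seed=2)
```

With three variants per slot, the expected overlap between two students is one-third of the quiz, and the odds of sharing an entire ten-question quiz are (1/3)^10, vanishingly small; combined with the timer, collusion buys very little.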
Still, I would have preferred for the quizzes to be mastery quizzes rather than graded quizzes. When we tried that approach in the Fall 2014 version of the course, however, students didn't study for them and, ultimately, were not learning the material as well as they needed to in order to perform well on midterm exams. By switching to graded quizzes, we were able to get the students to take them seriously. The performance data looked almost identical to the data produced by my classroom-based students, which suggests to me that there was not much cheating going on. Likewise, we saw significant improvements in the exam scores--another indication that the students were studying for the quizzes. Finally, there were no obvious cases where a student had high quiz scores and low exam scores--not a sure sign of academic honesty, but a decent indication that any cheating was small-scale.
Online Rome also had three midterm exams that were administered on campus, in a proctored environment. We included these, in part, as a way to ensure the course's credibility. I suspect that, at some point, these midterms can be eliminated. I suspect that the quiz data will prove to be a reliable indicator of a student's mastery of the content; and that the instructor can design other, more engaging activities that require students to make connections across the course. A key skill in the study of Ancient Rome is the ability to see patterns and connections across time. Studying for midterm exams is one way that students begin to see broader patterns--but there are many other ways that this could happen.
For now--and, I suspect, for the next several years--midterm exams will be a part of the course. For all sorts of reasons, it would be easier to administer these exams via Canvas. This spring, we required that all students--even non-UT students--take the exam on campus or in an accredited testing center. We had to resort to this after several problems with academic dishonesty in the fall semester, when we did allow distance students to take the exams online (there was an oral component to the exam, which was largely just a pain for everyone and did not work especially well). Since the majority of the students who currently take Online Rome are Austin-based, it has been fairly easy to administer the exams on campus. As we make a real effort to expand the enrollment to non-Austin based students, however, it would be great to have an effective proctoring service built into Canvas.
Sunday, May 17, 2015
Image source: https://notegraphy.com/mdvfunes/note/1750710
Without an instructor who is a. capable of creating and nurturing relationships with the students; and b. capable of creating a learning environment that provides opportunities for students to connect to and learn from one another, an online course is very unlikely to be truly successful. It certainly will never rival the classroom experience. Faculty who oppose online education generally rationalize their opposition by claiming that the online medium cannot replicate the intimacy of the classroom, the ability of the classroom space to nurture the kinds of relationships that support learning. This is a dangerous failure of imagination. In fact, a skilled and experienced online instructor knows exactly how to nurture these crucial relationships--and often can do so much more deeply and with many more students than can a classroom instructor. That's the irony in all of this.
These days, most of the attention of institutions is on the courses. There is a tendency to get wowed by fancy videos, animations, and the like. None of these things matter much for student learning if the course lacks an instructor who knows how to build relationships and also provide space (and incentives) for the students to build relationships with each other. This latter task is especially challenging. Discussion boards are the old fallback but, in my experience, they don't actually do much to encourage student-to-student interaction.
One of the areas where Online Rome could use more development is in the area of supporting peer to peer learning. Steve, the course instructor, and I had planned to do some of this in the upcoming semester. I hope that we will have the chance to implement our plans for non-UT Austin students at some point. For campus-based students, however, it should not be very challenging to design some activities that require or at least strongly encourage peer learning. I know that there was a fair amount of this happening on an informal level (e.g. students worked together on the modules; they studied for exams in small groups). We captured information about some of this through course surveys. But I suspect that a well-designed class activity (or series of activities) that puts students into small groups for the semester would improve the student learning and general experience even more.
Until then, though, I won't stop saying: teaching is about relationships. Period. The medium of instruction influences how we construct those relationships, what tools are available to us, but the process of teaching and learning is always about relationships and will always be about relationships. Further, it is incredibly shortsighted (and uninformed) for faculty to believe that face to face relationships are inherently superior to other kinds of relationships. This prejudice for presence is at least as old as Socrates and Plato; but it has been debunked over and over again.
P.S. So, TechCrunch asks why the university is still here. An idiotic question, but one with a pretty simple answer: in part, because we haven't figured out how to support social learning online. MOOCs are the opposite of the right answer.
Saturday, May 16, 2015
A few weeks ago, my department chair sent an email to the faculty listserv with the news that UT Austin had once again failed to pass its SACS accreditation. The problem, it seems, hinged on the failure of departments/colleges to define and measure learning outcomes in a way that was acceptable to SACS. The chair expressed her view that this was a nonsensical process, anticipating the general moaning and groaning that was sure to emerge from faculty who believe that things like learning outcomes are silly. At the start of the academic year, our CTL had sent around a very helpful sample syllabus, with the various UT policies included as well as things like a place for the instructor to list the course's learning outcomes. Two senior colleagues ridiculed this document as useless and stupid--after all, they have been teaching for decades and don't need such guidelines. Sadly, I'm fairly certain that if they were asked to define the learning outcomes for their course and explain how their course was going to support students in achieving those outcomes, neither colleague would be able to do so.
Writing learning outcomes is very difficult for faculty who were never trained to think about their teaching in such terms. We are great at describing what content our course will cover; we are pretty good at knowing that we expect our students to master a certain amount of content or skill set by the end of the semester. We are terrible at framing our expectations for student learning in terms of learning outcomes, with all of our learning activities in the course aligned to those learning outcomes. We are even worse at measuring learning outcomes. We conflate grades with learning outcomes on the regular.
Every professor will tell you that students learned in their course--they will insist on it, despite never having measured how much students knew at the start, or having assessed their knowledge with anything more standardized than an instructor-written exam. In fact, we generally have no real way of measuring what they did or didn't learn, and how well. This is especially true for one-off courses, like the lower division general education courses many of us teach. So a student of mine doesn't really know anything about Roman culture or history at the end of the semester... This is likely to have little impact on their educational career unless they decide to be a Classics major.
To my mind, one of the great boons of digitizing our teaching is precisely that it requires faculty to finally learn about things like learning outcomes and backwards design. These are not difficult concepts but they require some intentionality--and sometimes some assistance--to implement. The activity of building a hybrid or online class encourages faculty to think hard about how all the pieces contribute to student learning. The existence of the course on a digital platform means that the experience of taking or teaching a course is now preserved as an artifact that can be examined by a third party. It is no longer an ephemeral experience in which we depend on the reports of instructors and their students to evaluate teaching efficacy.
I worry about "Big Data" intruding on academic freedom, both in terms of what we teach and how we teach. Will we all be forced to teach in 3-5 minute blocks because someone decided that this was the average attention span? At present, online courses are closely scrutinized and monitored by our campus administrators. To what extent will this monitoring expand as our ability to conduct it with computers grows? Currently, we are required by law to post our course syllabus. At what point will we be required to use the Campus LMS for all graded activities, so that student data can be captured more easily? I suspect that the refusal of most faculty to understand that they are accountable for demonstrating student learning will make it all the easier for university administrators to start tracking students and imposing ever more restrictions on what and how we teach.
I make it a point to identify and articulate course learning outcomes for all my courses, including graduate seminars. I also include a "map" that illustrates how the different learning activities in the course will help students reach these outcomes. It took a bit of practice to learn to think about teaching and course design in these terms but, after two years, it's become second nature. These are the learning outcomes for Online Rome. The least important of them is mastery of the course content. I am much more interested in helping students develop crucial "soft skills":
· become active, “self-regulated” learners
· learn and become more skilled at good time management techniques
· learn basic skills of “reading” ancient texts/art/architecture
· develop and practice the ability to make connections between different parts of the course (i.e. think analytically)
· develop and practice ability to evaluate competing explanations or theories
· master the basic narrative of Ancient Roman cultural history from the Iron Age to the 2nd century CE
There is a lot of conversation on college and university campuses about getting undergraduates involved in research as soon as possible. It is clear that the close contact with faculty as well as more experienced students is a high impact experience and is positively correlated with graduation and shorter time to degree. At UT Austin, this emphasis on undergraduate research seems to be a main plank in our new president's platform. It has been a central topic in the Campus Conversations, which the president sponsored over this past year in his role as the Provost. There are many different initiatives on campus, particularly in STEM fields, which already focus on this effort, including the Freshman Research Initiative in the College of Natural Sciences. My own home, the College of Liberal Arts, briefly had a program for freshmen and sophomores, whereby students could work with a faculty member on a project connected to the faculty member's research program. I worked with a couple of students through this program but, ultimately, found it to be an exercise in futility. The students who were eligible had no language training and my research absolutely requires knowledge of Latin as a sine qua non. Even the task of constructing a bibliography requires more than a knowledge of English. I struggled to devise interesting but manageable tasks for the students; and I think they left the experience wondering why anyone would want to do research in Classics!
In liberal arts, and especially in the field of Classics, the major obstacle to involving undergraduates--especially early undergraduates--is that they lack the specific skills that are necessary to do even the most basic research. Classics training at the undergraduate level focuses on language instruction, and it typically takes a student most of their undergraduate years to even master Latin and/or Greek to a level where they can begin to make sense of an ancient text. Very often, research is postponed to graduate training, even the PhD these days. Given this, it is a real challenge to think about how to bring inexperienced undergraduates into the faculty research process. In truth, it can't really happen for a scholar who produces the kind of scholarship I do.
This is not to say that I cannot engage in productive conversations about research with undergraduates; or have them learn some of the "skills of the trade" in my company. As I learned during the production of Online Rome, though, the best place for this to happen--for me, given the kind of research I do--is around teaching. It is my sense that the value for students in engaging in research early in their undergraduate career is not research qua research so much as it is, first, structured contact with faculty as well as, ideally, more senior graduate and undergraduate students who can act as peer mentors; second, an opportunity to see the application of knowledge (I don't think it's particularly relevant whether the application occurs in a research or a teaching environment); and third, an opportunity to see "behind the curtain" of academia, to see what it is that professors do and what it means to be a professor. This third element is especially important for students in liberal arts who think they might want to continue their study of the field as graduate students.
When I was putting together a team of students to work with me on building Online Rome last summer, I included an advanced undergraduate student (a double major in Classics and Religion). In retrospect, I wish I'd included more. It was fascinating to see how, in giving her the task of creating a first draft for one of the modules, we were able to have deep and engaged conversations about the content, how best to present it, what sorts of trends it connected to, etc. Whereas I feel like any effort I have made to involve undergraduates in my research has ultimately meant that they are doing uninteresting "grunt" work, this project allowed for genuine intellectual growth. The key was the fact that it was project-based for the student. She worked on it and then we discussed the work. In the future, I hope very much to involve a larger team of undergraduates in a course-building project. In liberal arts, and especially in Classics, it is an ideal way to connect with undergraduates on intellectual grounds, but at a level that is accessible to them (because the task is not the production of original research but, instead, how to teach complicated content to their peers); and which results in the production of something tangible and useful.
I've never been much of a fan of the "hack the syllabus" approach to undergraduate teaching, primarily because it doesn't work terribly well with the particular audiences I teach. What does work is involving a group of students in the design and build of a course from the very start. I can imagine a very interesting learning experience in which, for one semester, a group of students works with me to design and build a course. The following semester, we would teach that course, with the student builders acting as peer mentors to the enrolled students. In this way, it would be possible to put more experienced students together with less experienced students--something that happens to great benefit in graduate seminars but is nearly impossible to do in undergraduate classics courses (in part because, through an odd system of incentives, nearly all students in upper division "seminar" courses are non-majors).
At present, I have only involved undergraduates in the delivery of Intro to Ancient Rome/Online Rome as graders. It's not a bad job and it is enormously helpful to the instructional team, especially when we are teaching very large numbers of students and need to turn around the exams quickly. The course instructor graded 10-15 short answer sections on the exams, to establish a rubric. The instructor then met with the graders, reviewed the rubric and scoring of points (with attention to common errors and breakdown of points for each question). Without exception, the undergraduates did an excellent job and were very attentive and punctual. They more than earned their stipend. The job offered an opportunity to engage with the instructor and to see a bit how large courses are run.
I am a much bigger fan of peer mentors in the classroom. I saw them used to excellent effect in my colleague Cynthia LaBrake's Intro to Chemistry course. I wish that I had had the funds to create a more elaborate network of peer mentors for Online Rome. This is one aspect of the Online Rome course that could be developed. Especially in the online class, putting current students in contact with former, successful students would go a long way towards helping the new students figure out how to take an online class and, in particular, how to do well in Online Rome. Online bulletin boards/Rate My Professor-like sites can offer some information, but it tends to reflect narrow points of view. It would be much more effective to have a group of former, successful students who act as mentors and graders. If some effort were made to meet weekly with these peer mentors, it would also be an excellent way to sharpen and expand the mentors' content knowledge as well as their ability to explain complicated concepts to their peers. There is really a great opportunity here, especially for Classics and related majors, if the funding can be found and if someone is willing to spend the time developing and running such a program.
Friday, May 15, 2015
Even as it has become evident that online courses will play an increasingly prominent role in higher education (and academic administrators acknowledge this reality), faculty have continued to be skeptical of their quality and ability to support crucial skills like critical thinking (see also this article). I have always found this skepticism odd and more a reflection of the fact that most faculty have no direct knowledge of or experience with online education in its current forms. It is especially challenging to imagine how an online course could do the same kind of work that, at least in the minds of faculty, our small seminar-style classes do. In addition, we overlook the fact that, in truth, most of us have little sense of what goes on in the classrooms around our campuses.
When it comes to the courses of our own departmental and institutional colleagues, however, we know very little about what or how well students are actually learning. Our default assumption is that any classroom based course is a good course with abundant, high quality student learning; but we should probably assume that such courses are the exception rather than the rule, especially on R1 public university campuses where the majority of courses are currently taught by professors in training (aka graduate students) and contingent faculty who are badly paid, carry heavy teaching loads, and lack any kind of job security.
This prejudice against online education is one that I find interesting but also troubling. On the one hand, faculty who know nothing about it assume that it is, by nature, inferior to a classroom-based course (much like Plato and the ancients perpetuated the view that writing is inferior to speech). On the other hand, these same faculty view online courses as effectively self-teaching. My own sense is that, lacking any first hand knowledge of the wide variability in online course designs, most faculty assume that all online courses look like the standard MOOC: a set of talking-head lectures by a content expert followed by some machine-graded quizzes and/or exams.
In reality, online courses are highly variable, far more so than classroom courses. I suspect that, as more faculty get down in the trenches of online course design, we will see even more variability in course design. The online platform allows for far more innovation, variation, and creativity than does the current college/university classroom, even so-called Smart Classrooms. The trick is for the faculty designer to recognize that the online space is a wholly different kind of space with different affordances--and to leverage those affordances. One of the most disappointing things about MOOCs is that, in most cases, they are incredibly conservative in format. They attempt to recreate the classroom experience, but imperfectly. Few MOOC designers approach the task of designing the learning experience as an opportunity to invent new models of teaching and learning that maximize the strengths of the digital while minimizing its weaknesses.
Instead of lamenting the absence of face to face interaction in online classes, and the challenges that this presents, we need to look at the online medium for what it CAN offer that face to face cannot--and then maximize those affordances. This is tough work. It requires a lot of creativity and a willingness to, in some sense, re-learn basic skills. It requires the course designer to understand that you can have the same basic learning outcomes for a classroom course and an online course; yet the pathways to achieving those learning outcomes are likely to look very different. This can be intimidating to successful and experienced classroom instructors. Yet, if one tries to build an online class that is a poor imitation of a classroom course, it is not likely to succeed.
Online Rome exploited the affordances of the digital learning environment in several crucial ways. Two of these: the ability of digital learning activities to provide immediate feedback and opportunities for repetitive practice and self-correction; and the ability to personalize student engagement with primary source material. I'll talk about the ways that the course highlighted student engagement with primary source material in another post. This post focuses on the ways we exploited the digital learning environment and mastery learning in the course design.
A driving theoretical principle in the design of the Online Rome course is, essentially, that practice makes perfect. In my previous life as a serious athlete (I played fastpitch softball at a pretty high level, as a pitcher), I learned at an early age that the key to success under pressure was practice. A lot of practice. As a pitcher, I practiced every day. I had a coach who critiqued the tiniest things and made me re-do pitches over and over until I got every part right. I watched video of myself. I got better because I worked very hard at it--even though I am 5'3 and have short "levers." I wasn't born with a pitcher's body, but I was smart and I worked incredibly hard to maximize the talent I had.
Online Rome follows the same basic principle that hard work can make up for a lack of natural talent--and, in fact, is far more important than natural talent. My job, as course designer, was to create activities that focus the work and provide immediate feedback so that students can recognize their misconceptions and correct them before they take hold. This process is much more easily accomplished in the digital environment than in a classroom, especially when dealing with larger class sizes. In a classroom, it is very difficult to know what every single student is thinking at a given moment (though student response systems like i>clickers are of great help). It can also be difficult to clarify misconceptions, because the nature of those misconceptions will vary from student to student. In the long run, it is much better pedagogy to train students to recognize and correct their own misconceptions using instructor-provided feedback. In the digital environment, that feedback can be instantaneous thanks to machine grading.
2/3 of the grade on the 10 course modules is entirely about effort. Students earn full credit if they score 90% or higher on the in-module questions, but they can repeat the module as many times as they need to. The theory is that, by incentivizing practice, we are actually incentivizing the type of behavior that leads to learning. Similarly, at the end of each module we included a large number of practice quiz questions (c. 35-50). The point of these practice questions was for students to be able to check their mastery, figure out where they needed remediation, and fill in those gaps BEFORE taking a graded quiz. The graded quiz provided motivation to do the practice quiz but, in fact, the far more important and influential learning activity was the practice quiz.
The performances on the graded quizzes at the end of each module, the essays, and the midterm exams suggest that we were right about this. Students practice learning the content until they master it. They are happy because this produces high grades and I am happy because it produces high quality learning, especially in the essays (where we ask them to do analysis and application). I use various forms of digital learning activities and automated feedback in my blended classroom-based Intro to Rome class (i>clicker questions, practice quizzes, in class quizzes). Because I teach large numbers of students, I have to teach students to use the diagnostic information that they get from these activities to self-correct. This process is no different online than in the classroom--except that I can do it far more thoroughly in the digital learning environment of an online course.
Thursday, May 14, 2015
The process of designing and building Online Rome was filled with obstacles. A big one--perhaps the biggest and most challenging one--was figuring out a design that would allow the course to scale well enough that my university would be willing to create and fund future instructors. I discovered only in Fall 2014, more than a year into the project, that there was no plan in place for funding the instruction of Online Rome after Fall 2014 (when, as part of the terms of the grant, I hired and paid all instructors and graders). I gather that there was the expectation that departments would take on the instructional costs. Yet, with no clear incentives to do so, that was unlikely to happen.
A savvy department leadership might understand that there were long-term (and even some short-term) benefits to working their soft money budget to fund the instruction of an online course but, given the general lack of experience with or knowledge about online teaching and learning in academic units, this was not likely to happen. My own department certainly had no interest in supporting the ongoing instruction of Online Rome without additional soft money from the College of Liberal Arts. Thankfully, with the help of our fabulous Liberal Arts ITS department head and my project manager, I was able to get my college to foot the bill for the course instruction in Spring 2015. I then spent this Spring advocating tirelessly for my college to find a longer term solution to the funding of the course's instruction--which, thankfully, they did in the form of giving my department instructional funds to hire a lecturer on the condition that at least three sections of Online Rome were offered each semester.
One of the ways that I was able to "sell" my college on continuing to support the instruction of Online Rome was by designing a course that is, in fact, less expensive to teach than the classroom based version. For me, the challenge was to navigate between the Scylla of deans who wanted efficiency; and the Charybdis of quality. If I could not find a way to design a course that also produced high quality learning, I had no interest in extending the life of the course. The design of the Fall 2014 version was labor intensive and totally unsustainable. It also did not really produce the kind of quality critical thinking that I wanted to see.
My main task during this past spring semester, on the development side, was finding a way to reduce the labor demands on the instructor; and ensure that the instructor's time was being spent as much as possible on high impact activities. In brief, it was about finding a balance between questions that could be automatically graded but still required students to do more than regurgitate content; and instructor-graded activities--in our case, short essays. In the Fall 2014 version, I tried to use short answer questions inside the modules. I then wanted the instructional team to read, grade and respond to these short answer questions. This might have worked for a class of 30 but was unmanageable in a class of 300+, not least because Canvas is not set up very well for this. Speed Grader only works for a graded quiz but, for other reasons, we needed to categorize the modules as practice quizzes. It rapidly became clear that the module grading had to be automated if the course was going to scale at all.
In order for the module grading to be automated, the short answer questions had to either be removed or be ungraded. I did not want to remove them. They served several purposes, the most important of which was to require active thinking/writing at frequent intervals. They also allowed me to ask certain kinds of questions that could not be reduced to a multiple choice/matching/ranking, etc. question. My solution was to retain the short answer questions and provide extensive feedback on each one. In the course orientation module, I spent a good amount of time introducing the students to the concept of self-regulated learning; and explained how the concept was used in the design of the course. Of course, students could skip over the short answer questions or write nonsense. They could paraphrase the feedback. I did test some of the content in the short answer questions later in the module. I would love to be able to work with a programmer to design an LTI that makes it easier to spot the short answer questions in the modules.
The course modules were worth 35% of the final course grade. Each module was worth 3 points. 2 points were awarded for getting 90% or higher on the questions in the module. Students could redo the module questions as many times as they wanted, until they earned that 90%. The third point was awarded for earning a 90% or higher on a graded, 15 question quiz at the end of the module. A quiz grade of 70-89% earned a student 1/2 point. Anything under 70% earned no points. These graded quizzes were intended primarily as a way for the students to get feedback on their learning before the midterm exam. They were graded to encourage them to take them seriously--which worked. They were not weighted very heavily because they were intended to be formative; and because they were not proctored. We did have a database of questions and no two students would get the same quiz; and the questions were not easy to Google. At the same time, to remove the temptation to cheat, we de-emphasized the graded quiz. Students also had an optional practice quiz that they could take; and many of the quiz questions came directly from the module and practice quiz. The emphasis and incentives were entirely on effort, persistence, self-correction, and time on task.
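The scoring rules above can be sketched as a short function. This is a hypothetical illustration only; in practice these rules were configured inside Canvas, not coded by hand, and the function name and signature are my own invention:

```python
def module_points(best_in_module_pct, graded_quiz_pct):
    """Hypothetical sketch of the 3-point module scoring described above.

    best_in_module_pct: best score (as a percent) on the in-module
        questions, which students may retake without limit.
    graded_quiz_pct: score on the 15-question graded quiz.
    """
    points = 0.0
    # 2 points for reaching 90% or higher on the in-module questions
    if best_in_module_pct >= 90:
        points += 2.0
    # Third point: full point at 90%+, half point at 70-89%, else nothing
    if graded_quiz_pct >= 90:
        points += 1.0
    elif graded_quiz_pct >= 70:
        points += 0.5
    return points

print(module_points(95, 92))  # 3.0
print(module_points(95, 80))  # 2.5
print(module_points(85, 60))  # 0.0
```

Laying the rules out this way makes the incentive structure visible: two of the three points reward persistence on the retakeable in-module questions, and only one rides on the unproctored graded quiz.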
45% of the course grade was 3 proctored midterms, administered on campus in the evening. The midterms consisted of multiple choice questions, of the same sort that they saw in the modules, practice quizzes, and graded quizzes; and then short answer questions. The instructor provided a detailed study guide for the short answer questions and took the exam questions from the study guide. Again, the emphasis was on effort and focused work. In reality, the study guide was so comprehensive that nothing was being given away--it was simply a way to soothe anxieties and help the students focus their study.
In place of the time-consuming and low-return short answer questions, we added 500-750 word essays to the course. The essays were worth 10% of the total grade. Each student had to submit 5 essays over the course of the semester. All students completed a final, summative essay with the last module. For modules 2-9, we divided the students into sections and, for each module, half of the sections had an essay. This meant that, for each module, the instructor had 50 essays to grade rather than 100. Five essays was plenty of writing and practice with critical thinking. The instructor spent a lot of time crafting and refining the essay prompts. On the whole, the students did an outstanding job with the essays. The instructor reported that he enjoyed reading them and, frequently, found the students thinking critically and deeply about Roman history and culture. This change in the course design was a resounding success.
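The rotation described above can be sketched as follows. This is one plausible implementation, not the actual course setup; the function and section names are hypothetical:

```python
def essay_rotation(sections, modules=range(2, 10)):
    """Assign half of the sections an essay on each of modules 2-9,
    alternating halves so every section writes essays on four of those
    modules. (All students also write a final summative essay with the
    last module, for five essays in total.)"""
    half = len(sections) // 2
    schedule = {}
    for i, module in enumerate(modules):
        # Even-indexed modules go to the first half of the sections,
        # odd-indexed modules to the second half.
        schedule[module] = sections[:half] if i % 2 == 0 else sections[half:]
    return schedule

print(essay_rotation(["A", "B", "C", "D"]))
# {2: ['A', 'B'], 3: ['C', 'D'], 4: ['A', 'B'], ..., 9: ['C', 'D']}
```

The payoff of the alternation is the halved grading load the post mentions: for any given module, the instructor reads essays from only half the sections, while every student still gets the same amount of writing practice.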
The final component of the course grade, a new addition in the spring, was a movie module that was worth 10%. We chose Stanley Kubrick's Spartacus, partly because it's a good way to introduce students to the ways in which Ancient Rome has been used to talk about contemporary political issues. An area ripe for future development is the addition of more movie modules (Gladiator, Pompeii, etc.).
We introduced the module with an outline of the module as well as a reminder to review their earlier work on the historical figure of Spartacus and his revolt. There were two in-module quizzes. The first focused on slavery in Ancient Rome, so that the students would understand that aspect of the film and also understand the ways that Roman slave practices differed from what they might have learned about slavery in the US.
The second in-module quiz focused on Kubrick's film and the political background of the McCarthy Hearings--a topic that was unfamiliar to nearly all the students.
The module ended with an essay prompt. All students were required to write this essay (so, in fact, students wrote 6 essays in total during the course).
In the end, I am happy with the result. Students engage in a substantial amount of critical thinking and writing, especially in the essays. Effort and persistence are highly rewarded, two things that we know are crucial to learning. The scores on the proctored midterms were very high over the semester, indicating that the work that the students were doing on the modules was leading to real learning that they were then able to demonstrate on exams.
Wednesday, May 13, 2015
At the moment, Online Rome is not publicly accessible. There are various complicated reasons for this, mostly related to FERPA and the fact that we used UT's Canvas platform. This means that one needs to be a UT System employee in order to access the course, for the time being. I am working with the project manager to make at least some part of it publicly available. I hope this will happen over the next month, once students are finished with the course.
In the meantime, if you are interested in the course design, take a look at this excellent Prezi put together by the course instructor, Dr. Steven Lundy. The Prezi was for a presentation to the UT Classics Department, to familiarize them with the basic structure of the course as well as the function of different elements, and why various elements, like the weekly review sessions, seem to have been crucial to improving retention from about 80% in the fall semester to the low 90s in the spring semester (a stat which is right in line with f2f classes of 100+ students).
Monday, May 11, 2015
UT Austin's College of Liberal Arts began to experiment with different forms of online instruction several years ago when they offered Intro to Psychology as a SMOC (Synchronous Massive Online Course). The SMOC format involved the live streaming of lectures that were recorded in a campus studio. Students were required to log in to the class session, at least for the first ten minutes, to take a short MC quiz. The course design also included opportunities for students to discuss questions in small groups--something that would have been more successful if more students had remained logged into the course after the first ten minutes.
As we went live with Online Rome, to test and revise the design as well as to determine staffing needs, our primary audience was UT Austin students. The course was offered through the Classics Department, side by side with the face to face version of the course. For UT Austin students paying flat-rate tuition, the course was fully covered. The demand for the course was extremely high--somewhat to my surprise given that it was brand new and still in development. In Spring 2015, we had to cap the course at 100 just to be able to have enough time to finish the development.
The majority of the students preferred the online class to the classroom course for reasons of flexibility. Many of the enrolled students were STEM majors who had very complicated schedules and long days on campus. They much preferred to take a core requirement course online, where they had much more flexibility in completing the work. The course still had plenty of structure and deadlines to keep everyone on track, but we saw that many students would spend several hours working through a module as soon as it was released. In fact, in future iterations that we control, we will release the modules well in advance to provide as much flexibility to students as possible.
One unexpected element of teaching campus-based students was that about 20% of the class desired face to face interaction with the instructor. Typically, such interactions are not possible in an online class. But, when the students are campus-based, it is possible to hold office hours, exam review sessions, and even weekly review sessions. Based on feedback from the fall semester, we added a weekly review session to the Spring 2015 class. The vast majority of students did not need or want this extra interaction with the instructor. But for the 20% who did want it, it made a tremendous difference in their ability to stay on track and feel engaged with the course. We also live-streamed the review sessions, so that everyone had access to them.
In the end, teaching an online course to campus-based students seems to push the design toward the hybrid. This makes a lot of sense to me. The hybrid model produces the highest learning gains. It combines the best of all worlds--the advantages of face to face instruction while leveraging all the affordances of the digital. It is also something that online instructors are not always prepared for. They often assume that, since they are teaching online, they don't really need to interact with students face to face. This isn't really true in any context. Even in distance online courses, there needs to be significant attention to connecting with students and building a learning community.
When students are campus-based, they expect that instructors will be available for scheduled office hours, appointments, and structured reviews. It was interesting to watch as a significant number of students intuitively grasped that, for them, this kind of hybridized model was going to best support their learning. Of course, many students were fine to work through the modules, submit assignments, and use the provided study guides to prepare for exams--those are the 30% of our students who would be A students regardless of what we did. But for the students in the middle, the B students who can become A students, the C students who can become B students, they recognized and asked for more direct contact with the instructor. An advantage of offering the course to campus based students is that, in essence, we can retain the asynchronous online class for those that want it while also providing the hybrid course for those who prefer that model.
(Image source: http://www.rasmussen.edu/student-life/blogs/main/critical-thinking-skills-you-need-to-master-now/)
Partly in an effort to discourage anyone from thinking that my Online Rome course could be run without the involvement of at least one content expert at the top, I did not include any lectures in the course. You will never see me in the role of "content deliverer" or "content expert." We did include some old pre-recorded lectures that covered the basics, mostly because we had them and I was curious to see how much students used them when they were unnecessary. Those lectures will not be used in the version of the course that is run for UT Austin students, in part to avoid the false assumption on all sides that students will pass by watching those lectures.
Rather, the design of the course emphasized active, constructivist learning. Students learn by answering questions that highlight essential bits of ancient Roman history. We ask them to think about things deeply and critically. Oftentimes there isn't one "correct" answer--and that's the point. We talk a lot about the limits of evidence. We make it clear that this is a class that goes well beyond memorizing a bunch of random information and regurgitating that on graded assignments. If that's all the class were, it really should be taught by a robot. As Mike Caulfield puts it, we are not in the content business; we are in the business of building communities of learners.
In designing the class, we did as much as we could to focus on the development and exercise of critical thinking skills--in the modules, in essays at the end of modules, and on graded activities. In the first live version of the course, we tried to use short answer questions inside of modules to accomplish these outcomes, but without much success. A big part of the problem: Canvas isn't really designed to give students feedback on short answer questions inside of modules, and it required that graders have a substantial knowledge of ancient Roman history to give useful and on-point feedback. It also required that students review the feedback and take it in.
In the Spring 2015 version, we retained the short answer questions but emphasized the self-regulating aspect of learning with them. We provided extensive but general feedback in the comments but relied on students to answer the questions (or not). We also reviewed the contents of some of the short answer questions later in the module, through an automatically graded question. The instructor feedback was shifted to the essays. This produced much better results, both because the students took the essay more seriously and very often produced thoughtful responses; and because, given how we distributed the work, the grading load for the instructor was fifty 500-word essays per week--a small enough amount that he could give extended and engaged feedback to students. The students, in turn, were more likely to look at the feedback for a single assignment that felt weighty to them. The depth of student engagement on the "big issues" has been very impressive.
There is nothing about the online medium that makes it easier or more difficult to teach and practice critical thinking skills. It's entirely about devising and incentivizing the right learning activities for the environment. It also requires that one view the instructor not as a content provider or manager of logistics, but as a teacher. At every step, critical thinking requires reflection from the student and, at key points, feedback from the teacher. It is crucial that the teacher has the knowledge to provide the feedback that pushes the student to think more deeply and critically.
To give just one example of a Q&A from the class discussion board:
A student asks the following question while working through a module: "A question says that this was necessary to confer the powers required to rule as emperor. However, the recording states that this was a self defeating proposal as it conferred power beyond the legal basis of the law. Given that Vespasian was already emperor, and thus laid claim to supreme power, did this truly do anything beyond codify what he already had? That is to say, was it necessary or just convenient for Vespasian?"
The instructor replies: "Good question, and one without a clear answer. On the one hand, as a non-Julio-Claudian, and a usurper of the throne, Vespasian required the lex to give his position legal standing. On the other, this seems to obscure a reality that had stood behind Vespasian's rise and, indeed, Augustus': the power of the emperor was not based on law, but his irrefutable military supremacy.
In other words, what the lex did was standardize the position of the emperor in a way that made it possible for an emperor to hold power without basing his legitimacy on family lineage. What it didn't do was resolve the ambiguity that resided between the "clout" of the emperor and the notional continued constitutional existence of the Republic (which echoes the contrast between auctoritas and imperium that was apparently crucial to Augustus' reign). The degree to which this was understood by contemporary Romans is debatable; for them, it was merely a standard law that made Vespasian's extraordinary reign consistent within the existing fabric of Roman society."
The amount of expertise required to engage with this student's interesting and thoughtful question is very high and goes well beyond the "I read the textbook a week before the students did" approach of some out-of-their-depth instructors. An interesting thing happens when you give smart kids a lot of information and ask them to think about it: they do. And sometimes they have questions that don't have an easy, Google-able answer. Sometimes, to answer their question, you have to have a deep knowledge of late Roman republican history; Roman law; Augustan auctoritas vs imperium, and how that evolved under the Julio-Claudians; and the intersection of law and martial power from Sulla onwards. This is very specialized learning, the sort of learning one acquires only by writing a dissertation/conducting research in the field or, possibly, after decades of teaching the course.
Learning does not happen by magic. It doesn't happen just by attending lecture nor does it happen just by opening up and even working through an online module. While many--most of all ed tech VCs--would love to be able to automate professors, the harsh reality is that we can't be automated. Parts of what we do can be automated, certainly; but WE can't be automated. We can be replaced by other content experts (who will have different sets of strengths and weaknesses), but we can't be replaced by robots or by a Physics BA who is looking to earn some extra money on the side.
If one thinks of an online instructor as nothing more than a grader and student wrangler, quite a lot of potential learning seeps away as students realize that nobody has the expertise to engage with them. It's interesting to me that we all recognize these ingredients when it comes to a physical classroom; yet we want to believe that, somehow, when it comes to online learning, the course itself can magically provide all these ingredients--especially timely and engaged feedback--that are essential for developing critical thinking skills in the vast majority of our undergraduate students. Technology is not magic. Learning online is difficult and requires the same interaction with content experts that classroom learning requires.
Sunday, May 10, 2015
Every hiker knows not to leave home without water, an emergency supply of food, a map, and a compass. A cell phone or an emergency locator beacon can also be useful. Before setting foot on the trail, hikers will orient themselves to their surroundings and make note of the weather (and, hopefully, they checked the forecast before leaving home). They will find North and make sure they are headed in the correct direction.
Educators know that orientations are also important for our students. At UT Austin, we require students to attend an entire week of orientation activities on campus during June and July. The first week of classes is full of orientation activities. We orient new graduate students to campus, the department, and the program. Yet, for the most part, we devote very little time to orienting our students to our face to face classes. Sure, we spend the first class meeting reviewing the syllabus and discussing our expectations for the course. In a seminar course, students might introduce themselves to one another. But, really, we don't orient students to the course itself, including what to expect as the semester progresses. For the most part, this lack of orientation isn't a problem. The basic experience of taking one class is not that different from taking another; and these students have been practicing classroom-based education since before they could use the potty on their own. What they don't know, they quickly figure out; and if they are truly disoriented, they will seek help from a classmate or us.
The problems come up in spades when we shift learning online. Suddenly, the students feel lost, uncertain, unsure--even if, in actuality, their part in the learning process hasn't significantly changed and there is no reason for them to feel disoriented. The majority of these so-called digital natives behave as if they were plopped down on a different planet that operates by a wholly different set of natural laws when they participate in the online classroom. They are disoriented and looking for familiar landmarks--which are often not immediately evident to them. Sometimes they forget good behavior. The same student who would never shout obscenities at a classmate suddenly posts an invective-laced, ad hominem attack on the discussion board. When the impropriety is pointed out, they are often ashamed and deeply apologetic. They just didn't think about it, they say.
When I implemented the blended course design in my campus-based large lecture class, the first semester was an exercise in frustration for me and the students. Most of it, I realized, came down to issues of disorientation: the students felt disoriented and were unable to recognize the familiar when it was right in front of them. It really was as if they were wearing a pair of glasses that distorted everything and made even completely normal things seem unfamiliar. From this, I learned the value of crafting a thoughtful orientation for students in "innovative" courses. These days, students are much more accustomed to the expectations and workings of the blended classroom, so orientation goes pretty quickly. The new frontier is the online class, especially the online class at scale (the larger the class, the more potential for disorientation).
In the first live run of Online Rome, we were pressed for time to get the course ready for students (the decision to go live was made by UT days before the start of the semester). We put together a short orientation module, but hastily. It worked ok, but throughout the term it was apparent that some portion of the students--maybe 10%--were still confused and disoriented. It wasn't that we were asking them to do strange things; it was that they were disoriented in the online environment and so were unable to recognize and feel comfortable with standard learning activities. They felt the need to double-check everything, seek confirmation that they were "doing things right." All of this disorientation required significant time and effort on the part of the instructor throughout the semester.
Over the winter break, I spent considerable time revising the orientation module so that it did a better job of equipping students with the skills they would need to confidently and successfully navigate the course. I also worked with the course instructor on the issue of orientation. This spring, he made regular announcements to the class, reminders of where they should be, upcoming deadlines and other course activities, and general feedback on things like their essays. The results have been what I expected: the students felt comfortable, knew what was expected, and have wasted little time fretting about logistics.
Eventually, we will be teaching a generation of students who are as comfortable learning online as they are in a classroom, and who require less regular orientation and re-orientation by the instructor. For now, though, the default learning environment for our students is the face to face classroom. Anything else requires us to be aware of the constant potential for disorientation. A lot of time can be saved with a well-crafted orientation module that lays out for the students the architecture of the course and their role in it. To give just one example, we focused on the role of self-regulated learning in Online Rome. We had the students read a little bit about the concept and answer some questions; and then had them apply the concept to a learning activity (answer a short answer question, look at feedback, reflect on how they would modify their response).
Finally, having students perform the sorts of activities that they will be doing throughout the semester is an excellent way for them to evaluate whether the online course is a good fit for them. If they find it difficult to complete the orientation module on time, or feel alienated in the online environment, they have plenty of time to drop the course and find something that is a better fit for their style of learning.