Like most instructors I know, I love teaching and hate grading. It's not the act of grading that I particularly hate--though I won't claim to love it--but rather the way that grades inevitably get in the way of student learning. Anxiety levels skyrocket and engaged learners suddenly become sweaty and grade-obsessed. After a decade of teaching, I've come to despise midterm season and, even more, the final weeks of the semester. Any illusions I might have held that my students were invested in my course for the pure joy of learning are dashed against the rocks as they pepper me with questions about grades or demand higher grades, often with little or no justification. In fact, I became so fed up with the unsubstantiated demands for higher course grades (I need an A to get into the business school; I need to pass the class to graduate) that I instituted a new policy this spring: any request for a higher grade for any reason other than an error of calculation would result in an immediate 5% deduction from the student's course grade. The policy worked. The students whose grades had miscalculations contacted me and I corrected the errors. I reviewed the work of a few students who had questions about the scores they had earned on specific assignments. I received zero emails asking for an arbitrary grade boost. Consequently, my own end of the semester was moderately less stressful.
When I embarked on my project to flip my 400-student Rome class, I had only a tenuous sense of what a well-run flipped class would require, and I had no idea that it would require me to overhaul my assessment structure completely. It should be obvious: if we instructors want students to use different learning strategies, we also need to alter the ways we assess their learning. Yet it is a point that never came up in my (admittedly preliminary) reading about the flipped class (and I'd be happy for readers to point me to any research on this topic). While flipped classes are becoming more common in K-12 schools, they are still pretty exceptional at post-secondary institutions. In part, this is because many humanities disciplines have long operated their smaller seminars as, in essence, flipped classes (assign readings, discuss assigned readings during class). In recent years, especially in math and the natural sciences, the flipped classroom has started to take hold on university campuses. Still, these flipped classes have generally been on the smaller side, no larger than about 75 students. What I didn't know and couldn't predict was how different the techniques and tactics for flipping a 50-75-student chemistry class would be from those that would work for a much larger enrollment in a non-problem-based discipline like Roman history.
In Fall 2012, in my first attempt to flip a traditional lecture course, I (like most people) thought flipping a class was all about using technology (in my case, Echo360 lecture capture) to shift content delivery outside of class. Preparing and recording those lectures over the course of five weeks, four days a week, was an exhausting labor of love--and resulted in a stress fracture in my foot! I devoted substantial time to what I would be doing during class time in lieu of lecturing. In fact, one motivation for the flip was the addition of an Ethics and Leadership flag to the course. Among other things, I planned to run a series of seven in-class case studies of ethically questionable moments in Roman history. Still, I did not really understand that I couldn't flip a class just by recording a bunch of lectures and planning some in-class reviews of content and discussions. The only change I made to the assessment structure--a dumb one in retrospect--was to allow students to count their lowest midterm for 5% of their final course grade (to avoid giving endless make-up exams).
In Fall 2012, then, I taught a class I labeled as flipped, but with the standard three-midterms-plus-final-exam assessment structure. It will surprise nobody that (a) my students, as a group, did not really flip (I had no way to make them learn the content week by week); and (b) though the very top students performed at an incredibly high level, many of the B-C students performed at the same or even a slightly lower level than the Fall 2011 traditional lecture cohort (in part because the 2012 course was, in fact, more challenging than the 2011 version). I didn't hurt anyone; and those who did the course as it was designed saw real returns for their efforts. The majority of the class, however, ignored the course design altogether and focused all of their attention on the graded assessments. Like the Fall 2011 cohort, they did little work between exams and then, in the 48-72 hours before each exam, they tried to cram as much as they could into their tired brains, with fairly mediocre results. I learned an important lesson from this cohort, one that should become any instructor's mantra: students flip a class, not the instructor and not the course design.
At the start of the fall course, I spent an entire day talking about the flipped class model: explaining why I was using it, how it would work, and why it was preferable to the traditional lecture model. We had a long discussion about learning and the difference between grades and learning. I was optimistic that, maybe, I could get this group of students to discover the joy of learning, quite apart from assessments and grades. Ha! I soon realized that there was no way I was going to undo an approach to education that had been firmly inculcated in these students by an intense regimen of high-stakes standardized tests, as well as by the requirement that they rank at the very top of their graduating class to gain admission to UT (thanks to UT's deeply troubling "Top 10%" rule). These students spoke the language of grades, not learning; if I wanted to get through to them and have some chance of making inroads into their learning habits, I needed to speak their language, not expect them to speak mine.
At the risk of appearing to be yet another proponent of Skinnerism, I did essentially adopt a behaviorist approach to my course (re)design for Spring 2013. I recognized that, if I wanted students to do something, I needed to incentivize and reward that behavior as well as discipline other choices. If I wanted them to stay on top of the course material week by week so that class meetings would be useful to them, I needed to institute a series of low-stakes weekly quizzes. Each quiz was worth only 5% of the total grade. I gave students ungraded practice questions to do on Blackboard and took several of the questions for the actual quiz directly from these practice questions. If you did the work, it was a great way to earn an easy A on that 5%. If a student didn't do the work, it wasn't the end of the world: they could drop their lowest quiz, and since each quiz counted for only 5%, there was plenty of time to recover and refocus.
Similarly, for each ethics case study, there was a series of three worksheets: one completed before class, which reviewed the basic historical facts and the ethical dilemma and was intended to get students thinking about the major issues (they could work on these together if they wished); one completed in class (working alone, with a peer, or as a class, on different questions); and one completed after class, a short essay in which they applied the ethical dilemma to a modern issue. We collected the worksheets regularly to emphasize the importance of having them done on time, and my TA offered substantial feedback to the students, with advice on how to improve their answers. During class, I frequently asked questions. Students who volunteered were given a piece of candy. Silly, yes, but it had the desired effect of--finally--getting students seated all over the giant auditorium to participate in the conversation. If students did not fill in their scantron forms correctly, we made them come to the TA's office and dig through the pile of 400 forms to find theirs. We also deducted two points. We had no repeat offenders during the semester (though there were always a few new ones).
After exams, we had students complete "Exam Wrappers" for extra credit. These exam wrappers asked them to reflect on their study strategies and their performance on the exam. Instead of my telling the class how they did and what they needed to do to improve, each student worked it out for themselves with an exam wrapper. I discovered, not surprisingly, that students knew exactly what they needed to do to improve. The issue was doing it. I also discovered that it was far more effective for them to figure out what they needed to do than it was for me to try to tell them. For many semesters, I have worked to change deeply ingrained student behaviors, particularly the "cram for the exam" approach. I know it doesn't work for most of them; yet every one of them wants to believe that they are among the 10% for whom it does work. By the time they realize that, no, they aren't, it's too late and their grade has tanked.
In the Spring 2013 course, students were being assessed constantly: quizzes, worksheets, the online discussion board, midterms. Interestingly, though, anxiety levels were palpably lower than ever before. The reason, I think, is that assessment was just part of the class, not a "special" thing. As well, no single assessment was worth very much. Midterms were worth about 13%, not 30%. Oddly, though, students studied just as hard for them and performed at a significantly higher level than the Fall 2012 cohort, whose midterms were a substantially larger portion of their grade. The other reason that grades were much higher in the Spring, I think, is that students had multiple opportunities to adjust their behaviors and learn from their mistakes. They got feedback not just from a computer (although that was one source of feedback), but also from real, living, breathing members of the teaching team. That human feedback was, in my opinion, crucial. It was also the most time-consuming, and it is the part of the flipped class model that is difficult to sustain without significant classroom support.
One other interesting feature of the Spring 2013 cohort: despite the ubiquitous presence of assessment in the class, talk of grades was relatively infrequent. For the most part, students just did what they were supposed to do. The frequent, low-stakes assessments meant that they always had a much better sense of where they stood and what they needed to do to get the grade they desired. The one assignment that generated the most grade-related questions was the ethics portfolio, because I had deliberately not posted a specific, point-by-point breakdown and explanation of how the portfolio would be graded. It is the one assignment to which I will add more explanation for future students.
It wasn't until the final week of the semester that talk turned to grades. Students were running numbers, worrying about what they needed to get on the last midterm, and so on. I was a bit disheartened by this turn, but not overly so. Of course they were worried about grades. That is their language. At the same time, they had spent 14 weeks learning the course content. By the time they started to obsess about their grades, they had done what I wanted them to do: learn Roman history. At this point, I have accepted the fact that--at least with this audience of non-majors taking my class to fulfill a core requirement--I'm not going to persuade most of them that they should not worry about their grade. I certainly believe that good grades proceed from learning, and I strongly believe in the joy that learning can bring. But I've learned that the way to win this argument is to set up a system that "speaks" to them and to let them experience the joys of learning--and the good grades--that come from good learning behaviors. Roman history is fun, and I am pretty sure that many of them felt that way by the end of the semester.
At the start of the semester, a demographic poll told us that 94% of the class expected to earn some form of an A. A challenge in teaching this class is, above all, getting at least 50% of those students to accept that they aren't going to earn an A while remaining invested and working hard. For this particular audience (the attitudes and expectations of this group are pretty typical of the students who take this course), frequent assessments and reality checks are an invaluable way to communicate to them where they stand, how well their learning strategies are working, and what sorts of things they need to do if they want to earn a higher grade.