Category Archives: Teaching

What are We Supposed to Do With Student Evals Anyway?

This post is not to exalt evaluations as the pinnacle of student assessment. First, the hunt for great evaluations, sped on by grade inflation and student pressure for good grades, can negotiate a tacit contract between instructors and students, a kind of quid pro quo agreement. Second, response rates are notoriously low on student evaluations, and often the two types of students who complete evaluations are those most & least satisfied with our courses. This leads to a Janus-faced discourse that is unhelpful to instructors. Third, student evaluations are always already burdened by systems of patriarchy and racism and exist to support systems of white, male, heterosexual privilege. For example, see this excellent annotated bibliography on Gender and Student Evaluations. The race, gender, and age of the instructor prejudice student perceptions before instructors ever begin to teach.

Moreover, even beyond these factors, subject expertise does not always translate to expert teaching. The most qualified person to deliver content to students is not always the best at that task, so we should not assume that poor evaluations automatically mean that learning is not occurring. Also, to some degree, evaluations test the affective response students are left with toward the instructor; an instructor who comes off as a misanthrope at times may be great in the classroom yet not be perceived that way by students.

Feel free to skip the breakdown below and jump to the tips on reading evals at the very end.

Watching my inbox, twitter, & Facebook feeds, it seems it is student evaluation time now that the semester has come to a close. Student evaluations make for tricky reading for instructors because they are documents that can carry some weight on current and/or future promotion, retention, or job prospects. Also, no matter how much we urge and desire student feedback during our classes, many students, for a variety of reasons, use evaluations rather than talking to instructors directly about their experiences in a class.

I know many of my peers and I, especially those of us early in our careers, are eager to see student feedback for a number of reasons. Because of that urge, I want to explore the usefulness of qualitative student feedback on course evaluations.

First, we want to know about major flaws in our course design. This is a genuine desire: was a unit particularly unhelpful? Was a unit too easy? Was an assignment particularly unclear? Grade data can help provide this information, but the evidence is particularly damning when it corresponds to student feedback.

Second, teaching is a deeply performative art. Students often provide feedback on our personality. This can be painful to read. We learn the good and the bad. It was in course evals that I learned I like to make a fist and pound on the white board when I get fired up about a topic. It is also here that we can learn we are communicating things to our students we never intended. For example, from time to time students will report on my course evaluations that I am intimidating, arrogant, or difficult to approach. I struggle with this description, not because I lack the capacity to be arrogant, intimidating, and difficult to approach, but because I work hard in the classroom to craft a persona that is open and approachable: begging students to visit me during office hours if they have questions, lingering after class to speak with students, and arriving early to chat informally. Moreover, I know this is a performative issue because the students I develop deeper relationships with describe me in much different terms.

Third, evaluations are these bizarre blind exercises. There is this deep temptation to try and figure out who said what about you. Sometimes you have a sense of which student said what by their tone and writing style, particularly if the semester demanded you attend to their writing in detail. This may be the most consuming and least useful part of reading student evaluations. When I was a young boy I heard a story about my grandfather, who was a minister in a Presbyterian church in Florida. Each year the congregation would take an anonymous vote to retain or dismiss him. Each year the vote would be something like 68-2. For the rest of the year my grandfather would relentlessly obsess over who the two people were that sought to cast him away. I wonder, with some irony, if we engage in those same exercises. It is as if we tell ourselves: if I can correlate the negative comments with students who performed poorly, perhaps I can excuse their force. But that means we may be dismissing a very important correlation: a student’s poor experience and their poor outcome may be crucially related.

How, then, do we use qualitative feedback on student evaluations?

First, the feedback will be personal, but don’t take it that way. One semester is a snapshot in your career and teaching is an act of becoming. It is a work in progress, not a reflection on who you will always be.

Second, no comment in isolation merits significant change, but every comment can, and perhaps should, generate reflection. If one student says assignment expectations were unclear, then perhaps they were to them. If four or five evaluations say that, it may be time to revisit that assignment’s description, especially if those comments correlate with the class’s average score on that assignment.

Third, even when statements correlate, that does not mean that you need to take action. I have four years of student evaluations that suggest the reading material in my classes at the University of Utah is too difficult (not every eval says this, but about six students per class will say this). I recognize that many of the readings I assign push my students. However, I have yet to find a case where the difficulty of the readings was the cause of a lack of student success. Does that make my class hard? Yes. Demanding? Yes. Will I ease up on my syllabi? No. In this case I recognize the triangulation on evaluations, but see the educational merit in reading difficult, but applicable, material for the opportunities it offers. Moreover, I have not chosen the material just to be difficult; it simply happens that when you deal with complex and abstract ideas you need complex and abstract readings. In essence, even when students routinely object, if you have taken a principled stand, don’t let evaluations pressure you into backing down. I have recently started sharing with my classes some of the thoughts on difficult readings & reading difficult theory collected by Robert W. Gehl, and I saw fewer of those comments this last semester; you can find his thoughts here.

Please share your thoughts on course evals, as a student, instructor, or interested/apathetic party, below or with me via twitter @acaguy.



Filed under Cultural Studies, Feminisms, Grad School, Teaching

Attack of the Exam: Summer Edition

I have spent the last two days watching my students in Analysis of Argument and Communication Criticism turn in their final papers and take their final exams. I can see the anxiety on their faces as they pry open their blue books and wait for the exams to reach them. Exams are stressful, though we ought not worry about causing our students stress; it can be a very productive affective mode. Nonetheless, their anxiety is also paired with my own:

Will this exam adequately and accurately assess what they have learned?
Is the exam hard enough? Too hard?
Is the exam fair?

Before my argument students took their exam, I told them about a professor I once had who called an exam a four-letter word (see also: test and quiz). A student told me that they once had a prof who called exams celebrations of learning. I like that frame because, although cheesy, it is a good reminder that we do not test because we are mean or punitive, but because we want to provide a means for students to show what they have learned. Even under such a positive pretense, questions about effectiveness and fairness remain.

My own testing strategy is minimalistic and is a holdover from my own undergraduate education: a blue book and pen matched with identification questions, short essay questions, and long essay questions. For example, my Comm Criticism exam has a grand total of four questions (2 short essays and 2 long essays). These are time-intensive for the student to complete and for me to grade.

Of course, others test in different ways; multiple-choice exams are commonly used by my colleagues for many reasons (they believe these exams are more comprehensive, better test students, and/or can be graded much faster). Although I am the first to admit I despise multiple-choice exams, I cannot fault any of my peers for using them, especially as class sizes continue to swell without increases in time or compensation. My bias remains, however, and so I test my students in the same way I was tested. As a result, there are a number of consequences:

1) These exams carry a kind of bias that rewards on-the-spot depth of thinking in a way that is not always beneficial. Papers produce more complex arguments than timed essay exams.
2) Every question is high stakes. With only 4 questions on an exam, to miss one answer can decimate a grade. I help soften the impact of this by always giving as much partial credit as possible and by letting students choose the questions they respond to from a bank of options (For example, “Choose 2 of the 4 short essay questions below”).
3) I sacrifice quantity for quality. I am always perplexed by the concept of exam reviews because students want me to cover everything important in a semester in one class meeting; that is impossible. However, my minimalist approach does not solve the problem; my exams commit many sins of omission by leaving out crucial tests of knowledge that are important to the structure and content of my classes.

Rather than saying one way of testing is better than others, I am simply saying what we all know: any test is not an objective metric but a subjective exercise in power/knowledge. To make such a claim is not to denounce testing; I take pride in the exams I give. But the way we test speaks loudly about the kinds of classes we teach and the ways we conceive of knowledge production and evaluation.

For now, the blue book blues stand between me and a visit with friends and family in Wisconsin.

For those of you who teach: What types of exams do you use? What do you think of their efficacy? Why do you test that way?

For those of you who are, or were, students: What kind of exams do you prefer? Why? What kinds of evaluation methods did you find rewarding and productive for retaining course material?




Filed under Teaching