Prism Magazine - February 2003

LAST WORD

CLOSING THE FEEDBACK LOOP

- By Julia M. Williams

I have just finished reviewing my student evaluations from the Fall quarter, and as usual, the experience has been bittersweet. Please don't misunderstand me. I am satisfied, even pleased, by the results of these evaluations. And I recognize how significant the results of student evaluations can be. They dictate promotion and tenure decisions, determine raises, and often influence the way a faculty member views his or her own teaching. My problem lies in the nature of student evaluations themselves. Because of an inherent paradox, student evaluation forms fail both students and professors. The forms reflect an evaluation culture that places an inordinate value on personal opinion but does not require students to reflect substantially on the nature of their educational experience. Until we address those problems, student evaluations will remain inherently flawed.

Student evaluation forms differ to some degree from institution to institution, but in general most of them resemble the one my college uses. The form is divided into three parts, with one section on learning, one on the course, and one on me, the instructor. Within each section, students are asked to evaluate aspects of the course by marking a response ("Strongly Agree," "Agree," "Disagree," "Strongly Disagree," "Not Applicable") to statements like "The instructor's teaching style kept my attention" and "The laboratory for this course reinforced the lecture material." The numerical averages for each statement represent the quantitative component of the evaluation. In addition, each section is followed by a window to allow students to write additional comments. These comments cannot be totaled and averaged like the quantitative portion, and thus represent the qualitative component of the evaluation.

Current research has demonstrated the problems with forms like this one: they are often used to compare professors to one another based on the quantitative results, they are biased against women, and so on. Little attention has been paid, however, to the cultural context in which students fill out the forms. We live in an evaluation culture. No matter where we are and what we are doing, we are asked to evaluate our experience. A visit to Amazon.com requires us to scroll past lists of recommendations from people we do not know. Whether we attend a symphony performance or pay the phone bill, we are continually asked to rate the service or the string section. It is no wonder, then, that when students are asked to evaluate my "Technical Communication" course, they approach the task as if my course were the latest Hollywood blockbuster and they were giving it two enthusiastic thumbs up: "Dr. Williams rules!" Some will argue that student evaluations are not part of the evaluation culture, that marketers use response cards to sell cellphones and video games, not a college education. I would counter that students see the evaluations as comparable: the forms look much alike, the results are used to market certain professors and the university itself, and students have grown accustomed to their role as "customers" of education.

We have also erred by making forms that are too easy to fill out and take only minimal class time to complete. The university requires us to collect student evaluations at the end of the quarter or semester. Usually this means taking 10 or 15 minutes on the last day of class, a point in the term when students are tired, overworked, and distracted by the various assignments and exams that must be completed. In addition, the same evaluation forms are used in every class, so students become familiar with the nature of the questions and can speed through their responses. As a result, we do not ask students to reflect substantially on their learning throughout the course. Essentially, they record their feelings about the last day of class. To assess their educational experience meaningfully, students would have to evaluate the course multiple times throughout the term.

The current evaluation forms are easy for everyone: students who must fill them out, faculty who must sacrifice class time to distribute them, and administrators who must use them to make promotion and retention decisions. But we should not fool ourselves into thinking that they measure a student's educational experience.

 

Julia M. Williams is the coordinator of technical communication and an associate professor of English at Rose-Hulman Institute of Technology. She can be reached at jwilliams@asee.org.
