I think the reason for including the student's GPA on the evaluation form
(anonymous, of course) is the assumption that a mode of teaching may
please some students more than others.  A particular instructor may, for
instance, be well rated by the "good" students, but be teaching over the
heads of the weaker students.  If "mixed" evaluations correlate with
student GPAs, this tells us something about how the teaching/learning
might be improved.

It may also be useful to correlate evaluations with the students'
ANTICIPATED grades in the course (and I believe there is often a
correlation here).
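
If one wanted to check these correlations, a minimal sketch in Python
might look like the following (the ratings, GPAs, and anticipated grades
are entirely hypothetical, and statistics.correlation requires Python
3.10+):

    # Pearson correlations between evaluations and two student variables.
    # All numbers below are invented for illustration.
    from statistics import correlation

    ratings = [4.5, 3.0, 2.5, 4.0, 3.5]      # course evaluations (1-5)
    gpas = [3.8, 2.9, 2.4, 3.6, 3.1]         # cumulative GPAs
    anticipated = [4.0, 3.0, 2.3, 3.7, 3.3]  # anticipated course grades

    # A strong positive r with GPA would suggest the course pleases the
    # stronger students more than the weaker ones.
    print(f"r(rating, GPA)         = {correlation(ratings, gpas):.2f}")
    print(f"r(rating, anticipated) = {correlation(ratings, anticipated):.2f}")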

The most important thing, of course, is to remember that student
evaluations are ONE perspective on the quality of teaching/learning in a
particular course.  Because they are the easiest perspective to collect
(and because, under the reign of business metaphors, students are the
"customers" who must be pleased), however, they often become THE data from
which instructors are evaluated.  A much more difficult question is what other
"data" belongs in a teaching dossier.  After all, we ultimately want to
know about the quality of students' learning (which may or may not
correlate with their ratings of the course/instructor--though, of course,
they do in my classes, he added, tongue somewhat in cheek).

In technical terms, it is much easier to establish the RELIABILITY than
the VALIDITY of any evaluative instrument (which is why, in my experience,
"experts" tend to focus our attention on reliability, not validity).  Of
course, if the instrument isn't valid, why should anyone care how reliable
it is?  In this case, student evaluations matter insofar as they are
valid, i.e., insofar as they correlate with the quality of learning.
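
To make the reliability/validity distinction concrete, here is a minimal
sketch along the same lines (a hypothetical three-item evaluation form
answered by five students, with an invented external measure of their
learning, say a common exam):

    # Reliability as internal consistency (Cronbach's alpha) versus
    # validity as correlation with an independent measure of learning.
    # All numbers below are invented for illustration.
    from statistics import correlation, pvariance

    responses = [          # rows = students, columns = evaluation items
        [4, 5, 4],
        [3, 3, 4],
        [2, 2, 3],
        [5, 4, 5],
        [3, 4, 3],
    ]
    learning = [88, 75, 62, 93, 78]  # external learning measure (exam scores)

    k = len(responses[0])
    items = list(zip(*responses))            # item columns
    totals = [sum(row) for row in responses]

    # Cronbach's alpha: how consistently the items measure *something*.
    alpha = (k / (k - 1)) * (1 - sum(pvariance(i) for i in items)
                             / pvariance(totals))
    # Criterion validity: whether that something tracks actual learning.
    r = correlation(totals, learning)

    print(f"reliability (alpha) = {alpha:.2f}")
    print(f"validity (r)        = {r:.2f}")

An instrument can score high on the first number while telling us little
about the second, which is the whole point of the distinction.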

rick coe