A recent article in Inside Higher Ed reports on a new meta-analysis of research correlating student evaluations of teaching with learning. I’ve long thought that, as academics with a wealth of qualitative and quantitative research knowledge, we can do much better in measuring effective teaching by actually measuring learning and growth.
Why do faculty and their universities hold on to this poor measure? Is it because it is easy? One thing is certain: these evaluations motivate faculty to do things that improve their ratings (not necessarily the learning), much as students are motivated by grades.
This seems like a good opportunity to put in a plug for more valuable, reliable evaluation processes, such as the Group Instructional Feedback Technique, Teaching Squares, or the Student Assessment of Learning Gains. These aren’t perfect, but they are a move in the right direction. Imagine what could be done if we were serious about building a better ruler.
A number of studies suggest that student evaluations of teaching are unreliable due to various kinds of biases against instructors. (Here’s one addressing gender.) Yet conventional wisdom remains that students learn best from highly rated instructors; tenure cases have even hinged on it.
What if the data backing up conventional wisdom were off? A new study suggests that past analyses linking student achievement to high student teaching evaluation ratings are flawed, a mere “artifact of small sample sized studies and publication bias.”
“Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between [student evaluations of teaching, or SET] ratings and learning,” reads the study, in press with Studies in Educational Evaluation. “Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between [evaluation] ratings and learning.”
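The mechanism the study points to — small samples plus publication bias producing an apparent correlation where none exists — is easy to see in a toy simulation. The sketch below is illustrative only, not drawn from the study itself: it generates fake multisection "studies" in which SET ratings and learning are truly uncorrelated, then applies a hypothetical publication filter (an assumed cutoff of r > 0.3) to the small-sample studies. The surviving small studies show a sizable average correlation purely by chance, while the large-sample studies cluster near zero.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (sx * sy)

def simulated_study(n_sections, rng):
    """One multisection 'study' in which SET ratings and learning
    are independent noise -- the true correlation is exactly zero."""
    ratings = [rng.gauss(0, 1) for _ in range(n_sections)]
    learning = [rng.gauss(0, 1) for _ in range(n_sections)]
    return pearson_r(ratings, learning)

rng = random.Random(0)
small = [simulated_study(10, rng) for _ in range(1000)]   # few sections each
large = [simulated_study(100, rng) for _ in range(1000)]  # many sections each

# Hypothetical publication filter: only small studies that happened to
# find a sizable positive correlation make it into the literature.
published_small = [r for r in small if r > 0.3]

print(round(statistics.mean(published_small), 2))  # inflated, despite true r = 0
print(round(statistics.mean(large), 2))            # hovers near zero
```

Nothing about the simulated data favors highly rated instructors; the inflated average comes entirely from which small studies survive the filter — which is exactly why a meta-analysis weighted toward large-sample studies can erase a "well-established" effect.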