Student evaluations of teaching: it's not only how you teach; it's also whom you teach. A new paper by Sara Ayllón et al. finds that "less generous students systematically sort into certain fields, courses, and instructors' sections".

As a figure in the paper shows, there is "significant variation in the average ratings across majors, with instructors in the lowest-rated majors (e.g. Architecture and Economics) receiving approximately 0.5 SD lower ratings than the highest-rated majors (e.g. Medicine and Philosophy). While differences in instructional quality may partially explain these gaps, it is likely that student sorting plays an important role."

The paper also documents "considerable variability in the disadvantage faced by female faculty across and within fields". Notably, "female faculty in Business and Economics face substantially more gender-biased students than faculty in Arts and Communications and, as a result, receive significantly worse student ratings."

The good news: there are ways to correct for this. "A complex solution is to provide ratings for female and male faculty that adjust for gender-specific generosity and are normed to be equivalent across genders. This is technically feasible, but sacrifices transparency. A simpler solution flags to administrators courses in which female faculty face an expected disadvantage."

Read the full paper here: Sara Ayllón, Lars Lefgren, Richard W. Patterson, Olga Stoddard, Nicolás Urdaneta (2025), ‘Sorting’ Out Gender Discrimination and Disadvantage: Evidence from Student Evaluations of Teaching, National Bureau of Economic Research Working Paper 33911.
https://lnkd.in/ecKBEZEi (open access)
https://lnkd.in/eDZnQbf8 (gated)
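To make the paper's two remedies concrete, here is a minimal sketch in Python of what gender-specific norming and course flagging could look like. It assumes evaluation records in a pandas DataFrame with hypothetical columns `instructor_gender`, `rating`, and `expected_bias` (a course-level estimate of how gender-biased the enrolled students are, in rating-SD units); the column names, the toy numbers, and the 0.25 threshold are assumptions for illustration, not the authors' actual implementation.

```python
import pandas as pd

def norm_within_gender(evals: pd.DataFrame) -> pd.DataFrame:
    """'Complex solution': z-score ratings within each instructor gender,
    so adjusted scores are normed to be equivalent across genders
    (technically feasible, but it sacrifices transparency)."""
    out = evals.copy()
    by_gender = out.groupby("instructor_gender")["rating"]
    out["adjusted_rating"] = (
        out["rating"] - by_gender.transform("mean")
    ) / by_gender.transform("std")
    return out

def flag_expected_disadvantage(evals: pd.DataFrame,
                               threshold: float = 0.25) -> pd.DataFrame:
    """'Simpler solution': flag for administrators the courses in which
    female faculty face an expected disadvantage above `threshold`."""
    flagged = (evals["instructor_gender"] == "female") & (
        evals["expected_bias"] >= threshold
    )
    return evals.assign(flagged_for_review=flagged)

# Toy usage with made-up numbers:
evals = pd.DataFrame({
    "course": ["ECON101", "ECON102", "ART201", "MED301"],
    "instructor_gender": ["female", "female", "male", "male"],
    "rating": [4.1, 4.4, 4.6, 4.8],
    "expected_bias": [0.40, 0.10, 0.00, 0.00],
})
print(norm_within_gender(evals)[["course", "adjusted_rating"]])
print(flag_expected_disadvantage(evals)[["course", "flagged_for_review"]])
```

The trade-off the paper names shows up directly in the code: the within-gender z-score hides the raw numbers from readers, while the flag keeps raw ratings intact and only adds context.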
Insights from Student Evaluations
Summary
Insights from student evaluations are the patterns that emerge from student feedback about courses and instructors; they can reveal trends in teaching quality and learning experiences, but also possible biases. These evaluations give universities a window into classroom dynamics, yet they may reflect student preferences and demographic factors rather than objective teaching outcomes.
- Understand feedback patterns: Pay attention to recurring themes and disparities in evaluation scores across different fields and instructor demographics, as these may signal underlying biases or student sorting effects.
- Clarify evaluation purpose: Communicate to students why evaluations matter and how their input influences ongoing improvements to teaching and course design.
- Address bias concerns: Consider flagging or adjusting for systematic biases, like gender or major-related differences, when using student evaluations for decisions about instructor performance or promotions.
At semester's end, many universities lean on student evaluations of teaching as a proxy for quality. In Kirkpatrick's classic framework, that tool mostly taps Level 1, "Reaction": how much students liked the class, not whether they actually learned (Level 2), whether they transfer what they learned (Level 3), or whether there are meaningful outcomes (Level 4). For example, a student comment like "great lecturer!" is reaction; showing they can solve novel problems on a closed-book exam is learning; applying concepts in later courses or internships is behavior and results.

A recently published study, "The boys' club: gender biases in students' evaluations of their philosophy professors" (https://lnkd.in/eVksAVqX), further shows why caution is needed. When identical content was presented as if delivered by a man versus a woman, the "man professor" was consistently rated higher on competence, clarity, confidence, interest, and willingness to enroll, while the "woman professor" was more often judged on "care." Making gender cues more realistic (using voices) preserved these differences, and they persisted even among students who endorsed egalitarian views. In short, student evaluations reflect preference and stereotype, perhaps even more than they reflect pedagogy.

If student evaluations mostly assess reaction and are systematically gender-biased, they are not a sound stand-alone basis for quality management (...or for hiring and promotion). But what could be a good way to actually evaluate teaching?

#HigherEducation #GenderBias #UniversityTeaching #AcademicLeadership #QualityManagement
🎯 It's all about feedback: student evaluations

We all need feedback to grow: at work, in science, and in teaching. In industry or national labs, our managers (who may not know every technical detail) still give us valuable input on teamwork, professional growth, and contribution to the team's success. In academia, we get constant feedback via paper and grant reviews, and through student course evaluations.

Many colleagues ask, "How can students evaluate professors?" Student comments can be blunt or even harsh, testing your moral fiber as you read them. But feedback, however imperfect, is essential for improvement. What matters isn't just what I know, but how well I communicate and support learning.

To make evaluations more useful, I explain why they matter and how I'll act on them. Then, at semester's end, I steel myself to review the results, and I can clearly see how things evolve!

Spring 2024 vs. Spring 2025 (averages)
• Instructor contributed to understanding: 4.40 → 4.60
• Course challenged you: 4.60 → 5.00
• Atmosphere invited extra help: 4.20 → 4.50
• Responded to inquiries in 48–72 hrs: 4.40 → 4.56
• Respectful & positive environment: 4.40 → 4.90
• Useful feedback on assignments: 4.20 → 4.11
• Sessions well organized: 4.60 → 4.70
• Materials enhanced learning: 4.40 → 4.70
• Hours/week outside class: ~6–7 hrs → ~8–9 hrs

Key takeaways
• Higher engagement: response rate up, and students feel more challenged
• Stronger climate: positive, supportive scores climbed across the board
• Room to grow: "Useful feedback" dipped slightly; time to refine assignment comments

Grateful for every piece of feedback. Here's to iterating and communicating even more effectively next semester!
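For anyone tracking their own scores the same way, here is a tiny sketch of the semester-over-semester comparison above, hard-coding the averages reported in the post (the hours-per-week item is left out because it is a range rather than a single average); the layout is illustrative and not tied to any evaluation system's export format.

```python
# Semester-over-semester deltas for the evaluation averages listed above.
ratings = {
    # metric: (Spring 2024 avg, Spring 2025 avg)
    "Instructor contributed to understanding": (4.40, 4.60),
    "Course challenged you": (4.60, 5.00),
    "Atmosphere invited extra help": (4.20, 4.50),
    "Responded to inquiries in 48-72 hrs": (4.40, 4.56),
    "Respectful & positive environment": (4.40, 4.90),
    "Useful feedback on assignments": (4.20, 4.11),
    "Sessions well organized": (4.60, 4.70),
    "Materials enhanced learning": (4.40, 4.70),
}

for metric, (s24, s25) in ratings.items():
    delta = s25 - s24
    trend = "up" if delta > 0 else "down"
    print(f"{metric}: {s24:.2f} -> {s25:.2f} ({trend} {abs(delta):.2f})")
```

Running it surfaces the same takeaway as the bullets above: every metric improved except "Useful feedback on assignments", which dipped by 0.09.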