
Re: Student/Faculty Evaluations and Grades - Correlations



Please excuse this cross-posting (in the interest of
interdisciplinary synergy) to discussion lists with archives at

AERA-D <http://lists.asu.edu/archives/aera-d.html>,
ASSESS <http://lsv.uky.edu/archives/assess.html>,
EVALTALK <http://bama.ua.edu/archives/evaltalk.html>,
Phys-L <http://mailgate.nau.edu/archives/phys-l.html>,
PhysLrnR <http://listserv.boisestate.edu/archives/physlrnr.html>,
POD <http://listserv.nd.edu/archives/pod.html>,
STLHE-L <http://listserv.unb.ca/archives/stlhe-l.html>.

If you wish to respond, please DON'T HIT THE REPLY BUTTON (the bane
of discussion lists) and thereby inflict it yet again on suffering
list subscribers.

In his AERA-D post of 11 Nov 2002 11:20:51+0000(sic) titled "Re:
Student/Faculty Evaluations and Grades - Correlations," Ivan Smodlaka
of CUNY wrote:

"We are going to take (another?) look at the correlation between
students' evaluations of teaching faculty and students' grades."

In his AERA-D response of 11 Nov 2002 12:27:28-0500, Dennis Roberts wrote:

"just out of curiosity ... what if you did find that a typical r was .3 ...
what then?"

This time I'm with Dennis. Suppose you were to replicate the results
of Peter Cohen's (1981) oft-quoted meta-analysis of 41 studies on 68
separate multisection courses purportedly showing that:

"the average correlation between an overall instructor rating and
student achievement was +0.43; the average correlation between an
overall course rating and student achievement was +0.47 ... the
results ... provide strong support for the validity of student
ratings as measures of teaching effectiveness"?

Would you then assume that Professor A who received relatively high
student evaluations is a more effective teacher than Professor D who
received relatively poor student evaluations?
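Before answering, it is worth remembering how little of the variance such correlations actually account for. As a back-of-the-envelope sketch (using only the r values quoted above, and the standard fact that r-squared gives the fraction of variance shared by the two measures):

```python
# For each quoted correlation between student ratings and achievement,
# compute r^2, the fraction of variance in achievement "explained"
# by a linear relationship with the ratings.
for r in (0.3, 0.43, 0.47):  # Roberts' hypothetical; Cohen's (1981) averages
    r_squared = r ** 2
    print(f"r = {r:.2f}  ->  r^2 = {r_squared:.3f} "
          f"({r_squared:.0%} of variance shared)")
```

Even at Cohen's r = +0.47, less than a quarter of the variance in measured achievement is associated with the ratings, which is why "what then?" is a fair question.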

The value of student evaluations as gauges of the cognitive impact
(as opposed to the affective impact) of courses is a hotly debated
issue. Even student-evaluation champion Ken Feldman (1989),
commenting on Cohen's (1981) claim, pointed out that McKeachie (1987)
"has recently reminded educational researchers and practitioners that
the achievement tests assessing student learning in the sorts of
studies reviewed here ... (e.g., those by Cohen 1981, 1986, 1987)
... typically measure lower-level educational objectives such as
memory of facts and definitions rather than higher-level outcomes
such as critical thinking and problem solving ... (he might have
added conceptual understanding) ... that are usually taken as
important in higher education."

Striking back at student-evaluation skeptics, Peter Cohen (1990) opined:

"Negative attitudes toward student ratings are especially resistant
to change, and it seems that faculty and administrators support their
belief in student-rating myths with personal and anecdotal evidence,
which (for them) outweighs empirically based research evidence."

However, as far as I know, NEITHER COHEN NOR ANY OTHER STUDENT
EVALUATION CHAMPION HAS COUNTERED THE FATAL OBJECTION OF MCKEACHIE to
the validity of student evaluations as measures of student
higher-level learning.

Russ Hunt recently asked subscribers of POD and STLHE-L for
"suggestions of an article or two which would offer a good overview
of what's currently known about student evaluations."

Because many POD and STLHE-L subscribers strongly believe in the
value of student evaluations (even as gauges of the cognitive
effectiveness of courses!), Russ's list reflects that bias.

For an opposite standpoint on student evaluations, reflecting 17
years' worth of DIRECT assessment of introductory physics-course
effectiveness by rigorous pre/post testing, see Hake (2002).

Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
<rrhake@earthlink.net>
<http://www.physics.indiana.edu/~hake>
<http://www.physics.indiana.edu/~sdi>


REFERENCES
Cohen, P.A. 1981. "Student Ratings of Instruction and Student
Achievement: A Meta-analysis of Multisection Validity Studies,"
Review of Educational Research 51: 281. For references to
Cohen's 1986 and 1987 updates see Feldman (1989).

Cohen, P.A. 1990. "Bringing research into practice," in M. Theall &
J. Franklin, eds., "Student Ratings of Instruction: Issues for
Improving Practice," New Directions for Teaching and Learning,
no. 43, pp. 123-132. Jossey-Bass.

Feldman, K.A. 1989. "The Association Between Student Ratings of
Specific Instructional Dimensions and Student Achievement: Refining
and Extending the Synthesis of Data from Multisection Validity
Studies," Research in Higher Education 30: 583.

Hake, R.R. 2002. "Re: Problems with Student Evaluations: Is
Assessment the Remedy?"
AERA-D/ASSESS/EvalTalk/Phys-L/PhysLrnR/POD/STLHE-L post of 25 Apr
2002 16:54:24-0700; online at
<http://lists.asu.edu/cgi-bin/wa?A2=ind0204&L=aera-d&P=R4664>.

Hunt, R. 2002a. "Some resources for learning about student
evaluations of teaching," STLHE-L post of 1 Nov 2002 12:01:38-0400;
online at
<http://listserv.unb.ca/bin/wa?A2=ind0211&L=stlhe-l&P=R67>.

Hunt, R. 2002b. "Some Resources on Student Evaluations of Teaching";
online at <http://www.stu.ca/~hunt/evalbib.htm>.

McKeachie, W.J. 1987. "Instructional evaluation: Current issues and
possible improvements." Journal of Higher Education 58(3): 344-350.

This posting is the position of the writer, not that of SUNY-BSC, NAU or the AAPT.