Physics educators may or may not be interested in a recent post "Re:
pre-to-post tests as measures of learning/teaching" [Hake (2008)].
The abstract reads:
********************************************
ABSTRACT: Barry Hicks, in his Chemed-L post titled "pre-to-post
tests as measures of learning/teaching," raised five points which I
shall paraphrase as:
(1) Are pre-to-post test gains a reliable assessment of learning/teaching?;
(2) I hear about this crap from fuzzy departments and have to shake my head;
(3) A student new to a course does poorly on a test at the start but
better on the same test at the end - stop the presses! - publish
this extraordinary result!;
(4) What added value is pre/post testing?
(5) Do others do this pre/post testing?
Other points added by Chemed-L'ers in reply to Hicks's post are:
(6) Have any controlled experiments been done to determine (a) the
halo effect of pretesting and (b) the effect of question familiarity
from pretesting?;
(7) When money creates a conflict of interest, administrators will
pressure teachers to teach to the test;
(8) If students are not graded on the test results, how do you get
them to take the test seriously?
In this post I address the above 8 points.
********************************************
REFERENCES
Hake, R.R. 2008. "Re: pre-to-post tests as measures of
learning/teaching," online at the OPEN AERA-L archives
<http://tinyurl.com/3booqj>. Post of 28 Jan 2008 17:33:48-0800 to
AERA-L, Chemed-L, PhysLrnR, & POD; and (with several typos corrected)
to AERA-L on 29 Jan 2008 11:19:05-0800.