
Re: Measuring Content Knowledge



Subscribers to Chemed-L, Phys-L, and STLHE-L have been mercifully
shielded (by their respective limits of 150, 300, and 150 lines per
post) from my recent 432-line post:

Hake, R.R. 2004. "Re: Measuring Content Knowledge", online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0403&L=pod&O=D&P=16472>.
Post of 14 Mar 2004 16:29:47-0800 to ASSESS, Biopi-L, Chemed-L,
EvalTalk, Physhare, Phys-L, PhysLnrR, POD, and STLHE-L.

If your interest is:

(a) zero or less, then please hit the delete button;

(b) only slightly greater than zero, then please scan the abridged
version in the APPENDIX; or

(c) somewhat greater than zero, then please access the entire post by
clicking on the above URL
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0403&L=pod&O=D&P=16472>.

Regards,

Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
<rrhake@earthlink.net>
<http://www.physics.indiana.edu/~hake>
<http://www.physics.indiana.edu/~sdi>

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
APPENDIX [Abridged Version of Hake (2004)]

In her EvalTalk post of 11 Mar 2004 13:08:10-0500 titled "Measuring
Content Knowledge," Jacqueline Kelleher wrote:

"In an effort to come up with an evaluation plan for our teaching
assistants in higher education, some colleagues and I engaged in a
difficult dialogue about assessing content knowledge. Aside from GPA,
Praxis II or GRE scores, does anyone know of tools or procedures used
to tap into evaluating domain knowledge for program areas?" . . . .

Various physics-area diagnostic tests, such as the FCI [Force
Concept Inventory, Hestenes et al. (1992)], are listed at NCSU
(2004). Diagnostic tests for physics and other sciences are listed
in FLAG (2004) ["Field-tested Learning Assessment Guide"]. For some
comments on the use of FLAG see Hake (2002a).

Diagnostic tests of content knowledge [or (better) "operative"
knowledge (Arons 1983)] in various non-physics areas have been
constructed by those interested in the development of pre/post tests
to measure learning gains in science courses:

ASTRONOMY: Adams et al. (2000); Zeilik et al. (1997, 1998, 1999);
Zeilik (2002);

ECONOMICS: Paden & Moyer (1969); Saunders (1991); Kennedy & Siegfried
(1997); Chizmar & Ostrosky (1998); Allgood & Walstad (1999);

BIOLOGY: Roy (2001, 2003); Anderson et al. (2002); Klymkowsky et al.
(2003); Sundberg & Moncada (1994); Sundberg (2002); Wood (2003);

CHEMISTRY: Milford (1996); Bowen & Bunch (1997); Robinson &
Nurrenbern (2001); Gonzalez et al. (2003); Birk et al. (2003); ASU
(2004);

COMPUTER SCIENCE: Almstead (2003); and

ENGINEERING: Evans & Hestenes (2001); Foundation Coalition (2003);
Belcher (2003); Wage & Buck (2004).

In some cases the above work was stimulated [see, e.g., Stokstad
(1999), Evans & Hestenes (2001), Evans et al. (2003), Roy (2001,
2003), Wood (2003), Klymkowsky et al. (2003), Belcher (2003),
Foundation Coalition (2003), Wage & Buck (2004), ASU (2004)] by
pre/post testing in physics education [for a review see Hake
(2002c)], initiated by the landmark work of Halloun & Hestenes
(1985a,b). . . .

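For readers unfamiliar with how such pre/post results are commonly
summarized: the figure of merit in much of this literature is the
average normalized gain <g>, i.e., the class-average gain divided by
the maximum possible gain, <g> = (%post - %pre)/(100 - %pre). Below
is a minimal sketch in Python; the function name and the sample
scores are mine, for illustration only.

  # Average normalized gain <g> from class-average percentage scores:
  #   <g> = (%post - %pre) / (100 - %pre)
  def normalized_gain(pre_avg_pct, post_avg_pct):
      """Return <g> given class-average pre/post scores in percent."""
      if pre_avg_pct >= 100.0:
          raise ValueError("pre-test average must be below 100%")
      return (post_avg_pct - pre_avg_pct) / (100.0 - pre_avg_pct)

  # Illustrative (made-up) numbers: a class averaging 45% on the
  # pre-test and 78% on the post-test has <g> = 33/55 = 0.60.
  print(round(normalized_gain(45.0, 78.0), 2))   # -> 0.6
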
Unfortunately, except for the stimulated authors cited above, the
physics pre/post testing work is almost unknown outside physics (and
even within physics), an argument for the use of online, easily
searchable "structured abstracts" for ALL education research
publications [Mosteller et al. (2004), Hake (2004)]. Even the NRC's
expert committees regularly ignore the pre/post testing movement
[see, e.g., Labov (2003) and McCray et al. (2003); a recent
exception is Donovan & Pellegrino (2003)].