Re: student assessment/content knowledge



In his Chemed-L post of 16 March titled "Re: student
assessment/content knowledge," Barry Hicks (2004) of the U.S Air
Force Academy wrote:

"I went to Richard Hake's (2004a) link yesterday to read about
content knowledge. Couldn't make much of it . . . . I went to [Hake
(2003b)] to see Dr. Hake's arguments [against gauging student
learning by course exams], but the link was only about pre/post
testing and <g> values. Much of what I saw was what I call
edu-speak; commentary by educational folks without saying clearly
what is meant to the 'unjargonized" masses (to which I belong)."

For the benefit of the "unjargonized masses," Joe Bellina (2004) has
brilliantly translated into plain English my impenetrable, pedantic,
prolix, overly referenced, and jargonized edu-speak regarding the
unreliability of course exams as gauges of student learning. I
acknowledged the value of Joe's translation in Hake (2004b).

Nevertheless, I do want to contest Barry Hicks's claim that Hake
(2003b) is only about pre/post testing and <g> [average normalized
gain] values. I suspect that Barry may have been too numbed by
edu-speak to recognize in Hake (2003b) the course exam criticism in
parts "c" and "d" of the section below, bracketed by lines
"HHHHHHHHHHHHH. . . . ":

HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
Unfortunately, many psychology/education/psychometric practitioners
(PEP's), university administrators, and university faculty appear to be:

(a) afflicted with pre/post paranoia [Hake (2001)];

(b) convinced that Student Evaluations of Teaching (SET) ratings are
valid measures of the cognitive impact of courses (as distinguished
from the affective impact) [see Hake (2002)];

(c) unconcerned with higher-level learning objectives such as "procedural",
"schematic", and "strategic" knowledge within knowledge domains
(Shavelson & Huang 2003, Chart 1) as measured by tests such as the
"Force Concept Inventory" that are constructed by disciplinary
experts in education research;

(d) ONLY COGNIZANT OF TESTS MEASURING LOWER-LEVEL ROTE MEMORIZATION
OF FACTS AND DEFINITIONS AS REFLECTED IN MOST COURSE EXAMS AND FINAL
GRADES [McKeachie (1987), Johnson (2003), Merrow (2003)].
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
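
Since <g> values are at issue, a brief gloss for the "unjargonized
masses" may help (this is the standard definition used in the
physics-education-research literature, not something peculiar to the
posts cited above): the average normalized gain <g> is the ratio of a
class's actual average pre-to-post gain on a concept test to the
maximum gain that class could possibly have achieved:

   <g> = (<%post> - <%pre>) / (100 - <%pre>),

where <%pre> and <%post> are the class-average percentage scores on
the same test given before and after instruction. For example, a
class averaging 40% on the pretest and 70% on the posttest has
<g> = (70 - 40)/(100 - 40) = 0.50.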

As indicated in Hake (2004a), the late Arnold Arons (1983) contrasts
the "declarative" knowledge of facts measured by most course exams
with the "procedural" knowledge that the disciplinary education
researchers referenced in Hake (2004a) have shown to be lacking in
most graduates of introductory college science courses.

Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
<rrhake@earthlink.net>
<http://www.physics.indiana.edu/~hake>
<http://www.physics.indiana.edu/~sdi>


REFERENCES
Arons, A.B. 1983. "Achieving Wider Scientific Literacy," Daedalus,
Spring. Reprinted in both Arons (1990) and Arons (1997) as Chapter
12, "Achieving Wider Scientific Literacy." Arons wrote: "Researchers
in cognitive development describe two principle classes of knowledge:
figurative (or declarative) and operative (or PROCEDURAL).
'Declarative knowledge' consists of knowing 'facts' for example, that
the moon shines by reflected sunlight, that the earth and planets
revolve around the sun . . . . 'operative knowledge', on the other
hand, involves understanding the source of such declarative knowledge
(How do we know the moon shines by reflected sunlight?" See also Hake
(2003a).

Arons, A.B. 1990. "A Guide to Introductory Physics Teaching." Wiley;
reprinted with minor updates in Arons (1997).

Arons, A.B. 1997. "Teaching Introductory Physics." Wiley. Contains a
slightly updated version of Arons (1990), plus "Homework and Test
Questions for Introductory Physics Teaching" (Arons 1994), plus a new
monograph "Introduction to Classical Conservation Laws."

Bellina, J. 2004. "Re: student assessment/content knowledge,"
Chemed-L post of 16 Mar 2004 09:44:21-0500; online at
<http://mailer.uwf.edu/Lists/wa.exe?A2=ind0403&L=chemed-l&D=1&O=D&P=20998>.

Feldman, K.A. 1989. "The Association Between Student Ratings of
Specific Instructional Dimensions and Student Achievement: Refining
and Extending the Synthesis of Data from Multisection Validity
Studies," Research on Higher Education 30: 583.

Hake, R.R. 2001. "Pre/Post Paranoia," AERA-D/PhysLrnR post of 17 May
2001 16:01:56-0700; online at
<http://lists.asu.edu/cgi-bin/wa?A2=ind0105&L=aera-d&P=R19884>.

Hake, R.R. 2002. "Re: Problems with Student Evaluations: Is
Assessment the Remedy?" available in pdf form as ref. 18 at
<http://www.physics.indiana.edu/~hake> and as HTML at
<http://www.stu.ca/~hunt/hake.htm>.

Hake, R.R. 2003a. "The Arons-Advocated Method"; online as ref. 31 at
<http://www.physics.indiana.edu/~hake>. To be submitted to the
"American Journal of Physics."

Hake, R.R. 2003b. "Meta-analyses of <g> Values" post of 5 Sep 2003
17:14:26 -070 to ASSESS, EvalTalk, PhysLnR, and POD; online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0309&L=pod&P=R2439>.

Hake, R.R. 2004a. "Re: Measuring Content Knowledge", online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0403&L=pod&O=D&P=16472>.
Post of 14 Mar 2004 16:29:47-0800 to ASSESS, Biopi-L, Chemed-L,
EvalTalk, Physhare, Phys-L, PhysLnrR, POD, and STLHE-L.

Hake, R.R. 2004b. "Re: student assessment/content knowledge,"
Chemed-L post of 16 Mar 2004 17:43:18-0800; online at
<http://mailer.uwf.edu/Lists/wa.exe?A2=ind0403&L=chemed-l&D=1&O=D&P=21338>.

Hicks, B. 2004a. "Re: student assessment/content knowledge," Chemed-L
post of 16 Mar 2004 07:07:10-0700; online at
<http://mailer.uwf.edu/Lists/wa.exe?A2=ind0403&L=chemed-l&D=1&O=D&P=20866>.

Johnson, V.E. 2003. "Grade Inflation: A Crisis in College Education,"
Springer-Verlag, 2003."

McKeachie, W.J. 1987. "Instructional evaluation: Current issues and
possible improvements." Journal of Higher Education 58(3): 344-350.
Feldman (1989) pointed out that McKeachie (1987) "has recently
reminded educational researchers and practitioners that the
achievement tests assessing student learning in the sorts of studies
reviewed here. . . (e.g., those by Cohen (1981, 1986, 1987 - see Hake
(2002) for the references). . . typically measure lower-level
educational objectives such as memory of facts and definitions rather
than higher-level outcomes such as critical thinking and problem
solving . . .[he might have added conceptual understanding] . . .
that are usually taken as important in higher education."

Merrow, J. 2003. "Easy grading makes 'deep learning' more important,"
USA Today Editorial, 4 February; online at
<http://www.usatoday.com/news/opinion/editorials/2003-02-04-merrow_x.htm>.
Merrow wrote: "Duke University Professor Valen Johnson studied 42,000
grade reports and discovered easier grades in the "soft" sciences
such as cultural
anthropology, sociology, psychology, and communications. The hardest
A's were in the natural sciences, such as physics, and in advanced
math courses. The easiest department was music, with a mean grade of
3.69; the toughest was math, with a mean of 2.91.

Shavelson, R.J. & L. Huang 2003 "Responding Responsibly To the
Frenzy to Assess Learning in Higher Education," Change Magazine,
January/February; an article summary is online at
<http://pqasb.pqarchiver.com/change/>, search for "Shavelson." Why
doesn't the AAHE keep important articles freely available online for
the benefit of higher education?