
Re: Monster Classes



In his ASSESS post of 13 Aug 2003 15:13:04 -0700, titled "Re: Monster
Classes," Jerry Pritchard wrote (bracketed below between lines of
"PPPPP . . . ."):
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
Monster classes, like televised instruction, are only bad if you see
them as the most important component and/or the focal point of
student learning. I would argue that individual student study
outside of the classroom is where most learning takes place.

If we use large classes for efficiency in presenting and clarifying a
few critical concepts, for inspiration and motivation by having truly
meaningful things go on, and for pacing of material, you can then put
the burden of learning back on the students and ask them to study
individually, in small work groups, in activity-based settings, and
with text books and other written materials in hand.
. . . . . . . . . . . . . . . . . . . . .
In the end, we need to focus on the quality of the learning and
effectiveness of the instruction, not the mode of delivery.
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP

If, as Jerry states, most learning takes place outside the classroom,
then, at least for traditional introductory mechanics classes (those
with passive-student lectures and recipe labs), very little learning
appears to take place outside the classroom. In my survey (Hake
1998a,b; 2002a,b), 14 traditional (T) courses (total enrollment N =
2084) achieved an average normalized gain of <g>(T-ave) = 0.23 with
an sd (standard deviation) of 0.04. In sharp contrast, 48 Interactive
Engagement (IE) courses (N = 4458) achieved <g>(IE-ave) = 0.48
(sd = 0.14), almost two standard deviations of <g>(IE-ave) above that
of the traditional courses.
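
For list members who may not be familiar with the normalized gain, a
minimal sketch in Python of the definition used in Hake (1998a)
follows; the pre/post percentages are hypothetical numbers chosen
only to illustrate the arithmetic, not survey data:

# Average normalized gain <g>, as defined in Hake (1998a):
#   <g> = (%post - %pre) / (100 - %pre),
# i.e., the actual class-average gain divided by the maximum
# possible gain.

def normalized_gain(pre_pct, post_pct):
    """Class-average normalized gain from pre/post-test percentages."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# HYPOTHETICAL class averages, for illustration only:
g_trad = normalized_gain(28.0, 45.0)  # ~0.24, near <g>(T-ave) = 0.23
g_ie = normalized_gain(28.0, 63.0)    # ~0.49, near <g>(IE-ave) = 0.48

# Separation of the two survey means in units of the IE sd:
print((0.48 - 0.23) / 0.14)  # ~1.8, "almost two standard deviations"

The division by (100 - %pre) is what allows courses that start at
different pretest levels to be compared on a common footing.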

The same survey showed that IE-type "lectures" by Mazur using "Peer
Instruction" (Mazur 1997) at Harvard in halls occupied with up to 216
students yielded <g>(Mazur-ave) = 0.56. [See Table Ic of Hake
(1998b).]

But, as I attempted to explain in Hake (2003a), I suspect that when
the hall occupancy climbs to over 500, the degree of interactivity
[even if exemplary IE methods such as "Peer Instruction" and
"Just-In-Time Teaching" (Novak et al. 1999) are attempted] will
decline to a level characteristic of traditional lectures, resulting
in average normalized gains <g> of about 0.2.

IF such a test as I proposed in Hake (2003a) were to be undertaken
(don't hold your breath), then I think the low <g> = 0.2 results that
I anticipate would be hard for large-lecture apologists such as Jerry
to explain away.

And by the way, what evidence can Jerry cite for his apparent
conviction that monster lectures to 500 plus students can be
effective in promoting student learning? Pre/post testing might be
used, but unfortunately tests constructed by disciplinary experts and
widely recognized within the discipline as valid and consistently
reliable are almost unknown outside physics [and even within insular
sectors of physics, judging from the NRC's McCray et al. (2003)
report]. The landmark work of Halloun & Hestenes (1985a,b) in
constructing such tests is discussed in Hake (2003b).

Jerry might wish to cite Student Evaluation of Teaching (SET)
evidence for the salutary cognitive impact of monster classes, but,
in my view, such use of SET evidence, though common, is completely
unwarranted (Hake 2002c).

Or Jerry might cite faculty-generated course exams or course grades
as evidence of student learning. But as indicated in Hake (2002c):

HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
With regard to the problem of using course performance as a measure
of student achievement or learning, Peter Cohen's (1981) oft-quoted
meta-analysis of 41 studies on 68 separate multisection courses
purportedly showing that:

"the average correlation between an overall instructor rating and
student achievement was +0.43; the average correlation between an
overall course rating and student achievement was +0.47 . . . the
results . . . provide strong support for the validity of student
ratings as measures of teaching effectiveness"

was reviewed and reanalyzed by Feldman (1989), who pointed out that
McKeachie (1987):

"has recently reminded educational researchers and practitioners that
the achievement tests assessing student learning in the sorts of
studies reviewed here. . . (e.g., those by Cohen (1981, 1986, 1987).
. . typically measure lower-level educational objectives such as
memory of facts and definitions rather than higher-level outcomes
such as critical thinking and problem solving . . .[he might have
added conceptual understanding] . . . that are usually taken as
important in higher education."
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
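
To make concrete what correlations such as Cohen's +0.43 and +0.47
mean operationally: in a multisection validity study each section
contributes one pair (mean instructor rating, mean score on a common
exam), and the Pearson correlation is computed across sections. A
minimal sketch, with entirely invented section data (NOT Cohen's):

from statistics import mean
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation of paired section means."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# INVENTED data: one pair per section of a multisection course.
ratings    = [3.2, 3.8, 4.1, 4.5, 3.5, 4.0]        # mean rating, 1-5 scale
exam_means = [66.0, 70.0, 64.0, 73.0, 71.0, 68.0]  # mean exam score, %

print(round(pearson_r(ratings, exam_means), 2))  # 0.31 for these numbers

Note that even a genuinely positive correlation of this kind says
nothing about WHAT the common exam measures; that is precisely
McKeachie's point above.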

Unfortunately, it is much more difficult to measure higher-level
learning objectives such as "procedural", "schematic", and
"strategic" knowledge within knowledge domains [Shavelson (2003),
Chart 1] than the lower-level rote memorization of facts and
definitions reflected in most course exams and final grades
[McKeachie (1987), Johnson (2003), Merrow (2003)].

Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
<rrhake@earthlink.net>
<http://www.physics.indiana.edu/~hake>
<http://www.physics.indiana.edu/~sdi>

REFERENCES
Cohen, P.A. 1981. "Student Ratings of Instruction and Student
Achievement: A Meta-analysis of Multisection Validity Studies,"
Review of Educational Research 51: 281. For references to Cohen's
1986 and 1987 updates see Feldman (1989).

Feldman, K.A. 1989. "The Association Between Student Ratings of
Specific Instructional Dimensions and Student Achievement: Refining
and Extending the Synthesis of Data from Multisection Validity
Studies," Research on Higher Education 30: 583. 30: 583.

Hake, R.R. 1998a. "Interactive-engagement versus traditional methods: A
six-thousand-student survey of mechanics test data for introductory
physics courses," Am. J. Phys. 66: 64-74; online as ref. 24 at
<http://www.physics.indiana.edu/~hake>.

Hake, R.R. 1998b. "Interactive-engagement methods in introductory
mechanics courses," online as ref. 25 at
<http://www.physics.indiana.edu/~hake>. SUBMITTED on 6/19/98 to the
"Physics Education Research Supplement to AJP"(PERS). In this SADLY
UNPUBLISHED (Physics Education Research has NO archival journal!)
crucial companion paper to Hake (1998a): average pre/post test
scores, standard deviations, instructional methods, materials used,
institutions, and instructors for each of the survey courses of Hake
(1998a) are tabulated and referenced. In addition, the paper
includes: (a) case histories for the seven IE courses of Hake (1998a)
whose effectiveness as gauged by pre-to-post test gains was close to
those of T courses, (b) advice for implementing IE methods, and (c)
suggestions for further research.

Hake, R.R. 2002a. "Lessons from the physics education reform effort."
Conservation Ecology 5(2): 28; online at
<http://www.consecol.org/vol5/iss2/art28>. "Conservation Ecology," is
a FREE "peer-reviewed journal of integrative science and fundamental
policy research" with about 11,000 subscribers in about 108 countries.

Hake, R.R. 2002b. "Assessment of Physics Teaching Methods,"
Proceedings of the UNESCO-ASPEN Workshop on Active Learning in
Physics, Univ. of Peradeniya, Sri Lanka, 2-4 Dec. 2002; also online
as ref. 29 at <http://www.physics.indiana.edu/~hake/>.

Hake, R.R. 2002c. "Re: Problems with Student Evaluations: Is
Assessment the Remedy?" post of 25 Apr 2002 16:54:24-0700 to AERA-D,
ASSESS, EvalTalk, Phys-L, PhysLrnR, POD, and STLHE-L; online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0204&L=pod&P=R14535>.
Slightly edited and improved on 16 November 2002 and available in pdf
form as ref. 18 at <http://www.physics.indiana.edu/~hake> and as HTML
at <http://www.stu.ca/~hunt/hake.htm>.

Hake, R.R. 2003a. "Re: Monster Classes" post of 13 Aug 2003 12:53:22
-0700 to Biopi-L, Chemed-L, EvalTalk, Phys-L, PhysLrnR, POD, and
STLHE-L; online at
<http://lists.nau.edu/cgi-bin/wa?A2=ind0308&L=phys-l&O=D&P=4290>.
This post was later sent to AERA-D, ASSESS, Biolab, and to the NRC's
CUSE committee.

Hake, R.R. 2003b. "Re: Designing Pretests," post of 31 Jul 2003
13:38:21-0700 to ASSESS, Biopi-L, EvalTalk, PhysLrnR, and POD;
online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0307&L=pod&O=D&P=22283>.
Later sent to AERA-D, STLHE-L, Phys-L, Physhare, AP-physics, and
Biolab.

Halloun, I. & D. Hestenes. 1985a. "The initial knowledge state of
college physics students." Am. J. Phys. 53:1043-1055; online at
<http://modeling.asu.edu/R&E/Research.html>. Contains the "Mechanics
Diagnostic" test. This landmark work is NOT referenced in McCray et
al. (2003).

Halloun, I. & D. Hestenes. 1985b. "Common sense concepts about
motion." Am. J. Phys. 53:1056-1065; online at
<http://modeling.asu.edu/R&E/Research.html>.

Johnson, V.E. 2003. "Grade Inflation: A Crisis in College Education."
Springer-Verlag. See also Merrow (2003).

Mazur, E. 1997. "Peer Instruction: A User's Manual." Prentice Hall;
online at <http://galileo.harvard.edu/>.

McCray, R.A., R.L. DeHaan, and J.A. Schuck, eds. 2003. "Improving
Undergraduate Instruction in Science, Technology, Engineering, and
Mathematics: Report of a Workshop," Committee on Undergraduate STEM
Instruction, National Research Council, National Academy Press;
online at <http://www.nap.edu/catalog/10711.html>.

McKeachie, W.J. 1987. "Instructional evaluation: Current issues and
possible improvements." Journal of Higher Education 58(3): 344-350.

Merrow, J. 2003. "Easy grading makes 'deep learning' more important,"
USA Today Editorial, 4 February; online at
<http://www.usatoday.com/news/opinion/editorials/2003-02-04-merrow_x.htm>:
Duke University Professor Valen Johnson studied 42,000 grade reports
and discovered easier grades in the "soft" sciences such as cultural
anthropology, sociology, psychology and communications. The hardest
A's were in the natural sciences, such as physics, and in advanced
math courses. The easiest department was music, with a mean grade of
3.69; the toughest was math, with a mean of 2.91.

Novak, G., E. Patterson, A. Gavrin, and W. Christian. 1999.
"Just-in-Time Teaching: Blending Active Learning and Web Technology."
Prentice-Hall; for an overview see
<http://webphysics.iupui.edu/jitt/jitt.html>; for implementation
information see <http://galileo.harvard.edu/galileo/sgm/jitt/>.

Shavelson, R.J. 2003. "Responding Responsibly to the Frenzy to Assess
Learning in Higher Education," Change Magazine, January/February;
online at <http://www.aahe.org/change/>.