
Re: Normalized Gain (was Inquiry method and motivation)



In his Math-Learn post of Nov 24, 2003 10:40 titled "Re: Normalized
Gain (was Inquiry method and motivation)" "Chris" of
<Cmpalmer2@aol.com> once again asks three good questions:

1111111111111111111111111111111111111111111111111
1a. "If the pretest scores are relatively close together, then are
you suggesting that there is no point in comparing individual
results?"

I assume Chris is primarily interested in using pre/post testing as
formative evaluation of his math classes from year to year to gauge
the effects of various factors [e.g., pedagogical methods, texts,
time apportionment, average student characteristics] on course
effectiveness in promoting student learning. If that's the case then
the question he should be asking is:

1b. "If class average pretest scores <%pre> are relatively close
together, then are you suggesting that there is no point in comparing
class average normalized gains <g>, where

<g> = Gain / Gain(max)

<g> = (<%post> - <%pre>) / (100% - <%pre>)

where angle brackets <. . . .> indicate the class average (preferably
only for students who have taken both the pre and post tests)."

The answer to question "1b" is "NO."

Even if the average pretest scores <%pre> from year to year remain
relatively close together, it is still important to track and compare
the normalized gains <g>. One could argue that if the <%pre>'s remain
nearly the same then it's only necessary to compare the actual gains
(<%post> - <%pre>). But then no comparison is possible with
literature values of <g> for other physics courses [for reviews see
Hake (2002a,b)].
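For concreteness, the class-average <g> defined above can be computed in a few lines. Here is a minimal sketch; the function name and scores are hypothetical, chosen only to illustrate the formula:

```python
# Minimal sketch (hypothetical data): class-average normalized gain
# <g> = (<%post> - <%pre>) / (100% - <%pre>), computed from matched
# lists of students who took BOTH the pretest and the post-test.

def class_average_normalized_gain(pre_scores, post_scores):
    """pre_scores, post_scores: matched lists of percent scores (0-100)."""
    if len(pre_scores) != len(post_scores) or not pre_scores:
        raise ValueError("need matched, non-empty pre/post score lists")
    avg_pre = sum(pre_scores) / len(pre_scores)
    avg_post = sum(post_scores) / len(post_scores)
    return (avg_post - avg_pre) / (100.0 - avg_pre)

# Hypothetical class: <%pre> = 30%, <%post> = 65%
pre = [25.0, 30.0, 35.0]
post = [60.0, 65.0, 70.0]
print(class_average_normalized_gain(pre, post))  # (65-30)/(100-30) = 0.5
```

Note that <g> is computed from the class AVERAGES, not by averaging the individual students' g's; the two can differ.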

222222222222222222222222222222222222222
2. "If there is a wide variation (How wide is wide?) in pretest
scores, then how does one go about showing 'that the correlation of
individual student g's with their pretest scores is relatively low'?"

As indicated above, I assume Chris is interested in the class average
normalized gain <g> and NOT in individual student gains g. IF that's
the case then he should be interested in whether or not <g>'s from
year to year correlate with class average pretest scores <%pre>,
especially if there are wide variations (say greater than 10%) in
<%pre>.

The correlation of <g>'s with <%pre>'s can be determined by standard
statistical procedures [see, e.g., Slavin (1992)] that can be quickly
done with, for example, Microsoft's "Excel" spreadsheet, or using
Aaron Titus's (2003) "Assessment Analysis".

As indicated in Hake (2003b): a significant POSITIVE correlation
would suggest that the instruction tends to favor students who have
MORE prior knowledge of the subject as judged by the pretest score
("Matthew effect") [Matthew (25 ± 25 A.D.)]; a significant NEGATIVE
correlation would suggest that the instruction favors students who
have LESS prior knowledge of the subject as judged by the pretest
score ("anti-Matthew effect"); and an insignificant correlation would
suggest that the instruction is at about the right level for students
who have an average prior knowledge of the subject as judged by the
pretest score.
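The year-to-year correlation just described can be sketched as follows. The data are hypothetical, and the ±0.5 cutoffs are illustrative only, not significance tests; a proper analysis of significance would follow standard procedures such as those in Slavin (1992):

```python
# Minimal sketch (hypothetical year-by-year data): Pearson correlation of
# class-average normalized gains <g> with class-average pretest scores
# <%pre>. The sign of r is read as in the text: positive suggests a
# "Matthew effect", negative an "anti-Matthew effect", near zero that the
# instruction is pitched about right for the average-preparation student.

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical: five years of class averages
avg_pre = [28.0, 35.0, 31.0, 40.0, 33.0]   # <%pre> per year
avg_g   = [0.45, 0.52, 0.48, 0.55, 0.50]   # <g> per year

r = pearson_r(avg_pre, avg_g)
# Illustrative cutoffs only -- NOT tests of statistical significance.
if r > 0.5:
    print(f"r = {r:.2f}: possible Matthew effect (favors better-prepared students)")
elif r < -0.5:
    print(f"r = {r:.2f}: possible anti-Matthew effect")
else:
    print(f"r = {r:.2f}: no strong <g>-vs-<%pre> association")
```

The same computation is what a spreadsheet's built-in correlation function would return for the two columns of class averages.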

However, as explained in Hake (2002b,c) some teachers may be
interested in single student g's in order to examine possible
correlation of g with e.g., gender, math proficiency, spatial
visualization ability, scientific reasoning skills, physics aptitude,
personality type, motivation, socio-economic level, ethnicity,
completion of other courses, IQ, & GPA. Work of this sort has been
reported by Hake et al. (1994), Hake (2002c), and Meltzer (2002a,b).
One of the goals of such effort is to discover student-profile
characteristics or diagnostic tests that might alert teachers to
potential low-g students. If such were known then corrective actions
might be taken early in the course so as to improve the overall
effectiveness of the course.
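For teachers who do want single-student g's, a minimal sketch follows (hypothetical scores). One practical point it makes explicit: g is undefined for a student whose pretest score is 100%, a case the class-average <g> normally avoids:

```python
# Minimal sketch (hypothetical data): single-student normalized gains g,
# of the kind correlated with student-profile variables in work such as
# Hake (2002c) and Meltzer (2002a,b). Note that g is undefined for a
# student with %pre = 100 (no room to gain); here that case returns None
# so such students can be set aside before any correlation analysis.

def single_student_g(pre, post):
    """pre, post: one student's percent scores (0-100)."""
    if pre >= 100.0:
        return None  # g undefined at the pretest ceiling
    return (post - pre) / (100.0 - pre)

students = [(20.0, 60.0), (50.0, 80.0), (100.0, 100.0)]
gains = [single_student_g(p, q) for p, q in students]
print(gains)  # [0.5, 0.6, None]
```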

John Texas Clement (2000, 2001) has reported a positive correlation
between a pretest measure of reasoning ability [probably Lawson
(1995)] and single-student normalized learning gains in a high-school
physics course. Unfortunately this potentially important work has
not, to my knowledge, been published, probably because John is one of
those grossly overworked, underpaid, and under-appreciated
high-school physics teachers.


333333333333333333333333333333333333333333
3. "What about the situation with respect to questions that were
answered correctly on the pre-test, but incorrectly on the post-test?
Again, do we simply ignore such data, because we are working with a
class average rather than individual scores, and what we are
calculating is merely a tool that has been shown to have a
correlation to 'effective treatment'?"

The answer is "YES." The fact that a student answers a question
correctly on the pretest but incorrectly on the post-test may be of
interest in diagnosing the learning problems of that student, but has
little bearing on the class average <g> that I assume is of interest
to Chris.

As indicated in an earlier post (Hake 2003a), Chris is handicapped by
the failure of mathematicians to develop valid and consistently
reliable tests of mathematical understanding and to employ pre/post
testing with such tests to gauge the need for and effects of reform
teaching methods. Instead, most mathematicians are content to engage
in math-wars rhetoric that seems to reflect primarily different value
systems (Sowder 1998). In the meantime, under the "direct instruction"
mantra [see, e.g., Carnine (2000)], student understanding (as opposed
to rote memorization) of math [and science - see, e.g., CCCSC (2003)]
in California (and most other states) continues to deteriorate.

Richard Hake, Emeritus Professor of Physics, Indiana University
24245 Hatteras Street, Woodland Hills, CA 91367
<rrhake@earthlink.net>
<http://www.physics.indiana.edu/~hake>
<http://www.physics.indiana.edu/~sdi>

". . .I will look primarily at our traditions and practices of early
schooling through the age of twelve or so. There is little to come
after, whether of joys or miseries, that is not prefigured in these
years."
David Hawkins in "The Roots of Literacy" p. 3.


REFERENCES
Carnine, D. 2000. "Why Education Experts Resist Effective Practices
(And What It Would Take to Make Education More Like Medicine),"
online as a 52kB pdf at
<http://www.edexcellence.net/foundation/global/found.cfm?author=72&keyword=&submit=Search>.
The Fordham Foundation's Chester Finn introduces Carnine's paper by
stating that: "After describing assorted hijinks in math and reading
instruction, Doug devotes considerable space to examining what
educators did with the results of "Project Follow Through," one of
the largest education experiments ever undertaken. This study
compared constructivist education models with those based on direct
instruction. One might have expected that, when the results showed
that direct instruction models produced better outcomes, these models
would have been embraced by the profession. Instead, many education
experts discouraged their use."

CCCSC. 2003. "Criteria For Evaluating K-8 Science Instructional
Materials In Preparation for the 2006 Adoption," California
Curriculum Commission Science Committee (CCCSC). Outlined from
CCCSC's serial listing by R.R. Hake on 10 November 2003; online as
ref. 33 at <http://www.physics.indiana.edu/~hake>.

Clement, J.M. 2000. "Using physics to raise students' thinking
skills," AAPT Announcer 30(4): 81.

Clement, J.M. 2001. "Techniques used to raise thinking skills in high
school," AAPT Announcer 31(2): 82.

Hake, R.R., R. Wakeland, A. Bhattacharyya, and R. Sirochman. 1994.
"Assessment of individual student performance in an introductory
mechanics course," AAPT Announcer 24(4): 76.

Hake, R.R. 2002a. "Lessons from the physics education reform effort,"
Conservation Ecology 5(2): 28; online at
<http://www.consecol.org/vol5/iss2/art28>. Conservation Ecology is a
free "peer-reviewed journal of integrative science and fundamental
policy research" with about 11,000 subscribers in about 108 countries.

Hake, R.R. 2002b. "Assessment of Physics Teaching Methods,"
Proceedings of the UNESCO-ASPEN Workshop on Active Learning in
Physics, Univ. of Peradeniya, Sri Lanka, 2-4 Dec. 2002; also online
as ref. 29 at
<http://www.physics.indiana.edu/~hake/>.

Hake, R.R. 2002c. "Relationship of Individual Student Normalized
Learning Gains in Mechanics with Gender, High-School Physics, and
Pretest Scores on Mathematics and Spatial Visualization," submitted
to the Physics Education Research Conference; Boise, Idaho; August
2002; online as ref. 22 at <http://www.physics.indiana.edu/~hake>.

Hake, R.R. 2003a. "Re: Normalized Gain (was Inquiry method and
motivation)," post of 24 Nov 2003 17:12:05-0800 to EvalTalk,
Math-Learn, PhysLrnR, & POD; online at
<http://listserv.nd.edu/cgi-bin/wa?A2=ind0311&L=pod&O=D&P=18573>.
Later sent to AERA-D, ASSESS, and Biopi-L; and in abstract form to
Chemed-L, Phys-L, Physhare, STLHE-L, edstat, FYA, and AP-Physics.

Lawson, A.E. 1995. "Science Teaching and the Development of
Thinking." Wadsworth. Appendix F.

Matthew. 25 ± 25 A.D. Later published in "First Gospel of the
New Testament" (Gutenberg edition): "to him that hath shall be given,
but from him that hath not shall be taken away even that which he
hath." [Paul Camp - please supply further bibliographic details.]

Meltzer, D.E. 2002a. "The relationship between mathematics
preparation and conceptual learning gains in physics: A possible
'hidden variable' in diagnostic pretest scores," Am. J. Phys. 70(12):
1259-1268; also online as article #7 at
<http://www.physics.iastate.edu/per/articles/index.html>.

Meltzer, D.E. 2002b. "Normalized Learning Gain: A Key Measure of
Student Learning," Addendum to Meltzer (2002a); online as article #7
(addendum) at
<http://www.physics.iastate.edu/per/articles/index.html>.

Slavin, R.E. 1992. "Research Methods in Education." Allyn and Bacon, 2nd ed.

Sowder, J.T. 1998. "What are the 'Math Wars' in California All About?
Reasons and Perspectives" Phi Beta Kappa Invited Lecture; online at
<http://mathematicallysane.com/analysis/mathwars.asp>: "I will
discuss today the ways that I see these two sides differing: They
hold different beliefs about what mathematics is, different beliefs
about how mathematics is learned, different understandings of what it
means to know mathematics, and different ways of interpreting what
research has to tell us on these issues. In a nutshell, THEY
REPRESENT DIFFERENT VALUE SYSTEMS. I believe that rational,
reflective discussion and exploration of these issues can bring the
two sides closer together. Thus, although the two sides may not reach
total agreement, they can come to understand the issues better and
find ways to compromise. I am told that California schools educate
one-seventh of the students in this country. THERE IS TOO MUCH AT
STAKE TO CONTINUE THE FIGHTING, to take a chance on sacrificing the
mathematical education of our children by not reaching some agreement
on what that education should be." (My CAPS.)

Titus, A. 2003. "Assessment Analysis"; online at
<http://www.highpoint.edu/~atitus/aa/index.html>: "a web-based
program (CGI script) that helps teachers analyze test results. . . .
STATISTICAL ANALYSIS [of] pre and post test data [t test, normalized
gain (Individual and Class), Effect Size, Max, Min, Mean, Median,
Standard Deviation]; CORRELATION ANALYSIS; and FACTOR ANALYSIS."