
Re: science for all?



The effect corresponds to an effect size of about 1.0: the mean of the
distribution was shifted by an amount equal to the standard deviation of
the original distribution. For more information I would suggest reading the
cited paper. When I looked at the actual curve, I noticed that the final
distribution seemed to show that most concrete-level students were moved to
higher thinking levels. An educational effect this large is considered to
be VERY large. The particular evaluation tests not knowledge but the
thinking ability of the students. I have data spanning several years for
students at our school, and this evaluation seldom shows a decrease for
individual students and never shows a decrease for the average over a
group. The questions involving proportional thinking do show some
backsliding, but as many students improve as backslide. I have seen only
one student who showed any serious negative gain, and she was a foreign
student with a language problem. I have also seen many students show
dramatic gains.

A one-standard-deviation change in the group mean does not imply a 30%
chance that it was due to random fluctuation. The particular study covered
about 600 students, so the standard error of the mean is quite small:
SD / sqrt(600), or roughly SD / 24.
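As a quick sanity check (a minimal sketch, assuming a normalized pre-test
SD of 1.0 and the n of roughly 600 reported above), the standard error of
the mean and the size of a 1-SD shift measured in standard errors can be
computed directly:

```python
import math

# Assumed values from the discussion, not from the cited paper itself:
sd = 1.0      # standard deviation of the pre-test distribution (normalized)
n = 600       # approximate number of students in the cited study
shift = 1.0   # observed shift of the mean, in SD units (effect size ~ 1.0)

se = sd / math.sqrt(n)   # standard error of the mean
z = shift / se           # shift expressed in standard errors

print(f"standard error of the mean: {se:.4f} SD")
print(f"shift in units of standard error: {z:.1f}")
```

With these assumptions the shift is on the order of 24 standard errors,
which is why a 1-SD change in a group mean of this size is nothing like a
30% random fluctuation.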

I would like to point out that improvement in content understanding can be
extremely stable, even years later, when the correct teaching techniques
are used. FCI results have been found to be very stable up to three years
after an interactive-engagement style course. See G. Francis, J. Adams, E.
Noonan (1998), "Do They Stay Fixed?", Physics Teacher, 36, 488-490.
However, I will also agree that when students rely on pure memorization,
content material decays within about two weeks. Priscilla Laws pointed out
at a workshop that they observe rising gains on evaluations for about two
weeks after the concepts were presented using the type of labs that they
have found to be effective. This is consistent with Shayer & Adey's theory
and observations ("Really Raising Standards"). Incidentally, my own testing
shows that after a two-week Christmas break and several weeks of a new
topic, FMCE results show only a small drop. I have also tried telling
students the answers to a question or two the day before an evaluation, and
many still miss it. The non-Newtonian distractors are often too hard to
overcome with a little prepping. I gave them the answers as part of a test
review, but did not tell them that the answer I gave was exactly the answer
to a question on the test.

John M. Clement
Houston, TX



1. How did 1 SD get to be "very large"? And 1 SD of what? Usually in
educational circles people seem to quote the change in the mean of a
group. 1 SD means about a 30% chance that the change was a random
fluctuation. The physicists I know usually insist on 4 SD as the
threshold for evidence of a new effect. Also, these statistical measures
can be utterly misleading if one does not know the distributions.

2. Much more interesting than the improvement at the end of a course
would be the results of testing after a summer, or even a semester, break.
My experience is that the apparent "improvements" dissipate very rapidly.

Regards,
Jack


On Sun, 23 Dec 2001, John Clement wrote (in part):
_________________________________________snip___________________________

Despite the fact that many students enter testing at the concrete level, it
is possible to structure the course so that they improve their thinking.
This is now being routinely done in the intro. biology course at AzState.
Anton Lawson has been a prolific publisher of papers on the subject, and
has been pushing this idea for years. One of the latest papers published by
one of Lawson's collaborators mentioned that the course at AzState pushed
up student thinking by about 1 SD, which is considered to be a very large
increase. The article is Wycoff, "Changing the Culture of Undergraduate
Science Teaching", Jour. of Coll. Sci. Teach. XXX #5, pp. 306-312.

I can tell you the statistics for my course. About 30% of the incoming
students are at the concrete level, but less than 15% are still at that
level when they leave. Likewise, only about 15% test as formal thinkers
coming in, and about 30% test as formal thinkers going out. One might
presume that courses which have a large number of physics, engineering, or
pre-med students at major universities would have a high proportion of
formal thinkers; I would guess 40-50% or more. At community colleges the
number would be more like 20%, and for physics-for-poets courses or courses
with a large number of elementary-ed majors the number may be below 10%.
This is purely conjecture, because to my knowledge this sort of survey has
not been systematically carried out. About 30% of the students entering the
AzState general-studies bio course tested at the formal level, but the vast
majority were at that level at the end of the course (if I read their graph
correctly in the above-mentioned paper). If we can push thinking levels up,
and as a result have more success, shouldn't we do this?

____________________________________________________________________________

--
"But as much as I love and respect you, I will beat you and I will kill
you, because that is what I must do. Tonight it is only you and me, fish.
It is your strength against my intelligence. It is a veritable potpourri
of metaphor, every nuance of which is fraught with meaning."
Greg Nagan from "The Old Man and the Sea" in
<The 5-MINUTE ILIAD and Other Classics>