
Re: [Phys-l] Inquiry



Let us take one person's thought that reading the book provides the student
with greater background detail. While it is true that the detail is there,
does it actually contribute to greater understanding, or does it reinforce
the student's idea that this is confusing stuff?

We do have evidence that increased FCI scores correlate with better ability
to do problems, so the FCI is a partial, though not direct, indicator of
problem-solving ability. Since most college courses emphasize problem
solving, the deduction that students will do better is reasonable, but not
proven. Until a reasonable number of students enter college from IE courses
the evidence will be thin. But Mazur has published a paper showing that IE
pedagogy brings women up to parity with the men on the FCI, and that both
achieve much higher gain. And then there is the APS abstract of the
experiment which found that an IE course brought women up to parity with
the men on the MCAT, while the men stayed the same. So there is evidence
beyond the FCI: the same methods that raise the FCI also produce
improvement on the MCAT.

The fact that most students never crack open a book indicates that the book
is not necessary for passing the course, but does it improve student
understanding? I suspect that it does help a few students, but that the
texts are written in such a way that they cannot help most students.
Incidentally, non-US books are much slimmer and more concise in the
countries that show higher test scores. Less is more?

One way of doing an experiment ethically is to give one group of students
extra work on things that are known to work. These would be PER-designed
activities for which there are published articles comparing gain with and
without them. Then the other group is given reading activities and training
in reading. Each group gets equal treatment time, but different treatments.
Feuerstein did this with a residential school, though the equal-time
treatments were designed by the teachers. As a practical necessity, one
could do the two different methods in different years, but use pretests to
be assured that the students had comparable incoming knowledge. This gets
around the student communication problem. Notice that what you do for the
control has to be equal in time and considered to be a good strategy. The
only way you have of testing is to use the FCI, FMCE, or MBT, but for
comparison the FCI and FMCE are the only widely used tests in mechanics.
This is how many experiments are actually done: the control group is often
the students in the years before the experiment, but one has pretest scores
to confirm that the populations are comparable. Blind testing is, for all
practical purposes, impossible in most educational situations, but is much
easier in medical studies.
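To put the gain comparison in concrete terms, here is a minimal sketch, in
Python, of how such a two-cohort comparison is commonly quantified with
Hake's class-average normalized gain, <g> = (post - pre)/(100 - pre),
computed from pretest and posttest FCI percentages. The cohort labels and
all of the scores below are invented for illustration; they do not come
from any actual study.

# Illustrative only: compare normalized gain for two hypothetical cohorts
# (e.g., an IE year vs. a reading-strategies year), using class-average
# FCI percentage scores. All numbers are made up for the example.

def normalized_gain(pre_avg, post_avg):
    """Hake's class-average normalized gain <g> = (post - pre)/(100 - pre),
    with scores given as percentages of the maximum score."""
    return (post_avg - pre_avg) / (100.0 - pre_avg)

def class_average(scores):
    return sum(scores) / len(scores)

# Hypothetical pre/post FCI percentages for two different years (cohorts).
ie_pre, ie_post = [38, 42, 35, 50, 41], [68, 75, 60, 82, 70]
read_pre, read_post = [39, 44, 36, 48, 40], [48, 55, 45, 60, 50]

g_ie = normalized_gain(class_average(ie_pre), class_average(ie_post))
g_read = normalized_gain(class_average(read_pre), class_average(read_post))

# Comparable pretest averages are what justify comparing the cohorts at
# all; the normalized gain then measures how much of the possible
# improvement each treatment actually achieved.
print(f"IE cohort:      pre {class_average(ie_pre):.1f}%, g = {g_ie:.2f}")
print(f"Reading cohort: pre {class_average(read_pre):.1f}%, g = {g_read:.2f}")

Checking the pretest averages first is exactly the comparable-populations
check described above; only then is comparing the normalized gains of the
two treatments meaningful.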

The question of texts is a very complicated issue. But my position is that
most of the reading strategies I have seen do not measure up to the IE
methods, so if reading strategies take time away from IE the students will
do worse. Shayer and Adey, in their Thinking Science, did have the students
do some reading, but the emphasis was on a learning-cycle approach, and no
textbook was used. The students showed a 15% delayed improvement in reading
ability on a standardized test, along with about 20% in both math and
science. Nowhere does the TS program teach reading; rather, the students
have to do some reading as part of the in-class activities and then for the
homework. The final problem-solving activity could be done either in class
or at home according to the teacher's preference.

The other difficulty that is known about texts is the issue of how they are
written. I have said it is done by the seat of the pants, and that is true
because authors draw on their experiences and what they think students need,
without regard to published research. This means that most books tend to
proceed in a similar fashion, as if each author had copied the others. Until now
there has been no research to guide textbook writing, and even now the
research on physics texts is probably fairly slim. There is firm evidence
that just rearranging the text to the style of a learning cycle does produce
dramatic improvement in comprehension. Refutational text helps some with
misconceptions, but the research I have seen shows small gains compared to
IE, and students don't like refutational text. Then there is the way texts
are gussied up to appeal to parents and teachers without regard to how this
may confuse students. There is a lot of information about how the texts are
misleading and have outright misconceptions embedded in them.

Competition between textbook publishers is generally about convincing
teachers to use the books, and not whether the books actually work better.
There is always a long list of people who endorse the books, but there is
generally no published research. Choosing a text is like choosing a medical
doctor or a mechanic. You really have no hard information to draw on. So
you have to use more subtle clues: is the shop clean, have you heard good
or bad things, do they appear honest, what are the diplomas on the wall?
By contrast, one can get objective reviews of products from various
magazines. Consumer Reports is very objective and accurate, but they can't
be encyclopedic. Now you also have individually submitted reviews, which
provide good information but have to be read carefully and weighed. It has
been my experience that in HS texts are usually sold on the ancillary
material they provide, rather than on whether they work better. Meanwhile
actual research-based texts have a small share of the market. IPS
(Introductory Physical Science) is research based and has a fairly small
market share, but it is the only book in the lower-HS/middle-school range
that is completely accurate. The one HS book that I know of that has a very
good research base behind it is Minds on Physics. The topics are extremely
well connected, and they use the standard research-based pedagogy
throughout. The language is selected to challenge students a bit, and there
is evidence that it promotes expert-like problem solving. Since it has good
depth, it cannot cover every single state-mandated topic explicitly. It
does cover most of them as parts of other units, but it does not break each
thing down into a little nugget the way all the other texts do.

In the end I am extremely skeptical that just getting students to read the
texts will have a huge effect on learning. I am also convinced that those
students who do read the text probably already have higher thinking skills
and have already built better understanding before they open the book, so a
simple correlation between frequency of reading and understanding may be
misleading. I am also convinced that the texts are in general written in
such a way that students cannot learn well from them. Consequently I would
propose using other tactics to improve the students, or having extremely
well integrated, short, research-based texts. And specifically, ALL the
research points in the direction that inquiry raises the thinking level of
students, while conventional education has a much smaller, or no, effect.
Once the thinking level has been raised, the texts may do a better job. At
one time I would have thought otherwise, but after looking at the research
and at various texts, I have changed my paradigm.

But I do not have access to all of the research, so any published results
that refute my paradigm are welcome.

John M. Clement
Houston, TX

Apologies for the egregious grammar error:
________________________________________

I don't think it would help to give the FCI/FMCE before and after to both
groups. My conjecture was not about those tests. I don't know if there
is research that shows that performance on those tests correlates with
success at learning physics and engineering in college and beyond. I
suppose [their - Ack!] there must be. Otherwise, it's just conjecture
too.