
Re: [Phys-l] Sample problems and derivations...




At the recent AAPT meeting I heard it mentioned that both doing sample
problems on the board and showing derivations in a similar manner have a
minimal impact on student learning. This happens to fit with my
preconceptions about teaching, so of course I am not here to challenge
this idea. ;-)

So far so good!

Mostly I am hoping to find evidence for this in a PER article somewhere.
If anyone on here knows of such a publication I would be grateful for a
citation.

I'd say quit while you're ahead. The problem is that you can find
evidence for almost anything in the literature.

Of course this is nonsense. Do not quit; look through the literature for
both positive and negative evidence. You can always find articles which
purport to present evidence, but articles with real evidence based on data
are a different thing. To understand the literature it is necessary to read
it in depth, not only in AJP and TPT but also in JRST.

You don't have to look far to find evidence for the idea that doing problems
and proofs during lecture has little effect on learning. Mazur has given up
doing these things entirely, yet he has shown that his students'
problem-solving ability rose along with their FCI scores. Redish has shown,
using the MPEX, that many students treat proofs merely as license to use the
equations rather than as a means of connecting concepts. Modeling has
likewise given up teacher problem solving and most proofs, with a few
exceptions during the wrap-up of a lab, and its FCI scores are higher.
Indeed, most PER-inspired curricula have gotten rid of lecture proofs and
problem solving, with a resulting increase in FCI gain. The Hellers do use a
few sample problems, but in general problem solving is the responsibility of
the students, and their students show a higher ability to solve complex,
rich-context problems.

Of course this does not mean that proofs and teacher problem solving have no
effect. They may have a positive effect on a few exceptional students, but
the evidence is that PER methods have a much bigger effect on all students.
In practice, teacher problem solving promotes memorization rather than
understanding for most students, and it casts the teacher as the sole
authority. The result is that peer interaction is reduced, which is known to
lower learning. Indeed, teacher problem solving may actually be a
destructive practice. Confirmation may be seen in Redish's finding that
conventional lecture courses drive students toward more novice-like
attitudes, while studio courses promote expert-like attitudes.

Schwartz has shown that students should not be taught algorithms at first,
and the learning cycle, which produces higher thinking scores on the Lawson
test, puts exploration first. Algorithms can be taught after students have
struggled with problems.


I'm not saying the PER literature is particularly bad relative to
the literature in other fields I can think of, but in absolute terms
it's bad. I'm going to pick on it today because it is relevant to
the question that was asked, and to the topic of this list.

Just so you know where I'm coming from, I worked in the "neural
networks" field at a time when 90% of the published papers in the
field were nonsense. People sneered at me for being part of that
community. But I didn't take it personally. I was new to the field,
but I was able to figure out which 10% of the literature was worth
paying attention to. The rest was a big waste of trees, but it
didn't interfere with my work.

So, if you work in the PER field, please don't be too insulted by
what I am saying here.


This is not an isolated incident. Every so often I go read the PER
literature, and what I find -- almost always -- is page after page of
stuff that cannot possibly be true. If the PER literature told me
that the sun rises in the east, I wouldn't necessarily believe it,
especially if I had any first-hand reason to doubt it (which in fact
I do ... and there's a funny story about that, but it can wait for
another day).

OK, put up: name a specific article and which things cannot be true! I
would like to see which articles can be criticized this way, especially
articles written by major researchers.


The field is supposed to be about physics, pedagogy, and critical
thinking ... but what I see in the literature is mostly wrong physics,
bad pedagogy, and an astounding lack of critical thinking.

OK, where is the bad physics in an article? Perhaps the questions in the
FCI or FMCE would be the first places to start. As to bad pedagogy, that
can only be determined by experiment. Each teacher has a paradigm of good
pedagogy, but that paradigm is usually based on anecdotal evidence. Often
when you look at experiments you find that what you think works, doesn't.
Whenever you tell someone that lectures are a very poor way of teaching,
many respond with personal anecdotes rather than evidence. The plural of
anecdote is not data, and evidence must be based on data.

As to reading the PER literature only every now and then: we now have a
great depth of literature, going back to the 70s, which shows substantial
agreement, and to my knowledge there are practically no counter-papers to
the papers by the major authors. For example, can one achieve high FCI gain
with conventional lecture, verification labs, and teacher problem solving?
Nobody has demonstrated this in the literature. The highest target for FCI
gain is now 90%, as reported for a Modeling class (go to
http://modeling.asu.edu/ to find the evidence).

I would trust the intuition of one actual classroom teacher over any
ten PER articles.

But the average physics classroom teacher achieves FCI or FMCE gain in the
range of 0 to 25%, with an average of 10% according to an indirect Hake
measurement. The average PER practitioner generally achieves above 30%, and
up to 90%.
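
For readers unfamiliar with the convention, and assuming "gain" here means
Hake's normalized gain (which the mention of a Hake measurement suggests),
the figures above are computed as

\[
\langle g \rangle = \frac{\langle \%\text{post} \rangle - \langle \%\text{pre} \rangle}{100 - \langle \%\text{pre} \rangle}
\]

so a hypothetical class that averages 30% on the FCI pretest and 65% on the
posttest has \(\langle g \rangle = (65 - 30)/(100 - 30) \approx 0.5\), i.e. a
50% gain in the sense used above.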


The intuition of classroom teachers is skewed by anecdotal evidence, and
while some of that intuition is correct, a lot of it is completely opposite
to what we know from PER and cognitive science. Karplus used intuition, but
he also tested students to see how they responded to various interventions.
Classroom teachers don't do that, but they should. Let me cite a simple
example of how intuition is wrong. Most texts teach energy before momentum,
and a professor at a 2YC that I worked with thought that was the correct way
to do it. But the work of Laws et al. showed that you get better results in
reverse. Similarly, Newton's laws are generally taught in the order 1, 2, 3,
but experiments have shown that the third law should be taught first, by
introducing interactions before teaching the second law. Redish tried to
achieve FCI gain with better lectures, and failed. Then he tried a few PER
labs and succeeded. Mazur had the same experience, so he hit on the
interactive lecture, where he now mainly asks questions.

In psychology there was a long-held belief that IQ was a fixed measurement
which did not change. But this was refuted by experiments in JRST, where
science educators such as Lawson showed that IQ could be raised, though not
by didactic means. Shayer & Adey and Feuerstein also showed this. Intuition
is just as often wrong as right; you still have counselors who insist that
IQ, or the ability to think, is fixed.

PER is certainly much more difficult than rocket science because you do not
know all of the variables, but that does not invalidate the papers.

As to using PER papers as a guide to classroom practice, only a small
minority of teachers are capable of this. We have found that classroom
practices generally change only through extended workshops that provide the
necessary materials along with intensive instruction in the reformed
methods. A few teachers are capable of paradigm change, but most just teach
the way they were taught. Modeling, for example, has demonstrated gains
first in the trained teachers' own scores and then, as those teachers gain
experience, in their students' scores as well.

I know there are some good people on this list who publish in the
PER literature, and I would encourage you to continue. As the saying
goes: The light shines in the darkness, and the darkness cannot
overcome it.

Ahhh, but the darkness resists it. Remember that it took a new generation
to embrace antiseptics, while the older MDs clung to their prestigious
blood-stained tweed coats.

I think that I can see lowered student initiative in science courses. That
is, students are much more passive because they are being noodled more
intensively, with a resulting fossilization of their ability to transfer.
(Noodling: as with noodling geese, i.e., force-feeding.)

I am serious about wanting examples of particular PER papers which have
flawed physics or pedagogy. Please cite some, and be specific. If the papers
cannot be refuted, then I must assume that they are actually pretty good,
especially the ones written by the major authors. But whenever I issue this
challenge, it is ignored. As a result I must conclude that either my
observations are probably correct, or people are woefully ignorant of the
PER/science education literature.

John M. Clement
Houston, TX