
Re: Order of E&M topics (was B and electric charge)



produces the "70% barrier" in most modeling physics classes - the inability of most modeling instructors to achieve higher than a 70% class average on the Force Concept Inventory (FCI). (Still way beyond what traditional instruction can achieve.)

I have to take exception to this last statement, the likes of which seem to permeate PER. I can fairly state that my General Ed physics class regularly ends the first semester above 70% on the FCI, and my new Calculus-Physics class (with whom I spent much less time working on Newton's Laws) ended up this year at 66% with a normalized gain of .47. I have no clear-cut idea what a 'traditional instruction' based course actually is, but I can state that I don't use any particular pedagogical 'program'. In fact, due to 'outside' constraints on the content, the Calc-based class was as close to a 'lecture & demonstration' class as I've run for many years--we covered 11 chapters of Hecht's Physics: Calculus with a strong emphasis on problem solving. In other words, one CAN get high-score performance on the FCI without committing to a modeling course, a directed discovery course, or a cooperative learning course. You do have to keep the students interested, and you can have them learn a 3rd Law 'mantra', which can do wonders on the FCI. Do I think that either of these classes _really_ has a deep understanding of Newton's Laws? NO. That certainly was not a fundamental goal of the Calc course, and it was only a secondary goal of the GenEd course.
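For readers outside PER: the "normalized gain" quoted above is conventionally Hake's gain, the fraction of the possible pre-to-post improvement that a class actually achieves. A minimal sketch (the pre/post percentages below are hypothetical; only the .47 gain comes from the post above):

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain: (post - pre) / (100 - pre),
    i.e. the fraction of the available room for improvement realized."""
    if pre_pct >= 100:
        raise ValueError("pre-test already at ceiling")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical example: a class moving from 40% to 68.2% shows g = 0.47.
print(round(normalized_gain(40.0, 68.2), 2))
```

Note that the same gain can arise from many different pre/post pairs, which is exactly why it is reported alongside raw class averages rather than instead of them.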

I would say that you have very strong evidence that it is possible to get better results on the FCI by conventional means, and I would suggest that this could be published. I would also wonder what sort of scores you get on the FMCE, which is very different in format from the FCI. While it yields comparable results, it may be possible to teach in a fashion that produces better scores on one than on the other. Since you have been testing your students, I would call the evidence better than anecdotal and indeed publishable.

Rather, I would point to these kinds of results as anecdotal evidence that the FCI is not the be-all and end-all of tools for assessing pedagogical techniques. One CAN teach to this test (3rd Law mantras are an example), but more importantly, this (or any other fairly specific test) also drives the focus of the instructor--and not always to the benefit of the students. It is just possible that an in-depth understanding of Newton's Laws is not particularly important or useful for the majority of students taking introductory physics (yes, it is important for some), and instructors who focus their attention on getting good gains on the FCI may do so at the expense of other legitimate educational goals for their students.


I think that while the FCI is certainly a large factor in many of the published papers, other considerations are also being investigated. The ability of students to do expert problem solving is probably the second largest interest, and the ability of students to understand graphs, a la the TUG-K, is also being explored. Of all of these tests, the FCI is probably the one that uses the simplest language and as a result is the most accessible. Also, because it was one of the earliest to be published in a readily available journal, it has become a standard.

I completely agree that focusing solely on the FCI is a mistake; however, very low FCI scores are probably indicative of very low understanding by the students. One comment attributed to Hestenes was that he wondered why many students, even with good FCI scores, had trouble doing problems. This is where other things like rich-context problems, ALPS worksheets, or the MOP problem-solving methods are useful. As far as I know, there are no good evaluations of the ability to solve problems.

Done with my periodic rant about the FCI and absolutes about what works and what doesn't!

I think the mistake that is often made is to assume that a particular method is absolutely the best for all possible students. Obviously the current lecture system allows a select few to excel in physics; however, it seems to be a filter for many other students. Current research is focusing on trying to get good results with a larger number of students. There is some evidence that telling students the definitions first is OK for students who test as formal thinkers, but does not work well for concrete thinkers. The weaker students need some exploration before the definitions (lectures), and this is one of the things provided by the reformed curricula. Anyone who is interested in a good book on the research should read Lawson, "Science Teaching and the Development of Thinking". Lawson also brings up the best argument in favor of the learning-cycle approach, which is common to most reformed physics courses: the learning cycle improves overall thinking ability. At the moment few PER types are testing for this effect. BTW, it may be possible to implement a learning cycle in a lecture course, but I have not seen any reports of this.

A very good example of a conventional intervention that helped students dramatically was the work of Mehl. It was his thesis at the University of Cape Town, but it is summarized in "Really Raising Standards" by Shayer & Adey. He took half of the physics class and treated those students differently. He carefully observed which tasks and ideas the students had difficulty with, and then devised methods for the students to use when solving problems. The result was extremely dramatic: the treated half had a 100% passing rate on the final, while the untreated half had a 50% failure rate. His work, however, was far from conventional. He used the theories of Vygotsky and Feuerstein to guide it. He investigated the students' capabilities through extensive interviews, and on that basis he designed the program. While the format of the course was not changed, the lecture instruction revolved around his changes. However, his program did not seem to generalize to other aspects of physics, so it probably did not improve student thinking skills.

One good indicator of whether the instruction is really changing student understanding would be to give the FCI after a time delay and see whether the scores drop much. Since this subject came up before, I performed this experiment: I gave the FCI cold after a Christmas break of nearly 3 weeks following my midterm exam. I observed a negligible drop of about 0.7 points out of the maximum of 30. While prepping might improve scores on a test, it should have much less effect on a delayed surprise evaluation.
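To put that drop in perspective, a quick arithmetic sketch (using only the 0.7-point drop and the 30-item maximum stated above):

```python
# Express the delayed-retest drop as a fraction of the 30-item maximum.
max_score = 30
drop = 0.7
pct_of_max = 100 * drop / max_score
print(f"drop = {pct_of_max:.1f}% of the maximum score")
```

A drop of roughly 2% of the maximum, weeks after instruction and with no warning, is hard to square with the idea that the earlier scores were mere test-prep artifacts.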