
Re: data on typical FCI scores



I repeat my concerns about what I feel is an 'unhealthy' reliance on the
FCI in determining the effectiveness of instructional techniques. It is
not at all clear to me that good performance on the FCI is a sufficient
indicator of conceptual understanding. I can easily teach to the FCI,
and especially to the types of questions on the FCI (the Third Law
questions as presented in the FCI are a good example), and get 70-75%
performance out of a non-scientist (low-math) class. That does not
translate into 75% conceptual understanding of this material. I suggest
that it will take a lot more in the way of diagnostic tools to really
prove that the Hake data have the significance attributed to them. For
example (as mentioned before), the Force and Motion Conceptual
Evaluation from Tools for Scientific Thinking, CSMT, Tufts University,
can easily show results that diverge from the FCI's. Interactive
techniques developed in one place or by one person often don't
translate into the same level of performance when used by someone else.
The classification of courses as IE or traditional was somewhat suspect
in Hake's original draft (sorry, I haven't seen the published paper)
and is, I suspect, often a subjective call regardless (how many people
really do PURE lecture courses today?).

Rick

----- Original Message -----
From: "Brian McInnes" <bmcinnes@PNC.COM.AU>
To: <PHYS-L@lists.nau.edu>
Sent: Thursday, January 06, 2000 4:12 AM
Subject: Re: data on typical FCI scores


Brian Whatcott looked at some of the data Richard Hake presented
..
the average effectiveness of a course in promoting conceptual
understanding is taken to be the average normalized gain <g>. The
latter is defined as the ratio of the actual average gain (%<post> -
%<pre>) to the maximum possible average gain (100 - %<pre>)....

(b) Traditional (T) courses ... average <g> for 14
courses (N = 2048) of 0.23 ± 0.04sd
...
(c) Interactive-engagement (IE) ... average <g> = 0.48 ± 0.14sd.

(d) Current IE methods need to be improved, since none of the IE
courses achieves <g> greater than 0.69.

and asked

Can anyone explain why the IE method, shown to be better,
is in need of improvement?
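
For concreteness, the definition quoted above is easy to compute. Here
is a minimal sketch in Python; the class averages below are
hypothetical, chosen only to illustrate the arithmetic, not taken from
Hake's data:

    # Average normalized gain <g> = (%post - %pre) / (100 - %pre),
    # per the definition quoted above.
    def normalized_gain(pre_pct: float, post_pct: float) -> float:
        return (post_pct - pre_pct) / (100.0 - pre_pct)

    # Hypothetical class averages on a 0-100% test:
    print(normalized_gain(45.0, 60.0))  # ~0.27, near the traditional average
    print(normalized_gain(45.0, 72.0))  # ~0.49, near the IE average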

The g value that Richard uses measures how much improvement in
conceptual understanding instruction has produced. It looks at the gap
between understanding at the start of the instruction and a "perfect"
conceptual understanding.
Richard's thorough investigation shows that traditional courses get
students, on average, only about 20% of the way; interactive-engagement
instruction does a lot better, getting students, on average, about 50%
of the way. BUT that is ONLY 50% of the way, and the very best results
were only about 70% of the way. Richard is saying: why can't we do
better? It looks as though we are on the right track, but there's a
long way to go.
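
To make the "X% of the way" reading concrete, invert the definition:
%post = %pre + g * (100 - %pre). A small sketch (the 40% pre-test
average is a hypothetical starting point; the g values are the figures
quoted above):

    # Hypothetical pre-test average; g values from the quoted figures.
    pre = 40.0
    for g in (0.23, 0.48, 0.69):
        post = pre + g * (100.0 - pre)
        print(f"g = {g:.2f}: pre {pre:.0f}% -> post {post:.1f}%")
    # g = 0.23 -> 53.8%, g = 0.48 -> 68.8%, g = 0.69 -> 81.4%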

I've heard that Priscilla Laws, at Dickinson College, does get much
better than 70%, but which of us can replicate the Dickinson
environment?

Brian McInnes