
Re: Ten Learning Principles - Worthwhile or Not?



Hi all-

I've been busy on a legal matter. I apologize for the delay in
responding.


On Wed, 30 Jan 2002, Richard Hake wrote (in relevant part):

In his Phys-l post of 30 Jan 2002 00:24:08 -0600 titled "Re: Ten
Learning Principles - Worthwhile or Not?" Jack Uretsky made several
comments (C) to which I shall respond (R).

11111111111111111111111111111111111111111
C1. "There is a glaring difference between the educational research
and the physics research that I have seen. The educational
experiments seem to be generally designed and conducted for the
purpose of proving the correctness of an educational theory."

R1. Jack appears to have a rather limited familiarity with the
educational literature, in particular with Hake (2002). That study is
NOT designed to prove the correctness of any "educational theory."
As clearly stated in the introduction, the study was designed to
answer the question:

"Can the use of interactive engagement (IE) methods increase the
effectiveness of introductory mechanics courses well beyond that
obtained by traditional methods?"

Here "IE methods" are OPERATIONALLY defined as "those designed at
least in part to promote conceptual understanding through interactive
engagement of students in heads-on (always) and hands-on (usually)
activities which yield immediate feedback through discussion with
peers and/or instructors, all as judged by their literature
descriptions."

Ask a loaded question, you'll get a loaded answer.
1. As Hake and Uretsky have emphasized in their discussion labs, an
operational definition is one that can be sketched as someone doing
something. I'd like to see a sketch of "promote conceptual
understanding".
2. Hake leaves out of "increase effectiveness" an implied "as measured by
the FCI". Operational thinking teaches us that the measuring instrument
defines the measurement.
3. The question, which sounds neutral, is not really an a priori
question. It is an a posteriori question proposed by a proponent of IE
methods (which I happen to be also, maybe only since I graduated from
law school). The question is a posteriori because it focuses on
one particular teaching method as opposed to all others. Studies based on
a posteriori questions were roundly criticised generations ago by R.A.
Fisher, whose methods we otherwise follow.


22222222222222222222222222222222222222222
C2. "Physics experiments are generally designed and conducted for
the purpose of disproving a physical theory."

R2. Jack seems to have a high-energy theorist's theory-first view of
experimental physics. In my 40 years of experimental research in
superconductivity and magnetism, I never designed and conducted an
experiment "for the purpose of disproving a physical theory." Nor
did most of my fellow condensed-matter physicists. Instead, many of
us did experiments designed to uncover NEW physical phenomena,
letting the theoretical chips fall where they may.

So let me restate it in condensed-matter-physicist language: Physics
experiments are generally designed and conducted to seek unexpected
results. This is done by making quantitative, a priori predictions of
expected results according to accepted theory, and then determining the
likelihood that the experimental result differs from expectation. When
that likelihood is deemed insignificant, then no new physical phenomena
are claimed to have been discovered. A result may, of course, be an
improved determination of the value of a physical quantity.
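The procedure described here -- an a priori prediction, a measured
result, and a judgment about whether the difference is significant --
can be sketched numerically. The numbers below are invented for
illustration, and a simple normal error model is assumed:

```python
import math

def deviation_significance(measured, predicted, sigma):
    """How many standard errors the measurement lies from the
    a priori theoretical prediction, plus the two-sided p-value
    under an assumed normal error model."""
    z = abs(measured - predicted) / sigma
    p = math.erfc(z / math.sqrt(2))  # two-sided tail probability
    return z, p

# Invented example: theory predicts 1.000, experiment gives 1.003 +/- 0.002.
z, p = deviation_significance(measured=1.003, predicted=1.000, sigma=0.002)
# With z = 1.5 (p of roughly 0.13) the deviation is deemed insignificant:
# no new phenomenon is claimed, though the result may still serve as an
# improved determination of the quantity's value.
```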


33333333333333333333333333333333333333333
C3. "How many educational experimental papers have you seen where
the proponent of a new educational theory reported on the failure
of the theory? Yet it is valuable to know the techniques that are
enthusiastically tried and don't work."

R3. Here again, Jack's unfamiliarity with the educational
literature, and especially Hake (2002), is apparent. In the latter
paper I write:

"Average pre/post test scores, standard deviations, instructional
methods, materials used, institutions, and instructors for each of
the survey courses are tabulated and referenced in Hake (1998b). THE
LATTER PAPER ALSO GIVES CASE HISTORIES FOR THE SEVEN IE COURSES WHOSE
EFFECTIVENESS AS GAUGED BY PRE-TO-POST TEST GAINS WAS CLOSE TO THOSE
OF T COURSES, advice for implementing IE methods, and suggestions for
further research."
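For context, the pre-to-post gain gauge used in Hake (1998b) is the
average normalized gain: the fraction of the maximum possible
improvement that a class actually achieves. A minimal sketch, with
invented example scores:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's average normalized gain <g>: the actual class-average
    gain divided by the maximum possible gain, where pre_pct and
    post_pct are class-average percentage scores on the same test
    (e.g. the FCI)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Invented example: a class averaging 45% before and 67% after
# instruction achieves 40% of its maximum possible gain.
g = normalized_gain(45.0, 67.0)  # 0.4
```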

Dick is here hoist on his own petard. When the expected result is not
obtained, then one finagles the apparatus to get the expected result
("advice for implementing IE methods"). That's educational (you should
excuse the expression) research.


Regards,
Jack