In his Phys-L post of 11 Feb 2005 Rick Tarara (2005a) wrote
[bracketed by lines "TTTTTTTTT. . . ."]:
TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
But how do you even do a meta-study? In the case of the response
systems -- do you go out and pick ten instructors at random working
with a range of courses and say "OK, next year you must use this
tool. Keep these records, and then we'll come in and test your
students." Well of course not. You can collect data from people
using method A, B, or C but then you are back to the "enthusiast"
bias. . . . [for 73 references to such Tarara-discounted data see
Hake (2005a)].
In the end, I think this is just a very tough area to apply
"scientific" testing to. We can't do double blind experiments in
most cases. No placebos to put out there. No really "neutral"
presenters. Way, way too many variables.
Teaching is still an art form. As Joel said, use your common sense.
If a tool or technique makes sense to you, and if you are willing to
put in the time and effort into learning how to use it, it is
worth trying [assuming you are unsatisfied with what you are
currently doing]. This particular example. . . [student response
systems]. . . may be on the fringe of that advice however, because of
the cost involved, so I can understand why someone looking at it
would want as much information as possible. However, "research" here
is unlikely to be very meaningful, so in the end, I'd look to
personal testimonials of people you trust.
TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
John Denker responded on 11 Feb 2005 12:19:40-0500:
"It ought to be possible [to do a meta-study]. As usual, it is
important to ask the right question. . . Let's not let the perfect
be the enemy of the good. Let's focus on what's possible . . . . some
information is better than no information."
I agree completely with Denker, except that I prefer Voltaire's:
"Le mieux est l'ennemi du bien."
(The best is the enemy of the good.)
Voltaire, Dictionnaire Philosophique (1764), "Art Dramatique."
The reason for my preference is that "nothing is perfect," not even
the double-blind randomized control trials demanded by education
experts Rick Tarara (2005a) and Douglas Carnine (2000).
Even the NON-double-blind randomized control trials (RCT's) enthroned
by the U.S. Dept. of Education (USDE) as the "gold standard" of
education research are a far cry from perfect. In my opinion, they
are not even the "best" [Hake (2005c)].
"Let us agree at the outset that *good teaching is an art,* fully
deserving our respect and admiration. It does not follow, however . .
. that there cannot also be *a science of teaching.* "
David Hestenes (1979).
Hake, R.R. 2005c. "Should Randomized Control Trials Be the Gold
Standard of Educational Research?" online at
<http://lists.asu.edu/cgi-bin/wa?A2=ind0504&L=aera-l&T=0&O=D&P=1945>.
Post of 15 Apr 2005 to AERA-C, AERA-D, AERA-G, AERA-H, AERA-J,
AERA-K, AERA-L, AP-Physics, ASSESS, Biopi-L, Chemed-L, EvalTalk,
Math-Learn, Phys-L, Physhare, POD, STLHE-L, & TIPS.
Tarara, R. 2005a. "Re: Research on Student Response Systems," Phys-L
post of 11 Feb 2005 11:31:27-0500; online at
<http://lists.nau.edu/cgi-bin/wa?A2=ind0502&L=phys-l&O=A&P=19483>.
For another of Tarara's breakthrough insights on education research
see Tarara (2005b) and the response by Hake (2005b).