
[Phys-l] unbiased experiments +- index of refraction



On 05/11/2009 06:48 PM, Hugh Haskell wrote:

I would argue that any lab in which the goal is to measure the %
difference between their value and the "accepted" value teaches the
students nothing of value, and in most cases does just the opposite.
First, it teaches the students that experiments are not to find out
anything new, but to verify what we already know, and second it leads
to what we used to call in the Navy, "gundecking" the results--that
is, making the results give the "right" answer. Furthermore, the
"accepted" value is the accumulated best experimental results, and
not in any sense "correct," so calculating the % difference between
the two is pretty meaningless, and particularly confusing when the
"accepted" value is zero, since any % difference from zero is
automatically infinite.

It is much better to design experiments that have no pre-known
answer, and show them how to 1) estimate a statistical uncertainty
value, and 2) look critically at the experimental setup and try to
figure out what, if any, systematic error might be present due to the
experimental design.
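The first point above, estimating a statistical uncertainty from repeated measurements, can be sketched in a few lines of Python. The numbers are made up for illustration; the recipe is the standard one: sample standard deviation, then standard error of the mean.

```python
import math

# Hypothetical repeated measurements of an index of refraction
measurements = [1.332, 1.335, 1.331, 1.334, 1.333]

n = len(measurements)
mean = sum(measurements) / n

# Sample standard deviation (n - 1 in the denominator)
variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
std_dev = math.sqrt(variance)

# Standard uncertainty of the mean (standard error)
std_err = std_dev / math.sqrt(n)

print(f"n = {mean:.4f} +/- {std_err:.4f}")
```

Students can then quote the result as mean plus or minus standard error, with no reference to any "accepted" value.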

Amen, brother.

I have not much to add, except to point out that the index of
refraction experiment makes it easy to do things right. You can
make it a "double blind" experiment by making a whole bunch of
samples all different, assigning arbitrary serial numbers to them,
and then handing them out to be measured.

One approach: stock + one drop of diluent, stock + two drops of
diluent, et cetera.

Meanwhile, the students have no way of cooking the books. Doing the
measurement is easier than cheating.

After all the results are in, you can de-blind the samples and plot
the results as a function of how the samples were constructed. You
can make the point that you don't know the "right" answer because you
didn't measure the stock or measure the samples before handing them
out. But by de-blinding the samples you can obtain a consensus view
of what the stock must have been. Botched measurements will stick out
because their data will fall far from the regression line.
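That last step, fitting the de-blinded results and spotting the botched measurements, can be sketched as an ordinary least-squares fit with a residual cut. The data below are invented for illustration (index versus number of drops of diluent), with one deliberately botched point; the 2-sigma residual cutoff is an arbitrary choice, not a rule.

```python
# Hypothetical de-blinded results: (drops of diluent, measured index).
# The point (5, 1.3702) is a deliberately botched measurement.
data = [(0, 1.3702), (1, 1.3651), (2, 1.3598), (3, 1.3551),
        (4, 1.3499), (5, 1.3702), (6, 1.3401)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

# Ordinary least-squares slope and intercept
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Flag points whose residual is more than twice the RMS residual
residuals = [y - (slope * x + intercept) for x, y in data]
rms = (sum(r * r for r in residuals) / n) ** 0.5
outliers = [(x, y) for (x, y), r in zip(data, residuals) if abs(r) > 2 * rms]
print("outliers:", outliers)
```

The slope of the fit line recovers how much each drop of diluent shifts the index, and the intercept is the consensus estimate of the stock, neither of which was known in advance.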