
Re: [Phys-L] determine k



On 02/08/2015 09:44 PM, I wrote:
I think the process outlined above is pretty much
squeaky-clean in terms of physics and statistics.

Well, not quite. I thought about it some more.

If you measure the spring using an oscillator, I
predict that you will discover the existence of
something called /systematic error/.

The textbooks are full of techniques for handling
random errors ... but not so much systematic errors.

In particular, I predict that your oscillators will
be affected by the mass /of the spring/ ... not just
the mass of the bob. (The Hooke's law experiments
will not be affected in the same way.)

There is a longstanding (but not venerable) tradition
on phys-l of making things overly complicated. I'm
trying to avoid that, or at least minimize the damage.

The oscillator experiment has the /potential/ to
be extremely accurate. Mass can be measured very
precisely, and time can be measured even better
than mass. It pains me to think of a potentially
excellent experiment spoiled by systematic error.

If this were a college experiment instead of an HS
experiment, the path would be straightforward:
measure the oscillator frequency using several
different bobs, with widely varying masses. Then
do some data reduction to model the contribution
from the mass of the spring.

For HS that's way too much work. It's too much
lab work and too much cognitive load. I'm trying
to come up with some constructive suggestions,
with limited success. Right now I'm thinking
out loud.

Of course the simplest thing is to just take
the attitude that "life sucks and then you die".
That is, just accept that the dominant error
will be a systematic error.

Alternatively, one could imagine just using
a "big enough" mass so that the spring is
negligible in comparison. Alas, I doubt that
can be done with real-world springs ... not
without a painful sacrifice of accuracy, or
a more complicated experimental apparatus,
or some other nasty business.

A reasonably simple way to get decent accuracy
is to weigh the spring and use theory to say
that the /effective mass/ is the mass of the
bob plus 1/3rd of the mass of the spring.
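In code form, that recipe is one line of arithmetic: since
f = (1/2 pi) sqrt(k/m_eff), we have k = (2 pi f)^2 * m_eff.
A minimal sketch (the function name and the example numbers
are mine, not from the post):

```python
import math

def spring_k(freq_hz, m_bob_kg, m_spring_kg):
    """Spring constant from the measured oscillation frequency,
    using the effective mass m_bob + m_spring/3.  The 1/3 factor
    is the ideal-spring result mentioned in the text."""
    m_eff = m_bob_kg + m_spring_kg / 3.0
    omega = 2.0 * math.pi * freq_hz    # angular frequency
    return omega**2 * m_eff            # k = omega^2 * m_eff

# Illustrative numbers: 0.200 kg bob, 0.030 kg spring, f = 1.50 Hz
k = spring_k(1.50, 0.200, 0.030)
```

Note that ignoring the spring mass here (using 0.200 kg instead
of 0.210 kg) would shift k by about 5% -- a systematic error far
larger than the precision of the mass and time measurements.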

You could just pull that result out of some
place where the sun doesn't shine, or you could
justify it empirically: Have various students
use different masses. We can't expect the students
to model the data properly, but the teacher can
do it. The frequency should fall on a straight
line as a function of 1/sqrt(effective mass).
Here's a spreadsheet:
https://www.av8n.com/physics/measure-k-oscillator.xls
By wiggling the scrollbar you can show that the
data fits the model much better if you include
1/3rd of the spring in the effective mass.

My spreadsheet uses synthetic data, but you
could perfectly well populate it with real
data (in the "bob mass" and "observed freq"
columns) and use it to fit to the effective
mass.

I know this is still too complicated, but it's
the best I can do at the moment. (There's a
long list of even-more-complicated schemes I
considered and rejected.)

There is lots of upside potential here. At the
college level this is a nifty way to demonstrate
why you should not put much faith in chug-and-plug
statistics. You could measure the frequency to
one part in a gazillion, and you could "prove"
that the statistical uncertainty was very small
... but there would still be systematic error,
and statistics generally won't tell you that,
especially if you use only one bob. If you use
a variety of bobs, you will begin to see some
scatter in the data, but even so, the mean of
the distribution will *not* be a good estimate
of the thing you are trying to measure.

Any decent statistician will tell you that statistics
by itself is never enough. Doing statistics on a
black box is a recipe for disaster. You need to
do the physics. That is, you need to build a
model that makes sense in fundamental physics
terms. Then analyze the data by fitting to the
model.

This is fairly typical:
a) If you do the statistics incautiously, you
get badly fooled.
b) If you do the statistics more carefully (e.g.
using an ensemble of bobs), statistics will tell
you that you've got a problem, but won't tell
you how to fix it.
c) To fix it, you need physics. (Of course you
still need the statistics.)

If you know enough theory, a zero-parameter model
suffices for this experiment: You know that for
an ideal spring the fudge factor is 1/3rd of the
spring mass. Alternatively, you can skip the
theory and determine the fudge factor empirically,
using a one-parameter model (and modestly more
data).
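The one-parameter fit needs no fancy machinery, because 1/f^2
is linear in the bob mass: 1/f^2 = (4 pi^2 / k)(m_bob + alpha *
m_spring), so the slope gives k and the intercept gives the
fudge factor alpha. Here is a sketch using ordinary least
squares, checked against synthetic data in the spirit of the
spreadsheet (function name and numbers are mine):

```python
import math

def fit_fudge_factor(bob_masses, freqs, m_spring):
    """Least-squares fit of 1/f^2 = (4*pi^2/k)*(m_bob + alpha*m_spring).
    The model is linear in m_bob, so no iterative fitting is needed.
    Returns (k, alpha), where alpha is the spring-mass fudge factor."""
    x = list(bob_masses)
    y = [1.0 / f**2 for f in freqs]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar)**2 for xi in x))
    intercept = ybar - slope * xbar
    k = 4.0 * math.pi**2 / slope           # slope = 4*pi^2/k
    alpha = intercept / (slope * m_spring)  # intercept = slope*alpha*m_spring
    return k, alpha

# Synthetic check: k = 20 N/m, 0.030 kg spring, true alpha = 1/3
k_true, m_s, a_true = 20.0, 0.030, 1.0 / 3.0
bobs = [0.050, 0.100, 0.150, 0.200, 0.300]
fs = [math.sqrt(k_true / (m + a_true * m_s)) / (2 * math.pi) for m in bobs]
k_fit, a_fit = fit_fudge_factor(bobs, fs, m_s)
```

With clean synthetic data the fit recovers k = 20 and alpha = 1/3
essentially exactly; with real data, alpha coming out near 1/3
is the empirical justification for the theory result above.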