
Re: [Phys-l] estimation competition?



On 04/29/2007 11:49 AM, Brian Blais wrote:
Hello,

I was thinking it would be fun to have a competition in my class estimating various quantities. It could be things like fractions of colors of M&Ms, or high and low temperatures, or whatever. What I would like most is that they report not just the estimate, but the uncertainty as well. Then I'd like to rank them in some way, and I am not sure what is the best way to do this.

Intuitively, I want something that has the following properties:
1) for the same uncertainty, larger deviations of the estimate from the actual yield lower rank
2) for the same estimate, larger uncertainty yields lower rank

Has anyone ever tried this? I think it is important to communicate that the uncertainty in an estimate is every bit as important as the estimate itself.

That's an excellent question for two complementary reasons:
a) The objective is entirely commendable.
b) It needs to be done right, and doing it right isn't easy.

I don't have all the answers, but here are some possibly-helpful
remarks:

1) The idea of keeping track of the uncertainty is so important
that it ought to be built into the course in many, many places.
Having one special "Uncertainty Day" would be sort of like having
"National Brotherhood Week" which carries the implication that
the issue is not important the rest of the time.
http://www.guntheranderson.com/v/data/national.htm

2) Error analysis covers a lot of ground, ranging from some
relatively easy ideas to some seriously sophisticated and
complicated ideas. So you have to use some judgment as to
how much gets taught at what grade level.

In particular, the whole topic of "ranking" estimates based
on uncertainty as well as nominal value calls for comparing
two probability distributions. That is more than some students
can handle. (It's more than some teachers can handle, alas.)
Many students, if they have any idea of probability at all,
think in terms of "THE" probability, and the idea that there
could be two probability distributions covering the same set
of events requires some sophistication.

I'm not saying you shouldn't go there ... but you do have to
make sure the proper foundation has been laid.
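One well-known way to do the ranking, once students can handle it, is a
proper scoring rule. Here is a sketch (Python, made-up numbers) using the
Gaussian log-score: score each student by the log-likelihood of the actual
value under a normal distribution centered on their estimate with their
reported uncertainty as the width. It satisfies Brian's property (1)
exactly; it satisfies property (2) when the estimate is close to the
actual value, and (correctly, I'd argue) it *rewards* a larger stated
uncertainty when the estimate is badly off, so overclaimed precision
gets punished.

```python
import math

def log_score(estimate, sigma, actual):
    """Gaussian log-likelihood of the actual value, given a student's
    reported estimate and uncertainty sigma.  Higher score is better."""
    return (-math.log(sigma * math.sqrt(2.0 * math.pi))
            - (actual - estimate) ** 2 / (2.0 * sigma ** 2))

# Property (1): same uncertainty, larger deviation -> lower score.
print(log_score(10, 2, 12), ">", log_score(10, 2, 16))

# Property (2): same (accurate) estimate, larger uncertainty -> lower score.
print(log_score(10, 2, 10), ">", log_score(10, 4, 10))

# The flip side: a wildly wrong estimate with a tiny claimed uncertainty
# scores worse than the same estimate with an honest, large uncertainty.
print(log_score(10, 0.1, 20), "<", log_score(10, 5, 20))
```

All the numbers above are invented for illustration; the point is just
that one simple formula captures both ranking properties at once.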

3) There are various types of estimation. At one extreme, there
is quick-and-dirty estimation, where you merely glance at a pile
of M&Ms and try to estimate the numbers. At the other extreme,
almost all of science, including the most fanatical metrological
science, can be considered estimation, since we almost never
know the "actual" answer.

In particular, you could put M&Ms in an urn, draw random samples,
and use classical statistical techniques to estimate the nominal
percentages and the associated uncertainties ... all very very
precisely, if you take enough samples.
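As a sketch of the urn version (assuming draws with replacement and a
made-up true fraction of red M&Ms), the classical estimate is the sample
proportion, and its uncertainty is the usual binomial standard error,
sqrt(p(1-p)/n), which shrinks as you take more samples:

```python
import math
import random

random.seed(0)               # fixed seed so the run is repeatable
TRUE_FRACTION = 0.24         # hypothetical fraction of red M&Ms in the urn
n = 400                      # number of draws, with replacement

draws = [random.random() < TRUE_FRACTION for _ in range(n)]
p_hat = sum(draws) / n                       # sample proportion
sigma = math.sqrt(p_hat * (1 - p_hat) / n)   # binomial standard error

print(f"estimated fraction = {p_hat:.3f} +/- {sigma:.3f}")
```

With n = 400 the standard error comes out around 0.02, so students can
see directly how "very very precisely" scales with the number of samples.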

4) There are innumerable ways of increasing the physics content.
For example, consider the canonical "marble launcher" experiment.
Students could use physics to predict where the marble will land,
and then draw concentric one-sigma, two-sigma, etc. prediction
rings. Then you give a prize to the student whose histogram of
actual results most nearly matches his predicted distribution.
Use Kullback-Leibler "distance" or some such to compare the
distributions.
http://mathworld.wolfram.com/RelativeEntropy.html

Note that this can be done in such a way that it doesn't require
the student to match an "actual" result or a "consensus" result
... which is IMHO important, because (as previously mentioned)
in real science we almost never know the "actual" result, and
trying to match the consensus result is highly unscientific.

The same game can be played with almost any physics experiment,
provided it can be replicated enough times to create an empirical
probability distribution that can be compared with the predicted
distribution.
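For concreteness, here is a minimal sketch of the Kullback-Leibler
comparison for the marble-launcher game. The bins are the predicted
one-sigma ring, two-sigma ring, and everything outside; the probabilities
are invented for illustration. The student whose (normalized) empirical
histogram has the smaller divergence from their predicted distribution
wins:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for two discrete
    distributions over the same bins.  Assumes q is nonzero wherever
    p is nonzero; zero-probability terms of p contribute nothing."""
    return sum(pi * math.log(pi / qi)
               for pi, qi in zip(p, q) if pi > 0)

# Hypothetical predicted probabilities: 1-sigma ring, 2-sigma ring, beyond.
predicted = [0.68, 0.27, 0.05]

# Two students' empirical histograms of landing points, normalized.
student_a = [0.70, 0.25, 0.05]   # close to the prediction
student_b = [0.40, 0.40, 0.20]   # far from the prediction

print("A:", kl_divergence(student_a, predicted))
print("B:", kl_divergence(student_b, predicted))
```

Student A's divergence is near zero while student B's is much larger,
so A wins; note that the KL "distance" is not symmetric, so you have to
decide (and tell the students) which argument order you are using.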

5) Full-blown error analysis is hard. Therefore it might make
sense to restrict attention to experiments where there is only
one significant source of uncertainty ... even if this requires
jiggering the experiment to artificially introduce some well-
characterized uncertainty. Sampling error (as for M&Ms in an
urn) is one type of well-understood uncertainty, but it is
not the only type, and certainly not the one most relevant
to physics. I'm trying to think of an experiment that depends
on thermal noise in an age-appropriate way, but I haven't come
up with anything wonderful. Searching PIRA for "noise" and
"fluctuations" didn't turn up much beyond Brownian motion
demos. Searching for "turbulent" turned up little of interest.
My gut tells me I'm overlooking something, but for now I'm
stuck. Perhaps somebody else can suggest something.