
Re: [Phys-l] estimation competition?



On 04/29/2007 11:49 AM, Brian Blais wrote in part:

I think it is important to communicate that the uncertainty in an estimate is every bit as important as the estimate itself.

This is a super-important topic that deserves more
discussion than it has yet received.

Let me come at it from a slightly different angle.

The problem with a whole lot of uncertainty-related
activities is that the students don't have a /real/
reason to care about the uncertainty; they only care
because the teacher /claims/ to care about the
uncertainty.

Among other things, this allows wild misconceptions
to creep into the system and go unchallenged. For
example, on another list there recently appeared
the assertion that the uncertainty of a sum is the
sum of the uncertainties of the addends. This is
nasty, but if the students calculate the "uncertainty"
according to that rule, they will be marked "correct",
and they will encounter little empirical evidence that
would tell them how wrong that is.
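
As a concrete check, here is a minimal numerical sketch
(mine, in Python with only the standard library; the
sigmas are made-up numbers): add two independent noisy
quantities and look at the spread of the sum.

  # A minimal sketch (not part of the original post): add two noisy numbers
  # with known sigmas and see how big the spread of the sum actually is.
  import math
  import random
  import statistics

  random.seed(0)
  sigma_a, sigma_b = 3.0, 4.0          # made-up uncertainties for illustration
  N = 100_000

  sums = [random.gauss(0, sigma_a) + random.gauss(0, sigma_b) for _ in range(N)]

  print("observed sigma of the sum:", statistics.stdev(sums))       # about 5.0
  print("sum of the sigmas:        ", sigma_a + sigma_b)            # 7.0
  print("quadrature sum:           ", math.hypot(sigma_a, sigma_b)) # 5.0

For independent errors the spread adds in quadrature, not
linearly; changing sigma_a and sigma_b and re-running
makes the pattern hard to miss.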

In the real world, you might bet your life on something that
has ten-sigma reliability, but you wouldn't bet your life
on something that has only one-sigma reliability. So yes,
it is important to know the uncertainty.

The way I think about uncertainty is in terms of probability.
I just don't see any other way to approach it. (If somebody
has a better approach, please let us know!)

Assuming that uncertainty depends on probability, it would
be nice to claim we have reduced it to a previously-solved
problem ... but alas the previous problem is not well
solved! All too commonly, incoming students have almost
no clue about probability.

So let me start there. Here is an activity that illustrates
some important ideas about probability.

*** Cat and Mouse Game ***

This is nominally a two-player game, but it works just
fine if one person plays both roles.

This is a board game. You can use the outermost ring of
squares on a standard checkerboard, forming a ring of
28 squares.

There are two markers, one representing the cat and one
representing the mouse. They start out X units apart.

The players take turns. On each turn, the mouse player
tosses the coin. If it comes up heads, the mouse gets
to move one square away from the cat. Tails means no
move. Then the cat player tosses the coin. If it comes
up heads, the cat gets to move one square toward the
mouse. Tails means no move.

The big question is, how many moves does it take for the
cat to catch the mouse?

Many kids find the answer counterintuitive. If they
think just in terms of the "nominal" rate of motion
or "average" rate of motion, then the cat and the mouse
move at the same rate. It's true that they move at the
same rate /on average/ ... but that doesn't answer the
big question. The cat wins because of /fluctuations/,
i.e. /deviations/ from the average behavior.

Another interesting point is that the length of the game
depends on the /square/ of X, to a first approximation,
especially when X is not too large. That is waaay
counterintuitive to kids who have been fed a steady
diet of distance-equals-rate-times-time problems.
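
If you want to see that scaling without grinding through
the math, here is a minimal Monte Carlo sketch (mine, in
Python, not part of the original game write-up). It tracks
only the gap between cat and mouse on the ring of 28
squares, per the rules above, and plays many games for
each starting separation X:

  # Monte Carlo sketch: how long does the game last as a function of the
  # starting gap X?  Each round, mouse-heads grows the gap by one and
  # cat-heads shrinks it by one, on the ring of 28 squares; the game ends
  # when the gap reaches zero (they meet).
  import random
  import statistics

  RING = 28

  def game_length(x, rng):
      gap, rounds = x, 0
      while gap != 0:
          rounds += 1
          if rng.random() < 0.5:                 # mouse's toss: heads -> gap grows
              gap = (gap + 1) % RING             # wrapping onto the cat also ends it
          if gap != 0 and rng.random() < 0.5:    # cat's toss: heads -> gap shrinks
              gap -= 1
      return rounds

  rng = random.Random(1)
  for x in (2, 4, 6, 8):
      games = [game_length(x, rng) for _ in range(10_000)]
      med = statistics.median(games)
      print(f"X = {x}:  median rounds = {med:6.0f},  median / X^2 = {med / x**2:.1f}")

The last column should come out roughly constant (that is
the X-squared scaling), at least for the smaller
separations where the wrap-around hardly matters. The
/mean/ game length is a murkier quantity, since it is
dominated by the occasional very long game.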


The next level of detail (maybe not necessary in truly
introductory situations) is that the game is cleverly
designed to be a random walk. That is, the /separation/
vector between cat and mouse undergoes an unbiased
random walk.

In contrast, the /position/ vectors of cat and mouse
undergo heavily biased motion. This is actually part
of the design; it makes the game look more interesting.
It looks like a pursuit (which is more interesting than
Brownian motion).
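
If you want to make that quantitative, here is a tiny
enumeration (mine, in Python) of the four equally likely
outcomes of one round. The separation step averages to
zero, while the mouse's own position drifts by half a
square per round:

  # Enumerate the four equally likely outcomes of one round; compare the
  # step taken by the separation with the step taken by the mouse itself.
  from fractions import Fraction

  sep_dist = {}                        # separation step -> probability
  mean_sep = Fraction(0)
  mean_mouse = Fraction(0)

  for mouse_heads in (0, 1):           # 1 = heads: mouse steps away
      for cat_heads in (0, 1):         # 1 = heads: cat steps toward
          p = Fraction(1, 4)           # fair coin, so each outcome is 1/4
          sep_step = mouse_heads - cat_heads
          sep_dist[sep_step] = sep_dist.get(sep_step, Fraction(0)) + p
          mean_sep += p * sep_step
          mean_mouse += p * mouse_heads      # the mouse only ever steps away

  print("separation step probabilities:", sep_dist)    # -1 and +1 get 1/4, 0 gets 1/2
  print("mean separation step per round:", mean_sep)   # 0   -> unbiased walk
  print("mean mouse step per round:     ", mean_mouse) # 1/2 -> biased drift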

An amusing side-note is that the random walk of the
separation vector is very precisely unbiased, even if
the coin is slightly biased, provided both cat and mouse
use the /same/ coin, since both players benefit equally
from any slight bias in the coin ... provided we assume
the bias is intrinsic to the coin and the same for each
toss, time after time ... i.e. assuming no cheating such
as via a "controlled" throw. See the papers by Persi
Diaconis on the less-than-perfect randomness of a coin
toss.
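
Here is the same bookkeeping (again mine) with a biased
coin, P(heads) = q for both players. The +1 and -1
separation steps each occur with probability q*(1-q), so
they cancel for any q; only the step variance changes:

  # Mean and variance of the separation step per round, with a coin that
  # comes up heads with probability q for BOTH players.
  from fractions import Fraction

  def sep_step_stats(q):
      mean = second_moment = Fraction(0)
      for mouse_heads in (0, 1):
          for cat_heads in (0, 1):
              p = (q if mouse_heads else 1 - q) * (q if cat_heads else 1 - q)
              step = mouse_heads - cat_heads
              mean += p * step
              second_moment += p * step * step   # equals the variance, since the mean is 0
      return mean, second_moment

  for q in (Fraction(1, 2), Fraction(3, 5), Fraction(9, 10)):
      mean, var = sep_step_stats(q)
      print(f"P(heads) = {q}:  mean separation step = {mean},  variance = {var}")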

The periodic boundary conditions mess up the mathematics
a little bit, but not too badly, especially when X is not
too large. Even without the periodic boundary conditions
(i.e. if you play the game on an unbounded number line) the
mathematics is slightly tricky; you can't just draw the
Gaussian bell curve and say "here is zero and here is X
and the curve plots probability as a function of X". That's
because we need to know the probability that the progress
of the cat toward the mouse is less than X (the starting
distance) /and/ has never been greater than or equal to
X. That is, we have a kind of _truncated_ random walk.

OTOH you can still draw the bell curve and use it to help
with the explanation. You can still make the point that
/on average/ the cat makes zero progress toward the mouse.
Sometimes knowing the average isn't the whole story, or
even the interesting part of the story.
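
Here is one more sketch (mine, in Python) along those
lines: it tracks the cat's net progress toward the mouse
over many trials /without/ stopping the game, and also
records whether the progress ever reached the starting
distance X, which is what a catch requires:

  # The cat's average net progress is zero, but the fluctuations grow with
  # the number of rounds, and it is the fluctuations that end the game.
  import random
  import statistics

  def one_trial(n_rounds, x, rng):
      progress, best = 0, 0
      for _ in range(n_rounds):
          if rng.random() < 0.5:       # mouse heads: cat falls back a square
              progress -= 1
          if rng.random() < 0.5:       # cat heads: cat gains a square
              progress += 1
          best = max(best, progress)
      return progress, best >= x       # final net progress, and "caught?"

  rng, X, trials = random.Random(2), 5, 10_000
  for n in (50, 200, 800):
      results = [one_trial(n, X, rng) for _ in range(trials)]
      finals = [p for p, _ in results]
      caught = sum(c for _, c in results) / trials
      print(f"after {n:4d} rounds:  mean progress = {statistics.mean(finals):+.2f},"
            f"  spread = {statistics.stdev(finals):.1f},  caught by now = {caught:.0%}")

The mean progress hovers near zero while the spread keeps
growing (roughly like the square root of the number of
rounds), and with it the fraction of games that would
already be over.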

Disclaimer: I do not claim originality for this game. I
read about it, or something like it, in a library book when
I was about 9 years old. I would like to give fuller credit
to the author, but I can't. Also I suspect I have left out
some details. You gotta cut 9-year-olds some slack.

I remember being wildly impressed, and spending half a day
flipping coins and pushing markers around. I learned a
lot.

I leave it to y'all to fill in the pedagogical connections
to the mathematics of random walks and the physics of
Brownian motion.