
Re: [Phys-l] formatting uncertainties



On 01/24/2008 12:17 PM, Robert Cohen wrote:
> There have been a number of good points raised in this discussion.
> One is that rounding adds nothing if the uncertainty is already
> provided. On the other hand, if the uncertainty is not provided, one
> uses the number of digits (sig figs) to get a "vague sense" of
> the uncertainty.

Or not.

Suppose the ratio of two things comes out to be 2.54.
Do you know the uncertainty of that number, even vaguely?

Hint: That number might be the number of cm in an inch.
Or not.

> The question that generated this discussion, though, is not what to
> do when there is no uncertainty provided but rather what to do when
> there is an uncertainty provided. In that case, one must remember
> that a rounding error is introduced whenever you round (as pointed
> out by JD). So, it doesn't make sense to do that on purpose.

:-)

> Still, does this mean one should *never* round? It seems to me that
> it depends on how large the rounding error is relative to the stated
> uncertainty.

Be careful there. We have two ideas on the table. Significance
is not the same as uncertainty of measurement.
-- Uncertainty of measurement is affected by where the
number came from.
-- Significance is affected by what the number will be
used for.

> For 6.67255 ± 0.001, rounding to 6.673 causes an error (or at least
> an uncertainty) of 0.00045, which is pretty significant relative to
> the 0.001. You are definitely *increasing* the error/uncertainty by
> rounding in that situation.
>
> However, suppose you are given 6.67255 ± 1. In that case, it seems
> you can probably round to 6.673 and your uncertainty/error would
> still be ± 1.

Yes, "probably". But remember it is only a "probably"
valid guess, not any kind of trustworthy principle.
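
To make the "probably" concrete, here is a minimal sketch (plain
Python, my own toy code; the numbers are the ones quoted above)
comparing the shift introduced by rounding against the stated
uncertainty:

  # Shift introduced by rounding, versus the stated uncertainty.
  def rounding_shift(x, decimals):
      """Absolute error introduced by rounding x to `decimals` places."""
      return abs(round(x, decimals) - x)

  x = 6.67255

  # Stated uncertainty 0.001: rounding to 3 decimals shifts the
  # value by about 0.00045, i.e. ~45% of the error budget.
  print(rounding_shift(x, 3))          # ~0.00045
  print(rounding_shift(x, 3) / 0.001)  # ~0.45

  # Stated uncertainty 1: the same shift is utterly negligible.
  print(rounding_shift(x, 3) / 1.0)    # ~0.00045

Whether the rounding is acceptable depends entirely on that ratio,
not on any counting of digits.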

> Are you losing significant information by rounding in
> that case? I used to think it was okay to round in that case but I
> can see that if this number is used in another calculation it may be
> important to keep the extra digits (as in the GM example). Is
> rounding ever acceptable?

Sometimes it is. Sometimes it isn't.

There is a genuine dilemma here. Einstein said any theory should
be as simple as possible but not simpler. The distinctions between
uncertainty of measurement, roundoff error, significance et cetera
are important ... but that means the topic is not super-simple.

On the other horn of the dilemma, we have to start "somewhere";
we can't teach everything at once, and we can't cover everything
in an introductory course.

The simplest place to start is, alas, not terribly interesting.
The simplest idea is the idea of tolerance, which is complementary
to significance. When baking cookies, there is a tremendously
wide tolerance. You don't need 2 ± 0 cups of flour and 1 ± 0
cups of sugar. You could work backwards from the actual
tolerances to get some idea of what ± means. Cooking affords
many examples, and there are plenty of other examples as well,
such as adjusting the pH of a swimming pool, where it is not
necessary to weigh out the chemicals to 0.001% accuracy,
because it just doesn't matter.

IMHO the second step should be to consider roundoff error and
the _accumulation_ of roundoff error. The poster child for
this is numerical analysis of the two-body Kepler problem.
(Hi, Ludwik.) The objective is to timestep the equation of
motion, while keeping the _accumulated_ roundoff error small
enough so that you don't spiral into the sun or spiral out
to the Oort cloud. Extra credit if your ellipse doesn't
precess.
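
Here is a minimal sketch of that exercise (my own toy code, with
GM = 1 and a circular orbit; not anything from a particular course).
Plain forward Euler lets the accumulated per-step error spiral the
orbit outward, while the semi-implicit "kick then drift" variant
keeps the energy error bounded:

  import math

  def energy(x, y, vx, vy):
      # specific orbital energy, GM = 1
      return 0.5*(vx*vx + vy*vy) - 1.0/math.hypot(x, y)

  def step_euler(x, y, vx, vy, dt):
      r3 = math.hypot(x, y)**3
      return (x + dt*vx, y + dt*vy, vx - dt*x/r3, vy - dt*y/r3)

  def step_symplectic(x, y, vx, vy, dt):
      r3 = math.hypot(x, y)**3
      vx, vy = vx - dt*x/r3, vy - dt*y/r3     # kick ...
      return (x + dt*vx, y + dt*vy, vx, vy)   # ... then drift

  for stepper in (step_euler, step_symplectic):
      x, y, vx, vy = 1.0, 0.0, 0.0, 1.0       # circular orbit
      e0 = energy(x, y, vx, vy)
      for _ in range(100_000):                # ~16 orbits at dt = 0.001
          x, y, vx, vy = stepper(x, y, vx, vy, 0.001)
      print(stepper.__name__, energy(x, y, vx, vy) - e0)

The per-step error here is truncation error rather than roundoff
proper, but the moral is the same: it is the _accumulated_ error
that decides whether you stay on the ellipse.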

IMHO the worst place to start is with uncertainty of measurement.
First of all, this is intrinsically a nasty problem. It is
hard to look at a measurement and ascertain what uncertainty
attaches to it. When I make a measurement, I rarely know
at the time what the uncertainty is ... so pray tell how are
the students supposed to know? Secondly, this topic is
practically begging to be conflated with roundoff error
(which is not the same) and significance (which is not
the same).

I realize that "uncertainty of measurement" is on the syllabus
whereas "numerical methods" is not ... but that doesn't mean
I have to like it, or that it makes any sense ...

On 01/24/2008 03:04 PM, Bill Nettles wrote:
> 1) the number of digits in the reported answer directly relates to
> the confidence placed in the precision of the result. 3421.675 means
> that I believe the .67 is reproducible by subsequent measurements and
> the 5 might vary a little bit. 3.42e3 means something close to 3420.
> The number of significant digits carries information about the
> result.

That's your opinion. It is not, however, a good practice.
Provably not. People who care about their data don't do it
that way.
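
One way to see why (a toy example with made-up numbers): average N
repeated readings. The standard error of the mean shrinks like
1/sqrt(N), so the mean can be known well past the last digit that
any single reading "deserves". Rounding the mean to the digits of
one reading throws that information away:

  import random, statistics

  random.seed(0)
  true_value = 3421.675
  # 100 readings, each with scatter of about 0.5
  readings = [true_value + random.gauss(0, 0.5) for _ in range(100)]

  mean = statistics.mean(readings)
  sem  = statistics.stdev(readings) / len(readings)**0.5

  print(f"{mean:.3f} +- {sem:.3f}")  # honest report, explicit uncertainty
  print(f"{mean:.1f}")               # "sig figs" report; information lost

Stating the uncertainty explicitly costs a few characters and loses
nothing; encoding it in the digit count does neither job well.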

> 2) round-off during intermediate calculations should be minimized

Yes.

> because of the dangers of subtractive cancellation and loss of
> accuracy

Cancellation is not the only problem.
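
A toy illustration (my own, not from Bill's post) of two ways
intermediate roundoff bites, with and without cancellation:

  import math

  # (a) subtractive cancellation: 1 - cos(x) for small x
  x = 1e-8
  naive  = 1.0 - math.cos(x)        # cancels catastrophically -> 0.0
  better = 2.0 * math.sin(x/2)**2   # algebraically identical, accurate
  print(naive, better)

  # (b) plain accumulation: a million tiny roundoffs add up,
  # with no subtraction anywhere in sight
  total = 0.0
  for _ in range(10**6):
      total += 0.1
  print(total - 10**5)              # not zero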

> 3) students should be shown the results of poor experimental design
> which leads to loss of accuracy due to rounding

> 4) students should also be shown the fallacy of using too many
> significant digits. Have them measure the diameter and circumference
> of a tennis ball and calculate pi. Then compare it to the known
> value. The comparison should be done to the same number of
> significant digits that can be obtained from the measurement
> instruments; otherwise they'll think they proved that pi is "wrong."

I disagree.

Scatter in a measurement does not make the measurement
"wrong" in any sense. This is important. Students need
to learn this!

If a family has five daughters and no sons, it does *not*
mean that the laws of probability are "wrong" in any
sense. As the bumper sticker says, stuff happens. This
is important. Students need to learn this.
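
Bill's tennis-ball exercise actually makes this point nicely if you
let the scatter show. A quick simulation (a sketch, with made-up
measurement noise) gives estimates of pi that scatter around the
true value, sometimes by several counts in the last digit, and none
of that makes pi "wrong":

  import math, random, statistics

  random.seed(1)
  d_true = 6.7                                   # cm, made-up ball
  estimates = []
  for _ in range(20):
      d = d_true + random.gauss(0, 0.1)          # caliper reading
      c = math.pi*d_true + random.gauss(0, 0.1)  # tape reading
      estimates.append(c / d)

  print(min(estimates), max(estimates))  # individual estimates scatter
  print(statistics.mean(estimates))      # the mean closes in on ...
  print(math.pi)                         # ... the true value

(And for the record: assuming even odds, about one five-child family
in 32 is all daughters. Three percent is not zero.)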