
[Phys-L] Re: fall cleanup: sig figs



"The other sub-scenario arises in computer programs, typically
when you are doing something like solving a differential
equation by time-stepping it. You can improve your time
resolution by making the time-step smaller, but if you go
too far in this direction the roundoff monster will eat you." [JD]


A question, and a "confirmation" of sorts.


The same day JD posted, I had just completed a simple-minded program to
calculate radiancy from a black body using Planck's equation (times
c/4). I had checked it by comparing its result (using 6000 K and the
range 0.01 => 100 micron) with the S-B equation. After reading the
above, I modified the program to display more places and ran it with
progressively more iterations (slices). The result was asymptotic and
then oscillated. (I used a narrow range [0.2 => 0.5 micron *], wrongly
thinking it would avoid waits of an hour and longer.) A sketch of this
sort of calculation follows the data below.

data: [number of iterations / result]
1     / 1.09
10    / 1.98
100   / 1.9920
1k    / 1.9921568
10k   / 1.99215768
100k  / 1.9921576907
1e6   / 1.9921576931
4e6   / 1.992157043    begins oscillation
6e6   / 7349
7e6   / 6066
7.5e6 / 7287
7.8e6 / 7141
7.9e6 / 7380
8e6   / 5894!
9e6   / 6096
1e7   / 6066           about eight minutes for this one
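
For reference, here is a minimal sketch in Python/NumPy of the sort of
calculation I mean. It is not my actual program; the constants, the
midpoint rule, and the single-precision accumulator are assumptions
chosen to make the roundoff behavior visible:

  import numpy as np

  H = 6.626176e-34      # Planck constant, J s (the seven-figure value noted below)
  C = 2.99792458e8      # speed of light, m/s
  K = 1.380662e-23      # Boltzmann constant, J/K

  def planck_exitance(lam, T):
      # spectral radiant exitance, (c/4) times the Planck energy density,
      # in W per m^2 per m of wavelength
      return (2.0 * np.pi * H * C**2) / (lam**5 * np.expm1(H * C / (lam * K * T)))

  def midpoint_sum(n, lo=0.2e-6, hi=0.5e-6, T=6000.0, dtype=np.float32):
      # midpoint-rule integral with an explicit sequential accumulator,
      # so every addition rounds in the chosen precision
      dlam = (hi - lo) / n
      lam = lo + (np.arange(n) + 0.5) * dlam        # double-precision abscissas
      terms = (planck_exitance(lam, T) * dlam).astype(dtype)
      s = dtype(0.0)
      for t in terms:                               # slow, but deliberately naive
          s = dtype(s + t)
      return float(s)

  for n in (100, 1_000, 10_000, 100_000, 1_000_000):
      print(f"{n:>9} slices: {midpoint_sum(n):.10g}")

The explicit loop is the point: summing term by term means every
addition gets rounded in single precision, which is where wandering
trailing digits come from.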


Note: except for "c" and pi, the constants are seven figures, e.g.
h = 6.626176(36)e-34.


Unless the computer truncates, I would think roundoff error would be
"somewhat random". Is truncation standard?


* this is one side of the "hump". So I tried the range 0.1 => 200 micron
and found a similar result.

bc


John Denker wrote:
cliff wrote:


How can
digits that carry no significance (meaning I have no idea what the real
value in that place should be) guard anything?


There are several scenarios where this could happen. One
of the simplest is as follows:

Executive summary: guard digits don't make the noise on
the raw data any smaller ... but they *do* make the roundoff
errors smaller.

In more detail:
Suppose I have 10,000 raw numbers. Each of them has 0.1 unit
of absolute uncertainty, uncorrelated and normally distributed.
Now I average them all together. The average has only 0.001
unit of absolute uncertainty.
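
A quick Monte Carlo check of that arithmetic, as a sketch in
Python/NumPy (the true value, the seed, and the 1000 repetitions are
arbitrary choices of mine):

  import numpy as np

  rng = np.random.default_rng(0)
  true_value = 5.0
  # 1000 repetitions of "average 10,000 numbers, each carrying 0.1 unit
  # of independent Gaussian uncertainty"
  means = (true_value + rng.normal(0.0, 0.1, size=(1000, 10_000))).mean(axis=1)
  print(means.std())                   # about 0.001 = 0.1 / sqrt(10000)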

This is quite a bit trickier than non-experts would have
guessed, for the following reason: Suppose we write each
raw number in the form Ai +- Bi, where Bi is the uncertainty.
Beware: uncertainty is not the same as roundoff error. After
roundoff, we have something like Ai +- Ri +- Bi, where Ri is
the roundoff error. It is easy to fall into situations where
even though the Bi are independent and normally distributed,
the Ri are all one-sided, and they accumulate like crazy.

Gaussian-distributed errors add in quadrature, while one-sided
errors just plain add, linearly. Early in the game, the
randomness of the Bi smears out the one-sidedness of the Ri,
making the Ri much more random, but as soon as the signal-averaging
starts to take effect, the Ri become one-sided again, and
you're in big trouble.
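
Here is a small numerical illustration of that distinction, a sketch
assuming NumPy (it is not the spreadsheet mentioned below). Truncating
to two decimals stands in for one-sided Ri; rounding with two extra
guard digits shows how guard digits tame the roundoff without touching
the noise:

  import numpy as np

  # 10,000 raw numbers Ai + Bi with Gaussian noise Bi (sigma = 0.1),
  # then three roundoff policies applied before averaging:
  #   truncate@2 : one-sided Ri, the bias survives the average
  #   round@2    : roughly symmetric Ri, mostly averages away
  #   round@4    : two guard digits, roundoff negligible
  rng = np.random.default_rng(1)
  true_value = 5.03217                 # arbitrary
  raw = true_value + rng.normal(0.0, 0.1, 10_000)

  policies = [("truncate@2", np.floor(raw * 100) / 100),
              ("round@2",    np.round(raw, 2)),
              ("round@4",    np.round(raw, 4))]
  for name, x in policies:
      print(f"{name:>11}: mean error = {x.mean() - true_value:+.5f}")
  # expect roughly -0.005 for truncation (the one-sided Ri add linearly),
  # and something near the 0.001 noise floor for the other two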

There is a spreadsheet that implements a numerical example of
this, as discussed at
http://www.av8n.com/physics/uncertainty.htm#sec-extracting

There are two sub-scenarios to consider. As mentioned above,
and mentioned by Ludwik, you need to worry about this with
plain old raw physics data.

The other sub-scenario arises in computer programs, typically
when you are doing something like solving a differential
equation by time-stepping it. You can improve your time
resolution by making the time-step smaller, but if you go
too far in this direction the roundoff monster will eat you.
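
A minimal sketch of that trade-off, assuming forward Euler on
dy/dt = -y in single precision (the equation and the precision are
chosen just to make the effect appear quickly):

  import numpy as np

  exact = np.exp(-1.0)                 # y(1) for y' = -y, y(0) = 1
  for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
      dt = np.float32(1.0 / n)
      y = np.float32(1.0)
      for _ in range(n):               # forward Euler, all in float32
          y = np.float32(y - dt * y)
      print(f"n = {n:>7}: error = {float(y) - exact:+.3e}")

The error first shrinks like dt, then past the sweet spot each further
refinement mostly adds more rounded additions than it removes in
truncation error.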