
[Phys-L] Re: fall cleanup: sig figs



Folkerts, Timothy J wrote:

1) Excel DOES have a way to generate random normal data: Tools; Data Analysis; Random Number Generation will generate a half dozen different distributions, including normal. You may have to go to Tools; Add-Ins and install the correct tools.

That must be new.

Also, what happens if I write a spreadsheet using such
functions, and send it to somebody who hasn't installed
the add-in? Maybe I'm better off doing my own Box-Muller
transform.
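
For what it's worth, a do-it-yourself Box-Muller generator is only a
few lines. Here is a minimal sketch in Python (standard library only,
nothing Excel-specific), just to show the shape of it:

  import math
  import random

  def box_muller(mu=0.0, sigma=1.0):
      """One normal deviate via the basic (cosine-branch) Box-Muller transform."""
      u1 = random.random() or 1e-12        # random() is in [0, 1); avoid log(0)
      u2 = random.random()
      z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
      return mu + sigma * z

  # sanity check on a modest sample
  sample = [box_muller(10.0, 0.25) for _ in range(100000)]
  mean = sum(sample) / len(sample)
  sdev = math.sqrt(sum((x - mean) ** 2 for x in sample) / (len(sample) - 1))
  print(mean, sdev)                        # should land near 10.0 and 0.25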

That said, I have heard that this isn't a very good normal distribution generator for serious Monte Carlo work.

Huh? How bad can their random numbers be? Compared to
really serious applications, e.g. high-stakes gaming and
cryptology, Monte Carlo is considered one of the least-
demanding applications.
http://www.av8n.com/turbid/

2) You seem to be assuming that all the errors in your analysis are random, rather than systematic.

I hope I didn't assume that!

I thought the whole point of the Monte Carlo section was to
explain how to handle correlated errors.
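
To make that concrete, here is the kind of toy Monte Carlo I have in
mind (a sketch with invented numbers, not the exact example from the
document): draw the shared systematic offset once per trial, draw the
independent reading errors separately, and watch the shared part
refuse to average away.

  import random
  import statistics

  def one_trial(n_readings=10):
      """Average of n readings that all share one systematic offset."""
      true_length = 1000.0                 # mm, invented value
      shared = random.gauss(0.0, 1.0)      # same offset for every reading (the flawed stick)
      readings = [true_length + shared + random.gauss(0.0, 0.5)
                  for _ in range(n_readings)]
      return statistics.mean(readings)

  results = [one_trial() for _ in range(5000)]
  print(statistics.stdev(results))
  # The spread of the averaged result stays near 1 mm no matter how many
  # readings go into each average, because the shared offset is fully
  # correlated across the readings and does not average away.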

I had a set of plastic meter sticks that were consistently 1 mm different in length from a set of wooden meter sticks. I have no idea which was closer to correct, but no amount of averaging of readings will help make the answer any more accurate when the instrument itself is flawed.

This is a tricky and important point. I suppose I should
discuss it in more detail.

Briefly: if *everybody* in the world has an instrument that is
flawed in exactly the same way, I'm not sure it counts as a flaw
at all. Every experiment is reproducible, without needing a
correction for the universal "flaw".

We only get into trouble when there is an ensemble of instruments
not all alike. We get into big trouble when somebody takes a
too-small sample of this ensemble and therefore underestimates its
variance.

To my mind, it's still a "statistical" error in some grand
abstract sense, but you cannot evaluate it by doing a simple
statistical analysis on the too-small sample. It's a mess.
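
A toy simulation (again with invented numbers) shows how badly a
too-small sample of the ensemble can do: let each hypothetical lab
estimate the instrument-to-instrument spread from the few sticks it
happens to own.

  import random
  import statistics

  ENSEMBLE_SIGMA = 1.0    # mm, true instrument-to-instrument spread (invented)

  def estimated_spread(n_instruments):
      """Spread estimated by a lab that owns only n_instruments sticks."""
      offsets = [random.gauss(0.0, ENSEMBLE_SIGMA) for _ in range(n_instruments)]
      return statistics.stdev(offsets)

  for n in (2, 3, 10, 100):
      estimates = [estimated_spread(n) for _ in range(20000)]
      frac_low = sum(1 for s in estimates if s < ENSEMBLE_SIGMA) / len(estimates)
      print(f"n={n:3d}: median estimate {statistics.median(estimates):.2f} mm,"
            f" below the true 1.00 mm in {100 * frac_low:.0f}% of trials")

With only two or three instruments the estimated spread comes out low
more often than not, which is exactly the underestimate of the
variance described above.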

3) You give an example of measuring an object to the nearest 1/4 mm by interpolation. Such operations tend to be highly subjective. In this case, the operator could have a much larger impact on the result than any difference caused by rounding and/or sig figs. There is a methodology common in industrial settings known as "Gage Repeatability & Reproducibility" ("Gage R&R", or sometimes "Gauge R&R") that explores just such issues.

Again, that sounds good, but I'll have to think about it.

4) I don't see any mention of accuracy vs precision. In some of your discussion, I think you are more properly interested in precision than accuracy.

Indeed I hardly mention accuracy *or* precision. That's intentional,
and I suppose I should explain why.

The modern trend, at least among the pooh-bahs I talk to, is to move
away from accuracy and/or precision, in favor of the all-encompassing
term "uncertainty". There are many sources of uncertainty, some of
which would have fallen under the old heading of inaccuracy, some
under the heading of imprecision, and some under both or neither.

As I recall, the NIST 1297 reference
http://physics.nist.gov/cuu/Uncertainty/basic.html
makes this point, but I can't quote chapter and verse right now.

In any case, if you're even halfway interested in this stuff, I
recommend that reference.

Ludwik Kowalski wrote:
The issue of significant digits can not be avoided, unless the rule is
"write down, or type in, as many digits as you have," which is
ridiculous.

Let's not confuse roundoff rules with sig-digs rules. The
non-ridiculous roundoff rules are:
-- keep enough digits to avoid any unintended loss of
precision.
-- keep few enough digits to be reasonably convenient.

Those are my rules. They are a far, far cry from the sig-digs rules
in typical intro textbooks.
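
A tiny numeric illustration of those two rules, with pendulum numbers
invented for the example (g = 4 pi^2 L / T^2):

  import math

  L = 0.9950      # m, invented
  T = 2.0064      # s, invented

  g_full   = 4 * math.pi**2 * L / T**2             # keep all the digits: ~9.758 m/s^2
  g_sloppy = 4 * math.pi**2 * L / round(T, 2)**2   # round T to 2.01 s first: ~9.723 m/s^2
  print(g_full, g_sloppy)

Rounding the intermediate value shifted the answer by roughly 0.4%,
an unintended loss of precision; trimming digits for convenience
belongs at the end of the calculation, in the reported result.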

In any case, the fact remains that when you see a number,
you can never infer the significance from the number of
digits. A four-digit number might be uncertain at the 1% level.
A two-digit number might be exact. You just don't know.