
Re: Significant figures - a Modest Proposal



The definitions given on the NIST site are not in agreement with mine
(dare I say *precise* agreement?). According to this inexpert, a single
measurement has a precision associated with it. If I report a duration
of 3.11 s for a process, the (implied) precision is 0.01 s regardless
of the number of measurements that went into the result. In astronomy
we make many such measurements which by their nature are not
repeatable. We make an estimate of probable error, of course, but that
is a separate determination related to the accuracy of the measurement
and is stated separately, e.g. 3.11(14), meaning that roughly two
thirds of measurements of the same type of process duration done in
this way would be expected to yield results ranging from 2.97 to 3.25
if the true value of the duration is exactly 3.11. Note that I said
nothing about the resolution or "least count" of my instrument, which
might have been an analog device with the reading interpolated by eye
between 0.1 s calibration marks.
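
For concreteness, here is a minimal Python sketch of how I read that
notation, using the hypothetical numbers from the example above (not
real data):

   # A result quoted as 3.11(14) s
   value = 3.11               # reported duration, in seconds
   implied_precision = 0.01   # one unit in the last reported digit
   uncertainty = 0.14         # the "(14)" applies to the last two digits

   print(f"implied precision: {implied_precision} s")

   # Roughly two thirds of determinations of this kind (about one
   # standard deviation) would be expected to fall in this interval:
   low, high = value - uncertainty, value + uncertainty
   print(f"{low:.2f} s to {high:.2f} s")   # 2.97 s to 3.25 s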

Taylor (An Introduction to Error Analysis) says precision is a synonym
for fractional uncertainty, or what I would call relative uncertainty.
That is what is often improperly called "percentage error" in high
school jargon here in BC.
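
In the same spirit, a two-line sketch of Taylor's fractional
uncertainty for the hypothetical result above (the quantity that gets
mislabelled "percentage error"):

   value, uncertainty = 3.11, 0.14
   relative_uncertainty = uncertainty / abs(value)    # about 0.045
   print(f"{100 * relative_uncertainty:.1f}%")        # prints 4.5%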

Squires (Practical Physics) doesn't index "precision". Perhaps he does
not use the word. Baird (Experimentation) doesn't index the word and
has a generally lousy index.

The best treatment I found is in Bevington(d) and Robinson (Data
Reduction and Error Analysis for the Physical Sciences). I quote (with
asterisks enclosing *italics*):

Accuracy versus Precision

It is important to distinguish between the terms *accuracy* and
*precision*. The *accuracy* of an experiment is a measure of how
close the result of the experiment is to the true value. Therefore,
it is a measure of the correctness of the result. The *precision*
of an experiment is a measure of how well the result has been
determined, without reference to its agreement with the true value.
The precision is also a measure of the reproducibility of the
result. The distinction between the accuracy and the precision of
a set of measurements is illustrated in Figure 1.1 [sorry - Leigh].
*Absolute precision* indicates the magnitude of the uncertainty in
the result in the same units as the result. *Relative precision*
indicates the uncertainty in terms of a fraction of the value of
the result.

It is obvious that we must consider the accuracy and precision
simultaneously for any experiment. It would be a waste of time
and energy to determine a result with high precision if we knew
that the result would be highly inaccurate. Conversely, a result
cannot be considered to be extremely accurate if the precision is
low. In general, when we quote the *uncertainty* in an
experimental result, we are referring to the *precision* with
which that result has been determined.

B & R pretty much support my interpretation for single events and are
consistent with the NIST definition, but somewhat more complete. I
recommend their treatment. It would seem that Robert's distinction
between resolution (which I take to be applicable only to single
measurements) and precision is entirely correct, and I'm glad he
corrected me.
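
To put that distinction in concrete (and entirely invented) numbers,
a short Python sketch: the resolution belongs to the instrument, the
precision comes out of the scatter of a series of readings, and the
accuracy enters only when a true value is assumed.

   import statistics

   # Invented readings from an analog timer with 0.1 s calibration
   # marks, interpolated by eye to roughly a hundredth of a second.
   readings = [3.08, 3.12, 3.15, 3.09, 3.11, 3.13]
   resolution = 0.01   # least count of the instrument, after interpolation

   mean = statistics.mean(readings)
   precision = statistics.stdev(readings)   # scatter of the series

   true_value = 3.20                    # assumed here purely for illustration
   accuracy_offset = mean - true_value  # how far the result sits from "truth"

   print(f"resolution = {resolution} s")
   print(f"mean = {mean:.3f} s, precision (std dev) = {precision:.3f} s")
   print(f"offset from assumed true value = {accuracy_offset:+.3f} s")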

My Webster's gives three definitions, the pertinent one being "the
degree of refinement with which an operation is performed or a
measurement stated". That is pretty much exactly my definition, though
I hesitate to cite a vulgar meaning in a scientific context. Fowler's
(Modern English Usage) chooses to muddy the waters by distinguishing
precision from preciseness - without defining either - and he also
introduces me to a brand new term, precisian, which seems to apply to
me!

Back to what they pay me for.

Leigh

On Mon, 30 Aug 1999, Leigh Palmer wrote:

[snip]
I don't believe precision relates to repeatability.

I'm not an expert on error analysis so I may be mistaken about this. If
so, I'd appreciate if someone would set me straight before I mislead all
of my students.

I'm not an expert either, but I think the example above is incorrect.
I can't even consult a standard reference, I'm afraid, but I would
probably do so before teaching this to a group of students.

I was able to find the "Glossary of Time and Frequency Terms" on a NIST
page <http://www.boulder.nist.gov/timefreq/glossary.htm>. In the glossary,
the following definitions are given:

Accuracy: The degree of conformity of a measured or calculated value to
its definition or with respect to a standard reference (see uncertainty).

Error: The difference of a measured value from its known true or correct
value (or sometimes from its predicted value).

Precision: The degree of mutual agreement among a series of individual
measurements. Precision is often, but not necessarily, expressed by the
standard deviation of the measurements.

Resolution: The degree to which a measurement can be determined is called
the resolution of the measurement. The smallest significant difference
that can be measured with a given instrument. For example, a measurement
made with a time interval counter might have a resolution of 10 ns.

Uncertainty: The limits of the confidence interval of a measured or
calculated quantity. NOTE: The probability of the confidence limits should
be specified, preferably as one standard deviation.

Anyone have any other references we can use?

---------------------------------------------------------
| Robert Cohen             Department of Physics        |
|                          East Stroudsburg University  |
| bbq@esu.edu              East Stroudsburg, PA 18301   |
| http://www.esu.edu/~bbq/ (570) 422-3428               |
---------------------------------------------------------