
Re: resolution vs. precision



Well... I always tell my students that determination of accuracy is
typically a lot tougher than determination of precision. I mostly
agree with what Leigh says (included below), but I also disagree a
little bit.

Our difference mostly stems from my assignment of accuracy to the
instrument, and Leigh's assignment of accuracy to the measurement. I
think it is common for scientists to have to deal with both. It is
common for manufacturers to state an accuracy for their instruments.
For a particular handheld digital multimeter, Tektronix says, "0.06%
basic DC volts accuracy." I think this means the same thing as my
statement about the quartz frequency in the stopwatch; that is, it
describes how well the instrument complies with established standards.

But what happens when we use that instrument to make a measurement?
What is the accuracy of the measurement?

When we know the accuracy of our instrument, and if our experimental
result is determined by that single instrument, and if we are using the
instrument correctly, and if repeated trials show a precision that is
better than the stated accuracy... then I believe the accuracy of the
instrument and the accuracy of the measurement are essentially the same
thing. But there are a lot of ifs in that statement.

Further, what if our experimental result is the combination of several
different measurements using different instruments? Our accuracy for
the velocity of an air-track glider depends upon both an accurate
displacement measurement and an accurate time measurement. We have to
use proper error propagation techniques to establish the overall
accuracy of the final result from the known or estimated accuracies of
all the instruments used.
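
A minimal sketch of that propagation for the glider example, written
in Python; the displacement and time values and their accuracies below
are invented purely for illustration.

    import math

    # Invented air-track numbers, purely for illustration.
    x, dx = 1.000, 0.001   # displacement (m) and its estimated accuracy
    t, dt = 2.00, 0.01     # time (s) and its estimated accuracy

    v = x / t
    # For a quotient, relative uncertainties add in quadrature.
    dv = v * math.sqrt((dx / x)**2 + (dt / t)**2)

    print(f"v = {v:.4f} +/- {dv:.4f} m/s")

Run as written, this reports roughly v = 0.5000 +/- 0.0026 m/s, with
the timing accuracy dominating the overall uncertainty.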

The NIST statement that Cohen found for us says: "Accuracy: The degree
of conformity of a measured or calculated value to its definition or
with respect to a standard reference... " I think this is pretty much
the same as my statement about instrumental accuracy because that is
(one aspect of) what NIST does. NIST defines and provides ways for
manufacturers to determine and state the accuracy of their instruments.
And when properly used to make a measurement (of the property the
instrument is designed to measure), the accuracy of the instrument is
the accuracy of the measurement.

On the other hand, the goal of the book by Bevington and Robinson is
slightly different. They are trying to teach experimentalists how to
assess the accuracy of a measurement that may depend upon several
instruments, each with its own stated accuracy. In addition, you have
to assess whether you have any systematic errors within your setup or
your experimental technique. For example, even though you have an
ohmmeter that is quite accurate, your measurement might not be
accurate if you don't take the resistance of the probe wires into
account. For measuring small resistances, our most accurate ohmmeter
requires a four-wire measurement in which the current driven through
the resistor is carried on one pair of wires, while the resulting
voltage drop is measured using the other pair of wires. That is the
standard technique for measuring resistance accurately, and without it
even a high-accuracy meter might not yield an accurate result.
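
To see why the lead resistance matters, here is a toy two-wire versus
four-wire comparison in Python; the resistance values are assumptions
chosen only to make the point.

    # Invented values, chosen only for illustration.
    r_true = 0.100    # resistance under test, ohms
    r_leads = 0.050   # total resistance of the probe wires, ohms

    # A two-wire ohmmeter sees the leads in series with the resistor.
    r_two_wire = r_true + r_leads
    error_pct = 100 * (r_two_wire - r_true) / r_true
    print(f"two-wire reading: {r_two_wire:.3f} ohm "
          f"({error_pct:.0f}% high)")

    # A four-wire (Kelvin) measurement senses the voltage across the
    # resistor alone, so the lead resistance drops out of the result.

With these made-up numbers the two-wire reading is 50% high, no matter
how accurate the meter itself is.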

In complicated experiments, overall accuracy can be pretty tough to get
a handle on. Often we simply rely upon the time-honored technique of
having lots of different people measure the desired number; we assess
how well we think each did it, and we eventually come to an agreement
about what we believe the "true number" is and how well it is known.
Of course history shows that these numbers change from time to time as
our techniques get better. If the new values only gain significant
figures, then the earlier results were pretty good; but if we have to
change some of the "significant figures" of an earlier result, then we
overestimated its accuracy.
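
One common way to formalize that "coming to an agreement" is an
inverse-variance weighted mean of the independent determinations. A
short Python sketch, with invented values and uncertainties:

    # Invented determinations of the same quantity, e.g. g in m/s^2.
    values = [9.801, 9.812, 9.796]
    sigmas = [0.005, 0.010, 0.004]   # each group's estimated uncertainty

    # Weight each result by the inverse of its variance.
    weights = [1 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma_mean = (1 / sum(weights)) ** 0.5

    print(f"weighted mean = {mean:.4f} +/- {sigma_mean:.4f}")

This is only one formalization; how well each group handled its
systematic errors is still a judgment call.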

In conclusion, I can view accuracy fairly simplistically, which is what
I originally did. But I also acknowledge, as Leigh pointed out, that
it can be a lot more complicated than that.

Michael D. Edmiston, Ph.D. Phone/voice-mail: 419-358-3270
Professor of Chemistry & Physics FAX: 419-358-3323
Chairman, Science Department E-Mail edmiston@bluffton.edu
Bluffton College
280 West College Avenue
Bluffton, OH 45817



-----Original Message-----
From: Leigh Palmer [SMTP:palmer@SFU.CA]
Sent: Monday, August 30, 1999 3:09 PM
To: PHYS-L@lists.nau.edu
Subject: Re: resolution vs. precision

I equate accuracy with fidelity to established standards. If the
quartz crystal of the stop watch is supposed to oscillate at 10.000 MHz
but is oscillating at 10.002 MHz, we have an accuracy problem.

With this meaning accuracy cannot be ascribed to measurement of a
unique quantity, like the length of a particular rod. I prefer the
Bevington and Robinson meaning for that term. Your *stopwatch* may
be accurate, but that is a different meaning than the application
of the term to a measurement.

Leigh