
Re: [Phys-l] Significant figures -- again



Having taken a several-month 'vacation' from Phys-L, it is 'refreshing!'
to see that nothing has changed...neither the characters nor the topics.
My only contribution is to suggest that other readers use their own
judgment here. If your particular class (usually intro and often
non-science majors or high school) is not ready for the 'more correct'
error and uncertainty analyses being touted here, but you do want those
students to record data to the maximum precision of their instruments and
to report results with some sense of the limits that their original
measurements put on their calculated results, then go ahead and use the
'rules' of significant figures. As in almost all cases that we wrangle
about here, when the time comes that more advanced students need more
advanced tools, it will happen. Again, as always, the depth and precision
of one's teaching depend heavily on the audience and the course goals.
Preparing future physicists is different from preparing future artists,
bankers, and the like.

Rick


Richard W. Tarara
Professor of Physics
Saint Mary's College
Notre Dame, Indiana

Free Physics Educational Software at:
www.saintmarys.edu/~rtarara/software.html
Most software updated in 2011. New Force-Table lab simulation.

-----Original Message-----
At the risk of throwing myself into the fray... Let me stop you right
there, John. The traditional method of writing a number to indicate its
error, or its significant figures, is to include the trailing zeros. This
indicates the precision in a manner consistent with professional
publications, and for scientists it communicates the precision clearly.
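For instance, a trivial Python illustration of keeping the trailing zero
(the formatting choices here are mine, not anything from the thread):

    x = 3.0
    print(x)           # prints "3.0"  -- the trailing zero survives
    print(f"{x:.2f}")  # prints "3.00" -- claims precision to hundredths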

If your point is that this system gets abused, it is well taken. Writing
3.0 ± 0.1 really means that if I measure this same quantity a number of
times, I'm likely to get a value greater than 3.1 or less than 2.9 about
32 percent of the time (the fraction of a Gaussian lying outside one
standard deviation). I'm presuming that the error here is all statistical;
usually these errors contain some systematic error as well.
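A quick numerical check of that claim, as a Python sketch (assuming the
quoted ± is one Gaussian standard deviation):

    import random

    # Simulate repeated measurements of a quantity whose true value is
    # 3.0, with purely statistical (Gaussian) error of sigma = 0.1.
    random.seed(42)
    N = 100_000
    measurements = [random.gauss(3.0, 0.1) for _ in range(N)]

    # Fraction of measurements falling outside the quoted 3.0 +/- 0.1.
    outside = sum(1 for x in measurements if x < 2.9 or x > 3.1)
    print(f"fraction outside one sigma: {outside / N:.3f}")  # ~0.317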

The human tendency is to overstate the error because one is cautious
about presenting the result. This is especially true when publishing a
value that disagrees with the currently accepted value. I think you are
trying to make a case that it is wise to keep this extra information
throughout the calculation. Here's a contrived example: Suppose that it
were difficult to measure pi and a world of scientists were doing it.
Everybody reported 3.0 ± 0.1. The real value falls within 2 sigma of that
result, so it is fairly reported. A minority of people measure 3.14 and
round their result off to the same precision as the error: 3.1 ± 0.1.
The well-intentioned editorial staff at the CRC wants to publish a
world-average value from all of the most prestigious journals. If you
average all of those 3.0's together with the 3.1's, you'll still be
wrong. If the 3.0 group had been reporting 3.04 and the 3.1 group had
reported 3.14, the world average would be closer to the correct number.
Or... not quite as wrong. Given the reported limitations of the methods,
it's within error.
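To make the averaging effect concrete, here is a minimal Python sketch
(the group sizes and the unrounded values are invented for illustration):

    # Hypothetical world data set: most groups measured ~3.04, a
    # minority ~3.14, all quoting an uncertainty of 0.1.
    full_precision = [3.04] * 8 + [3.14] * 2
    rounded = [round(x, 1) for x in full_precision]  # 3.0's and 3.1's

    avg_rounded = sum(rounded) / len(rounded)
    avg_full = sum(full_precision) / len(full_precision)

    print(f"average of rounded reports:        {avg_rounded:.3f}")  # 3.020
    print(f"average of full-precision reports: {avg_full:.3f}")     # 3.060

The full-precision average is still wrong, but not quite as wrong.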

It's not uncommon to see two digits of precision in the error: 2.37 ±
0.32, for example.

Paul


On Mar 13, 2012, at 12:21 PM, John Denker wrote: