
Re: error analysis



Now that my finals are over, a late response to this thread:

Kyle mentioned an excellent teachable moment associated with the
classic linear expansion lab exercise. As one of their first
exercises, we have students measure and then calculate the
densities of a carefully machined and fairly massive copper
cylinder and two 5 to 15 cm lengths of large and small gauge
copper wire. The exercise is intended to make at least the
following three points:

1. That we don't need to know a "true" value to make important
statements about the results of experiments. To this end we
pointedly abjure any interest in whether or not their results are
"close" to or "agree" with the "actual density" of copper.

2. That we *do* need to know about measurement uncertainties to
make important statements about the results of experiments. The
question is not whether or not the densities are "close;" it is
whether or not they "agree." (A sketch of this test follows the
list.)

3. That (as in the linear expansion exercise) it is usually the
*relative* uncertainty that matters. To this end, we make sure
that the thin wire is bent out of shape.
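
To make the second point concrete, here is a minimal sketch of the
comparison we have in mind, with hypothetical numbers chosen only
for illustration: two results "agree" when their discrepancy is
small compared to their combined uncertainty (added in quadrature
for independent measurements).

    # Hypothetical density results for two copper samples (g/cm^3);
    # the values and uncertainties are illustrative, not real data.
    rho1, u1 = 8.93, 0.02   # machined cylinder
    rho2, u2 = 8.70, 0.40   # thin wire

    discrepancy = abs(rho1 - rho2)
    combined_u = (u1**2 + u2**2) ** 0.5   # quadrature sum, independent errors

    # "Agree" means the discrepancy is insignificant compared to the
    # combined uncertainty; a ratio below ~2 is a common rule of thumb.
    print(discrepancy / combined_u)   # ~0.57 here: the values agree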

Students get #1. (After all, it means they don't have to look up
the density of copper!)

They have a harder time with #2. Even after losing points on
several exercises for using vague characterizations like "close"
rather than properly specific ones like "the values agree (or
disagree) within uncertainties" or, better yet, "there is (or is
not) a significant discrepancy," it is difficult to shake their
idea that less than a few percent is "good" *regardless* of the
uncertainty.

(This last quarter I believe I managed to open a few eyes by
bringing in the papers on the muon g-factor and pointing out that
we were talking about a serious scientific controversy over values
that disagreed out beyond the tenth significant digit.)

With regard to #3, students uniformly describe the measurements of
mass and diameter for the thin wire as "hard" while attributing
the large uncertainty in the resulting density to the fact that
the wire was "all bent up."

I find this to be an effective teachable moment. I point out that
it was clearly no "harder" to measure the mass and diameter of the
thin wire than those of the solid cylinder (after all, they used the
*same* devices to make those measurements) and then walk through
the uncertainty analysis to show that the large final uncertainty
came almost entirely from the mass and diameter measurements and
hardly at all from the length measurement.

Connecting students' innate ideas about measurements being "hard"
to the more precise and quantitatively expressible notion of
relative uncertainty seems to be among the most useful aspects of
this exercise.

John Mallinckrodt mailto:ajm@csupomona.edu
Cal Poly Pomona http://www.csupomona.edu/~ajm

On Thu, 8 Mar 2001, kyle forinash wrote:

A further point about error analysis which I rarely see presented to students.

Most of us are familiar with the old linear expansion apparatus,
where you heat up a metal rod, measure the expansion, and calculate
the coefficient of expansion. Two of the measurements made are delta
L (change in length of the rod) and L (total length of the rod).

Rhetorical question (which I ask my students): Why is it you can get
away with measuring L with a meter stick but delta L has to be
measured with a micrometer? My point is: No measurement ever made in
science is perfect but some measurements are more important than
others.
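
To put hedged numbers on that rhetorical question (the values
below are hypothetical, not from any particular apparatus):
because alpha = delta L / (L * delta T), each measurement
contributes its *relative* uncertainty, and a half-millimeter
reading error is negligible against an L of about a meter but
catastrophic against a delta L of about a millimeter.

    # Hypothetical linear-expansion numbers, all lengths in mm.
    L = 1000.0   # rod length, measured with a meter stick
    dL = 1.0     # expansion over the temperature rise

    ruler_u = 0.5         # typical meter-stick reading uncertainty
    micrometer_u = 0.005  # typical micrometer uncertainty

    # Relative uncertainty each instrument contributes to alpha:
    print(ruler_u / L)        # 0.0005 -> 0.05%: fine for L
    print(ruler_u / dL)       # 0.5    -> 50%: ruins delta L
    print(micrometer_u / dL)  # 0.005  -> 0.5%: rescues delta L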

Students ought to be able to do an error analysis which answers questions like:

1. Of all the measurements made in this experiment, improving which
measurement(s) (if any) would lead to the biggest improvement in the
accuracy of the answer? (A sketch of this analysis follows the list.)
2. Given this particular equipment with its inherent limitations
(which all instruments have), what is the smallest error you could
possibly get in your answer? The biggest (if all instruments are at
their maximum error)?
3. Is there any way to change the procedure to get a more accurate answer?
4. Can any error in one place be offset by another error somewhere
else? (For example, starting a calorimeter experiment below room
temp and ending above room temp by the same amount to cancel heat
exchange.)
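
A minimal sketch of the analysis question 1 asks for, reusing the
hypothetical expansion numbers above: since
alpha = delta L / (L * delta T) is a pure product/quotient, each
measurement contributes its squared relative uncertainty, and the
largest term identifies the measurement most worth improving.

    from math import sqrt

    # Hypothetical (value, uncertainty) pairs for each measurement.
    measurements = {
        "delta L (mm)": (1.0, 0.005),   # micrometer
        "L (mm)": (1000.0, 0.5),        # meter stick
        "delta T (C)": (80.0, 1.0),     # thermometer
    }

    # Squared relative uncertainties; for alpha = dL / (L * dT)
    # these add to give the squared relative uncertainty of alpha.
    terms = {name: (u / value)**2 for name, (value, u) in measurements.items()}
    variance = sum(terms.values())

    for name, t in sorted(terms.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {100 * t / variance:.0f}% of the variance")
    print(f"relative uncertainty in alpha: {100 * sqrt(variance):.2f}%")
    # Here the thermometer dominates (~86%), so improving delta T,
    # not delta L or L, gives the biggest payoff.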