
Re: Design of Experiment



Tim Folkerts wrote:
...
I was wondering if any of you had heard of and/or worked
with this branch of study.

1) Nice post! Thanks!

2) I've certainly heard of DoE. Read about it. Thought
about it. Used some of the ideas. Not an expert.

Perhaps an example will enliven this thread:

Suppose you are trying to locate an object O using
passive sonar receivers R1 R2 and R3. The following
layout would be really terrible:

R1 R2 R3 O

since, to first order, any displacement of the object
perpendicular to the line through the receivers is
undetectable (the ranges change only at second order).

You would be *much* better off arranging the receivers
like this:

    R1

    R2        O

    R3

and even better off arranging them in a *big* triangle
surrounding the object.
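
To put some numbers on that, here's a little Python sketch (made-up
geometry and distances, just to show the idea). It computes the Jacobian
of the three receiver-to-object ranges with respect to the object's
position; a vanishing smallest singular value means there is a
displacement direction that is invisible to first order.

  import numpy as np

  def range_jacobian(receivers, obj):
      """Jacobian of the ranges |obj - R_i| with respect to the object position."""
      diffs = obj - receivers                        # shape (n_receivers, 2)
      dists = np.linalg.norm(diffs, axis=1, keepdims=True)
      return diffs / dists                           # each row: unit vector toward the object

  obj = np.array([10.0, 0.0])

  # Collinear layout: R1, R2, R3, O all on the x-axis.
  collinear = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])

  # Triangular layout roughly surrounding the object.
  triangle = np.array([[0.0, 5.0], [0.0, -5.0], [20.0, 0.0]])

  for name, recs in [("collinear", collinear), ("triangle", triangle)]:
      s = np.linalg.svd(range_jacobian(recs, obj), compute_uv=False)
      cond = s[0] / s[-1] if s[-1] > 0 else float("inf")
      print(f"{name:9s}: singular values = {np.round(s, 3)}, condition number = {cond:.3g}")

The collinear case comes back with a singular value of exactly zero
(the perpendicular displacement simply doesn't show up in the ranges),
while the triangle has no blind direction.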

The basic aim of DOE is to obtain accurate, statistically
significant answers to experimental questions from a minimal number of
trials.

... where the "minimal number" might be two or three, or it
might be thousands. The idea is that taking data is often
very expensive, and you want to maximize the benefit (in terms
of useful information) net of the cost of taking the data.
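
To make "minimal number" concrete, here's a toy sketch (Python, with
made-up response numbers) of a 2^(3-1) fractional factorial: four runs
that, under the big assumption that interactions are negligible, yield
estimates of all three main effects, where a full factorial would take
eight runs.

  import numpy as np

  # 2^(3-1) fractional factorial: 4 runs, 3 two-level factors, generator C = A*B.
  # Each row is one run; columns are the coded levels (-1/+1) of factors A, B, C.
  design = np.array([
      [-1, -1, +1],
      [+1, -1, -1],
      [-1, +1, -1],
      [+1, +1, +1],
  ])

  # Hypothetical responses from the four runs (invented numbers).
  y = np.array([8.2, 12.1, 9.8, 14.0])

  # Main effect = average response at the high level minus at the low level.
  effects = design.T @ y / 2.0      # each column has two +1 entries and two -1 entries
  for name, eff in zip("ABC", effects):
      print(f"estimated main effect of {name}: {eff:+.2f}")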

I think that introductory classroom experiments give a
false impression, because all too often taking a little
extra data is easier than doing sophisticated DoE beforehand
or sophisticated data analysis afterwards. If/when the
data really is cheap, that's the right way to go, but
somehow students need to get a feel for the other case,
when data is expensive.

Given how central experimentation is to physics, I'm amazed
that more ideas from DOE aren't taught to physicists.

I'm amazed, too. The ideas change your thinking and improve
your "gut reactions" even if you're not using the full formalism.

Some of the DOE techniques require statistical sophistication to fully
understand, but you don't need to know all the details to apply the
techniques.

Very true.

A few results that may or may not surprise you:
* DON'T vary one parameter at a time

Right!
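
Here's a quick simulated illustration of why (Python; the "true" model
is invented and includes an interaction the experimenter doesn't know
about). With comparable effort, the 2x2 factorial estimates the A effect
with less noise than one-variable-at-a-time, and it averages over both
levels of B instead of quietly conditioning on whatever level B happened
to be parked at.

  import numpy as np

  rng = np.random.default_rng(0)
  sigma = 1.0

  def response(a, b):
      # Made-up true model with an A*B interaction, plus measurement noise.
      return 5.0 + 2.0 * a + 1.0 * b + 1.5 * a * b + rng.normal(0.0, sigma)

  ovat, factorial = [], []
  for _ in range(20000):
      # One variable at a time: hold B at its low level, compare two runs.
      ovat.append(response(+1, -1) - response(-1, -1))

      # 2x2 full factorial: every run contributes to the estimate of A.
      y = {(a, b): response(a, b) for a in (-1, +1) for b in (-1, +1)}
      factorial.append((y[(+1, -1)] + y[(+1, +1)]) / 2
                       - (y[(-1, -1)] + y[(-1, +1)]) / 2)

  print("OVAT      A effect: %+.2f +/- %.2f" % (np.mean(ovat), np.std(ovat)))
  print("factorial A effect: %+.2f +/- %.2f" % (np.mean(factorial), np.std(factorial)))

In this toy model the OVAT number is both noisier and misleading: it
reports the effect of A with B stuck at its low level, not the effect
averaged over the B settings you actually care about.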

* perform trials in random order, rather than stepping through in order

Yes, there are lots of cases where people should be
randomizing the data-taking and they neglect to do so.

But there is more to the story. This is actually tricky.
You couldn't have done the Mercury/Gemini/Apollo missions
in random order, because the later missions exploited
knowledge acquired in earlier missions. In general you
should not take a bunch of data and then try to figure
out how to analyze it. You should have an analysis scheme
in place, so that you can analyze the data as it comes in,
so that early results guide the design of later measurements.
Statisticians hate this, because it introduces weird biases
into the data, but in real life it is very often the best
use of resources.
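
Here's a small Monte Carlo sketch of the classic failure mode that
randomization protects against (Python, made-up numbers): the instrument
drifts slowly, and if you run all the "low" trials first and all the
"high" trials last, the drift masquerades as a treatment effect.
Randomizing the run order removes that bias, at the price of somewhat
more scatter.

  import numpy as np

  rng = np.random.default_rng(1)

  n_per_level = 20
  true_effect = 0.5
  drift_per_run = 0.05        # slow instrumental drift (invented)

  def run_experiment(order):
      """order: sequence of +/-1 treatment levels in the order they are run."""
      t = np.arange(order.size)
      y = (true_effect * (order > 0) + drift_per_run * t
           + rng.normal(0.0, 0.2, order.size))
      return y[order > 0].mean() - y[order < 0].mean()

  levels = np.array([-1] * n_per_level + [+1] * n_per_level)

  systematic = [run_experiment(levels) for _ in range(2000)]
  randomized = [run_experiment(rng.permutation(levels)) for _ in range(2000)]

  print("true effect:        %.2f" % true_effect)
  print("systematic order:   %.2f +/- %.2f" % (np.mean(systematic), np.std(systematic)))
  print("randomized order:   %.2f +/- %.2f" % (np.mean(randomized), np.std(randomized)))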

* standard deviation isn't a good measure of accuracy (but you probably
knew that one)

I didn't know that one until I got to grad school and found
myself the proud owner of a bunch of data where various
variables were highly correlated with others.
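
A small illustration of that trap (Python, invented data): fit a straight
line to data whose x-values sit far from the origin. The marginal standard
deviations, especially on the intercept, look terrible, but the two
parameters are almost perfectly anticorrelated, and the quantity you
probably care about (the value of the line near the data) is pinned down
far better than either marginal error bar suggests.

  import numpy as np

  rng = np.random.default_rng(2)

  # y = b*x + a, with x clustered far from the origin, so the estimates of
  # slope and intercept come out highly correlated.
  x = np.linspace(100.0, 110.0, 50)
  y = 0.3 * x + 2.0 + rng.normal(0.0, 0.5, x.size)

  (b, a), cov = np.polyfit(x, y, 1, cov=True)
  corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

  print("slope     = %.3f +/- %.3f" % (b, np.sqrt(cov[0, 0])))
  print("intercept = %.3f +/- %.3f" % (a, np.sqrt(cov[1, 1])))
  print("slope/intercept correlation = %.4f" % corr)

  # The line's value at the center of the data is much better determined
  # than the intercept's marginal error bar alone would suggest.
  xc = x.mean()
  var_center = cov[1, 1] + xc**2 * cov[0, 0] + 2.0 * xc * cov[0, 1]
  print("y at x = %.0f: %.3f +/- %.3f" % (xc, b * xc + a, np.sqrt(var_center)))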

* optimizing consistency is often more important than optimizing magnitude

i.e. don't turn up the gain if all you're doing is amplifying
the noise.

===

Let me add a fifth item to Tim's excellent list: Remember you
can measure a resonator's frequency to a lot better than 1 part
in Q, and you can resolve two stars that are a lot closer than
the Rayleigh criterion. If the signal-to-noise ratio is good
enough, you can accomplish a lot by curve-fitting to the lineshape.
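
For instance (a Python sketch with a made-up resonator): fit a Lorentzian
lineshape to data with decent signal-to-noise, and the fitted center
frequency comes back with an uncertainty that is a small fraction of the
linewidth f0/Q.

  import numpy as np
  from scipy.optimize import curve_fit

  rng = np.random.default_rng(3)

  f0_true, Q, amp = 1000.0, 50.0, 1.0      # invented resonator; linewidth f0/Q = 20
  width_true = f0_true / Q

  def lorentzian(f, f0, width, amp):
      return amp / (1.0 + ((f - f0) / (width / 2.0)) ** 2)

  f = np.linspace(900.0, 1100.0, 400)
  data = lorentzian(f, f0_true, width_true, amp) + rng.normal(0.0, 0.01, f.size)

  popt, pcov = curve_fit(lorentzian, f, data, p0=[990.0, 30.0, 0.5])
  f0_fit, f0_err = popt[0], np.sqrt(pcov[0, 0])

  print("linewidth f0/Q:          %.1f" % width_true)
  print("fitted center frequency: %.3f +/- %.3f" % (f0_fit, f0_err))
  print("resolution gain over the linewidth: about %.0fx" % (width_true / f0_err))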
