
Re: Air resistance

On 07 Dec 1997 09:55:58 John Mallinckrodt wrote:

The point is that you didn't *measure* v and you didn't *measure* a;
these values are *calculated* from your measured d values using a
processing method that makes certain assumptions. Thus, any results you
obtain from your values of v and a reflect those assumptions. The only
defensible measure of the "goodness" of some particular "fit" is how
well it reproduces your *measurements*. ....

I wish this argument were considered in the "age of the universe" issue
discussed in a parallel thread. Fortunately, the situation is much less
complex in the case of a falling ball (unless you think that the true
R=f(v) relation is not monotonic, that very narrow resonances have strong
effects on averaging, etc.). If I had the time and motivation I would show
you data from 100 experiments (a total of 1000 data points instead of 10).
Only one experiment was actually examined, but several were performed to
convince ourselves that things are more or less reproducible in terms of
the a=f(t) curves. These curves were shown on the screen as the data were
collected with Mac Motion.

Where are the questionable assumptions, John? F_net=m*a is undeniable. The
rest is just a matter of definitions. Averaging over 15 points, at a rate
of 40 samples/s, means that the so-called instantaneous v are mean values
over a time interval of 15/40 s. The v=f(t) curve is so close to linear
that the systematic error due to averaging must be very small. The
redundancy of the averaging method itself also tends to reduce the
systematic error (using the mean from v1 to v15, then from v2 to v16, etc.).
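
To make the procedure concrete, here is a minimal sketch of that sliding
average in Python, assuming the raw velocities come from consecutive
distance differences. The sample rate and window size are taken from the
description above; everything else (the names, NumPy, the difference
estimator) is my illustration, not the actual Mac Motion processing.

import numpy as np

DT = 1.0 / 40.0   # sampling interval: 40 samples/s
WIN = 15          # points per sliding mean, i.e. 15/40 s of data

def smoothed_velocity(d):
    # d: array of measured distances. Returns the sliding-mean
    # velocities: the mean of v1..v15, then v2..v16, and so on.
    v = np.diff(d) / DT                 # consecutive-difference velocities
    kernel = np.ones(WIN) / WIN
    return np.convolve(v, kernel, mode="valid")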

I think that values of R calculated from consecutive distances are true
experimental values. The error-propagation aspect of such data may be more
difficult to trace than for g=(2*h)/t^2 (deducing g from the time of free
fall, for example), but that is another matter.
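
(For the free-fall comparison, the propagation is the standard first-order
result: since g=(2*h)/t^2, dg/g = dh/h - 2*(dt/t), so a 1% timing error
alone already contributes about 2% to the error in g. The R-from-
consecutive-distances case involves correlated differences of nearly equal
numbers, which is why it is messier to trace.)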

And here is a different approach. Suppose we assume that R=0.01*v^2 and
use a spreadsheet to produce a set of simulated data at 40 samples/s. The
random errors of the simulation (due to truncation) are so small that they
are not worth considering. Then do the averaging and analyse the data as in
a real experiment. My intuition says that the pseudo-experimental R=f(v)
relation will be nearly the same as R=0.01*v^2. This would demonstrate
that no harm is done by averaging, and that motion detectors can be used
to measure the exponent n with reasonable accuracy. Actually, nearly as
accurately as we wish, provided many experiments are analysed instead of
only one. Using a spreadsheet to study the propagation of errors may be
instructive; a sketch of such a simulation follows below.
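
Here is a minimal sketch of that test, in Python rather than a spreadsheet.
R=0.01*v^2, 40 samples/s, and the 15-point averaging come from the text
above; m = 1 kg, g = 9.8 m/s^2, the 8 s duration, and the Euler substeps
are my own assumptions for illustration.

import numpy as np

G, M, C = 9.8, 1.0, 0.01    # g, mass, drag coefficient (assumed units)
DT, WIN = 1.0 / 40.0, 15    # 40 samples/s, 15-point sliding mean
SUB = 100                   # Euler substeps per recorded sample (my choice)

# Simulate the fall and record the distance at 40 samples/s.
h = DT / SUB
v = d = 0.0
dist = [0.0]
for i in range(1, 8 * 40 * SUB + 1):            # about 8 s of fall
    a = G - (C / M) * v * v                     # m*a = m*g - R, R = C*v^2
    v += a * h
    d += v * h
    if i % SUB == 0:
        dist.append(d)
dist = np.array(dist)

# Process the simulated distances exactly as in a real experiment.
def smooth(x):
    return np.convolve(x, np.ones(WIN) / WIN, mode="valid")

vel = smooth(np.diff(dist) / DT)                # sliding-mean velocities
acc = smooth(np.diff(np.diff(dist) / DT) / DT)  # sliding-mean accelerations
n_pts = min(len(vel), len(acc))
vel, acc = vel[:n_pts], acc[:n_pts]
R = M * (G - acc)                               # inferred resistance

# Fit R = c*v^n on a log-log scale (dropping near-zero velocities).
ok = vel > 1.0
n_fit, log_c = np.polyfit(np.log(vel[ok]), np.log(R[ok]), 1)
print("recovered n =", round(n_fit, 3))         # should come out close to 2

If the recovered n is indeed close to 2, the averaging did no harm; how far
it drifts from 2 as the window grows is exactly the systematic error in
question.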

Ludwik Kowalski