
Re: free fall data




Ludwik's note (quoted below) raises several interesting associations.
I should first acknowledge the person who worked this problem rigorously.

He said, " ...we tried other distances, comparing the time for the whole
length with that for the half...;in such experiments REPEATED A FULL ONE
HUNDRED TIMES
[my caps] we always found that the spaces traversed were to each other as
the squares of the times..."
(Naturally Accelerated Motion, Theorem II Prop II Corol I, Third Day, 2 New
Sciences, G. Galilei)

I dare to suppose that the instrumental difficulties faced by this first
experimentalist, who measured time by heartbeats and by the efflux of
water from a metering pipe, were answered satisfactorily by statistical
averaging, a remedy which is still available to us.

I recall, too, that an enlightened distance-learning institution (OU)
determined that it would serve its students' purpose to rerun these
celebrated Galilean experiments, using his inclined plane and a smooth ball.

It was thereby exceptionally easy for them to detect which of the
students were teachers and other people of good background:
their results tended to fit the square law rather well.
It was later disclosed that they (and Galileo) had omitted the
correction for the angular momentum which a ball gains when rolling down
an incline.
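
(For the record, the standard correction: a uniform solid ball rolling
without slipping down an incline of angle theta has

  a = g sin(theta) / (1 + I/(m r^2)) = (5/7) g sin(theta),

since I = (2/5) m r^2. The rolling ball therefore accelerates at only
5/7 of the sliding value, though the distance-goes-as-time-squared law
itself is unaffected.)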

People who create dynamic simulations (e.g., for flight simulators)
are warned to start the computational path at the highest-order
difference equation and then sum (for integrations), in order to avoid
the errors involved in subtracting nearly equal successive values - the
difficulty to which Ludwik alludes.
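
A quick numerical illustration of both halves of that advice (a sketch
in Python; the 1 mm position noise and the sensor noise are assumed
figures, not anyone's real data):

  import numpy as np

  rng = np.random.default_rng(0)
  dt = 1.0 / 30.0                            # 30 frames per second
  t = np.arange(20) * dt
  x = 0.5 * 9.8 * t**2 + rng.normal(0.0, 0.001, t.size)  # fall + 1 mm noise

  v = np.diff(x) / dt                        # first differences  -> velocity
  a = np.diff(v) / dt                        # second differences -> acceleration
  # Each differencing stage divides by dt, so 1 mm of position scatter
  # becomes roughly sqrt(6) * 0.001 / dt**2 ~ 2 m/s^2 of scatter in a:
  print("a: mean %.1f, scatter %.1f m/s^2" % (a.mean(), a.std()))

  # Summation (integration) runs the other way: the noise averages down
  # like 1/sqrt(N) instead of being amplified.
  a_meas = 9.8 + rng.normal(0.0, 2.0, 200)   # noisy acceleration readings
  v_int = np.cumsum(a_meas) * dt
  print("v after 200 steps: %.1f m/s (exact %.1f)" % (v_int[-1], 9.8 * 200 * dt))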

Sincerely
Brian Whatcott

-------------------------------------------------------------------
At 11:50 9/13/97 EDT, Ludwik wrote:
....
This question has to do with the propagation of errors when data from
digitized movies are used to measure g. You know that free-fall position
data (x1, x2, x3, x4, ...) must be very accurate if speeds are to be
calculated from them (assuming the errors in t are negligible). My data
are never accurate enough. One approach is to do averaging. For example,
we can eliminate x1, x2, and x3 and keep their average in place of x2,
then do the same for the next group of data, etc. Averaging over groups
of 5 or more data points (as, for example, in a program like MacMotion
from Vernier) would help, but this approach cannot be used when only 5
to 10 usable data points are available.
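
A sketch of that group averaging in Python (the three-point window and
the sample positions are illustrative, not Ludwik's data):

  import numpy as np

  # Noise-free free-fall positions at 0.1 s intervals, for illustration:
  x = 4.9 * (np.arange(7) * 0.1) ** 2
  # Keep the mean of (x1, x2, x3) in place of x2, of (x2, x3, x4) in
  # place of x3, and so on -- a running three-point average:
  x_avg = np.convolve(x, np.ones(3) / 3.0, mode="valid")
  print(x_avg)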

It turns out that the accelerations calculated from the v2, v3, v4, v5
data fluctuate so widely that it is often embarrassing to say "as you
can see, the acceleration has a constant value close to 9.8". The mean
value often differs from what is expected by about 30%, while individual
accelerations fluctuate between -100 and +100 m/s^2, or so. Do you agree?

1) How to deal with this? My approach, so far, has been to avoid plotting
accelerations. Only the velocity-versus-time data are plotted, and
"the best" straight line is drawn through the data points. The slope
of this line is the acceleration (a least-squares version of such a
fit is sketched after point 2 below). But the linear dependence is not
at all obvious, and I am forced to say "as you know, the relation must
be linear and we will use it to find g". I am not very happy with that
kind of "learning from experiments". A camcorder + computer setup
is more expensive than the sparking-wire apparatus, yet it is not better.

2) A steel ball, dropped from an elevation of 2 meters, hits the floor.
Its motion is recorded at the rate of 30 pictures per second. How
accurate should the last six values of x be if the accelerations
computed from them (four individual differences of differences) are
not to fluctuate by more than 10%?
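
A least-squares version of the line fit in question 1 (a minimal sketch
in Python; the velocity values and their scatter are made up):

  import numpy as np

  t = np.arange(6) / 30.0                    # six frames at 30 pictures/s
  v_exact = 2.94 + 9.8 * t                   # velocities for g = 9.8, v0 = 2.94 m/s
  v = v_exact + np.random.default_rng(1).normal(0.0, 0.05, 6)  # made-up scatter

  slope, intercept = np.polyfit(t, v, 1)     # best straight line through (t, v)
  print("fitted g = %.2f m/s^2, v0 = %.2f m/s" % (slope, intercept))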

One way to answer question 2 is to use EXCEL. Assuming the last picture
was taken at t=0.6000 s, I enter six values of t in the first column and
the corresponding distances (from the initial location) into the next
column, as shown below. These were calculated with a=9.8. The third
column has formulas for v and the last one has the formulas for a.

t (s)      d (m)       v (m/s)    a (m/s^2)     [t=0 --> d=0]
.................................................. last frame entered first, etc.
0.60000    1.764000
0.53333    1.393776    5.5534
0.46666    1.067081    4.9000     9.80
0.40000    0.784000    4.2462     9.80
0.33333    0.544434    3.5935     9.80
0.26666    0.348427    2.9401     9.80

I change the distance 0.544434 to 0.5 and I see that the last two values
in column 4 change to -0.2 and +29.8 m/s^2. I restore 0.544434 and change
0.784 to 0.7. This changes the last three accelerations to -9.1, +47.6,
and -9.1 m/s^2. The effect of two same-direction changes in column 2 is
very dramatic. To increase the last acceleration by 10% (9.8 --> 10.8),
I must change the last distance from 0.348427 to about 0.353, that is,
by only about 1%.
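
The same experiment in a few lines of Python (a sketch; it rebuilds the
table from exact frame times and repeats the 0.544 -> 0.5 change above):

  import numpy as np

  t = 0.60000 - np.arange(6) / 15.0          # the six frame times, latest first
  d = 4.9 * t**2                             # distances fallen for a = 9.8

  def accels(d, t):
      v = np.diff(d) / np.diff(t)            # column 3: velocities
      return np.diff(v) / np.diff(t)[1:]     # column 4: accelerations

  print(accels(d, t))                        # all four come out 9.8

  d2 = d.copy()
  d2[4] = 0.5                                # the 0.544 -> 0.5 change
  print(accels(d2, t))                       # last two swing to about -0.2, +29.8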

All this is not surprising; the percentage error of d = x1 - x2 is much
larger than the percentage errors of x1 and x2 when d << x1 and d << x2.
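
A concrete instance, using the first two distances in the table above:
d = 1.764000 - 1.393776 = 0.370224 m. If each x is uncertain by 1 mm (a
figure I am assuming for illustration), d is uncertain by about
sqrt(2) ~ 1.4 mm: under 0.1% of either x, but nearly 0.4% of d. The
second round of subtraction, which produces a, magnifies the relative
error once more.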

Ludwik Kowalski
-------------------------------------------------



brian whatcott <inet@intellisys.net>
Altus OK