
Re: free fall data



I did find a source with information on videotape formats. In the 1993
book "Mastering the World of QuickTime" Jerry Borrell writes (p. 64):

"VHS tape generates 200 lines of resolution and Hi8 provides 400 lines;
other formats produce even higher resolutions. ... S-VHS has a signal
with 420 lines."

Wouldn't this be a strong indication that the 480-pixel resolution along
the vertical frame is illusory when a VHS camcorder is used? A clicking
error of one pixel vertically is likely to be relatively larger (by a
factor of 640/200) than an error of one pixel horizontally, for the same
displacement.

What limits the horizontal resolution of an analog image from a camcorder?
Is the horizontal VHS resolution good enough to benefit from the 640 pixels
of a common digitizing system? I suspect that filming sideways may help to
improve the data. This would be true only when all other errors, such as
parallax, etc., are much smaller than pixel errors. Strong zooming, to cover
as many pixels as possible, is equivalent to filming from a short distance.
It may be associated with sizable parallax errors.
............................................................................
Another subtopic:
The best way to determine g, as accurately as possible, is to measure time
and displacement from the beginning to the end of the fall. The experiment
must be repeated and the average g used as our best number. This is a very
reasonable approach when WE ALREADY KNOW that g=const. Subdividing the
total time into short subintervals is necessary only when the goal of an
experiment is TO DEMONSTRATE that g is constant.
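
For illustration only, here is a minimal Python sketch of that
single-interval approach. The drop height and the repeated trial times
are made-up numbers, not data from any real run.

# Estimate g from total displacement and total time, trial by trial,
# then average the results (this assumes g is constant).
fall_distance_m = 1.50                                   # assumed drop height
measured_times_s = [0.552, 0.549, 0.556, 0.551, 0.554]   # assumed trial times

# Free fall from rest: d = (1/2) g t^2, so g = 2 d / t^2
g_values = [2.0 * fall_distance_m / t**2 for t in measured_times_s]
g_best = sum(g_values) / len(g_values)

print("g per trial:", [round(g, 2) for g in g_values])
print("average g  :", round(g_best, 2), "m/s^2")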

In that context the time subintervals must be as short as practically
possible. Accuracy is sacrificed to get better information about g(t).
(You can call this the uncertainty principle, if you wish.) In my opinion
the use of consecutive subintervals has one advantage. Suppose the data
are y1, y2, y3, y4, etc. and that a small error made y2 a little too small.
This makes (y2-y1) a little smaller than it really is. But (y3-y2) is
automatically larger by the same amount. (I am assuming that errors in y1
and y3 are negligible.) Thus a slightly smaller v12 is followed by a
slightly larger v23. This self-correcting tendency is probably present in
other ways of defining subintervals but it would be less obvious to
students.
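
A small numeric Python sketch of this cancellation (with idealized,
made-up positions):

# A measurement error in one interior position shifts the two adjacent
# consecutive-interval velocities in opposite directions, so it cancels
# in their average. All numbers are illustrative.
dt = 0.1                                   # time between frames, s (assumed)
y_true = [0.000, 0.049, 0.196, 0.441]      # ideal free-fall positions, m
y_meas = list(y_true)
y_meas[1] -= 0.005                         # small clicking error in y2

v_true = [(y_true[i+1] - y_true[i]) / dt for i in range(3)]
v_meas = [(y_meas[i+1] - y_meas[i]) / dt for i in range(3)]

print("true v12, v23:", v_true[0], v_true[1])
print("meas v12, v23:", v_meas[0], v_meas[1])
# v12 comes out too small and v23 too large by the same amount,
# so the average of the two is unchanged.
print("mean of v12, v23 (true):", (v_true[0] + v_true[1]) / 2)
print("mean of v12, v23 (meas):", (v_meas[0] + v_meas[1]) / 2)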

Averaging over data points separated by long subintervals may lead
to wrong shapes of v(t) and g(t), even when the data are very accurate.
You may conclude that g is constant when cyclic fluctuations of v are
actually present. This observation was triggered by the message below.
It is clear to me that the author was assuming that g is constant and
that his goal was to find the most accurate single value of g. He would
not use the method if he suspected that rapid fluctuations of v might
be present. So perhaps I am being too picky this time.
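
To illustrate the point with purely artificial Python data (not a claim
about any real experiment): a velocity that oscillates around g*t looks
perfectly linear once it is averaged over subintervals longer than the
oscillation period.

import math

# Artificial fall: v(t) = g*t + A*sin(w*t); the oscillation is invented
# only to show the smoothing effect of long subintervals.
g, A, w, dt = 9.8, 0.3, 40.0, 0.01
t = [i * dt for i in range(101)]
# position obtained by integrating v(t), with y(0) = 0
y = [0.5 * g * ti**2 + (A / w) * (1.0 - math.cos(w * ti)) for ti in t]

def avg_velocities(step):
    # average velocity over consecutive subintervals of 'step' samples
    return [(y[i + step] - y[i]) / (step * dt)
            for i in range(0, len(y) - step, step)]

print("short subintervals (oscillation visible):")
print([round(v, 2) for v in avg_velocities(2)][:10])
print("long subintervals (oscillation averaged away):")
print([round(v, 2) for v in avg_velocities(20)])
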
Ludwik Kowalski
.......................................................................
I avoid noise in the free fall data by having the students use 16
positions evenly spaced in time. The key point is that *no* position
gets used more than once. I ask them to calculate the average velocity
between the 1st and 9th points, the 2nd and 10th points, etc., up to the
8th and 16th points. This means they only get 8 velocities, BUT THEY ARE
INDEPENDENT, AND ARE OVER LONG DISTANCES, SO SMALL POSITION ERRORS AREN'T
FATAL.
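
A hedged Python sketch of that pairing scheme (the frame interval and the
idealized positions below are assumptions, not the original author's
numbers):

# 16 positions evenly spaced in time; velocities from the 1st & 9th,
# 2nd & 10th, ..., 8th & 16th points, so each position is used once.
dt = 1.0 / 30.0                             # assumed frame interval, s
t = [i * dt for i in range(16)]
y = [0.5 * 9.8 * ti**2 for ti in t]         # idealized positions, m

half = 8
velocities = [(y[i + half] - y[i]) / (half * dt) for i in range(half)]
print("8 independent average velocities (m/s):")
print([round(v, 3) for v in velocities])

# With ideal data each velocity equals v at the midpoint of its long
# interval, so the trend of velocity vs. midpoint time recovers g:
g_est = (velocities[-1] - velocities[0]) / ((half - 1) * dt)
print("g from first and last velocity:", round(g_est, 2), "m/s^2")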