
Re: Air resistance



Please help to decide between the two positions taken:

1) The data shown below demonstrate that n is between 1.6 and 2.4
2) The data shown below demonstrate that n is between 0.2 and 5.0

The data were collected with the Vernier motion detector at the rate
of 40 samples per second, averaging=15. The first 4 columns are output
from the Mac Motion software. The last column is the air resistance
calculated as R=m*(g-a)=0.55*(9.8-a), where a is taken from column 4.
The purpose is to find the value of n that is supported by the
experimental data if we assume that R=A*v^n. The implication of the
second position is that motion detectors are not good enough to deal
with air resistance.

t(s) d(m) v(m/s) a(m/s^2) R(N)

1.400 0.608 1.773 9.63
1.425 0.655 2.013 9.65
1.450 0.708 2.254 9.65
1.475 0.768 2.494 9.64 0.09
1.500 0.835 2.734 9.60 0.11
1.525 0.904 2.975 9.61 0.10
1.550 0.983 3.215 9.59 0.12
1.575 1.066 3.454 9.54 0.14
1.600 1.155 3.691 9.48 0.18
1.625 1.251 3.927 9.44 0.20
1.650 1.352 4.162 9.38 0.23
1.675 1.458 4.395 9.33 0.26
1.700 1.573 4.632 9.31 0.27
1.725 1.692 4.865 9.33
1.750 1.816 5.035 7.83
1.775 1.945 4.971 1.27
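
A minimal sketch of the log-log fit in question (an illustration, not the
software used by either side): take the (v, a) pairs from the rows where R
is tabulated (t = 1.475 s to 1.700 s), recompute R=m*(g-a), and read n off
the slope of log R versus log v.

import numpy as np

m, g = 0.55, 9.8
v = np.array([2.494, 2.734, 2.975, 3.215, 3.454,
              3.691, 3.927, 4.162, 4.395, 4.632])   # column 3, t = 1.475 to 1.700 s
a = np.array([9.64, 9.60, 9.61, 9.59, 9.54,
              9.48, 9.44, 9.38, 9.33, 9.31])        # column 4, same rows

R = m * (g - a)                                     # air resistance, R = m*(g - a)
n, logA = np.polyfit(np.log(v), np.log(R), 1)       # slope and intercept of log R vs log v
print(f"n = {n:.2f}, A = {np.exp(logA):.4f}")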

I assume that arguments on both sides are known to you. Here are the
arguments for (1) and (2) from previous messages under this thread.

Position 1
==========
Why is velocity decreasing at t>1.75? This is due to averaging. Suppose you
collected 50 data points. Single measurements are never accurate, so
averaging is imposed. At any given time, d is the average (of what is
measured and of the 14 data points before it). This means that the first 15
distances (when v>0) must be rejected. The same is true for the last 15
distances. (You will see v>0 when the object is already at rest.) In our
case the distances between t=1.45 and 1.70 are "real" while those outside of
this region are "phony". To get a broader range of v one must reduce the
averaging span (the software allows for averaging over 3, 5, 7, 9 and 15).
But less averaging leads to broader fluctuations in the values of a. As in
the case of a camcorder, this has to do with the fact that accelerations
are calculated as a "difference of differences". A slight improvement can be
obtained by additional averaging in the 4th column; for example, by using
the mean of 9.48, 9.44 and 9.38 instead of 9.44, etc.
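
A small illustration of the two effects described above (synthetic data, not
the Mac Motion algorithm): acceleration obtained as a difference of
differences amplifies even millimeter-level noise in d, and a 15-point
running average spoils the samples near both ends of the run.

import numpy as np

dt, span = 0.025, 15                                  # 40 samples/s, averaging = 15
t = np.arange(0.0, 2.0, dt)
d_true = 0.5 * 9.8 * t**2                             # ideal free fall
rng = np.random.default_rng(0)
d_raw = d_true + rng.normal(0.0, 0.001, t.size)       # ~1 mm of position noise

# centered 15-point running mean (the software's averaging may differ in detail)
d_avg = np.convolve(d_raw, np.ones(span) / span, mode="same")

for label, d in (("raw", d_raw), ("averaged", d_avg)):
    v = np.diff(d) / dt                               # first differences -> v
    a = np.diff(v) / dt                               # difference of differences -> a
    print(label, "scatter in a (end points dropped):", a[span:-span].std())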

Do not forget that high accuracy of a is essential because R is calculated
as m*(9.8-a). If a=9.44 (+/- 3%) then R is 0.2 (+/- 100 %). We think this
is a good topic for a student research project. The air resistance forces
acting on the falling ball can be approximated by a smooth R=0.0128*v^2
curve. An experiment with a spherical balloon (of the same size as the ball)
loaded with a small mass would help to collect data at smaller v. It is
easier to get good data when R is the main player, not a small contributor.
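
The numbers behind that error estimate (g taken as exact, so the uncertainty
in R = m*(g-a) is just m times the uncertainty in a):

m, g = 0.55, 9.8
a, rel_err_a = 9.44, 0.03              # a known to about 3%
R = m * (g - a)                        # about 0.20 N
dR = m * (rel_err_a * a)               # about 0.16 N
print(R, dR / R)                       # relative error in R near 80%, i.e. of order 100%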

The rest of our data were collected with coffee filters loaded with aluminum
disks of known masses. The mass of each filter (about 1 gram), and of each
disk, was known to better than 1%. We have 15 data points between v=0.95 m/s
and v=4 m/s. The smooth curve over that region is R=0.100*v^2. The method is
the same as in the case of the ball but working with small a (zero when m is
less than 3 grams) is much easier for obvious reasons.
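
A sketch of the limiting case mentioned above, with hypothetical masses and
speeds chosen to lie on the reported R=0.100*v^2 curve (these are not the
actual filter data): when a loaded filter is light enough that a is
essentially zero, the drag simply balances the weight, so each filter
contributes one (v, R=m*g) point to the power-law fit.

import numpy as np

g = 9.8
v_t = np.array([1.0, 1.5, 2.5, 4.0])     # hypothetical steady speeds, m/s
m = 0.100 * v_t**2 / g                   # masses that would fall at those speeds, kg

R = m * g                                # drag = weight when a = 0
n, logA = np.polyfit(np.log(v_t), np.log(R), 1)
print(f"n = {n:.2f}, A = {np.exp(logA):.3f}")   # recovers n = 2 and A = 0.100 by construction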

Ludwik Kowalski and Richard Hodson
P.S.
Time zero corresponds to the moment at which the instrument is activated;
the object is usually released about one second later.

Position 2
==========

.... I can (as with the previous data) obtain *excellent* numerical fits
to your d vs. t data (i.e., rms errors in d less than 1 least sig fig)
with drag exponents (i.e., the n in F_drag = b v^n) anywhere from 1/5 to 5.
I get marginally better fits with n near 2, but certainly not significantly
enough better to come to any conclusions about the precise value of n (which
I, nevertheless, suspect *is* pretty close to 2 at these speeds.)

I believe that you are making an error in trusting the acceleration values
for precisely the reasons that you note in discussing the need for averaging.
Furthermore, as you later write:

Do not forget that high accuracy of a is essential because R is calculated
as m*(9.8-a). If a=9.44 (+/- 3%) then R is 0.2 (+/- 100 %).

Exactly, and, because of the gross errors introduced by double differencing
and then averaging over large times, you will not likely get the accuracy
you need.

Again, it is my opinion that this data simply cannot settle the question.
The data would have to be taken over a much larger range of v, probably with
more precision in the distance data, and possibly with better time resolution.

John

Position 2
==========
On Sat, 6 Dec 1997, LUDWIK KOWALSKI wrote:

Please share the values of b for n=1/5 and n=5; I assume b=1 when n=2.

With do = d at the first time, vo = v at the first time, r = b/m where m =
mass of falling object, and F_drag = bv^n, I get the following (all in SI
units)

  r        n      do       vo       g      rms deviation in d values
  0.18     0.2    0.608    1.776    9.8    0.0009
  0.023    2.0    0.6078   1.772    9.8    0.0007
  0.0004   5.0    0.6079   1.763    9.8    0.0008

The values of the best fit parameters will surely depend to some small
extent on the numerical method used. I used a simple "predictor-
corrector" type method. I could have obtained marginally better fits by
allowing g to be a free parameter as well, but that was clearly pushing
the data too hard.

I note that, with your value of m = 0.550 kg, the above gives a value of
b = .0126 or .0127 for n = 2, in good agreement with your value.

John
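
For reference, here is a sketch of the kind of direct d(t) fit described in
the message above (a reconstruction, not the code actually used): integrate
dv/dt = g - r*v^n with a simple predictor-corrector (Heun) step, sample the
model at the measurement times, and compute the rms deviation from the
measured distances. The (r, n, do, vo) values are the fitted parameters
quoted in the table.

import numpy as np

t_step = 0.025                                        # sampling interval, s
d_meas = np.array([0.608, 0.655, 0.708, 0.768, 0.835, 0.904, 0.983, 1.066,
                   1.155, 1.251, 1.352, 1.458, 1.573, 1.692, 1.816, 1.945])

def rms_dev(r, n, d0, v0, g=9.8, dt=0.0005):
    steps = int(round(t_step / dt))                   # integration substeps per sample
    d, v, model = d0, v0, [d0]
    for _ in range(len(d_meas) - 1):
        for _ in range(steps):
            a1 = g - r * v**n                         # predictor slope
            v_pred = v + a1 * dt
            a2 = g - r * v_pred**n                    # corrector slope
            d += 0.5 * (v + v_pred) * dt
            v += 0.5 * (a1 + a2) * dt
        model.append(d)
    return np.sqrt(np.mean((np.array(model) - d_meas) ** 2))

for r, n, d0, v0 in [(0.18, 0.2, 0.608, 1.776),
                     (0.023, 2.0, 0.6078, 1.772),
                     (0.0004, 5.0, 0.6079, 1.763)]:
    print(f"n = {n}: rms deviation in d = {rms_dev(r, n, d0, v0):.4f} m")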

Position 2
==========

On Sat, 6 Dec 1997, John Mallinckrodt wrote (or meant to write):

With do = d at the first time, vo = v at the first time, r = b/m where m =
mass of falling object, and F_drag = bv^n, I get the following (all in SI
units)

  r        n      do       vo       g      rms deviation in d values
  0.18     0.2    0.608    1.776    9.8    0.0009
  0.023    2.0    0.6078   1.772    9.8    0.0007
  0.0004   5.0    0.6079   1.763    9.8    0.0008

The values of the best fit parameters will surely depend to some small
extent on the numerical method used. I used a simple "predictor-
corrector" type method. I could have obtained marginally better fits by
allowing g to be a free parameter as well, but that was clearly pushing
the data too hard.

In fact, I can get an almost equally good fit to the data by simply
assuming a constant drag force leading to a constant acceleration of 9.57
m/s^2 with do = 0.6081 m and vo = 1.777 m/s.

John

Position 1
==========
On 06 Dec 1997 09:56:16, John Mallinckrodt wrote:

... I can (as with the previous data) obtain *excellent* numerical
fits to your d vs. t data (i.e., rms errors in d less than 1 least
sig fig) with drag exponents (i.e., the n in F_drag = b v^n) anywhere
from 1/5 to 5.

There must be a misunderstanding somewhere. I am reposting the data
at the end of this message. Just plot R=f(v) on log-log paper
and get n="the best slope". Without doing this I can say that n=2
(plus or minus up to 20%). Here is my reasoning. The region in
which good data were collected spans from about 2.49 to 4.63 m/s.
Thus (v2/v1)=1.8. The corresponding ratio of air resistances, R2/R1,
is 3.0. What is n when 1.8^n=3? It is n=ln(3)/ln(1.8)=1.9, close to 2.

For your values of n=5 and n=0.2 the force ratios would be 19 and 1.1,
respectively. The experimental data are not perfect, but they are
certainly not so bad as to accommodate your range of n. And this has
nothing to do with the value of b (=m*r) in your R=b*v^n formula.

Ludwik Kowalski
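
The arithmetic behind the slope argument above: with R=A*v^n, the ratio
R2/R1 equals (v2/v1)^n, so n = ln(R2/R1)/ln(v2/v1).

from math import log

v1, v2 = 2.494, 4.632          # ends of the "real" region, from the table
R1, R2 = 0.09, 0.27            # the corresponding air resistances
print(log(R2 / R1) / log(v2 / v1))     # about 1.8, i.e. close to 2
print((v2 / v1)**5, (v2 / v1)**0.2)    # about 22 and 1.1 (the 19 quoted above used the rounded ratio 1.8)
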
Position 2
==========
On Mon, 8 Dec 1997, LUDWIK KOWALSKI wrote:

Where are the questionable assumptions, John?

Ludwik,

I don't really see why I should need to say any more since I've already
given you all the information you need to see for yourself that your data
is consistent with a wide range of values for n (and, therefore, does not
determine the value of n with any accuracy). Nevertheless, here are some
of the reasons why your analysis is suspect:

Your analysis is critically dependent upon associating specific values of
v with specific values of a. This is more than a little problematic
because 1) both v and a are subject to significant errors due to
differencing and double differencing data values which don't differ by a
lot more than their inherent uncertainties in the first place and 2) both
v and a are changing with an a priori *unknown* dependence on time.

Therefore, in order to obtain the required correlations (of specific
values of v with specific values of a), you have to 1) do a lot of data
smoothing and 2) decide how you are going to get values of v and a at the
*same* times. Both of these operations depend *critically* on your
assumptions about how the values *vary* with time ... which, again, is
unknown.

For instance, if you decide to associate the average velocity calculated
for each time interval with the midpoint of the interval, you are making
the implicit assumption that the acceleration is constant over that
interval. If the drag is velocity dependent, however, this is clearly
*not* the case, so following this procedure would introduce an error (not
just more uncertainty) into the analysis.

Similar considerations apply to the choice of the time with which to
associate the calculated accelerations, but here the situation is further
complicated by the gross errors introduced by the double differencing. Now
there are *lots* of choices about how to do the smoothing/averaging
and each one carries implicit assumptions about how the acceleration
varies with time ... again, something you don't know. If, for instance,
you do a strict average and associate the result with the midpoint of the
interval, you are making the implicit assumption that the jerk is
constant, which is, again, simply not the case for *any* reasonable drag
force.

The best way to avoid all these thorny difficulties is to use the model to
fit the measured values of d directly as I have done. Again, when I do so
with your data, I get good fits (with small rms deviations between model
d's and observed d's and no obvious trends in the residuals) over *large*
ranges of n.

John