
Re: DATA on collapsing WTC



I would like to confirm that Glenn's description of what
I did is correct, but it needs two clarifications.

1) I did not (yet) perform averaging over 15 frames, only
over 9 frames, as posted two or three days ago. I plan to
do the 15-frame averaging of y tomorrow. It will be much less
error prone than before because Glenn's data are already
in spreadsheet form. Click on the link in his message,
download the spreadsheet and play with it in Excel.

2) The two sets of 4 accelerations (very poor reproducibility)
were not based on any averaged data; they were obtained from
single frames. I selected six single frames (separated by
0.5 seconds) and calculated the four accelerations to which
Glenn refers. Then I took another set of six frames and
found another set of four accelerations. Ideally the second
set should be nearly the same as the first, but this did not
happen; it is another indication that large errors are
associated with individual distances.
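
For anyone who wants to reproduce this step, here is a minimal
sketch in Python (with made-up positions, not the actual video
data) of how six frames 0.5 seconds apart give five velocities
and four accelerations by successive finite differences:

# Hypothetical numbers for illustration only; the real distances
# come from the video frames.
dt = 0.5                                # seconds between selected frames
y = [0.0, 1.5, 4.8, 10.1, 17.6, 27.3]   # six vertical positions (m)

v = [(y[i+1] - y[i]) / dt for i in range(len(y) - 1)]  # five velocities
a = [(v[i+1] - v[i]) / dt for i in range(len(v) - 1)]  # four accelerations

print("velocities (m/s):    ", [round(x, 2) for x in v])
print("accelerations (m/s2):", [round(x, 2) for x in a])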

E.g., for one subset of the data, Ludwik calculated average
accelerations of 3.28, 5.59, 2.49, and 5.51 m/s2. When
Ludwik shifted the sets of data by one frame, he calculated
average accelerations of 10.7, 8.23, 2.7 and -2.9 m/s2. ...

That is why I think that averaging (of one kind or another)
is essential in an attempt to estimate the correct value of
the acceleration. I do not recall how averaging is done in the
Vernier "motion detector" software. Does it average over
n frames and then move on to the next n frames? Or does it
reuse frames, sliding the window from one end of the data to
the other? Which is better, and why?
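
To make the two alternatives concrete, here is a minimal sketch
in Python (my own illustration, not Vernier's actual algorithm)
of non-overlapping block averages versus an overlapping moving
average:

def block_averages(data, n):
    """Average each non-overlapping group of n samples."""
    return [sum(data[i:i + n]) / n for i in range(0, len(data) - n + 1, n)]

def moving_averages(data, n):
    """Average a window of n samples, sliding one sample at a time."""
    return [sum(data[i:i + n]) / n for i in range(len(data) - n + 1)]

# Made-up positions, just to show the difference between the two schemes.
samples = [0.0, 0.2, 0.9, 2.0, 3.5, 5.6, 8.1, 11.2, 14.7, 18.8]
print("block (n=3): ", block_averages(samples, 3))
print("moving (n=3):", moving_averages(samples, 3))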
Ludwik Kowalski

"Glenn A. Carlson" wrote:

Ludwik Kowalski and I have discussed through private emails our approaches
to calculating the acceleration of the North Tower of the World Trade
Center (WTC) and determining whether the tower was in free-fall during its
collapse. Ludwik concludes that the tower was not in free-fall, with an
acceleration of magnitude approximately 0.3g - 0.7g. I conclude that the
tower was in free-fall with an acceleration of 0.95g - 1.05g. We are
unable to resolve this discrepancy, but we have agreed to present this
matter to the Phys-L members for discussion and possible resolution.

First let me summarize the facts and issues. (Ludwik, please correct any
misstatements of your positions or findings.) I posted a list of position
(x- and y-coordinates) versus time for the collapse of the North Tower of
the WTC to the Phys-L list. I also posted the data to my website
(www.stchas.edu/faculty/gcarlson/physics/wtc). I compiled this data by
analyzing a CNN video clip of the collapse using DataPoint, a video
analysis program I've written
(www.stchas.edu/faculty/gcarlson/physics/datapoint).

The position v. time data for the collapse spans approximately 4 seconds
and over 60 m of vertical displacement. At a video frame rate of 30
frames/s this amounts to approximately 120 datapoints.

Ludwik's approach and findings--

Taking the data during the collapse in small contiguous sets of 9-15
datapoints, Ludwik calculates an average time and vertical position for each
set. This reduces the amount of data from 120 datapoints to 8-13 datapoints.
From this reduced dataset, average velocities and average accelerations over
each time interval are calculated using the standard equations vavg =
delta-x/delta-t and aavg = delta-v/delta-t. The 8-13 average acceleration
values are themselves averaged, and that average is reported as the
acceleration of the collapsing tower.
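
A minimal Python sketch of this grouping-and-differencing procedure (my
reconstruction from the description above, tested here on ideal free-fall
data rather than the WTC measurements):

def group_means(values, n):
    """Mean of each contiguous, non-overlapping group of n values."""
    return [sum(values[i:i + n]) / n for i in range(0, len(values) - n + 1, n)]

def grouped_acceleration(t, y, n=9):
    """Average t and y in groups of n, difference twice, then average."""
    t_avg = group_means(t, n)
    y_avg = group_means(y, n)
    v = [(y_avg[i+1] - y_avg[i]) / (t_avg[i+1] - t_avg[i])
         for i in range(len(y_avg) - 1)]
    a = [(v[i+1] - v[i]) / (t_avg[i+1] - t_avg[i])
         for i in range(len(v) - 1)]
    return sum(a) / len(a)

# Sanity check on ideal free-fall positions: 120 frames at 30 frames/s.
g = 9.8
t = [i / 30.0 for i in range(120)]
y = [0.5 * g * ti**2 for ti in t]
print(round(grouped_acceleration(t, y, n=9), 2))   # close to 9.8 for noise-free data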

As one might suspect with this approach, Ludwik found a large variation in the
calculated acceleration, depending on the number of datapoints in a set and
the location of that set of datapoints among all the data. E.g., for one
subset of the data, Ludwik calculated average accelerations of 3.28, 5.59,
2.49, and 5.51 m/s2. When Ludwik shifted the sets of data by one frame,
he calculated average accelerations of 10.7, 8.23, 2.7 and -2.9 m/s2. The
average of the first four values equals 4.2 m/s2; the average of the second
four values equals 4.7 m/s2. These results are consistent with other
attempts, and he concludes that the tower was not in free-fall.

Carlson's approach --

Taking the data during the collapse, I calculated the time since the
beginning of the collapse and the magnitude of the tower's
displacement. Using all of the approximately 120 datapoints, I calculated a
best-fit power law of the form y=a*t^n, where y is the magnitude of the
displacement and t is the time since the beginning of the collapse. I
selected the power law as the fitting equation because we expect the
displacement as a function of time for a body with constant acceleration,
starting from rest at the origin, to be y=0.5*a*t^2. The coefficient of the
fit is therefore half the acceleration, and small deviations of the exponent
from 2 indicate that the acceleration is nearly constant.
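
A minimal Python sketch of such a power-law fit (ordinary least squares on
log y versus log t; synthetic free-fall data here, not the DataPoint output):

import math

def fit_power_law(t, y):
    """Fit y = a * t**n by a least-squares line through (log t, log y)."""
    lt = [math.log(ti) for ti in t]
    ly = [math.log(yi) for yi in y]
    mt, my = sum(lt) / len(lt), sum(ly) / len(ly)
    n = (sum((x - mt) * (z - my) for x, z in zip(lt, ly))
         / sum((x - mt) ** 2 for x in lt))
    a = math.exp(my - n * mt)
    return a, n

# Ideal free-fall test case; t=0 is skipped so log(t) is defined.
g = 9.8
t = [i / 30.0 for i in range(1, 121)]
y = [0.5 * g * ti**2 for ti in t]
a, n = fit_power_law(t, y)
print(round(a, 2), round(n, 2))   # expect roughly 4.9 and 2.0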

For the entire collapse, I calculate a best fit curve of y=5.1t^1.8
(R^2=0.97). For the first second of the collapse, I calculate a best fit
curve of y = 3.1 t^0.9 (R^2=0.59); I conclude the tower was not in
free-fall for the first second. After t=1s, I calculate a best fit curve
of y=4.7t^1.9 (R^2=0.99); I conclude the tower was in free-fall with an
acceleration of a=2*4.7=9.4 m/s2, which is approximately g=9.8 m/s2. (See
www.stchas.edu/faculty/gcarlson/physics/wtc.)

Why I think my approach is better --

Ludwik and I agree that calculating the average velocity and acceleration
between datapoints using the formulae vavg=delta-x/delta-t and
aavg=delta-v/delta-t will result in wildly varying values of
acceleration. Using Ludwik's approach to analyze all the data, I calculate average
accelerations ranging from -1831 m/s2 to 1835 m/s2 with an average of 2.9
m/s2. However, if I exclude the first calculated aavg from the final
average, a=13.2 m/s2; if I exclude the first and second calculated aavg
from the final average, a=8.1 m/s2. Ludwik tries to mitigate the magnitude
of the variations in average accelerations by grouping the data so that the
delta-t's are larger, but as his results show this is not completely
successful.
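
As an illustration of how sensitive the frame-by-frame calculation is
(synthetic free-fall positions with a few centimeters of added noise, not the
actual WTC data), a short Python sketch:

import random

random.seed(0)
g, dt = 9.8, 1.0 / 30.0
t = [i * dt for i in range(120)]
y = [0.5 * g * ti**2 + random.gauss(0, 0.05) for ti in t]   # ~5 cm position noise

v = [(y[i+1] - y[i]) / dt for i in range(len(y) - 1)]
a = [(v[i+1] - v[i]) / dt for i in range(len(v) - 1)]

# The double difference amplifies the noise by roughly 1/dt^2, so individual
# aavg values swing over hundreds of m/s2, and the overall mean depends on
# which values are kept.
print("min/max aavg (m/s2):", round(min(a), 1), round(max(a), 1))
print("mean of all aavg:   ", round(sum(a) / len(a), 2))
print("mean without first: ", round(sum(a[1:]) / len(a[1:]), 2))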

I argue that the wildly varying values of the acceleration, and the
sensitivity of the final average to which of those values are included,
cast doubt on the validity of the final result.

Thus, there are two advantages to fitting the data to a power curve:

1) We avoid calculating unrealistic, wildly varying velocities and
accelerations, because the calculation of the best fit power curve does not
depend on delta-t. Thus, we have more confidence in the calculated
acceleration.

2) A power law allows us to directly and quantitatively verify that the
exponent is 2 (i.e., that the acceleration is constant), instead of assuming
the exponent is 2, as is usually done in undergraduate physics courses, or
making indirect, qualitative judgments (e.g., R^2 for a best-fit velocity v.
time line).

(If, instead of calculating the exponent, I assume the exponent on time is
2 and determine the best-fit line of y v. t^2, I calculate a line of
slope 4.1 m/s2, which results in an acceleration of 8.2 m/s2, which is
approximately 9.8 m/s2.)
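
A minimal Python sketch of this fixed-exponent alternative (again with
synthetic free-fall data): regress y on t^2 with the line forced through the
origin, so the slope is half the acceleration.

def slope_through_origin(x, y):
    """Least-squares slope of y = m*x with zero intercept."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

g = 9.8
t = [i / 30.0 for i in range(1, 121)]
y = [0.5 * g * ti**2 for ti in t]
m = slope_through_origin([ti**2 for ti in t], y)
print(round(m, 2), round(2 * m, 2))   # slope ~4.9 m/s2, acceleration ~9.8 m/s2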

I look forward to comments from the group.

Thanks.