
Re: Kinematics



I wrote:

>>The intermediate-value theorem says that there must
>>be *some* t in the interval [t1,t2] where the velocity
>>equals Vav. Similarly there must be *some* x in the
>>interval [x1,x2] where the velocity equals Vav. But
>>there is !!not!! any good reason to assume that the
>>special point is the middle of the t-interval or the
>>middle of the x-interval.

On 09/16/2003 12:34 PM, David Bowman wrote:
>
> How about this reason? The choice of the midpoint of the (t- or x-)
> interval minimizes the maximum error in the estimate of the location
> of this intermediate value where the Vav value is locally correct,
> and bounds it to be no more than 1/2 of the interval's width.

Whether that's a good "reason" or "explanation" depends
on what we're trying to explain.

Sure, the midpoint is in some sense a minimax estimate
of the location of the intermediate-value point.
However, my point remains: having an estimate of X
(minimax or otherwise) is not the same thing as
knowing X.

For that matter, the suggested estimate is not even
reliably the best estimate. For a strongly curving
function such as an exponential, the midpoint will
be systematically off to one side of the actual
intermediate-value point.

To repeat: picking a point in the interval requires
making an approximation. Without nontrivial additional
information, you don't know whether this is a good
approximation or not.
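
To make the exponential example concrete, here is a
quick Python sketch. (The velocity profile v(t) =
exp(t) and the interval [0,1] are merely illustrative
choices, nothing special about them.)

  import math

  # Illustrative example: velocity v(t) = exp(t) on [t1, t2],
  # so position is x(t) = exp(t) as well.
  t1, t2 = 0.0, 1.0

  # Average velocity over the interval: (x2 - x1)/(t2 - t1).
  v_avg = (math.exp(t2) - math.exp(t1)) / (t2 - t1)

  # Intermediate-value point: the t* where v(t*) = v_avg,
  # i.e. exp(t*) = v_avg.
  t_star = math.log(v_avg)
  t_mid = 0.5 * (t1 + t2)

  print(f"average velocity   Vav   = {v_avg:.4f}")   # 1.7183
  print(f"intermediate point t*    = {t_star:.4f}")  # 0.5413
  print(f"interval midpoint  t_mid = {t_mid:.4f}")   # 0.5000

The midpoint sits systematically on the low side of t*
for this (convex) velocity profile, and nothing in the
data tells you that unless you already know the shape
of the curve.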

Note: When we are doing theory, the choice of point
often doesn't matter, because we are going to take the
limit as the size of the interval goes to zero, and we
expect things to converge independently of how we
choose the representative point. But when it comes
to real data and/or numerical methods, we cannot
shrink the interval to zero, and the choice of
representative point remains entirely nontrivial.
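
As a sketch of that point, here is a toy Riemann-sum
comparison in Python (again using exp(t) purely as an
illustration). Left-endpoint, midpoint, and
right-endpoint choices all converge to the same limit
as the subintervals shrink, but at any finite size
they disagree:

  import math

  def riemann_sum(v, t1, t2, n, pick):
      # Sum v(representative point) * dt over n equal subintervals;
      # `pick` selects the representative point in each subinterval.
      dt = (t2 - t1) / n
      total = 0.0
      for i in range(n):
          a = t1 + i * dt
          total += v(pick(a, a + dt)) * dt
      return total

  v = math.exp
  exact = math.exp(1.0) - 1.0   # integral of exp(t) over [0, 1]

  for n in (4, 64, 1024):
      left  = riemann_sum(v, 0.0, 1.0, n, lambda a, b: a)
      mid   = riemann_sum(v, 0.0, 1.0, n, lambda a, b: 0.5 * (a + b))
      right = riemann_sum(v, 0.0, 1.0, n, lambda a, b: b)
      print(f"n={n:5d}"
            f"  left err={left - exact:+.2e}"
            f"  mid err={mid - exact:+.2e}"
            f"  right err={right - exact:+.2e}")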

In general:
-- numerical integration results in smooth functions,
but errors accumulate.
-- numerical differentiation results in non-smooth
functions, i.e. it magnifies whatever noise and
uncertainty there is in the data.
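
To see both effects with made-up data, here is a rough
Python sketch. (The signal x(t) = sin(t) and the noise
level are invented for illustration; nothing here comes
from real measurements.)

  import math
  import random

  random.seed(0)

  # Invented noisy "position data": x(t) = sin(t) plus
  # Gaussian measurement noise of size ~1e-3.
  dt = 0.01
  t = [i * dt for i in range(1000)]
  x = [math.sin(ti) + random.gauss(0.0, 1e-3) for ti in t]

  # Numerical differentiation (finite differences): the
  # noise gets divided by dt, so the velocity estimate
  # scatters on the order of 1e-3/dt, roughly 100x larger
  # than the noise in the data itself.
  v = [(x[i + 1] - x[i]) / dt for i in range(len(x) - 1)]
  v_err = max(abs(v[i] - math.cos(t[i] + 0.5 * dt))
              for i in range(len(v)))

  # Numerical integration (cumulative trapezoid): the
  # noise largely averages out, leaving a smooth curve
  # whose error drifts slowly (that is the accumulation).
  integral = [0.0]
  for i in range(len(x) - 1):
      integral.append(integral[-1] + 0.5 * (x[i] + x[i + 1]) * dt)
  i_err = max(abs(integral[i] - (1.0 - math.cos(t[i])))
              for i in range(len(t)))

  print(f"max error after differentiating: {v_err:.4f}")
  print(f"max error after integrating:     {i_err:.6f}")

The differentiated data are dominated by amplified
noise; the integrated data stay smooth, with a small
error that builds up gradually along the record.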

Experts lose sleep over issues like this. Anybody who
thinks the answer is obvious doesn't understand the
question.

A beloved reference for numerical methods is:
Forman S. Acton, _Numerical Methods That Work_
