
Re: discretizing derivatives



At 07:25 AM 2/12/01 -0500, Prof. John P. Ertel (wizard) wrote:
> I regularly do this type of calculation and so will proffer the method
> that I have always used. I believe it to be correct (or else, why would I
> use it) but it also seems that there must be a more mathematically
> rigorous method. I do know that this method leads to proper normalization
> (always handy if you want meaningful numbers).
>
> 1) during the numerical integration, make the 1st step 1/2 my initial step
> size and make all subsequent steps (except for the last one in a "peak")
> equal to the initial step size;

I can't understand this. I would have thought that "1st step" is
synonymous with "initial step" so I don't see how one could be half the
size of the other.

> 2) always integrate on increasing slope
> (L2R if +value & +slope,
> R2L if +value & -slope,
> L2R if -value & +slope,
> R2L if -value & -slope);

This doesn't make much sense unless you know in advance that things are
monotone. For a general-purpose integrator, this would be quite an
impediment. I suppose everything is _locally_ monotone, but figuring out
"which regions are which" would be more expensive than solving the problem
in other ways.

> 3) add the +areas separately (from the -areas) adding small terms to large
> (this is only necessary if there are multiple MAX's and MIN's);

This, and the previous item, sound like defenses against roundoff errors
during integration.
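To see why "adding small terms to large" matters, here is a sketch of my own (not from the original post) showing the effect in single precision: tiny terms added after a large term fall below half an ulp of the running sum and are rounded away entirely, while adding them first preserves their contribution.

```python
import numpy as np

# 100,000 tiny terms (each ~5e-5) plus one large term (1e4).
# In float32, ulp(1e4) is about 9.8e-4, so each tiny term is
# smaller than half an ulp of the large term.
rng = np.random.default_rng(0)
tiny = (rng.random(100_000) * 1e-4).astype(np.float32)
big = np.float32(1e4)

def accumulate32(values):
    """Naive left-to-right sum, forced to stay in float32."""
    s = np.float32(0.0)
    for v in values:
        s = np.float32(s + v)
    return float(s)

small_first = accumulate32(list(tiny) + [big])  # tiny terms survive
big_first = accumulate32([big] + list(tiny))    # tiny terms all round away

# Double-precision reference for comparison.
reference = float(big) + float(tiny.astype(np.float64).sum())
```

With small-to-large ordering the result stays close to the double-precision reference; with large-first ordering the entire contribution of the tiny terms (about 5 units here) is lost.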

There are much more wizardly ways of defending against this. For starters, if the
data has any semblance of continuity, you can take the hierarchical
approach (which could also be called the "fractal" or "wavelet" approach)
which is to add pairs of neighboring elements, then combine the pairs
two-by-two, and so on. This means you are always adding things of roughly
equal magnitude.
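The hierarchical scheme described above can be sketched as a simple recursive pairwise sum (my own illustration): split the list in half, sum each half the same way, and add the two results, so each addition combines numbers of roughly equal magnitude. Worst-case roundoff grows like O(log n) instead of O(n) for a left-to-right sum.

```python
def pairwise_sum(xs):
    """Sum a sequence hierarchically: combine neighbors, then pairs
    of pairs, and so on.  Each addition involves two partial sums of
    roughly equal magnitude, which limits roundoff accumulation."""
    if not xs:
        return 0.0
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])
```

For what it's worth, production tools use related ideas: NumPy's `numpy.sum` uses pairwise summation internally, and Python's `math.fsum` goes further and tracks exact partial sums.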

Meanwhile, with any processor made in the last 15 years, the cost of going
to double precision is small, and doing so makes roundoff problems many
orders of magnitude less severe.
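A quick illustration of that last point (mine, not from the post): run the same naive accumulation in single and in double precision. Adding 0.1 a hundred thousand times should give exactly 10,000; float32 drifts visibly, float64 stays close to exact.

```python
import numpy as np

def running_sum(n, dtype):
    """Naively add 0.1 to an accumulator n times, rounding the
    accumulator to the given precision after every addition."""
    s = dtype(0.0)
    step = dtype(0.1)
    for _ in range(n):
        s = dtype(s + step)
    return float(s)

s32 = running_sum(100_000, np.float32)  # drifts noticeably from 10000
s64 = running_sum(100_000, np.float64)  # error is negligible
```

The float64 result differs from 10,000 by roughly a part in 10^11; the float32 result is off by whole units, since near 10,000 a float32 can only resolve steps of about 1e-3 and the per-addition rounding errors are systematic.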

Beloved reference:
Acton, _Numerical Methods That Work_
http://www.amazon.com/exec/obidos/ASIN/0883854503/