
Re: Curve Fit Stats



On Thu, 12 Sep 1996, Richard L. Bowman wrote:


> Regarding your suggestion of reporting the standard deviation of the
> slope and y-intercept: these sound like good things to know, but how
> does one calculate them? It cannot simply be the sum of the squared
> errors divided by one less than the number of data points (as one
> would do with a simple list of single values from different trials).
> The degrees of freedom need to be accounted for, too. And what happens
> when one does nonlinear fits, such as log-log and semi-log plots or
> exponential fits?
>
> I think we probably need R^2 and much more, PLUS a good reference on
> how to interpret the whole thing!!


There are standard ways to calculate and interpret such things. A
good reference is Bevington and Robinson, "Data Reduction and Error
Analysis for the Physical Sciences," 2nd ed, 1992, McGraw Hill. The
book even comes with Pascal source code and a funny black thing, 5.25
inches on a side. It's square. I don't know what to do with it. ;)

The std dev of the slope and intercept (or any fitted parameter) can
be estimated (just as we estimate, but never _know_, the standard
deviation of the mean) using the standard deviation of the residual
scatter as a measure of the y uncertainty. The number of degrees of
freedom is accounted for in the standard procedures. Recall that the
whole least-squares approach assumes that the x uncertainties are
negligible.
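The standard procedure is short enough to sketch. The following Python
(an illustration, not the book's Pascal code) fits y = m*x + b and
estimates the standard errors of m and b from the residual scatter,
using n - 2 degrees of freedom because two parameters are fitted:

```python
import math

def linfit_with_errors(x, y):
    """Least-squares line y = m*x + b, with standard errors of m and b.

    Assumes negligible x uncertainty; the residual scatter (with
    n - 2 degrees of freedom) estimates the y uncertainty.
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    m = sxy / sxx
    b = ybar - m * xbar
    # Residual variance: divide by n - 2, not n - 1 (two fitted parameters)
    s2 = sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    se_m = math.sqrt(s2 / sxx)
    se_b = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return m, b, se_m, se_b
```

For data lying exactly on a line the residual variance, and hence both
standard errors, come out zero, which is a quick sanity check.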

Even when fitting nonlinear equations with a search procedure, you can
estimate these standard errors from the curvature of the minimization
function (e.g., chi-square) near its minimum; the slope there is zero
by definition.
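A minimal sketch of the curvature idea, for a hypothetical
one-parameter model y = A*x with known y uncertainty. Near the minimum,
chi2(A) is approximately chi2_min + (A - A_min)^2 / sigma_A^2, so
sigma_A = sqrt(2 / (d^2 chi2 / dA^2)), and the second derivative can be
estimated with a finite difference:

```python
import math

def chi2(A, x, y, sigma_y):
    # Chi-square for the one-parameter model y = A * x
    return sum(((yi - A * xi) / sigma_y) ** 2 for xi, yi in zip(x, y))

def curvature_error(A_min, x, y, sigma_y, h=1e-4):
    """Standard error of A from the chi-square curvature at its minimum.

    Uses a central finite difference for d^2(chi2)/dA^2 and
    sigma_A = sqrt(2 / curvature).
    """
    curv = (chi2(A_min + h, x, y, sigma_y)
            - 2.0 * chi2(A_min, x, y, sigma_y)
            + chi2(A_min - h, x, y, sigma_y)) / h ** 2
    return math.sqrt(2.0 / curv)
```

For this model the minimum is at A_min = sum(x*y) / sum(x*x), and the
numerical result matches the analytic sigma_A = sigma_y / sqrt(sum(x*x)),
since chi-square is exactly quadratic in A here.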

Once again, my question is WHY do we need R^2? What do you DO with
it? I do know what to do with standard errors, but not these other
statistics.

JEG
John E. Gastineau
304 296 1966
Morgantown, WV
http://www.imagixx.net/~jgastin
email: jgastin@imagixx.net