
Re: [Phys-L] Half-Life measurement



Regarding Paul's outstanding question:

... But my question remains: what is the best we can hope to
measure from this set of data?

The answer depends on the particulars of each experimental situation. There are a number of factors that affect it, most of which have already been discussed. Among these are the sheer amount of data (more data restrict the model parameter ranges and uncertainties better; less data, not so much), the intrinsic noise sources in the experiment and equipment, and the dependencies and near-dependencies among the model's parameters, marked by their strong mutual correlations. A rough numerical sketch of the first point follows below.
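
Here is a rough Python sketch of my own (not part of Paul's data or setup) just to illustrate how the amount of data limits the half-life uncertainty. The decay model, the "true" parameter values, the time range, and the counting-noise model are all assumed purely for illustration.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, tau):
    # Simple exponential decay model: N(t) = N0 * exp(-t / tau)
    return n0 * np.exp(-t / tau)

rng = np.random.default_rng(0)
true_n0, true_tau = 1000.0, 5.0           # assumed "true" values, arbitrary units

for n_points in (10, 100, 1000):          # more data -> tighter parameter ranges
    t = np.linspace(0.0, 20.0, n_points)
    counts = rng.poisson(decay(t, true_n0, true_tau))   # counting (Poisson) noise
    popt, pcov = curve_fit(decay, t, counts.astype(float), p0=[800.0, 4.0])
    tau, tau_err = popt[1], np.sqrt(pcov[1, 1])
    print(f"N = {n_points:5d}:  t_1/2 = {np.log(2)*tau:.3f} +/- {np.log(2)*tau_err:.3f}")

Running something like this shows the fitted half-life uncertainty shrinking as the number of points grows, all else being equal.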

However, there is another factor at play here. It is an intrinsic desensitization of the model parameter values, and the associated increase in their uncertainties, due to the mere fact that the process of fitting data to a model is, at base, an extremization problem. There is some function of (possibly) multiple parameters that is to be minimized or maximized (e.g. an a posteriori likelihood, or a cross entropy, etc.). When that function is so extremized, the typical situation is that the extremum lies at a smooth regular point, so the function's first derivatives w.r.t. those parameters vanish there. This means that at the optimal extremal point any 1st order change in the parameter values results in only a 2nd order change in the optimizing function. Likewise, a 1st order change or uncertainty in the function being optimized is associated with a 1/2 order change in the corresponding parameter values.

IOW, the uncertainties in the parameters scale like the *square root* of the uncertainties in the function being optimized by the experiment's statistics and model. So if the data can narrow down the optimizing function to a range of 1%, then the parameter values typically will be constrained to no better than 10%, and that's even before those nasty correlations and near parameter dependencies put big multipliers on that square root function.
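
The same square-root behavior is easy to see numerically. Here is another rough Python sketch of my own (again, all the model choices and numbers are assumptions for illustration only): scan a sum-of-squared-residuals objective over the decay constant, then widen the tolerance on the objective and watch how the allowed parameter window grows.

import numpy as np

def decay(t, n0, tau):
    return n0 * np.exp(-t / tau)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 200)
data = decay(t, 1000.0, 5.0) + rng.normal(0.0, 10.0, t.size)

def objective(tau):
    # Sum of squared residuals with n0 held fixed, so only tau is varied
    return np.sum((data - decay(t, 1000.0, tau)) ** 2)

taus = np.linspace(4.0, 6.0, 2001)
vals = np.array([objective(x) for x in taus])
f_min = vals.min()

# Raise the allowed value of the objective above its minimum by larger and
# larger fractions and watch how far tau is then allowed to wander:
for frac in (0.01, 0.04, 0.16):
    allowed = taus[vals <= (1.0 + frac) * f_min]
    half_width = 0.5 * (allowed.max() - allowed.min())
    print(f"objective within {frac:4.0%} of its minimum:  tau half-width ~ {half_width:.3f}")

# Quadrupling the tolerance on the objective roughly doubles the tau window,
# i.e. the parameter uncertainty scales like the square root of the tolerance.

Because the objective is quadratic near its minimum, each quadrupling of the tolerance roughly doubles the allowed tau range, which is just the square-root scaling described above.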

Dave Bowman