
# Re: [Phys-L] Half-Life measurement

- David Bowman <david_bowman@georgetowncollege.edu>
To: Phys-L@Phys-L.org
Wed, Oct 13 at 8:07 AM

JSD mentioned something that prompted me to discuss a related tangent. Although I'm certain he is fully aware of it, it is possible that others here might not be.

> There is no need to question it, because for present purposes
> it's immaterial. If all you are interested in are the half-life
> values, you could shift every time in the data set by some
> arbitrary amount and the results would be the same.
>
> The fit only cares about the number of events in a given
> interval, and the length of the interval is invariant with
> respect to a time shift. Both ends of the interval get shifted.
> This is an example of gauge invariance.
>
> In the exponentials, shifting the time changes the prefactor out
> front, but doesn't affect the shape of the curve or the
> half-life.
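A minimal numerical sketch of this time-shift invariance, using synthetic data and an illustrative mean life of my own choosing (not the actual data set):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simple exponential decay model: amp * exp(-t / tau).
def decay(t, amp, tau):
    return amp * np.exp(-t / tau)

# Synthetic decay curve with mean life tau = 10 (arbitrary units).
t = np.linspace(0.0, 50.0, 200)
y = 1000.0 * np.exp(-t / 10.0)

# Fit the original times, then fit the same counts with every time
# shifted by 7 units.
(_, tau_fit), _ = curve_fit(decay, t, y, p0=(900.0, 8.0))
(_, tau_shifted), _ = curve_fit(decay, t + 7.0, y, p0=(2000.0, 8.0))

# The fitted half-life is unchanged; only the prefactor absorbs the shift.
print(tau_fit * np.log(2), tau_shifted * np.log(2))
```

Both fits return the same mean life (and hence half-life); the shift is soaked up entirely by the amplitude, exactly as described above.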

The tangent concerns a common cause of translational invariance in some physical variable (e.g. time).  If a phenomenon is described by a differential equation in which the independent variable does not itself appear explicitly, then the solution set of that DE exhibits a translational gauge invariance in that independent variable, and one of the integration constants from integrating the DE is an arbitrary translational offset in the value of that independent variable, whose precise value is determined by the initial and/or boundary conditions for the problem.  (In addition, if the DE happens to be an Euler-Lagrange equation resulting from the extremization of an action functional, then Noether's theorem guarantees the existence of a function of the solution, e.g. the energy, whose value remains constant over the domain of the independent parameter for which the DE is applicable.)

For instance, in the case of Newton's 2nd law of classical mechanics, if the generalized forces acting on a system (& its masses) have no explicit time dependence, i.e. they depend only on the current positions & velocities of the coordinates, then the system's solution set has a built-in time-translation invariance, so that the choice of when to start one's clock is left to the observer and the chosen initial conditions, rather than having it dictated by the DE.  In addition, if those generalized forces are gradients of a potential (or at least do no non-conservative work), then the system also has a conserved energy function.
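A toy numerical check of the energy-conservation half of this, using a simple harmonic oscillator x'' = -x with unit mass and unit spring constant (my choice of example, not from the thread):

```python
import numpy as np

# Leapfrog (kick-drift-kick) integration of x'' = -x.  The force law has
# no explicit time dependence, so solutions started at different clock
# times are just time-translated copies of one another, and the energy
# E = v**2/2 + x**2/2 should stay (very nearly) constant.
def integrate(x0, v0, dt=1e-3, steps=10000):
    x, v = x0, v0
    for _ in range(steps):
        v += 0.5 * dt * (-x)   # half kick
        x += dt * v            # drift
        v += 0.5 * dt * (-x)   # half kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = integrate(x0, v0)

E_start = 0.5 * v0**2 + 0.5 * x0**2
E_end = 0.5 * v1**2 + 0.5 * x1**2
print(E_start, E_end)
```

The energy at the end of the run matches the initial energy to within the integrator's small discretization error, as Noether's theorem promises for a time-translation-invariant Lagrangian.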

Dave Bowman
========================================================================
Dave, I see that you and John have been describing the invariance of the data under a time shift, so I had better describe a situation where the time origin merits more consideration. If students have generated a time sequence which begins at some time interval, say 0.5 minutes, and continues from there, it is not unreasonable to suppose that a variable that aggregates succeeding values may start with zero events at time zero. In view of the Poisson statistics, it may even make sense to insert a data pair whose dependent variable is negative, however improbable, at time zero. If such a data pair is inserted in the observations, it may lead to different results from the "ungardened" data, it seems to me.
But on to the point of this note.  Paul Nord was able to demonstrate a Bayesian estimator for five variables, given a time series in one variable which observes the accumulation of Geiger tube counts with time. It is noteworthy that his estimates for the half-lives, 4.674 +/- 0.412 minutes and 765.68 +/- 74.73 minutes, compare well with published figures of 5.12 minutes and 762 minutes (12.7 hr).
I thought I would try a Poor Man's Bayesian estimator, in this way.
If I wish to estimate the five variables of

count = activity1*(1 - exp(-time/tau1)) + activity2*(1 - exp(-time/tau2)) + b*time

given the dataset which Paul provided, I would like to take advantage of prior knowledge, just as Thomas Bayes advocated. It is quite certain that I can find a published value for one of these variables - let us say tau1, the mean lifetime of the short-lived isotope, which is given as 7.387 minutes, corresponding to a half-life of 5.12 minutes.
If I replace my former model with

count = activity1*(1 - exp(-time/7.387)) + activity2*(1 - exp(-time/tau2)) + b*time,

my non-linear curve fit provides values for the remaining four variables, namely: activity1 ~ 778.8, activity2 ~ 49432, tau2 ~ 1133 min, and b (the background) ~ 13.604 cpm.
It happens that this value for the longer tau2 is 3% higher than the published value of tau2 = 1099.3 min (half-life = 762 min, or 12.7 hr), compared with Nord's Bayesian estimate of tau2 = 1104.6 min, which is less than 0.5% in error.
Let us see now what would result if I pick instead my prior knowledge of tau2, given as 1099.3 min (762.0 min half-life). My non-linear curve fit now shows activity1 ~ 779.68, activity2 ~ 47441, tau1 ~ 7.544 min, and b (the background) ~ 14.24 cpm.
It happens that this value for the shorter tau1 is 2% higher than the published value of tau1 = 7.387 min (half-life = 5.12 min), compared with Nord's Bayesian estimate of tau1 = 6.743 min, which is 9% in error.
Amusingly, the overall accuracy of this barbaric 'Bayesian' is better than Paul Nord's results. I expect this is simply a matter of chance in this case! It seems to indicate a background between 13.6 and 14.24 cpm, which is a little lower than recent unshielded observations of mine.
(Just noticed Paul's second dataset. The proof of my barbaric pudding is in the eating.....)