
Re: [Phys-L] Half-Life measurement



John,

The residuals on your analysis look very good.
https://photos.app.goo.gl/uwwxSx4Dees9CGDa6
This is for the my_data.csv data.

How should I interpret the uncertainty of each of your parameters? Is that
information in the data table your software prints out?

Also: I like the idea of a cylindrical copper sample around the detector.
However, the primary decay is a beta, which is best observed through the
end window of the tube.

Paul


On Thu, Oct 14, 2021 at 7:19 PM John Denker via Phys-l <
phys-l@mail.phys-l.org> wrote:

OK, today I got a chance to run the copper data through my
least-limp fitting routine. I haven't had a chance to do
any serious checking of the results, but some folks may be
interested to see the preliminary results.

my_data.csv
fit returns: 4 i.e. XTOL_REACHED limp: 199.459
perdof: 4.98647

Normed: 1.15957 0.916396 1.12026 0.928348 0.788025

SI: 1.64457 0.0020677 0.736989 1.40744E-05 0.211614

½life: 1.64457 335.226 0.736989 49248.8 0.211614


In the last three rows, the numbers are
amp, dk (fast component)
amp, dk (slow component)
amp (baseline)

where each amplitude is the rate at time t=0.
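The model described above -- two exponential components plus a constant
baseline, with each amplitude quoted as the rate at t=0 -- can be sketched
as follows. The function itself is my reading of the description; the
parameter values plugged in are the SI-row numbers from the my_data.csv fit
quoted above.

```python
import math

def rate(t, amp_fast, dk_fast, amp_slow, dk_slow, base):
    """Model count rate (events/s) at time t (s): two decaying
    exponentials plus a constant baseline.  Each amplitude is that
    component's rate at t = 0, as described in the post."""
    return (amp_fast * math.exp(-dk_fast * t)
            + amp_slow * math.exp(-dk_slow * t)
            + base)

# SI-row parameters from the my_data.csv fit quoted above:
r0 = rate(0.0, 1.64457, 0.0020677, 0.736989, 1.40744e-5, 0.211614)
# At t = 0 the rate is just the sum of the three amplitudes.
```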

The first row is normalized as a multiple of whatever my
initial guess routine came up with. The routine uses
the book value for the dk rates, so you can immediately
see that the fitted values are off by 8% or so relative
to the book values.

The second row is SI units of events per second (amp)
and nats per second (dk).

The third row flips the dk field to give the half life
(seconds per octave).
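The dk-to-half-life conversion is t_half = ln(2)/dk (one "octave" being one
factor-of-2 decay). Checking it against the SI and ½life rows quoted above:

```python
import math

def half_life(dk):
    """Convert a decay rate in nats per second to a half-life
    (seconds per octave, i.e. seconds per factor-of-2 decay)."""
    return math.log(2) / dk

# Reproduces the ½life row from the SI row quoted above:
print(half_life(0.0020677))    # fast component, about 335.2 s
print(half_life(1.40744e-5))   # slow component, about 49249 s
```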

Limp means log improbability.
The limp per degree of freedom is around 4 or 5, which is
higher than you would like to see, but remember this is
maximum likelihood (max a priori) i.e. the probability of
the data given the model, which is not optimal. We would
much rather have max a posteriori, i.e. the probability of
the model given the data. So there are some normalization
constants running around loose. It's too soon to read much
into the limp values.

For the other data set:
sean_data.csv
Fit returns: 4 i.e. XTOL_REACHED limp: 192.156
perdof: 4.17731

Normed: 1.18838 0.972631 1.12941 0.929294 0.755249

SI: 2.52057 0.00219458 0.965569 1.40888E-05 0.239195

½life: 2.52057 315.844 0.965569 49198.6 0.239195


I haven't checked, but I assume there are hellacious correlations
between the parameters.

Early in this discussion Paul reported that students felt obliged
to make all sorts of ill-founded assumptions in order to analyze
the data.

I would emphasize that there are AFAICT verrry few significant
assumptions involved in my analysis. I assume the data is Poisson
distributed and then turn the crank.
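Turning the crank on the Poisson assumption means minimizing the log
improbability (negative log likelihood) of the observed counts given the
model rates. A minimal sketch, with hypothetical counts and rates:

```python
import math

def limp(counts, rates):
    """Poisson log improbability: -sum_i log P(n_i | mu_i), where
    P(n | mu) = mu**n * exp(-mu) / n!.  This is the quantity the
    fit minimizes; the math.lgamma term is constant in mu."""
    return sum(mu - n * math.log(mu) + math.lgamma(n + 1)
               for n, mu in zip(counts, rates))

# Hypothetical example: the limp is smallest when the model rates
# match the observed counts.
good = limp([10, 8, 6], [10.0, 8.0, 6.0])
bad  = limp([10, 8, 6], [14.0, 4.0, 9.0])
assert good < bad
```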

I put a tiny amount of cleverness into the routine that guesses
the /starting/ point for the optimization, but that should have
no effect whatsoever on the ending point.

The optimization was done using the BOBYQA algorithm (Powell's
method) as implemented in the 'nlopt' package, which is available
on all the major Linux distros and is reasonably well documented.

See also next message.
_______________________________________________
Forum for Physics Educators
Phys-l@mail.phys-l.org
https://www.phys-l.org/mailman/listinfo/phys-l