
Re: [Phys-L] Half-Life measurement



On 10/12/21 2:03 PM, Paul Nord wrote:

The students told me that
t=0 is when they put the sample in, but you may choose to question that
assumption.

There is no need to question it, because for present purposes
it's immaterial. If all you are interested in are the half-life
values, you could shift every time in the data set by some
arbitrary amount and the results would be the same.

The fit only cares about the number of events in a given interval,
and the length of the interval is invariant with respect to a
time shift. Both ends of the interval get shifted. This is an
example of gauge invariance.

In the exponentials, shifting the time changes the prefactor out
front, but doesn't affect the shape of the curve or the half-life.
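
To spell it out: write the rate from one species as R(t) = A exp(-λt).
Shifting every timestamp by a constant Δ gives

    R(t+Δ) = [A exp(-λΔ)] exp(-λt)

so the shift is absorbed into the prefactor, while the decay
constant λ, and hence the half-life ln(2)/λ, is untouched.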

If you wanted to measure something else, such as the neutron
cross section and/or abundance of the various species, then
the time offset would matter ... but that's the answer to a
whole different question.

The data can be found here:
https://github.com/paulnord/bayesian_analysis/blob/main/my_data.csv
The columns are end_time, total_counts. The number of counts was recorded
every 30 seconds for the first 15 minutes. Then, every 12 hours, two
readings were taken, spaced two minutes apart.

Let's be clear: There is a clock and a counter. It's a cumulative
counter; it never gets reset during the course of the experiment.
Similarly the clock never gets reset; it shows the cumulative time
since the start of the experiment.

So to my way of thinking, I have a different notion of what a
"reading" is. A reading is a snapshot of the cumulative time
along with the cumulative number of events. If you take
another snapshot a minute or two later, that's another reading.
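
As a minimal sketch of that bookkeeping (pandas; assuming the
columns are end_time and total_counts as described above, and
assuming end_time is in seconds):

    import pandas as pd

    # Each row is a reading: cumulative time alongside cumulative counts.
    df = pd.read_csv("my_data.csv")

    # Differences between consecutive readings give, for each interval,
    # its length and the number of events that fell in it.
    dt = df["end_time"].diff()        # interval length (s); NaN on row 0
    dn = df["total_counts"].diff()    # events in the interval
    rate = dn / dt                    # average rate (events/s) per interval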

I hope nobody is surprised that my first move was to *look* at
the data. Here is what I've got so far:
https://www.av8n.com/copper-decay/img48/rate-v-time-1.png
or equivalently
https://www.av8n.com/copper-decay/pdf/rate-v-time-1.pdf

A reading (in my sense of the word) corresponds to the end of
one horizontal bar and the beginning of the next. The horizontal
extent of the bar is the interval between readings, and the
height of the bar is the average rate (events per second) during
the interval.
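
A plot of the same general kind can be had along these lines
(matplotlib, continuing the sketch above; the styling of my
actual plot differs):

    import matplotlib.pyplot as plt

    # drawstyle="steps-pre" holds each rate constant back across its
    # interval, i.e. one horizontal bar per interval between readings.
    plt.plot(df["end_time"], rate, drawstyle="steps-pre")
    plt.yscale("log")        # exponential decay becomes a straight line
    plt.xlabel("time (s)")
    plt.ylabel("rate (events/s)")
    plt.show()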

The first few minutes of data are off scale in this plot.

There are two types of bars, the Laurels and the Hardys. The
Laurels are so skinny that they are hard to see, so I circled
them in red.

I'm about to say two contrasting things. It's a contrast, not
a contradiction. There is a difference between what's optimal
for a quantitative computer-based analysis and what's optimal
for a quick and dirty eyeball analysis.

From the quantitative point of view, the ideal would be to
have a huge number of Laurels. That is, either timestamp every
single event, or if that's not possible, bin them into the
shortest possible bins.

From this point of view, the main problem is that there are
too few Laurels.

From the eyeball point of view, in this part of the experiment
(excluding the first few minutes) all the information is in the
Hardys. The Laurels are too few and too short. The number of
events in each Laurel is so small that the percentage uncertainty
is huge. We're getting killed by the fluctuations, by the
Poisson statistics at low intensity.
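
To put numbers on it: for a Poisson process the fractional
uncertainty on a count N is

    σ/N = √N/N = 1/√N

so a two-minute Laurel that catches (say) 10 events is uncertain
at the 1/√10 ≈ 32% level, while an interval containing 10,000
events is good to about 1%. (Those two counts are illustrative
numbers, not taken from the data.)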

From the eyeball point of view, the Hardys are more informative.
Lots less noise. You can see this in the plots. The Laurels are
scattered all over the place, while the Hardys exhibit a nice
exponential decay.

The existing two-minute Laurels are not very useful. They
are too few to be useful for the quantitative analysis, and
too short to be useful for the eyeball analysis.

If I were doing it, I would take a snapshot of the clock
and counter every hour during the work day. No need to
hang around for two minutes. Just take the one snapshot.

Either that or hook up a Raspberry Pi via a USB dongle and
let it timestamp every single event.
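
A minimal sketch of the Raspberry Pi approach (via GPIO rather
than USB, just to keep the sketch short; the pin number and the
RPi.GPIO library are my choices, not requirements):

    import time
    import RPi.GPIO as GPIO

    PULSE_PIN = 17  # hypothetical choice of input pin

    def stamp(channel):
        # One timestamp per detected event, in nanoseconds.
        print(time.monotonic_ns())

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PULSE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    GPIO.add_event_detect(PULSE_PIN, GPIO.RISING, callback=stamp)

    try:
        while True:
            time.sleep(1)   # events are handled in the callback thread
    finally:
        GPIO.cleanup()

Callback latency in Python limits you to something like
millisecond timestamps, which is plenty for counting decays.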

FWIW if you decide you want more channels and/or
timing resolution down to a few nanoseconds, you
can use an FPGA: 20 bucks including the USB interface:
FPGA Core Board Development Board C402 ALTERA CYCLONE IV EP4CE6
https://www.aliexpress.com/item/1005001975899606.html

The firmware to get this to perform timestamping is
floating around somewhere. Getting up to speed on FPGAs
is a lot more work than getting up to speed on a Raspberry
Pi, which is a rather conventional, prosaic computer, so
the FPGA is not the first choice for most people. For
mission-critical applications in the research lab, though,
it's really nice: you could spend many thousands of dollars
to get something much less capable.

You can always convert finely binned (or timestamped!) data
to coarsely binned data, but not vice versa.
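
For instance (numpy; the timestamp file is hypothetical):

    import numpy as np

    # One entry per event (seconds since start) -- the finest possible
    # "binning". Coarse bins of any width fall out of a histogram.
    timestamps = np.loadtxt("timestamps.txt")
    bin_width = 3600.0     # one-hour bins, say
    edges = np.arange(0.0, timestamps.max() + bin_width, bin_width)
    counts, _ = np.histogram(timestamps, bins=edges)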


Separate issue: I am not surprised to hear much discussion
of the background. I haven't done the fit, but I would be
completely unsurprised to find that the background had a
huge effect on the estimated half-life, with huge correlations
between the fitted background and the fitted half-life. If you
knew the background, you could pin down the half-life to a
whole lot better than 10%.
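
The model I have in mind is one decaying species on a constant
background. A sketch of such a fit (scipy, continuing the pandas
sketch above; the starting guesses are made up, and fitting the
integrated counts per interval sidesteps the question of what
"the" rate is during a 12-hour gap):

    import numpy as np
    from scipy.optimize import curve_fit

    # Expected counts in [t1, t2]: integral of A*exp(-lam*t) + B.
    def expected_counts(edges, A, lam, B):
        t1, t2 = edges
        return (A/lam) * (np.exp(-lam*t1) - np.exp(-lam*t2)) + B*(t2 - t1)

    t = df["end_time"].to_numpy()
    counts = np.diff(df["total_counts"].to_numpy())
    sigma = np.sqrt(np.maximum(counts, 1.0))      # Poisson error bars
    p0 = (1.0, np.log(2)/(12.7*3600), 0.01)       # made-up guesses
    (A, lam, B), cov = curve_fit(expected_counts, (t[:-1], t[1:]),
                                 counts, p0=p0, sigma=sigma,
                                 absolute_sigma=True)

    corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
    print("half-life (h):", np.log(2)/lam/3600)
    print("lam-B correlation:", corr[1, 2])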

There was some mention of measuring the background "separately".
My suggestion would be to make a big effort to measure it, but
*not* separately. I don't know how much demand is pressing on
the apparatus, but if at all possible I recommend running out
the baseline for at least 72 hours. The way I figure it:
-- after 24 hours, 27% of the ⁶⁴Cu is still present.
-- after 48 hours, 7% of the ⁶⁴Cu is still present.
-- after 72 hours, 2% of the ⁶⁴Cu is still present.
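
Those numbers are just the 12.7-hour half-life of ⁶⁴Cu at work:

    fraction remaining = 2^(-t / 12.7 h)

    2^(-24/12.7) ≈ 0.27    2^(-48/12.7) ≈ 0.07    2^(-72/12.7) ≈ 0.02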

Running out the baseline will give you a lot more leverage to
determine the background. Also, if there is some impurity
isotope present, there is no hope of untangling it unless you
run out the baseline.
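
That is, with an impurity present the model becomes

    R(t) = A₁ exp(-λ₁t) + A₂ exp(-λ₂t) + B

and over a short run the slow component A₂ exp(-λ₂t) is nearly
indistinguishable from the constant B; only a long tail separates
the two.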