
Re: [Phys-L] Half-Life measurement



On 10/15/21 7:31 PM, Paul Nord wrote:

> How do I interpret the uncertainty of each of your parameters?

Here is the normalized covariance matrix:

           f.amp     f.dk    s.amp     s.dk       bl
 f.amp    0.0107   0.0131  -0.0007   0.0028   0.0043
 f.dk     0.0131   0.0251  -0.0026   0.0094   0.0149
 s.amp   -0.0007  -0.0026   0.0031  -0.0078  -0.0137
 s.dk     0.0028   0.0094  -0.0078   0.0215   0.0364
 bl       0.0043   0.0149  -0.0137   0.0364   0.0625

Hypothetically, if this were a diagonal matrix, then each
diagonal entry would be the square of the (fractional) error
bar. In particular, the decay constants would have a relative
standard deviation of 15 or 16%, as you can easily verify.
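
(As a sanity check, here is a minimal numpy sketch of that arithmetic.
The matrix values are copied from above; the variable names are mine,
not from the actual fitting code.)

  import numpy as np

  # Normalized covariance matrix from the fit, parameter order
  # (f.amp, f.dk, s.amp, s.dk, bl).
  cov = np.array([
      [ 0.0107,  0.0131, -0.0007,  0.0028,  0.0043],
      [ 0.0131,  0.0251, -0.0026,  0.0094,  0.0149],
      [-0.0007, -0.0026,  0.0031, -0.0078, -0.0137],
      [ 0.0028,  0.0094, -0.0078,  0.0215,  0.0364],
      [ 0.0043,  0.0149, -0.0137,  0.0364,  0.0625],
  ])
  names = ["f.amp", "f.dk", "s.amp", "s.dk", "bl"]

  # If the matrix were diagonal, the square root of each diagonal
  # entry would be the fractional error bar on that parameter.
  for name, var in zip(names, np.diag(cov)):
      print(f"{name:6s} {np.sqrt(var)*100:5.1f} %")

This prints 15.8% for f.dk and 14.7% for s.dk, i.e. the 15-or-16%
figure quoted above.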

Non-hypothetically there are hellacious correlations. In
3 of the 5 columns, the diagonal element isn't even the
largest element in the column. This means the whole concept
of error bar is sketchy at best.

The smart way forward is to do a singular value decomposition
and look at the eigenvalues and eigenvectors. I have the code
to do that, but it's more work than I feel like doing at the
moment.
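
(Continuing from the sketch above: the actual code isn't shown here,
but the decomposition described could be done along these lines. For
a symmetric matrix the SVD and the eigen-decomposition coincide up
to signs.)

  # Eigen-decomposition of the symmetric normalized covariance matrix.
  # The eigenvalues are the variances along the principal directions;
  # the eigenvectors say which combinations of parameters are well
  # determined and which are poorly determined.
  eigvals, eigvecs = np.linalg.eigh(cov)   # ascending order

  for val, vec in zip(eigvals[::-1], eigvecs.T[::-1]):
      combo = "  ".join(f"{c:+.2f} {n}" for c, n in zip(vec, names))
      print(f"sigma = {np.sqrt(val):.3f} along {combo}")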

Trying to interpret the covariance matrix by eye involves
effort and expertise and some risk of misinterpretation. It
seems safe to say that the values for the slow component
are getting killed by the uncertainty in the baseline. There
isn't enough data to pin down what is background and what is
the tail of the slow decay. My prediction is that observing for
an additional day or two would bring a noticeable improvement.

It also seems safe to say that the uncertainty on the fast
component is limited by the small number of events.
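
(For Poisson counting statistics, the fractional uncertainty on a
quantity determined from N events scales roughly as 1/sqrt(N), so
e.g. 100 events gives on the order of 10% while 10,000 events gives
on the order of 1%. The only real cure is collecting more counts.)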

==============

Pedagogical remark: Although it's fun to think about ways of
improving the experiment ... there is also pedagogical value
in not improving it too much. It's good for the students to
get experience dealing with imperfect data.

R&D is virtually always an iterative process. You do preliminary
experiments. You build prototypes and pilot plants. You analyze
data from trial runs to figure out what needs improving. Then
you iterate.

It's good for students to see the internal steps in this process
... *not* just the finished product.

===

Here's a possible improvement.

I assume that irradiating the sample for longer yields a linear
increase in the slow component, but sharply diminishing returns
for the fast component, because it decays as fast as you make it.
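
(Here is a minimal sketch of the activation arithmetic behind that
assumption. The half-lives below are illustrative placeholders, not
values taken from the actual experiment.)

  import numpy as np

  # Induced activity at the end of an irradiation of length t_irr is
  # proportional to (1 - exp(-lam * t_irr)): nearly linear in t_irr
  # while lam*t_irr << 1, saturating once lam*t_irr >> 1.
  def saturation_fraction(half_life, t_irr):
      lam = np.log(2) / half_life
      return 1.0 - np.exp(-lam * t_irr)

  for t_irr in (20/60, 4, 24, 72):                  # hours
      fast = saturation_fraction(5/60, t_irr)       # hypothetical 5 min half-life
      slow = saturation_fraction(13.0, t_irr)       # hypothetical 13 h half-life
      print(f"t_irr = {t_irr:6.2f} h   fast {fast:6.1%}   slow {slow:6.1%}")

With those placeholder numbers the fast component is already ~94%
saturated after 20 minutes, while the slow component takes a few days
to approach saturation ... which is the point of splitting the
experiment as described below.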

So it may be advantageous to split the experiment: Cook one
sample for several days and use that to measure the slow
component. Cook another sample for only 20 minutes or so
and use that to measure the fast component.

Rationale: This helps each component stand out above the
background. Note that the slow component creates a background
for the fast component, which complicates the measurement.

Of all the tweaks discussed in recent days, I suspect this may
have by far the best cost/benefit ratio.


> Also: I like the idea of a cylindrical copper sample around the detector.
> However the primary decay is a beta. This is best observed through the end
> window on the tube.

I imagine "best" means "necessarily" in this case.

Even so, it might help to have a cap / cup / bowl shape so that
the emitter subtends the entire field of view of the end cap.

xxxxxxx
xxxx
xx
======================= xx
| xx
| xx
| xx
======================= xx
xx
xxxx
xxxxxxx

OTOH it might not help all that much. A beta coming in at a steep
angle might not be counted with any great efficiency.

Even better: They make "pancake" tubes, i.e. Geiger tubes that
are not tubular. The flat shape gives them increased sensitivity to
alphas and betas with decreased sensitivity to gammas. For this
task that might improve the signal-to-noise ratio. They cost more,
but not crazy more.