
Re: statistical fluctuations



The conclusion of 10 +/- 3.2 would still be justifiable, I
think, even if the background was created "all at once,"
for example, by an exposure to a neutron beam only
one day prior to our experiment. The only important
thing is to know EXACTLY how many pits belong to
the background of our particular chip. (We are not
subtracting a background from another chip, as we
would in using nuclear emulsions.)

The best way to perform the experiment would be to
photograph the pits and to print the pictures. Comparing
tracks on the two photos, we would know which pits (37
of them) were present after the first development and
which (10 in this illustration) appeared during our
experiment.
The uncertainty about 10, as in the case of a Geiger
counter, would be sqrt(10)=3.2. It is as if there were
no background at all.
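The arithmetic in this single-chip picture can be sketched numerically (a minimal illustration, assuming pure Poisson counting statistics; the counts are the ones from the example above):

```python
import math

# Single-chip method: photos taken before and after the run let us
# identify exactly which pits are new, so only the new pits count.
new_pits = 10  # pits that appeared during the experiment (illustration)

# For a Poisson-distributed count N, the standard deviation is sqrt(N).
sigma = math.sqrt(new_pits)

print(f"signal = {new_pits} +/- {sigma:.1f}")  # signal = 10 +/- 3.2
```

The 37 background pits drop out entirely because they are identified individually, not estimated statistically.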

Presumably the experiment is short enough to
assume that only a negligible number of new pits is
due to an additional background. Another possible
danger is that old pits may start disappearing after
more than two developments, as suggested by
Bernard. Preliminary experiments testing all of these
assumptions would be essential in a serious study.

The development of a CR-39 consists of etching
its surfaces. Etching along the latent tracks (and
other defects) is much faster than along smooth
surfaces. It would not be too hard to find out how
many times a single chip can be "developed"
before the old tracks start to disappear. According
to Oriani, the second development does not
destroy pits revealed by the first development.
My problem (last year) was to distinguish real
tracks from other surface defects. I found that some
microscopes are better than others in that respect.
But this is an issue of a systematic (not random)
error, I suppose.

A good experiment for a student project is to study
the effect of ventilation on the number of tracks due
to radon in a basement. Or to compare the situation
in the basement with the situation in another room.
Depending on the level of radioactivity, exposures
of CR-39 should last several weeks. This could be
a useful student exercise in dealing with
experimental errors.
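A student could work through the error analysis for such a comparison as follows (a minimal sketch; the track counts here are made up for illustration, not taken from any real exposure):

```python
import math

# Hypothetical counts from two several-week CR-39 exposures
# (illustrative numbers only).
tracks_basement = 120
tracks_upstairs = 60

# Each count carries a Poisson uncertainty of sqrt(N); the difference
# carries the two uncertainties added in quadrature.
diff = tracks_basement - tracks_upstairs
sigma = math.sqrt(tracks_basement + tracks_upstairs)

print(f"difference = {diff} +/- {sigma:.1f}")
print(f"significance = {diff / sigma:.1f} standard deviations")
```

With these invented numbers the basement excess is about 4.5 standard deviations, comfortably above statistical noise.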
Ludwik

On Saturday, Sep 6, 2003, Bernard Cleyet wrote:

Not exactly.

I think I understand L's reasoning and, if the missing info is what I
think it is, then his analysis is approx. correct. Assumptions: the
background is due to exposure from the time of manufacture to
development, and that time is long [slo boat from Eng. to NJ], say 37
days. Therefore, the background is one count per day, +/- ~0.2.

Suppose the time between the two developments is one day.
Then the counts are 10 +/- 3.2 minus 1 +/- 0.2 = 9 +/- ~3.2
(the two uncertainties add in quadrature).

I've assumed the radon, etc. from the ocean is the same as Ludwik's
basement where he performed the exposure and development, etc.
Obviously, a bad assumption. However, unless the plastic is well
shielded (two inches of Pb?), the quick 30k ft altitude may be worse
than the slo boat. When I fly next, I certainly will take one or more
along with my Soviet pocket G-M counter. [Have these plastics been
evaluated by exposure to energetic protons, deuterons, neutrons,
muons, etc.? Will they detect Compton electrons?]

Another point: how, if so, does the first development affect the
sensitivity of the plastic as a detector, and why can't one do a third
development?

I've discovered, with the help of the U. Washington lab. mgr. (Jason
Alferness), a method of separating some of the daughters of U. It
results in a "cute" decay experiment similar to the Cs => Ba ion
exchange "cow". Furthermore, this particular separation is incomplete
and results in one rather short half-life and a much longer, but
practical (I think), one. After I've made the apparatus and tested it,
I hope to permanently separate them and have a source of Alphas of
different energies.

Ludwik Kowalski wrote:

On Friday, Sep 5, 2003, referring to my illustration
(see below) Dan Crowe wrote:

The standard deviation is 9.2, which is the square
root of (47 + 37). The signal is the difference between
two measured values (47 - 37=10). By Poisson statistics,
the variance of each measured value equals the
measured value (47 and 37, respectively). The standard
deviation of each measured value is the square root of
the variance, or the square root of the measured value
(square root of 47 and square root of 37, respectively).
The variance of the difference between the measured
values is the sum of the variances of the measured values
(47 + 37 = 84). Therefore, the standard deviation of the
signal is the square root of the sum of the measured
values (square root of 84 = 9.2).
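Dan Crowe's derivation can be checked directly (a minimal sketch, using the two-chip counts from the illustration):

```python
import math

signal_plus_bg = 47  # chip exposed during the experiment
background = 37      # control chip (background only)

signal = signal_plus_bg - background  # 47 - 37 = 10

# For independent Poisson counts, the variance of each count equals
# the count itself, and the variances add when taking a difference.
variance = signal_plus_bg + background  # 47 + 37 = 84
sigma = math.sqrt(variance)

print(f"signal = {signal} +/- {sigma:.1f}")  # signal = 10 +/- 9.2
```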


In my illustration one chip recorded 47 tracks
(signal plus background) while another recorded
37 (background). I subtract and obtain signal
equal to 10. But I cannot say that the signal is 10,
because both 47 and 37 would fluctuate from
chip to chip. The probability that the true signal
is exactly 10 is very small in this example.



Now consider Oriani's method. (His numbers
were actually much larger; I made them small to
focus on fluctuations.) Suppose that, using his
method (one chip), I observe 37 tracks before the
experiment and 47 after the experiment. I conclude
that the signal is 10. True, I do not expect the next
experiment to yield the same signal, but the result
will always be positive or zero. Is it correct to say
that the standard deviation for the signal should
be sqrt(10) in this illustration? I am not certain.
Ludwik Kowalski



In other words a person using two separate chips: one
to measure (signal+noise) and another to measure the
(noise only) should report that the signal is 10 +/- 9.2.
This is not a very convincing argument that the signal
is real. Most likely it is an illusion due to statistical
fluctuations affecting each independent measurement.

But now consider a person using only one chip, as
Oriani did. After the first development the (background
only) is 37, after the second development the (background
+ signal) is 47. The second person, if I am correct, should
report that the signal is 10 +/- 3.2. This is a much more
convincing argument that the signal is real; it differs
from the null result by about three standard deviations.
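The contrast between the two methods can be put side by side numerically (a minimal sketch, using the illustrative counts from the thread and assuming Poisson statistics throughout):

```python
import math

signal = 10

# Two-chip method: both independent counts (47 and 37) fluctuate,
# so their Poisson variances add.
two_chip_sigma = math.sqrt(47 + 37)   # ~9.2

# One-chip method: the 37 background pits are identified individually
# on the first photo, so only the 10 new pits fluctuate.
one_chip_sigma = math.sqrt(10)        # ~3.2

print(f"two chips: {signal} +/- {two_chip_sigma:.1f} "
      f"({signal / two_chip_sigma:.1f} sigma)")
print(f"one chip:  {signal} +/- {one_chip_sigma:.1f} "
      f"({signal / one_chip_sigma:.1f} sigma)")
```

The same raw counts give roughly a 1-sigma result with two chips but a 3-sigma result with one chip, which is the whole point of the comparison.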

In both cases I am defining the +/- error bar as one
standard deviation. Am I correct in saying that the second
method is much better than the first when a conclusion has
to be made on the basis of a very limited number of counts,
as in this invented illustration?
Ludwik Kowalski