
Re: [Phys-L] Coincidence Statistics



On 11/20/23 5:10 PM, I wrote:

So the whole idea of "rate of coincidences" hurts my brain.
Let's talk more about that.

In a single channel (i.e. with no notion of coincidences) the expected number 
of flashes per bin is proportional to the bin-size. So the expected rate, i.e. 
flashes per unit time, is scale-invariant: it is independent of bin-size, and 
it tells us something about the physics.
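To make that concrete, here is a minimal simulation sketch in python/numpy 
(the rate and duration are made-up values): counts per bin grow with the 
bin-size, while counts per bin divided by bin-size stays put.

  import numpy as np

  rng = np.random.default_rng(0)
  rate = 100.0       # true event rate, per second (assumed)
  t_total = 1000.0   # observation time, seconds (assumed)

  # One realization of a Poisson process: exponential
  # inter-arrival times, accumulated.
  gaps = rng.exponential(1.0/rate, size=int(1.2*rate*t_total))
  times = np.cumsum(gaps)
  times = times[times < t_total]

  for tau in (0.001, 0.01, 0.1):     # bin sizes, seconds
      n_bins = int(t_total/tau)
      counts, _ = np.histogram(times, bins=n_bins, range=(0, t_total))
      # counts.mean() scales with tau; counts.mean()/tau stays ~100
      print(tau, counts.mean(), counts.mean()/tau)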

Now we consider coincidences. All along, and still now, we are talking about 
accidental coincidences: events that line up by chance, devoid of any physical 
connection. This is analogous to "optical double" stars in astronomy.

If we hold the two per-channel rates constant, the expected number of 
coincidences per bin scales like the *square* of the bin-size (when the 
bin-size is not too large). If we divide the coincidences per bin by the 
bin-size we get something that sorta looks like a rate, but it is !not! 
scale-invariant. It depends on the bin-size. It does not capture the 
underlying physics.

The physics is tied to coincidences per bin divided by bin-size squared. I 
don't know what that's called, but it is well-behaved: for accidental 
coincidences it comes out proportional to the product of the two per-channel 
rates.

If the bin-size gets too big, the square-law scaling breaks down: a bin can 
register at most one coincidence, no matter how many events land in it. This 
is called the "dead time" effect.

====================

Now suppose there are actual correlations in the physics. For example, you 
could have a reaction that simultaneously produces two products (positron 
annihilation, say, which emits two gamma rays). In this case the number of 
true pairs per bin will scale like the first power of the bin-size, and there 
will be a well-behaved rate.

Checking how the count scales with bin-size -- first power versus second power 
-- is one way of testing for true paired events.
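
As a sketch of that test (made-up numbers; the true pairs are planted by hand 
by putting the same time-stamp into both channels), fit the log-log slope of 
coincidences-per-bin against bin-size:

  import numpy as np

  rng = np.random.default_rng(2)
  t_total = 1000.0
  pairs = rng.uniform(0, t_total, 5000)   # true pairs, both channels
  a = np.sort(np.concatenate([pairs, rng.uniform(0, t_total, 20000)]))
  b = np.sort(np.concatenate([pairs, rng.uniform(0, t_total, 20000)]))

  taus = np.array([0.0002, 0.0005, 0.001, 0.002])
  per_bin = []
  for tau in taus:
      ia = np.unique((a/tau).astype(np.int64))
      ib = np.unique((b/tau).astype(np.int64))
      per_bin.append(np.intersect1d(ia, ib).size / (t_total/tau))

  slope = np.polyfit(np.log(taus), np.log(per_bin), 1)[0]
  print(slope)   # ~1 when true pairs dominate; ~2 for pure accidentals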

=============

FWIW the modern approach is to not use bins at all, but instead to time-stamp 
the events. (Think of it as picosecond-sized bins if you like.) Then calculate 
the cross-correlation to find the coincidences. I have software to do the 
calculation efficiently, even when the data is super-sparse.
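
The sketch below is not that software, just an illustration of the idea (the 
coincidence window is an arbitrary choice): with sorted time-stamps, binary 
search counts the pairs within a window in O(Na log Nb) time, which is cheap 
even for sparse data.

  import numpy as np

  def coincidences(a, b, window):
      # a, b: sorted 1-D arrays of time-stamps.
      # Count (a, b) pairs with |ta - tb| <= window.
      lo = np.searchsorted(b, a - window, side="left")
      hi = np.searchsorted(b, a + window, side="right")
      return int((hi - lo).sum())

  # Usage: two sparse channels, three pairs planted on purpose.
  rng = np.random.default_rng(3)
  a = np.sort(rng.uniform(0, 1e6, 500))
  b = np.sort(np.concatenate([rng.uniform(0, 1e6, 500),
                              a[:3] + 1e-7]))
  print(coincidences(a, b, window=1e-6))   # expect ~3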

This is simpler, conceptually as well as operationally. For one thing, you 
don't need to worry about true pairs that get missed because they straddle a 
bin-boundary.