
Re: purposefully modifying a distribution to get...?



>>The x-ray emission events in time are described by a Poisson process.
>>This is straightforward to simulate using a computer. However,
>>practicalities (limits on memory, etc) require that the pulse height
>>analysis data be recorded into finite arrays....

>There are several features of this equipment that are unclear to me.
>I understand there is a data stream in time, of pulses which vary
>in amplitude. (correct?)

Yes. But just to be clear, the pulses are not delta functions. They
have well-defined shapes as a result of the instrument that spews
them. To answer several of your questions about the process and maybe
clear up my presentation:

A constant-energy electron beam either traverses the sample in a grid
pattern or is stationary.

If the beam is traversing, back-scattered or secondary electrons are
collected and used to create an image (your standard electron
microscope image). We are not concerned with these events.

Whether traversing or stationary, the beam impinging on the sample
creates x-rays, which are collected by a silicon detector. The
detector outputs current spikes which can be viewed as delta
functions. These signals go through a preamp and on to the mysterious
"x-ray pulse processor".

The x-ray pulse processor then spews a pulse train. All pulses are
the same width and shape. Their peaks are separated in time by the
same intervals as the original delta functions. Their amplitudes are
proportional to the x-ray energies. Since these spewed pulses are of
finite width, any that occur close enough together will overlap to
some degree. The result appears to be a single (misshapen) pulse with
an incorrect amplitude, because it is actually the superposition of
two smaller pulses. Traditionally, the x-ray pulse processor detects
such pileup events and rejects (does not spew) the entire malformed
pulse.
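
To make the pileup concrete, here is a toy sketch in Python (the
Gaussian shape and all numbers are assumptions for illustration, not
our instrument's actual pulse shape):

    import numpy as np

    def pulse(t, t0, amplitude, width=1.0):
        # Gaussian stand-in for the processor's fixed pulse shape (assumed).
        return amplitude * np.exp(-((t - t0) / width) ** 2)

    t = np.linspace(0.0, 10.0, 1000)
    # Two pulses whose separation is less than the pulse width:
    train = pulse(t, 4.0, 3.0) + pulse(t, 4.8, 2.0)
    # The sum has a single apparent peak near 4.3, matching neither
    # true amplitude (3.0 or 2.0): a pileup event.
    print(train.max())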

Our instrument records the pulse train with a high-speed A-to-D
converter. It is fast enough to do a pulse-height analysis on every
pulse that the processor spews.
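
As a rough illustration only (not what our hardware actually does),
per-pulse height analysis over the digitized samples can be as simple
as a local-maximum search:

    def pulse_heights(samples, threshold=0.5):
        # Record the height of every local maximum above a noise
        # threshold; a crude stand-in for the real height analysis.
        heights = []
        for i in range(1, len(samples) - 1):
            if (samples[i] > threshold
                    and samples[i] >= samples[i - 1]
                    and samples[i] > samples[i + 1]):
                heights.append(samples[i])
        return heights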

Here comes the issue: we are now going to record a large number of
spewed pulses at once and do pulse-height analysis on the whole batch
in one sweep, instead of one pulse at a time. And we are going to take
all pulses, not just the ones the pulse processor wants to give us (we
use a modified pulse processor whose rejection circuitry is bypassed).

When we capture an array (window) of, say, 1000 A-to-D values, it
might include 50 pulses. Because of finite memory, it is quite
possible that a pulse is already in progress at the beginning of the
window or has not finished by the end of it. In other words, the
pulse gets "chopped" by the method of acquisition and must be
rejected.

In simulating such a scenario, my program creates a pulse train that
is Poisson-distributed in time, with random amplitudes. If a pulse
happens to occur at or close to a window boundary, the program simply
removes it. This is the crux of my question and my previous
long-winded discussion: does their removal create a non-Poisson
distribution?
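
For reference, a bare-bones version of that simulation in Python (the
parameters are made up; this is a sketch, not my actual program):

    import numpy as np

    rng = np.random.default_rng()
    rate = 0.05        # mean pulses per A-to-D sample (assumed)
    window = 1000      # samples per captured window
    n_windows = 10000
    half_width = 10    # half the fixed pulse width, in samples (assumed)

    # Poisson process: cumulative sum of exponential inter-arrival times.
    gaps = rng.exponential(1.0 / rate,
                           size=int(rate * window * n_windows * 1.2))
    times = np.cumsum(gaps)
    times = times[times < window * n_windows]

    # Remove any pulse whose peak lands within half a pulse width of a
    # window boundary (the "chopped" pulses).
    pos = times % window
    kept = times[(pos > half_width) & (pos < window - half_width)]

    print("mean gap before:", np.diff(times).mean())  # ~1/rate = 20
    print("mean gap after: ", np.diff(kept).mean())   # slightly inflated

If I'm reasoning correctly, the survivors restricted to the interior
of any single window are still Poisson (a Poisson process restricted
to a sub-interval remains Poisson there); it is only the pooled
inter-arrival histogram that deviates from a pure exponential, since
gaps spanning the dead zones at the boundaries get stretched.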


>It is not clear if this time sequence is obtained by ramping up the
>incident energy of the electron beam over a point, or if the sequence
>represents an electron probe scanning along a direction of the sample.

The latter. Note though that the electron beam can be stationary. The
beam is unfocused enough that in a sample which is "dirty," x-rays
from several elements may be produced. Likewise, alloys, compounds,
and the like will produce x-rays from different elements. See

<http://www.4pi.com/teksupport/Rev10online/manual/29-edscal.htm>

for an idea of what the spectra look like. Scanning the beam produces
an x-ray image map of the sample. Adding color can make for
impressively beautiful images. Scroll to the bottom of

<http://www.4pi.com/teksupport/Rev10online/manual/39-mxmap.htm>

for a simple example.


>Further, I am not sure if you are assigning pulses of differing
>amplitude to different buckets (which would be a reasonable
>description of spectrometry) or if the time sequence is a mapping of
>X-ray energy with displacement of the electron beam.

The former. The official name is Energy Dispersive Spectroscopy (EDS).
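
In code terms the bucketing is just a histogram of pulse heights; a
minimal sketch (channel count and energy range are placeholders):

    import numpy as np

    def eds_spectrum(heights, n_bins=1024, max_energy=20.0):
        # Pulse height is proportional to x-ray energy, so binning the
        # heights gives the EDS spectrum (counts per energy channel).
        counts, edges = np.histogram(heights, bins=n_bins,
                                     range=(0.0, max_energy))
        return counts, edges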


>But I fancy the partial pulses you need to discard are selected from
>the larger amplitudes which stretch further in time (?) and have more
>chance of crossing a window edge. If this were true, you are
>discounting higher energy effects.

Since all spewed pulses are the same width, higher energies are not
stretched further in time. A high-energy pulse is just as likely to
be removed as a low-energy pulse.


>I would enjoy reading a simple description of the action of your device.

See above :-)

>Pending better information of that kind, I could offer a fire-brand
>to counter the darkness of the "looks OK" school of software.

>If removing edge pulses "looks OK", can you remove 10 times as many
>pulses? Those occurring after each 1/10th of the sample window, perhaps?

Yes. And, roughly speaking, acquire for 10 times as long. Current
pulse processors have to do exactly this: at high count rates, pulse
pileup and rejection occur more often, and the acquisition (to
achieve an equivalent spectrum) takes longer.
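
A quick numerical sanity check of that point (made-up numbers, not
real data): because the rejection is blind to amplitude, cutting
pulses near ten boundaries per window should leave the normalized
spectrum shape unchanged and only reduce the count rate:

    import numpy as np

    rng = np.random.default_rng()
    times = np.cumsum(rng.exponential(20.0, size=500000))
    amps = rng.uniform(0.5, 10.0, size=times.size)  # stand-in energies

    window, half_width = 1000.0, 10.0
    # Reject near every 1/10th of the window: ten cuts instead of one.
    pos = times % (window / 10)
    keep = (pos > half_width) & (pos < window / 10 - half_width)

    h_all, edges = np.histogram(amps, bins=50, density=True)
    h_kept, _ = np.histogram(amps[keep], bins=edges, density=True)
    print("fraction kept:", keep.mean())            # ~0.8
    print("largest shape difference:", np.abs(h_all - h_kept).max())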


>If this is a statistically harmless process, it will just take longer
>to arrive at comparable results to the existing ones.

Exactly.


>This assumes you have some standard for judging your data stream
>processing - and I'm sure you do.

"It seems to work." :-)


My concern has been the issue of removing pulses based on window
size, which is quite arbitrary. I've not been -too- worried about it,
since commercial instruments have seemingly been doing the same thing
for 30-40 years. I've just wanted to view it beyond the "looks OK"
software engineer approach. Bernard Cleyet's comments and my
rewritings about it have pretty much convinced me I'm OK.


Stefan Jeglinski