
Re: [Phys-l] some questions related to sampling



Ugh, you got me again. I left out one or two steps in the
argument. Let me try again:

I think we're on the same page now, thanks. I see you added a note about delta functions in 5.3.

Your section 4.4 is timely and interesting to me, and I'd like to go in that direction for a while. I'm not sure why you called it "Combination."

My take is this. First, a "conventional approach" (N_f = N_t = N) is straightforward: N samples "map" into N frequency values and back again via the inverse transform. I like your term "barely invertible."
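
A minimal numpy sketch of that square case, just to have something concrete on the table (the names and sizes are my own illustrative choices; fft/ifft supply the forward and inverse transforms):

import numpy as np

N = 8
rng = np.random.default_rng(0)
T_samples = rng.standard_normal(N)          # N real samples in the time domain

F = np.fft.fft(T_samples)                   # N values in the frequency domain
T_back = np.fft.ifft(F)                     # the inverse transform undoes it

print(np.max(np.abs(T_back - T_samples)))   # ~1e-16: "barely invertible" round trip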

If we then increase the resolution in frequency space (f-space) by increasing N_f as discussed, it might be assumed that we can achieve a correspondingly higher resolution in the time domain when inverse transforming back to it. But it doesn't work like that. We can't simply get back the original waveform. Or rather, perhaps we can (?), but additional content is created in the time domain, in particular imaginary content, whereas the original time-domain data were purely real. Not good.

Two points:

1. If we abandon trying to write the transforms as summations, and instead move to a matrix formulation, it is easy to move back and forth in the following sense:

F = E . T

Here, T is the vector of samples in the time domain, and F is the vector of frequencies (meaning the vector of transformed values in f-space, not the frequencies themselves - words fail). E is the square matrix constructed from the exponential terms that appear in the summations. For N_f = N_t = N, this is just the equation for the discrete Fourier transform, and its inverse is T = E^-1 . F, where E^-1 is the conventional inverse of a square matrix.

Now, increasing the f-space resolution amounts to enlarging the number of terms in vector F, and the number of rows in E, such that T remains intact. We can then still write T = E^-1 . F, where E^-1 is now the pseudoinverse of the non-square matrix E. By doing so, we recover the original time-domain samples without any "additional content," no matter how much we increase the resolution in f-space. I am a bit concerned about whether the pseudoinverse might introduce issues, but AFAICT in the practical examples I've tried, the calculation seems to be above suspicion. (A numerical sketch of this appears below, after point 2.)

2. There remains the question of increasing the resolution in the time domain (t-space). Well, if you have sampled at higher than the Nyquist rate, and confirmed this by looking at the transform and seeing no overlapped aliases, the sampling theorem teaches that you can recover the time domain at any resolution desired, via the interpolation formula involving the sinc function. No information from the frequency domain (other than to confirm a lack of alias overlap) is needed. (A sketch of this interpolation also appears further below.)
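
To make point 1 concrete, here is a minimal numpy sketch, assuming a frequency grid f_k = k/N_f cycles per sample (that grid and the sizes are illustrative choices only):

import numpy as np

N_t = 8                                  # number of time-domain samples
N_f = 32                                 # finer frequency grid, N_f > N_t
rng = np.random.default_rng(0)
T = rng.standard_normal(N_t)             # arbitrary real time-domain vector

# E[k, n] = exp(-2*pi*i * k * n / N_f): a tall N_f x N_t matrix of the
# exponential terms, i.e. more rows (frequencies) while T stays intact.
k = np.arange(N_f)[:, None]
n = np.arange(N_t)[None, :]
E = np.exp(-2j * np.pi * k * n / N_f)

F = E @ T                                # forward: finely resolved spectrum
T_back = np.linalg.pinv(E) @ F           # Moore-Penrose pseudoinverse

print(np.max(np.abs(T_back - T)))        # ~1e-15: original samples recovered

Since this E has full column rank, the pseudoinverse is an exact left inverse (pinv(E) @ E is the identity), which seems consistent with getting T back with no "additional content," at least for this choice of grid.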

With respect to these 2 points, I do not see a way to perform a DFT, increase the resolution in f-space, and then do an IDFT that achieves a resolution in t-space that is higher than that given by the original samples. But I would be interested in such an approach. We are in fact inching toward the real question I want to ask, but I want to first see what comments there might be about my above speculations.
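
And a minimal sketch of the reconstruction in point 2, assuming an arbitrary tone sampled above the Nyquist rate (the 3 Hz signal, 10 Hz sampling, and window are illustrative choices):

import numpy as np

Ts = 0.1                                   # sampling interval: 10 Hz > 2 * 3 Hz
n = np.arange(-200, 201)                   # a generous window of samples
x_n = np.cos(2 * np.pi * 3.0 * n * Ts)     # samples of a 3 Hz tone

t = np.linspace(-1.0, 1.0, 1001)           # much finer time grid
# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc((t - n*Ts)/Ts)
x_rec = np.sinc((t[:, None] - n * Ts) / Ts) @ x_n

err = np.max(np.abs(x_rec - np.cos(2 * np.pi * 3.0 * t)))
print(err)   # small, and it shrinks as the window of samples widens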


Stefan Jeglinski


In the spirit of the correspondence principle, let's examine the
relationship between a discrete Fourier transform and a non-discrete
Fourier transform. We start with a discrete transform and see what
happens:

1. As previously discussed: When dt becomes small, the frequency-range
of the first period of the output becomes large, so it covers a big
piece of the f-axis. (The output is always defined for all frequencies,
so we can't talk about the range of the output, only the range of one
period.)
2. Obviously as N_t dt becomes large, the input data covers a big piece
of the t-axis.
3. As previously discussed: When N_f dt becomes large, the output becomes
nearly continuous. The output period is being divided very finely.
4. For present purposes, let's assume N_f = N_t, so that the transformation
will be "just barely" invertible in both directions. Call this common
value N.
5. If you take both limits together - small dt and large N dt - the discrete
Fourier transform begins to look a whole lot like the old-fashioned
non-discrete Fourier transform. (A small numerical sketch of this limit
follows below.)
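
A small numerical illustration of item 5, assuming a Gaussian test function (whose non-discrete transform is exp(-pi f^2) in closed form) and treating the DFT scaled by dt as a Riemann sum for the Fourier integral:

import numpy as np

dt = 0.01                                 # small dt: wide frequency period
N = 4096                                  # large N*dt: wide time window
t = (np.arange(N) - N // 2) * dt          # samples centered on t = 0
x = np.exp(-np.pi * t**2)                 # Gaussian; exact transform is exp(-pi f^2)

f = np.fft.fftshift(np.fft.fftfreq(N, d=dt))
# DFT as a Riemann sum for the Fourier integral: multiply by dt
X_dft = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(x))) * dt
X_exact = np.exp(-np.pi * f**2)

print(np.max(np.abs(X_dft - X_exact)))    # essentially machine precision here;
                                          # coarsen dt or shrink N*dt and it degrades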


You can't have a sampled input without having the output of the
transform be periodic. The proof is simple:

Understood. The way I had always looked at this was as follows (hoping I can
make this clear in my non-UTF8 client, and hopefully without typos). Cast the
sampled x(t) as

sx(t) = [Sum_over_all_n] x[n] delta-function[t - nT]

where T is the sampling interval. Then, Fourier-transform sx(t):

X(f) = [Integral over +- infinity] sx(t) exp(-2pi*i*f*t) dt

leading to

X(f) = [Sum_over_n] x[n] exp(-2pi*i*f*n*T)

which is a Fourier series representation of X(f), which is by definition
periodic in f with period 1/T. Is there anything mathematically fishy about
looking at it in this slightly more formal way?

Looks OK to me.
The key step is convincing yourself that sampling is represented by
multiplying by a Dirac comb. The rest is just math.

BTW that's the other half of the correspondence argument, i.e. how you
go from the continuous Fourier integral to the discrete Fourier series.
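
For what it's worth, the periodicity of that Fourier-series form is easy to check numerically (arbitrary samples; T = 0.25 chosen only for illustration):

import numpy as np

T = 0.25                                  # sampling interval, so the period in f is 1/T = 4
rng = np.random.default_rng(1)
x = rng.standard_normal(16)               # arbitrary samples x[n]
n = np.arange(len(x))

def X(f):
    # X(f) = sum_n x[n] * exp(-2*pi*i*f*n*T), as in the derivation above
    return np.sum(x * np.exp(-2j * np.pi * f * n * T))

f0 = 0.7
print(abs(X(f0) - X(f0 + 1.0 / T)))       # essentially zero: X repeats with period 1/T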

=========

I like to have the equation *and* the picture. I recently cobbled up
the picture that goes with aliasing:
http://www.av8n.com/physics/fourier-refined.htm#fig-fourier-aliasing

_______________________________________________
Forum for Physics Educators
Phys-l@carnot.physics.buffalo.edu
https://carnot.physics.buffalo.edu/mailman/listinfo/phys-l