
*Subject*: Re: [Phys-l] some questions related to sampling
*From*: John Denker <jsd@av8n.com>
*Date*: Fri, 16 Jan 2009 16:13:51 -0700

On 01/16/2009 01:10 PM, Stefan Jeglinski wrote:

> My take is this. First, a "conventional approach" (N_f = N_t = N) is
> straightforward. N samples "maps" into N frequencies and vice versa
> with the inverse transform. I like your term "barely invertible."

:-)

> If we then increase the resolution in frequency-space (f-space) by
> increasing N_f as discussed, it might be assumed that we can achieve
> a higher resolution result in the time domain when inverse
> transforming back to it.

Not greater resolution, but rather greater _span_ in the time domain.

> But it doesn't work like that. We can't get back the original
> waveform. Rather, we might I suppose (?), but there is additional
> content created in the time domain, in particular imaginary content,
> whereas the original time domain contained only real data. Not good.

Maybe not good, but not terrible, either. The "additional" content is
all zeros.

And the way I look at it, the DFT was already guilty of creating
"additional" content, in the sense that it always assumes that the
input signal is periodic, i.e. a periodic continuation of the
time-domain signal is assumed to exist. The increased-resolution
transform is no worse; it just assumes a _different_ continuation of
the time-domain data. It continues it with zeros for a while, and
then continues periodically (with a longer period).
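This is easy to check numerically. Here is a small NumPy sketch (the
library and the sizes, 8 samples and 32 bins, are my choices for
illustration, not from the thread): the increased-resolution forward
transform is exactly the transform of the zero-continued signal, and
inverting it returns the original samples followed by zeros, with
only numerically-zero imaginary parts.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.standard_normal(8)          # 8 real time-domain samples

# "Increased resolution" forward transform: 32 frequency bins from
# 8 samples.  This equals the DFT of the zero-continued signal.
F = np.fft.fft(t, n=32)
assert np.allclose(F, np.fft.fft(np.concatenate([t, np.zeros(24)])))

# Inverting the 32-bin spectrum gives back the original 8 samples
# followed by the assumed zero continuation; the imaginary parts
# are zero to machine precision.
t_back = np.fft.ifft(F)
assert np.allclose(t_back[:8], t)
assert np.allclose(t_back[8:], 0)
```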

> Two points:
>
> 1. If we abandon trying to write the transforms as summations, and
> instead move to a matrix formulation, it is easy to move back and
> forth in the following sense:
>
> F = E . T

OK.

Tangential remark: If we are writing T as a column vector and F as a
row vector, note that E is _not_ the usual sort of matrix that takes
in a vector and puts out a vector of the same kind. If we use what
Misner, Thorne, & Wheeler call "index gymnastics", T has an upstairs
index, F has a downstairs index, and E has two downstairs indices,
unlike the usual sort of matrices that have one of each. Naturally
the inverse transform has two upstairs indices.

> Here, T is the vector of samples in the time domain, and F is the
> vector of frequencies (meaning the vector of transformed values in
> f-space, not the frequencies themselves - words fail).

F is the vector of ordinates in the frequency domain.

> E is the square matrix, constructed of the exponential terms that
> appear in the summations. For N_f = N_t = N, this is just the
> equation for the discrete Fourier transform, and its inverse is
> T = E^-1 . F, where E^-1 is just the conventional inverse of a
> square matrix.
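For concreteness, a NumPy sketch of the square case (I use the
column-vector convention here, with N = 8 chosen arbitrarily, so the
layout differs from the row-vector spreadsheet convention mentioned
below): the matrix E built from the exponential terms reproduces the
library DFT, and its ordinary matrix inverse recovers T.

```python
import numpy as np

N = 8
n = np.arange(N)
# E[k, j] = exp(-2*pi*i*k*j/N): the square DFT matrix, in one common
# sign/normalization convention among several.
E = np.exp(-2j * np.pi * np.outer(n, n) / N)

T = np.random.default_rng(1).standard_normal(N)
F = E @ T
assert np.allclose(F, np.fft.fft(T))      # matches the library DFT

# T = E^-1 . F, with the conventional inverse of a square matrix
T_back = np.linalg.inv(E) @ F
assert np.allclose(T_back, T)
```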

> Now, increasing the f-space resolution amounts to enlarging the
> number of terms in vector F, and the number of rows in E,

This is a question of taste, of the sort that ought not be argued,
but just to avoid confusion let me say that I have been thinking of
F as a _row_ vector and therefore here we are increasing the number
of _columns_ in F and by the same token increasing the number of
_columns_ in E (not "rows in E"). This is how my demo spreadsheet
lays things out.

> such that T remains intact. We can then still write T = E^-1 . F,
> where E^-1 is now the pseudoinverse of the non-square matrix E. By
> doing so, we recover the original time-domain samples without any
> "additional content," no matter how much we increase the resolution
> in f-space. I am a bit concerned about whether the pseudoinverse
> might introduce issues, but AFAICT in the practical examples I've
> tried, the calculation seems to be above suspicion.

That's true. There's nothing magical about it. What we call "the"
pseudoinverse is just one way among many of inverting a non-square
matrix.

Suppose the forward transform increases the resolution by a factor
of 4, i.e. Nf/Nt = 4. Then the non-square forward transform E
consists of 4 fairly ordinary square transforms (with a little bit
of heterodyning). They are not stacked side-by-side but rather
collated, i.e. intercalated. Any one of the four is easily
invertible. You have to take into account the fact that three of the
four have nonstandard abscissas, but we know how to do that.

The key requirement is that the inverse transform not produce too
many rows. The pseudoinverse doesn't magically accomplish that; it
only does it because you instructed it to do so, as part of the
definition of the pseudoinverse. Any other row-reduction strategy
would have worked equally well, including simple subsampling of the
frequency-domain data.
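A NumPy sketch of both claims, with Nf/Nt = 4 and Nt = 8 chosen for
illustration: the pseudoinverse recovers T exactly, and so does the
simpler strategy of just subsampling the frequency-domain data,
which reduces E to an ordinary square DFT matrix.

```python
import numpy as np

Nt, Nf = 8, 32                 # resolution increased by a factor of 4
j = np.arange(Nt)
k = np.arange(Nf)
# Non-square forward matrix: Nf frequency bins from Nt samples, with
# bin spacing 1/Nf of a cycle per sample.
E = np.exp(-2j * np.pi * np.outer(k, j) / Nf)

T = np.random.default_rng(2).standard_normal(Nt)
F = E @ T

# The pseudoinverse recovers the original samples (up to round-off)...
assert np.allclose(np.linalg.pinv(E) @ F, T)

# ...but so does simple subsampling: keeping every 4th frequency bin
# leaves an ordinary square DFT matrix, invertible in the usual way.
E_sub = E[::4, :]              # square Nt x Nt
assert np.allclose(np.linalg.inv(E_sub) @ F[::4], T)
```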

> 2. There remains the question of increasing the resolution in the
> time domain (t-space). Well, if you have sampled at higher than the
> Nyquist rate, and confirmed this by looking at the transform and
> seeing no overlapped aliases, the sampling theorem teaches that you
> can recover the time domain at any resolution desired, via the
> interpolation formula involving the sinc function. No information
> from the frequency domain (other than to confirm a lack of alias
> overlap) is needed.
>
> With respect to these 2 points, I do not see a way to perform a
> DFT, increase the resolution in f-space, and then do an IDFT that
> achieves a resolution in t-space that is higher than that given by
> the original samples. But I would be interested in such an
> approach. We are in fact inching toward the real question I want to
> ask, but I want to first see what comments there might be about my
> above speculations.

If the time-domain data is the original data, no amount of Fourier
transforms or other math will ever create more information. There
will be no "loaves and fishes" miracle where you create more just by
rearranging things. The second law of thermodynamics forbids it.

So all we are talking about here are various heuristics for
_interpolating_ between points in the time domain. Interpolation is
easy if you know the original signal was band-limited before it was
sampled ... i.e. no aliasing.
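The sinc-interpolation formula mentioned above is short enough to
sketch directly. This is the Whittaker-Shannon formula in NumPy (the
2 Hz tone, 10 Hz rate, and sample count are illustrative choices of
mine); in practice the sum is necessarily truncated, so the
reconstruction is exact at the sample instants and merely very close
in between, away from the edges of the record.

```python
import numpy as np

def sinc_interp(samples, fs, t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n]*sinc(fs*t - n)."""
    n = np.arange(len(samples))
    return samples @ np.sinc(fs * t[None, :] - n[:, None])

fs = 10.0                              # sampling rate, above Nyquist for 2 Hz
n = np.arange(200)
samples = np.cos(2 * np.pi * 2.0 * n / fs)

# Exact at the original sample instants: sinc is 1 at its own sample
# and 0 at every other integer offset.
assert np.allclose(sinc_interp(samples, fs, n / fs), samples)

# Between samples, away from the edges of the truncated sum, the
# reconstruction tracks the underlying tone closely.
t_fine = np.linspace(5.0, 15.0, 500)
err = np.max(np.abs(sinc_interp(samples, fs, t_fine)
                    - np.cos(2 * np.pi * 2.0 * t_fine)))
assert err < 0.1
```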

You cannot safely decide whether a signal is band-limited or not by
looking at the sampled data! That would be like asking the drunkard
whether he is drunk. For example, if I have a 100.01 Hz wave and
sample it at 10.00 Hz, it will look like a beautiful 0.01 Hz signal.
Oops.
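The aliasing in this example is exact, not approximate, and a quick
NumPy check makes that vivid (the 100-sample record length is my
choice): since 100.01 = 10 x 10.00 + 0.01, the two sine waves agree
at every sample instant.

```python
import numpy as np

fs = 10.00                     # sampling rate in Hz
n = np.arange(100)
t = n / fs

fast = np.sin(2 * np.pi * 100.01 * t)   # the 100.01 Hz wave
slow = np.sin(2 * np.pi * 0.01 * t)     # a genuine 0.01 Hz wave

# 100.01 Hz is a whole number of cycles per sample plus 0.01 Hz, so
# the sampled data cannot tell the two apart.
assert np.allclose(fast, slow)
```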

If you want to be sure that the original data is band-limited, rely
on the physics, not on the math. That is, put a filter on it! A
real, physical, analog filter. Then you can be sure.

And that BTW answers the question about interpolation. The analog
filter is in control. The details of the design of the analog filter
dictate the details of the interpolation.

**Follow-Ups**:

- **Re: [Phys-l] some questions related to sampling** *From:* Stefan Jeglinski <jeglin@4pi.com>

**References**:

- **[Phys-l] some questions related to sampling** *From:* Stefan Jeglinski <jeglin@4pi.com>
- **Re: [Phys-l] some questions related to sampling** *From:* John Denker <jsd@av8n.com>
- **Re: [Phys-l] some questions related to sampling** *From:* Stefan Jeglinski <jeglin@4pi.com>
- **Re: [Phys-l] some questions related to sampling** *From:* John Denker <jsd@av8n.com>
- **Re: [Phys-l] some questions related to sampling** *From:* Stefan Jeglinski <jeglin@4pi.com>
- **Re: [Phys-l] some questions related to sampling** *From:* John Denker <jsd@av8n.com>
- **Re: [Phys-l] some questions related to sampling** *From:* Stefan Jeglinski <jeglin@4pi.com>
