
Re: [Phys-l] frequency and wavelength of sound in air



On 04/05/2009 01:34 AM, Savinainen Antti wrote:

> <http://phys.unsw.edu.au/phys_about/PHYSICS!/SPEECH_HELIUM/speech.html>
>
> Let me quote the source:
>
> "The speed of sound is greater, so the resonances occur at higher
> frequencies: the second resonance has been shifted right off scale in
> this diagram."
>
> This is fine and makes perfect sense. But then the text continues:
>
> "The flesh in your vocal folds still vibrates at the same frequency,
> so the harmonics occur at the same frequency."
>
> The flesh part makes sense, but why do the *resonances* occur at higher
> frequencies while the *harmonics* occur at the same frequency? I
> thought that in this context resonances and harmonics mean the same
> thing. What am I missing?

This is an important topic.

By way of background, let's review the fact that resonators aren't
always driven at their resonant frequency. The response is not
a binary thing; there is a nontrivial response even when the
signal is quite far off resonance.

As a familiar example, consider the dispersion in a prism of glass
such as lead crystal. The index of refraction is due to a bunch
of oscillators that all have their resonant frequency up in the
ultraviolet. Red is refracted less than blue because it is farther
off resonance.
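
If you want to see how that plays out numerically, here is a minimal
sketch using a single Lorentz oscillator for the index of refraction.
All the numbers are made up for illustration, not fitted to any real
glass:

  # One-oscillator dispersion model: n^2 = 1 + wp^2 / (w0^2 - w^2),
  # with the resonance w0 up in the ultraviolet.  Illustrative numbers only.
  import numpy as np

  w0 = 2 * np.pi * 1.5e15      # resonance near 200 nm, i.e. in the UV (assumed)
  wp = 2 * np.pi * 1.0e15      # oscillator-strength parameter (assumed)

  def n(wavelength_m):
      w = 2 * np.pi * 3e8 / wavelength_m
      return np.sqrt(1 + wp**2 / (w0**2 - w**2))

  print("n at 650 nm (red): ", round(n(650e-9), 4))   # ~1.22
  print("n at 450 nm (blue):", round(n(450e-9), 4))   # ~1.25
  # Blue is closer to the UV resonance, so its index is higher:
  # blue bends more, even though both colors are far off resonance.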

Or as an even simpler example, consider a mass on a spring. If you
just push on it with a steady push, essentially zero frequency, it
will move. In fact the amount of movement (per unit force) will be
nearly independent of frequency over a wide range of low frequencies.
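
To put numbers on that, here is a minimal sketch (with made-up
parameters) of the steady-state response of a damped mass-on-a-spring
as a function of drive frequency:

  import numpy as np

  m, k, gamma = 1.0, 100.0, 1.0        # mass, spring constant, damping (assumed)
  w0 = np.sqrt(k / m)                  # resonant frequency

  def x_per_force(w):
      # steady-state displacement amplitude per unit drive force
      return 1.0 / (m * np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2))

  for w in [0.0, 0.1 * w0, 0.5 * w0, 1.0 * w0, 2.0 * w0]:
      print(f"w/w0 = {w/w0:3.1f}   x/F = {x_per_force(w):.5f}")
  # At w = 0 this is just the static compliance 1/k = 0.01, and it stays
  # close to 0.01 over a wide range of low frequencies.  It is nontrivial
  # everywhere, not just near w0.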

You can drive a resonator with two signals at the same time. For
example, consider a mass with two springs. Wiggle the far end of
one spring at one frequency, and wiggle the far end of the other
spring at another frequency. The motion of the mass will be the
superposition of the responses you would get from each drive
separately. It's a linear system; linearity implies superposition.
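
If you want to check the superposition claim numerically, here is a
sketch using a generic discrete-time linear filter as a stand-in for
the mass-with-two-springs:

  import numpy as np
  from scipy.signal import lfilter

  t = np.arange(0, 1, 1 / 8000)
  x1 = np.sin(2 * np.pi * 50 * t)          # wiggle spring #1 at 50 Hz
  x2 = np.sin(2 * np.pi * 170 * t)         # wiggle spring #2 at 170 Hz
  b, a = [0.05], [1.0, -0.95]              # some linear filter (assumed)

  y_together = lfilter(b, a, x1 + x2)
  y_separate = lfilter(b, a, x1) + lfilter(b, a, x2)
  print(np.max(np.abs(y_together - y_separate)))   # ~1e-16, i.e. identical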

Similarly the prism is a linear system. White light is a superposition
of colors. So when you shine white light on a prism, each oscillator
is being driven simultaneously by many different off-resonance signals.

Moving on: There is a wide class of audio systems that can be modeled
as "excitation plus filter". This class includes the human voice,
reed instruments, bowed string instruments, and a few other things.
It does not include drums or strings that are struck or plucked.

In the case of the human voice, the excitation is the vocal cords.
They smash together periodically, making a series of clicks. This
can be represented mathematically by a Dirac comb, but if that doesn't
mean anything to you, don't worry about it. Given a series of clicks
that is periodic with frequency f, then there will be a lot of energy
at frequencies f, 2f, 3f, 4f, et cetera. The Fourier decomposition
is a very _slowly_ convergent series; that is, the energy is spread
out over a great many modes. The Nth mode has almost as much energy
as the fundamental, even for rather large N.
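
You can see the slow convergence directly with a few lines of code.
This is a sketch with a discrete click train standing in for the
Dirac comb:

  import numpy as np

  fs, f = 44000, 110          # sample rate chosen so fs/f is a whole number
  clicks = np.zeros(fs)       # one second of signal
  clicks[::fs // f] = 1.0     # one click every 1/f seconds

  mags = np.abs(np.fft.rfft(clicks))
  freqs = np.fft.rfftfreq(fs, 1 / fs)
  for k in [1, 2, 5, 20, 100]:
      i = int(round(k * f))   # bins are 1 Hz apart here
      print(f"harmonic {k:3d} at {freqs[i]:6.0f} Hz, magnitude {mags[i]:.1f}")
  # For ideal clicks, every harmonic comes out with the same magnitude
  # as the fundamental; the energy really is spread over many modes.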

The frequency f is for practical purposes determined by the mechanical
mass and tension in the vocal cords. The cords are of course to some
extent "loaded" by the air, but this is a very small effect and can
be neglected. By the same token you could play a violin in a room
full of helium and it would not be appreciably out of tune.
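
The idealized version of that statement: for an ideal stretched string
the fundamental is

  f = (1 / 2L) * sqrt(T / mu)

where L is the length, T the tension, and mu the mass per unit length.
No property of the surrounding gas appears anywhere in that formula.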

For reed and brass instruments, the air loading is more significant,
but let's not worry about that. The "excitation plus filter" model
still applies.

As the next step, let's compare a bass singing "oooo-aaaa" on the note
A2 (110 Hz) and a soprano singing "oooo-aaaa" on the note A5 (880 Hz).
The frequency of the vocal cords determines the note (A2 or A5), but what
is the difference between "oooo" and "aaaa"? The answer has to do with
acoustical resonances in the throat and mouth. In the speech processing
business these resonances are called _formants_. They are rather low-Q
resonances. The center frequency (and to some extent the Q) of each
formant moves around. You can pretty much tell what vowel is being
spoken according to the position of the F1 and F2 formants ... and vice
versa.

You should not imagine that the acoustic resonators are being driven
on-resonance. In fact, the resonances are so broad that several different
components of the excitation are within the passband of each formant,
at least for "normal" not-too-high notes on the scale, so it's like
the "mass on two springs" model mentioned above, or the "white light
falling on prism" model.
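
Here's the back-of-the-envelope arithmetic, with made-up but plausible
numbers for one formant:

  f0 = 110.0                      # bass singing A2
  center, bw = 700.0, 200.0       # formant center and bandwidth (assumed, low Q)
  lo, hi = center - bw / 2, center + bw / 2
  print([k * f0 for k in range(1, 40) if lo <= k * f0 <= hi])
  # [660.0, 770.0] : two harmonics inside the 3 dB band at once, and the
  # skirts of such a low-Q resonance respond to several more.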

For reed and brass instruments, what typically happens is that
/one/ of the acoustical modes (in conjunction with the mechanics of
the lips and/or reed) plays an important role in determining the
frequency of the oscillation. In particular, this /one/ mode will
probably be nearly on-resonance. Thereupon it is likely that all
the other modes will be significantly off-resonance.

The reed is very nonlinear, so the excitation is not a superposition
of frequencies. As a rule there is only one excitation frequency (f)
at a time ... plus of course multiples of f.
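
A quick sketch of the nonlinearity point: push a pure sine through a
hard clipper (a crude stand-in for the reed) and look at where the
output energy lands:

  import numpy as np

  fs, f = 8000, 200                       # chosen so f divides fs exactly
  t = np.arange(fs) / fs                  # one second
  y = np.clip(3 * np.sin(2 * np.pi * f * t), -1, 1)   # memoryless nonlinearity

  mags = np.abs(np.fft.rfft(y))
  freqs = np.fft.rfftfreq(fs, 1 / fs)
  print(freqs[mags > 0.01 * mags.max()])
  # Everything lands on multiples of 200 Hz (odd multiples, for this
  # symmetric clipper); nothing appears in between.  A nonlinear element
  # fed one frequency puts out that frequency plus its multiples.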

This is related to the celebrated "mode locking" phenomenon in lasers,
but if that doesn't mean anything to you, don't worry about it.

For instruments in the violin family, there are important acoustical
and mechanical resonances in the body of the instrument. The frequencies
of these resonances obviously have nothing to do with the frequency
of the note being played. The violinist can change the frequency of
the excitation, but has almost (*) no control over the filter.
Conceptually, there is quite a strict separation between the excitation
and the filter: if you replace the strings on a Stradivarius it's still
a Stradivarius.

(*) except for odd things like attaching a mute.

In any case, the conventional and recommended way to analyze these
systems is to use an "excitation plus filter" model. First figure
out what sort of signal is produced by the excitation, and then figure
out how that signal is modified by the filter. The filter is typically
a bunch of oscillators being driven at off-resonance frequencies,
probably driven by several frequency components at once.
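
To make this concrete, here is a minimal end-to-end sketch: a click
train as the excitation, pushed through two broad two-pole resonators
as the formant filter. The formant numbers are rough "aaaa"-ish values
from the speech literature; treat everything here as illustrative:

  import numpy as np
  from scipy.signal import lfilter

  fs, f0 = 16000, 110                       # sample rate; excitation pitch (A2)
  excitation = np.zeros(fs)                 # one second
  excitation[::fs // f0] = 1.0              # periodic clicks

  def formant(x, fc, bw):
      # a single broad resonance, realized as a two-pole filter
      r = np.exp(-np.pi * bw / fs)          # pole radius sets the bandwidth
      th = 2 * np.pi * fc / fs              # pole angle sets the center frequency
      return lfilter([1 - r], [1.0, -2 * r * np.cos(th), r * r], x)

  voiced = formant(formant(excitation, 700, 130), 1100, 180)  # F1, F2 (assumed)
  # To move toward "oooo", change (700, 1100) to roughly (300, 870):
  # the filter changes, while the excitation (and hence the pitch) does not.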

This model dates back to A.G. Bell and possibly earlier.

There's a lot more that could be said about this, but I'll stop here
for now.