
Re: [Phys-l] decibel dilemma



Anthony Lapinski wrote:

It says the "background" sound level in a room is 40 dB. With 100 people
talking, it rises to 60 dB. What's the sound level when 55 people leave?

1) It may be useful to revise the statement of the problem so that
the background noise is higher, say 50 dBa rather than 40. I
assume 50 in what follows. You'll see why.

2) I reckon this is either a nifty educational question, or not ...
depending on how it is presented.

The "textbook" answer requires making an approximation.

Learning how to make good approximations is important.
The task, then, is to present this question in a way that helps
students understand the nature of the approximation being
made. In particular, they should have some idea of _when_
the approximation is valid, and when it is not.

Possibly constructive suggestion: Ask 'em to plot the relevant
quantities as a function of N.

Here's my version:
http://www.av8n.com/physics/img48/db.png

The background noise power is shown in magenta.
The noise power from N talkers is shown in blue.
The total power is shown in black.

The abscissa is N, the number of talkers.
The ordinate is log power, in dBa ... the same for all three curves.

Note that when the talk-power is large, the difference (in dBa)
between the black curve (total) and the blue curve (talkers) is very
small, and can be neglected to a good approximation.

To an even better approximation, we can observe that the black curve
very nearly parallels the blue curve in the large-talk-power regime.
Therefore to a quite decent approximation we have:
black(100) - black(45) = blue(100) - blue(45)
which is the key to obtaining the textbook answer easily: since the
talker power is proportional to N, the right-hand side is just
10*log10(100/45), about 3.5 dBa.
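
For a concrete check, here is a small Python sketch, just to check
the arithmetic; the 40 dB and 60 dB figures are from the problem as
stated, and the 50 dBa case is the revision suggested above:

import math

def db_to_power(db):
    return 10.0 ** (db / 10.0)      # dB level -> power, arbitrary units

def power_to_db(p):
    return 10.0 * math.log10(p)     # power -> dB level

for bg_db in (40.0, 50.0):          # original background, then the revised one
    background = db_to_power(bg_db)
    total_100  = db_to_power(60.0)                  # background + 100 talkers
    per_talker = (total_100 - background) / 100.0   # power of one talker

    exact  = power_to_db(background + 45.0 * per_talker)   # 45 talkers remain
    approx = 60.0 + 10.0 * math.log10(45.0 / 100.0)        # ignore background

    print(bg_db, round(exact, 2), round(approx, 2))

With the 40 dB background the exact and approximate answers agree to
within about 0.1 dBa; with the 50 dB background they differ by about
half a decibel.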

Note that the criterion here is not whether N is large or small;
what really matters is whether the talk-power is large or small
relative to the background power. Those quantities are related, but
not identical.


Some folks (*) have lots of experience dealing with situations like this,
where a certain sum is dominated by its largest term. Such folks can
see this graph in their mind's eye without needing to actually draw
the graph. Naive students most likely need to draw the graph.


It is straightforward to produce such a graph using a spreadsheet program.
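
For anyone who prefers a script to a spreadsheet, here is one way to
do it (a Python sketch using numpy and matplotlib; the 50 dBa
background and the per-talker power are assumptions chosen to roughly
match the figure above):

import numpy as np
import matplotlib.pyplot as plt

N = np.arange(1, 101)                        # number of talkers

background = 10.0 ** (50.0 / 10.0)           # 50 dBa background, in power units
per_talker = 10.0 ** (60.0 / 10.0) / 100.0   # one talker's power (assumed)

talker_power = N * per_talker                # blue curve
total_power  = background + talker_power     # black curve

def db(p):
    return 10.0 * np.log10(p)

plt.plot(N, np.full(N.shape, db(background)), 'm', label='background')
plt.plot(N, db(talker_power), 'b', label='talkers')
plt.plot(N, db(total_power),  'k', label='total')
plt.xlabel('N (number of talkers)')
plt.ylabel('level (dBa)')
plt.legend()
plt.show()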


(*) Situations like this, where a sum might be dominated by its largest
term, show up all over the place.
-- In applied math, there is the "method of steepest descent" which has
many applications to physics, including thermodynamics. What stationary
phase is to QM, steepest descent is to thermo. (See Feynman & Hibbs
for details on this.)
-- In electrical engineering, it shows up in filter analysis and synthesis.
-- In pattern recognition, there are lots of situations where the
Viterbi algorithm is used to calculate -- very efficiently -- the
"extremal cost through a graph". Practictioners are so accustomed to
doing that that they sometimes lose sight of the fact that they "should"
be doing a lattice sum, calculating the entire sum, not just the largest
term in the sum. They get fooled if/when the sum is not dominated by
its largest term. At this point, you hope there is somebody on staff
who knows how to think about approximations ... in particular, who knows
how to keep track of what's an approximation and what's not, and to keep
track of the _range of validity_ of the approximations being used.
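
To make that last point concrete, here is a toy sketch (Python, with
made-up scores, not any particular application): the Viterbi-style
recursion keeps only the best path through the lattice, while the
honest calculation sums over all paths, and the two agree only to the
extent that one path dominates.

import numpy as np

def best_and_sum(emit, trans):
    # emit[t, s]: score for state s at step t; trans[i, j]: score for i -> j
    best = emit[0].copy()      # Viterbi-style recursion: keep only the best path
    full = emit[0].copy()      # full lattice sum: keep every path
    for t in range(1, emit.shape[0]):
        best = emit[t] * np.max(best[:, None] * trans, axis=0)
        full = emit[t] * np.sum(full[:, None] * trans, axis=0)
    return best.max(), full.sum()

rng = np.random.default_rng(0)
emit  = rng.random((6, 4))
trans = rng.random((4, 4))

print(best_and_sum(emit, trans))        # flat scores: the sum is NOT dominated
print(best_and_sum(emit**8, trans**8))  # peaked scores: much closer to dominated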