
[Phys-L] unification or not .... was: standard DC circuits



Once upon a time, my friend Simplicio was studying the
acoustics of Avery Fisher Hall. He insisted on writing
everything directly in terms of the Schrödinger wave
equation for every molecule in the room. He said that
was necessary in order to have a microscopic understanding
of what was going on. Anything else would have been a
sin against the drive for the unification of physics.

I mention this because on 11/22/2013 11:28 PM, Bruce Sherwood
wrote:
> Since Ruth Chabay and I have successfully taught the microscopic model of
> DC and RC circuits (followed by the standard macro model) for many years in
> the university-level calculus-based intro course, I should describe why we
> do that. The most important reason is to treat electrostatics and circuits
> as one subject rather than two. Traditionally circuits are taught as though
> circuit behavior has nothing to do with electrostatics. This is a sin
> against the drive for unification.

Maybe some people have the goal of unification for the sake
of unification ... but my goal is different. I say we should
teach easy ways of solving interesting and important problems.
Sometimes unification helps with that, and sometimes it doesn't.

In the case of ordinary circuits, the "unified" microscopic
approach doesn't help. Focusing on microscopic steering
charges is not a sensible way of analyzing "standard" DC
circuits, i.e. the things we have been discussing.

a) In most cases, when somebody uses a microscopic approach,
the purpose is to obtain more accuracy and/or a wider range
of validity.
b) In a few other cases, the main purpose is the converse,
namely to test the fundamental laws.

In the DC circuit case, neither of those purposes is
achieved.
a) The charge distribution diagrams I see in the
_Matter and Interactions_ book are not very accurate.
It would be vastly more accurate to use the macroscopic
lumped-circuit approach instead. Indeed, the tail is
wagging the dog here. The only way a student (or an
ordinary physics teacher, for that matter) can come
anywhere close to finding the right charge distribution
is to first use the macroscopic lumped-circuit approach
and then sketch a charge distribution that roughly
agrees. If the goal were to analyze the circuit, the
microscopic stuff represents either a big loss of
accuracy or a big waste of time.
b) Conversely, if the goal were to gain insight into
the fundamental laws, this isn't a sensible way to
do it. There are guys at NIST who really do test
the fundamental laws. They do it using cleverly
designed 3-terminal calculable capacitors and stuff
like that ... not even remotely like the circuits
we see in the _M&I_ book.
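To make the comparison concrete, here is the kind of macroscopic
lumped-circuit calculation I have in mind, applied to a hypothetical
voltage divider (the component values are made up for illustration):

```python
# Hypothetical circuit: a 9 V battery driving R1 = 100 ohm in series
# with R2 = 200 ohm (a voltage divider).  Solve for the voltage at the
# node between R1 and R2 using Kirchhoff's current law.
V_batt, R1, R2 = 9.0, 100.0, 200.0

# KCL at the middle node: (V - V_batt)/R1 + (V - 0)/R2 = 0
# Rearranged into conductance form G*V = I:
G = 1.0/R1 + 1.0/R2      # total conductance seen at the node
I = V_batt / R1          # current injected through R1
V_node = I / G

print(V_node)            # 6.0 volts
print(V_batt/(R1 + R2))  # loop current: 0.03 A
```

Three lines of arithmetic give the node voltage and the loop current
to full numerical precision, which is the standard the microscopic
diagrams are being measured against.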

===========

In the typical introductory course in general, and in the
_M&I_ book in particular, there are at least five different
areas where taking a more modern, more unified approach
would make things /simpler/, with /less/ cognitive workload,
yet this is not done. At the very least, this proves that
there are judgment calls involved, i.e. that there is not
really a Manichaean choice of rigorous "unity" versus "sin".

When unifications that reduce the workload are omitted,
and unifications that increase the workload are touted,
it makes my head spin, twice.

I am always skeptical of busywork, ivory towers, and spherical
cows. When I see a technique that has never been used for
any purpose other than answering homework problems, I ask
why not just skip it? Why not use the time to learn some
other technique instead, something that is useful in the
real world?

I am also wondering about the definition of "success" in
this area. The diagrams in the book are so very rough,
so very qualitative that it's not obvious to me what
counts as correct and what doesn't. The precision is
orders of magnitude less than what we routinely expect
from an Ohm's law calculation, even at the high-school
level. What's worse is that even at this level of
roughness, several of the diagrams are unambiguously
incorrect, apparently assuming constant capacitance per
unit length in situations where that cannot possibly
be true. That means a student who can "successfully"
reproduce the diagram must have learned some misconceptions.
Is it better for a student to have
a) no information about the microscopic charge distribution, or
b) misinformation about the microscopic charge distribution?

It's not obvious that (b) counts as "progress" or "success".
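For reference, the routine Ohm's-law calculation alluded to above
takes one line and delivers several significant figures; the cell
voltage and resistance here are hypothetical:

```python
# Hypothetical example of the routine macro-level calculation:
# a 1.5 V cell across a 47 ohm resistor.
V = 1.5            # volts
R = 47.0           # ohms
I = V / R          # Ohm's law: I = V/R
P = V * I          # power dissipated in the resistor

print(f"I = {I*1000:.2f} mA")   # I = 31.91 mA
print(f"P = {P*1000:.1f} mW")   # P = 47.9 mW
```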

> Circuits seen from a micro viewpoint broaden and deepen the important
> concept of polarization.

Yeah, and that kind of deepening is entirely appropriate
for the advanced course ... but that does not make it
appropriate for the introductory course. Quantum mechanics
provides a broader and deeper understanding of chemistry,
but the high-school chemistry course does not use QM as the
starting point.

The microscopic steering charges are super-interesting
to physicists, and it's fine for us to discuss such
things amongst ourselves ... but that does not mean we
should inflict such ideas on the other 99.99% of the
population.

In engineering, there is such a thing as the black-box
approach.
*) Quantum mechanics explains atoms and molecules.
*) Molecules explain macroscopic fluid properties.
*) Macroscopic fluids explain concert-hall acoustics.
*) The violist doesn't care about any of that stuff.
His job is hard enough already. He just wants to
know that whatever he plays will be heard OK at
every seat in the hall.

Even in physics, in vibrational spectroscopy, it makes
sense to treat the electrons as "fast" and the atom
cores as "slow" (the Born-Oppenheimer approximation)
... and any excited states within the nucleus are yet
another separate issue.

That makes N separate equations of motion where there
could have been just one. That's the point. Sometimes
it's smart to have N manageable topics instead of one
unmanageable topic.

Some guy named Hofstadter wrote a fat book, several
chapters of which are devoted to the relationship
between the holistic approach and the reductionist
approach. Executive summary: It doesn't pay to get
dogmatic about either extreme.

We agree that /sometimes/ there is overlap between
topics, and sometimes you have to open the black box
and look inside. In such a situation, you call in
a specialist. For example, inside a jet engine you
have a combination of chemistry and fluid dynamics.
The existence of special situations like this does
not invalidate the black box approach in the other
99.99% of cases. Engineering is not a sin. The
black box approach is not a sin.

Here's another example, more closely connected to
the topic of this thread: When it comes to antenna
design, the macroscopic lumped-circuit approach
doesn't work. The typical electrical engineer gives
up and hires a specialist. This is at the boundary
where electrical engineering overlaps with physics.
Ditto for "grounding and shielding" problems. Still,
the point remains that the existence of exceptional
cases does not invalidate the black-box approach
for the other 99.99% of cases ... especially the
rather simple cases that belong in the introductory
course.