
Re: [Phys-L] Mathematica question; stiff differential equation



On 12/13/20 11:22 AM, Carl Mungan asked:

(1) We often say (for a conservative system like this one) that the
mechanical energy is the first integral of the differential equation.
In light of your comments, would you say this statement is misleading
or at least incomplete?

That seems dubious at the terminology level and extra-dubious at the
conceptual level.

Consider this:

"The strength sounding name first integral is a relic of the times
when the mathematicians tried to solve all differential equations
by integration. In those days the name integral (or a partial
integral) was given to what we now called a solution." (See
"Ordinary Differential Equations" by Vladimir I. Arnol'd)

If you integrate the given differential equation once, you get the
first integral. If you integrate the first integral again, you get
an equation called the second integral, and so on. Ultimately you
arrive at the desired solution of the differential equation.

That's quoted from:
https://math.stackexchange.com/questions/3203855/why-are-they-called-first-integrals
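
To make that concrete, here is the classic calculation for a mass on a
spring (a sketch with the usual notation, nothing specific to the stiff
equation in this thread): multiply Newton's second law by \dot{x} and
integrate once.

  m \ddot{x} = -k x
  \quad\Longrightarrow\quad
  \frac{d}{dt}\left[ \tfrac{1}{2} m \dot{x}^2 + \tfrac{1}{2} k x^2 \right] = 0
  \quad\Longrightarrow\quad
  \tfrac{1}{2} m \dot{x}^2 + \tfrac{1}{2} k x^2 = E

The constant of integration E is the mechanical energy; that's the
sense in which energy is *a* first integral of this particular equation.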

So I'd say it's misleading to call energy (or anything else!) "the"
first integral. It depends on what equation you're solving, and even
on details of the method of solution.

There are plenty of problems that are more easily solved by directly
proceeding to mechanical energy than by trying to solve Newton’s
second law. Students will ask me sometimes how to tell which kinds of
problems those are. I admit that even I am not always sure.

I'm certainly not sure either. It's an interesting question. Now it's
gonna haunt me for the next week or more.

A good example is objects rolling without slipping down an inclined plane.
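
A sketch of why energy wins there, writing the moment of inertia as
I = \beta m R^2 (\beta = 2/5 for a solid sphere, 1/2 for a solid
cylinder, 1 for a hoop) and using the rolling constraint \omega = v/R:

  m g h = \tfrac{1}{2} m v^2 + \tfrac{1}{2} I \omega^2
        = \tfrac{1}{2} (1 + \beta)\, m v^2
  \quad\Longrightarrow\quad
  v^2 = \frac{2 g h}{1 + \beta}

One line of bookkeeping, versus a force-plus-torque analysis with the
friction force as an intermediate unknown.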

Here are some additional examples:

Galileo's celebrated "stopped pendulum" aka "interrupted pendulum": you
can find the height of the top of the swing super-easily using energy.
In contrast, finding the details of the motion (position versus time),
or even just the period, requires knowing a whole lot more about the
system. That includes knowing exactly where the peg is.
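
A sketch of the energy shortcut, with v the speed at the bottom of the
swing:

  \tfrac{1}{2} m v^2 = m g h
  \quad\Longrightarrow\quad
  h = \frac{v^2}{2g}

The answer is the same wherever the peg is; the peg only enters if you
want position versus time.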

So far the answer seems to be, sometimes energy is exactly what you want,
and sometimes not.

The previous example involved motion in one dimension, more or less,
using the swing angle as a generalized coordinate. If we try to use
the same trick for a /spherical/ pendulum, it doesn't work at all. The
highest point on the trajectory is not a simple function of the total
energy.
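
A sketch of why not: for a spherical pendulum of length l and mass m,
with theta measured from the downward vertical, the vertical component
of angular momentum L_z is conserved along with the energy, and the
turning points in theta are set by both conserved quantities together:

  E = \tfrac{1}{2} m l^2 \dot{\theta}^2
      + \frac{L_z^2}{2 m l^2 \sin^2\theta}
      + m g l (1 - \cos\theta)

The highest point of the trajectory depends on E and L_z jointly, so
energy alone can't tell you where it is.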

Now consider a symmetrical teeter-totter. Simple energy considerations
suggest that if you do the engineering just right, it exhibits neutral
stability. You can park it at any angle and it will just sit there.

Now we apply the same energy consideration to the liquid in a U-tube,
of the sort that you might use as a level-finder. Superficially it looks
just like the teeter-totter: if the two arms have the same diameter and
other engineering details are attended to, lowering the level in one arm
by some amount raises the level in the other arm by the same amount. But
the physics is not the same! The uneven situation is far from
equilibrium, because equal levels minimize the height of the center of
mass, as the sketch below shows. The energy principle is of course not
wrong, but it's a whole lot harder to apply in this situation.
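
A sketch of the bookkeeping, for two vertical arms of cross-section A
holding liquid of density rho at heights h_1 and h_2, with h_1 + h_2
fixed:

  U = \rho g A \left( \tfrac{1}{2} h_1^2 + \tfrac{1}{2} h_2^2 \right)

This is minimized when h_1 = h_2. The liquid that leaves the top of the
high column lands at the top of the low column, lower down; the
teeter-totter analogy silently assumes the transferred mass moves at
constant height.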

======

— energy is conserved
— momentum is conserved (three conservation laws, one for each component)
— angular momentum is conserved
— charge is conserved
— lepton numbers are conserved
— baryon number is very nearly conserved
— area in phase space is conserved
— in a Kepler problem, the LRL vector is conserved

I suspect that for every cute problem where energy is exactly the right tool
for the job, there is another where momentum is exactly right, and another
where charge is exactly right, and another where phase space is exactly
right.

The problems that land on my desk tend to be not very cute. You're gonna
need to use several different conservation laws all at once.

======================

You hinted at, but didn’t elaborate on, Lagrangian methods as a third
alternative.

When I was in junior high school, I wanted to build the ultimate
rich-field telescope. I wanted enough light-gathering power to let me
see the Ring Nebula. I'd seen photos, but I wanted to see it in real
time. Alas, every time I sketched out a design, I found that it
wouldn't work. It wasn't an energy problem; you can always collect
more light by making the primary mirror larger. The problem was
getting all that light into the eye. I was smart enough to not waste
time building something that didn't work, but I was annoyed that I
couldn't come up with a satisfactory design. I was not smart enough
to figure out that I was up against a fundamental physics problem. I
had never heard of phase space, much less conservation thereof, or
the brightness theorem.

It turns out that the Ring Nebula is just not very bright. You could
stand in the middle of it and not be able to see it. A telescope can
bring the image closer, but it cannot increase the surface brightness.
This is not a problem with stars, which have an enormous surface
brightness, but it's a real limitation for nebulae.

https://www.av8n.com/physics/phase-space-thin-lens.htm
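
In the language of that page: lossless optics conserves the etendue of
a light bundle (area times solid angle), so the radiance, i.e. the
surface brightness, of the image can never exceed that of the source.
Schematically, for a flux \Phi passing through the system:

  A_1 \Omega_1 = A_2 \Omega_2
  \qquad
  B = \frac{\Phi}{A\,\Omega}
  \quad\Longrightarrow\quad
  B_2 \le B_1

A bigger primary collects more total light, but the conserved product
A \Omega means you can't squeeze that light into the eye at a higher
brightness than it started with.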

It turns out that in some sense, the following are all the same:
-- Liouville's theorem (conservation of phase space)
-- the aforementioned optical brightness theorem
-- the second law of thermodynamics
-- Heisenberg's uncertainty principle
-- the fluctuation/dissipation theorem
-- unitarity of the equations of motion
-- probably other things I haven't thought of

https://www.av8n.com/physics/liouville-intro.htm

An even more general version of the same idea gives us
-- symplectic integrators
https://www.av8n.com/physics/symplectic-integrator.htm

And it is kissing cousins to:
-- Feistel ciphers
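
To see the family resemblance: a Feistel round is exactly invertible no
matter what the round function does, just as Hamiltonian time evolution
is exactly volume-preserving no matter what the Hamiltonian is. Here is
a toy sketch in Python (32-bit halves and a made-up round function, for
illustration only, not a real cipher):

  def feistel_round(left, right, key, F):
      # (L, R) -> (R, L xor F(R, key)); invertible for ANY F,
      # even a lossy, non-invertible F.
      return right, left ^ F(right, key)

  def feistel_unround(left, right, key, F):
      # Exact inverse of feistel_round.
      return right ^ F(left, key), left

  def F(x, key):
      # Any deterministic mixing will do.
      return ((x * 2654435761) ^ key) & 0xFFFFFFFF

  L, R = 0x12345678, 0x9ABCDEF0
  state = feistel_round(L, R, 0xDEADBEEF, F)
  assert feistel_unround(*state, 0xDEADBEEF, F) == (L, R)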

===============

The Lagrangian knows all and tells all. The energy does not.

In particular, if you know the Lagrangian, you can /choose/ a coordinate
of your liking, and the Lagrangian will *tell* you the corresponding
momentum. For example, consider an electrical LC oscillator. We have
every reason to expect that it will be isomorphic to a mechanical mass
on a spring oscillator ... but what shall we use for the coordinate?
— You can choose the charge on the capacitor as the coordinate (in which
case the flux in the inductor is the momentum).
— You can equally well choose the flux as your coordinate (in which case
the charge is the momentum).

You can do it either way. It's your choice.
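
A sketch of both choices, writing script-L for the Lagrangian to avoid
colliding with the inductance L:

  \mathcal{L}(q, \dot{q}) = \tfrac{1}{2} L \dot{q}^2 - \frac{q^2}{2C},
  \qquad p = \frac{\partial \mathcal{L}}{\partial \dot{q}} = L \dot{q} = \Phi

  \mathcal{L}(\Phi, \dot{\Phi}) = \tfrac{1}{2} C \dot{\Phi}^2 - \frac{\Phi^2}{2L},
  \qquad p = \frac{\partial \mathcal{L}}{\partial \dot{\Phi}} = C \dot{\Phi} = q

In the first picture the inductor energy plays the role of kinetic
energy; in the second picture the capacitor energy does.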

When doing research, you're always stretching the limits in some way.
You're always taking calculated risks. Part of the calculation, part
of the discipline, is to avoid /unnecessary/ risk. The harder the
problem, the more important it is to use reliable methods, so that
the only risks you take are the ones you /want/ to take.

Once upon a time I needed to apply the laws of quantum mechanics to
an electrical circuit, which almost nobody had done before. I was
pretty sure all the publications on the subject were dead wrong, and
I knew nobody would pay any attention to little old me unless I could
prove, absolutely prove, that I was doing things right. The Lagrangian
saved the day. Related concepts include the principle of least action
and the Euler-Lagrange equation.

One very unusual (in a good way) book is:
G. J. Sussman and J. Wisdom
_Structure and Interpretation of Classical Mechanics_
https://mitpress.mit.edu/books/structure-and-interpretation-classical-mechanics-second-edition

If you spend a lot of time doing quantum mechanics you might think that
the Hamiltonian is the greatest thing in the world, but really it's not.
You can't even get started with it until you know the canonical momentum
that corresponds to your coordinate. How do you know you've got that
right? The Lagrangian will tell you!
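
The classic cautionary example is a particle of charge e in
electromagnetic potentials \phi and \mathbf{A} (SI units). Guessing
p = m v gets the quantum mechanics wrong; the Lagrangian hands you the
correct canonical momentum:

  \mathcal{L} = \tfrac{1}{2} m \dot{\mathbf{r}}^2
      + e\, \dot{\mathbf{r}} \cdot \mathbf{A}(\mathbf{r})
      - e\, \phi(\mathbf{r})
  \qquad\Longrightarrow\qquad
  \mathbf{p} = m \dot{\mathbf{r}} + e \mathbf{A}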

Also: I get reeeeally tired of textbooks and low-rent pundits explaining
that energy is quantized in units of h. That's ridiculous; h doesn't
even have dimensions of energy. Instead it has dimensions of area in
phase space. It turns out that Max Planck was not the village idiot. In
1900 he was one of the few people on earth who had read and understood
Boltzmann's book. In one of his publications (or one of his notebooks?)
there is a sketch of the phase space of a harmonic oscillator, with equal
amounts of area for each quantum state. He knew exactly what he was doing.
(I can't at the moment find the reference for this; sorry.) Planck tried
to warn people that they were misinterpreting his work, but they wouldn't
listen. Fifty years later Glauber more-or-less straightened things out.
But most people still haven't gotten the memo.

Energy eigenstates are not the only states. They're not even the only
basis states.

It's instructive to draw the diagram for the harmonic oscillator. It's
even easier to draw it for a particle in a box, which of course gives the
same answer, i.e. equal area per state in phase space.
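
A sketch of the particle-in-a-box count, for a box of width L_box: the
nth state has momentum magnitude p_n = n h / (2 L_box), and its
classical phase-space orbit is a rectangle (back and forth across the
box at momentum plus-or-minus p_n):

  \oint p \, dx = 2 p_n L_{\mathrm{box}} = n h

Each successive state adds exactly one quantum h of phase-space area.
The oscillator gives the same answer: the orbit with energy E encloses
area E/\nu, which is (n + \tfrac{1}{2}) h.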

A bunch of physicsy examples using the Hairer symplectic integrator are
here:
https://www.av8n.com/computer/geometric-numerical-integration/
The simplest way to get started may be to copy the Kepler example and
then modify it to do what you want.
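
If you want the flavor without setting up the Hairer machinery, here is
a minimal sketch of a Stormer-Verlet (leapfrog) step for the 2-D Kepler
problem in Python; the function names are mine, not the website's:

  import numpy as np

  def kepler_accel(r):
      # Kepler acceleration with GM = 1:  a = -r / |r|^3
      return -r / np.linalg.norm(r)**3

  def verlet_step(r, v, dt):
      # One Stormer-Verlet step: half kick, drift, half kick.
      # Symplectic and second-order accurate.
      v_half = v + 0.5 * dt * kepler_accel(r)
      r_new = r + dt * v_half
      v_new = v_half + 0.5 * dt * kepler_accel(r_new)
      return r_new, v_new

  # Eccentric orbit; the energy oscillates slightly but shows no
  # secular drift, which is the signature of a symplectic method.
  r, v = np.array([1.0, 0.0]), np.array([0.0, 0.8])
  for _ in range(100_000):
      r, v = verlet_step(r, v, dt=1e-3)
  print(0.5 * v @ v - 1.0 / np.linalg.norm(r))  # stays near -0.68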