... Entropy and energy and temperature are related, but one should
not be too glib about seeming to define one in terms of the others.
The kinetic energy of the particles in the transition is unaffected
because the transition happens at a fixed temperature.
I'm not convinced. Can you prove it? I suspect it's
not true.
The usual classical statement is that there is 1/2 kT
per degree of freedom, but that leaves us wondering
what a degree of freedom is.
(Classical thermo
requires us to count degrees of freedom, but doesn't
tell us how to do it.
In stat mech, I know how to
count states, which makes a lot more sense.)
Indeed I suspect that the phase transition in question
(freezing) may be associated with a wholesale loss
of degrees of freedom.
If we lose, say, some rotational
degrees of freedom, the kinetic energy per particle
will decrease.
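As a concrete illustration of freeze-out (my own sketch, not from the
thread, with hbar*omega and kT in arbitrary common units): the mean
thermal energy of a quantized harmonic mode falls far below the
classical equipartition value kT once kT drops below the level
spacing, i.e. the mode effectively drops out of the count of degrees
of freedom.

```python
import math

def mean_energy(hbar_omega, kT):
    """Mean thermal energy of one quantized harmonic mode
    (zero-point energy omitted), in the same units as kT."""
    return hbar_omega / math.expm1(hbar_omega / kT)

hw = 1.0  # level spacing, arbitrary units
for kT in (10.0, 1.0, 0.1):
    # Classical equipartition: kT per mode (two quadratic terms).
    print(f"kT={kT:5.2f}  classical={kT:6.3f}  quantum={mean_energy(hw, kT):.5f}")
```

At kT = 10 the quantum and classical answers nearly agree; at
kT = 0.1 the mode holds almost no energy at all.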
And why is it allegedly the kinetic energy that is
determined by the temperature? Why wouldn't it be
equally true that the potential energy is determined
by temperature? The physics doesn't distinguish.
To illustrate what I mean by the physics not distinguishing,
consider the thermodynamics of an electrical LC oscillator.
Designate the energy in the inductor as "kinetic" and the
energy in the capacitor as "potential". Then there will
be 1/2 kT in each.
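That claim is easy to check numerically (a sketch; the two stiffness
coefficients below are arbitrary stand-ins for 1/L and 1/C):
averaging either quadratic energy term over the classical Boltzmann
weight gives kT/2 regardless of the coefficient, so the
"kinetic"/"potential" labels carry no weight.

```python
import math

def mean_quadratic_energy(a, kT, xmax=50.0, n=20001):
    """Boltzmann average of E(x) = a*x^2/2; equipartition
    predicts kT/2 independent of the stiffness a."""
    dx = 2.0 * xmax / (n - 1)
    num = den = 0.0
    for i in range(n):
        x = -xmax + i * dx
        E = 0.5 * a * x * x
        w = math.exp(-E / kT)
        num += E * w
        den += w
    return num / den

kT = 1.0
print(mean_quadratic_energy(2.0, kT))  # "kinetic" (inductor) term
print(mean_quadratic_energy(0.5, kT))  # "potential" (capacitor) term
```

Both averages come out to kT/2 = 0.5, as equipartition requires.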
As another illustration, consider cooling a chunk of metal
from +1C to -1C. No phase transition is involved, but
otherwise I can make an analogy with cooling water from
+1C to -1C. Each system transfers some entropy delta_S to
the environment. In each case the transfer occurs at a
temperature of roughly T=273K. In each case this requires
an energy T*delta_S. But in the case of the metal I don't
think you can attribute the energy to potential energy or
to "large scale reconfiguration". (Choose a piece of Invar-
like metal if you want no reconfiguration at all.)
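The near-equality energy = T*delta_S in that comparison is easy to
exhibit numerically (my own sketch; the copper specific heat is a
standard reference value, not from the thread):

```python
import math

m, c = 1.0, 385.0          # 1 kg of copper, c ~ 385 J/(kg K)
T1, T2 = 274.15, 272.15    # +1 C down to -1 C

delta_S = m * c * math.log(T2 / T1)   # entropy shed (negative)
energy  = m * c * (T2 - T1)           # energy shed
# Since T barely changes, energy ~ T * delta_S with T ~ 273 K:
print(energy, 273.15 * delta_S)
```

The two numbers agree to a few parts in a million, because T is
essentially constant over the 2-degree interval.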
...
If the transition to the high-energy
phase had instead happened (at least partially) adiabatically, then
the system's temperature would *decrease* (as in the case of
evaporative cooling, or salt-induced melting), since in that case the
increased potential energy of the particles comes from a
corresponding decrease in the particles' overall kinetic energy.
Yes, but such a process doesn't shed any light on the
relationship between energy and entropy.
(I take it "adiabatic" means "isentropic" in this case.)
...
The argument is incomplete at this point, because
we've been measuring the energy, not directly
measuring the entropy. So we must show the
connection.
The connection is via the *temperature* at which the
transition takes place. The temperature is, by definition,
the proportionality factor between quasistatic infinitesimal
energy increments and quasistatic infinitesimal entropy
increments (under conditions for which no macro-work is done
on the system at hand).
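The definition just quoted can be exercised on a toy model (my own
sketch, with k_B = 1 and an invented system of 1000 two-level units):
count states to get S(E), then compare the finite-difference dE/dS
with the temperature implied by the Boltzmann factor.

```python
import math

N, eps = 1000, 1.0   # hypothetical: 1000 two-level units, gap eps

def entropy(n):
    """S = ln(multiplicity) for n excited units out of N (k = 1)."""
    return (math.lgamma(N + 1) - math.lgamma(n + 1)
            - math.lgamma(N - n + 1))

n = 200                                           # 200 units excited
T_def   = eps / (entropy(n + 1) - entropy(n))     # dE/dS, no macro-work
T_boltz = eps / math.log((N - n) / n)             # from the Boltzmann factor
print(T_def, T_boltz)                             # close agreement
```

The two temperatures agree to well under a percent, and the
discrepancy shrinks as N grows.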
That definition of temperature bothers me on
several counts.
1) For one thing, "quasistatic" is not the key
issue.
I can have a process that is quasistatic
but clearly irreversible; examples include:
-- heat flow down a copper rod, with one hot end
and one cold end.
-- Joule-Thomson expansion.
I think the key issue is _reversibility_. I will
assume that's what was intended, and move on; if
something else was intended, please explain.
2) The alleged definition also leaves us wondering what
"macro-work" is.
Suppose we want to analyze a Szilard
engine (basically a "steam engine" containing only one
molecule of steam). Then either all the work is micro-
work, or all of it is macro-work; there's no useful
distinction. And there is a perfectly clear concept
of temperature. So it would be nice to define
temperature without getting tangled up in macro-work
versus micro-work.
In the case of the Szilard steam engine of precisely one water
molecule, if the trajectory of the molecule is determined by the
description of the system that is supposed to characterize it, then
we are not doing stat mech--we are just doing the mechanics of the
molecule, and any work that it does or has done to it is just the
mechanical work involved for the mechanical system.
Since the
system's description is no coarser-grained than the microscopic
one, the very concept of macro-work is not relevant.
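For reference, the standard quantitative result for a Szilard engine
(not stated in the thread, but it shows the temperature concept at
work for a single molecule) is that one bit of information about the
molecule's position lets the engine extract at most kT ln 2 of work
per cycle:

```python
import math

k_B = 1.380649e-23   # J/K (exact SI value)
T   = 300.0          # K, an assumed ambient temperature

W_max = k_B * T * math.log(2)   # max work per cycle, per measured bit
print(W_max)                    # ~2.87e-21 J
```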
3) Suppose we are transferring entropy and energy from
system A to system B, and we want to match up the
entropy-transfer with the energy-transfer.
-- System B could receive some extra entropy,
because the process is dissipative. For purposes of
analysis we can (sometimes) rule this out by restricting
our attention to nondissipative transfers.
-- System B could receive some extra energy,
more than is required by the second law, if we do a
little "macro-work".
So we see that without careful side-conditions, the
magnitude of the energy-transfer divided by the
entropy-transfer is neither an upper bound nor a
lower bound on the temperature.
I suppose we can
make this work by searching for the _minimum_ energy
required to transfer a given amount of entropy _from
system A_ (not "to B", that's not the same thing).
OTOH a search algorithm strikes me as somewhat messy.
4) What we are doing here is starting down the path
toward deriving a classical theory of thermodynamics.
That doesn't seem like a good idea, because I don't
believe there is any self-consistent classical theory
of thermodynamics.
*) One of the rules I like to live by is that thou shalt
not shoot down a theory unless you've got something better
to replace it with. In this case, the replacement is
obvious: statistical mechanics.
Counting states isn't
completely cut-and-dried, but it makes a lot more sense
than trying to count "degrees of freedom", whatever that means.
Then we assume (or argue from experiment) that the occupation
of a state is an exponentially declining function of the
energy of the state.
Then the definition of temperature
is 100% clear: it just specifies the base of the exponent,
i.e. it specifies the energy-scale in the exponential. And
then we write down the partition function and use it to
calculate system entropy, system energy, and everything else.
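That program can be carried out end-to-end for a toy level scheme
(a sketch; the level energies are made up, and k_B = 1): write down
the partition function, then read off occupations, energy, entropy,
and free energy, and verify the identity F = U - T S.

```python
import math

levels = [0.0, 1.0, 1.0, 2.5]   # hypothetical energy levels

def thermo(levels, T):
    """Everything from the partition function (k = 1)."""
    Z = sum(math.exp(-E / T) for E in levels)
    p = [math.exp(-E / T) / Z for E in levels]   # occupations
    U = sum(pi * E for pi, E in zip(p, levels))  # mean energy
    S = -sum(pi * math.log(pi) for pi in p)      # Gibbs entropy
    F = -T * math.log(Z)                         # free energy
    return U, S, F

T = 1.0
U, S, F = thermo(levels, T)
print(U, S, F)
```

Here T enters only as the energy scale in the exponentials, exactly
as described above.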
The relationship between energy
changes (under no-work conditions) and entropy changes is
quite analogous to that between energy changes (under no heat
conditions) and volume changes. In the first case the
appropriate factor relating the changes is the thermodynamic
temperature, and in the latter case the appropriate factor
relating them is (the negative of) the pressure.
That's true if you want it to be true. That is, you can
arrange it so you write things in terms of
PdV + TdS
but with a simple change of variables you could
equally well wind up with
PdV + SdT
or
VdP + TdS
et cetera. So we should not imagine that there
is any super-deep correspondence between dS and dV.
Also: Even if we restrict ourselves to using dS and dV
as the independent variables, we should keep in mind that
there is a conservation law (or rather a local law of
nondecrease) that applies to S, which makes it pretty
special, unlike V.
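The change of variables mentioned above is the usual Legendre
transform; as a sketch (standard sign conventions, taking
dU = T dS - P dV as the starting point):

```latex
\begin{align*}
  dU &= T\,dS - P\,dV  &  U &= U(S,V) \\
  dF &= -S\,dT - P\,dV &  F &= U - TS \quad\text{(swaps } S \leftrightarrow T\text{)} \\
  dH &= T\,dS + V\,dP  &  H &= U + PV \quad\text{(swaps } V \leftrightarrow P\text{)}
\end{align*}
```

Each transform trades one member of a conjugate pair for the other,
which is why PdV + TdS enjoys no privileged status among the
alternatives listed above.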
We can understand what the value of the transition temperature is
in physical terms by observing that the transition temperature is
simply the latent heat of the transition divided by its entropy
change. So *if* we can quantify how much extra inter-particle
potential energy the particles acquire because of their
redistribution in space when the transition occurs, and *if* we can
also quantify how much extra information is required to specify the
system's microstate (because the particles redistributed themselves
more randomly, with more phase space now available per particle),
then the quotient of these two quantities *determines* the
temperature of the transition.
That doesn't seem like a very practical way to calculate
transition temperatures. Also (as I said above) this seems
to go too far in distinguishing potential thermal energy versus
kinetic thermal energy.
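For what it's worth, the latent-heat/entropy quotient does reproduce
a familiar number when fed textbook values (standard reference data
for the ice/water transition, not from the thread):

```python
L_fus  = 6010.0   # J/mol, latent heat of fusion of water
dS_fus = 22.0     # J/(mol K), entropy of fusion

T_melt = L_fus / dS_fus
print(T_melt)     # ~273 K
```

Of course this only restates measured quantities; predicting either
number from first principles is the hard part.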