
[Phys-l] entropy +- disorder +- complexity



On 04/25/2011 04:02 PM, Stefan Jeglinski wrote:
> (I'm def in the camp that order/disorder should be
> barred from the discussion)

I appreciate the sentiment. On alternate Tuesdays I'm in
the same camp. But I'm not sure that's the best approach.

IMHO, when talking to students or to the general public, it is
/with rare exceptions/ best to avoid talking about misconceptions.

However, there are exceptions, and this may be one of them. We
can make an exception whenever a misconception is particularly
prevalent and/or particularly pernicious.

For one thing, the misconceptions about associating entropy with
disorder are so prevalent that sooner or later, each of us will
be called upon to confront them.

Also, one can invoke the principle that says fallacy is more
dangerous than absurdity. Truly absurd ideas are not a problem,
because nobody will take them seriously. (This principle evidently
doesn't apply to certain politicians, but let's not go there.)

The interesting thing about associating entropy with disorder
is that the idea is approximately half true. The true half is
worth discussing on its own merits, and the untrue half is worth
refuting because it is plausible and prevalent.

Specifically: It's more-or-less true that processes that tend
to increase the disorder also tend to increase the entropy.
The process of /stirring/ is an example. Shuffling a deck of
cards is another example. This is, alas, treacherous, as we
can see by considering processes that go in the other direction.
There are plenty of processes that decrease the entropy but do
*not* decrease the disorder. A classic example is peeking at the
deck of cards after it has been shuffled; the disorder remains
large but the entropy goes instantly to zero.
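
To put a number on that, here is a small Python sketch (my own
illustration, not part of the original argument, using log base 2
so the answer comes out in bits). Before peeking, a well-shuffled
deck is described by a uniform distribution over all 52! orderings,
which carries about 226 bits of entropy; after peeking, one ordering
has probability 1 and the entropy is zero, even though the physical
arrangement of the cards is exactly as "disordered" as before.

  import math

  # Entropy in bits:  S = sum_i P_i * log2(1/P_i)
  def entropy_bits(probs):
      return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

  n_orderings = math.factorial(52)   # microstates = deck orderings

  # Before peeking: every ordering is equally likely.
  # (The uniform distribution over N outcomes has entropy log2(N).)
  s_before = math.log2(n_orderings)

  # After peeking: one ordering has probability 1, all others 0.
  s_after = entropy_bits([1.0])

  print(f"before peeking: {s_before:.1f} bits")   # about 225.6
  print(f"after peeking:  {s_after:.1f} bits")    # exactly 0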

This is somewhat related to the fact that disorder is a property
of the microstate, whereas entropy is a property of the macrostate.
Let's be clear: entropy is S = Σ_i P_i log(1/P_i), where each P_i is
a property of the i-th microstate. Meanwhile, the entropy itself is
an average over the entire macrostate. Indeed the entropy is the
expectation value of the surprise-value, where the surprise-value
of the i-th microstate is defined as log(1/P_i).
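
As a sanity check on that formula, here is a minimal sketch with
made-up probabilities for a four-microstate macrostate; it merely
verifies that averaging the surprise-value over the macrostate
reproduces S = Σ_i P_i log(1/P_i).

  import math

  # Made-up macrostate: four microstates with assigned probabilities.
  P = [0.5, 0.25, 0.125, 0.125]

  # Surprise-value of each microstate, in bits: log2(1/P_i).
  surprise = [math.log2(1.0 / p) for p in P]

  # Entropy = expectation value of the surprise-value.
  S = sum(p * h for p, h in zip(P, surprise))

  print(surprise)   # [1.0, 2.0, 3.0, 3.0]
  print(S)          # 1.75 (bits)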

One problem with any discussion of disorder is the lack of any
agreed-upon way to quantify the disorder. I do not offer a
definition of disorder, but I take the lack of definition as a
sign that when the topic comes up, we should change the subject
and talk about something else instead, something better defined.

One candidate (not the only candidate) is to talk about the
algorithmic complexity, which is a fascinating and powerful
idea introduced by Chaitin.

This idea is often associated with another name, a more
famous name, but Chaitin in fact has a ton of priority,
so I insist on attributing it to him.

It turns out that most of the things we consider to be highly
disordered also have high algorithmic complexity. So we have
nothing to lose and much to gain by talking about complexity
instead of disorder.

At this point it is only a small leap to assign to the i-th
microstate a probability of 2^(-H_i), where H_i is the algorithmic
complexity. This is certainly not the only probability measure
one could choose, but one could do worse. This has the amusing
property that the surprise-value is then equal to the complexity.
That in turn means that the entropy is the ensemble average of
the complexity, specifically, the expectation value of the
complexity.
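
Here is a toy sketch of that bookkeeping. The complexities are
made-up numbers, chosen so that the assigned probabilities sum to 1
(which a real prefix-free complexity is not obliged to do): with
P_i = 2^(-H_i), the surprise-value log2(1/P_i) is exactly H_i, so
the entropy comes out as the average of the complexity.

  import math

  # Made-up algorithmic complexities (in bits) for six microstates,
  # chosen so that the assigned probabilities sum to 1.
  H = [2, 2, 2, 3, 4, 4]

  P = [2.0 ** (-h) for h in H]                 # P_i = 2^(-H_i)
  surprise = [math.log2(1.0 / p) for p in P]   # equals H_i exactly

  S = sum(p * h for p, h in zip(P, H))         # expected complexity

  print(sum(P))     # 1.0
  print(surprise)   # [2.0, 2.0, 2.0, 3.0, 4.0, 4.0]
  print(S)          # 2.375 (bits)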

A lot of the literature on algorithmic complexity asserts
that it is uniquely defined "up to an additive fudge that is
bounded and doesn't matter". Well, I'm here to tell you that
there is an additive fudge ... and it matters a great deal for
present purposes. There are other purposes where it doesn't
matter, for instance if you are in the transmission business and
you care only about the average price per bit for a compressed
string. However, as soon as we start computing probabilities
fudge to the formula 2^(-Hi) then shifting Hi by an additive
constant shifts the probability by a /multiplicative/ fudge, and
this changes everything. It is true /and important/ that the
probability of a microstate be non-unique. Peeking at the deck
after it has been shuffled does not change the microstate, but
it does change the probability!
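
The arithmetic behind that is a one-liner (a sketch, with made-up
numbers): adding c bits to H_i multiplies 2^(-H_i) by 2^(-c), so a
"bounded" additive fudge in the complexity becomes an exponential
fudge in the probability.

  H, c = 10, 3                  # made-up complexity and fudge, in bits

  p_before = 2.0 ** (-H)        # probability assigned via 2^(-H)
  p_after  = 2.0 ** (-(H + c))  # same microstate, H shifted by c

  print(p_before / p_after)     # 2^c = 8.0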

So, we are in some deep water here. Explaining this to students
or to the general public is not easy. It would be a lot easier
if everybody had a good understanding of probability, including
the idea that we get to /assign/ probability to each microstate,
and we can change the probability without changing the microstate.
A lot of people don't realize this, and don't even believe you
when you tell them. At this point I like to say, if you don't
believe that peeking changes the probabilities, I look forward
to playing poker with you.

One of the most amusing things about algorithmic complexity is
the following: Consider the category of strings such that the
algorithmic complexity of the string is significantly less than
the length of the string itself. I call these strings algorithmically
/tight/ in contrast to all the others that are algorithmically
/loose/. A simple pigeon-hole argument shows that tight strings
must be very rare. That is, the number of long strings greatly
exceeds the number of short descriptions. Most strings do not
have any description shorter than simply quoting the string.
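
The counting is easy to make quantitative (a sketch; here
"significantly less" is taken to mean "shorter by at least k bits"):
there are 2^n binary strings of length n, but fewer than 2^(n-k+1)
descriptions of length n-k or less, so at most a fraction of about
2^(1-k) of the strings can possibly be tight.

  # Upper bound on the fraction of length-n binary strings that can
  # be "tight", i.e. have some description at least k bits shorter
  # than the string itself.
  def tight_fraction_bound(n, k):
      n_strings = 2 ** n                      # strings of length n
      n_short_descr = 2 ** (n - k + 1) - 1    # descriptions <= n-k bits
      return n_short_descr / n_strings

  for k in (10, 20, 50):
      print(k, tight_fraction_bound(100, k))
  # k=10: fewer than ~0.2% of strings can be compressed by 10 bits;
  # k=50: fewer than ~2 strings in 10**15.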

This can be summarized by saying that almost all strings are
loose. More amusingly, we can say that all strings are loose
/except for the ones we know about/.

Note that in the literature these loose strings are called
algorithmically "random" but I consider this to be an abuse
of the word random.

There is a great deal more that could be said about this.

Bottom line: There is quite a lot to be learned by looking
into the topic of disorder, or rather complexity. It sheds
light on the foundations of what probability is. It is
absolutely not the same as entropy, but it's not completely
unrelated, either. Same church, different pew.