
Re: [Phys-L] applications of game theory



The following question was asked off-list, but I suspect
more than one person is wondering about the same issues,
so I'm answering on-list.

Recall that in the "tack" game, I paid $10.00 for a shot
at winning $25.00.

> I don't fully understand what you wrote
> about that upholstery tack random event. He flips the tack, and you
> flip the coin. I don't understand as the tack probably has a preferred
> landing direction while the coin is 50-50. How do you make a prediction
> based on the coin flip?

Yes, the coin flip is 50/50, or close enough.

Therefore /no matter what he does/ with the tack, my random
prediction will agree with his outcome 50% of the time. So
I have a 50/50 shot at the payoff, no matter what.
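
If you want to see that concretely, here is a minimal sketch
(assuming a fair coin and a tack with some fixed, unknown bias):

  import random

  def match_rate(p_point_up, trials=100_000):
      # Fraction of trials where a fair-coin guess agrees with a tack
      # whose point-up probability is p_point_up (the bias is arbitrary).
      matches = 0
      for _ in range(trials):
          tack_up  = random.random() < p_point_up   # possibly biased tack
          guess_up = random.random() < 0.5          # fair coin
          matches += (tack_up == guess_up)
      return matches / trials

  for bias in (0.1, 0.5, 0.9):
      print(bias, round(match_rate(bias), 3))       # all hover near 0.5

Algebraically, P(match) = 0.5 p + 0.5 (1-p) = 0.5 for any bias p.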

This is enough to be profitable. It's not meant to be
an accurate prediction ... just a profitable prediction.
This is important: The merit of a decision depends not
just on the probability of getting this-or-that outcome,
but on the /payoff/ for each possible outcome. If somebody
is giving me better than 6-to-1 odds, I will bet on the
roll of a pair of fair dice, even though my "prediction"
is "wrong" most of the time.

Note that accepting a goodly number of "wrong" predictions
does *not* mean making a virtue of being wrong. When I
bet on the pair of dice, I predict "7". Any other prediction
would be less profitable.
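
For the record, a quick count of why "7" is the best single prediction:

  from collections import Counter

  # Tally how many of the 36 equally likely rolls produce each sum.
  sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))
  print(sums.most_common(3))   # 7 appears 6 ways; no other sum appears more than 5 ways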

This is one of the things that makes purple smoke come
out of my ears when I see the way typical science fair
projects are judged. Research is *not* some made-for-TV
guessing game where the goal is to be right all the time.
The goal is to win often enough /and win big enough/ to
make it worthwhile on average.

Why is it that the baseball coach is smarter than the
science-fair organizers? The coach knows it is OK to
have a .333 batting average ... especially if it comes
with a large OPS.
http://www.baberuth.com/stats/

When Arno Penzias was running the research area at
Bell Labs, he would tell people "If you're right more
than 10% of the time, you're not working on sufficiently
hard problems."

Returning to the "tack" game: Randomizing my guess
ensures that the other guy cannot manipulate me into
making an unwise choice. Note that a skilled magician
can very effectively predict and/or guide a person's
choice.

Neither you nor I know enough about the tack to make a
physics-based prediction ... and there's no advantage in
pretending to know more than we do. We could not hope to
do better than 50/50 ... and randomization guarantees that
we will not do worse. Converging quickly onto a guaranteed
50/50 proposition saves time.

============

Tangential remark: the physics of tossing a coin does /not/
guarantee a random outcome, especially if the tosser doesn't
want it to be random. A good RNG would rely on some sort of
chaotic dynamics, and the physics of the coin toss is not
sufficiently chaotic, not even close. OTOH, when I am
tossing the coin and I *want* it to be random, I can
arrange for it to be random enough.

===========

Note that from the other guy's point of view, *his* minimax
strategy is to trim the tack so that it lands point-up half
the time and sideways half the time. That minimizes his
losses, avoiding a situation where some smart physicist can
look at the tack and predict the outcome.

After the experiment I mentioned this and the guy confirmed
that his tacks were very nearly a 50/50 proposition, so
in fact we were both running the minimax strategy. (At
this point, nobody in the class had any idea what the two
of us were talking about.) Note that neither his randomization
nor my randomization was wasted. The point is that each of
us had a way of keeping the other guy from winning more than
50% of the time.
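
Using the $10 entry fee and $25 prize from above, here is a minimal
sketch of why the pair of 50/50 strategies is a saddle point
(assuming a match pays the $25 and a miss pays nothing, so my net
is +$15 on a match and -$10 on a miss):

  WIN, LOSE = 25 - 10, -10        # my net outcomes per round

  def my_expected_net(q_point_up, p_guess_up):
      # q_point_up: his tack's point-up probability
      # p_guess_up: how often I guess "point-up"
      p_match = p_guess_up * q_point_up + (1 - p_guess_up) * (1 - q_point_up)
      return p_match * WIN + (1 - p_match) * LOSE

  # Guessing 50/50, I expect +$2.50 per round no matter how the tack is biased:
  print([my_expected_net(q, 0.5) for q in (0.0, 0.25, 1.0)])

  # And once his tack is 50/50, no guessing policy of mine beats +$2.50:
  print([my_expected_net(0.5, p) for p in (0.0, 0.5, 1.0)])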

A familiar example of the same thing is scissors/paper/stone.
The random strategy wins 1/3rd of the time, loses 1/3rd of
the time, and ties 1/3rd of the time. This is minimax.
Anything else you do runs the risk that the other guy will
figure out what your policy is, and use it against you.
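
A minimal check of that claim (uniform random play against an
arbitrary opponent mix):

  from fractions import Fraction

  BEATS = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}
  THIRD = Fraction(1, 3)

  def outcome_mix(opponent):
      # Win/lose/tie probabilities for uniform random play against an
      # opponent mix given as {move: probability}.
      win = lose = tie = Fraction(0)
      for mine in BEATS:
          for theirs, p in opponent.items():
              if mine == theirs:
                  tie += THIRD * p
              elif BEATS[mine] == theirs:
                  win += THIRD * p
              else:
                  lose += THIRD * p
      return win, lose, tie

  # 1/3 win, 1/3 lose, 1/3 tie, even against a wildly lopsided opponent:
  print(outcome_mix({'rock': Fraction(9, 10), 'paper': Fraction(1, 10)}))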

Again: Look to the profit. If somebody is paying better
than 2:1 (and ties don't count), the minimax strategy
guarantees that you will turn a profit on average
/no matter what/ the other guy does.
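
Spelling that out (again my reading: "2:1" is the total return per
unit staked on each decided game, with ties replayed; the other
common reading only widens the margin):

  from fractions import Fraction

  # With ties thrown out, uniform random play wins exactly half of the
  # decided games, no matter what the opponent does.
  p_win = Fraction(1, 2)

  def expected_profit(total_return):
      return p_win * total_return - 1        # stake 1 unit per decided game

  print(expected_profit(2))                  # 0    -> break-even at 2:1
  print(expected_profit(Fraction(5, 2)))     # 1/4  -> better than 2:1 turns a profit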

This is another thing that makes purple smoke come out of
my ears: Probability gets short shrift in the typical
curriculum ... and what does get taught is many decades
out of date with respect to modern theory, and also
needlessly disconnected from practical applications.
There are obvious applications for physics majors,
including thermodynamics, quantum mechanics, design of
experiment et cetera ... but also for other majors,
from agriculture to zoology and lots of stuff in between,
including business and military strategy ... not to
mention everyday life.

====================================

This discussion comes under the heading of "game theory".
An excellent popular book on the subject is
William Poundstone
_Prisoner's Dilemma_
http://www.amazon.com/Prisoners-Dilemma-William-Poundstone/dp/038541580X

Note that the title refers to a famous game-theory problem that
does /not/ require randomization ... but later chapters deal
with so-called mixed strategies i.e. games that require some
amount of randomization.

===

If you want a more scholarly tome, including some highly original
research, see
Robert Axelrod
_The Evolution of Cooperation_
http://www.amazon.com/The-Evolution-Cooperation-Revised-Edition/dp/0465005640

===

The movie "A Beautiful Mind" has some game theory in it,
although if you blink you'll miss it. In particular: At one
point John Nash explains that the second-most beautiful guy
is better off courting the second-most beautiful girl and
winning, rather than going after the #1 most beautiful girl
and losing. This is an example of what is now known as a
Nash equilibrium, and it is discussed in Poundstone's book.