
Re: [Phys-l] Monty Hall problem



On 06/29/2010 09:53 AM, John Mallinckrodt wrote:

Making the (still unrealistic) assumption that there are no other
subjective clues for Monty to use in making his decision to "show a
goat and offer a switch" or for you to use in making your decision to
"accept the switch," the probability of winning the car for any given
strategy can be expressed in terms of the probabilities

C (you pick the car, Monty offers the switch)
G (you pick a goat, Monty offers the switch)
S (you exercise the offer to switch if received)

I find that Monty's best strategy is always to offer when you pick
the car and never to offer when you pick the goat and that your best
strategy is never to switch. This, of course, leads to winning the
car 1/3 of the time.

That's not terribly surprising or interesting, but it is somewhat
interesting to note that:

If C is less than twice G, then the odds of winning
monotonically increase (beyond 1/3) as S becomes greater.
If C equals twice G, then the odds of winning are 1/3 independent of
the value of S.
If C is greater than twice G, then the odds of winning monotonically
decrease (below 1/3) as S becomes greater.
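The three cases above can be checked directly. A short sketch (the function name is mine, not from the post): you hold the car with probability 1/3 and win unless you are offered a switch and take it; you hold a goat with probability 2/3 and win only if offered a switch and you take it. That gives P(win) = 1/3 + S(2G − C)/3, which reproduces all three cases.

```python
def win_probability(C, G, S):
    """Win probability for the generalized Monty Hall game.

    You pick the car with probability 1/3.  Monty offers a switch
    with probability C if you hold the car, G if you hold a goat.
    You accept an offered switch with probability S.
    """
    # Hold the car (1/3): you win unless offered a switch and you take it.
    p_car = (1/3) * (1 - C * S)
    # Hold a goat (2/3): you win only if offered a switch and you take it.
    p_goat = (2/3) * G * S
    return p_car + p_goat  # algebraically 1/3 + S*(2*G - C)/3

# C < 2G: always switching beats 1/3
print(win_probability(0.5, 0.5, 1.0))   # 0.5
# C = 2G: exactly 1/3 regardless of S
print(win_probability(0.8, 0.4, 0.25))  # 1/3
# C > 2G: switching drags you below 1/3
print(win_probability(1.0, 0.2, 1.0))   # 2/15
```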

Continuing down that road:

1) No S-values other than S=1 and S=0 ever make sense. That is,
you don't have any viable "mixed strategy".

If we temporarily consider the unusual case where you know C
and G, you can calculate the optimal S, and the result for S is
a highly discontinuous function of C and G.
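The discontinuity is easy to see: P(win) is linear in S, so the optimum always sits at an endpoint, and it jumps from one endpoint to the other as (C, G) crosses the line C = 2G. A sketch (function name mine):

```python
def best_switch_probability(C, G):
    """Optimal S, supposing you somehow knew C and G.

    P(win) = 1/3 + S*(2G - C)/3 is linear in S, so the best S is
    an endpoint: S=1 if 2G > C, S=0 if 2G < C.  On the boundary
    2G == C every S ties, and the optimum jumps discontinuously.
    """
    if 2*G > C:
        return 1.0
    if 2*G < C:
        return 0.0
    return None  # every S gives exactly 1/3; no unique optimum
```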

2) If Monty sticks to his minimax strategy, you really need to
stick to your minimax strategy, i.e. never switch. Any departure
from S=0 will cost you in direct proportion to S.
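To make "in direct proportion" concrete: with Monty at his minimax strategy (C=1, G=0), the general formula collapses to P(win) = (1 − S)/3, so any S > 0 costs you exactly S/3. A minimal check:

```python
def win_prob_vs_minimax_monty(S):
    """Win probability when Monty plays his minimax C=1, G=0."""
    C, G = 1.0, 0.0
    return 1/3 + S * (2*G - C) / 3  # simplifies to (1 - S)/3

# The shortfall below 1/3 is S/3: 0 at S=0, 0.1 at S=0.3, 1/3 at S=1.
for S in (0.0, 0.3, 1.0):
    loss = 1/3 - win_prob_vs_minimax_monty(S)
```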

In other words: Your strategy has a strong stability property.

3) In stark contrast, if you stick to your minimax strategy,
then in any particular instance of the game, Monty can do anything
he pleases. It doesn't matter whether he makes the offer or not,
since you are never going to accept.

In other words: His strategy has zero stability, at least in the
short run.

This is related to the fact that for any single instance of the
game, you presumably don't know C and G.

4) In yet another contrast, if we consider the _iterated_ Monty
Hall problem, where you get to play again and again, you can
imagine trying to build up an empirical estimate of C and G.
If this indicates Monty is not following his optimal strategy
you can try to exploit that. This requires some very tricky,
very advanced game theory.
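The estimation step itself is straightforward to sketch, even if exploiting it optimally is not. Assuming each game ends with the car's location revealed, you learn after the fact whether you had picked the car, so the conditional frequencies of Monty's offers estimate C and G. (Function name and structure are mine, purely illustrative.)

```python
import random

def estimate_monty(true_C, true_G, n_games, seed=0):
    """Estimate C and G empirically from repeated plays.

    Each game: you pick the car w.p. 1/3; Monty offers a switch
    w.p. true_C (car) or true_G (goat).  The reveal at the end
    tells you which case you were in, so you can tally
    C_hat = P(offer | car) and G_hat = P(offer | goat).
    """
    rng = random.Random(seed)
    offers_car = games_car = 0
    offers_goat = games_goat = 0
    for _ in range(n_games):
        if rng.random() < 1/3:          # you picked the car
            games_car += 1
            if rng.random() < true_C:
                offers_car += 1
        else:                           # you picked a goat
            games_goat += 1
            if rng.random() < true_G:
                offers_goat += 1
    C_hat = offers_car / games_car if games_car else None
    G_hat = offers_goat / games_goat if games_goat else None
    return C_hat, G_hat

C_hat, G_hat = estimate_monty(0.9, 0.1, 100_000)
# If the estimates indicate 2*G_hat < C_hat, lean toward never switching.
```

Of course this only yields point estimates; deciding when the evidence justifies deviating from S=0 (while Monty may be adapting in turn) is where the tricky game theory comes in.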