The two envelopes problem, also known as the exchange paradox, is a brain teaser, puzzle, or paradox in logic, probability, and recreational mathematics. It is of special interest in decision theory and for the Bayesian interpretation of probability theory. Historically, it arose as a variant of the necktie paradox. The problem is typically introduced by formulating a hypothetical challenge of the following type: you are given two indistinguishable envelopes, one containing twice as much money as the other; you may keep the envelope you pick, but before opening it you are offered the chance to switch to the other. Should you switch?
It seems obvious that there is no point in switching envelopes as the situation is symmetric. However, because you stand to gain twice as much money if you switch while risking only a loss of half of what you currently have, it is possible to argue that it is more beneficial to switch. The problem is to show what is wrong with this argument.
Introduction
Problem
Basic setup: You are given two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other. You may pick one envelope and keep whatever amount it contains. You pick one envelope at random but before you open it you are given the chance to take the other envelope instead.
The switching argument: Now suppose you reason as follows:
1. Denote by A the amount in your selected envelope.
2. The probability that A is the smaller amount is 1/2, and that it is the larger amount is also 1/2.
3. The other envelope may contain either 2A or A/2.
4. If A is the smaller amount, then the other envelope contains 2A.
5. If A is the larger amount, then the other envelope contains A/2.
6. Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2.
7. So the expected value of the money in the other envelope is 1/2 (2A) + 1/2 (A/2) = 5A/4.
8. This is greater than A, so, on average, you gain by swapping.
9. After the switch, you can denote the contents of that envelope by B and reason in exactly the same manner as above.
10. You will conclude that the most rational thing to do is to swap back again.
11. To be rational, you will thus end up swapping envelopes indefinitely.
12. As it seems more rational to open just any envelope than to swap indefinitely, you have a contradiction.
The puzzle: The puzzle is to find the flaw in the very compelling line of reasoning above. This includes determining exactly why and under what conditions that step is not correct, in order to be sure not to make this mistake in a more complicated situation where the misstep may not be so obvious. In short, the problem is to solve the paradox. Thus, in particular, the puzzle is not solved by the very simple task of finding another way to calculate the probabilities that does not lead to a contradiction.
Solutions
Many solutions resolving the paradox have been presented. The probability theory underlying the problem is well understood, and any apparent paradox is generally due to treating what is actually a conditional probability as an unconditional probability. A large variety of similar formulations of the paradox are possible, and have resulted in a voluminous literature on the subject.
Versions of the problem have continued to spark interest in the fields of philosophy and game theory.
Simple resolution
The total amount in both envelopes is a constant c = 3x, with x in one envelope and 2x in the other.
If you select the envelope with x first, you gain the amount x by swapping. If you select the envelope with 2x first, you lose the amount x by swapping. So you gain on average G = 1/2 (+x) + 1/2 (−x) = 0 by swapping.
Swapping is not better than keeping. The expected value is the same for both the envelopes. Thus no contradiction exists.
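To make the simple resolution concrete, here is a minimal Monte Carlo sketch in Python; the amount x = 20 and the function names are illustrative choices, not part of the original problem statement. It estimates the average payoff of always keeping versus always switching and finds both close to 1.5x.

```python
# A minimal sketch, assuming illustrative amounts x and 2x (here x = 20).
import random

def play(switch, x=20):
    """One round: the envelopes hold x and 2x; the player picks one at random."""
    envelopes = [x, 2 * x]
    random.shuffle(envelopes)
    chosen, other = envelopes
    return other if switch else chosen

n = 100_000
keep = sum(play(switch=False) for _ in range(n)) / n
swap = sum(play(switch=True) for _ in range(n)) / n
print(keep, swap)  # both estimates are close to 1.5 * x = 30: no advantage in switching
```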
Other simple resolutions
Step 7 assumes that the second choice is independent of the first choice. This is the error, and it is the source of the apparent paradox.
A common way to resolve the paradox, both in popular literature and part of the academic literature, especially in philosophy, is to assume that the 'A' in step 7 is intended to be the expected value in envelope A and that we intended to write down a formula for the expected value in envelope B.
Step 7 states that the expected value in B = 1/2( 2A + A/2 )
It is pointed out that the 'A' in the first part of the formula is the expected value of A, given that envelope A contains less than envelope B, whereas the 'A' in the second part of the formula is the expected value of A, given that envelope A contains more than envelope B. The flaw in the argument is that the same symbol is used with two different meanings in the two parts of the same calculation, yet is assumed to have the same value in both cases.
A correct calculation would be:
- Expected value in B = 1/2 ( Expected value in A (given A is larger than B) + Expected value in A (given A is smaller than B) )
If we then take the sum in one envelope to be x and the sum in the other to be 2x, the expected value calculation becomes:
- Expected value in B = 1/2 (x + 2x)
which is equal to the expected sum in A.
In non-technical language, what goes wrong (see Necktie paradox) is that, in the scenario provided, the mathematics use relative values of A and B (that is, it assumes that one would gain more money if A is less than B than one would lose if the opposite were true). However, the two values of money are fixed (one envelope contains, say, $20 and the other $40). If the values of the envelopes are restated as x and 2x, it's much easier to see that, if A were greater, one would lose x by switching and, if B were greater, one would gain x by switching. One does not actually gain a greater amount of money by switching because the total T of A and B (3x) remains the same, and the difference x is fixed to T/3.
Line 7 should have been worked out more carefully as follows:
- E(B) = 1/2 E(B | A is the smaller amount) + 1/2 E(B | A is the larger amount) = 1/2 E(2A | A is the smaller amount) + 1/2 E(A/2 | A is the larger amount)
A will be larger when A is larger than B, than when it is smaller than B. So its average values (expectation values) in those two cases are different. And the average value of A is not the same as A itself, anyway. Two mistakes are being made: the writer forgot he was taking expectation values, and he forgot he was taking expectation values under two different conditions.
It would have been easier to compute E(B) directly. Denoting the lower of the two amounts by x, and taking it to be fixed (even if unknown), we find that
- E(B) = 1/2 (2x) + 1/2 (x) = 1.5x
We learn that 1.5x is the expected value of the amount in Envelope B. By the same calculation it is also the expected value of the amount in Envelope A. They are the same hence there is no reason to prefer one envelope to the other. This conclusion was, of course, obvious in advance; the point is that we identified the false step in the argument for switching by explaining exactly where the calculation being made there went off the rails.
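The same point can be checked with exact arithmetic. The following sketch (hypothetical, with x = 1 for concreteness) computes E(A) and E(B) by conditioning on which envelope holds the smaller amount, and contrasts them with the flawed step-7 formula that treats the symbol A as a single fixed number in both cases.

```python
# Exact expected values with fixed amounts x and 2x (illustrative x = 1).
from fractions import Fraction

x = Fraction(1)
# With probability 1/2 Envelope A holds x (and B holds 2x); with probability 1/2 it is the other way round.
E_A = Fraction(1, 2) * x + Fraction(1, 2) * (2 * x)
E_B = Fraction(1, 2) * (2 * x) + Fraction(1, 2) * x
print(E_A, E_B)  # both equal 3x/2: no gain from switching

# The flawed step 7 instead uses one and the same number A in both conditions:
A = Fraction(1)
print(Fraction(1, 2) * (2 * A) + Fraction(1, 2) * (A / 2))  # 5A/4
```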
We could also continue from the correct but difficult to interpret result of the development in line 7:
- E(B) = 1/2 E(2A | A is the smaller amount) + 1/2 E(A/2 | A is the larger amount) = 1/2 (2x) + 1/2 (2x/2) = 1.5x
so (of course) different routes to calculate the same thing all give the same answer.
Tsikogiannopoulos (2012) presented a different way to do these calculations. It is by definition correct to assign equal probabilities to the events that the other envelope contains double or half the amount in envelope A, so the "switching argument" is correct up to step 6. Given that the player's envelope contains the amount A, the actual situation is distinguished into two different games: the first game would be played with the amounts (A, 2A) and the second game with the amounts (A/2, A). Only one of them is actually played, but we do not know which one. These two games need to be treated differently. If the player wants to compute the expected return (profit or loss) in case of exchange, the return derived from each game should be weighted by the average amount in the two envelopes in that particular game. In the first case the profit would be A with an average amount of 3A/2, whereas in the second case the loss would be A/2 with an average amount of 3A/4. So the formula for the expected return in case of exchange, seen as a proportion of the total amount in the two envelopes, is:
- E = 1/2 · (A / (3A/2)) + 1/2 · (−(A/2) / (3A/4)) = 1/2 · (2/3) − 1/2 · (2/3) = 0
This result means yet again that the player has to expect neither profit nor loss by exchanging his/her envelope.
We could actually open our envelope before deciding on switching or not and the above formula would still give us the correct expected return. For example, if we opened our envelope and saw that it contained 100 euros, then we would set A = 100 in the above formula and the expected return in case of switching would be:
- E = 1/2 · (100 / 150) + 1/2 · (−50 / 75) = 1/3 − 1/3 = 0
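The arithmetic can be verified with a few lines of exact rational computation; the sketch below simply re-does the weighting described above for the opened amount A = 100 (the amounts and variable names are only illustrative).

```python
# Tsikogiannopoulos-style weighting for an opened amount A = 100 (illustrative).
from fractions import Fraction

A = Fraction(100)
gain, loss = A, A / 2              # possible profit and possible loss on switching
avg1, avg2 = 3 * A / 2, 3 * A / 4  # average amount in each hypothetical game
expected_return = Fraction(1, 2) * (gain / avg1) + Fraction(1, 2) * (-loss / avg2)
print(expected_return)             # 0: neither profit nor loss is expected
```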
Nalebuff asymmetric variant
As pointed out by many authors, the mechanism by which the amounts in the two envelopes are determined is crucial for the player's decision to switch or not. Suppose that the amounts in the two envelopes A and B were not determined by first fixing the contents of two envelopes E1 and E2, and then naming them A and B at random (for instance, by the toss of a fair coin; Nickerson and Falk, 2006). Instead, we start right at the beginning by putting some amount in Envelope A, and then fill B in a way which depends both on chance (the toss of a coin) and on what we put in A. Suppose that first of all the amount a in Envelope A is fixed in some way or other, and then the amount in Envelope B is fixed, dependent on what is already in A, according to the outcome of a fair coin: if the coin fell heads then 2a is put in Envelope B, and if the coin fell tails then a/2 is put in Envelope B. If the player is aware of this mechanism, knows that they hold Envelope A, but does not know the outcome of the coin toss nor the amount a, then the switching argument is correct and they are recommended to switch envelopes. This version of the problem was introduced by Nalebuff (1988) and is often called the Ali Baba problem. Notice that there is no need to look in Envelope A in order to decide whether or not to switch.
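A short simulation can illustrate why switching pays in this asymmetric variant; the amount a = 40 and the function name below are arbitrary illustrative choices, not part of Nalebuff's formulation.

```python
# A minimal sketch of the Ali Baba setup: a is placed in Envelope A first,
# then a fair coin decides whether Envelope B receives 2a or a/2.
import random

def ali_baba_round(a=40.0):
    b = 2 * a if random.random() < 0.5 else a / 2
    return a, b

n = 100_000
rounds = [ali_baba_round() for _ in range(n)]
keep = sum(a for a, _ in rounds) / n
switch = sum(b for _, b in rounds) / n
print(keep, switch)  # keep stays at a = 40, switch approaches 5a/4 = 50
```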
Many more variants of the problem have been introduced. Nickerson and Falk (2006) systematically survey a total of 8.
Bayesian resolutions
The simple resolution above assumed that the person who invented the argument for switching was trying to calculate the expectation value of the amount in Envelope A, thinking of the two amounts in the envelopes as fixed (x and 2x). The only uncertainty is which envelope has the smaller amount x. However, many mathematicians and statisticians interpret the argument as an attempt to calculate the expected amount in Envelope B, given a real or hypothetical amount "A" in Envelope A. (A mathematician would moreover prefer to use the symbol a to stand for a possible value, reserving the symbol A for a random variable.) One does not need to look in the envelope to see how much is in there in order to do the calculation. If the result of the calculation is advice to switch envelopes, whatever amount might be in there, then it would appear that one should switch anyway, without looking. In this case, at steps 6, 7 and 8 of the reasoning, "A" is any fixed possible value of the amount of money in the first envelope.
This interpretation of the two envelopes problem appears in the first publications in which the paradox was introduced in its present-day form, Gardner (1989) and Nalebuff (1989). It is common in the more mathematical literature on the problem. It also applies to the modification of the problem (which seems to have started with Nalebuff) in which the owner of Envelope A does actually look in his envelope before deciding whether or not to switch; though Nalebuff does also emphasize that there is no need to have the owner of Envelope A look in his envelope. If he imagines looking in it, and if for any amount which he can imagine being in there, he has an argument to switch, then he will decide to switch anyway. Finally, this interpretation was also the core of earlier versions of the two envelopes problem (Littlewood's, Schrödinger's, and Kraitchik's switching paradoxes); see the concluding section, on history of TEP.
This kind of interpretation is often called "Bayesian" because it assumes the writer is also incorporating a prior probability distribution of possible amounts of money in the two envelopes in the switching argument.
Simple form of Bayesian resolution
The simple resolution depended on a particular interpretation of what the writer of the argument is trying to calculate: namely, it assumed he was after the (unconditional) expectation value of what's in Envelope B. In the mathematical literature on Two Envelopes Problem a different interpretation is more common, involving the conditional expectation value (conditional on what might be in Envelope A). To solve this and related interpretations or versions of the problem, most authors use the Bayesian interpretation of probability, which means that probability reasoning is not only applied to truly random events like the random pick of an envelope, but also to our knowledge (or lack of knowledge) about things which are fixed but unknown, like the two amounts originally placed in the two envelopes, before one is picked at random and called "Envelope A". Moreover, according to a long tradition going back at least to Laplace and his principle of insufficient reason one is supposed to assign equal probabilities when one has no knowledge at all concerning the possible values of some quantity. Thus the fact that we are not told anything about how the envelopes are filled can already be converted into probability statements about these amounts. No information means that probabilities are equal.
In steps 6 and 7 of the switching argument, the writer imagines that Envelope A contains a certain amount a, and then seems to believe that, given that information, the other envelope is equally likely to contain twice or half that amount. That assumption can only be correct if, prior to knowing what was in Envelope A, the writer would have considered the following two pairs of values for the two envelopes equally likely: the amounts a/2 and a, and the amounts a and 2a. (This follows from Bayes' rule in odds form: posterior odds equal prior odds times likelihood ratio.) But we can then apply the same reasoning, imagining not a but a/2 in Envelope A, and similarly for 2a, and so on ad infinitum, repeatedly halving or repeatedly doubling as many times as we like (Falk and Konold, 1992).
Suppose, for the sake of argument, that we start by imagining an amount 32 in Envelope A. In order that the reasoning in steps 6 and 7 is correct whatever amount happens to be in Envelope A, we apparently believe in advance that the following ten amounts are all equally likely to be the smaller of the two amounts in the two envelopes: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 (equally likely powers of 2: Falk and Konold, 1992). But going to even larger or even smaller amounts, the "equally likely" assumption starts to appear a bit unreasonable. Suppose we stop, just with these ten equally likely possibilities for the smaller amount in the two envelopes. In that case, the reasoning in steps 6 and 7 is entirely correct if Envelope A happens to contain any of the amounts 2, 4, ..., 512: switching envelopes gives an expected (average) gain of 25%. If Envelope A happens to contain the amount 1, then the expected gain is actually 100%. But if it happens to contain the amount 1024, a massive loss of 50% (of a rather large amount) is incurred. That only happens once in twenty times, but it is exactly enough to balance the expected gains in the other 19 out of 20 times.
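The bookkeeping behind this balancing act can be checked exactly; the sketch below (a hypothetical illustration, not taken from the cited papers) builds the joint distribution for the ten equally likely powers of 2 and reports the conditional and unconditional expected gains from switching.

```python
# Exact computation for the prior "the smaller amount is 1, 2, 4, ..., 512, each with probability 1/10".
from fractions import Fraction

smaller_values = [2 ** k for k in range(10)]           # 1, 2, 4, ..., 512
prior = {x: Fraction(1, 10) for x in smaller_values}

joint = {}                                             # distribution of (amount in A, amount in B)
for x, p in prior.items():
    for pair in [(x, 2 * x), (2 * x, x)]:              # Envelope A gets the smaller or the larger, 1/2 each
        joint[pair] = joint.get(pair, 0) + p * Fraction(1, 2)

for a in sorted({a for a, _ in joint}):
    weights = {pair: w for pair, w in joint.items() if pair[0] == a}
    total = sum(weights.values())
    gain = sum(w * (b - a) for (_, b), w in weights.items()) / total
    print(a, gain / a)    # +1 for a = 1, +1/4 for a = 2, ..., 512, and -1/2 for a = 1024

print(sum(w * (b - a) for (a, b), w in joint.items()))  # unconditional expected gain: 0
```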
Alternatively, we do go on ad infinitum, but now we are working with a quite ludicrous assumption, implying for instance that it is infinitely more likely for the amount in Envelope A to be smaller than 1, and infinitely more likely to be larger than 1024, than to lie between those two values. This is a so-called improper prior distribution: probability calculus breaks down; expectation values are not even defined (Falk and Konold, 1992).
Many authors have also pointed out that if a maximum sum that can be put in the envelope with the smaller amount exists, then it is very easy to see that Step 6 breaks down, since if the player holds more than the maximum sum that can be put into the "smaller" envelope they must hold the envelope containing the larger sum, and are thus certain to lose by switching. This may not occur often, but when it does, the heavy loss the player incurs means that, on average, there is no advantage in switching. Some writers consider that this resolves all practical cases of the problem.
But the problem can also be resolved mathematically without assuming a maximum amount. Nalebuff (1989), Christensen and Utts (1992), Falk and Konold (1992), Blachman, Christensen and Utts (1996), Nickerson and Falk (2006), pointed out that if the amounts of money in the two envelopes have any proper probability distribution representing the player's prior beliefs about the amounts of money in the two envelopes, then it is impossible that whatever the amount A=a in the first envelope might be, it would be equally likely, according to these prior beliefs, that the second contains a/2 or 2a. Thus step 6 of the argument, which leads to always switching, is a non-sequitur, also when there is no maximum to the amounts in the envelopes.
Introduction to further developments in connection with Bayesian probability theory
The first two resolutions discussed above (the "simple resolution" and the "Bayesian resolution") correspond to two possible interpretations of what is going on in step 6 of the argument. They both assume that step 6 indeed is "the bad step". But the description in step 6 is ambiguous. Is the author after the unconditional (overall) expectation value of what is in envelope B (perhaps conditional on the smaller amount x), or is he after the conditional expectation of what is in envelope B, given any possible amount a which might be in envelope A? Thus, there are two main interpretations of the intention of the composer of the paradoxical argument for switching, and two main resolutions.
A large literature has developed concerning variants of the problem. The standard assumption about the way the envelopes are set up is that a sum of money is in one envelope, and twice that sum is in another envelope. One of the two envelopes is randomly given to the player (envelope A). The originally proposed problem does not make clear exactly how the smaller of the two sums is determined, what values it could possibly take and, in particular, whether there is a minimum or a maximum sum it might contain. However, if we are using the Bayesian interpretation of probability, then we start by expressing our prior beliefs as to the smaller amount in the two envelopes through a probability distribution. Lack of knowledge can also be expressed in terms of probability.
A first variant within the Bayesian version is to come up with a proper prior probability distribution of the smaller amount of money in the two envelopes, such that when step 6 is performed properly, the advice is still to prefer Envelope B, whatever might be in Envelope A. So although the specific calculation performed in step 6 was incorrect (there is no proper prior distribution such that, given what is in the first envelope A, the other envelope is always equally likely to be larger or smaller), a correct calculation, depending on the prior being used, does lead to the result E(B | A = a) > a for all possible values of a.
In these cases it can be shown that the expected sum in both envelopes is infinite. There is no gain, on average, in swapping.
Second mathematical variant
Though Bayesian probability theory can resolve the first mathematical interpretation of the paradox above, it turns out that examples can be found of proper probability distributions, such that the expected value of the amount in the second envelope given that in the first does exceed the amount in the first, whatever it might be. The first such example was already given by Nalebuff (1989). See also Christensen and Utts (1992).
Denote again the amount of money in the first envelope by A and that in the second by B. We think of these as random. Let X be the smaller of the two amounts and Y=2X be the larger. Notice that once we have fixed a probability distribution for X then the joint probability distribution of A,B is fixed, since A,B = X,Y or Y,X each with probability 1/2, independently of X,Y.
The bad step 6 in the "always switching" argument led us to the finding E(B | A = a) > a for all a, and hence to the recommendation to switch, whether or not we know a. Now, it turns out that one can quite easily invent proper probability distributions for X, the smaller of the two amounts of money, such that this bad conclusion is still true. One example is analysed in more detail in a moment.
As mentioned before, it cannot be true that whatever a, given A=a, B is equally likely to be a/2 or 2a, but it can be true that whatever a, given A=a, B is larger in expected value than a.
Suppose for example (Broome, 1995) that the envelope with the smaller amount actually contains 2^n dollars with probability 2^n/3^(n+1), where n = 0, 1, 2, ... These probabilities sum to 1, hence the distribution is a proper prior (for subjectivists) and a completely decent probability law also for frequentists.
Imagine what might be in the first envelope. A sensible strategy would certainly be to swap when the first envelope contains 1, as the other must then contain 2. Suppose on the other hand the first envelope contains 2. In that case there are two possibilities: the envelope pair in front of us is either {1, 2} or {2, 4}. All other pairs are impossible. The conditional probability that we are dealing with the {1, 2} pair, given that the first envelope contains 2, is
- P({1, 2} | A = 2) = (1/2 · 1/3) / (1/2 · 1/3 + 1/2 · 2/9) = 3/5
and consequently the probability it's the {2, 4} pair is 2/5, since these are the only two possibilities. In this derivation, 1/2 · 1/3 is the probability that the envelope pair is the pair {1, 2} and Envelope A happens to contain 2; 1/2 · 2/9 is the probability that the envelope pair is the pair {2, 4} and (again) Envelope A happens to contain 2. Those are the only two ways that Envelope A can end up containing the amount 2.
It turns out that these proportions hold in general unless the first envelope contains 1. Denote by a the amount we imagine finding in Envelope A, if we were to open that envelope, and suppose that a = 2^n for some n ≥ 1. In that case the other envelope contains a/2 with probability 3/5 and 2a with probability 2/5.
So either the first envelope contains 1, in which case the conditional expected amount in the other envelope is 2, or the first envelope contains a > 1, and though the second envelope is more likely to be smaller than larger, its conditionally expected amount is larger: the conditionally expected amount in Envelope B is
- E(B | A = a) = 3/5 · (a/2) + 2/5 · (2a) = 11a/10
which is more than a. This means that the player who looks in Envelope A would decide to switch whatever he saw there. Hence there is no need to look in Envelope A to make that decision.
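The conditional probabilities 3/5 and 2/5 and the resulting expected gain can be checked exactly for the Broome prior; the sketch below (function names are illustrative) does this for the first few possible amounts.

```python
# Exact check of the Broome example: P(smaller amount = 2**n) = 2**n / 3**(n+1).
from fractions import Fraction

def prior(n):
    return Fraction(2 ** n, 3 ** (n + 1))

def conditional_gain(n):
    """Expected value of B minus a, given that Envelope A contains a = 2**n with n >= 1."""
    a = 2 ** n
    w_small = prior(n) * Fraction(1, 2)       # pair {a, 2a}: A holds the smaller amount
    w_large = prior(n - 1) * Fraction(1, 2)   # pair {a/2, a}: A holds the larger amount
    p_double = w_small / (w_small + w_large)  # = 2/5 for every n >= 1
    p_half = w_large / (w_small + w_large)    # = 3/5 for every n >= 1
    return p_double * (2 * a) + p_half * Fraction(a, 2) - a

for n in range(1, 6):
    print(2 ** n, conditional_gain(n))        # the gain is a/10 > 0 for every a = 2**n
```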
This conclusion is just as clearly wrong as it was in the preceding interpretations of the Two Envelopes Problem. But now the flaws noted above do not apply; the a in the expected value calculation is a constant and the conditional probabilities in the formula are obtained from a specified and proper prior distribution.
Proposed resolutions through mathematical economics
Most writers think that the new paradox can be defused, although the resolution requires concepts from mathematical economics. Suppose E(B | A = a) > a for all a. It can be shown that this is possible for some probability distributions of X (the smaller amount of money in the two envelopes) only if E(X) = ∞, that is, only if the mean of all possible values of money in the envelopes is infinite. To see why, compare the series described above, in which the probability of each value of X is 2/3 times the probability of the previous value, with one in which the probability of each value of X is only 1/3 times the probability of the previous value. When the probability of each subsequent term is greater than one-half of the probability of the term before it (and each value of X is twice the one before it) the mean is infinite, but when the probability factor is less than one-half, the mean converges. In the cases where the probability factor is less than one-half, E(B | A = a) < a for all a other than the first, smallest a, and the total expected value of switching converges to 0. In addition, if an ongoing distribution with a probability factor greater than one-half is made finite by, after any number of terms, establishing a final term with "all the remaining probability," that is, 1 minus the probability of all previous terms, the expected value of switching with respect to the probability that A is equal to the last, largest a exactly negates the sum of the positive expected values that came before, and again the total expected value of switching drops to 0 (this is the general case of the equal probability over a finite set of values in the envelopes described above). Thus, the only distributions that seem to point to a positive expected value for switching are those in which E(X) = ∞. Averaging over a, it follows that E(B) = E(A) = ∞ (because A and B have identical probability distributions by symmetry, and both A and B are greater than or equal to X).
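The role of the one-half threshold can be made concrete with a small sketch (an illustration under the geometric-type priors just described, not taken from the cited papers): with ratio 2/3 the conditional gain from switching is positive for every amount, while with ratio 1/3 it is negative.

```python
# Conditional gain E(B | A = a) - a for priors P(smaller = 2**k) proportional to r**k.
from fractions import Fraction

def gain_given_a(r, n):
    a = 2 ** n
    w_small, w_large = r ** n, r ** (n - 1)   # weights for "a is the smaller" vs "a is the larger"
    p_double = w_small / (w_small + w_large)  # probability the other envelope holds 2a
    p_half = 1 - p_double
    return p_double * (2 * a) + p_half * Fraction(a, 2) - a

for r in (Fraction(2, 3), Fraction(1, 3)):
    print(r, [gain_given_a(r, n) for n in range(1, 5)])
# r = 2/3: every gain is +a/10 (and the mean of X is infinite);
# r = 1/3: every gain is -a/8 (and the mean of X is finite).
```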
If we do not look into the first envelope, then clearly there is no reason to switch, since we would be exchanging one unknown amount of money (A), whose expected value is infinite, for another unknown amount of money (B) with the same probability distribution and the same infinite expected value. However, if we do look into the first envelope, then for every value a we could observe we would want to switch, because E(B | A = a) > a for all a. As noted by David Chalmers (2002), this problem can be described as a failure of dominance reasoning.
Under dominance reasoning, the fact that we strictly prefer B to A for every possible observed value a should imply that we strictly prefer B to A without observing a; however, as already shown, that is not true, because E(A) = E(B) = ∞. To salvage dominance reasoning while allowing E(A) = E(B) = ∞, one would have to replace expected value as the decision criterion, thereby employing a more sophisticated argument from mathematical economics.
For example, we could assume the decision maker is an expected utility maximizer with initial wealth W whose utility function, u(w), is chosen to satisfy E[u(W + B) | A = a] < u(W + a) for at least some values of a (that is, holding onto A is strictly preferred to switching to B for some a). Although this is not true for every utility function, it would be true if u(w) had an upper bound as w increased toward infinity (a common assumption in mathematical economics and decision theory). Michael R. Powers (2015) provides necessary and sufficient conditions for the utility function to resolve the paradox, and notes that neither a bounded utility function nor a finite expected amount of money in the envelopes is required.
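To see how a bounded utility function can remove the incentive to switch, here is a sketch using one common bounded utility, u(w) = 1 − exp(−w/s), together with the conditional probabilities 2/5 and 3/5 from the Broome example above; the wealth W, the scale s, and the function names are illustrative assumptions, and this is not Powers' construction.

```python
# Bounded utility: switching is preferred for small amounts but not for large ones.
import math

W, s = 0.0, 100.0                  # initial wealth and utility scale (illustrative)

def u(w):
    return 1.0 - math.exp(-w / s)  # bounded above by 1

def prefers_switching(n):
    a = 2.0 ** n                   # amount seen in Envelope A (n >= 1, Broome prior)
    switch = 0.4 * u(W + 2 * a) + 0.6 * u(W + a / 2)  # probabilities 2/5 and 3/5
    return switch > u(W + a)

for n in range(1, 15):
    print(2 ** n, prefers_switching(n))  # True for small a, False once a is large enough
```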
Some writers would prefer to argue that in a real-life situation u(w) and E(X) are bounded simply because the amount of money in an envelope is bounded by the total amount of money in the world (M), implying u(w) ≤ u(W + M) and E(X) ≤ M. From this perspective, the second paradox is resolved because the postulated probability distribution for X (with E(X) = ∞) cannot arise in a real-life situation. Similar arguments are often used to resolve the St. Petersburg paradox.
Controversy among philosophers
As mentioned above, any distribution producing this variant of the paradox must have an infinite mean. So before the player opens an envelope the expected gain from switching is "∞ − ∞", which is not defined. In the words of David Chalmers (2002), this is "just another example of a familiar phenomenon, the strange behaviour of infinity". Chalmers suggests that decision theory generally breaks down when confronted with games having a diverging expectation, and compares it with the situation generated by the classical St. Petersburg paradox.
However, Clark and Shackel argue that blaming it all on "the strange behaviour of infinity" does not resolve the paradox at all, neither in the single case nor in the averaged case. They provide a simple example of a pair of random variables both having infinite mean but where it is clearly sensible to prefer one to the other, both conditionally and on average. They argue that decision theory should be extended so as to allow infinite expectation values in some situations.
Smullyan's non-probabilistic variant
The logician Raymond Smullyan questioned if the paradox has anything to do with probabilities at all. He did this by expressing the problem in a way that does not involve probabilities. The following plainly logical arguments lead to conflicting conclusions:
- Let the amount in the envelope chosen by the player be A. By swapping, the player may gain A or lose A/2. So the potential gain is strictly greater than the potential loss.
- Let the amounts in the envelopes be X and 2X. Now by swapping, the player may gain X or lose X. So the potential gain is equal to the potential loss.
Proposed resolutions
A number of solutions have been put forward. Careful analyses have been made by some logicians. Though solutions differ, they all pinpoint semantic issues concerned with counterfactual reasoning. We want to compare the amount that we would gain by switching if we would gain by switching, with the amount we would lose by switching if we would indeed lose by switching. However, we cannot both gain and lose by switching at the same time. We are asked to compare two incompatible situations. Only one of them can factually occur; the other is a counterfactual situation, somehow imaginary. To compare them at all, we must somehow "align" the two situations, providing some definite points in common.
James Chase (2002) argues that the second argument is correct because it corresponds to the way of aligning the two situations (one in which we gain, the other in which we lose) that is indicated by the problem description. Bernard Katz and Doris Olin (2007) also argue this point of view. In the second argument, we consider the amounts of money in the two envelopes as being fixed; what varies is which one is first given to the player. Because that was an arbitrary and physical choice, the counterfactual world in which the player, counterfactually, got the other envelope than the one he was actually (factually) given is a highly meaningful counterfactual world, and hence the comparison between gains and losses in the two worlds is meaningful. This comparison is uniquely indicated by the problem description, in which two amounts of money are put in the two envelopes first, and only after that is one chosen arbitrarily and given to the player. In the first argument, however, we consider the amount of money in the envelope first given to the player as fixed and consider the situations where the second envelope contains either half or twice that amount. This would only be a reasonable counterfactual world if in reality the envelopes had been filled as follows: first, some amount of money is placed in the specific envelope that will be given to the player; and secondly, by some arbitrary process, the other envelope is filled (arbitrarily or randomly) either with double or with half of that amount of money.
Byeong-Uk Yi (2009), on the other hand, argues that comparing the amount you would gain if you would gain by switching with the amount you would lose if you would lose by switching is a meaningless exercise from the outset. According to his analysis, all three implications (switch, indifferent, do not switch) are incorrect. He analyses Smullyan's arguments in detail, showing that intermediate steps are being taken, and pinpointing exactly where an incorrect inference is made according to his formalization of counterfactual inference. An important difference from Chase's analysis is that he does not take account of the part of the story where we are told that the envelope called Envelope A is decided completely at random. Thus, Chase puts probability back into the problem description in order to conclude that arguments 1 and 3 are incorrect and argument 2 is correct, while Yi keeps the "two envelope problem without probability" completely free of probability and comes to the conclusion that there are no reasons to prefer any action. This corresponds to the view of Albers et al. that, without a probability ingredient, there is no way to argue that one action is better than another.
In a 2012 paper on the subject, Bliss argues that the source of the paradox is that when one mistakenly believes in the possibility of a larger payoff that does not, in actuality, exist, one is mistaken by a larger margin than when one believes in the possibility of a smaller payoff that does not actually exist. If, for example, the envelopes contained $5.00 and $10.00 respectively, a player who opened the $10.00 envelope would expect the possibility of a $20.00 payout that simply does not exist. Were that player to open the $5.00 envelope instead, he would believe in the possibility of a $2.50 payout, which constitutes a smaller deviation from the true value; this results in the paradoxical discrepancy.
Albers, Kooi, and Schaafsma (2005) consider that without adding probability (or other) ingredients to the problem, Smullyan's arguments do not give any reason to swap or not to swap, in any case. Thus, there is no paradox. This dismissive attitude is common among writers from probability and economics: Smullyan's paradox arises precisely because he takes no account whatever of probability or utility.
Extensions to the problem
Since the two envelopes problem became popular, many authors have studied the problem in depth in the situation in which the player has a prior probability distribution of the values in the two envelopes, and does look in Envelope A. One of the most recent such publications is by McDonnell and Douglas (2009), who also consider some further generalizations.
If a priori we know that the amount in the smaller envelope is a whole number of some currency units, then the problem is determined, as far as probability theory is concerned, by the probability mass function p(x) describing our prior beliefs that the smaller amount is any number x = 1, 2, ...; the summation over all values of x being equal to 1. It follows that, given the amount a in Envelope A, the amount in Envelope B is certainly 2a if a is an odd number. However, if a is even, then the amount in Envelope B is 2a with probability p(a)/(p(a/2) + p(a)), and a/2 with probability p(a/2)/(p(a/2) + p(a)). If one would like to switch envelopes whenever the expectation value of what is in the other is larger than what we have in ours, then a simple calculation shows that one should switch if p(a/2) < 2p(a), and keep to Envelope A if p(a/2) > 2p(a).
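A small sketch can make the discrete criterion concrete; the prior below (the smaller amount is 1, 2, 4 or 8, equally likely) and the function name are purely hypothetical choices.

```python
# E(B | A = a) for a discrete prior p on the smaller whole-number amount.
from fractions import Fraction

def expected_other(a, p):
    if a % 2 == 1 or p.get(a // 2, 0) == 0:
        return 2 * a                               # a must then be the smaller amount
    w_small, w_large = p.get(a, 0), p[a // 2]      # cases X = a and X = a/2
    return (w_small * 2 * a + w_large * Fraction(a, 2)) / (w_small + w_large)

p = {x: Fraction(1, 4) for x in (1, 2, 4, 8)}      # hypothetical prior
for a in (1, 2, 4, 8, 16):
    print(a, expected_other(a, p), expected_other(a, p) > a)  # switch exactly when p(a/2) < 2 p(a)
```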
If, on the other hand, the smaller amount of money can vary continuously, and we represent our prior beliefs about it with a probability density f(x) (a function that integrates to one when we integrate over x running from zero to infinity), then, given the amount a in Envelope A, the other envelope contains 2a with probability 2f(a)/(2f(a) + f(a/2)), and a/2 with probability f(a/2)/(2f(a) + f(a/2)). If again we decide to switch or not according to the expectation value of what is in the other envelope, the criterion for switching now becomes f(a/2) < 4f(a).
The difference between the results for discrete and continuous variables may surprise many readers. Speaking intuitively, it is explained as follows. Let h be a small quantity and imagine that the amount of money we see when we look in Envelope A is rounded off in such a way that differences smaller than h are not noticeable, even though actually it varies continuously. The probability that the smaller amount of money is in an interval of length h around a, and Envelope A contains the smaller amount, is approximately f(a) h / 2. The probability that the larger amount of money is in an interval of length h around a corresponds to the smaller amount lying in an interval of length h/2 around a/2. Hence the probability that the larger amount of money is in a small interval of length h around a and Envelope A contains the larger amount is approximately f(a/2) h / 4. Thus, given that Envelope A contains an amount about equal to a, the probability that it is the smaller of the two is roughly 2f(a)/(2f(a) + f(a/2)).
If the player only wants to end up with the larger amount of money, and does not care about expected amounts, then in the discrete case he should switch if a is an odd number, or if a is even and p(a/2) < p(a). In the continuous case he should switch if f(a/2) < 2f(a).
Some authors prefer to think of probability in a frequentist sense. If the player knows the probability distribution used by the organizer to determine the smaller of the two values, then the analysis would proceed just as in the case where p or f represents subjective prior beliefs. However, what if we take a frequentist point of view, but the player does not know what probability distribution is used by the organiser to fix the amounts of money in any one instance? Thinking of the arranger of the game and the player as two parties in a two-person game puts the problem into the range of game theory. The arranger's strategy consists of a choice of a probability distribution of x, the smaller of the two amounts. Allowing the player also to use randomness in making his decision, his strategy is determined by choosing a probability q(a) of switching for each possible amount of money a he might see in Envelope A. So far in this section we have only discussed fixed strategies, that is, strategies for which q only takes the values 0 and 1, and we saw that a fixed strategy is fine for the player if he knows the strategy of the organizer. In the next section we will see that randomized strategies can be useful when the organizer's strategy is not known.
Randomized solutions
Suppose as in the previous section that the player is allowed to look in the first envelope before deciding whether to switch or to stay. We'll think of the contents of the two envelopes as being two positive numbers, not necessarily two amounts of money. The player is allowed either to keep the number in Envelope A, or to switch and take the number in Envelope B. We'll drop the assumption that one number is exactly twice the other, we'll just suppose that they are different and positive. On the other hand, instead of trying to maximize expectation values, we'll just try to maximize the chance that we end up with the larger number.
In this section we ask the question, is it possible for the player to make his choice in such a way that he goes home with the larger number with probability strictly greater than half, however the organizer has filled the two envelopes?
We are given no information at all about the two numbers in the two envelopes, except that they are different, and strictly greater than zero. The numbers were written down on slips of paper by the organiser, put into the two envelopes. The envelopes were then shuffled, the player picks one, calls it Envelope A, and opens it.
We are not told any joint probability distribution of the two numbers. We are not asking for a subjectivist solution. We must think of the two numbers in the envelopes as chosen by the arranger of the game according to some possibly random procedure, completely unknown to us, and fixed. Think of each envelope as simply containing a positive number and such that the two numbers are not the same. The job of the player is to end up with the envelope with the larger number. This variant of the problem, as well as its solution, is attributed by McDonnell and Abbott, and by earlier authors, to information theorist Thomas M. Cover.
Counter-intuitive though it might seem, there is a way that the player can decide whether to switch or to stay so that he has a larger chance than 1/2 of finishing with the bigger number, however the two numbers are chosen by the arranger of the game. However, it is only possible with a so-called randomized algorithm: the player must be able to generate his own random numbers. Suppose he is able to produce a random number, let's call it Z, such that the probability that Z is larger than any particular quantity z is exp(-z). Note that exp(-z) starts off equal to 1 at z=0 and decreases strictly and continuously as z increases, tending to zero as z tends to infinity. So the chance is 0 that Z is exactly equal to any particular number, and there is a positive probability that Z lies between any two particular different numbers. The player compares his Z with the number in Envelope A. If Z is smaller he keeps the envelope. If Z is larger he switches to the other envelope.
Think of the two numbers in the envelopes as fixed (though of course unknown to the player). Think of the player's random Z as a probe with which he decides whether the number in Envelope A is small or large. If it is small compared to Z he switches, if it is large compared to Z he stays.
If both numbers are smaller than the player's Z, his strategy does not help him. He ends up with Envelope B, which is equally likely to be the larger or the smaller of the two. If both numbers are larger than Z his strategy does not help him either; he ends up with Envelope A, which again is equally likely to be the larger or the smaller of the two. However, if Z happens to be in between the two numbers, then his strategy leads him correctly to keep Envelope A if its contents are larger than those of B, but to switch to Envelope B if A has smaller contents than B. Altogether, this means that he ends up with the envelope with the larger number with probability strictly larger than 1/2. To be precise, the probability that he ends with the "winning envelope" is 1/2 + P(Z falls between the two numbers)/2.
In practice, the number Z we have described could be determined to the necessary degree of accuracy as follows. Toss a fair coin many times, and convert the sequence of heads and tails into the binary representation of a number U between 0 and 1: for instance, HTHHTH... becomes the binary representation of u=0.101101.. . In this way, we generate a random number U, uniformly distributed between 0 and 1. Then define Z = - ln (U) where "ln" stands for natural logarithm, i.e., logarithm to base e. Note that we just need to toss the coin long enough to verify whether Z is smaller or larger than the number a in the first envelope--we do not need to go on for ever. We only need to toss the coin a finite (though random) number of times: at some point we can be sure that the outcomes of further coin tosses would not change the outcome.
The particular probability law (the so-called standard exponential distribution) used to generate the random number Z in this problem is not crucial. Any probability distribution over the positive real numbers that assigns positive probability to any interval of positive length does the job.
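A simulation sketch of this randomized strategy (the two numbers 1.7 and 3.2 are arbitrary illustrative choices by the "organiser") shows the winning frequency settling strictly above one half.

```python
# Randomized switching with an exponential probe Z: P(Z > z) = exp(-z).
import random

def play_once(numbers, rng):
    a, b = rng.sample(numbers, 2)   # one number lands in Envelope A, the other in Envelope B
    z = rng.expovariate(1.0)        # the player's random probe
    chosen = b if z > a else a      # switch exactly when the probe exceeds the number seen
    return chosen == max(numbers)

rng = random.Random(0)
numbers = [1.7, 3.2]                # any two distinct positive numbers
n = 200_000
wins = sum(play_once(numbers, rng) for _ in range(n))
print(wins / n)                     # about 0.57 here; strictly above 1/2 for any such pair
```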
This problem can be considered from the point of view of game theory, where we make the game a two-person zero-sum game with outcomes win or lose, depending on whether the player ends up with the higher or lower amount of money. The organiser chooses the joint distribution of the amounts of money in both envelopes, and the player chooses the distribution of Z. The game does not have a "solution" (or saddle point) in the sense of game theory. This is an infinite game and von Neumann's minimax theorem does not apply.
History of the paradox
The envelope paradox dates back at least to 1953, when Belgian mathematician Maurice Kraitchik proposed a puzzle in his book Recreational Mathematics concerning two equally rich men who meet and compare their beautiful neckties, presents from their wives, wondering which tie actually cost more money. He also introduces a variant in which the two men compare the contents of their purses. He assumes that each purse is equally likely to contain 1 up to some large number x of pennies, the total number of pennies minted to date. The men do not look in their purses but each reasons that they should switch. He does not explain what is the error in their reasoning. It is not clear whether the puzzle already appeared in an earlier 1942 edition of his book. It is also mentioned in a 1953 book on elementary mathematics and mathematical puzzles by the mathematician John Edensor Littlewood, who credited it to the physicist Erwin Schrödinger; there it concerns a pack of cards, each card having two numbers written on it, the player gets to see a random side of a random card, and the question is whether one should turn over the card. Littlewood's pack of cards is infinitely large and his paradox is a paradox of improper prior distributions.
Martin Gardner popularized Kraitchik's puzzle in his 1982 book Aha! Gotcha, in the form of a wallet game: two people agree that whoever carries the smaller amount of money wins the contents of the other's wallet, and each reasons that he stands to win more than he can lose, so the game appears to favour both players at once.
Gardner confessed that, though, like Kraitchik, he could give a sound analysis leading to the right answer (there is no point in switching), he could not clearly put his finger on what was wrong with the reasoning for switching, and Kraitchik did not give any help in this direction either. F. Thomas Bruss, on the contrary, saw no justification to speak of a paradox (although he did not question the interest of other aspects of the problem), arguing that the crucial expectation argument displaying a paradox is wrong. In the A, 2A, A/2 version, the expectation argument would require the three quantities to be measurable random variables on the same probability space, which is not compatible with there being only two outcomes. In the wallet version, the error was to assume that A, independently of its value, is equally likely to be the smaller of A and B, only because both players are assumed to be equally rich.
In 1988 and 1989, Barry Nalebuff presented two different two-envelope problems, each with one envelope containing twice what is in the other, and each with computation of the expectation value 5A/4. The first paper just presents the two problems. The second discusses many solutions to both of them. The second of his two problems is nowadays the more common, and is presented in this article. According to this version, the two envelopes are filled first, then one is chosen at random and called Envelope A. Martin Gardner independently mentioned this same version in his 1989 book Penrose Tiles to Trapdoor Ciphers and the Return of Dr Matrix. Barry Nalebuff's asymmetric variant, often known as the Ali Baba problem, has one envelope filled first, called Envelope A, and given to Ali. Then a fair coin is tossed to decide whether Envelope B should contain half or twice that amount, and only then given to Baba.
See also
- Bayesian probability
- Bertrand's paradox
- Boy or Girl paradox
- Decision theory
- Monty Hall problem
- Necktie paradox
- Newcomb's paradox
- Siegel's paradox
- Sleeping Beauty problem
- St. Petersburg paradox