How Fair is Monopoly? by Ian Stewart

Reproduced with permission of the author


Everyone has played Monopoly. But few, I'd imagine, have ever thought about the math involved. In fact, the probability of winning at Monopoly can be described by interesting constructions known as Markov chains. In the early 1900s the Russian mathematician Andrey Andreyevich Markov invented a general theory of probability. I will ignore much of his work. And I won't review all of Monopoly's rules, but I will convince you that the game is fair.

First, we must recall how to play it. Players take turns throwing a pair of dice. The number of dots on the dice determines how many squares around the board a player may move. A player who throws a double - say, two 1's (snake eyes) - throws again. All players start from the square labeled GO.

Some rolls, such as 7, naturally happen more often than others. There are six ways to roll a 7 (1 + 6, 2 + 5, 3 + 4, 4 + 3, 5 + 2, 6 + 1) out of 36 possible sums of dots on the dice. So the probability of a 7 is 6/36, or 1/6. Then come 6 and 8, each having a probability of 5/36; then 5 and 9, having a probability of 4/36, or 1/9. Next, 4 and 10 have a probability of 1/12; 3 and 11 have a probability of 1/18; and finally 2 and 12 have a probability of 1/36.

From these values we know that, over the course of many games, the first player is most likely to land on the seventh square, a CHANCE square. If he does not roll a 7, he will probably land on Oriental Avenue or Vermont Avenue, to either side of CHANCE. Thus, the first player has an excellent chance of securing one of these properties. If he does buy one, it lessens the opportunity for the other players to make a purchase on their first throw.
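To make the counting concrete, here is a minimal Python sketch (my own illustration, not part of the original article) that tallies the 36 equally likely outcomes of two dice and recovers the probabilities above:

```python
from collections import Counter

# Count the ways each sum from 2 to 12 can occur with two six-sided dice.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

for total in range(2, 13):
    print(f"P({total}) = {counts[total]}/36")
```

Running it prints 6/36 for a total of 7, 5/36 for 6 and 8, and so on down to 1/36 for 2 and 12, matching the list given earlier.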

This fact is no doubt one reason why the game's designers put cheap properties near the start. The expensive but lucrative Park Place and Boardwalk are several turns around the board, by which time, presumably, the probabilities have evened out. But have they? To tackle that question, I shall introduce another simplification. Instead of considering a throw of both dice, let's imagine that they are thrown one at a time. Each player is allowed to make two moves: a "ghost" move, in which he ignores where he lands, and a real move. Similarly, we will adopt a mathematician's view of the game board.

For convenience, number the squares from zero to 39. Square 40 "wraps around" to square zero, GO, and we can think of the numbers as being counted modulo 40 - meaning that anything larger than 39 can be replaced by what remains when it is divided by 40.

Now imagine a single player making repeated throws of a single die, moving accordingly. What is the probability of landing on a given square after a given number of throws? We would hope that when the number of throws becomes large, this probability nears 1/40 for any of the 40 squares. In other words, they should all become equally likely.

The way to find these probabilities is to see how their distribution "flows" over time. Each distribution can be represented by a sequence of 40 numbers, giving the probability of landing on each square individually. At the start of the game, the player is on square zero (GO) with probability 1 (that is, with certainty). So the probability distribution is 1 followed by 39 zeros. After a single ghost toss, the distribution becomes 0, 1/6, 1/6, 1/6, 1/6, 1/6, 1/6, 0, ..., 0 - that is, the probability of landing on each of the first six squares is 1/6, and the player cannot reach any others. Notice that the total probability of 1 - originally concentrated on square zero - has been split into six equal parts and distributed to the squares that are one to six units farther along.

This procedure is general. After each toss of the die, the probability on a square is divided into six equal parts, which flow clockwise onto each of the next six squares. So on the next throw, the 1/6 on square one is redistributed as follows: 0, 0, 1/36, 1/36, 1/36, 1/36, 1/36, 1/36, 0, ..., 0. The 1/6s on squares two through six are similarly redistributed but shifted along one step each time. Finally, we add up the probabilities that have landed on each particular square. For example, square six acquires 1/36 from each of the first five sequences, but 0 from the last one, so the total is 5/36. The final result is 0, 0, 1/36, 2/36, 3/36, 4/36, 5/36, 6/36, 5/36, 4/36, 3/36, 2/36, 1/36, 0, ..., 0. Notice that this distribution matches our earlier expectations for tossing two dice.

But now we can continue. On the third (ghost) throw, we multiply every term in the new sequence by 1/6, then shift it up one, two, three, four, five and six terms. Next, we add the numbers on each square. It's easy to write a computer program to calculate these probability distributions one by one. The results are represented in the accompanying illustration, starting with the "triangular" distribution obtained on the second throw. On each subsequent throw, the probability graph moves one step forward in the figure. You can see that the probability peak moves several squares to the right at each step. (In fact, on average, it moves 3.5 squares, the mean value of the numbers 1, 2, 3, 4, 5, 6.) If you continue the computer simulation, you find that the triangular shape eventually flattens out, and all the values are pretty much the same.

But why does the simulation follow this pattern? For an explanation, we need Markov's theory, which provides a systematic method to track the probability flow. It begins by writing down the so-called transition matrix for this process. The matrix, call it M, is a square table having 40 rows and 40 columns, each numbered zero through 39. The entry in row r and column c of the table is the probability of moving, in one step, from square r to square c. The value is 1/6 if c = r + 1, r + 2, ..., r + 6 (modulo 40), and 0 otherwise.
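Such a program really is easy to write. Here is one possible Python sketch (my illustration, not the author's code; the names SQUARES, DIE and throw are my own) that performs exactly this redistribution step and shows the distribution flattening toward 1/40:

```python
SQUARES = 40   # board squares, numbered 0 (GO) through 39
DIE = 6        # faces on a single die

def throw(dist):
    """One (ghost) throw: the probability on each square is split into
    six equal parts that flow on to the next six squares, modulo 40."""
    new = [0.0] * SQUARES
    for square, p in enumerate(dist):
        for step in range(1, DIE + 1):
            new[(square + step) % SQUARES] += p / DIE
    return new

# At the start the player sits on GO with probability 1.
dist = [1.0] + [0.0] * (SQUARES - 1)

dist = throw(dist)           # first throw: 1/6 on squares 1..6
dist = throw(dist)           # second throw: the "triangular" distribution
print(dist[2], dist[7])      # 1/36 and 6/36, as computed above

for _ in range(998):         # keep throwing...
    dist = throw(dist)
print(min(dist), max(dist))  # ...and every square approaches 1/40 = 0.025
```

After two throws the list reproduces the triangular distribution 0, 0, 1/36, 2/36, ..., and after many throws the smallest and largest entries are both essentially 0.025.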
Next comes a technical calculation carried out using M. The result shows that after many throws, the probability does indeed approach 1/40 for any given square. So, with a little help from Markov, we can prove that a game as complicated as Monopoly is fair, in the sense that - in the long run - no particular square is more or less likely to be landed on. Of course, the first player still has a small advantage, but this bonus is mitigated by the finiteness of his or her bank balance.
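That technical calculation can also be checked numerically. The distribution after n throws is a row of the matrix power M^n, so raising M to a high power should make every row uniform. A short sketch of this check in Python with NumPy (again my own illustration, under the single-die simplification above):

```python
import numpy as np

SQUARES, DIE = 40, 6

# Transition matrix M: entry (r, c) is the probability of moving,
# in one throw of a single die, from square r to square c.
M = np.zeros((SQUARES, SQUARES))
for r in range(SQUARES):
    for step in range(1, DIE + 1):
        M[r, (r + step) % SQUARES] = 1 / DIE

# Row 0 of M^n is the distribution after n throws, starting from GO.
P = np.linalg.matrix_power(M, 1000)
print(P[0])   # every entry is (essentially) 1/40 = 0.025
```

Every entry of the printed row agrees with 1/40, which is exactly the long-run fairness the Markov argument establishes.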