Steve Keen on the “Ergodic Fallacy” in Economics and Optimal Bet Sizing Using the Kelly Criterion
Steve Keen is an iconoclast economist who has fun challenging dogmas in various schools of economic thought. I recently interviewed him about a post he had made on trade deficits, but during our conversation he alluded to a simple coin tossing example that has relevance for finance. I thought it would be helpful for the infineo audience to spell out the argument. First I’ll explain the game and how to analyze it properly, and then I’ll draw the connection to what Keen called the “ergodic fallacy” that is so prevalent in economics and finance.
Keen’s Coin Toss Game
Keen asks us to imagine a simple gamble where you put up $100, then toss a coin and receive the following payouts: If the coin comes up heads, you win $50 (i.e. you end up with $150). If the coin comes up tails, you lose $40 (i.e. you end up with $60). How should you analyze this game?
In the interview, Keen says the standard approach—as it would be taught in an introductory finance class—is to calculate the “expected value” of the wager. There’s a 50% chance you win $50 and a 50% chance you lose $40, so there’s a net $5 expected gain. Or, you could say there’s a 50% chance you end up with $150 (which is thus worth $75 to you) and a 50% chance of ending up with $60 (which thus should be valued at $30), and so the probability-weighted outcome is $105. Again, a 5% gain on your original $100 wager. So if it’s correct to assume that investors should consider the mathematical expectation of financial outcomes, this is a good bet that you should play whenever you get the chance.
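For readers who want to check the textbook arithmetic, here is a quick Python sketch of the expected-value calculation (the variable names are mine, just for illustration):

```python
# Expected value of Keen's coin toss, computed the standard textbook way:
# a probability-weighted average of the possible outcomes.
p_heads, p_tails = 0.5, 0.5
win, loss = 50, -40          # dollar change on a $100 stake

expected_gain = p_heads * win + p_tails * loss
print(expected_gain)         # 5.0 -> a $5 expected gain per play

# Equivalently, weight the terminal wealth levels:
expected_wealth = p_heads * 150 + p_tails * 60
print(expected_wealth)       # 105.0 -> 5% above the $100 stake
```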
However, something is odd. If you think it makes sense to play the game once with your original $100, then—regardless of what happens on that particular toss—you would think it should make sense to play the game a second time, with either your $150 or your $60. Sure, you win some, you lose some, but since the expected value of this game is positive for the player, it must make sense in the long run to keep playing it, right?
Ironically, that’s not right. Even though a casino would never host this game because the House is at a disadvantage (i.e. the player has the edge), nonetheless it would be reckless for a single player to continually play this game, if his strategy were to “let it ride” with each subsequent toss.
To see this, consider that in a typical sequence, where the player has 1 win and 1 loss, his return is +50% and -40%, which in multiplicative terms is (1.5)x(0.6) = 0.9. In other words, starting with $100, the player will end up with $90 if he plays the game twice in a row, and either has a H-T or a T-H sequence. (Specifically, he might go $100->$150->$90, or he might go $100->$60->$90.)
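The multiplicative logic is easy to verify in a few lines of Python; note that the order of the two tosses makes no difference:

```python
# "Let it ride": wealth after one win and one loss is multiplicative,
# so the H-T and T-H sequences end in the same place.
start = 100.0
win_factor, loss_factor = 1.5, 0.6   # +50% on heads, -40% on tails

head_then_tail = start * win_factor * loss_factor   # 100 -> 150 -> 90
tail_then_head = start * loss_factor * win_factor   # 100 -> 60 -> 90
print(head_then_tail, tail_then_head)               # both 90.0 (up to float rounding)
```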
I won’t dwell on the more sophisticated mathematical treatment here, but suffice it to say, the difference between the two approaches relates to the arithmetic mean versus the geometric mean. The standard assumption in introductory economics and finance courses is that we deal with uncertainty by taking an average of the possible outcomes, weighted by their probability of occurrence. As this simple example shows, we need to be careful about which “average” we have in mind.
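The contrast between the two “averages” of the per-toss growth factors (1.5 and 0.6) can be put in a few lines (a sketch, not a formal treatment):

```python
# Arithmetic vs geometric mean of the growth factors 1.5 and 0.6.
arith = (1.5 + 0.6) / 2          # 1.05 -> the "expected value" view
geom  = (1.5 * 0.6) ** 0.5       # ~0.9487 -> the compounding view
print(arith, geom)

# The geometric mean is below 1, so repeated "let it ride" play shrinks
# the bankroll even though the arithmetic mean exceeds 1.
```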
Before moving on, I don’t mean to suggest that Keen’s coin tossing example is some hidden wisdom that only he has uncovered. For example, Mark Spitznagel’s latest book (on which I was a consultant for some of the chapters) covers these concepts in exquisite detail. Rather, Keen’s point—which I endorse—is that even professionals in economics and finance still cling, by default, to approaches that don’t really make sense, particularly when they are teaching basic principles to students or the lay public.
The Kelly Criterion
Keen intended his coin-tossing example to illustrate what he called the “ergodic fallacy,” namely, where people assume that what is true at a point in time carries forward in a dynamic setting. In the coin toss example, if you took your $100 and spread it among 100 agents acting on your behalf, and had them simultaneously play the game with 100 separate coins, that would make sense. You would be likely to gain about 5% on your bankroll for every round of that operation. In other words, if you had about $105 after the first round finished, then you would distribute $1.05 to each of your 100 agents, have them “let it ride” on the 100 separate games simultaneously, and so on.
But if instead you are acting alone, and at any moment only have your individual bankroll, then it is far too reckless to “let it ride” with each flip. Yet having said that, this game is advantageous for the player. You do want to play it, but you must be conservative. You can only bet a fraction of your bankroll at any time.
Specifically, the Kelly criterion tells you how to size your bet, when you know the odds of winning and losing (and what the associated payoffs will be). In Keen’s coin toss example, it happens to be 25 percent. In the table below, I show what happens if we assume a continual pattern of T-H with various betting strategies:
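Readers who want to check that figure can use one common textbook form of the Kelly fraction, for a bet that gains a fraction g of the stake on a win and loses a fraction l on a loss (the variable names are mine):

```python
# One common form of the Kelly formula:
#   f* = p/l - q/g
# where p is the win probability, q = 1 - p, g is the fractional gain
# on a win, and l is the fractional loss on a loss.
# In Keen's game: g = 0.5 (+50% on heads), l = 0.4 (-40% on tails).
p, q = 0.5, 0.5
g, l = 0.5, 0.4

f_star = p / l - q / g
print(f_star)    # 0.25 -> bet 25% of the bankroll each toss
```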
The first column is “Let It Ride,” in which the person loses 10% every two tosses. By Round 50, his original $100 has fallen to $7.18.
The second column shows the Kelly Criterion bet size (which is 25 percent of the bankroll in this example), the third column shows a more conservative ½ of the Kelly Criterion (i.e. 12.5 percent in this example), and the fourth column shows a more aggressive doubling of the Kelly Criterion (i.e. 50 percent in this example).
The person playing double the Kelly amount simply alternates between $100 and $80. Originally, he bets $50 (i.e. half of his bankroll). We assume the first toss is Tails. So he loses 40% of his wager, i.e. $20. Thus he goes into Round 2 with $80, he wagers half of it (i.e. $40), and we assume he wins, thus gaining 50% of his wager, i.e. $20. This puts him right back at $100 going into Round 3, when the cycle repeats.
When contrasting the Kelly player versus the ½ Kelly player, things are more interesting. After a win, the Kelly player is ahead, because he has been betting more aggressively. But immediately after a loss, at least for a while, the more conservative player is ahead. However, in Round 37 (highlighted in blue), even after a loss, the Kelly player is now ahead. The longer the game goes on, the bigger the advantage of the Kelly player. (I won’t spell it out here, but interested readers can follow my earlier link to the Kelly Criterion Wikipedia entry, to see that it is the strategy that outperforms all rivals, in a very precise mathematical sense. That’s why so many people find it compelling.)
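The crossover described above can be checked with a short simulation of the alternating Tails-Heads sequence (a sketch under the same assumptions as the table; `bankroll_after` is my own helper name):

```python
# Bankroll after n rounds of the fixed T-H pattern, betting a fraction f
# of the bankroll each toss: an odd round is Tails (lose 40% of the
# stake), an even round is Heads (gain 50% of the stake).
def bankroll_after(rounds, f, start=100.0):
    wealth = start
    for r in range(1, rounds + 1):
        wealth *= (1 - 0.4 * f) if r % 2 == 1 else (1 + 0.5 * f)
    return wealth

print(round(bankroll_after(50, 1.0), 2))                     # 7.18 ("Let It Ride")
print(bankroll_after(35, 0.25) < bankroll_after(35, 0.125))  # True: half-Kelly still ahead
print(bankroll_after(37, 0.25) > bankroll_after(37, 0.125))  # True: Kelly ahead by Round 37
```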
Finally, just to showcase the “ergodic fallacy,” in the last column I show what happens if a player could split his bankroll among 100 agents each round, where we assume 50 of them get a Head and 50 get a Tail. This means 5% growth in the bankroll period to period, and this approach blows even the Kelly player out of the water.
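Here is a sketch of that ensemble version, under the stylized assumption that exactly 50 agents toss Heads and 50 toss Tails every round:

```python
# Ensemble ("ergodic") version: split the bankroll evenly across 100
# agents each round, and assume exactly 50 heads and 50 tails. The
# pooled bankroll then grows by the arithmetic-mean factor of 1.05.
wealth = 100.0
for _ in range(50):
    per_agent = wealth / 100
    wealth = 50 * per_agent * 1.5 + 50 * per_agent * 0.6  # 50 H, 50 T
print(wealth)   # equals 100 * 1.05**50, roughly $1,146.74
```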
Conclusion
To be clear, in the above demonstration, I simplistically assumed a player would face a constantly repeating string of Tails-Heads. In reality, one would want to run Monte Carlo simulations in which players might experience long strings of heads and tails, etc. Even so, it would still be the case that the Kelly Criterion was the most robust strategy, under any reasonable definition of success.
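A minimal Monte Carlo sketch along those lines, using Python’s standard `random` module with a fixed seed (the toss count and the choice of strategies to compare are my own):

```python
import random

# Compare "let it ride" (f = 1.0) with the Kelly fraction (f = 0.25)
# over randomly generated tosses rather than the fixed T-H pattern.
def simulate(f, tosses=1000, start=100.0, seed=42):
    rng = random.Random(seed)       # same seed -> same toss sequence for each f
    wealth = start
    for _ in range(tosses):
        stake = f * wealth
        if rng.random() < 0.5:
            wealth += 0.5 * stake   # heads: win 50% of the stake
        else:
            wealth -= 0.4 * stake   # tails: lose 40% of the stake
    return wealth

# Over a long random toss sequence, letting it ride is typically ruinous
# while the Kelly bettor compounds his edge.
print(simulate(1.0) < simulate(0.25))   # True
```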
The overall lesson here is not simply to give advice for people who find some sucker offering a gamble that is advantageous to the player. Rather, the point is to use Keen’s simple thought experiment to illustrate the difference between the “average” outcome as a cross-section at a point in time, versus the “longitudinal” outcome going forward in time. This distinction is crucial in both economics and finance, so it’s worth highlighting.
Twitter: @infineogroup, @BobMurphyEcon
Linkedin: infineo group, Robert Murphy
Youtube: infineo group
To learn more about infineo, please visit the infineo website