Last week, we gave a short history of decision theory. Decision theory may seem intuitive to us, but only because of the depth of research done by Pascal and other Enlightenment thinkers. Their work points toward two kinds of decision theory. We discussed the first kind – a more mathematical and empirical form – with the dice roll example last week. Pascal’s Wager, a decision grounded in belief and expected gain rather than hard numbers, is another kind entirely.
Normative vs. Descriptive Decision Theory
The two branches of decision theory typify the enduring tension between the rational and the irrational. Normative decision theory models the ideal decision for a given situation. In normative theory, an actor is assumed to be fully rational. A normative decision always seeks the outcome with the highest expected value, and a fully rational actor is assumed to be capable of identifying that outcome with perfect accuracy. This is an ideal not often found in the real world. Practical application of normative theory is thus aimed more at creating methodologies and software.
By contrast, descriptive decision theory describes what actors will actually decide in a situation, not what they should decide. Descriptive decision theory takes into consideration outside factors that push an actor’s decisions toward less optimal, less rational ends. Pascal’s Wager, for example, falls under descriptive theory. People do not often choose to believe in God by consulting a list of weighted pros and cons. The decision whether to believe in God is not so simple, nor is it fully rational in a scientific sense. Instead, people make the decision based on their own evaluation of uncertainty, risk, and expected gain. This is closely aligned with the older version of probability, in which the probable also encompassed the moral and irrational. In other words, the expected utility of the decision matters more than hard numbers.
What is the “utility” of an outcome, though? How useful is God? A more concrete example can illustrate this point better than Pascal’s Wager.
The St. Petersburg Paradox: A Demonstration of Descriptive Decision Theory
Let’s imagine another game. In this game, all you do is flip a coin. If the coin lands heads, a dollar is added to the pot. Flip again and land heads again, and the pot doubles to 2 dollars. On a third heads flip, the pot doubles again to 4. And so on. The moment you flip tails, the game is over, but you keep whatever is in the pot. How much would you pay to enter this game?
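The rules above are easy to try for yourself. Here is a quick Monte Carlo sketch of the game (the function name and trial count are my own choices, not anything from the text), which shows that in practice most runs end with nothing or a dollar or two:

```python
import random

def play(rng):
    """Play one game under the rules above: the first heads puts $1
    in the pot, each further heads doubles it, and the first tails
    ends the game. Returns the final pot in dollars."""
    pot = 0
    while rng.random() < 0.5:  # treat this branch as "heads"
        pot = 1 if pot == 0 else pot * 2
    return pot

rng = random.Random(42)  # fixed seed so the run is repeatable
payouts = [play(rng) for _ in range(100_000)]

print("largest payout:", max(payouts))
print("average payout:", sum(payouts) / len(payouts))
print("games that paid nothing:", payouts.count(0))
```

Run it a few times with different seeds: roughly half of all games pay out zero, and long heads streaks are vanishingly rare, which is exactly the intuition the paradox trades on.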
Mathematically, you should be willing to pay any price: the game’s expected value is infinite, since a coin could theoretically land heads any number of times in a row. The cost-benefit analysis should thus be obvious, right? You’ll be rich! For a one-time fee, you stand to make an untold amount of profit! Does that seem reasonable, though? Does it even seem possible? Can we actually expect the coin to land heads millions of times in a row? Should we even expect it to land heads ten times in a row?
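The infinite expected value falls out of simple arithmetic. A game that ends after exactly n heads pays 2^(n-1) dollars and occurs with probability (1/2)^(n+1), so every possible ending contributes the same flat 25 cents to the expected value, and there are infinitely many of them. A minimal check (contribution is a name I’ve chosen for illustration):

```python
def contribution(n):
    """Expected value contributed by the game ending after exactly
    n heads followed by a tails: payout 2**(n-1) dollars times
    probability (1/2)**(n+1)."""
    return 2 ** (n - 1) * 0.5 ** (n + 1)

print([contribution(n) for n in range(1, 6)])
# → [0.25, 0.25, 0.25, 0.25, 0.25]
```

Summing a quarter-dollar term forever gives an unbounded total, which is why a purely normative analysis says no entry fee is too high.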
This thought experiment is known as the St. Petersburg Paradox. In theory, it’s a game you should throw your money at. In practice, it would be a challenge to find a rational human being willing to pay more than maybe 50 dollars to play. Expected utility, an essentially subjective judgment, dictates whether someone plays this game. A normative assessment tells us to empty our life savings into it. A descriptive assessment reveals, however, that nobody would play for more than a handful of dollars, because the odds of winning more than a handful shrink by half with every flip. This is the fundamental difference between normative and descriptive decision theory. Thanks for playing.