Game theory is the study of the strategic interaction of rational agents. Decision theory is the study of the decision making of a rational agent. Clearly there is something similar about these two fields. What, exactly, is the relationship though? Do they study different aspects of rational action, or do they overlap? If they overlap, do they disagree on anything?

Kadane and Larkey explore the consequences of taking a more decision-theoretic approach to game theory. In particular, they look at how incorporating subjective probability into game theory might change things.

***The original paper can be found here.***

There are a few key ideas we have to get on the table before we can understand their ideas. The first is the difference between *objective* and *subjective* probability. The second is the idea of a *solution concept* from game theory. The final one is the difference between a *single-play* game and a *multiple-play* game.

The standard Bayesian account of probability is that probability is the subjective degree of belief, or credence, of a rational agent. That is, if an agent is rational, then her beliefs will obey the mathematical constraints of probability. For example, if a rational agent has a degree of belief 0.7 that it will rain tomorrow, then her degree of belief that it will not rain tomorrow must be 0.3, since 1 – 0.7 = 0.3.

As the agent learns about the world she updates her beliefs according to Bayes’ rule. Her beliefs can change over time, in a consistent, mathematically precise way. Furthermore, another agent, with different information, might have a different degree of belief in the same proposition. Thus, agents can disagree on the probability of different events, without either of them being wrong. Their probability assessments are their personal judgments. Under certain conditions, if the agents communicate then they will actually come to the same probability judgments. However, even in this case we would say the probability is subjective; it is the agents’ degrees of belief.
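To make the updating story concrete, here is a minimal sketch of Bayes' rule in code. The numbers (priors and likelihoods for rain given dark clouds) are illustrative assumptions, not from the paper; the point is that two agents with different priors update on the same evidence and still end up with different, personal, posteriors.

```python
# Minimal Bayesian updating: P(H | E) = P(E | H) P(H) / P(E).
# Two agents share likelihoods but start from different priors.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of the hypothesis given the evidence."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Illustrative numbers: P(clouds | rain) = 0.9, P(clouds | no rain) = 0.3.
alice = update(prior=0.7, likelihood_if_true=0.9, likelihood_if_false=0.3)
bob = update(prior=0.2, likelihood_if_true=0.9, likelihood_if_false=0.3)

print(round(alice, 3))  # 0.875
print(round(bob, 3))    # 0.429
```

Both posteriors move toward rain, but they remain different: the probabilities are the agents' own degrees of belief.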

This is in contrast to *objective* probability, which is not agent-dependent. One common way that people have tried to define objective probability is by the limiting relative frequency of a series of events. For example, informally, if you flip a coin an infinite number of times, and in the long run the proportion of heads tends to 1/2 (and some other conditions are met), then the probability of heads would be 1/2.
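A quick simulation conveys the frequentist picture: as the number of flips grows, the relative frequency of heads settles near 1/2. The fair coin and flip count here are illustrative assumptions, not anything from the paper.

```python
# Simulate a fair coin and watch the relative frequency of heads
# approach 1/2 as the number of flips grows.
import random

random.seed(0)  # fixed seed so the run is reproducible
flips = [random.random() < 0.5 for _ in range(100_000)]
freq = sum(flips) / len(flips)
print(freq)  # close to 0.5
```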

Kadane and Larkey point out that this is actually the kind of probability that was used by the founders of game theory, von Neumann and Morgenstern. There are certain conceptual and mathematical difficulties with defining probability in this way; however, we can put those aside for this post, since Kadane and Larkey are more interested in what happens when we replace the objective notion of probability in game theory with the subjective notion. This move from objective to subjective actually follows the historical narrative, in particular via Savage:

> Savage’s work began as a defense of the von Neumann-Morgenstern-Wald minimax approach; he concluded that by shifting to a subjective view of probability, upholding the principle of subjective utility integrated with respect to subjective probability. (p. 114)

This leads to the authors’ very natural question: what happens when we incorporate subjective probability into game theory?

Now I want to move to our second idea — that of a solution concept. The fundamental object of game theory is the game. The game characterizes the strategic context in which agents find themselves. For example, consider the following game:

This is the famous prisoner’s dilemma. The bottom-left corner of each cell represents the payoff to player 1, and the top-right corner the payoff to player 2. These payoff numbers represent *everything* a player cares about. Given a game like this one, we want to understand what the outcome of such an interaction will be. In other words, we are looking for the *solution* to the game.

In order to find the solution to the game we have to consider different properties that we think relevant. For example, one plausible criterion for a solution is that if one action is strictly better than another no matter what one’s opponent does, then that action is the one the agent will take. We call this property *dominance*, and a strategy that has this property is said to *dominate* every other strategy. For example, in the above game “Defect” dominates “Cooperate” for both players. Thus, if we apply this solution concept to the game, we get the prediction that both agents will defect, and that we will end up in the mutual-defection cell.
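Since the payoff figure does not reproduce here, the sketch below assumes the textbook prisoner's-dilemma payoffs (an assumption, not necessarily the post's exact numbers) and checks mechanically for a strictly dominant strategy.

```python
# Strict-dominance check for a two-player game, using the standard
# prisoner's-dilemma payoffs (assumed; the original figure is missing).

ACTIONS = ["Cooperate", "Defect"]

# PAYOFF[(row_action, col_action)] = (payoff to player 0, payoff to player 1)
PAYOFF = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"):    (0, 5),
    ("Defect",    "Cooperate"): (5, 0),
    ("Defect",    "Defect"):    (1, 1),
}

def dominant_strategy(player):
    """Return an action strictly better than every other, if one exists."""
    for a in ACTIONS:
        others = [b for b in ACTIONS if b != a]
        if all(PAYOFF[(a, o) if player == 0 else (o, a)][player]
               > PAYOFF[(b, o) if player == 0 else (o, b)][player]
               for b in others for o in ACTIONS):
            return a
    return None

print(dominant_strategy(0), dominant_strategy(1))  # Defect Defect
```

With these payoffs, Defect strictly dominates Cooperate for both players, which is exactly the solution-concept prediction described above.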

Dominance is not the only property we care about, and furthermore it is insufficient to solve every game. Another important one is the idea of *minimax*. Informally, if an agent follows the minimax procedure, she tries to maximize her minimum possible payoff in the game.
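The maximize-the-minimum idea can be sketched over pure strategies: for each of my actions, find the worst payoff my opponent could hold me to, then pick the action whose worst case is best. The small game below is illustrative, not from the paper.

```python
# Pure-strategy maximin: choose the row whose worst-case payoff is largest.

payoff = {                 # payoff[my_action][their_action] = my payoff
    "A": {"X": 2, "Y": -1},
    "B": {"X": 0, "Y": 1},
}

def maximin(payoff):
    """Action maximizing my minimum possible payoff."""
    return max(payoff, key=lambda a: min(payoff[a].values()))

print(maximin(payoff))  # B: its worst case (0) beats A's worst case (-1)
```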

When we select a set of properties that we think a solution to a game must meet, this is called a *solution concept*. It defines precisely what we mean when we say “we predict such-and-such will happen”: whenever we make such a prediction, we are appealing to a specific solution concept.

The final idea I want to discuss before getting back to some of the core points of the paper is the difference between a *single-play* game and a *multiple-play* game.

It is actually very straightforward. In a single-play game, agents play the game only once. For example, if two agents played the above prisoner’s dilemma just once and then went home, this would be a single-play game. However, if they played it, say, 7 times in a row, where they learn what each player did in the previous round, then this is a multiple-play game.

Now that all of these pieces are on the table we can understand a few of the main points of this article. The driving force of the paper is that the authors

> do not understand the search for solution concepts that do not depend on the beliefs of each player about the others’ likely actions and yet are so compelling that they will become the obvious standard of play for all those who encounter them.

That is, they think that too little emphasis has been put on the role that the subjective degrees of belief of the agents playing the game have played in game theory.

They highlight this by discussing the more decision-theoretic approach to decision making. In this framework a rational agent, equipped with a utility function (capturing what she cares about) and a probability function (capturing her beliefs), takes the action that maximizes her expected utility. This is our standard account of rational decision making.
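The expected-utility recipe in miniature: given a utility for each (action, state) pair and a subjective probability over states, take the action with the highest expected utility. The umbrella example and its numbers are illustrative assumptions.

```python
# Expected-utility maximization over a small decision problem.

utility = {                          # utility[action][state]
    "take umbrella": {"rain": 1.0, "dry": 0.4},
    "leave it":      {"rain": 0.0, "dry": 1.0},
}
belief = {"rain": 0.7, "dry": 0.3}   # the agent's subjective probabilities

def best_action(utility, belief):
    """Action with the highest expected utility under the agent's beliefs."""
    def eu(action):
        return sum(belief[s] * u for s, u in utility[action].items())
    return max(utility, key=eu)

print(best_action(utility, belief))  # take umbrella
```

With a 0.7 credence in rain, taking the umbrella has expected utility 0.82 versus 0.3 for leaving it, so the rational agent takes it.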

This does not often take center stage in game theory — as we saw above, the main tool used in game theory is the solution concept. Most solution concepts used do not make use of subjective probability in an obvious way. They write the following about minimax:

> What role would the minimax principle for a zero-sum game play in such a theory? Suppose for example that our opponent announced his intention, and committed himself in an unbreakable contract, to use the minimax strategy. Then there would be, in all games without dominant strategies for both players, several choices, each of which would yield an expected utility equal to the value of the game, and whose mixture with appropriate weights would be my minimax strategy. Choosing any one of these, or the minimax or any other mixture of them would be equally good, and would yield the value of the game to me. Thus, the minimax strategy is not ruled out by the subjective approach, but does not here have the strong probative force given it by von Neumann and Morgenstern. (p. 115)

What we see is that a more decision theoretic approach to game theory might lead to an agent using what standard game theory suggests they will use, but only under particular circumstances. Thus not all of game theory becomes obsolete under this view, but it does cease to hold a privileged place. This shift from central to permissible is made salient by the following:

> Minimax theory is, of course, incomplete, in that it does not suggest what I should do if I believe that my opponent is not playing the minimax strategy. The experimental evidence…suggests that minimax players are the exception. And if my opponent is not playing the minimax strategy, there will be, in most games, a strategy that I can follow which is superior to minimax. (p. 116)

This leads to what I take to be the central insight of this paper:

> That minimax strategies are a special case is an illustration of an important general connection between the subjective theory of games and the von Neumann-Morgenstern view, namely: solution concepts are a basis for particular prior probability distributions. (p. 116)

This is really interesting, and there are at least two things we can draw from it. The first is that, when considering how likely we think it is that an opponent will take a certain action, we can apply what we take to be a relevant solution concept to help us generate our probability distribution over actions. This is what the above quote is emphasizing. The second is that we can then use this distribution to choose an optimal (from our perspective) action in the style of decision theory.

Importantly, we only recover the recommendations/predictions of game theory if our probability distribution over an opponent’s actions does not differ too much from the one given by the solution concept. But suppose it does: I think my opponent has a trembling hand and cannot perfectly execute actions, for example, or I think it not unlikely that he makes a mistake. Then I might take an action that, while not recommended by traditional game theory (because in that context we assume agents are following a certain solution concept), is perfectly rational given my degrees of belief.
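This two-step recipe can be sketched in code. The zero-sum game below is an illustrative assumption (my payoffs only; not from the paper): pure maximin recommends B, and if my beliefs track that solution concept the expected-utility calculation agrees. But if I believe my opponent leans heavily toward X, the decision-theoretic answer deviates to A.

```python
# Best-responding to a subjective belief about the opponent's action.
# With a "neutral" belief the answer matches maximin (B); with a belief
# that the opponent mostly plays X, the recommendation shifts to A.

payoff = {                 # payoff[my_action][their_action] = my payoff
    "A": {"X": 2, "Y": -3},
    "B": {"X": 1, "Y": 1},
}

def best_response(p_x):
    """Expected-utility-maximizing row given P(opponent plays X) = p_x."""
    def eu(a):
        return p_x * payoff[a]["X"] + (1 - p_x) * payoff[a]["Y"]
    return max(payoff, key=eu)

print(best_response(0.5))  # B -- agrees with the maximin recommendation
print(best_response(0.9))  # A -- a different belief shifts the choice
```

This is the "central to permissible" shift in code form: the solution concept's recommendation survives as one possible best response, recovered only under particular beliefs.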

Incorporating subjective probability also has interesting consequences for the analysis of multiple-play games:

> Multiple-play games do introduce a new complication in the Bayesian theory. To help fix ideas, we will take a two game sequence in which game 2 is played, and then game 1…When only game 1 is left, only my opinion about my opponent’s action in game 1 is relevant. In thinking about my action in game 2, I must take into account not only my opinion about what my opponent is likely to do in game 2, but also my opinion about the effect my strategy in game 2 will have on my opponent’s strategy in game 1, conditional on each of the actions I may take in game 2. It is this feature that makes multiple play games very interesting from the subjective Bayesian perspective. (p. 117)

In the more standard game-theoretic approach, many of the solution concepts allow for simpler forms of backward induction. However, by allowing for a richer set of possible probability distributions over an opponent’s actions, the game can become more interesting — and perhaps more applicable to real world contexts.
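The quoted two-game reasoning can be sketched as follows. All the numbers (stage payoffs and my conditional beliefs) are illustrative assumptions: I believe my opponent's game-1 behavior depends on what I do in game 2, so my game-2 choice must weigh its effect on the game-1 continuation.

```python
# Two-game sequence (game 2 first, then game 1), in the paper's ordering.
# I choose my game-2 action to maximize game-2 expected payoff plus the
# expected payoff of my best game-1 response, conditional on my game-2 move.

PAYOFF = {  # my payoff given (my_action, their_action); PD stage payoffs
    ("Cooperate", "Cooperate"): 3, ("Cooperate", "Defect"): 0,
    ("Defect",    "Cooperate"): 5, ("Defect",    "Defect"): 1,
}

P_DEFECT_G2 = 0.5                 # my belief the opponent defects in game 2
P_DEFECT_G1 = {"Cooperate": 0.2,  # ...and in game 1, conditional on MY
               "Defect": 0.9}     # game-2 action (he mirrors my behavior)

def eu(my_action, p_defect):
    """My expected stage payoff given P(opponent defects)."""
    return (p_defect * PAYOFF[(my_action, "Defect")]
            + (1 - p_defect) * PAYOFF[(my_action, "Cooperate")])

def total_eu(my_g2_action):
    stage2 = eu(my_g2_action, P_DEFECT_G2)
    # In game 1 only my (now-conditioned) belief matters; I best-respond.
    stage1 = max(eu(a, P_DEFECT_G1[my_g2_action])
                 for a in ["Cooperate", "Defect"])
    return stage2 + stage1

best = max(["Cooperate", "Defect"], key=total_eu)
print(best)  # Cooperate
```

Under these assumed beliefs, cooperating in game 2 is rational even though Defect dominates each stage in isolation: it makes the opponent's game-1 cooperation much more likely, and that expected gain outweighs the game-2 sacrifice.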

I found this to be a super interesting paper; in particular I like the idea of incorporating more decision theoretic concerns into game theory analyses.

Great analysis of the paper. I would caution against proclaiming this in the current environment; in my limited experience, applying it to real-world complexities is often rejected by the entrenchment of camps. Regardless, excellent analysis, and thanks for the introduction to a paper I would have otherwise missed.
