Can two rational people agree to disagree?
This question seems really important to me. When I am having a conversation with a friend, for example, and we are both good-faith rational actors engaged in a collective truth-seeking endeavor (as is my hope!), is it possible for us to agree to disagree?
Of course, in the real world there are practical considerations that may make this more challenging. For example, sharing information takes time and resources; I cannot recount my entire life experience to a friend in one (or many!) sittings.
However, suppose that time is not an issue, that both parties are trustworthy, that both parties are rational, et cetera. If we have abstracted away from all the details of an actual conversation, and just focus on the structure of belief, can we rationally agree to disagree?
One of the most famous answers to this question was given by the great Robert Aumann, and came to be known as Aumann’s Agreement Theorem. Other people have already given summaries and explanations of it, and I recommend you check them out. I wrote this post both in the hope that it provides a supplementary explanation and as good practice for myself. Since there are already other explanations (to which I linked), and since the paper itself is very readable, I skip most of the mathematical details in favor of a more intuitive characterization of the assumptions and implications of the theorem. Indeed, Aumann himself thought that “once one has the appropriate framework, it is mathematically trivial” (p. 1236). The more interesting part is the interpretation. This is more of a short note with a little conceptual setup, encouraging you to dive into the details yourself.
***The original paper can be found here.***
First of all, by “rational” we mean Bayesian. That is, a rational agent has degrees of belief that obey the probability axioms. We call the probability function that characterizes the agent’s beliefs before observing evidence their prior (as in prior belief), and the probability function that characterizes the agent’s beliefs after observing evidence their posterior (as in posterior belief).
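To make the prior/posterior distinction concrete, here is a minimal Bayesian update in Python. This is my own illustration, not anything from the paper; the hypotheses, prior weights, and likelihoods are made-up numbers.

```python
# A minimal sketch of a Bayesian update over a finite set of hypotheses.
# The hypotheses, prior, and likelihoods are invented for illustration.

def posterior(prior, likelihood):
    """Apply Bayes' rule: posterior is proportional to prior times likelihood."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior belief about an event A, before seeing any evidence.
prior = {"A": 0.5, "not-A": 0.5}

# Likelihood of the observed evidence under each hypothesis (assumed numbers).
likelihood = {"A": 0.8, "not-A": 0.2}

print(posterior(prior, likelihood))  # {'A': 0.8, 'not-A': 0.2}
```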
Aumann states the result of his paper as follows:
If two people have the same priors, and their posteriors for a given event A are common knowledge, then these posteriors must be equal. This is so even though they may base their posteriors on quite different information. In brief, people with the same priors cannot agree to disagree.
p. 1236
So we see that Aumann answers in the negative: rational agents cannot agree to disagree, assuming they have the same prior beliefs. Importantly, they must also have common knowledge of each other’s posteriors; it is not enough that each merely knows the other’s posterior.
This last point can be tricky (see here and here for a related puzzle), and it is worth thinking it through (if you are going to attempt the puzzle, I suggest solving it before reading the rest of this post, since too much will be given away).
Suppose, for example, that we both know some proposition, A. Do we necessarily have common knowledge of A? No. I might not know that you know A, and you might not know that I know A. Furthermore, even if I know that you know A, I might not know that you know that I know A. And so on and so forth, to infinity. In order for two people to have common knowledge of something, both agents have to know that the other knows it to every depth. For example, “person 1 knows that A” is depth 0. “Person 1 knows that person 2 knows that A” is depth 1. “Person 1 knows that person 2 knows that person 1 knows that A” is depth 2. And so on for 3, 4, 5, … . And so, for there to be common knowledge of A, statements like these must hold at every depth, for both agents.
To take stock: so far we require that the agents have common knowledge of each other’s posteriors, and that each agent started with the same prior beliefs.
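To see how these two requirements play together, here is a toy sketch of the partition framework Aumann works in, combined with the back-and-forth process studied in follow-up work by Geanakoplos and Polemarchakis, in which the agents repeatedly announce their posteriors until they agree. This is my own illustration, not the paper’s proof; the state space, event, partitions, and true state below are invented numbers.

```python
from fractions import Fraction

# Toy model: finite state space with a uniform common prior, one partition
# of the states per agent (their private information), and an event A whose
# probability they care about. All specific numbers are invented.
states = [1, 2, 3, 4]
event_A = {1, 4}
true_state = 1

partitions = {
    1: [{1, 2}, {3, 4}],      # agent 1's private information
    2: [{1, 2, 3}, {4}],      # agent 2's private information
}

def cell(partition, state):
    """The cell of the partition containing the given state."""
    return next(c for c in partition if state in c)

def post(partition, state):
    """Posterior probability of event_A given the cell containing `state`."""
    c = cell(partition, state)
    return Fraction(len(c & event_A), len(c))

def refine(partition, public_event):
    """Refine a partition by conditioning on a publicly announced event."""
    refined = []
    for c in partition:
        for piece in (c & public_event, c - public_event):
            if piece:
                refined.append(piece)
    return refined

# The agents take turns announcing their current posterior for A. Each
# announcement is itself a public event ("the set of states in which I
# would have announced that number"), which both agents then condition on.
for round_number in range(4):
    speaker = 1 if round_number % 2 == 0 else 2
    announced = post(partitions[speaker], true_state)
    print(f"agent {speaker} announces posterior {announced}")
    public_event = {s for s in states
                    if post(partitions[speaker], s) == announced}
    for agent in partitions:
        partitions[agent] = refine(partitions[agent], public_event)

# Once the announcements stop conveying new information, the posteriors agree.
print(post(partitions[1], true_state), post(partitions[2], true_state))
```

In this toy run the agents start out announcing 1/2 and 1/3, but after two more rounds both settle on 1/2: once an announcement carries no new information, the posteriors are common knowledge, and the theorem says they must then be equal.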
Common knowledge makes sense in the context of collaborative truth-seeking between two trustworthy agents. Obviously, if one agent is skeptical of the other’s intent, or thinks the other has made a mistake (violating the condition that the agent is fully rational), this kind of reasoning (in which both agents end up at a common posterior) cannot in general be carried out. If I think that my friend might be lying to me about her posterior for some reason, I have to take that possibility into account, which can wreck this whole happy agreement.
However, it seems that, amongst good-faith actors, and abstracting away from the costs of sharing information, common knowledge is a decent approximation. At the very least, we might think it captures the best case scenario.
How about the assumption of common priors? Aumann himself seems to find it plausible that people, in general, will have the same priors (p. 1237), but does not take too strong a stance on it.
I think that in practice this is often correct: in a lot of cases, humans will in fact have (at least approximately) the same priors. However, this seems a little slippery. Aumann’s theorem is set in a rather abstract context, and in that abstract setting it does not seem totally unreasonable that agents might have different priors. I certainly have different priors than some of my friends.
One might try to argue that we do not in fact have different priors; rather, we have had different experiences, and if we factored those out we would be left with the same prior. I like this line of argumentation, but it also seems a little slippery to me. When do we stop? How much do I factor out? Do we look at prior beliefs at birth? Does any such thing exist, even as a helpful modelling assumption? Are there no relevant genetic differences between individuals? Do we factor those out too, and say they come from the “experience” of evolution?
So while having identical priors would be nice, I am less confident that this is realistic. If we do not have further rational constraints on priors besides the probability axioms, we have far too large a class of priors to ensure agreement.
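To see why, note that with different priors the conclusion can fail even trivially: if neither agent has any private information at all, each agent’s posterior just is their prior, both posteriors are common knowledge, and yet they differ. A minimal sketch, with invented numbers:

```python
# If neither agent observes any evidence, posterior = prior for each of them.
# Both posteriors are then trivially common knowledge, yet they disagree,
# because the common-prior assumption has been dropped. Numbers are made up.

prior_1 = {"A": 0.7, "not-A": 0.3}   # agent 1's prior
prior_2 = {"A": 0.4, "not-A": 0.6}   # agent 2's prior

posterior_1, posterior_2 = prior_1, prior_2  # no evidence observed

print(posterior_1["A"], posterior_2["A"])  # 0.7 vs. 0.4: no agreement
```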
The Aumann result captures an ideal edge case: rational agents with the same prior beliefs, and common knowledge of each other’s posteriors, cannot disagree. However, the assumptions needed for the result are quite strong. Despite this, I think it can be helpful in a practical sense. Scott Aaronson takes the view that it can be a guide to what productive disagreement should look like. As a final word, I encourage you to read his post.