Readers of the blog will know that I am a fan of the Bayesian approach to probability. This approach is also sometimes called “personal probability”, because it takes probabilities to be the degrees of belief (or credences) of rational agents.
We can think of using probability like this as a framework for managing uncertainty in a coherent way. Not only can we manage uncertainty in this way for the sake of forming accurate beliefs about the world, but we can also leverage this framework to help make decisions that further our goals. Think of gambling at a casino: to do it well you think of how likely the different outcomes are, how much you stand to win or lose if different outcomes obtain, and then take the gamble that maximizes your expected money (or utility if we are being very careful).
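This kind of expected-value reasoning can be sketched in a few lines of Python. The gambles below are illustrative (the roulette numbers are the standard American-wheel straight-up bet; the coin flip is hypothetical):

```python
# Expected-value reasoning at a casino. Each gamble is a list of
# (probability, payoff) pairs; payoffs are in dollars per $1 staked.

def expected_value(gamble):
    """Probability-weighted sum of payoffs."""
    return sum(p * payoff for p, payoff in gamble)

gambles = {
    # American roulette straight-up bet: 35-to-1 payout, 38 pockets.
    "straight-up roulette": [(1 / 38, 35), (37 / 38, -1)],
    # A hypothetical even-money bet on a fair coin.
    "fair coin flip": [(1 / 2, 1), (1 / 2, -1)],
}

best = max(gambles, key=lambda name: expected_value(gambles[name]))
for name, gamble in gambles.items():
    print(f"{name}: EV = {expected_value(gamble):+.4f} per $1")
print("take:", best)
```

The coin flip wins: its expected value is 0, while the roulette bet loses about 5.3 cents per dollar in expectation.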
However, the connection between probability and decision making is deeper than that, or at least more subtle. When I described the process at the casino, we started with your beliefs and your utilities (the stuff you care about), and from those two things you figured out what the best decision was for you. We can get your preferences between gambles from your beliefs and utilities. However, philosophers often actually go the other way, starting with your preferences between gambles and extracting your beliefs and utilities from them.
Frank Plumpton Ramsey (I know, Plumpton, it is great) sketched out how to do this in his famous 1926 Truth and Probability paper, which at some point I hope to summarize on this blog. Unfortunately this was relatively unknown for a few decades. Following up on this strategy was Leonard Savage who developed this approach (in much more detail) in the influential The Foundations of Statistics. Both are worth reading; in particular the Ramsey is quite accessible. TFS is a full book and more mathematically loaded, but is also certainly worth your time as one of the foundations of the field.
Savage was one of the main proponents of the personalist approach to probability. However, he was also very candid about some of the puzzles surrounding this approach. Given my interests and the fact that I’ve covered this approach on here a number of times, I thought it would be fun to take a look at some of his concerns as outlined in his 1967 “Difficulties in the Theory of Personal Probability”.
***The original paper can be found here.***
When discussing the theory and its history he writes
The theory of personal probability formalizes a view of the nature of uncertainty that was discovered independently by Frank Ramsey and by Bruno de Finetti. This view has not been popular either in philosophy or in statistics, though it has recently been gaining ground in both areas. Statisticians, understandably enough, are liable to passion in controversy about the nature of uncertainty. We personalistic statisticians are happy to see our ranks grow but are impatient with the majority, who have not yet come over to us. And the majority cannot understand why a handful of statisticians who have shown competence in the past are now intent on the propaganda of indefensible and pernicious doctrine.
We see Savage echoing what we said before: personal probability is concerned with uncertainty. Unfortunately, it was slow to gain traction in various fields. Savage wrote this over 50 years ago, so we might wonder how the situation has changed since then.
I’m not sure about more mainstream epistemology, but as far as formal epistemology goes, it seems that these days (by personal observation) most formal epistemologists are personalists of some stripe or other.
The situation is a little more dire in statistics, where (it seems to me) the personalistic view is still not the mainstream. However, it has certainly gained traction in the intervening 50 years. For example, the statistics department here at UCI is relatively Bayesian (personalist), and people like Persi Diaconis (the mathemagician) have been figuring out how to incorporate Bayesian methods into practical statistical methodology.
Whatever the current state is, though, let’s focus on Savage’s concerns. As he writes, he “shall be raising many vague questions and making relatively few clear statements” (p. 305) — let’s see if we can figure out what some of his concerns are despite the lack of clarity.
He starts by sketching an outline of the theory of personal probability. He writes that the “theory seeks to distinguish between coherent behavior and blunder, or demonstrable incoherence, in the face of uncertainty. It therefore prescribes conditions on a person’s preferences among acts” (p. 306). This is as we saw before; we have some kind of coherent preferences among acts (I called them gambles before, but they are the same thing).
In Savage’s framework, there is a set of states of the world, which is “informally, a possible list of answers to all questions that might be pertinent to the decision at hand” (p. 306). For example, if I am deciding whether or not to bring an umbrella, the relevant states of the world might be “raining” and “not raining”.
There are also consequences, “conceived of as what the person experiences and what he has preferences for even when there is no uncertainty” (p. 306). For example, in the umbrella case, the outcomes might be something like “I am wet” and “I am dry”.
Here we see the first small concern of Savage. He writes “The idea of pure experience, as a good or ill, seems philosophically suspect and is certainly impractical” (p. 306). What does he mean by this? I think he means something like this: we don’t necessarily just care about our experience of the world, but also about the actual state of the world. For example, I actually care about my family and friends. I don’t just care about my experience of them. Given the choice between being in a simulation in which my friends are all happy and healthy when in fact in the real world they are not, and being in a simulation in which they are unhappy when they are in fact happy in the real world, I would choose the latter.
However, as Savage points out, there is a different but similar framework due to Richard Jeffrey in which this is not a problem, because it is not dualistic like Savage’s, with separate outcomes and states of the world. Instead, in Jeffrey’s framework, outcomes and states are the same kind of thing (they are all just states of the world).
Other problems present themselves when, facing a certain decision problem, we have to structure the actual sets of acts, states, and outcomes for the decision problem. For example, how fine a distinction should be made among states? Are “raining” and “not raining” sufficient, or should we further break it down into “raining and a full moon”, “raining and not a full moon”, “not raining and a full moon”, and “not raining and not a full moon”? How much further do we go? When should we stop?
In the end Savage does not think this kind of problem is too troubling. Even though there may not be a unique choice of representation for the decision problem, adding more irrelevancies won’t hurt. Thus he writes
In practice, it is often–I would hesitantly say always–possible to make a workable choice, that is, one with respect to which the postulates seem to be reasonably well satisfied. And it can, to some extent, be shown that different workable choices cannot lead to differing behavior.
There is, however, a more troubling issue in the framework. In the Savage framework, “acts are without influence on events and events without influence on well-being” (p. 308). Let’s see why this is the case.
Recall that acts are functions from states of the world to outcomes. So, in our running umbrella example, we might have two states of the world, “raining” and “not raining”; two acts, “bring umbrella” and “leave umbrella”; and two outcomes, “wet” and “dry”. The acts are functions, so for example we might have this:
bring umbrella (raining) = dry
bring umbrella (not raining) = dry
leave umbrella (raining) = wet
leave umbrella (not raining) = dry
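The act functions above can be written out directly in code, with each act as a mapping from states to outcomes. Adding made-up credences and utilities (my own illustrative numbers, not Savage’s) shows how preferences over acts fall out:

```python
# The umbrella example: acts are functions (here, dicts) from states
# of the world to outcomes.
states = ["raining", "not raining"]

acts = {
    "bring umbrella": {"raining": "dry", "not raining": "dry"},
    "leave umbrella": {"raining": "wet", "not raining": "dry"},
}

# Made-up credences over states and utilities over outcomes.
credence = {"raining": 0.3, "not raining": 0.7}
utility = {"dry": 1.0, "wet": 0.0}

def expected_utility(act):
    """Weight each state's outcome-utility by the credence in that state."""
    return sum(credence[s] * utility[acts[act][s]] for s in states)

for act in acts:
    print(act, "->", expected_utility(act))
```

With these numbers, bringing the umbrella (expected utility 1.0) beats leaving it (0.7), so the agent prefers the first act.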
Notice that the acts do nothing to determine which state obtains. The acts, in conjunction with the actual state, determine the outcome.
Furthermore, it is the outcome alone that determines the well-being of the agent. In our example, the agent does not care at all about whether it is raining or not. “Dry” or “wet” is all that matters. But the agent might feel silly if she brought an umbrella and it didn’t rain. So maybe she does care about the state.
So the two issues are act-state independence, meaning that the resulting state (whether it is raining) is entirely unaffected by the act, and the hard separation between states and well-being. In this example act-state independence makes sense, but this might not always be the case. We tend to think that our acts can cause certain states of the world to obtain. For example, if I hand a cashier money, I might cause some item to end up in my possession. I might then use the item to do something, which would make me happy (the outcome). We saw above how we might want to consider states when assessing well-being.
Savage notes, and we note with him, that some of these issues can be addressed. For example, you might have already noticed that we can refine the outcomes in the rain case to be “dry and did not bring umbrella”, “dry and brought umbrella”, and “wet and did not bring umbrella”. This would allow us to capture the agent feeling silly if “dry and brought umbrella” were the outcome. Similar things can be done for act-state independence, but even though this system “can be made to work better than one might at first realize” (p. 308), it is still far from perfect. This is one of Savage’s main concerns.
The next of Savage’s concerns is the normative status of the theory. A norm is something that licenses an “ought” — for example, does the theory say that one ought to have one’s beliefs be governed by the probability axioms?
Savage has a number of concerns here, and he lists them off in quick succession:
It is intended that a reflective person who finds himself about to behave in conflict with the theory will reconsider, and some reflective people have indeed found the theory, taken with a grain of salt, to be compelling. I feel, but do not clearly understand, the compulsion, and a good analysis of it might be a valuable philosophical contribution. How deeply would such an analysis be bound up with the philosophy of the contrafactual? Again, the philosophy of free will seemingly has something to do with the interpretation of any normative theory; in the present case, is the connection important, academic, or illusory? (p. 308)
There is a lot here. On one hand, we are wondering why exactly we should reconsider our behavior if we find it conflicts with the theory. There are a number of justifications for personalism often given: Dutch books, which I’ve discussed before; accuracy arguments (in short, if your beliefs are not probabilities, then there exists a probability function that is guaranteed to be more accurate than whatever non-probability beliefs you have); various convergence results (if you are a Bayesian and update your beliefs according to Bayes’ rule, then you expect to converge to the true belief with certainty); and a constellation of others.
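To make the accuracy argument concrete, here is a small numerical sketch of my own, using the Brier score as the accuracy measure (the standard choice in this literature): incoherent credences of 0.7 in A and 0.7 in not-A are strictly less accurate than the coherent credences (0.5, 0.5) in every possible world.

```python
# Accuracy-dominance sketch: an incoherent credence function is beaten
# by a coherent one in every world, as measured by the Brier score.

def brier(credences, world):
    """Sum of squared distances between credences and truth values (0/1)."""
    return sum((credences[prop] - world[prop]) ** 2 for prop in credences)

incoherent = {"A": 0.7, "not A": 0.7}   # sums to 1.4: not a probability
coherent = {"A": 0.5, "not A": 0.5}     # a genuine probability function

worlds = [{"A": 1, "not A": 0}, {"A": 0, "not A": 1}]
for world in worlds:
    # About 0.58 vs 0.50 in both worlds: the incoherent credences are
    # dominated (lower Brier score = more accurate).
    print(brier(incoherent, world), brier(coherent, world))
```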
However, given that he is bringing in contrafactuals and free will, I don’t know if this kind of justification is exactly what Savage is looking for. A contrafactual, or more commonly a counterfactual, is something that isn’t actually the case. This is related to counterfactual reasoning. For example, “if we did not have computers in 2019, I would not be writing this post” is counterfactual reasoning, since we do in fact have computers in 2019. We presumably think that this counterfactual implication is true, and that “if we did not have computers in 2019, I would be writing this post” is false. However, why is the second one false? Treating it as a normal “if…then…”, it comes out true, since “if FALSE then TRUE” is true (and the consequent here is indeed true, since I am writing this post). Nevertheless, we think it is false. We are reasoning counterfactually: reasoning about what would happen if something which is not the case were the case.
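The “normal” (material) conditional can be tabulated mechanically; the sketch below shows why it cannot distinguish the two counterfactuals above — any conditional with a false antecedent comes out true:

```python
# Truth table for the material conditional "if p then q".

def material_conditional(p, q):
    """Classically equivalent to (not p) or q."""
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"if {p} then {q}: {material_conditional(p, q)}")
# Both rows with a false antecedent are True, so on the material
# reading both counterfactuals about 2019 would count as true.
```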
This is a little tricky; we often think that statements are true either because they are tautologies (for example, “A or not A” is always true for any statement “A”), or because they relate to the world appropriately: “the chair I am sitting on is made of wood” is true if and only if the chair I am sitting on is made of wood. Counterfactuals don’t depend on the world in this way, since they are about precisely what is not true in the world, and they aren’t (all) tautologies. Thus, it is unclear what grounds the truth or falsity of counterfactuals.
How does this relate to decision making? Well, we see the connection to the normative status through the common philosophical principle “ought implies can.” If the theory actually says that we ought to behave in a certain way, then it implies that we can behave in that way.
Why might this be troublesome? Well, suppose the theory says I ought to make a certain decision. Suppose, also, that my mind is constituted such that I do not in fact make that decision. Then in what sense ought I to make it? If ought implies can, then it is not true that I ought to make it.
This might be easier to see by thinking of an artificial intelligence as the agent. This AI is a decision maker, and runs a certain program to make decisions. This program is entirely deterministic. Thus, for each decision problem, the decision the AI makes is entirely determined in advance by the program and the problem. If the correct action is A, but the computer does B, then if ought implies can we cannot really say that the computer ought to have done A: doing A is impossible, since the computer is constituted to do B.
Now we might think we are in the same boat as the AI. Unless we suppose something like free will, I might be subject to the same kind of constraints as the computer; my brain will simply act as it acts, and if I do not make the decision prescribed by the theory, then how can we really say I was wrong? Maybe if we had an account of counterfactuals, and could suppose I was constituted differently than I in fact am, we could reason about the case in which I did something else.
I think something like the above is the concern. I’m not quite sure how I feel about it, or what kind of progress, if any, has been made on it. To the best of my knowledge, the main people working on this have been the folks at MIRI, for example here.
Another problem about which Savage is concerned is the fact that the theory makes no “allowance within it for the cost of thinking” (p. 308). Remember that according to the theory our beliefs must be probabilities. The probability of any tautology is 1. This has odd consequences. “For example, a person required to risk money on a remote digit of π would have to compute the digit, in order to fully comply with the theory” (p. 308). This is because the identity of the remote digit is not something empirical about which the agent learns, but something that the agent could simply think longer about and arrive at (by calculating the digit using an algorithm for π). The theory does not allow for the cost of such thinking; it might be rational for us limited beings not to compute it if the cost of computing it is very high, for example. Savage wants a solution for this problem.
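To make the cost of thinking concrete, here is a sketch of my own (not anything from Savage) that computes decimal digits of π with Machin’s formula. Remoter digits demand more series terms and higher working precision, which is exactly the kind of computational cost the theory has no room for:

```python
# Digits of pi via Machin's formula:
#   pi = 16*arctan(1/5) - 4*arctan(1/239).
from decimal import Decimal, getcontext

def arctan_inv(x, prec):
    """arctan(1/x) by its Taylor series, for an integer x > 1."""
    eps = Decimal(10) ** (-prec)
    total = Decimal(0)
    power = Decimal(1) / x          # (1/x)**n for the current odd n
    n, sign, x_sq = 1, 1, x * x
    while power > eps:
        total += sign * power / n
        power /= x_sq
        n += 2
        sign = -sign
    return total

def pi_digits(ndigits):
    """Return pi as a string: '3.' followed by ndigits decimal digits."""
    prec = ndigits + 10             # extra guard digits
    getcontext().prec = prec
    pi = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return str(pi)[:ndigits + 2]

print(pi_digits(50))
```

To bet rationally on, say, the 50th digit, the theory asks the agent to already have credence 0 or 1 in it; the code above shows the work that assignment quietly presupposes.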
I’ve heard of two main strategies for dealing with this. One is restricting the class of things on which the agent’s probability function is defined so that this is not a problem. When thinking formally about this framework, we express the agent’s beliefs as a probability function defined on a specific set of sets called an algebra. The idea is that propositions about π might not be in the agent’s algebra, and thus she could learn about π as if she were learning any empirical fact. I don’t fully understand this idea; if anyone has a reference to such a solution that is more fleshed out, I would appreciate a comment!
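Here is a toy version of the algebra idea as I understand it (the state names and events are made up for illustration): the agent’s algebra is generated by an empirical event only, so a proposition about π’s digits simply isn’t in the domain of her probability function, and no credence is assigned to it.

```python
# A toy state space crossing an empirical question (rain?) with a
# mathematical one (is some remote digit of pi a 7?).
states = frozenset(["rain & 7", "rain & not-7", "dry & 7", "dry & not-7"])

rain = frozenset(["rain & 7", "rain & not-7"])

# The algebra generated by the empirical event alone: closed under
# complement and union, but it never separates 7-states from not-7-states.
algebra = {frozenset(), rain, states - rain, states}

digit_is_7 = frozenset(["rain & 7", "dry & 7"])
print(digit_is_7 in algebra)  # False: no credence is defined on it
```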
The other approach is one with which I am a little more familiar. It is the idea that we should build up a slightly different framework to deal with logical uncertainty. Just as we have conditions about how our probabilities should change given new empirical evidence, we should have conditions for how our beliefs should change given more thinking. Again, the only people I know about who are working on this are the folks at MIRI. Here is their paper. It is a little lengthy and technical. One of my good friends is writing a shorter, friendlier summary of the paper, to which I will try to post a link when he is done.
Other problems Savage considers center around knowing our own minds:
The example about π does not adequately express the utter impracticality of knowing our own minds in the sense implied by the theory. You cannot be confident of having composed the ten word telegram that suits you best, though the list of possibilities is finite and vast numbers of possibilities can be eliminated immediately; after your best efforts, someone may suggest a clear improvement that had not occurred to you.
A particularly conspicuous way in which we do not know our own minds (and therefore cannot comply with the theory) is that we are unsure, or vague, about our preferences between even relatively simple choices such as $10 or a pair of theater tickets. Some have tried to reflect the phenomenon of vagueness within the theory, while others believe that, though vagueness must somehow be reckoned with, its nature defies formalization. (p. 308)
So, broadly, we have concerns about the theory requiring us to explore a possibility space that is infeasible to explore, and concerns about uncertainty regarding our own preferences.
For the former, I think something like bounded models of rationality, in which we require the agent to be computable or efficiently computable, could be useful. For the latter, this seems to be somewhat outside the scope of decision theory. Decision theory tells us how to make optimal decisions given our beliefs and preferences; it doesn’t tell us what they should be.
However, I think we might even be able to think about that within some kind of framework, perhaps using reinforcement learning, which allows an agent to be uncertain about which utility function she is trying to optimize. Perhaps this could be used to think about uncertainty about our own preferences.
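A minimal sketch of that idea (all numbers invented), echoing Savage’s $10-versus-theater-tickets example: the agent holds credences over candidate utility functions and maximizes expected utility through that uncertainty as well.

```python
# An agent uncertain about her own utility function: credences over two
# candidate utility functions, with expected utility taken through both
# the candidates and their verdicts. All numbers are invented.

utility_credence = {"loves theater": 0.6, "prefers cash": 0.4}

utilities = {
    "loves theater": {"$10": 0.4, "theater tickets": 1.0},
    "prefers cash":  {"$10": 1.0, "theater tickets": 0.2},
}

def expected_utility(option):
    """Average the candidate utilities, weighted by credence in each."""
    return sum(utility_credence[u] * utilities[u][option]
               for u in utility_credence)

options = ["$10", "theater tickets"]
choice = max(options, key=expected_utility)
print(choice, round(expected_utility(choice), 2))
```

With these numbers the tickets narrowly win (expected utility 0.68 vs. 0.64), and the agent can act despite not knowing her own preferences for certain.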
There are a few other minor concerns that Savage has dealing with facts and universals: what does it mean to refer to a fact? How can we recognize certain properties? Though interesting, I think these are less specific to the theory of personal probability, and more general problems. Perhaps there could be an interesting analysis particular to this theory, but nothing jumps out to me right now.
These, then, are some of the difficulties. Difficulties they may be, but I think it is interesting that many of these problems seem to receive more attention from AI researchers than from philosophers. Maybe it only seems that way to me because I am unaware of some of the literature. If you know of any papers that address these problems, please let me know in the comments!
There certainly are open problems in the theory. However, I agree with Savage when he writes
The difficulties that disturb me do not seem to me so much to tell against the personalistic view in favor of its competitors, the frequentistic and logical (or necessary) views, as to suggest room for modification and improvement of the personalistic view itself. (p. 306)