# Paper Review: Bayes, Bounds, and Rational Analysis

Bayesian learning and decision theory (arguably) characterize rationality for idealized agents. However, carrying out Bayesian calculations can often be costly. In particular, the kind of agent one is–whether a human, lizard, or computer–constrains the kind of information processing one can do. This gestures towards a question: what is the relationship between idealized rationality and rationality for real-world agents?

***The original paper can be found here.***

Thomas Icard’s “Bayes, Bounds, and Rational Analysis” offers a very general theoretical model for evaluating the rationality of bounded agents. The natural application of this model is taking into account the computational costs of a physically implemented agent (human, lizard, computer). We can use this framework to tackle the above question. By taking into account the computational limits of a certain kind of actual agent, we can look at whether the optimal agent under those constraints approximates Bayesian rationality, or whether some other method that looks nothing like Bayes is the optimally bounded agent.

Icard is particularly interested in how the relationship between idealized Bayesian rationality and bounded rationality plays a methodological role in cognitive science. Cognitive science uses rational analysis to solve a particular problem. When cognitive scientists are trying to understand some aspect of our cognition, whether it is navigating a room or using a language, they try to develop models of how the brain performs this task. Though these models are constrained by the empirical data they gather from experiments, this is often not enough to identify a promising model–the space of possible models is too large. Thus, in conjunction with the empirical data, cognitive scientists often use rational analysis to help identify promising cognitive models. Icard, following Anderson, summarizes the methodology of rational analysis in six steps:

1. Precisely specify the goals of the cognitive system.
2. Develop a formal model of the environment to which the system is adapted.
3. Make minimal assumptions about computational limitations.
4. Derive the optimal behavioral function given items 1–3.
5. Examine the empirical literature to see whether the predictions are confirmed.
6. If the predictions are off, iterate. (Icard 2018, 82)

Many of the functions that cognitive scientists are interested in can be characterized as inference problems under uncertainty. For example, categorizing new objects one has never seen before, or understanding a new sentence one has never heard before. Thus, since Bayesian methods excel at inference problems under uncertainty, cognitive scientists often use Bayesian rational analysis to identify a plausible model. Often, for a given task and a given class of people, Bayesian cognitive scientists will construct a reasonable prior probability distribution $P(\theta)$ over some possible states of affairs and a conditional likelihood function $P(s|\theta)$, which is how likely the agent thinks she will observe some data $s$ given that $\theta$ is the case. The scientists then suppose that the agent’s response to certain problems will directly rely on the posterior distribution $P(\theta|s)$, which can be calculated using Bayes’ theorem. Thus, Bayesian cognitive scientists model aspects of our cognition as if we were performing Bayesian inference. The rationale is that Bayesian inference is, in a standard prediction problem, the optimal prediction method (this is Icard’s Fact 1 on page 89). Thus, the reasoning goes, since humans seem relatively successful at many tasks, and Bayesian methods are optimal for many of these tasks, it is a useful methodology to model our cognition as a Bayesian process.
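The posterior computation at the heart of this methodology is mechanically simple in the discrete case. Here is a minimal sketch; the two states and all the numbers are hypothetical, chosen only to illustrate Bayes' theorem, not drawn from the paper:

```python
import numpy as np

# Hypothetical setup: two states theta (say, "category A" vs "category B"),
# a prior P(theta), and the likelihood P(s | theta) of the signal actually observed.
prior = np.array([0.7, 0.3])          # P(theta)
likelihood_s = np.array([0.2, 0.9])   # P(s | theta) for the observed s

# Bayes' theorem: P(theta | s) is proportional to P(s | theta) * P(theta)
unnormalized = likelihood_s * prior
posterior = unnormalized / unnormalized.sum()

print(posterior)  # posterior over states given s
```

Even in this toy case the signal flips the verdict: the prior favors the first state, but the posterior favors the second, because the observed signal is far more likely under it.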

However, there are issues with naïve application of Bayesian rational analysis. One is the problem we noted earlier: Bayesian methods can often be computationally costly and even intractable. A second is that the empirical results show that humans do not, in general, make optimal Bayesian predictions.

There is one very particular way in which humans deviate from making optimal predictions which I found incredibly striking. This is the phenomenon of posterior matching. Suppose you conduct an experiment in which you have a number of participants observe some new information, and then perform some kind of prediction task. If we ask them to give a single best prediction, a Bayes-optimal response would be to predict something like the mode of their posterior. However, we do not find this. Instead, we find that the distribution of responses matches the posterior distribution in the model.
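The contrast between the two response patterns is easy to simulate. In this sketch the shared posterior over three candidate answers is an invented stand-in for whatever the experimental model assigns; the point is only the shape of the resulting response distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior over three candidate answers, shared by all participants.
posterior = np.array([0.5, 0.3, 0.2])
n_participants = 10_000

# Bayes-optimal responders all report the posterior mode (index 0 here),
# so the response distribution is degenerate.
mode_responses = np.zeros(3)
mode_responses[np.argmax(posterior)] = 1.0

# Posterior-matching responders instead *sample* an answer from the posterior,
# so the empirical response distribution approximates the posterior itself.
samples = rng.choice(3, size=n_participants, p=posterior)
matching_responses = np.bincount(samples, minlength=3) / n_participants

print(mode_responses)      # [1. 0. 0.]
print(matching_responses)  # close to [0.5, 0.3, 0.2]
```

What experimenters find looks like the second pattern, not the first.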

Think about that for a second. Obviously this means that most (or at least a good number of) participants are not making the Bayes-optimal prediction, but I also want you to reflect on how strange this is. Why would humans do this? Icard has a particularly interesting and beautiful hypothesis for this phenomenon that falls right out of his model for evaluating bounded agents. Keep this in the back of your mind–we will return to it later.

In response to both the cost of Bayesian computations and the disagreement with the empirical results, many people have suggested that humans may approximate Bayesian calculations. However, Icard notes that it is one thing to claim that the ideal rational agent is a Bayesian, and another to claim that the best method for bounded agents to follow is one that explicitly approximates Bayes. Thus, he suggests a middle path. He proposes using his model for evaluating bounded agents to see whether or not approximately Bayesian agents are in fact optimal in certain problems. If this is so, then in these cases cognitive scientists would be justified in using models that approximate Bayesian computations. Icard calls this procedure boundedly rational analysis. Instead of focusing on what the ideal agent would do, pick an appropriate way to bound the agent in a given problem, and then see what the best method is under those constraints. If it is an approximately Bayesian method, then use that. Otherwise, look elsewhere.

Let’s now examine Icard’s model. The details are in the paper itself, and reward study. In this summary I will only try to extract a sketch of the model and a few applications.

This model takes an outside perspective. What I mean by this is that we are not modelling the agent on her own terms, with respect to her own beliefs and goals, but rather with respect to an objective state of affairs (or probability distribution over states of affairs) and some goals. If the agent shares these explicit goals then that is fine, but if not then we can still evaluate her.

Icard builds up the model in a few steps. Consider first evaluating the agent when we know that the state of the world is $\theta$. The agent receives some data $s$ from the world, and has to choose some action $a$. We evaluate how well the agent does with respect to a utility function $U(a,\theta)$, which depends on the action she takes and the state of the world. We also suppose that there is some likelihood function $P(s|\theta)$, which again captures our (not the agent’s) perspective of the situation.

Next, we consider the agent as some function $\mathcal{A}$, which is a function from observations $s$ to probability distributions over actions. Thus, $\mathcal{A}(s)(a)$ is the probability that the agent will respond with action $a$ after observing $s$. No assumption is made that this agent is Bayesian. Indeed, we can consider any agent.

Now, since we know the state of the world $\theta$ and the data $s$, we can define a scoring function $\sigma$ that tells us how good $\mathcal{A}$ is in this particular case. In particular, $\sigma$ is a function of $\theta, s,$ and $\mathcal{A}$. Icard has specific conditions on what type of function $\sigma$ can be, and I encourage those who are interested to consult the paper.

Now we have a very flexible way to evaluate an agent in a particular environment after she has made a particular observation. Using this as the core, Icard builds up a general framework for evaluating arbitrary agents in environments with a probability distribution over states of affairs and signals. Importantly, Icard also extends this framework so that it can incorporate computational costs into the evaluation of how successful the agent is. This gives us a very general framework for evaluating not just the success of arbitrary agents in different situations, but also the success of boundedly rational agents under computational constraints.
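To make the outside perspective concrete, here is a toy instantiation of the evaluation setup with discrete states, signals, and actions. The scoring used below is just expected utility minus a flat cost, a deliberately crude stand-in for Icard's $\sigma$, whose actual conditions are spelled out in the paper; all numbers and agent definitions are hypothetical:

```python
import numpy as np

# Our (the evaluator's) model of the environment, not the agent's.
P_theta = np.array([0.6, 0.4])            # distribution over states theta
P_s_given_theta = np.array([[0.8, 0.2],   # P(s | theta): rows indexed by theta
                            [0.3, 0.7]])
U = np.array([[1.0, 0.0],                 # U(a, theta): rows indexed by action a
              [0.0, 1.0]])

def expected_score(agent, cost=0.0):
    """Average utility of an agent A, where agent(s) is a distribution over actions.

    A stand-in for Icard's scoring framework: expected U(a, theta) under the
    environment distribution, minus a flat computational cost.
    """
    total = 0.0
    for t, p_t in enumerate(P_theta):
        for s, p_s in enumerate(P_s_given_theta[t]):
            dist = agent(s)  # A(s): probability distribution over actions
            total += p_t * p_s * sum(dist[a] * U[a, t] for a in range(len(dist)))
    return total - cost

# An agent that computes the posterior and acts on its mode ...
def bayes_agent(s):
    post = P_s_given_theta[:, s] * P_theta
    post = post / post.sum()
    dist = np.zeros(2)
    dist[np.argmax(post)] = 1.0
    return dist

# ... versus an agent that ignores the signal entirely.
def ignore_agent(s):
    return np.array([0.5, 0.5])

print(expected_score(bayes_agent))   # higher score
print(expected_score(ignore_agent))
```

Note that nothing here assumes the agent being scored is Bayesian; `bayes_agent` and `ignore_agent` are just two arbitrary functions from signals to action distributions, evaluated on the same footing.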

As an example application, Icard applies this model to the case of posterior matching described above. I won’t go into the details in this summary, though of course, as always, I encourage you to explore them yourself. However, I will provide a sketch of the approach.

Icard formulates a fairly general kind of cost function for physical agents. He considers and rejects one approach that immediately suggests itself. It seems quite natural at first to characterize mental computation costs in terms of the space, time, or energy requirements that a system would need to process stimuli in order to make a decision. However, the problem with this approach is that it is very sensitive to the specific computational architecture of the agent. Thus, we might want a more general treatment of what the costs could be like, one that applies across a broader range of architectures.

This leads Icard to consider the computational costs in terms of the general thermodynamic costs associated with computing. Using the physical idea of work, he defines a class of what he calls entropic cost functions. He extracts and summarizes the intuition in a way that is worth quoting in full:

> The intuition here is simple. Without exercising any control over one’s actions, any action would be a priori as likely as any other. Configuring oneself to ensure that a specific action is taken requires some physical work, including in situations when such work takes the form of mere thinking (about what to do, what the answer is, etc.). Even when one implicitly “knows” the right solution to a problem, distilling this solution sufficiently that one may act on it is not a costless process. The assumption is that the cost of this work is inversely proportional to the uncertainty remaining in the agent’s behavioral dispositions. (95)

Making a decision, or changing the probability of taking a certain action, takes physical work. We can also see how this connects back to the posterior matching puzzle. Changing our distribution over actions takes work. Thus, we might perform enough work to move our prior distribution, however that is encoded, to a posterior, and then sample our action from that distribution. Indeed, what this analysis shows is that, when you work out the math, this kind of rule (called a Luce choice rule) is the boundedly optimal solution in a situation with entropic costs (this is his Fact 2, p. 96). This would generate the kind of posterior matching we observe empirically.
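A Luce-style choice rule can be sketched in a few lines. The softmax-with-prior parameterization below, and the use of a "temperature" to stand in for how costly control is, are my illustrative assumptions, not necessarily the exact form Icard derives:

```python
import numpy as np

def luce_choice(expected_utility, prior, temperature=1.0):
    """Luce-style choice rule: P(a) proportional to prior(a) * exp(EU(a) / temperature).

    High temperature (control is expensive) leaves the choice distribution close
    to the prior; low temperature approaches deterministic utility maximization.
    """
    weights = prior * np.exp(np.asarray(expected_utility) / temperature)
    return weights / weights.sum()

# Hypothetical expected utilities for three actions, and a uniform prior.
eu = np.array([1.0, 0.5, 0.0])
uniform = np.ones(3) / 3

print(luce_choice(eu, uniform, temperature=1.0))   # graded: better actions more likely
print(luce_choice(eu, uniform, temperature=0.01))  # near-deterministic argmax
```

With moderate temperature the agent's responses remain probabilistic and track relative utility, which is exactly the sampling-like behavior that would produce posterior matching at the population level.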

I found this to be a rather remarkable hypothesis to explain the phenomenon of posterior matching. Though I think it will of course take further empirical work to see whether or not this kind of process is in fact what the brain does, this is exactly the kind of application that Icard has in mind for his evaluation model: narrowing down the search space of models for cognitive scientists.

Icard also extracts the following lesson from the investigation of his framework:

> The broader lesson of this illustration is that acting optimally across a sufficiently wide range of problems, reflecting the true intricacy of typical cognitive tasks our minds face, is non-trivial. While boundedly rational agents in this setting may fall short of ideal Bayesian optimality—which makes sense especially when conditional probabilities P(ϑ|s) are difficult to determine—they must nonetheless move in the direction of what Bayes’ Rule dictates, acting as though they are appropriately, if only partially, incorporating their evidence. (97)

This is a useful moral not only for trying to figure out how existing agents (humans, lizards) work, but also for thinking about how to design intelligent but bounded artificial agents. Indeed, this seems loosely to share the spirit of the Ramsey-Savage style representation theorems. The representation theorems tell us that if an agent’s preferences over gambles conform to certain rationality constraints, then we can represent her as if she were maximizing some utility function with respect to a unique probability function. Whatever her cognitive architecture is in fact doing, we can think of her as if she had utilities and probabilities. The Icard moral is similar: whatever is in fact going on in the agent, whether it be a bacterium, computer, or human, if it is acting well across a range of problems, then we know that whatever it is doing, it is acting as if it is doing Bayesian updating, if only to an approximation.

I think this paper makes a valuable contribution to the literature on bounded rationality, and rationality in general. Both the model itself and the lessons we can extract from it merit our reflection.