Paper Review: Nonconglomerability for Countably Additive Measures that are not κ-additive

Probability plays a central role in this blog; many of my posts focus on where probability makes contact with philosophy and physics. However, there is also, of course, the mathematical theory of probability. The mathematics and the philosophy interact in many ways, and technical results in the mathematics can often be important for our work as philosophers.

For today’s post I want to dive into a paper that focuses on the more mathematical side of probability. The paper itself is very technical, so I won’t be focusing on explaining how the proof or argument in the paper works. Instead I aim to provide the reader with an introduction to a few of the key concepts used in the paper, so that I can state the result of the paper with something approaching clarity.

***The original paper can be found here.***

There are three main concepts I want to discuss. The first is cardinality, the second is additivity, and the third is reflection/conglomerability. If you skim the post it will look mathematical, and it is, but I will give you all the tools you need to follow along—no previous knowledge required.

I

The fundamental objects in mathematics are sets. A set is a collection of elements. For example, the set X = \{a,b,c\} is the set that contains 3 elements: a, b, and c. We would say here that the cardinality of the set X is 3. Similarly, the cardinality of the set Y = \{3,4,6,7,8\} is 5. Cardinality is like the size of a set.

Things become trickier when we move to infinite sets. For example, the set of natural numbers \mathbb{N} = \{0,1,2,3,\ldots\} is infinite, since there is no largest natural number. We can’t assign this set a cardinality of any natural number, because it is bigger than any of them.

Does this mean that there are all the finite numbers, and then infinity, and these are all the possible cardinal numbers? Not quite. There are actually infinities of different sizes. To keep these separate, we want to assign different cardinal numbers to different sizes.

To see that there are different sizes of infinities, we need to think a little more carefully about the size of a set. Finite sets are easy because they fit our intuition. If one set has 3 members and another has 5, then the latter is clearly larger than the former. But what about the set of natural numbers from before, \mathbb{N}, and the set of odd natural numbers, \mathcal{O} = \{1,3,5,\ldots\}?

On one line of thought, it looks like \mathbb{N} should be bigger than \mathcal{O}, because every odd number is also a natural number, but not the other way around. We write this relationship formally like this, \mathcal{O} \subseteq \mathbb{N}, and say that \mathcal{O} is a subset of \mathbb{N}. The line of thought we were considering goes like this: one set is smaller than another if it is a subset of the other.

This is not the notion mathematicians use, and we can see why. First off, it doesn’t capture simple cases. For example, the set X = \{a,b,c\} contains 3 elements, and the set Y = \{3,4,6,7,8\} contains 5, but it is not the case that X is a subset of Y, since they don’t share any members. This criterion would not help us compare them. In this case we can still lean on our intuitions, since the sets are finite. But how about these two sets:

A = \{2,4,6,\ldots\}, \quad B = \{3,5,7,\ldots\}

Neither is a subset of the other, and it is unclear how we should make the judgement. Are these two sets simply incomparable in size? That would certainly be an impoverished notion of cardinality.

Mathematicians have an answer to this problem, and it involves a special type of function called a bijection. You can think of a function as a rule that takes an object from one set, which we call the domain (say, A), and maps it to an object in a different set, which we call the codomain (say, B). A bijection is a function that satisfies two properties: it is injective and surjective. A function is injective if no two elements of the domain get mapped to the same object. For example, if f is our function, and if both f(2) = 3 and f(4) = 3, then f is not injective. A function is surjective if every element of the codomain—in our example, B—is ‘hit’ by the function. If we want to state this more carefully, we would say that for every element b in B, there is some a in A such that f(a) = b.

Now we can define a rigorous notion of cardinality. The cardinality of two sets C and D is equal if and only if there is some bijective function from C to D. This is abstract, so let us consider an example.

Consider two sets X = \{a,b,c\} and Z = \{1,2,3\}. Neither is a subset of the other, since they don’t share any members. However, we can define a bijection f between them as follows:

f(a) = 1, f(b) = 2, f(c) = 3

Notice that this satisfies our definition of a bijection. Each element of Z is hit by f, and no two elements of X are mapped to the same element in Z. Since this is a bijection, we say that X and Z have the same cardinality, and we write |X| = |Z|, where |X| means the cardinality of X.
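
To make the definitions concrete, here is a minimal sketch in Python (my own illustration, not anything from the paper) that checks whether a function between two finite sets, encoded as a dictionary, is injective, surjective, and hence a bijection:

```python
def is_injective(f, domain):
    # Injective: no two elements of the domain map to the same object.
    images = [f[x] for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    # Surjective: every element of the codomain is 'hit' by the function.
    return {f[x] for x in domain} == set(codomain)

def is_bijection(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

X = {"a", "b", "c"}
Z = {1, 2, 3}
f = {"a": 1, "b": 2, "c": 3}  # the function from the example above

print(is_bijection(f, X, Z))  # True
```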

Notice that this allows us to compare infinite sets as well. Consider the sets A and B from above. We see that we can define a bijection g between these sets as follows

g(2) = 3, g(4) = 5, g(6) = 7, \ldots

Thus, |A| = |B|. Furthermore, we see that, for example, |Z| < |\mathbb{N}|. Why? There cannot be a surjective function from Z to \mathbb{N}: each element of Z can only be mapped to one element of \mathbb{N}, so the image of Z contains at most 3 elements, but \mathbb{N} has more than 3.

However, we can also use this to show that the set of all odd numbers has the same cardinality as the set of naturals. We can see this because for the two sets

\mathbb{N} = \{0,1,2,3,\ldots\}, \mathcal{O} = \{1,3,5,7,\ldots\}

we can define the function

g(0) = 1, g(1) = 3, g(2) = 5, g(3) = 7, \ldots

and this also satisfies the properties of a bijection. (In general, g(n) = 2n + 1.)

Now we can finally answer one of our earlier questions: are there different sizes of infinities? To show why this has to be the case I want to introduce the notion of a powerset. The powerset P(X) of a set X is the set of all subsets of X. For example, if X = \{a,b,c\} as before, then

P(X) = \{\{\},\{a\},\{b\},\{c\},\{a,b\},\{a,c\},\{b,c\},\{a,b,c\}\}

We can see in this case that |P(X)| > |X|, since the power set has 8 elements whereas the original set has only 3.
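
Here is a small sketch (again my own, purely for illustration) that generates the powerset of a finite set and confirms that it is strictly larger—in fact of size 2^{|X|}:

```python
# Sketch: generate the powerset of a finite set and compare sizes.
from itertools import combinations

def powerset(s):
    # All subsets of s, from the empty set up to s itself.
    elems = list(s)
    return [set(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

X = {"a", "b", "c"}
PX = powerset(X)
print(len(PX))           # 8, i.e. 2**3
print(len(PX) > len(X))  # True
```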

Cantor, one of the founders of set theory, proved that for any set S, |S| < |P(S)|. Consider the following line of reasoning. Suppose there were a bijection from S to P(S); call it f. Remember, P(S) contains all subsets of S. One set we can define using f is the set of those elements of S that are not members of the set they get mapped to—call this set Y. For example, in the above example with X, if f(a) = \{b,c\}, then a is a member of Y, since it is not a member of \{b,c\}. However, if f(a) = \{a,b\}, then a would not be a member of Y, since it is a member of the set it got mapped to.

With this setup, consider the question: is the set Y hit by f? Remember, this was necessary for f to be a bijection, since Y is a member of the powerset of S. I claim that Y is not hit by f. For suppose that there were some z in S such that f(z) = Y. Then z is in Y if and only if z is not in Y. But since it can’t be both in it and not in it, this is a contradiction. Thus we see that |S| \neq |P(S)|. However, since f(x) = \{x\} is a perfectly fine injective function from S to P(S), we have that |S| < |P(S)|.
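
For a small finite set we can even verify this by brute force. The sketch below (my own illustration) enumerates every function from S to P(S) and checks that the ‘diagonal’ set Y is never hit:

```python
# Sketch: for S = {a, b}, enumerate every function f from S to its powerset
# and check that Y = {x in S : x not in f(x)} is never in the image of f.
from itertools import combinations, product

def powerset(s):
    elems = list(s)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

S = ["a", "b"]
PS = powerset(S)

# A function S -> P(S) is a choice of one image for each element of S.
for images in product(PS, repeat=len(S)):
    f = dict(zip(S, images))
    Y = frozenset(x for x in S if x not in f[x])
    assert Y not in f.values()  # Y is never hit, so f is not surjective

print("No function maps S onto P(S), just as Cantor's argument predicts.")
```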

This is a fundamental result in set theory. It shows that there is a whole hierarchy of infinities—a whole hierarchy of cardinalities. Thus, we have a whole host of different infinite cardinalities to choose from. The \kappa in the title of the paper refers to some infinite cardinal number. This completes my summary of the first concept.

II

The next concept is additivity. In particular, we are talking about the additivity of a probability function. Additivity tells us something about how we sum probabilities of different events.

In probability theory we work with something called an event space, which is built from a set of possible outcomes. For example, if we are rolling a die, the set of outcomes might be D = \{1,2,3,4,5,6\}, since there are 6 possible results of the die roll.

An event is a subset of this space. For example, rolling a 5 is an event—this is the set \{5\}. However, rolling an even number is also an event. This would correspond to the set \{2,4,6\}.

Knowing the probability of each of the basic events—the sides of the die—which in this case is one sixth, we might want to ask if we can calculate the probability of a different event, like rolling an even number.

Our intuition says that we sum up the probabilities of the individual basic events. For the even event, this means summing 1/6+1/6+1/6=1/2. This makes sense, but what principle allows us to sum up the probability of other events like this?

What we are appealing to here is a hidden additivity principle. We have a probability function that takes as input an event, like rolling a 3—\{3\}— or rolling one of the 3 greatest numbers— \{4,5,6\}. The probability function \mathcal{P} assigns a real number between 0 and 1 to each event. The function must satisfy some properties, including additivity properties.

For example, consider the principle that if an event is the union of two disjoint events, then the probability of the event is equal to the sum of the probabilities of the two events. (The union of two sets is the set that contains all the elements of both—for example, the union of \{a\} and \{b\} is \{a,b\}—and two events are disjoint if they share no members.)

That was a little abstract, so let us consider an example. Suppose we are interested in the probability that we get either a 1 or a 6. This is the event \{1,6\}. It is the union of two disjoint events: \{1\} and \{6\}. We know the probability of each of these two events: 1/6. We want to say, then, that the probability of getting a 1 or a 6 is 1/3. In order to do this we are appealing to the principle in the preceding paragraph—a principle we might call 2-additivity, since it applies to unions of 2 events.

However, we see that from 2-additivity we get finite additivity for free. A probability function is finitely additive if the probability of any event that is the disjoint union of finitely many events—say n of them—is the sum of the probabilities of those n events.

In order to see how we get this for free from 2-additivity, consider the earlier case of the even event, \{2,4,6\}. We can write this as the union of \{2\} and \{4,6\}. We don’t yet have the probability for this latter event, but we can split it again into the union of \{4\} and \{6\}. We can then use 2-additivity to calculate the probability of the event \{4,6\}, which is 1/6 + 1/6 = 1/3, and then we use 2-additivity again to get the probability of the even event, 1/6 + 1/3 = 1/2. Thus, 2-additivity is sufficient for finite additivity.
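
Here is a small sketch (my own) of this bootstrapping: the probability of any finite event is computed using nothing but the basic probabilities and repeated applications of 2-additivity:

```python
# Sketch: compute P(event) for a fair die by peeling off one outcome at a
# time and applying 2-additivity to the two disjoint pieces.
from fractions import Fraction

basic = {i: Fraction(1, 6) for i in range(1, 7)}  # P({i}) = 1/6

def prob(event):
    outcomes = sorted(event)
    if not outcomes:
        return Fraction(0)
    if len(outcomes) == 1:
        return basic[outcomes[0]]
    head, rest = outcomes[0], set(outcomes[1:])
    # 2-additivity: {head} and rest are disjoint, so probabilities add.
    return basic[head] + prob(rest)

print(prob({2, 4, 6}))  # 1/2
```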

Our example of the die roll was one of a finite event space. There were only 6 possible outcomes. However, in science we regularly consider infinite event spaces. Thus, we might want to consider probability functions that have stronger types of additivity. One type is countable additivity, which allows you to sum up the probabilities of collections of events with cardinality |\mathbb{N}|—we call this type countable because a set of this size admits a bijection to the natural numbers, the counting numbers.
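
As a quick illustration (mine, not the paper’s), consider assigning probability 1/2^{n+1} to each natural number n. Countable additivity says the probability that some natural number occurs is the infinite sum of these values, which converges to 1:

```python
# Sketch: the countable sum of P({n}) = 1/2**(n+1) over all naturals is 1.
from fractions import Fraction

partial = Fraction(0)
for n in range(50):
    partial += Fraction(1, 2 ** (n + 1))

print(float(partial))  # just under 1, approaching 1 in the limit
```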

Countable additivity is the type most commonly used in probability theory, but we can still in principle think about higher degrees of additivity—\kappa-additivity, where \kappa is some arbitrary cardinal.

III

The last concept I want to introduce before stating the result of the paper is that of reflection/conglomerability. They are actually two different concepts, but I think starting with reflection provides a nice way to understand why conglomerability is interesting from a philosophical perspective.

The principle of reflection is a constraint on a rational agent’s probabilities. Consider an agent who is about to undergo a learning experience, in which she will observe one element of a partition. A partition is a set of events that are disjoint from each other and that cover the whole space. For example, in our die case from before, one partition might be the two sets \{1,3,5\} and \{2,4,6\}. The agent is trying to learn what the result of the die roll was, but we only tell her whether it was even or odd.

Reflection puts constraints on an agent’s current beliefs based on her possible future beliefs. For example, suppose that we want to know what an agent’s degree of belief that the die came up 1 should be. (Intuitively we think 1/6, but let us follow through this example to illustrate reflection, which can help us in more challenging cases). Suppose that her degree of belief in the die coming up 1 conditional on the even event is 0—that is, if she observes the even event then her new degree of belief in 1 will be 0 (since 1 is not even)—and her degree of belief in the die coming up 1 conditional on the odd event is 1/3. Furthermore, suppose that she assigns a credence of 0.5 to the roll being odd and a credence of 0.5 to the roll being even.

Reflection says that an agent’s current degree of belief in a proposition should be equal to the agent’s expectation of her future degree of belief. I illustrate this with the above case. If the agent learns that the roll was odd, then her probability in the proposition “the roll came up 1” is 1/3. If the agent learns that the roll was even, then her probability in the proposition “the roll came up 1” is 0. She assigns each of these cases probability 0.5. Thus, before she learns whether it was odd or even, her expectation of her future degree of belief is 0.5 \cdot 1/3 + 0.5 \cdot 0 = 1/6. Reflection says that her current degree of belief should equal one sixth. This is a kind of conservation of expected evidence principle.
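
The calculation is simple enough to write down as a sketch (my own illustration): the expectation of the future credence is the partition-weighted average of the conditional credences:

```python
# Sketch: expectation of the agent's future degree of belief in "roll = 1".
partition = {
    "odd":  {"prob": 0.5, "credence_in_1": 1 / 3},  # P(1 | odd)  = 1/3
    "even": {"prob": 0.5, "credence_in_1": 0.0},    # P(1 | even) = 0
}

expected = sum(cell["prob"] * cell["credence_in_1"]
               for cell in partition.values())

print(expected)  # 0.1666... = 1/6, matching the current credence
```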

This connection to rationality can help us to see why conglomerability is interesting. Consider a probability function, an event space, and an event E. The conditional probabilities are conglomerable with respect to a partition indexed by a set I if, for every two real constants k_{1} and k_{2}, whenever the conditional probability of E given each element of the partition lies between the two constants, the unconditional probability P(E) also lies between them.
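
For a finite partition this condition always holds, since P(E) is then a weighted average of the conditional probabilities; failures can only arise in infinite partitions. Here is a sketch (my own) of the condition itself:

```python
# Sketch: check the conglomerability condition for one pair of bounds.
def conglomerable_for(prior_E, conditionals, k1, k2):
    # conditionals: the values P(E | h_i) across the partition.
    if all(k1 <= p <= k2 for p in conditionals):
        return k1 <= prior_E <= k2   # the bounds must transfer to P(E)
    return True  # the hypothesis fails, so the condition holds vacuously

# Die example: E = {1}, partition {odd, even}, bounds 0 and 1/3.
print(conglomerable_for(1 / 6, [1 / 3, 0.0], 0.0, 1 / 3))  # True
```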

The authors give a nice summary of how this is interesting from an epistemic perspective:

Conglomerability is an intuitively plausible property that probabilities might be required to have. Suppose that one thinks of the conditional probability P(E|h_{i}) as representing one’s degree of belief in E if one learns that h_{i} is true. Then P(E|h_{i}) \leq k_{2} for all i in I means that one believes that, no matter which h_{i} one observes, one will have degree of belief in E at most k_{2}. That is, if one knows for sure that one is going to believe that the probability of E is at most k_{2} after observing which h_{i} is true, then one should be entitled to believe that the probability of E is at most k_{2} now.

(p. 285)

Thus we see how we can capture an epistemic condition in a formal definition. Ideally we want our probability functions to be conglomerable.

IV

We can now state the main result of the paper, more or less. The main result is that

Subject to several structural assumptions…the cardinality \lambda of a partition where [the probability function] P is nonconglomerable is bounded above by the (least) cardinal for which P is not \kappa-additive…

(p. 289)

In other words, where \kappa is the least cardinal at which our probability function fails to be \kappa-additive, there will be a partition of size at most \kappa for which conglomerability fails.

So what is the story here? We have an epistemic principle, something like a conservation of evidence principle, that we want our probability functions to satisfy. However, we have a mathematical result that says that this is impossible for certain large cardinalities in certain contexts. Thus, we have a case in which we clearly see highly technical work filter back up into our epistemology. Even though in some sense it is desirable that our beliefs should satisfy this property, it may in fact be impossible. We should not require rational agents to do the impossible.

Mathematical results can limit and inform our philosophy. We need to pay attention to them.
