Hugh Everett III proposed his many worlds (or as he preferred to call it, relative state) formulation of quantum mechanics back in the mid 50s in order to solve the quantum measurement problem. Although it took a while for the theory to gain traction, it (or some form of it) is one of the more popular theories of quantum mechanics today.

In order to understand why Everett developed his account of quantum mechanics as he did, it is important to understand what the motivation was for his theory. One of his main motivations was to solve the measurement problem. In a minipaper Everett wrote in 1955, he works out what the measurement problem is in terms of a conflict between objective and subjective probability. Although there are perhaps cleaner accounts of the measurement problem, such as the one Everett himself eventually provided in a version of his thesis, I think that this framing of the problem is interesting, both for historical and conceptual reasons.

***The original paper can be found here.***

Everett begins by making a distinction between objective and subjective probabilities. We have met subjective probability on this blog before. The idea behind subjective probability is that it is the degree of belief of a rational agent. Everett uses this example:

A deck of cards is shuffled, and one card is selected and placed face down upon the top of a table. An observer, *A*, is asked whether or not that card is the ace of spades, whereupon he would probably reply that the “probability” that it is the ace of spades is 1/52. This probability would be a subjective probability, because it clearly refers to the state of information of the observer, and not to the system, namely the card, which is in actuality either the ace, or not the ace, and not a probability mixture.

pp. 57-58

The probability relates to the information an observer has. For example, if someone else is told that the colour of the card is black, then that observer’s probability judgement will be different from *A*‘s.
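Everett's point can be put in one line of arithmetic: the two observers' probabilities differ only because their information differs. A toy sketch of this (the numbers follow the card example above):

```python
# Subjective probabilities track an agent's information.
# A knows only that one card of 52 was drawn:
p_A = 1 / 52
# A second observer is also told the card is black; the ace of
# spades is one of the 26 black cards, so their probability is:
p_B = 1 / 26
print(p_A, p_B)  # different numbers, same card
```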

It is this observation that subjective probability can differ between agents that Everett thinks can help distinguish objective from subjective probability:

Since an *objective* probability is conceived to be a property of a system, and hence independent of states of information, it must be invariant from one observer to the next. That is, if two observers ascribe different probabilities to some aspect of the same system, then at least one of these probabilities is subjective!

p. 58

Thus, if we want to know whether a probability judgement is objective, we can check whether it can differ between observers depending on their information.

This is how Everett approaches his description of the measurement problem. He wants to show that the two dynamical laws of the standard theory of quantum mechanics are, taken together, inconsistent: they give rules about objective probabilities, yet he constructs a case in which some of the probabilities given by the theory must be subjective.

Following von Neumann’s formulation of quantum mechanics, Everett writes two postulates of the standard theory as follows:

1. Every physical system *S* possesses a state function *ψ*, which gives the *objective* probabilities of the results of any measurement which might be performed upon the system.
2. The state function of an isolated system changes causally with time as long as the system remains isolated.

p. 58

Basically, if a measurement is made then the theory says that there is an objective probability distribution over outcomes. This is a kind of chance event. Everyone should agree on this probability, since it is objective. However, the second law is fully deterministic. If the system doesn’t interact with anything, and is left alone, it will evolve completely deterministically.
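The two postulates can be made concrete for a single qubit. Below is a minimal NumPy sketch (the particular state and Hamiltonian are my own illustrative choices, not Everett's): the Born rule assigns probabilities to measurement outcomes, while the Schrödinger dynamics deterministically maps the state at one time to the state at a later time.

```python
import numpy as np

# Postulate 1 (Born rule): the state function assigns objective
# probabilities to measurement outcomes.
psi = np.array([1, 1j]) / np.sqrt(2)   # a qubit state (|0> + i|1>)/sqrt(2)
probs = np.abs(psi) ** 2               # probabilities for outcomes 0 and 1
print(probs)                           # [0.5 0.5]

# Postulate 2 (unitary dynamics): an isolated system evolves
# deterministically, psi(t) = U(t) psi(0), with U unitary.
H = np.array([[0, 1], [1, 0]])         # a toy Hamiltonian (Pauli X)
t = 0.3
# U = exp(-i H t), computed by diagonalising H
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
psi_t = U @ psi                        # fully determined by psi and U
print(np.abs(psi_t) ** 2)
```

Nothing random happens in the second half: given `psi` and `H`, the later state is fixed. The tension Everett exploits is that the first half is supposed to describe an objectively chancy event.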

Everett now constructs a case in which these postulates are inconsistent:

We suppose that we have a system, *S*, and a measuring device, *M*, for measuring some property of *S*, which is connected to a recording device which will record the results of the measurement at a classical level, such as the position of a relay arm, and we assume that the measuring device is arranged to make the measurement automatically at some time. We further assume that the entire system, consisting of *S*, *M*, and the recording device, is isolated from any external interactions.

pp. 58-59

So there is a system, maybe a particle, and a measuring device, maybe something that would measure the position of the particle. Then there is some kind of recording device that records what the measuring device outputs.

Now, since a measurement takes place, we can use the first postulate to calculate the *objective* probabilities of the various possible results of the measurement. For example, if the device is measuring the position of a particle, then the first postulate gives an objective probability distribution over where the particle might be.

However, consider the whole physical system, which consists of the particle, the measuring device, and the recording device. This system is in a particular state, and it is isolated from any external interactions, so we can use the second postulate to calculate how the system will evolve in a deterministic fashion. This will give a particular state of the recording device, including its record of where the particle is. Thus, according to the second postulate, the “configuration of the recording device has already been determined” (p. 59).

[Thus] clearly the outcome of the measurement was determined before it took place, since the later [state of the whole system] was strictly determined by the earlier, in which case the probability given by [the first postulate] was not objective. So that we see that in any case at least one of the probabilities was subjective, and the postulates are untenable.

p. 59

We see that the two postulates are inconsistent, because they give different objective probabilities about the state of the recording device. This is one way of understanding the measurement problem, albeit one somewhat more complicated than necessary. As I mentioned earlier, Everett sharpens up his description of the problem in his later work.
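The structure of Everett's case can also be sketched numerically. The toy model below is my own illustration, not from the paper: the "system" is a qubit, the "recorder" is a second qubit coupled by a CNOT-style interaction. Postulate 2 yields one definite final vector for the joint system, while postulate 1, applied to the system alone, assigns non-trivial probabilities to what the record will say.

```python
import numpy as np

# A qubit "system" plus a one-bit "recorder", coupled by a
# unitary (CNOT-like) interaction. All names and the specific
# interaction are illustrative assumptions.
psi_sys = np.array([np.sqrt(0.7), np.sqrt(0.3)])  # system state
ready = np.array([1.0, 0.0])                      # recorder "ready" state
joint = np.kron(psi_sys, ready)                   # joint state, basis |s r>

# Measurement interaction: copy the system's basis value into the recorder.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
after = CNOT @ joint   # deterministic: a fixed output for a fixed input

# Postulate 2 yields one definite final vector...
print(np.round(after, 3))      # sqrt(0.7)|00> + sqrt(0.3)|11>
# ...while postulate 1, applied to the system alone, says the record
# should read 0 with probability 0.7 and 1 with probability 0.3.
```

In this sketch nothing in the isolated dynamics singles out one record over the other, which is exactly the friction between the two postulates that Everett is pointing at.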

To conclude, Everett sketches three non-exhaustive possibilities for resolving the inconsistency. I quote them in full, since I find the language interesting:

1. Not every physical system possesses a state function, i.e., that even in principle quantum mechanics cannot describe the process of measurement itself. This is somewhat repugnant since it leads to an artificial dichotomy of the universe into ordinary phenomena, and measurements.

p. 59

I love the last line of this quote. The language makes clear how unsatisfactory it would be for Everett if quantum mechanics, one of our fundamental physical theories, were unable to describe certain physical systems. He is responding here to something like the Copenhagen interpretation of quantum mechanics, in which somehow quantum mechanics can’t or doesn’t describe large, or classical objects. Everett doesn’t want an “artificial” separation of the universe into different categories.

2. The wave function of an isolated system is *not* always causally determined, but may suffer abrupt discontinuities from mixed states into probability mixtures of pure states, corresponding to an internal phenomenon which is regarded as measurement. This is quite tenable, but leaves entirely unknown what is to be regarded as such a measurement, so that for this interpretation no formalism has been developed to give the points of discontinuity.

p. 59

Everett’s idea here is that we might *complete* the theory in some way by adding necessary and sufficient conditions for a collapse to occur. That is, instead of leaving the term measurement as an undefined primitive of the theory, we could explain clearly and precisely what constitutes a measurement.

3. The probabilities occurring in quantum mechanics are *not* objective. That is, they correspond to our ignorance of some hidden parameters. Under such an interpretation the inconsistency resolves easily, since the outside wave function possesses more information than the internal one, such as phase factors, etc., for the interaction, so that it leads to a causal description.

p. 60

With this kind of strategy the probabilities are purely epistemic. There really is some kind of deterministic causal process, and we simply lack sufficient information about it, so it looks stochastic to us. Bell provides a nice discussion of two theories we might think of as being of this type, if we are fairly inclusive about the notion of subjective probability. (Everett’s ultimate proposal, his relative state formulation, lacks probability entirely; maybe at some point I will write a post explaining Everett’s thoughts on probability in his theory.)