Paper Review: Evolution and Disagreements Between Dynamics

Plot of trajectories of 30 evolving populations over 1000 birth-death events.

Evolutionary game theory promises to provide a unifying mathematical framework for a breadth of sciences from biology and economics to anthropology and social epistemology. I endorse this project.

In evolutionary game theory the mathematics of dynamical systems are used to represent and reason about various processes of change, such as change in gene frequencies within populations, the accretion of mutations and the dispersion of adaptive forms, the evolution of communication, the development of moral cognition, the spread of social practices, the emergence of social structure, and so on.

In dynamical systems, there are roughly two sorts of models of change: stochastic and deterministic ones. Stochastic models represent the process of change as a probabilistic one. A system (say, a population of E. coli bacteria) transitions from one state to another (say, from there being 100 individuals with gene A to 99 or 101 such individuals) with some probability determined by the relevant factors at play (say, of birth, death, mutation, migration, and so on).
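Such a probabilistic transition can be sketched in a few lines of code. This is a toy illustration only, not the paper's model; the fitness values and population size are made up. It shows one birth-death event: an individual reproduces with probability proportional to fitness, while a uniformly chosen individual dies.

```python
import random

def birth_death_step(n_a, n_total, fitness_a=1.0, fitness_b=1.0):
    """One illustrative birth-death event in a two-type population.

    An individual is chosen to reproduce in proportion to fitness,
    and an individual chosen uniformly at random dies.
    """
    n_b = n_total - n_a
    # Probability the reproducing individual is an A-type
    p_birth_a = (n_a * fitness_a) / (n_a * fitness_a + n_b * fitness_b)
    births_a = random.random() < p_birth_a
    dies_a = random.random() < n_a / n_total  # death is uniform at random
    return n_a + int(births_a) - int(dies_a)

# Track the count of A-types over 1000 events, starting at 100 of 200
state = 100
for _ in range(1000):
    state = birth_death_step(state, 200)
```

Note that the monomorphic states are absorbing: once one type is gone, it cannot reappear without mutation.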

Deterministic models represent the same process of change as nonrandom: from a given state, the system will always evolve identically. One might think that, given the messy sorts of processes involved—the dynamics of biological and cultural processes—a deterministic system might constitute an inappropriate representational tool. It would perhaps miss all of the unavoidable fluctuations and perturbations from causal factors not accounted for by the model. Happily, this need not be so. Deterministic dynamics can reliably describe the average behavior of a stochastic process when the population at play is sufficiently large that the noise of random shocks averages out.

This insight—that unpredictable behavior at a small scale can sometimes produce predictable behavior at a larger scale—was first formally developed in statistical mechanics in the late 19th and early 20th centuries by physicists including James Clerk Maxwell, Ludwig Boltzmann (whose article on models I highly recommend reading), and Josiah Willard Gibbs.

We are now at a place where I can share the thesis of my recently published paper in Philosophy of Science: a modest contribution to our understanding of the relationship between the main deterministic and stochastic models of evolutionary game theory—the replicator dynamics and the Moran process.

The two models are intimately connected. The replicator dynamic describes the average behavior of the Moran process for large populations. But there is a striking puzzle: there are conspicuous conditions under which their predictions diverge. In thinking about this puzzle, I realized that the reasons typically given for their divergence are not right. So, I demonstrate mathematically that the divergence between their predictions is caused by standard techniques used in their analysis and by differences in the idealizations involved in each technique.

My work revealed problems for one of the methods of analysis—stochastic stability analysis—under a broad range of conditions. In these conditions, the method commonly used to analyze the stochastic Moran process will give us the wrong answer, while the method used to analyze the deterministic replicator dynamics will give us the right answer. This ends up mattering a great deal when we are modeling processes that involve either a very large population (as in bacteria) or very high rates of mutation or noise in the transmission of behavior (as in the imitation of others' behaviors by hunter-gatherers). I also demonstrate that there is a significant domain of agreement between the two dynamics that had gone unnoticed before because of misunderstandings of the techniques for the analysis of the models.

I provide a brief sketch of the argument below. If you enjoy the challenge of mathematical reasoning, you can check it out. If not, feel free to bounce out. And if you want to learn more, you are enthusiastically welcome to read my article here.

But first, some preliminaries. An evolutionary game can be thought of as composed of two parts: a game, and a dynamics. The game describes the interaction structure of a population in conflict, cooperation, signaling, mating, and so on. The dynamics provides our hypothesis as to the nature of evolution. The dynamics will specify the character of transmission, selection, mutation, and drift, along with population size and structure.

Figure 1. 2 × 2 symmetric normal form game.

We can consider an example of a generic symmetric game, given by Figure 1 above, as our description of the interaction structure of a population. This game is simply a toy example, meant to help in explaining the mathematical structure of evolutionary games. There is no particular story behind it. But, if we like, we can think of a population of bacteria, say, of E. coli, with distinct phenotypes, A and B. This is a 2 × 2 symmetric normal form game, meaning interactions of the population are always modeled as between two individuals, row player and column player, who may each be one of two types.

Note that the game is symmetric. That is, both players have the same strategies available to them, so we need only represent a single player's perspective—in this case, row player's—to know that the other player, column player, is facing the same situation. For those more familiar with payoff representations as ordered pairs, we note that column player's payoffs are dropped to remove clutter, and are inferred by symmetry from row player's.

Considering the payoff structure, we see that if row player is an A-type, in interaction with another A-type she will receive a payoff of a, whereas against a B-type she will receive a payoff of b. If row player is a B-type, interaction with A- and B-types yields payoffs of c and d, respectively. Payoffs here are to be understood as contributions to reproductive or imitative success. In this way, the game matrix specifies the results of interactions to the different types.

The second component of an evolutionary game, the dynamics, translates the resulting payoffs of these interactions between types into changes in the proportions or counts of the types in the population. For example, if we take our game as before, and introduce a large population x=\langle x_{A},x_{B} \rangle, where the proportion of A-types is denoted by x_{A} and the proportion of B-types by x_{B}, then the expected payoffs to A- and B-types will be given by \pi(A,x)=ax_{A}+bx_{B} and \pi(B,x)=cx_{A}+dx_{B}, respectively. These may be read as the expected reproductive success of the types, and determine their representation in the population in the next generation. With the introduction of a dynamics, the relative proportions of the types in the population affect the payoffs to each strategy, and so fitness becomes dependent on the relative frequencies of types.
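The expected payoffs above are just weighted averages over the composition of the population. A minimal sketch, with illustrative payoff values not drawn from the paper:

```python
def expected_payoffs(a, b, c, d, x_a):
    """Expected payoffs to A- and B-types in a population where the
    proportion of A-types is x_a (so x_b = 1 - x_a)."""
    x_b = 1.0 - x_a
    pi_a = a * x_a + b * x_b  # pi(A, x) = a*x_A + b*x_B
    pi_b = c * x_a + d * x_b  # pi(B, x) = c*x_A + d*x_B
    return pi_a, pi_b

# Illustrative payoff values, evaluated at a half-and-half population:
pi_a, pi_b = expected_payoffs(a=1, b=3, c=2, d=1, x_a=0.5)
```

Varying `x_a` shows the frequency dependence directly: the same payoff matrix yields different expected payoffs as the population composition shifts.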

Consider the replicator dynamics. The leading idea behind the replicator dynamics is that types that are more fit than the population average fitness grow in relative proportion, and types that are less fit than average shrink in proportion. This can be described by a system of differential equations

\dot{x}_i=x_i[\pi(i, x)-\pi(x,x)] \quad \text{for } i \in S

where S is the set of possible types, \dot{x}_i denotes the rate of change of the population proportion of type i, x_i denotes the population proportion of type i, \pi(i, x) denotes the expected fitness for type i from interacting with the population, and \pi(x, x) denotes the population average fitness.
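For the two-type case, this system reduces to a single equation in x_A, which can be integrated numerically. The following sketch uses simple Euler steps with an illustrative anti-coordination payoff assignment (each type does better against the other); none of these numbers come from the paper.

```python
def replicator_step(x_a, a, b, c, d, dt=0.01):
    """One Euler step of the two-type replicator dynamics:
    x_a changes at rate x_a * (pi(A, x) - pi(x, x))."""
    x_b = 1.0 - x_a
    pi_a = a * x_a + b * x_b           # expected payoff to A-types
    pi_b = c * x_a + d * x_b           # expected payoff to B-types
    pi_bar = x_a * pi_a + x_b * pi_b   # population average fitness
    return x_a + dt * x_a * (pi_a - pi_bar)

# Illustrative anti-coordination payoffs: each type earns 1 against the
# other type and 0 against its own.
x = 0.1
for _ in range(10_000):
    x = replicator_step(x, a=0, b=1, c=1, d=0)
# x converges toward the interior rest point 1/2
```

Starting from almost any interior state, the trajectory heads to the mixed state at one half, which previews the divergence from the Moran process discussed next.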

The prediction of the replicator dynamics then is simply a state to which the population converges and at which the population remains. (Though, I should note that there are cases—such as in cyclical and chaotic dynamics—in which no such convergence occurs. For an in-depth analysis of the challenge such cases of non-convergence provide, you can check out one of my earlier articles here.)

Good. Now, to understand our motivating puzzle, we can consider the game given in Figure 2 (this is called an anti-coordination game, where each type would prefer to meet the other type rather than its own—can you read the matrix to see why?) and examine the predictions as to the evolutionary outcomes of its corresponding population game under each dynamics.

Figure 2. An anti-coordination game

For both dynamics, let us assume large populations, random pair-wise interactions, true-breeding, the absence of mutation, and infinite-horizon play. Under the replicator dynamics, the prediction is that, from almost all initial conditions, evolution will deliver the population to the polymorphic state x_{A}=\frac{1}{2}, where A-types and B-types coexist in equal proportions. In contrast, for the same game, the Moran process predicts that evolution will deliver the population, with equal probability, to one of the two monomorphic states x_{A} = 0 or x_{A} = 1, where the population is composed entirely of B-types or A-types, respectively. The evolutionary outcome that is a moral certainty in one model is an impossibility in the other.
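To see the Moran prediction concretely, here is a toy simulation of a frequency-dependent Moran process without mutation. The population size, selection intensity, and payoffs are illustrative, not the paper's parameters. Despite selection favoring the mixed state, a small finite population is eventually absorbed at one of the monomorphic states:

```python
import random

def moran_run(n_start, n_total, a, b, c, d, w=0.5, max_events=200_000):
    """Frequency-dependent Moran process without mutation: run until a
    monomorphic state is reached (or give up after max_events)."""
    i = n_start
    for _ in range(max_events):
        if i == 0 or i == n_total:
            return i  # absorbed: all-B (0) or all-A (n_total)
        j = n_total - i
        # Expected payoffs from random pairwise interaction (excluding self)
        pi_a = (a * (i - 1) + b * j) / (n_total - 1)
        pi_b = (c * i + d * (j - 1)) / (n_total - 1)
        # Fitness interpolates between neutrality (w=0) and payoffs (w=1)
        f_a = 1 - w + w * pi_a
        f_b = 1 - w + w * pi_b
        # Birth proportional to fitness; death uniform at random
        p_birth_a = i * f_a / (i * f_a + j * f_b)
        i += int(random.random() < p_birth_a) - int(random.random() < i / n_total)
    return i

random.seed(1)
# Anti-coordination payoffs: selection pushes toward the mixed state,
# yet drift eventually fixes one type in this small population.
final = moran_run(n_start=5, n_total=10, a=0, b=1, c=1, d=0)
```

Repeating the run with different seeds shows roughly half the runs ending all-A and half all-B, matching the equal-probability prediction stated above.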

Such a divergence in the predictions of the two dynamics leads naturally to the following questions: How are we deriving the predictions of each dynamics? And what is the cause of their divergence?

The standard explanation for divergence in such cases is that the dynamics differ in the time horizons of their predictions: the replicator dynamics approximates the short-to-medium-run behavior of evolution, while the Moran process can capture its long-run behavior (Taylor et al. 2004; Nowak 2006). The prediction of the replicator dynamics is polymorphism, and this is correct for the short-to-medium run. The prediction of the Moran process is monomorphism, and this is correct for the long run. Young (1998, 47) states this clearly: “While [the replicator dynamics] may be a reasonable approximation of the short run (or even medium run) behavior of the process, however, it may be a very poor indicator of the long-run behavior of the process.”

The dynamics differ with respect to the time horizons of their predictions. This is true, but, in the case of interest, this is not the cause of the divergence in their predictions, and it is not the answer to our puzzle.

The full answer can be found here. But I can share that the answer to this puzzle involves a lovely little inequality I call the strong mutation condition

\eta(N + 1) > 1,

where \eta denotes the mutation rate of the population and N the size of the population. Intuitively, strong mutation corresponds to the case where, on average, at least one new mutant enters the population per generation. When the condition is satisfied, the predictions of our two dynamics—the Moran process and the replicator dynamics—will once again coincide, and the technique of stochastic stability will mispredict the long-run behavior of the Moran process.
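The condition itself is trivial to check; the point is how easily a large population satisfies it even at tiny per-capita mutation rates. A small sketch with illustrative numbers (not from the paper):

```python
def strong_mutation(eta, n):
    """Check the strong mutation condition: eta * (N + 1) > 1, i.e. on
    average at least one new mutant enters the population per generation."""
    return eta * (n + 1) > 1

# A large bacterial population with a tiny per-capita mutation rate
# can still satisfy the condition, while a small population does not:
large = strong_mutation(eta=1e-6, n=10_000_000)  # 1e-6 * (1e7 + 1) > 1
small = strong_mutation(eta=1e-6, n=1_000)       # 1e-6 * 1001   < 1
```

This is why the divergence matters most for very large populations or very high mutation/noise rates, as noted earlier.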

If there is a lesson here, it is that—in science—we can keep our best tools for analysis, but we must also understand their shortcomings, and what assumptions they might be smuggling in, and what relationships they might obscure.


Boltzmann, L. (1902). Model. In Encyclopaedia Britannica, 10th ed. London: "The Times" Printing House, 788–791.

Mohseni, A. (2019). Stochastic Stability and Disagreements Between Dynamics. Philosophy of Science 86 (3): 497–521. [find here]

Mohseni, A. (Manuscript). The Limitations of Equilibrium Concepts in Evolutionary Games. [find here]

Young, H. P. (1998). Individual Strategy and Social Structure: An Evolutionary Theory of Institutions. Princeton University Press. [find here]
