I’ve been talking in these posts about my pilot simulations–ten populations for 50,000 generations–and have been trying to scout out possible hypotheses by looking at individual populations. Today I want to zoom out for a minute to look at a slightly larger scale experiment and the challenges and opportunities that come with that greater scale.
How big is ‘big data’? Well, I’m mostly kidding by even using the term, but for me, the crucial measure of scale is how much abstraction there is between you–the person trying to interpret the results–and the data itself. Looking at the mean of the fitnesses of individuals in a population is one layer of abstraction, but that starts to fail when you have, say, 100 replicate populations, as we do here:
A bit of a mess, my attempts to make a pretty graph aside. But is this a visualization problem, or an analysis problem? I could fuss more with colors, scaling, transparency, a third dimension, smoothing, etc, but the real issue is that I don’t know what I’m trying to ask the data, and I won’t necessarily know the right questions to ask, and answer with statistics, until I get a good look at it.
This kind of circular relationship between knowing the data and knowing how to summarize it is why people are so nuts for good ways to plot data in the first place. Let me try to tackle some really simple questions with simple summaries, keeping one eye on the complexities that are getting trampled in the process.
First, does the pace of adaptation slow with time? Let’s start by checking out the grand mean of fitness over time, focusing on the latter 90% of adaptation, after the big, rapid changes in the very beginning:
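The summary itself is simple to compute. A sketch of how I might do it, assuming the output were stored as one mean-fitness-per-generation row per replicate (the array layout and the random stand-in data here are invented for illustration):

```python
import numpy as np

# Hypothetical layout: fitness[i, t] = mean fitness of replicate
# population i at generation t (100 populations x 50,000 generations).
rng = np.random.default_rng(0)
fitness = rng.random((100, 50_000))  # stand-in for real simulation output

# Drop the first 10% of generations (the burst of rapid early
# adaptation), then average across all replicates at each generation.
start = fitness.shape[1] // 10
grand_mean = fitness[:, start:].mean(axis=0)
```

Averaging over the replicate axis while keeping the generation axis is exactly the kind of one-line abstraction that can hide as much as it reveals, which is the point of the figures that follow.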
Casually, it does look like the pace is slowing, but not very quickly. This is cool and exciting; so exciting that it took me a while to see the problems lurking in these data. Let’s back up a step: why are the lines in the first figure so fuzzy? If we zoom in, we can see that the fuzz is real fluctuation over time:
Why should the mean fitness in a population of 10,000 fluctuate so much generation to generation? These populations don’t experience any kind of environmental variability, but they do have some randomness in how they develop into adults. As described here, these genotypes code for gene networks, where genes are transcribed into mRNA which is translated into proteins, which then act to turn up or down gene expression. Because these mRNA and protein molecules are present in small numbers, they are subject to random fluctuations; these fluctuations typically affect the phenotype of the adult. But can the random variations in phenotype honestly explain the variability of the mean of a sizable population?
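The gene networks in these simulations are more elaborate than this, but the source of the noise can be sketched with the simplest possible case: a single gene whose mRNA is made and destroyed one molecule at a time. This is a minimal Gillespie simulation with rate constants I made up for illustration, not the model used in the experiments:

```python
import random

def gillespie_mrna(k_tx=5.0, k_deg=1.0, t_max=50.0, seed=0):
    """Simulate one gene's mRNA count with constant transcription
    (rate k_tx) and first-order degradation (rate k_deg * n), using
    the Gillespie algorithm. Returns the count at roughly t_max."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while t < t_max:
        birth = k_tx           # propensity of making one mRNA
        death = k_deg * n      # propensity of losing one mRNA
        total = birth + death
        t += rng.expovariate(total)        # waiting time to next event
        if rng.random() < birth / total:   # pick which event fired
            n += 1
        else:
            n -= 1
    return n

# At steady state the count is Poisson with mean k_tx / k_deg = 5,
# so it fluctuates by roughly +/- sqrt(5): when molecule numbers are
# small, the *relative* noise is large.
counts = [gillespie_mrna(seed=s) for s in range(200)]
mean = sum(counts) / len(counts)
```

With mean counts in the single digits, run-to-run variation of 40–50% is unavoidable, and that noise propagates through the network into the adult phenotype.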
To find out, I plotted the distribution of phenotypes produced by a few example genotypes and compared this to the fitness function. This function has a Gaussian shape, and very simply assigns a fitness to every organism based on its phenotype. The y-axis here is compound: it shows the fitness associated with each phenotype on the x-axis (dashed black line), and also the relative density of the phenotype distributions for four genotypes.
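A Gaussian map from phenotype to fitness is easy to write down. A minimal sketch (the parameter names and values are mine, not the ones used in the simulations):

```python
import math

def gaussian_fitness(phenotype, optimum=0.0, sigma=1.0):
    """Gaussian fitness function: fitness is 1 at the optimum and
    falls off with squared distance, scaled by sigma (the width of
    the fitness function)."""
    return math.exp(-((phenotype - optimum) ** 2) / (2 * sigma ** 2))

# A narrow fitness function (small sigma) punishes developmental
# noise harshly; a broad one (large sigma) is far more forgiving.
narrow = gaussian_fitness(2.0, sigma=0.5)  # far outside the narrow peak
broad = gaussian_fitness(2.0, sigma=5.0)   # well inside the broad peak
```

The same phenotype scores essentially zero under the narrow curve and near-maximal fitness under the broad one, which is the contrast at stake in the next figure and in the follow-up experiment described below.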
The most important thing to take away from this is that the range of phenotypes, which results from random variation in development, is huge compared to the width of the fitness function. Except in the very best of the 100 populations (red line), most individuals have near-zero fitness. Development is so random compared to what the environment will bear that producing offspring is basically like buying lottery tickets.
Evolutionary biologists have a mathematical tool, called the effective population size, for understanding what happens in populations with a lot of random variation in reproductive success. A population like this, with a high luck component to reproduction, will adapt as if it had many fewer individuals–its effective size will be much smaller than 10,000. Many good mutations will be lost to bad luck, while some bad or neutral ones will increase to high frequencies by the same token. Under these circumstances, it’s incredible that some populations are adapting so well! I’m tempted to delve deeper into that question, but a nagging concern is holding me back–is this kind of chance variation biologically reasonable? I’m sure I could eventually spin a neat story of how these simulated populations deal with the world I’ve built for them, but would that story ever circle back to tell us about biology?
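One classical approximation makes the shrinkage concrete: for a diploid population of constant census size N, the variance effective size is roughly Ne = (4N − 2)/(Vk + 2), where Vk is the variance in offspring number (Wright's result, as given in population-genetics textbooks). The simulated populations here may not fit the diploid assumptions exactly, so this is only to illustrate the direction and rough magnitude of the effect:

```python
def effective_size(N, var_offspring):
    """Wright's variance effective size for a diploid population of
    constant census size N (mean offspring number 2):
        Ne = (4N - 2) / (Vk + 2)
    where Vk is the variance in offspring number."""
    return (4 * N - 2) / (var_offspring + 2)

# Poisson-like reproduction (Vk ~ 2) gives Ne ~ N ...
poisson_ne = effective_size(10_000, var_offspring=2.0)    # ~10,000
# ... but lottery-like reproduction with huge variance collapses it:
lottery_ne = effective_size(10_000, var_offspring=200.0)  # ~198
```

So a census of 10,000 lottery players can drift like a population of a few hundred, which is exactly why losing good mutations to bad luck becomes routine.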
My guess–and this is only a guess–is that it would not. And so I’ve started a repeat of this computational experiment, with the only difference being a much broader fitness function–a much more generous relation between phenotype and fitness, with the effect that the tremendous role of luck in development that we see here will be reduced. While I hate to turn my back on the questions raised by this experiment, I have to trust my sense of what’s a dead end. And right now I have no idea what adaptation will look like with weaker selection–faster or slower? More variability between lines or less? A few big substitutions or many small ones? I’m excited to find out.