The devil is in the details

Among several mathematical phenomena I study is the relationship between the classical deterministic and stochastic modeling choices for a chemical reaction network. The stochastic model is a “first-principles” model where we track the counts of every molecular species and update these counts instantaneously when a reaction occurs. (Which reaction occurs, and when, is determined by a probabilistic rule—hence “stochastic model”.) The deterministic model is a scaling of this process which averages out the noise and leads (usually) to a system of ordinary differential equations.
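
To make this concrete, here is a minimal sketch of the standard stochastic simulation (Gillespie) algorithm for a mass-action network, written in Python. The encoding below (one row of reactant and product counts per reaction) is my own illustration and is not taken from either paper.

    import numpy as np

    def propensity(k, reactant_row, x):
        """Mass-action propensity: the rate constant k times a falling
        factorial of the count of each reactant species."""
        a = float(k)
        for s, r in enumerate(reactant_row):
            for i in range(r):
                a *= x[s] - i
        return a

    def gillespie(x0, reactants, products, rates, t_max, seed=None):
        """Track exact molecule counts, updating them instantaneously
        each time a randomly chosen reaction fires."""
        rng = np.random.default_rng(seed)
        reactants = np.asarray(reactants, dtype=int)
        products = np.asarray(products, dtype=int)
        x = np.asarray(x0, dtype=int).copy()
        t = 0.0
        times, states = [t], [x.copy()]
        while t < t_max:
            props = np.array([propensity(k, row, x)
                              for k, row in zip(rates, reactants)])
            total = props.sum()
            if total == 0.0:        # no reaction can fire: a trapping state
                break
            t += rng.exponential(1.0 / total)            # random waiting time
            j = rng.choice(len(rates), p=props / total)  # which reaction fires
            x += products[j] - reactants[j]              # instantaneous update
            times.append(t)
            states.append(x.copy())
        return np.array(times), np.array(states)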

An interesting consequence of the deterministic averaging is that irreversible transitions present in the stochastic model may disappear. In particular, the “averaged push” may point away from a trapping state while the stochastic model, which wiggles around that average, assigns a positive probability of eventually being absorbed.

For example, consider the following interaction-based disease spread model:

    S + I → 2I        (reaction 1: infection)
    I → S             (reaction 2: clearing the infection)

where S is the susceptible population and I is the infected population. On the population level (i.e. the deterministic model), the model predicts a positive endemic state where the rate at which people become infected (“reaction 1”) balances the rate at which they clear the infection (“reaction 2”). On the individual-level scale (i.e. the stochastic model), however, it is possible for all of the infected individuals to clear the infection within a short window of time. When this happens, the infection can no longer re-enter the population, and so a different, non-endemic terminal state is predicted. The moral of the story is that it is very different to predict the spread of a disease within a city or country (the disease may sustain itself) than within a small town or single household (the disease will die out). That’s why quarantines work!
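
As a quick illustration, the following sketch applies the simulator above to this two-reaction model, with arbitrary placeholder rate constants beta and gamma. The deterministic model settles at the positive endemic value I* = N - gamma/beta, while any stochastic run that reaches I = 0 is stuck there for good.

    # SIS network from above: reaction 1 is S + I -> 2I, reaction 2 is I -> S.
    # Species order: (S, I); beta and gamma are made-up rate constants.
    reactants = np.array([[1, 1], [0, 1]])
    products  = np.array([[0, 2], [1, 0]])
    beta, gamma = 0.005, 1.0
    N = 300                                   # total population (conserved)

    # Deterministic model: dI/dt = beta*S*I - gamma*I with S + I = N,
    # giving the endemic equilibrium S* = gamma/beta, I* = N - gamma/beta.
    print("deterministic endemic value I* =", N - gamma / beta)

    # Stochastic model: count how many runs get absorbed at I = 0.
    runs, absorbed = 100, 0
    for seed in range(runs):
        _, states = gillespie([N - 5, 5], reactants, products,
                              [beta, gamma], t_max=100.0, seed=seed)
        absorbed += int(states[-1][1] == 0)
    print("runs absorbed at I = 0:", absorbed, "out of", runs)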

This “extinction phenomenon” was the subject of our recent paper [1], which focused on interaction-based biochemical reactions. It was recently pointed out by Robert Brijder, however, that one of our examples for future work did not operate quite as simply as we had imagined. The relevant mechanism was taken from [2] and is

I will spare the reader the full details of our analysis, except to note that the stochastic model permits a rather unusual trapping state (which is significantly easier to verify than to discover!). Consider the following sequence of reactions:

It is not difficult to see that this sequence of reactions is possible from any state with at least one molecule of each species in the network. If we wait long enough, this sequence will happen and will shut the pathways down. In fact, the only reactions which may occur indefinitely afterward are the final reversible pair, monotonously exchanging X’s for Y’s until the end of time.
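
Part of why verification is so much easier than discovery is that a claimed trapping state can be checked mechanically. Here is a generic sketch of such a checker, using the same network encoding as above; it assumes the designated terminal reactions conserve enough to keep the reachable set finite, as the reversible exchange of X and Y does here by conserving the total X + Y.

    from collections import deque

    def verify_trapping(state, reactants, products, terminal):
        """Breadth-first search over every state reachable from `state`.
        Returns True when only the reactions indexed by `terminal` can
        ever fire, i.e. the remaining pathways really are shut down."""
        start = tuple(int(v) for v in state)
        seen, queue = {start}, deque([start])
        while queue:
            x = queue.popleft()
            for j, row in enumerate(reactants):
                if any(x[s] < r for s, r in enumerate(row)):
                    continue                  # reaction j is not enabled here
                if j not in terminal:
                    return False              # a non-terminal reaction can fire
                y = tuple(x[s] + products[j][s] - row[s] for s in range(len(x)))
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return True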

As probably seems reasonable at this point, we concluded that all non-terminal reaction pathways become exhausted; that is, they cannot continue to occur forever. This, however, turns out to be incorrect! But we have already explicitly exhibited a sequence of reactions which *must* eventually occur and which leads to a trapping state. How, then, can we claim that the reaction pathways could continue to fire indefinitely?

The trick is to reconsider one of our original assumptions. We assumed that we initially have at least one molecule of each species, but there is no reason to assume this. The real question is whether dropping that assumption can make a difference. After all, the states which shut the pathways down have low molecularity, so forcing our initial counts to be artificially low would seem, if anything, to make the shutdown easier.

To show how this intuition can be misguided, consider taking X=0 and Y=0 so that the only possible reactions are

It is relatively easy to show that, for any positive amounts of the remaining species, the system is structurally persistent: try as long as you might to exhaust one of the species, and you will always find yourself with enough molecules to activate at least one reaction in the system. Every reaction will fire indefinitely. Our conclusion, that this particular set of pathways always exhausts itself, was wrong!

The devil, of course, is in the details. There seem to be two things which can happen: either the system gravitates toward a state with only X and Y, or toward a state with everything else and no X or Y. To make the system survive, we needed to start by first taking things away.

This mutual exclusion is not foreign to interaction systems. We can intuit that, in a population of foxes and rabbits, removing the foxes will improve the survival chances of the rabbits. But do the X’s and Y’s really act like predators for the other pathways? It is hard to see how this could be in this model. In fact, there are separate conservation laws on the relevant species which prevent such a direct analogy, and so it is not clear exactly what is going on. We thank Robert Brijder for pointing out this peculiar and interesting fact.
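
For the record, such conservation laws are just the vectors orthogonal to the net change of every reaction: for any such vector w, the quantity w · x is constant under both models. A small sketch, applied here to the disease model from earlier rather than to the network from [2]:

    import sympy as sp

    def conservation_laws(reactants, products):
        """Vectors w with w . (net change of reaction j) = 0 for all j;
        each one gives a quantity w . x conserved by both models."""
        net = sp.Matrix(products) - sp.Matrix(reactants)   # one row per reaction
        return net.nullspace()

    # On the disease model above this recovers the conserved total
    # population: w = (1, 1), i.e. S + I is constant.
    print(conservation_laws([[1, 1], [0, 1]], [[0, 2], [1, 0]]))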

References:

[1] D. Anderson, G. Enciso, and M.D. Johnston. Stochastic analysis of chemical reaction networks with absolute concentration robustness, J. R. Soc. Interface, 11(93): 20130943, 2014.

[2] J. Neigenfind, S. Grimbs, and Z. Nikoloski. On the relation between reactions and complexes of (bio)chemical reaction networks, J. Theor. Biol., 317:359–365, 2013.
