Numerical convergence as a model for senescence

Aging and death are as inescapable as taxes and leaf blowers, but why? How? Despite millennia of obsession and billions of dollars in research, we’re still far from understanding the biochemical particulars of why we age and, ideally, how to stop it. Yet, if we don’t solve immortality, we’re all going to die.

I’m the opposite of an expert on anything biological but I saw an interesting numerical feature of aging data and will discuss it here.

Aging and age-related diseases are responsible for the vast majority of negative impacts on quality of life and healthcare expenditure. The graph below charts the Gompertz-Makeham law of mortality. Retaining the annual chance of death of a 20-year-old for life would result in an expected lifetime of around 1000 years. In addition, 20-year-olds rarely get cancer, heart disease, diabetes, or arthritis, and generally enjoy a good quality of life. Younger people have incredible regenerative capacity, but as we age our body somehow forgets how to get back to its original state and gradually accumulates damage that ultimately raises the risk of disease until the odds of death approach unity.

Gompertz-Makeham law of human mortality.

That’s not to say that progress has not been made. Taking data from US government research, we see that the overall chance of death at any age has gradually fallen as health tech, job safety, and nutrition have improved. The improvements have not yet maxed out, but at the same time there haven’t been any huge jumps. The gradient of the curve is still the same.

Mortality for white men in the US, 1910-2003.

Except for early childhood mortality and a jump between the ages of 15 and 20 related to excessive testosterone, to a good approximation yearly human mortality starts off at about 0.00005 (pretty good!) and then increases by about 9% per year. For the purposes of this blog, I’m interested in this 9% figure. For other species this number can vary a lot: mice rarely live longer than 3 years. Humans who follow the right diet and exercise regimen might live 10-15 years longer, which corresponds to bumping that 9% down to 8%.
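
To make the compounding concrete, here’s a toy Gompertz model in Python – a minimal sketch using the figures quoted above (the starting hazard, the 9% growth rate, and the 120-year cap are illustrative, and the constant Makeham accident term is ignored):

```python
import numpy as np

def median_lifespan(m0=5e-5, growth=0.09, age0=20):
    """Median age at death for a toy Gompertz hazard: annual mortality m0 at
    age0, compounding by `growth` per year. Illustrative numbers only; the
    constant Makeham (accident) term is ignored, which flatters the result."""
    ages = np.arange(age0, age0 + 120)
    hazard = np.clip(m0 * (1 + growth) ** (ages - age0), 0.0, 1.0)
    survival = np.cumprod(1 - hazard)              # chance of being alive at each age
    return ages[np.searchsorted(-survival, -0.5)]  # first age with survival <= 50%

print(median_lifespan(growth=0.09))  # ~100 in this toy model
print(median_lifespan(growth=0.08))  # knocking 9% down to 8% buys roughly a decade
```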

Indeed, at the far right reaches of the mortality curve we see the gradient of the line bend down slightly. This is a statistical artifact: as the population “boils off”, only the relatively slower-aging people are left over.

In aging research, some attention has focused on “Blue Zones”, geographic areas with higher proportions of long-lived people. Do people in Blue Zones age noticeably slower or have different biochemistry? The current consensus seems to be that yes, there really are some places with longer-lived people, but that diet and lifestyle, rather than genetics, seem to play a role.

For example, consider the mortality curves for Okinawa, an island in the south of Japan, plotted in the year 2000.

Mortality curves for Okinawa in 2000.

Compared to the rest of Japan, Okinawans (who are not necessarily genetically identical to main island Japanese people) age at roughly the same rate – the gradient of the curve is the same. Before the age of about 60 (this data comes from 2000), Okinawan mortality is actually slightly higher. And Okinawans over the age of 55 in 2000 represent a relictual population who survived the brutal Battle of Okinawa, during which they endured political oppression, starvation, propaganda, mass violence, and mass suicide prior to liberation.

We already know that caloric restriction can induce improved homeostasis, so we probably shouldn’t be surprised that some small fraction of the world’s populations – which prior to the end of WW2 were almost universally agrarian and pre-industrial (and not exactly great with birth records…) – lived longer than the rest. If we identified 10,000 such groups of people whose lives basically sucked before 1950 and examined their mortality curves, some small fraction of them would outperform the mean. But none of them are running marathons at 100.

It’s not that I don’t think current research on the biochemistry of aging is useful – what else should we be doing? Children are not getting cheaper and easier to produce, and without finding ways to maintain youthful health in our increasingly aged population we’re going to run into serious demographic problems. But we haven’t exactly cracked the problem: what is it that causes our chance of death to increase 9% every year, or a mouse’s by 500%? Or, more pointedly, how do we knock this back by a sensible fraction without living lives of deprived asceticism? After all, a 20-year-old can eat burgers and not exercise and basically be okay.

To the heart of the matter. When I looked closely at the mortality curves above, I started having flashbacks to my PhD. A linear increase on a logarithmic scale – that is, exponential growth – is a pretty standard signature of error accumulation in numerical integration. Of course the figures published in my thesis all have good convergence properties, but I can reproduce one that looks more typical.

The problem comes down to evolving a state in a numerical computer simulation. The state is some group of numbers that encapsulates information about a system at the present time. The system could be a mass on a spring, a solar system, a rocket motor, a binary black hole inspiral, a neuron, a human body, or indeed the wavefunction of the entire universe. To perform the simulation, the state needs to be updated by evaluating its rates of change at the present time, then extrapolating into the future. There are literally entire buildings at every university on Earth full of boffins who do nothing but think about this problem (solving PDEs), so we will not be covering it in depth here.

The general problem is a big, nasty, highly coupled, non-linear, chaotic system, but at its root it can probably be modeled as a whole bunch of masses on springs all tied together. So let’s take the simplest possible example: a mass on a spring. This is where we started with ODEs when I taught Caltech Physics 20.

The mass experiences a restoring force proportional to displacement, F = -kx, where k is the Hooke spring constant. Newton’s law states force = mass x acceleration, or F = ma. Acceleration is the second time derivative of position x, or d^2x/dt^2 = a. Anyone who has done basic calculus can now solve this problem, and if you haven’t, take my word for it. Set k and m to 1 – this corresponds to choosing a particular set of units. Then a = -x. The sinusoidal functions solve this nicely, so we can say that x = sin(t), up to an amplitude and phase that depend on the choice of initial conditions. Velocity v = dx/dt = cos(t). This particular problem is so simple it has an analytic solution expressible with elementary functions, but in general the real world is never this nice. I mean, almost everything is a mass on a spring if you look closely enough, but once enough masses and springs are connected together, simulations are our only hope.

To simulate this problem, we specify some initial state (x0, v0). We then update that state by referring to the time derivatives of these quantities. dx/dt = v, very straightforward. dv/dt = a = -x, also very easy. So d(x,v)/dt = (v,-x). There is a nice way of writing this out with matrices, which also permits an analytic expression for the result of a discrete time integration – again a rarity, due to this problem’s simplicity.
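
As an aside, here is that matrix form as a minimal Python/NumPy sketch (the language choice is mine, not anything canonical); the exact evolution operator turns out to be a pure rotation of phase space, which is why the true orbit is a circle:

```python
import numpy as np
from scipy.linalg import expm

# The system d(x, v)/dt = (v, -x) in matrix form: d(state)/dt = A @ state.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# The exact solution is state(t) = expm(A * t) @ state(0), and for this A
# the matrix exponential is a rotation: [[cos t, sin t], [-sin t, cos t]].
t = 0.3
print(expm(A * t))
print(np.array([[ np.cos(t), np.sin(t)],
                [-np.sin(t), np.cos(t)]]))  # the same matrix, analytically
```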

But for our purposes here, we’ll just approximate dx/dt as (x1-x0)/h, where x1 is x at the next time step, x0 is the original value, and h is a small time step – where what counts as “small” is the very opposite of intuitively obvious, and will turn out to be very important.

Rearranging the equations, x1 = x0 + h v0 and v1 = v0 – h x0. Anyone who can use a spreadsheet can now solve this problem numerically. Below I’ve plotted a short integration as a phase diagram, with v on the vertical axis and x on the horizontal axis. The familiar curves of the sine and cosine functions are projections of circular motion, so it’s no surprise that a mass on a spring starting at (x=1, v=0) first accelerates backwards, reaches a symmetric negative end point, then cycles back.
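
Spreadsheet or otherwise, the whole integrator fits in a few lines. A minimal Python sketch (the function and variable names are my own):

```python
import numpy as np

def euler_spring(x0=1.0, v0=0.0, h=0.01, t_max=100.0):
    """Forward Euler for the unit mass-on-a-spring: dx/dt = v, dv/dt = -x."""
    n = int(round(t_max / h))
    xs, vs = np.empty(n + 1), np.empty(n + 1)
    xs[0], vs[0] = x0, v0
    for i in range(n):
        xs[i + 1] = xs[i] + h * vs[i]  # x1 = x0 + h v0
        vs[i + 1] = vs[i] - h * xs[i]  # v1 = v0 - h x0
    return xs, vs

# Plotting xs against vs traces out the phase diagram below.
xs, vs = euler_spring()
```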

Phase diagram of mass-on-a-spring

What happens if we continue integrating? Below I’ve zoomed in on the original starting point and we see that the state returns, almost, to its starting point. But not quite. The radius of the circle has increased a bit.

Phase diagram of mass returning to starting point and missing.

The radius of the circle actually corresponds to something like the (square root of the) energy of the system, 0.5 kx^2 + 0.5 mv^2. Unsurprisingly, this simplest of integrators has failed to conserve energy. After many steps, extra energy has accumulated in the simulation through numerical error. Looking at energy gives us a neat way to examine numerical convergence.

These two curves show energy (radius^2) for the exact same problem integrated for 100 s with time steps of 0.001 s and 0.01 s, showing that the larger time step results in poorer convergence and significant non-conservation of energy. Physically, the amplitude of the spring steadily increases by some percentage per cycle, and the rate of increase is proportional to the numerical divergence. Worse convergence means a faster increase, a more rapid violation of conservation of energy, a greater difference between what the result should be and what the simulation says. This exponential divergence is mathematically identical in form to the mortality curves captured by the Gompertz-Makeham law discussed above.
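
This is easy to verify with the euler_spring sketch above: each Euler step scales x^2 + v^2 by exactly (1 + h^2), which compounds to roughly exp(h t) over an integration of length t:

```python
# Energy drift of the forward Euler spring (reuses euler_spring from above).
# One step maps (x, v) to (x + h v, v - h x), which multiplies x^2 + v^2 by
# exactly (1 + h^2); after t/h steps that is (1 + h^2)^(t/h) ~ exp(h t).
for h in (0.001, 0.01):
    xs, vs = euler_spring(h=h, t_max=100.0)
    energy = 0.5 * xs**2 + 0.5 * vs**2  # starts at 0.5
    print(f"h={h}: final energy {energy[-1]:.4f}, "
          f"predicted {0.5 * np.exp(h * 100.0):.4f}")
```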

There are of course ways to deal with this. Smaller time steps converge better. Higher order integrators, such as fourth-order Runge-Kutta (RK4), enable much better convergence with relatively large time steps. Symplectic integrators can be built that explicitly conserve quantities like energy and momentum, though often at the cost of phase accuracy. For some kinds of problems – particularly multiscale problems, turbulence, combustion, or shock waves – throwing resolution and smaller time steps at the problem doesn’t necessarily make things better. C’est la vie – there is a reason universities have buildings full of experts trying to solve these problems.
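
As a taste of the symplectic approach, the smallest possible upgrade to the sketch above is semi-implicit Euler – a one-line change that keeps the energy bounded instead of letting it compound (again a toy sketch, not a production integrator):

```python
def symplectic_euler_spring(x0=1.0, v0=0.0, h=0.01, t_max=100.0):
    """Semi-implicit (symplectic) Euler: kick v first, then drift x with the
    updated velocity. Energy now oscillates within O(h) of the true value
    instead of growing exponentially."""
    n = int(round(t_max / h))
    x, v = x0, v0
    for _ in range(n):
        v = v - h * x  # kick
        x = x + h * v  # drift, using the new v (this is the only change)
    return x, v

x, v = symplectic_euler_spring()
print(0.5 * x**2 + 0.5 * v**2)  # stays near 0.5 even for long integrations
```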

The key takeaway is that a given integrator may or may not solve a given problem well. Often a key determining factor is the state of the problem. For example, the field is rife with toy problems which behave convergently for a while and then undergo some kind of phase change and “blow up”. If you initialize the problem with one of the unphysical states that can result, it will continue to blow up. To return to the mass on a spring: poor convergence is caused by the integrator failing to accurately approximate the actual changes over time, because the state variables evolve too quickly for the time step to resolve.

This is a generic problem. Any integrator or control system that under-resolves the underlying physics will not work properly. If the physics calls for changes that the integrator or mesh doesn’t resolve, you will probably have a bad time. This is the main reason turbulence is so painful – it contains swirlies at smaller and smaller scales down practically to infinity. To have any hope of success, the coder will have to add “artificial dissipation”, usually in the form of some kind of viscosity, to suck energy out of the problem modes faster than they can blow up. Indeed, this sort of hack is the essence of any control system that works in the real world.
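
In the mass-on-a-spring toy, artificial dissipation is just a small damping term. A hypothetical sketch (the gamma-of-order-h tuning is my illustration for this particular toy, not a universal rule):

```python
def damped_euler_spring(x0=1.0, v0=0.0, h=0.01, t_max=100.0, gamma=0.01):
    """Forward Euler with artificial dissipation: dv/dt = -x - gamma * v.
    Plain forward Euler inflates the amplitude like exp(h t / 2); the damping
    deflates it like exp(-gamma t / 2), so gamma of order h roughly cancels
    the spurious energy injection, at the cost of fidelity to the real,
    undamped physics."""
    n = int(round(t_max / h))
    x, v = x0, v0
    for _ in range(n):
        x, v = x + h * v, v - h * (x + gamma * v)  # both updates use old x, v
    return x, v

x, v = damped_euler_spring()
print(0.5 * x**2 + 0.5 * v**2)  # close to 0.5: the blow-up has been damped out
```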

Which is where we return to biology. Humans are a festering carnival of hormones, chemicals, feedback loops, and hacky DNA translation mechanisms. And yet our biochemical machinery has a way of taking our current state and evolving it into the future that mostly conserves quantities that must be conserved, such as blood oxygenation level. Acute failures of any number of discrete feedback systems form the proximate cause of many illnesses, and in many cases the medical process works by identifying the broken system and artificially moving parameters back into the (poorly-named) dead band, then making the patient wait for inbuilt regenerative systems to do the rest.

There isn’t some global RK4 integrator crunching along with discrete time steps – that would be too easy. Instead there are hundreds of separate and mostly analog feedback loops flying in close formation. But the underlying mathematics of exponential divergence applies to analog systems (such as PID controllers) and especially to composite systems, where the central limit theorem often comes into play.

What we see with bulk population mortality curves is exactly what we would expect to see if we were monitoring the convergence of thousands of similar simulations, or the same simulation run thousands of times with slightly different initial conditions (such as in weather forecasting). Over time the state quantities gradually diverge from their initial harmony. Integrated homeostatic systems consistently restore equilibrium, but there is hysteresis and loss of information. Homeostatic mechanisms are themselves perturbed by the steadily degrading state, and the resulting feedback is a slow (or fast, depending on perspective) slide into an ever less convergent state. The process is deterministic.

In numerical simulations, there are plenty of hacks to try: speed up feedback loops, decrease time steps, reformulate the underlying equations, add dissipation, filtering, or systemic decoupling. Perhaps the reason exercise and caloric restriction improve life expectancy a bit is that they temper state excursions relative to the capacity of feedback systems to recenter them? Fundamentally our biochemistry is limited by the speed at which certain chemicals diffuse through cytoplasm. We can’t just read our DNA 5x faster to live 5x longer (and that assumes our integrator is first order convergent, which is optimistic!). And yet, some organisms live for thousands of years and some (usually relatively simple ones) even appear not to age at all.

Our bodies perform hundreds of millions of chemical cycles per year and only age by 9% over that time period. Compared to a non-functioning homeostatic system (i.e. a recently dead person) we age barely at all. What would it take to push our current homeostatic mechanisms from 10 9s of closure to 11?
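
Back-of-envelope, in Python (the “hundreds of millions” figure is pinned to 3e8 cycles per year purely for illustration):

```python
import math

cycles_per_year = 3e8           # "hundreds of millions", assumed for illustration
annual_growth = math.log(1.09)  # 9% mortality growth per year, as a continuous rate

per_cycle_error = annual_growth / cycles_per_year
print(per_cycle_error)               # ~2.9e-10 fractional divergence per cycle
print(-math.log10(per_cycle_error))  # ~9.5 "nines" of closure per cycle
# One more 9 of per-cycle closure would slow the 9%/year compounding tenfold.
```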

With the appropriate numerical formulation, choice of coordinates and integrator, the gold standard for numerical integration is machine-precision convergence for the length of the integration. This is possible, and in some cases not even all that hard. It’s not physically impossible to consider biological systems capable of infinite regeneration. In some cases you can even take a wildly divergent numerical simulation, slam on the brakes, add a bunch of filtering, and recover the original physical state. Enough information is preserved to do that, or at least recover one that stays convergent for a while longer.

In humans, I am pretty confident saying that there isn’t one, or even a few, biochemical feedback loops that are letting down the team. Probably most of them degrade at about the same rate. There is no selective pressure for them to be much better than average. Within the full diversity of the human species there are no doubt individuals with much better performance in some subsystem, but you would never see it in aging data because the machine-body meat-robot-as-a-whole is a team effort. Part of the challenge is that different biological feedback systems have different fundamental frequencies, ranging from bone renewal taking 7-10 years right down to optic nerves firing 50 times a second.

I don’t have any specific insights to offer on which systems are the worst offenders, or whether it’s even worthwhile to play whack-a-mole by gradually refactoring the entire genome, one system at a time. David Sinclair seems to think that epigenetic expression regulates the aging process. But we could probably do worse than develop diagnostic techniques that output enough data that we can resolve the evolution of the human state in real time, then start looking for parts that appear to be barely keeping up.

If we approached this like a numerical convergence problem, we would look for modes which are blowing up. Formally, these are eigenstates of the linearized RHS operator with eigenvalues outside the stability region of the integrator. Practically speaking, they are usually concentrations of energy in high frequency modes near the Shannon-Nyquist sampling limit. Generally it seems that with senescence, feedback loops slacken and processes slow down. Diabetes is a good example of a critical state component getting out of bounds, with fast-acting deleterious consequences. Same for heart arrhythmias, which often go wonky days before an eventual heart attack. What other systems have too much energy in them? What processes are degrading homeostatic capacity?
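
In integrator terms, “outside the stability region” has a crisp test. A sketch for forward Euler (the stability condition is standard textbook material; the example numbers are mine):

```python
def euler_mode_stable(lam, h):
    """Forward Euler on a linearized mode dy/dt = lam * y gives
    y_{n+1} = (1 + h * lam) * y_n, so the mode stays bounded only if
    |1 + h * lam| <= 1, a unit disk in the complex h * lam plane."""
    return abs(1 + h * lam) <= 1

h = 0.01
print(euler_mode_stable(-1.0, h))    # slow decaying mode: True, well behaved
print(euler_mode_stable(-250.0, h))  # stiff fast mode: False, it blows up
print(euler_mode_stable(1j, h))      # pure oscillation (our spring): False,
                                     # |1 + ih| > 1, hence the slow spiral
```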

11 thoughts on “Numerical convergence as a model for senescence”

  1. Very interesting – you make a strong case for this analogy. I followed along to the end, but I was surprised when you reached the opposite conclusion to the one I expected. Maybe you can give some insight into where my understanding goes wrong!
    As a layman in biology (albeit less so in numerical integration), my understanding of the process of ageing is basically “cells get worse at replication the more they replicate”. DNA transcription errors rack up, and the copies degrade. This is why caloric restriction is so effective at slowing ageing: your body decreases the rate of cellular replication when it doesn’t have the resources to allocate to it. Obviously this is still a kind of numerical convergence error. But the ground truth of the system (the “state vector”, so to speak) is your original DNA, not the literal state of your bodily systems, and the DNA is /supposed/ to not evolve with time.
    (Maybe in a sense ageing is like a symplectic integrator racking up phase error? Is that a more apt comparison?)
    So if this is right, you don’t want to decrease timesteps to get higher fidelity in the integrator, you want to increase timesteps so that you have fewer chances to go wrong. Obviously there’s a limit to this; your body can’t use the same cells for your entire lifespan. But it seems to have worked well for the Okinawans!

  2. For fun, some years back I wrote an n-body spring system to simulate bubbles. Of course it would blow up. I started with Euler, then RK, then different damping mechanisms, basically all the stuff listed here. (I did not realize back then how terrifically hard this problem is!) I eventually found regimes that were stable and never blew up as long as perturbations were within some bounds. But maybe that was only possible because the springs all had the same natural length and constants. Is there any hope of similarly stabilizing a biological system with heterogeneous “springs”?

  3. Assuming we can create artificial life with digital/mechanical systems, do you think those will necessarily suffer from the same feedback issues?

    Or tangentially, perhaps there is something special about systems of interconnected chaotic oscillators that produce life and consciousness?

      1. We have no easy way to know if any tree is conscious. Anything that can get away with thinking as slowly as a tree doesn’t need specialized nerve tissue, or even a noticeable amount of thinking stuff, to be massively intelligent.

        The reason we have these brain things packed into our skulls is that we need answers fast, fast, fast. Every bit of brain needs to be as close as can be managed to every other bit, and they have to work all in parallel. We have as much brain stuff as our ancestors could afford to keep fed, on average. (I wonder if von Neumann used more calories than the rest of us.) We need the brain stuff not for survival — a lot less did fine — but to compete socially with other humans. Arguably, human brains amounted to a mating display like peacock feathers or deer antlers, until recently.

        If you weren’t in a massive hurry, the thinky bits could (and should) be spread out. The same bits could worry at different parts of the problem, in sequence, instead of all at once. Thinking would take up hardly any of your energy budget, so you could afford to be very, very smart. The thinky bits could even have some other work to do when not called upon for thinking, making them even harder to recognize, were anybody even looking.

        There’s no reason to think all trees would be equally intelligent. Maybe sequoias and aspen excel among vegetal thinkers, but it could just as well be cacao. (Most octopods live only a year, but are standouts among the cephalopods.) Who knows where intelligence could confer more reproductive success, rooted in a forest? Arguably, sequoias may be disadvantaged for developing intelligence, as their reproductive cycle is so long. On the other hand, they have had lots more time than we have had, presuming we only really got started toward the end of the Miocene epoch.

  4. I think of aging at the level of cells rather than chemicals, and discrete rather than analog processes. A cell’s identity can be thought of as a bunch of switches, that turn the transcription of various proteins on and off. But it’s not like transistors, where each switch responds to a single input voltage. Instead, turning on transcription of a protein requires the binding of multiple proteins to its regulatory sequences, along with epigenetic modifications. Specifying the identity of a cell at this level isn’t all that complicated: we only have on the order of 20,000 genes, and a bacterial cell can get by with fewer than 3,000. Each gene has only a few regulatory sequences that affect it directly.

    The behavior of a cell is also more digital than analogue. Many proteins get activated by the addition or removal of phosphate groups. There are also positive-feedback loops, such as a voltage-gated ion channel allowing ions to flow the direction that changes the voltage in the direction that opens the channel. If the process crosses its threshold, it goes to completion. Then another process is activated later, and switches it back to the other state. There’s on or off, not sensitivity to the tenth decimal place of the initial conditions.

    Ones and zeroes can make sense, or they can make gibberish, but I don’t expect them to make chaos (unless they’re in a computer that’s simulating a chaotic analogue process, which I don’t expect cells to do). When they make gibberish in a place that matters, for most ways they can do so, the cell just dies. Other cells, that still make sense, keep dividing and replace the ones that die. But sometimes, instead of making complete sense or complete gibberish, it makes bugs. That can be cancer, or it can have the cells that need to keep dividing do so at less than replacement rate, or it can have a normal number of cells but have bugs in their behavior.

    1. Aging is at a lot of levels. For instance, there’s chemical level aging in the form of sugar cross-links that chemically stiffen your connective tissues.

      The SENS Foundation has what they (too optimistically, I think!) believe is a complete list of aging mechanisms, seven in all, and has research projects going to deal with all of them. They’re quite diverse.

      Basically, there are a lot of irreversible (in the context of existing repair mechanisms) physical changes going on in your body over time. Not just control systems getting out of a stable zone.

      Though that might very well be a factor, especially with the physical degradation shrinking that stable zone.

      The curves above resemble the classic “bathtub” curve of product failure, to my (engineer) eye. Early mortality in products that were manufactured defective, a long period of low mortality once those have been eliminated from the population, then a rising level of mortality due to wear.

  5. I had a realization about aging recently, thanks to my ophthalmologist. She warned me that, after a certain but unpredictable age, if I continued using contact lenses, my eyes would go all to hell very quickly. I gave them up.

    What I realized was that this would not be because anything changed suddenly at that age. Rather, the amount of damage wearing them caused would, at some specific point, exactly match the (declining) rate that I could heal the damage. As long as all is healed by morning, all is well. After that, damage would accumulate at increasing speed.

    This process must play out in numerous other milieus. Knees come to mind. Roads. Politics.

  6. As you mentioned, it’s known that calorie restriction increases lifespan. There’s a signaling pathway in each cell that gets activated when nutrition is available, stimulating the mTOR complex, the “mechanistic target of rapamycin”. And then there’s the drug rapamycin, discovered in soil from Rapa Nui (Easter Island), that inhibits mTOR, thus simulating fasting. The NIH runs the Interventions Testing Program (ITP) to investigate promising life extension drugs in large populations of mice, and rapamycin is the biggest success so far – it really works. There’s a community of people, many of them MDs, who are experimenting with this drug without serious side effects. More info: https://www.rapamycin.news/t/rapamycin-frequently-asked-questions-faq/59
