
The State of Things: Markov Chains and Living Systems


The probability of change depends on where a system stands — not where it's been. What Markov Chains reveal about living systems and design.


There's a thread that has been running silently through the last few posts — never quite named, but there underneath each one.

It started with power laws — the observation that a few rare, large events tend to dominate entire systems. Most of the rain falls in a handful of storms. Most of the carbon sits in a few giant trees. The extremes write the rules, not the average day.

Then distributions — a broader look at how variability organizes itself in ecological systems. Not just what's typical, but the shape of what's possible. How often disruption arrives. How waiting times behave. How small early differences compound into large late ones.

Then the question of determinism and stochasticity — which parts of a system can be calculated and which can only be buffered. Where precision rewards you and where redundancy does. And the uncomfortable middle ground where a system looks deterministic right up until it doesn't.

Each post quietly circling the same underlying question: how do systems actually behave over time, and what does that mean for how we intervene?

Markov Chains are where that question finally gets a sharper frame. Not an answer — but a way of looking that changes what you see.

States: Seeing a System as a Set of Conditions

Before transitions, the idea of states.

A system is always somewhere. Not fixed in a single position, but in a recognizable condition. Soil is compacted or structured. A water body is aerobic or anaerobic. A pest population is in balance or tipping toward outbreak. A patch of land is in early succession or late, recovering or degraded, productive or stressed.

In reality these aren't clean binaries — soil exists on a continuum, pest pressure comes in degrees. But thinking in states is a useful simplification, and maybe a more honest one than it first appears. It forces a clearer description of where a system actually is, rather than where we'd like it to be or where we assume it's headed.

There's a question that comes naturally from this framing, one that turns out to be surprisingly hard to answer on a real site: what state is this system actually in right now? Not what state was it in last season, not what state the design intended — where is it today?

Getting that wrong is perhaps more common than it seems. And it matters, because the answer shapes everything that follows.

Transitions: What Governs Where the System Goes Next

Here's where Markov Chains enter, and the idea is simpler than the name suggests.

The probability of a system moving from one state to the next depends on its current state — and only on its current state. Not on the full history of how it got there. Not on what happened three seasons ago. Just on where it is now.

This has a familiar feel. The exponential distribution from the distributions post had the same memoryless property — the probability of a storm arriving doesn't depend on how long it's been since the last one. The system doesn't keep track. Markov Chains extend that logic into something more structured: not just individual events, but the movement of an entire system between recognizable conditions.

So a soil system might have some probability of transitioning from compacted to structured in a given season — depending on inputs, biology, rainfall. And some probability of staying compacted. And some very small probability of jumping directly to highly productive. Those probabilities aren't fixed — they shift with management, with season, with what else is happening in the system. But they exist, and they govern where the system tends to go.
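The soil example can be sketched numerically. Everything below is an illustrative assumption, not a measured value: three states (compacted, structured, productive) and a seasonal transition matrix whose rows each sum to one. Propagating a state distribution forward one step at a time is the whole mechanism.

```python
# Illustrative soil states and seasonal transition probabilities.
# All numbers are assumptions for the sketch, not field measurements.
states = ["compacted", "structured", "productive"]
P = {
    "compacted":  {"compacted": 0.70, "structured": 0.28, "productive": 0.02},
    "structured": {"compacted": 0.15, "structured": 0.70, "productive": 0.15},
    "productive": {"compacted": 0.05, "structured": 0.25, "productive": 0.70},
}

def step(dist):
    """One Markov step: the next distribution depends only on the current one."""
    return {s: sum(dist[r] * P[r][s] for r in states) for s in states}

# Start fully compacted and propagate the state distribution forward.
dist = {"compacted": 1.0, "structured": 0.0, "productive": 0.0}
for season in range(10):
    dist = step(dist)

print({s: round(p, 3) for s, p in dist.items()})
```

Note that the tiny compacted-to-productive probability (0.02) barely matters directly; what moves the system is the accumulated pull of the more likely transitions, season after season.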

What's interesting is what this does to the question of intervention. Management decisions don't control outcomes directly. They shift transition probabilities. Compost doesn't guarantee a structured soil state — it raises the likelihood of moving toward one. Cover cropping doesn't prevent compaction — it reduces the probability of transitioning back to it. The intervention and the outcome are connected, but through probability, not through certainty.

That's a different mental model than most design thinking operates on. And possibly a more accurate one.

Before going further, it's worth knowing where this idea actually came from. Its origin is every bit as fascinating as its power.

A Debate About Free Will

Late nineteenth century Russia. Two mathematicians, one argument.

Pavel Nekrasov was a mathematician with strong theological commitments. He had been working with the Law of Large Numbers — the statistical principle that, given enough observations, averages stabilize. What interested Nekrasov was what that stability required. He believed it required independence between events. Each observation had to be unconnected to the last for the mathematics to hold.

From there, he made a leap that his contemporaries found startling. Human moral choices, he argued, produced observable statistical regularities in society — crime rates, marriage rates, social behaviors that followed predictable patterns across large populations. If those regularities required independence, and independence meant each choice was unconnected to external pressures, then human choices must be genuinely free. Free will, Nekrasov concluded, was not just a theological claim — it was mathematically demonstrable.

Andrey Markov found this unconvincing in the extreme. Not the theology particularly — the mathematics. He believed Nekrasov was wrong about what the Law of Large Numbers required, and that the independence assumption was far too strong. Dependent events — where each outcome influences the next — could produce the same statistical regularities without any requirement for independence.

To prove it, he constructed a formal model of dependent sequences. A system moving through states, where each transition probability depended on the current state. He showed that even with this dependence, stable long-run behavior emerged. The Law of Large Numbers held. Nekrasov's theological conclusion collapsed.
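Markov's counterexample is easy to reproduce. The sketch below (with invented parameters, not anything from Markov's own paper) simulates a two-state chain where each outcome strongly depends on the previous one, yet the running average still settles toward a stable value: dependence, and the Law of Large Numbers holding anyway.

```python
import random

random.seed(42)

# Two-state chain with strong dependence: each outcome leans heavily
# toward repeating the previous one. Parameters are illustrative.
p_stay = 0.9  # probability of staying in the current state

state = 0
total = 0
n = 200_000
for _ in range(n):
    if random.random() > p_stay:
        state = 1 - state  # switch states
    total += state

average = total / n
print(round(average, 3))  # long-run frequency of state 1 stabilizes near 0.5
```

Despite each step being correlated with the last, the long-run frequency converges, which is exactly what Nekrasov's argument assumed was impossible without independence.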

Markov had invented Markov Chains to win a mathematical argument about free will.

The irony is hard to miss. A framework built explicitly to argue for dependence — to show that the past state shapes the next one — became one of the most widely used tools in probability theory, applied everywhere from genetics to economics to, eventually, the modeling of ecological systems. And it came not from engineering or biology but from a dispute about whether God could be inferred from crime statistics.

Domains collide in the strangest places.

Now back to the landscape.

The Absorbing State Problem

Some states are easy to leave. Others are remarkably hard to exit once entered.

Desertification. Persistent anaerobic conditions in a pond or soil. An invasive monoculture that has restructured the seed bank and light regime. Compaction that returns season after season despite intervention. These are states with very high probabilities of self-perpetuation — conditions that, once reached, tend strongly to stay.

In Markov terms these are close to absorbing states. Strictly, an absorbing state is one the system can never leave; its exit probability is zero. Ecological states rarely go quite that far, but they can come close: the transition probability away from them is so low that the system keeps returning. Not because the interventions are wrong exactly, but because the intervention is changing conditions within the state rather than shifting the probability of leaving it.

This reframing is useful and a little uncomfortable. It suggests that working harder inside a state is not the same as moving toward a different one. A degraded pasture that reverts after every replanting attempt isn't failing to respond — it's responding perfectly to its own transition probabilities. The soil biology, the seed bank, the hydrology, the grazing pressure — all of these are keeping the transition probability toward recovery very low, and the probability of reversion very high.
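The "responding perfectly to its own transition probabilities" point can be made concrete. Under the assumed numbers below, a degraded state with a 5% seasonal exit probability holds the system for an average of about twenty seasons, regardless of how much effort is spent inside the state; the waiting time is geometric, with mean 1 / p_exit.

```python
import random

random.seed(7)

# Near-absorbing state: a very low seasonal probability of leaving
# "degraded". The 0.05 exit probability is an illustrative assumption.
p_exit = 0.05

def seasons_until_recovery():
    """Count seasons spent in the degraded state until the first exit."""
    seasons = 0
    while random.random() > p_exit:  # stay degraded this season
        seasons += 1
    return seasons + 1  # include the season in which the exit happens

trials = [seasons_until_recovery() for _ in range(50_000)]
mean_wait = sum(trials) / len(trials)
print(round(mean_wait, 1))  # theory: 1 / p_exit = 20 seasons
```

Doubling the exit probability halves the expected wait, which is why shifting the transition probability itself, rather than conditions within the state, is the lever that matters.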

The design question shifts from "what do I apply here" to "what would need to change for the transition probability to shift." Sometimes that's a different question entirely.

Design as Transition Management

If states have transition probabilities, then design is perhaps best understood as the management of those probabilities — not the control of outcomes.

A swale doesn't guarantee water infiltration. It raises the probability of transitioning from dry to moist soil conditions across a slope, and lowers the probability of transitioning toward erosion or runoff. Diversity doesn't prevent pest outbreaks — it reduces the transition probability from balance to outbreak, and likely speeds the transition from outbreak back to balance. Storage doesn't eliminate deficit — it makes the transition to deficit less likely and recovery from it faster.
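One way to see what "shifting transition probabilities" buys is to compare long-run occupancy under two assumed parameter sets. For a two-state balance/outbreak chain there is a closed form: the stationary fraction of time in outbreak is a / (a + b), where a is the balance-to-outbreak probability and b the outbreak-to-balance probability. The numbers below are illustrative.

```python
def outbreak_fraction(p_to_outbreak, p_to_balance):
    """Long-run fraction of time in the outbreak state of a two-state chain.

    Stationary P(outbreak) = a / (a + b), with a = balance->outbreak
    and b = outbreak->balance transition probabilities.
    """
    return p_to_outbreak / (p_to_outbreak + p_to_balance)

# Illustrative numbers: diversity doesn't make outbreaks impossible,
# it shifts both transition probabilities at once.
baseline = outbreak_fraction(p_to_outbreak=0.20, p_to_balance=0.40)
diverse  = outbreak_fraction(p_to_outbreak=0.10, p_to_balance=0.60)

print(round(baseline, 3), round(diverse, 3))  # 0.333 vs 0.143
```

Neither system is outbreak-free; the intervention more than halves the fraction of time spent there, which is the honest version of what the design achieved.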

Fix what can be fixed. Buffer what cannot. That phrase from the previous post still applies — but Markov framing adds something to it. The buffers aren't just absorbing variability. They're actively shaping which transitions are more and less likely. They're changing the probability structure of the system.

There's something almost liberating about this, even though it sounds more uncertain than conventional design thinking. It lets go of the expectation that interventions produce guaranteed outcomes — and replaces it with something more honest. Did this shift the probability in the right direction? Did the system move more often toward the state we were aiming for? Is recovery faster after disturbance than it was before?

Those are measurable questions, even if they take time to answer.

Observation as Transition Mapping

Which brings up something that's easy to undervalue on a real site.

Observation over time builds something like an empirical transition map. Which states does this system move between? How often? Under what conditions does it tip from balance toward stress? What seems to trigger recovery — and how reliably?

A snapshot of current conditions answers where the system is. Accumulated observation over seasons and years begins to answer where it tends to go from here — and what the underlying probability structure of the place actually looks like. That's a different and deeper kind of site knowledge. Not just what's here, but how it moves.
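That empirical transition map can be built by simple counting: record the state each season, tally which state follows which, and normalize each row into probabilities. The observation sequence below is invented for the sketch; a real record would span years.

```python
from collections import Counter, defaultdict

# An invented multi-season observation record, one state per season.
observations = [
    "balance", "balance", "stress", "stress", "balance", "balance",
    "balance", "outbreak", "stress", "balance", "balance", "stress",
    "balance", "balance",
]

# Tally observed transitions, then normalize each row into probabilities.
counts = defaultdict(Counter)
for current, nxt in zip(observations, observations[1:]):
    counts[current][nxt] += 1

transition_map = {
    state: {nxt: n / sum(tally.values()) for nxt, n in tally.items()}
    for state, tally in counts.items()
}

for state, row in sorted(transition_map.items()):
    print(state, {k: round(v, 2) for k, v in sorted(row.items())})
```

With only a handful of seasons the estimates are rough, and rare transitions (like outbreak) rest on one or two observations; the map sharpens only as the record lengthens, which is the quantitative face of patient observation.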

This is perhaps where patient, genuinely curious observation starts to pay dividends that no amount of upfront analysis can substitute for. Not observing to confirm what's already expected — but watching long enough and carefully enough that the transition tendencies of a specific place start to become legible.

Every site has its own probability structure. Its own tendencies. Its own states it gravitates toward and its own conditions under which it moves. Learning that is perhaps one of the more underrated skills in regenerative design.

And Yet

Markov Chains say the current state is what matters — not the history. And that has been a useful frame throughout this series.

Sitting with it long enough, something still feels incomplete. Like a calculation that's almost right but keeps producing a remainder that won't disappear.

We'll look at that remainder next.
