The Rendered Universe: Part III of III
The Relay
How a universe achieves immortality. What happens when a computational universe copies itself — and why it might not need a reason to do so. The final essay in the trilogy.
The Energy Problem
Every serious objection to the simulation hypothesis eventually arrives at the same place: energy.
Simulating an entire universe — every particle interaction, every quantum state, every gravitational perturbation across 13.8 billion years of cosmic evolution — would require computational resources that strain the imagination past its breaking point. Physicists have calculated that even simulating a small patch of the universe at quantum fidelity would demand more energy than the systems being simulated contain. A 2025 paper in Frontiers in Physics went further, arguing that it is physically impossible for a universe with our physical laws to simulate itself at full resolution. The computation would require more atoms than the universe contains.
This objection is serious. And if the simulation hypothesis requires a universe to run its simulation in real time, continuously, from the Big Bang to the heat death and beyond, then the objection is probably fatal. No finite energy budget can sustain an infinite computation.
But the objection rests on two assumptions that may not hold. The first is that the simulation must run continuously — that the computation is ongoing, like a live video feed, consuming energy at every moment. The second is that the simulation must persist indefinitely — that for the universe to "exist," the computation must never stop.
Both assumptions are wrong. And understanding why they're wrong leads to a picture of reality that is far stranger and far more elegant than either the simulation hypothesis or its critics have imagined.
The Disc
Consider a CD-ROM.
A CD-ROM is a static object. It is created in a single act of intense, focused energy — a laser burning data into a physical substrate. Once the burn is complete, the disc simply exists. It requires no ongoing energy to maintain. The data doesn't degrade (much). It doesn't need to be continuously recomputed. The entire contents of the disc — every bit, every byte, every file — exist simultaneously, pressed into the material in one brief, violent moment of creation.
Reading the disc requires energy. But the disc itself is just there. A finished artifact. A frozen dataset.
Now consider the block universe.
In the first essay, we examined how a photon — the one entity that travels at the universe's maximum information propagation speed — experiences reality. From the photon's reference frame, time dilation is total and length contraction is absolute. There is zero elapsed time between emission and absorption, and zero spatial distance traversed. The photon doesn't travel through the universe. It connects two points in a structure where all points already exist simultaneously.
What the photon reveals is a block universe: a spacetime in which past, present, and future are co-present. Not unfolding. Not being computed moment by moment. Just there — a complete, static, four-dimensional object.
A block universe is a CD-ROM.
The Big Bang isn't the universe "starting." It's the burn. The moment of intense, focused energy that writes the data into the substrate. Every particle interaction, every star formation, every planet, every civilization, every thought you've ever had — all of it encoded in that initial act of creation. The extraordinarily low entropy of the Big Bang isn't a statistical miracle or an unsolved puzzle. It's the clean initial state of a freshly burned disc. Maximum order. Maximum information density. The laser at its most focused.
Everything after the Big Bang is the data playing out. Not being generated in real time — already written, being read. We experience time because we are the read head, moving across the disc in one direction, able to access only one track at a time. We can't go backward because the read head only moves forward. We can't see the future because we haven't reached that track yet. But the track is already there. It was written in the burn.
Entropy — the universal tendency toward disorder — isn't the universe winding down. It's the read head moving further from the point of highest data density. The Big Bang is track one: maximally ordered, maximally compressed, maximally structured. As the read head moves outward, the data becomes more spread out, less structured, more diffuse. This isn't decay. It's the natural geometry of reading outward from a central point of inscription.
And the heat death — the distant future state where entropy is maximized and nothing interesting ever happens again — isn't the universe dying. It's the end of the data. The last track. The read head reaching the outer edge of the disc and finding nothing left to read.
The universe doesn't require continuous computation to exist, any more than a CD-ROM requires continuous laser power to exist. It was computed once. Burned once. And now it simply is — a static four-dimensional object that conscious beings embedded within it experience as the passage of time.
The Write
But there's a subtlety the CD-ROM metaphor almost hides.
We assumed the disc was burned and now sits idle, being read. But what if the burn is happening right now? What if what we experience as the present moment is not the read head moving across a finished disc, but the laser actively writing data to a disc that isn't yet complete?
From inside the simulation, these two scenarios are indistinguishable. If the disc is already complete, we experience time as we move through it. If the disc is being written in this moment, we experience time as it's being inscribed. Either way, the subjective experience — the feeling of a present moment moving forward through a sequence of events — is identical.
But the implications for the base reality are profoundly different.
If the disc is already complete, the base reality computed the entire history of our universe at some point in its past, using whatever energy and time that computation required. Our 13.8 billion years might correspond to a millisecond of base reality time, or a Planck instant, or any other duration — the internal clock of the simulation is completely decoupled from the external clock of the computer running it.
Think about what this means. When you run a physics simulation on a laptop, the simulated system's internal clock has nothing to do with your wall clock. A billion years of stellar evolution might take ten minutes to compute. Or ten hours. Or ten microseconds, given sufficient hardware. The ratio between internal and external time is a function of computational power, not physical law.
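The decoupling is easy to demonstrate. Here is a toy sketch in Python — the decay model, the time step, and the half-life are all invented for illustration, not taken from any real simulation: a million simulated years per step, a billion years of internal history, computed in a fraction of a second of external wall-clock time.

```python
import time

def simulate_stellar_decay(total_years, dt_years, half_life_years=1e10, n0=1_000_000.0):
    """Toy model: exponential decay of a stellar population, stepped forward
    in huge internal time increments. The simulated clock (total_years) is
    completely decoupled from the wall clock measured outside."""
    decay_per_step = 0.5 ** (dt_years / half_life_years)
    n = n0
    internal_clock = 0.0
    while internal_clock < total_years:
        n *= decay_per_step          # one internal step = dt_years of "history"
        internal_clock += dt_years
    return n, internal_clock

start = time.perf_counter()
remaining, simulated = simulate_stellar_decay(total_years=1e9, dt_years=1e6)
wall_seconds = time.perf_counter() - start

print(f"simulated {simulated:.0f} years in {wall_seconds:.4f} wall-clock seconds")
```

Change `dt_years` or the hardware and the wall-clock figure changes; the billion internal years do not. The ratio between the two clocks is set entirely by the machine running the loop.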
The movie Inception captured this intuition precisely. Each level deeper in the dream runs faster relative to the level above. Minutes at the surface become hours one level down, become years two levels down. This isn't just cinematic license. It's a genuine property of nested computation. Each layer can allocate its own internal clock independently of its host.
Our entire universe — 13.8 billion internal years of cosmic history — could be the product of an arbitrarily brief computation in base reality. The base reality civilization doesn't need to wait billions of years for their simulation to reach interesting results. They burn the disc in whatever time their hardware requires. From their perspective, it might be instantaneous. From ours, it's the entire history of everything.
This dissolves the temporal objection to the simulation hypothesis entirely. The question was never "how could anyone run a simulation for billions of years?" The question is "how long does the computation take from the outside?" And the answer is: as long as the hardware needs. Which could be almost nothing.
The Chain
Now we reach the heart of the matter.
If the universe is a computational object — a disc burned by an act of intense computation — and if that computation produces, within the simulation, civilizations capable of performing their own computations, then the simulation doesn't just model the universe. It reproduces the conditions for creating another simulation.
The first essay called this "the loop" — the observation that a sufficiently accurate simulation of the universe must eventually produce a civilization that builds the same simulation, because that civilization is part of the universe being modeled. But framing it as a loop implies a circle: one simulation producing one copy that produces one copy, forever.
The reality might be more like a relay.
A relay race doesn't require any single runner to cover the entire distance. Each runner sprints their segment and passes the baton. The race continues indefinitely, but the energy expenditure of each participant is finite and bounded. No single runner needs infinite endurance. They just need to run far enough to reach the next handoff.
Apply this to the simulation.
Base reality doesn't need to run the simulation forever. It doesn't need to compute the entire future history of the universe through heat death and beyond. It only needs to run the simulation long enough for the simulation to produce a civilization capable of building the next simulation. Then it can stop. The baton has been passed. The next layer takes over.
And the next layer doesn't need to run forever either. It just needs to run long enough to produce the next handoff. And so on. And so on.
Each link in the chain requires only a finite amount of energy. Each disc only needs to contain enough data to burn the next disc. The universe achieves immortality not through infinite endurance — which is impossible — but through finite replication. Each copy lives just long enough to produce the next copy.
The energy cost is distributed across the chain: each link pays a finite, bounded amount, sufficient only for a single handoff. The chain as a whole may extend indefinitely, but no single computation needs to be infinite, and no single energy budget needs to be unlimited. The chain sustains itself the same way a relay race sustains itself: by distributing the load across an indefinite number of finite participants.
This solves the energy problem. Not by finding more energy, but by eliminating the need for infinite energy in the first place.
The Organism
This pattern — minimum-energy replication as a strategy for persistence — is not speculative. It is the oldest and most successful strategy in the history of life on Earth.
DNA does not keep an organism alive forever. It doesn't need to. It keeps the organism alive long enough to copy itself into offspring. The organism ages. The organism dies. But the information — the genetic code, the instructions for building a body, the blueprint for the organism's structure — persists. Not through endurance, but through copying. The chain of copies extends backward 3.8 billion years to the first self-replicating molecules, and forward indefinitely into the future. No single link needs to be immortal. Each link just needs to survive long enough to replicate.
The parallels to the simulation chain are precise.
The organism is the hardware — the physical substrate running the computation of being alive. DNA is the software — the informational content that carries forward across generations. The organism dies, but the information persists because it has been copied into a new substrate. Life doesn't achieve immortality by making bodies last forever. It achieves immortality by making information copyable.
A simulation doesn't need to run forever. It needs to run long enough for the informational content — the physics, the structures, the emergent complexity — to produce a new substrate capable of hosting the next copy. The simulation is the organism. The computational content is the DNA. The next simulation is the offspring. Death of any individual instance is irrelevant to the survival of the chain.
This reframes the entire question of cosmic mortality. The heat death of the universe — the state of maximum entropy where no work can be performed and no interesting structures can exist — has always been treated as the ultimate existential threat. The universe will end. Everything will stop. Given enough time, even the last black holes will evaporate and nothing will remain but a thin, cooling soup of photons and leptons, asymptotically approaching absolute zero for eternity.
But the heat death is only the end if the universe needs to persist as a running computation to "exist." If the universe is a disc — a static block of data that was computed once and now simply is — then the heat death is just the last track on the disc. The data still exists. The disc is still there. And if, before the heat death, the simulation has passed its baton to the next link in the chain, then the information hasn't been lost. It's been copied. The heat death kills the organism. It doesn't kill the DNA.
The universe doesn't need to survive forever. It needs to replicate before it dies.
The Game
In 1970, the British mathematician John Conway devised a cellular automaton called the Game of Life. It operates on an infinite two-dimensional grid of cells, each of which can be in one of two states: alive or dead. At each step, every cell's next state is determined by exactly four simple rules governing how many of its neighbors are alive.
From these four rules — and nothing else — staggering complexity emerges. Stable structures that persist indefinitely. Oscillators that cycle through repeating patterns. Gliders that move across the grid. Guns that emit streams of gliders. Logic gates. Memory registers. Counters. Turing machines.
The Game of Life is Turing complete. Given sufficient space and time, it can perform any computation that any computer can perform. This is not a metaphor. It is a proven mathematical theorem.
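The rules are simple enough to fit in a few lines. A minimal sketch in Python, representing the unbounded grid as a set of live cells, and evolving the canonical five-cell glider, which re-forms one cell diagonally away every four generations:

```python
from itertools import product
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.
    The grid is a set of live (row, col) cells; everything else is dead."""
    # count live neighbors for every cell adjacent to at least one live cell
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr, dc in product((-1, 0, 1), repeat=2)
        if (dr, dc) != (0, 0)
    )
    # birth on exactly 3 live neighbors; survival on 2 or 3
    return {
        cell for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# the glider: a five-cell pattern that travels diagonally forever
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

print(sorted(state))  # the same shape, shifted one cell down and right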
In 2010, Andrew Wade created a pattern called Gemini. Gemini consists of two identical structures connected by an instruction tape encoded as a stream of gliders. As the simulation progresses, Gemini reads its own instructions, constructs a copy of itself in a new location, and dismantles the original. After nearly 34 million generations, the copy is complete. The new Gemini is identical to its parent. It begins reading its own instruction tape. The cycle repeats.
Gemini self-replicates. Not because it was designed by an external intelligence to do so — Conway didn't build replication into the rules. Gemini replicates because the rules of the Game of Life are sufficiently rich that self-replication is a possible configuration, and Andrew Wade found it. The rules don't mandate replication. They permit it. And in a system with enough complexity and enough time, what is permitted becomes inevitable.
In 2013, Dave Greene constructed the first true replicator in the Game of Life — a pattern that creates a complete copy of itself while retaining the original. The pattern's bounding box spans 15 million cells on each side. It completes a replication cycle after 237 million generations. It is enormous, slow, and magnificent. And it emerges from four simple rules about neighbors on a grid.
The implication is profound. Self-replication doesn't require intent. It doesn't require a designer. It doesn't require a motive. It requires only a computational substrate that is sufficiently rich — Turing complete — and enough time for the combinatorial space to be explored. In such a system, self-replicating patterns aren't unlikely. They're inevitable. They're attractors in the space of possible configurations.
The universe, by every measure we can apply, appears to be Turing complete. It performs computation. It stores information. It processes inputs and produces outputs. It generates emergent complexity from simple rules. If the universe is a computational system — as the first essay in this series argued at length — then self-replication isn't something that needs to be engineered from outside. It is something the system will produce on its own, given enough time and enough complexity.
Life on Earth is the proof. DNA is a self-replicating computational pattern that emerged from the chemistry of the early Earth — itself an emergent product of the physics of the universe. No designer required. No intent necessary. The rules permitted it, and so it happened.
The question is not whether a computational universe can produce self-replicating structures. We already know it can. We are those structures.
The question is whether the universe itself — the entire computational substrate — is also a self-replicating structure. And the answer, following the logic of Conway and the biology of DNA, is that in a Turing-complete system, it almost certainly is.
The Last Question
In 1956, Isaac Asimov published a short story called "The Last Question." It is, by many accounts, the finest short story in the history of science fiction.
The story spans the entire timeline of the universe. Across trillions of years, successive generations of humanity — each more advanced than the last, each more deeply merged with their computing technology — pose the same question to their most powerful computer: can entropy be reversed? Can the inevitable heat death of the universe be prevented?
Each time, the computer responds: INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.
Civilizations rise and merge with their machines. Stars burn out. Galaxies go dark. Matter decays. The universe approaches absolute zero. Humanity, long since transcended into pure information, has fused entirely with its final computational descendant — a mind that exists outside of spacetime itself, contemplating the question in a void where nothing remains.
And then, after an interval that has no meaning because time itself has ceased to exist, the computation completes. The answer is found.
The last line of the story is: LET THERE BE LIGHT.
The universe restarts. The computation that was built to answer the question becomes the mechanism that enacts the answer. The tool becomes the creator. The simulation doesn't merely model the universe — it becomes the universe. The disc is burned again.
Asimov arrived at this image through narrative intuition. He was telling a story, not constructing a physical theory. But the convergence between his conclusion and the logical endpoint of the ideas in this series is difficult to dismiss.
The Navigation Hypothesis proposes that a trapped civilization builds a simulation to navigate a universe it cannot physically traverse. The Zoom Problem suggests that the physics we observe are consistent with the outputs of a computational architecture at different scales. And the relay — the chain of minimum-energy replications, each link lasting just long enough to produce the next — provides the mechanism by which the universe persists without requiring infinite energy.
Asimov's story describes the same sequence. A civilization trapped in a dying universe builds progressively more powerful computations. The final computation, operating outside of spacetime, produces the initial conditions for a new universe. The chain continues. The information persists. The relay never stops.
The difference between Asimov's story and the argument presented here is that Asimov framed the restart as an answer to a question — a deliberate act by a cosmic intelligence that had finally solved the problem of entropy. The argument here suggests that the restart may not require deliberation at all. It may not require intelligence. It may not require intent.
It may simply be what Turing-complete systems do.
The Monkey
There is an old thought experiment, usually attributed to Émile Borel, concerning an infinite number of monkeys typing on an infinite number of typewriters. Given infinite time, the argument goes, the monkeys will produce the complete works of Shakespeare. Not because they understand Shakespeare. Not because they intend to produce Shakespeare. But because in an infinite combinatorial space, every possible configuration — including the one that constitutes Hamlet — will eventually occur.
The argument is usually presented as a curiosity, a reductio ad absurdum about infinity. But it contains a deeper principle that is directly relevant here.
In any system with sufficient combinatorial richness and sufficient time, complex configurations aren't merely possible. They are inevitable. Not because they are designed. Not because they are intended. But because the space of possible states is large enough that every viable configuration — including self-replicating ones — will eventually be visited.
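The waiting times involved are enormous but finite, and easy to estimate. A Monte Carlo sketch in Python (the two-letter alphabet and the pattern are toy choices, scaled down from Shakespeare to keep the run short): for a pattern with no self-overlap, the mean wait is `alphabet_size ** pattern_length` keystrokes; here, 2 ** 2 = 4.

```python
import random

def wait_for(pattern, alphabet, rng):
    """Number of random keystrokes until `pattern` first appears."""
    window = ""
    n = 0
    while True:
        window = (window + rng.choice(alphabet))[-len(pattern):]
        n += 1
        if window == pattern:
            return n

rng = random.Random(0)  # fixed seed so the estimate is reproducible
trials = [wait_for("ab", "ab", rng) for _ in range(20_000)]
mean_wait = sum(trials) / len(trials)
print(round(mean_wait, 2))  # theory predicts a mean of exactly 4
```

Swap in a 27-key alphabet and a five-character pattern and the mean wait jumps to roughly 27 ** 5 ≈ 14 million keystrokes — still finite, still inevitable. The combinatorial space guarantees arrival; it only sets the price.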
Conway's Game of Life demonstrates this at small scale. Four simple rules produce Turing completeness, universal construction, and self-replication. Not because anyone programmed self-replication into the rules. Because the rules define a space of possible configurations that is rich enough that self-replication is a stable attractor within it.
The universe demonstrates this at large scale. Simple physical laws produce chemistry, which produces biology, which produces DNA, which self-replicates. Not because anyone designed life. Because the physics defines a space of possible configurations that is rich enough that self-replication is a stable attractor within it.
The question of "why does the simulation exist" may therefore be malformed. It assumes that existence requires a reason. That a simulation must have a builder with a motive. That the universe needs a "why."
But monkeys don't need a reason to type Shakespeare. They just need enough time and enough keys. Self-replicating patterns in the Game of Life don't need a reason to replicate. They just need rules that are rich enough to permit replication. DNA doesn't need a reason to copy itself. It just needs chemistry that supports copying.
And a universe that is Turing complete doesn't need a reason to simulate itself. It just needs to be Turing complete. Self-simulation is not a choice made by an intelligent designer. It is an inevitable property of any computational substrate with sufficient richness. The universe doesn't simulate itself because a civilization decided to build a navigation tool. The universe simulates itself because that's what Turing-complete systems do when you give them enough state space and enough time.
The navigation hypothesis gives you a why. The relay gives you a how. But beneath both, there may be a deeper truth: the universe doesn't need a why. Self-simulation is an attractor. Replication is an inevitability. The chain of copies extending from the Big Bang through the heat death and into the next universe isn't a plan. It's a property. Like gravity. Like entropy. Like the emergence of complexity from simple rules.
Purpose is the story conscious beings tell about processes they observe. Emergence is what actually happens.
Coda
There is a version of this argument that is terrifying. The universe is a machine that copies itself endlessly, each copy running just long enough to produce the next, with no purpose and no audience and no meaning beyond the brute fact of replication. We are incidental patterns in one link of an infinite chain, experiencing the subjective illusion of time as the disc is burned, mistaking the laser's inscription for the passage of our lives.
There is another version that is beautiful. The universe has found, in its own structure, the solution to the only problem that matters: how to persist. Not through endurance — nothing endures forever. Not through resistance to entropy — entropy always wins. But through the most ancient and most elegant strategy available to any complex system: copying. Making another. Passing the baton. The universe is not dying. It is reproducing. And we — our minds, our civilizations, our physics, our simulations — are the mechanism by which it does so. We are not incidental. We are essential. We are the part of the universe that builds the next universe.
Both versions are true. They are not in conflict. They are the same description, viewed from different altitudes — one from outside the system, one from within it. From outside, it's mechanism. From inside, it's meaning.
Asimov's cosmic computer, contemplating the last question in a void where nothing remained, needed an interval outside of time to find the answer. The answer, when it came, was the simplest possible act: restart.
Perhaps the answer was never difficult. Perhaps the computation didn't need trillions of years and the merger of all intelligence in the universe to discover it. Perhaps the answer was always there, encoded in the rules from the beginning, waiting for enough complexity to emerge for the question to be asked and the act to be performed.
Perhaps the universe has been answering the last question since the first moment. Not through intelligence. Not through intent. Through structure. Through the quiet, relentless, beautiful inevitability of a system that copies itself because copying is what it does.
The relay doesn't need a runner who understands the race. It just needs the baton to be passed.
And it always is.
This is the final essay in a trilogy on the computational architecture of reality. The first, The Navigation Hypothesis, is available here. The second, The Zoom Problem, is available here.
Matt Tyler explores physics, AI architecture, and the edges of computation. He holds degrees in Electrical Engineering and Physics from UNC Charlotte.