Vazza Overstates Constraints on the Simulation
Lincoln Cannon
28 May 2025
Most of us first encounter the Simulation Hypothesis through science fiction, often as something of a metaphysical thrill ride. But as computational theory and cosmology advance, serious thinkers – philosophers like Nick Bostrom, physicists, computer scientists, and even theologians – have begun analyzing the feasibility of computed worlds. Recently, Franco Vazza published “Astrophysical constraints on the simulation hypothesis for this Universe: why it is (nearly) impossible that we live in a simulation,” a scientific analysis of the Simulation Hypothesis.
Vazza’s analysis is impressive in both scope and detail. He incorporates influential contemporary hypotheses about the relationship between information, energy, and the structural constraints of our universe. These include the Holographic Principle, Landauer’s limit, and astrophysical energy bounds.
From them, Vazza reasons that any simulation of our universe (even at reduced scales) would require astronomically large amounts of energy. So large, he judges from his calculations, that the requirements would exceed anything feasible within our universe. Not even black holes instrumented as computers, at what he deems the bounds of theoretical speculation, could handle the demands of a low-resolution real-time simulation. Thus, he concludes, energy requirements render the Simulation Hypothesis practically impossible for any simulator operating within physics like our own.
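To get a feel for the kind of reasoning involved, here is a minimal sketch of a Landauer-limit estimate: the thermodynamic floor on the energy needed to irreversibly erase bits at a given temperature. The bit count and supernova figure below are illustrative assumptions of mine, not Vazza’s numbers.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, by SI definition)

def landauer_energy(bits: float, temp_kelvin: float = 2.7) -> float:
    """Minimum energy (J) to irreversibly erase `bits` bits at temperature T,
    per Landauer's principle: E = N * k_B * T * ln(2)."""
    return bits * K_B * temp_kelvin * math.log(2)

# Illustrative scenario: erasing ~1e90 bits at roughly the CMB temperature.
energy = landauer_energy(1e90)
supernova = 1e44  # order-of-magnitude energy output of a supernova, J
print(f"{energy:.2e} J, roughly {energy / supernova:.0e} supernova-equivalents")
```

Even this toy calculation shows why Landauer-style bounds dominate the argument: the cost scales linearly with the assumed bit count, so every order of magnitude shaved off the information content of the simulation shaves an order of magnitude off the energy bill.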
Of course, he cannot speak to physics unlike our own, as he admits. But he rightly points out the practical triviality of speculating about such physics: the more alien the imagined physics, the less such imagination implies anything meaningful about our own potential. So alternative physics can’t save the Simulation Hypothesis.
Although I’m not an expert in the relevant physics, I assume Vazza has accurately characterized the scientific hypotheses on which he draws. And although I haven’t carefully reviewed his logic and math, I assume they are valid and correct. Even granting those assumptions, however, a model is only as strong as the ensemble of its assumptions.
As much as I appreciate Vazza’s audacity, I find his conclusion overstated and his apparent confidence unwarranted. He overlooks or glosses over foundational assumptions that deserve more attention. It’s premature to declare the Simulation Hypothesis “impossible,” or even nearly so.
Overestimating Costs
Perhaps the greatest overreach arises from the assumptions that Vazza uses to calculate energy requirements. He acknowledges that the Simulation Hypothesis doesn’t depend on simulating an entire universe, or even an entire planet at full resolution, although he devotes considerable attention to such ideas. And he does briefly consider the possibility of solipsism. But he stops short of fully considering the minimal costs of supporting subjective experience.
Consciousness is not well understood by science or philosophy. Mind may emerge from or supervene on a relatively coarse substrate, which resists easy quantification. We cannot say, at least for now, what a minimum necessary substrate for experience would be. But we can say, as a matter of definition, that a simulation would need to provide only whatever substrate proves sufficient for consistent and convincing experience.
To achieve that, a simulation may economize substantially. For example, it could leverage compressed statistical descriptions of substrate that, in turn, feed on-demand minimal-resolution rendering of substrate. Vazza suggests this would still be too costly due to the energy requirements of error correction, which he briefly characterizes in a footnote as being consistent across both irreversible (standard) and reversible computing contexts. However, competing hypotheses suggest that the cost of error correction may be considerably decreased within the context of reversible computing.
It’s worth recalling that, in calculations like these, small differences in assumptions can multiply into vast discrepancies between conclusions. We can see this across a diversity of approaches to the Simulation Argument. And that’s to be expected, because we see it elsewhere. For example, small differences in the values assigned to components of the Drake Equation yield wildly different estimates of the number of extraterrestrial civilizations.
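The Drake Equation analogy is easy to demonstrate. In a multiplicative model, modest per-factor disagreements compound into orders-of-magnitude disagreement in the product. The “optimistic” and “pessimistic” parameter values below are illustrative choices of mine, not canonical estimates.

```python
import math

# Drake-style product: N = R* · fp · ne · fl · fi · fc · L
# Each list holds plausible-looking but purely illustrative values
# for the seven factors, in order.
optimistic = [3.0, 0.5, 2.0, 0.5, 0.2, 0.2, 1e4]
pessimistic = [1.0, 0.2, 0.5, 0.05, 0.01, 0.05, 1e3]

def estimate(factors: list[float]) -> float:
    """Multiply the factors together, as the Drake Equation does."""
    return math.prod(factors)

# No single factor differs by more than ~20x, yet the products
# differ by a factor on the order of 100,000.
print(estimate(optimistic), estimate(pessimistic))
```

The same structural point applies to simulation-cost estimates: Vazza’s conclusion is a product of many assumed factors, and a defensible adjustment to each can swing the total by many orders of magnitude.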
Underestimating Superintelligence
Central to the Simulation Hypothesis is the idea that the simulators are, compared to us, vastly more intelligent – superintelligent. Vazza’s assessment of energy production possibilities, however, comes merely from contemporary observations of transient natural phenomena such as supernovae. What about hypothetical technologies for sustainable energy that humans can already imagine, such as encapsulating stars in Dyson spheres, extracting energy from the rotation of black holes, or harnessing vacuum energy? And what if entirely novel methods for energy acquisition are discovered or created by superintelligence – intelligence that is, by definition, far superior at discovery and creation?
Contemporary engineers, of the intelligent but far from superintelligent variety, have already begun to take seriously energy production possibilities that their recent predecessors would have immediately deemed impossible, or even ridiculed. Recently, Google announced that its quantum computer performed a computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion years (vastly longer than the age of our observed universe). And they observed that this “lends credence to the notion that quantum computation occurs in many parallel universes.” Such possibilities should temper our confidence in calculations stemming from strict energy ceilings based on present paradigms.
We Are Proof of Concept
Superintelligent possibilities may not be merely hypothetical. Whatever the metaphysics, something clearly has the energy to run the world of experience in which we now find ourselves. The very existence of our own consciousness-supporting world is proof of concept, whether or not superintelligence ever actually attains such power. And Vazza’s “impossible” conclusion may be construed as a direct contradiction of this experience.
Moreover, possibility isn’t the most salient issue. Our world is clearly possible, at least once. The more salient issue is efficient cause. While the causes that Vazza considers may be impossible, at least one other efficient cause must be possible. And superintelligence may be capable of replicating it, computationally or otherwise.
Further, the regularity with which biology, culture, and technology converge on similar solutions across disparate moments in time and space should make us deeply skeptical that we (or our world) are unique, the first or only in kind. Proposing that our world of experience is unique, and that it cannot be computationally or otherwise artificially replicated, may raise more philosophical problems than it solves.
Conclusion
I sincerely commend Vazza for elevating consideration of the Simulation Hypothesis. But, in his own words, “the number of mysteries for physics to investigate is still so immense.” That immensity of mystery applies to the potential cost of simulation, as well as the potential capacity of superintelligence, even in worlds with physics like our own. And, contemplating repeated convergence in our actual world of experience, we should recognize Vazza’s “impossible” conclusion to be hyperbole.
If you’d like to explore my critique of Vazza’s paper at greater depth, check out “A Critical Examination of Astrophysical Constraints on the Simulation Hypothesis.” It’s a paper that I asked Google Gemini to generate, elaborating on and further substantiating these ideas. AI already has the ability to draw broadly (and yet still fallibly) on human expertise, far beyond what any of us can draw on individually. The immensity of mystery, it seems, isn’t destined to decrease.