
Bayesian Generalized Simulation Argument and Calculator

Lincoln Cannon

15 December 2025

"Computer" by Lincoln Cannon

I’ve composed a Bayesian formulation of the generalized Simulation Argument. To help guide you through it, I construct the formulation step by step, beginning with Nick Bostrom’s popular formulation of the standard Simulation Argument, and proceeding through Brian Eggleston’s and my own refinements to the argument. Finally, I generalize a composite of the previous formulations, applying it to all feasible creation mechanisms.

The generalized Simulation Argument is a principal component of the Creation Argument of the New God Argument. But it’s not identical to the Creation Argument. The Creation Argument also includes two pragmatically motivated assumptions: that we will not become extinct before evolving into superhumanity, and that superhumanity probably would create many worlds emulating its evolutionary history. The idea is that trusting actively in such possibilities actually increases their probabilities, as we work to realize them.

This article is technical – more so than most of my articles. If you’re interested in metaphorical rabbit holes, you’ll probably enjoy it. But if you just want the gist, I recommend reading only the opening paragraphs of each section. Then, if you’re reading the article online, you can jump to the end of the article and use the Simulation Argument calculator to help you decide whether we’re destined for DOOM or living in a world created by SUPERS.

Bostrom Simulation Argument

Nick Bostrom’s formulation of the Simulation Argument formalizes a mathematical relationship between the potential computational power of superhuman civilizations and the probability that our world is computed (avoiding inaccurate connotations of “simulated”). The argument demonstrates that if human-like civilizations typically evolve to a superhuman stage and compute emulations of their evolutionary ancestors, then computed observers would vastly outnumber non-computed observers. This results in a trilemma: civilizations either universally perish before reaching this capacity, systematically abstain from using it, or generate a population distribution in which we almost certainly live in a computed world. By applying principles of indifference to connect objective frequencies with subjective beliefs, the argument compels us to expect that our world is computed, unless we assume abstinence or doom awaits us.

1) P(SUPERS) : fraction of all human worlds that become superhuman

2) E[WORLDS] : average total number of human worlds that are ever computed directly or indirectly by a single superhuman world

3) E[HUMANS] : average total number of humans that ever live in a single human world

4) E[HUMANS | COMPUTED] : average total number of humans that ever live in all human worlds that are ever computed directly or indirectly by a single superhuman world

5) E[HUMANS | COMPUTED] = P(SUPERS) * E[WORLDS] * E[HUMANS]

6) E[HUMANS | NONCOMPUTED] : average total number of humans that ever live in a single human world that was not computed directly or indirectly by any superhuman world

7) E[HUMANS | NONCOMPUTED] = E[HUMANS]

8) P(COMPUTED) : fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world

9) P(COMPUTED) = E[HUMANS | COMPUTED] / (E[HUMANS | COMPUTED] + E[HUMANS | NONCOMPUTED])

10) P(COMPUTED) = (P(SUPERS) * E[WORLDS] * E[HUMANS]) / ((P(SUPERS) * E[WORLDS] * E[HUMANS]) + E[HUMANS])

11) P(CHOOSE) : fraction of all superhuman worlds that choose to compute human worlds

12) E[WORLDS | CHOOSE] : average total number of human worlds that are ever computed directly or indirectly by a single superhuman world that chooses to compute human worlds

13) E[WORLDS] = P(CHOOSE) * E[WORLDS | CHOOSE]

14) P(COMPUTED) = (P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) / ((P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) + E[HUMANS])

15) P(COMPUTED) = ((P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) * E[HUMANS]) / (((P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1) * E[HUMANS])

16) P(COMPUTED) = (P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) / ((P(SUPERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1)

17) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(SUPERS) ≈ 0 or P(CHOOSE) ≈ 0 or P(COMPUTED) ≈ 1

18) Cr(DOOM) : subjective credence that our world will never become superhuman

19) x : specific fraction of all human worlds that become superhuman

20) Cr(DOOM | P(SUPERS) = x) = 1 - x : predictive indifference principle that you have insufficient information to distinguish the probable fate of our world from that of typical human worlds

21) (E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or P(CHOOSE) ≈ 0 or P(COMPUTED) ≈ 1

22) Cr(ABSTAIN) : subjective credence that our world will become a superhuman world that abstains from computing human worlds

23) y : specific fraction of all superhuman worlds that choose to compute human worlds

24) Cr(ABSTAIN | P(SUPERS) = x, P(CHOOSE) = y) = x * (1 - y) : predictive indifference principle that you have insufficient information to distinguish the probable values of our world, if it becomes superhuman, from those of typical superhuman worlds

25) (E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or P(COMPUTED) ≈ 1

26) Cr(COMPUTED) : subjective credence that our world was computed directly or indirectly by a superhuman world

27) z : specific fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world

28) Cr(COMPUTED | P(COMPUTED) = z) = z : indexical bland indifference principle that you have insufficient information to distinguish your experience in our world from typical experience in computed worlds

29) (E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(COMPUTED) ≈ 1
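
To make these steps concrete, here is a minimal Python sketch of the relationships in steps 16, 20, 24, and 28. It is not the calculator’s implementation, and the function and parameter names are only illustrative; you supply x, y, and E[WORLDS | CHOOSE].

```python
def bostrom_p_computed(x, y, e_worlds_choose):
    """Step 16: P(COMPUTED) from P(SUPERS) = x, P(CHOOSE) = y, and E[WORLDS | CHOOSE]."""
    n = x * y * e_worlds_choose
    return n / (n + 1)

def bostrom_credences(x, y, e_worlds_choose):
    """Steps 20, 24, and 28: credences under the indifference principles."""
    return {
        "Cr(DOOM)": 1 - x,                                          # step 20
        "Cr(ABSTAIN)": x * (1 - y),                                 # step 24
        "Cr(COMPUTED)": bostrom_p_computed(x, y, e_worlds_choose),  # step 28
    }

# Example: with x = 0.5, y = 0.5, and a million computed worlds per choosing
# superhuman world, Cr(COMPUTED) is within about 0.000004 of 1.
print(bostrom_credences(0.5, 0.5, 1_000_000))
```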

Eggleston Simulation Argument with Self-Exclusion

Brian Eggleston refines the Simulation Argument by applying the principle of self-exclusion, which asserts that a civilization cannot compute itself and restricts the set of our possible computers to other worlds in our past. This constraint clarifies that the probability of our world being computed depends not on our own future potential, but on whether previous civilizations successfully reached a superhuman stage before us. The logic changes the trilemma by adding a temporal filter: we exist in a computed world only if previous worlds avoided abstinence and doom long enough to compute emulations of their evolutionary history. Consequently, the force of the Simulation Argument depends on our rejecting the possibility that our world is the first or only to become human.

0E) P(OTHERS) : fraction of all worlds that become human before our own

1E) P(SUPERS | OTHERS) : fraction of all human worlds that become superhuman before our own

2) E[WORLDS] : average total number of human worlds that are ever computed directly or indirectly by a single superhuman world

3) E[HUMANS] : average total number of humans that ever live in a single human world

4E) E[HUMANS | COMPUTED, OTHERS] : average total number of humans that ever live in all human worlds that are ever computed directly or indirectly by a single superhuman world before our own

5E) E[HUMANS | COMPUTED, OTHERS] = P(OTHERS) * P(SUPERS | OTHERS) * E[WORLDS] * E[HUMANS]

6) E[HUMANS | NONCOMPUTED] : average total number of humans that ever live in a single human world that was not computed directly or indirectly by any superhuman world

7) E[HUMANS | NONCOMPUTED] = E[HUMANS]

8E) P(COMPUTED | OTHERS) : fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world before our own

9E) P(COMPUTED | OTHERS) = E[HUMANS | COMPUTED, OTHERS] / (E[HUMANS | COMPUTED, OTHERS] + E[HUMANS | NONCOMPUTED])

10E) P(COMPUTED | OTHERS) = (P(OTHERS) * P(SUPERS | OTHERS) * E[WORLDS] * E[HUMANS]) / ((P(OTHERS) * P(SUPERS | OTHERS) * E[WORLDS] * E[HUMANS]) + E[HUMANS])

11) P(CHOOSE) : fraction of all superhuman worlds that choose to compute human worlds

12) E[WORLDS | CHOOSE] : average total number of human worlds that are ever computed directly or indirectly by a single superhuman world that chooses to compute human worlds

13) E[WORLDS] = P(CHOOSE) * E[WORLDS | CHOOSE]

14E) P(COMPUTED | OTHERS) = (P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) / ((P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) + E[HUMANS])

15E) P(COMPUTED | OTHERS) = ((P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) * E[HUMANS]) / (((P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1) * E[HUMANS])

16E) P(COMPUTED | OTHERS) = (P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) / ((P(OTHERS) * P(SUPERS | OTHERS) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1)

17E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(OTHERS) ≈ 0 or P(SUPERS | OTHERS) ≈ 0 or P(CHOOSE) ≈ 0 or P(COMPUTED | OTHERS) ≈ 1

18) Cr(DOOM) : subjective credence that our world will never become superhuman

19E) x : specific fraction of all human worlds that become superhuman before our own

20E) Cr(DOOM | P(SUPERS | OTHERS) = x) = 1 - x : predictive indifference principle that you have insufficient information to distinguish the probable fate of our world from that of typical human worlds before our own

21E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(OTHERS) ≈ 0 or Cr(DOOM) ≈ 1 or P(CHOOSE) ≈ 0 or P(COMPUTED | OTHERS) ≈ 1

22) Cr(ABSTAIN) : subjective credence that our world will become a superhuman world that abstains from computing human worlds

23) y : specific fraction of all superhuman worlds that choose to compute human worlds

24E) Cr(ABSTAIN | P(SUPERS | OTHERS) = x, P(CHOOSE) = y) = x * (1 - y) : predictive indifference principle that you have insufficient information to distinguish the probable values of our world, if it becomes superhuman, from those of typical superhuman worlds before our own

25E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(OTHERS) ≈ 0 or Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or P(COMPUTED | OTHERS) ≈ 1

26E) Cr(COMPUTED | OTHERS) : subjective credence that our world was computed directly or indirectly by a superhuman world before our own

27E) z : specific fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world before our own

28E) Cr(COMPUTED | OTHERS, P(COMPUTED | OTHERS) = z) = z : indexical bland indifference principle that you have insufficient information to distinguish your experience in our world from typical experience in computed worlds before our own

29E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ P(OTHERS) ≈ 0 or Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(COMPUTED | OTHERS) ≈ 1

30E) Cr(UNIQUE) : subjective credence that our world is the first or only to become human

31E) w : specific fraction of all worlds that become human before our own

32E) Cr(UNIQUE | P(OTHERS) = w) = 1 - w : indexical indifference principle that you have insufficient information to distinguish the temporal location of our world from that of typical human worlds before our own

33E) (E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(UNIQUE) ≈ 1 or Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(COMPUTED | OTHERS) ≈ 1
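
The same sketch extends to the self-exclusion refinement. Steps 16E, 20E, 24E, 28E, and 32E add the factor w for worlds that became human before our own; again, the names below are only illustrative, not the calculator’s code.

```python
def eggleston_p_computed(w, x, y, e_worlds_choose):
    """Step 16E: P(COMPUTED | OTHERS) from P(OTHERS) = w, P(SUPERS | OTHERS) = x,
    P(CHOOSE) = y, and E[WORLDS | CHOOSE]."""
    n = w * x * y * e_worlds_choose
    return n / (n + 1)

def eggleston_credences(w, x, y, e_worlds_choose):
    """Steps 32E, 20E, 24E, and 28E: credences under the indifference principles."""
    return {
        "Cr(UNIQUE)": 1 - w,                                                     # step 32E
        "Cr(DOOM)": 1 - x,                                                       # step 20E
        "Cr(ABSTAIN)": x * (1 - y),                                              # step 24E
        "Cr(COMPUTED | OTHERS)": eggleston_p_computed(w, x, y, e_worlds_choose), # step 28E
    }

# Example: even if we are agnostic about predecessors (w = 0.5), the same inputs
# as before still push Cr(COMPUTED | OTHERS) very close to 1.
print(eggleston_credences(0.5, 0.5, 0.5, 1_000_000))
```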

Cannon Simulation Argument with Self-Exclusion and Technological Uniformity

My formulation strengthens the Simulation Argument by introducing the principle of technological uniformity, which posits that feasible technologies tend to emerge repeatedly across civilizations situated within similar physics. This constraint refines the reference class to focus on potential computers that resemble our own world, avoiding speculation about radically unique or magical physics. By applying the principle of mediocrity, the argument asserts that if computing worlds is feasible for us, it was likely feasible for predecessors operating within similar laws of physics. Consequently, the force of the original argument is restored, compelling us to expect that our world is computed, unless we assume abstinence or doom awaits us.

0CU) P(OTHERS | UNIFORM) : fraction of all worlds that become human before our own, given uniform physics

1CU) P(SUPERS | OTHERS, UNIFORM) : fraction of all human worlds that become superhuman before our own, given uniform physics

2) E[WORLDS] : average total number of human worlds that are ever computed directly or indirectly by a single superhuman world

3) E[HUMANS] : average total number of humans that ever live in a single human world

4CU) E[HUMANS | COMPUTED, OTHERS, UNIFORM] : average total number of humans that ever live in all human worlds that are ever computed directly or indirectly by a single superhuman world before our own, given uniform physics

5CU) E[HUMANS | COMPUTED, OTHERS, UNIFORM] = P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]

6) E[HUMANS | NONCOMPUTED] : average total number of humans that ever live in a single human world that was not computed directly or indirectly by any superhuman world

7) E[HUMANS | NONCOMPUTED] = E[HUMANS]

8CU) P(COMPUTED | OTHERS, UNIFORM) : fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world before our own, given uniform physics

9CU) P(COMPUTED | OTHERS, UNIFORM) = E[HUMANS | COMPUTED, OTHERS, UNIFORM] / (E[HUMANS | COMPUTED, OTHERS, UNIFORM] + E[HUMANS | NONCOMPUTED])

10CU) P(COMPUTED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]) + E[HUMANS])

11) P(CHOOSE) : fraction of all superhuman worlds that choose to compute human worlds

12) E[WORLDS | CHOOSE] : average total number of human worlds that are ever computed directly or indirectly by a single superhuman world that chooses to compute human worlds

13) E[WORLDS] = P(CHOOSE) * E[WORLDS | CHOOSE]

14CU) P(COMPUTED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) + E[HUMANS])

15CU) P(COMPUTED | OTHERS, UNIFORM) = ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) * E[HUMANS]) / (((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1) * E[HUMANS])

16CU) P(COMPUTED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1)

17CU) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ P(SUPERS | OTHERS, UNIFORM) ≈ 0 or P(CHOOSE) ≈ 0 or P(COMPUTED | OTHERS, UNIFORM) ≈ 1

18) Cr(DOOM) : subjective credence that our world will never become superhuman

19CU) x : specific fraction of all human worlds that become superhuman before our own, given uniform physics

20CU) Cr(DOOM | P(SUPERS | OTHERS, UNIFORM) = x) = 1 - x : predictive indifference principle that you have insufficient information to distinguish the probable fate of our world from that of typical human worlds before our own, given uniform physics

21CU) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or P(CHOOSE) ≈ 0 or P(COMPUTED | OTHERS, UNIFORM) ≈ 1

22) Cr(ABSTAIN) : subjective credence that our world will become a superhuman world that abstains from computing human worlds

23) y : specific fraction of all superhuman worlds that choose to compute human worlds

24CU) Cr(ABSTAIN | P(SUPERS | OTHERS, UNIFORM) = x, P(CHOOSE) = y) = x * (1 - y) : predictive indifference principle that you have insufficient information to distinguish the probable values of our world, if it becomes superhuman, from those of typical superhuman worlds before our own, given uniform physics

25CU) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or P(COMPUTED | OTHERS, UNIFORM) ≈ 1

26CU) Cr(COMPUTED | OTHERS, UNIFORM) : subjective credence that our world was computed directly or indirectly by a superhuman world before our own, given uniform physics

27CU) z : specific fraction of all humans that live in worlds that are computed directly or indirectly by any superhuman world before our own, given uniform physics

28CU) Cr(COMPUTED | OTHERS, UNIFORM, P(COMPUTED | OTHERS, UNIFORM) = z) = z : indexical bland indifference principle that you have insufficient information to distinguish your experience in our world from typical experience in computed worlds before our own, given uniform physics

29CU) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(COMPUTED | OTHERS, UNIFORM) ≈ 1
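
Under technological uniformity, the only change to the sketch is that P(OTHERS | UNIFORM) is taken to be approximately 1, so the w factor effectively drops out (steps 16CU and 29CU). A minimal sketch, again with illustrative names:

```python
def cannon_p_computed(x, y, e_worlds_choose, p_others_uniform=1.0):
    """Step 16CU: P(COMPUTED | OTHERS, UNIFORM), with P(OTHERS | UNIFORM) ~ 1."""
    n = p_others_uniform * x * y * e_worlds_choose
    return n / (n + 1)

def cannon_credences(x, y, e_worlds_choose):
    """Steps 20CU, 24CU, and 28CU: credences under the indifference principles."""
    return {
        "Cr(DOOM)": 1 - x,
        "Cr(ABSTAIN)": x * (1 - y),
        "Cr(COMPUTED | OTHERS, UNIFORM)": cannon_p_computed(x, y, e_worlds_choose),
    }
```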

Cannon Generalized Simulation Argument with Self-Exclusion and Technological Uniformity

My generalized Simulation Argument expands the scope of the original, recognizing that computation is just one of many potential mechanisms (along with terraforming or cosmoforming) for creating worlds. This broader vision maintains the expectation that world creation is a scalable process, allowing a single parent civilization to create many child worlds through various technological means. By integrating the previous constraints of time and uniformity, the argument asserts that if creation is typically feasible and scalable, then created worlds will vastly outnumber uncreated worlds. Therefore, unless we assume that all forms of creation are impossible or ethically prohibited, we should conclude with high credence that our own world is created.

0CU) P(OTHERS | UNIFORM) : fraction of all worlds that become human before our own, given uniform physics

1CU) P(SUPERS | OTHERS, UNIFORM) : fraction of all human worlds that become superhuman before our own, given uniform physics

2CG) E[WORLDS] : average total number of human worlds that are ever created directly or indirectly by a single superhuman world

3) E[HUMANS] : average total number of humans that ever live in a single human world

4CG) E[HUMANS | CREATED, OTHERS, UNIFORM] : average total number of humans that ever live in all human worlds that are ever created directly or indirectly by a single superhuman world before our own, given uniform physics

5CG) E[HUMANS | CREATED, OTHERS, UNIFORM] = P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]

6CG) E[HUMANS | NONCREATED] : average total number of humans that ever live in a single human world that was not created directly or indirectly by any superhuman world

7CG) E[HUMANS | NONCREATED] = E[HUMANS]

8CG) P(CREATED | OTHERS, UNIFORM) : fraction of all humans that live in worlds that are created directly or indirectly by any superhuman world before our own, given uniform physics

9CG) P(CREATED | OTHERS, UNIFORM) = E[HUMANS | CREATED, OTHERS, UNIFORM] / (E[HUMANS | CREATED, OTHERS, UNIFORM] + E[HUMANS | NONCREATED])

10CG) P(CREATED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * E[WORLDS] * E[HUMANS]) + E[HUMANS])

11CG) P(CHOOSE) : fraction of all superhuman worlds that choose to create human worlds

12CG) E[WORLDS | CHOOSE] : average total number of human worlds that are ever created directly or indirectly by a single superhuman world that chooses to create human worlds

12.1CG) E[WORLDS | CHOOSE, COMPUTED] : average total number of human worlds that are ever created using computation directly or indirectly by a single superhuman world that chooses to create human worlds

12.2CG) E[WORLDS | CHOOSE, NONCOMPUTED] : average total number of human worlds that are ever created using mechanisms other than computation (e.g., terraforming or cosmoforming) directly or indirectly by a single superhuman world that chooses to create human worlds

12.3CG) E[WORLDS | CHOOSE] = E[WORLDS | CHOOSE, COMPUTED] + E[WORLDS | CHOOSE, NONCOMPUTED]

13) E[WORLDS] = P(CHOOSE) * E[WORLDS | CHOOSE]

14CG) P(CREATED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE] * E[HUMANS]) + E[HUMANS])

15CG) P(CREATED | OTHERS, UNIFORM) = ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) * E[HUMANS]) / (((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1) * E[HUMANS])

16CG) P(CREATED | OTHERS, UNIFORM) = (P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) / ((P(OTHERS | UNIFORM) * P(SUPERS | OTHERS, UNIFORM) * P(CHOOSE) * E[WORLDS | CHOOSE]) + 1)

17CG) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ P(SUPERS | OTHERS, UNIFORM) ≈ 0 or P(CHOOSE) ≈ 0 or P(CREATED | OTHERS, UNIFORM) ≈ 1

18) Cr(DOOM) : subjective credence that our world will never become superhuman

19CU) x : specific fraction of all human worlds that become superhuman before our own, given uniform physics

20CU) Cr(DOOM | P(SUPERS | OTHERS, UNIFORM) = x) = 1 - x : predictive indifference principle that you have insufficient information to distinguish the probable fate of our world from that of typical human worlds before our own, given uniform physics

21CG) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or P(CHOOSE) ≈ 0 or P(CREATED | OTHERS, UNIFORM) ≈ 1

22CG) Cr(ABSTAIN) : subjective credence that our world will become a superhuman world that abstains from creating human worlds

23CG) y : specific fraction of all superhuman worlds that choose to create human worlds

24CU) Cr(ABSTAIN | P(SUPERS | OTHERS, UNIFORM) = x, P(CHOOSE) = y) = x * (1 - y) : predictive indifference principle that you have insufficient information to distinguish the probable values of our world, if it becomes superhuman, from those of typical superhuman worlds before our own, given uniform physics

25CG) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or P(CREATED | OTHERS, UNIFORM) ≈ 1

26CG) Cr(CREATED | OTHERS, UNIFORM) : subjective credence that our world was created directly or indirectly by a superhuman world before our own, given uniform physics

27CG) z : specific fraction of all humans that live in worlds that are created directly or indirectly by any superhuman world before our own, given uniform physics

28CG) Cr(CREATED | OTHERS, UNIFORM, P(CREATED | OTHERS, UNIFORM) = z) = z : indexical bland indifference principle that you have insufficient information to distinguish your experience in our world from typical experience in created worlds before our own, given uniform physics

29CG) (P(OTHERS | UNIFORM) ≈ 1 and E[WORLDS | CHOOSE] ≫ 1) ⇒ Cr(DOOM) ≈ 1 or Cr(ABSTAIN) ≈ 1 or Cr(CREATED | OTHERS, UNIFORM) ≈ 1
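
The generalized argument has the same algebraic form; the only new step is 12.3CG, which sums the computed and non-computed creation mechanisms. Here is a minimal sketch, with illustrative names, under the assumption that P(OTHERS | UNIFORM) is approximately 1:

```python
def generalized_p_created(x, y, e_worlds_computed, e_worlds_noncomputed,
                          p_others_uniform=1.0):
    """Steps 12.3CG and 16CG: P(CREATED | OTHERS, UNIFORM)."""
    e_worlds_choose = e_worlds_computed + e_worlds_noncomputed  # step 12.3CG
    n = p_others_uniform * x * y * e_worlds_choose
    return n / (n + 1)

def generalized_credences(x, y, e_worlds_computed, e_worlds_noncomputed):
    """Steps 20CU, 24CU, and 28CG: credences under the indifference principles."""
    return {
        "Cr(DOOM)": 1 - x,
        "Cr(ABSTAIN)": x * (1 - y),
        "Cr(CREATED | OTHERS, UNIFORM)": generalized_p_created(
            x, y, e_worlds_computed, e_worlds_noncomputed),
    }
```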

Simulation Argument Calculator

Calculate the probability that you are living in a computed world. Adjust input values to see how different assumptions change the odds that superhumanity created our world. If you run into any issues, please let me know.

Configuration

Start here by choosing the model of the Simulation Argument that you want to use. The Bostrom model is the basic formulation of the argument. The Eggleston model applies the principle of self-exclusion (the idea that a civilization cannot compute itself, so any computer of our world must have preceded it). The Cannon models apply the principle of technological uniformity (the idea that physics is the same everywhere), and generalize the argument to include all means for creating worlds (computation, terraforming, cosmoforming, or others).
Turn explanatory text on or off. Keep it on "Show" if you want to learn what each input does.

Custom Scenario

What are the odds that other human-like civilizations existed before us? A value of 0 means we are definitely the only or first human-like civilization. A value of 1 means many came before us.
Applying the principle of technological uniformity, we assume we are typical rather than special. Therefore, it is highly likely that other human-like civilizations existed before us. This value is fixed at 1.
Enter the fraction of human-like civilizations that manage to survive long enough to reach a superhuman level of technology. A value of 0 means none become superhuman, while 1 means all become superhuman.
Of the human-like civilizations that started before us, enter the fraction that survived to become superhuman. A value of 0 means none became superhuman, while 1 means all became superhuman.
Assuming physics is the same everywhere, enter the fraction of past human-like civilizations that survived to become superhuman. A value of 0 means none became superhuman, while 1 means all became superhuman.
If a civilization becomes superhuman, what fraction decides to compute new worlds with people like us? A value of 0 means no one computes new worlds, perhaps due to ethics or lack of interest. A value of 1 means everyone chooses to compute new worlds.
If a civilization becomes superhuman, what fraction decides to create new worlds with people like us via computation, terraforming, cosmoforming, or other means? A value of 0 means no one creates new worlds, perhaps due to ethics or lack of interest. A value of 1 means everyone chooses to create new worlds.
If a superhuman civilization decides to compute new worlds, how many worlds do they compute on average? This number is often assumed to be very large, potentially in the millions or even much higher.
If a superhuman civilization decides to create new worlds, how many worlds do they create on average? This includes all means for creating new worlds. Leave this blank if you prefer to use the specific numbers below.
Specifically, how many new worlds do superhuman civilizations compute on average? This number is often assumed to be very large, potentially in the millions or even much higher. Leave this blank if you prefer to use the total number above.
Specifically, how many new worlds do superhuman civilizations create on average using other means, such as terraforming planets or cosmoforming baby universes? Leave this blank if you prefer to use the total number above.

Click "Calculate" to see the results. The calculator will display the probability for each possible fate of our world, based on your input values. You can also reset the calculator to start over.

Preset Scenarios

Don't want to guess the numbers? Try one of these common scenarios. The "Conservative" scenario uses the same input values as the default custom scenario, and reveals an incoherence in common assumptions.
