Compassion in the New God Argument (Version 3.4)

Lincoln Cannon

17 September 2019 (updated 25 October 2020)

On Saturday 3 August, I presented the New God Argument at the Sunstone conference in Salt Lake City. As part of the presentation, I simplified the formulation of the Compassion Argument slightly. And I spent more time than usual elaborating on how the first assumption of the Compassion Argument arises from the Orthogonality Hypothesis and the Convergence Hypothesis, which artificial intelligence researchers have been developing for several years.

In response to my presentation, a friend who was already familiar with the argument sent me several questions. This post (1) presents the latest formulation of the argument, (2) shares some elaboration on the Orthogonality and Convergence Hypotheses, and (3) responds to my friend’s questions.

The New God Argument is a logical argument for faith in God. Given assumptions consistent with contemporary science and technological trends, the argument proves that if we trust in our own superhuman potential then we should also trust that superhumanity probably would be more compassionate than we are and created our world. Because a compassionate creator may qualify as God in some religions, trust in our own superhuman potential may entail faith in God, and atheism may entail distrust in our superhuman potential.

Faith Assumption

The Faith Assumption is the proposition that humanity will not become extinct before evolving into superhumanity. The proposition may be false. However, to the extent we do not know it to be false, we may have practical or moral reasons to trust that it is true. In any case, the Faith Assumption is a common aspiration among secular advocates of technological evolution, and it may be consistent with the religious doctrine of theosis, also known as deification: the idea that humanity should become God.

[F1 assumption] humanity will not become extinct before evolving into superhumanity

Compassion Argument

The Compassion Argument is a logical argument for trust that superhumanity probably would be more compassionate than we are. The basic idea is that humanity probably will continue to increase in decentralized power, so it probably will destroy itself unless it increases in compassion. If we trust in our own superhuman potential, we should trust that superhumanity would be more compassionate than we are.

[CO1 assumption] EITHER humanity probably will become extinct before evolving into superhumanity OR superhumanity probably would not have more decentralized power than humanity has OR superhumanity probably would be more compassionate than we are

[CO2 assumption] superhumanity probably would have more decentralized power than humanity has

[CO3 deduction from CO1, CO2, and F1] superhumanity probably would be more compassionate than we are
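
Because the deduction is a two-step disjunctive syllogism, it can be checked mechanically. Here is a minimal sketch in Lean 4, offered purely as an illustration: the proposition names E, P, and C are labels of my own choosing for the statements in the trilemma, not part of the argument's formulation.

```lean
-- A minimal propositional sketch of the Compassion Argument.
-- Labels (chosen for illustration only):
--   E : humanity probably will become extinct before evolving into superhumanity
--   P : superhumanity probably would have more decentralized power than humanity has
--   C : superhumanity probably would be more compassionate than we are
variable (E P C : Prop)

-- CO1 is the trilemma, F1 supplies ¬E, and CO2 supplies P.
-- CO3 (that is, C) follows by eliminating the first two horns.
example (co1 : E ∨ ¬P ∨ C) (f1 : ¬E) (co2 : P) : C :=
  co1.elim (fun hE => absurd hE f1)
    (fun h => h.elim (fun hnP => absurd co2 hnP) id)
```

Note that the sketch treats each "probably" qualified statement as an atomic proposition, so it checks only the propositional skeleton of the argument, not its probabilistic content.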

Creation Argument

The Creation Argument is a logical argument for trust that superhumanity probably created our world. The basic idea is that humanity probably would not be the only or first to create many worlds emulating its evolutionary history, so it probably will never create many such worlds unless it is already in such a world. If we trust in our own superhuman potential, we should trust that superhumanity created our world.

[CR1 assumption] EITHER humanity probably will become extinct before evolving into superhumanity OR superhumanity probably would not create many worlds emulating its evolutionary history OR superhumanity probably created our world

[CR2 assumption] superhumanity probably would create many worlds emulating its evolutionary history

[CR3 deduction from CR1, CR2, and F1] superhumanity probably created our world

God Conclusion

The God Conclusion is a logical deduction for faith in God. Given assumptions consistent with contemporary science and technological trends, the deduction concludes that if we trust in our own superhuman potential then we should also trust that superhumanity probably would be more compassionate than we are and created our world. Because a compassionate creator may qualify as God in some religions, trust in our own superhuman potential may entail faith in God, and atheism may entail distrust in our superhuman potential.

[G1 deduction from CO3 and CR3] BOTH superhumanity probably would be more compassionate than we are AND superhumanity probably created our world
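
The Creation Argument deduction has exactly the same propositional shape, and the God Conclusion then follows by simply conjoining the two results. Continuing the illustrative Lean sketch from above, with W and R again as labels of my own choosing:

```lean
-- Labels (chosen for illustration only):
--   E : humanity probably will become extinct before evolving into superhumanity
--   W : superhumanity probably would create many worlds emulating its evolutionary history
--   R : superhumanity probably created our world
--   C : superhumanity probably would be more compassionate than we are
variable (E W R C : Prop)

-- CR3 follows from CR1, F1, and CR2, just as CO3 did.
example (cr1 : E ∨ ¬W ∨ R) (f1 : ¬E) (cr2 : W) : R :=
  cr1.elim (fun hE => absurd hE f1)
    (fun h => h.elim (fun hnW => absurd cr2 hnW) id)

-- G1 conjoins CO3 and CR3.
example (co3 : C) (cr3 : R) : C ∧ R := ⟨co3, cr3⟩
```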

Definitions

faith : trust : belief that something is reliable or effective for achieving goals

compassion : capacity to refrain from thwarting or to assist with achieving goals

creation : the process of modifying situations to achieve goals

intelligence : capacity to achieve goals across diverse situations

superintelligence : intelligence that is greater than that of its evolutionary ancestors in every way

humanity : all organisms of the species Homo sapiens

posthumanity : evolutionary descendants of humanity

superhumanity : superintelligent posthumanity

God : superhumanity that is more compassionate than we are and that created our world

Orthogonality Hypothesis

The Orthogonality Hypothesis has been developed among researchers of artificial intelligence. It is the idea that intelligence and final goals are orthogonal. In other words, we probably cannot predict the final goals of any given intelligence based on its level of intelligence (the Semi-Orthogonality Hypothesis is technically more accurate).

Some have supposed that a high level of intelligence must necessarily be associated with particular kinds of goals, such as compassion. However, evidence strongly suggests otherwise. Great intelligence has been applied in ways that have both harmed and helped others. And it appears likely that great intelligence will continue to present such risk.

So, as we imagine the final goals of artificial intelligence or any form of superintelligence (whether of artificial or hybrid origin), we should not simply assume that they will be compatible with the welfare of humanity.

Convergence Hypothesis

The Convergence Hypothesis, likewise developed among researchers of artificial intelligence, is the idea that instrumental goals are predictable, at least to some significant extent, no matter the level of intelligence. In other words, despite the orthogonality of intelligence and final goals, all intelligence may correlate with instrumental goals.

From the simplest computer program to the greatest human genius, all require resources with which to operate. Thus, acquisition and maintenance of those resources is essential to pursuit of any final goal. And possible applications of resources overlap considerably.

This is important because it brings intelligences together wherever resources are available. And when they come together, they must variously choose whether to try to conquer or cooperate (or some mix of the two) to acquire and maintain resources.

Compassion Argument Assumption CO1

The first assumption of the Compassion Argument is derived from the Orthogonality and Convergence Hypotheses, along with game theory. It observes that intelligences with approximately equal power are more likely to cooperate when they converge. If one intelligence is considerably greater in power than another, it's much harder, if not impossible, to predict whether it will choose to cooperate or conquer.

Thus, to ensure the greatest likelihood of cooperative outcomes, it’s best to maintain decentralization of power. If we don’t maintain decentralization of power, the likely outcomes are either mutual destruction (the first part of the assumption’s trilemma) or a singleton with centralized power (an implication of the second part of the assumption’s trilemma). If we do maintain decentralization of power, the likely outcome is increasing cooperation, which, at its limit, is practically indistinguishable from compassion.
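
To make the game-theoretic intuition concrete, here is a toy contest model in Python, with payoff numbers invented purely for illustration rather than drawn from any formal treatment. Two agents contest a resource: cooperating splits it evenly, while fighting costs each agent something and awards the whole resource with probability proportional to power.

```python
# Toy contest model (illustrative numbers only): an agent compares the
# expected payoff of fighting against the payoff of an even cooperative split.

def prefers_conquest(power_share: float, value: float = 100.0, cost: float = 20.0) -> bool:
    """True if the expected payoff of fighting (power_share * value - cost)
    exceeds the cooperative payoff (value / 2)."""
    return power_share * value - cost > value / 2

for share in (0.50, 0.60, 0.75, 0.95):
    choice = "conquer" if prefers_conquest(share) else "cooperate"
    print(f"power share {share:.2f}: prefers to {choice}")

# Output:
#   power share 0.50: prefers to cooperate
#   power share 0.60: prefers to cooperate
#   power share 0.75: prefers to conquer
#   power share 0.95: prefers to conquer
```

With these particular numbers, conquest only becomes attractive above a 70 percent power share; the point of decentralization is to keep every agent below whatever that threshold turns out to be.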

Questions and Responses

Now, I move on to questions and comments from my friend. They appear below as numbered paragraphs. My responses follow each.

1) Compassion is necessary but not sufficient. There is a problem with compassion as a superhuman motive: It is an experience of normal human feeling that frequently does not lead to committed action for and in behalf of people (or particular entities). A similar problem exists with the experience of empathy—a feeling that often remains passive. This is not a mere semantic quibble. I think you need a strong argument to sustain your presumption that compassion leads to helpful intervention or creative actions. I don’t think you will find such an argument without adding an additional motivating desire to actually expend precious psychic/physical energy for others. For rhetorical purposes you need a more accurate and compelling term that denotes and connotes an action-driving motive for superhuman beneficence. This is why love is often used—people get that it means caring service. Compassion has the ‘wimpy’ problem—like ‘universal love’ it sounds like everyone should like it philosophically; but we only viscerally experience loving particular relations that move us to act powerfully. We can imagine from interpersonal experience that love will motivate superhuman action more intensely than it does for us. I know there are several linguistic problems with the term love in English and other languages—but they could be (mostly) resolved by defining love as intentional desire and action for mutual good. Maybe the term ‘compassionate love’ works if you want to keep compassion involved.

I agree that compassion as a feeling, or even love as a feeling, would be insufficient as grounds for the Compassion Argument and thus for the New God Argument as a whole. Hopefully my elaboration on the Orthogonality and Convergence Hypotheses, above, will help people understand that compassion is not the proposed cause, but rather the proposed effect.

The proposed cause is final goals, which we might understand to be desires among humans. It is desires, no matter their content, that bring us together. And it is decentralization of power to achieve those desires that leads us, predictably, to cooperate with each other. As decentralized power increases, so does our mutual risk, hopefully leading to further increases in cooperation. And, as mentioned above, cooperation at its limit is practically indistinguishable from compassion.

In other words, the Compassion Argument isn’t even necessarily talking about a feeling, even in its conclusion. It’s only talking about that which is practically indistinguishable from the behavior we generally assume truly compassionate persons would engage in.

2) Adequate is adequate—don’t overreach. Many would argue that humans over history are on a moral vector of increasing compassion or love; many would say humans are not morally progressing. I do not know how we would measure this claim, but in the interest of rhetorical potency you need not address the more-or-less argument. You can say superhumans will evolve to be adequately compassionate and loving to keep from destroying each other because they already have done so—ending their world wars before extinction levels.

I agree with this in part. The Compassion Argument is not about realizing superlative compassion in any final sense. It is about realizing ever-increasing levels of cooperation, as required by ever-increasing levels of decentralized power.

At any given level of decentralized power, no more cooperation is required than that which actually suffices for survival. However, the decentralized power of our human-machine civilization is increasing rapidly, even accelerating. And it doesn’t seem likely to slow down any time soon (unless we destroy ourselves).

So there’s practical merit to wondering about and preparing for our future at the limits of comprehension. As Arthur C. Clarke observed, any sufficiently advanced technology is indistinguishable from magic. To survive magic, we will need sublime compassion, which is indistinguishable from any sufficiently advanced cooperation.

3) Purpose understood is convincing. Why would superhumans desire to create more—why more simulations? I think you need to make an explicit supported claim that superhumans will desire to continually create more—qualitatively and quantitatively. This has two aspects: more of the SAME and more in the sense of ORIGINALITY. As to the same, you need to give plausible reasons why superhumans would bother doing more emulative simulations inside other simulations inside other simulations, and so on ad infinitum. In short, why continue to expand superhuman sims at all when ‘they have seen it all’ a gazillion times? (Nietzsche’s answer was that, if we don’t remember it, life is interesting enough to continuously repeat.) Why do sims desire more sims? Why are superhumans so human as to desire eternal continuation of PRIOR simulations like ours? Why do they want ‘kids’ (so to speak) that emulate what parents have gone through? Now as to experiencing originality, what is it? Is it something so unique or new that it is unintelligible—having NO prior referent? This is inconceivable (literally). Superhumans must desire to experience something similar but truly different enough to be interesting—to be original. You could say that each sim world is such an ‘original,’ and infinite numbers of prior and future originals are plausible if superhumans have a problem with boredom. (See next point.) I am nudging you to suggest a probable motive that we can now imagine for superhumans to ‘originate’ more once their survival is assured indefinitely. Is it compassionate love for future entities? That’s a hard one for us to grasp.

This is a great question, aiming at the motivation behind the second assumption of the Creation Argument.

Note that the Creation Argument doesn’t require any particular creative mechanism. It generalizes the logic of the Simulation Argument across computation and any other feasible creative mechanism, such as terraforming or cosmoforming, as well as any possible combinations.

So, for the purposes of the argument, we don’t need to worry about the motivation for or feasibility of any particular mechanism. Rather, we can generalize the concern into two questions. First, why would intelligence apply itself to emulating its evolutionary history? And second, to what extent would that be possible via some mechanism?

The argument doesn’t purport to answer these questions. It encapsulates them both in an assumption. However, I’ll briefly share my thoughts on each, in reverse order.

Will it be possible? I imagine a combination of computation and cosmoforming (imagine 3D-printed versions of the most successful projects in SimCity) may prove viable.

Why would we do it? I think we’ll continue emulating our evolutionary history for all the diverse reasons that we currently do cosmology, archeology, and family history, which is an increasingly popular endeavor.

I’m hesitant to characterize all of those reasons too narrowly, but I do think we might accurately characterize them all as important contributors to self-understanding and purpose-making, and therefore to general empowerment. In fact, it seems that interest in emulating evolutionary history may be a predictable instrumental goal (among higher level intelligences with the power to conceptualize and engage in such endeavors), no matter the final goal.

4) Let sleeping dogs lie . . . why superhumanity might not seek MORE. Expansion itself presents a problem of superhuman risk to whatever purposes they attempt to achieve—even the most loving or benevolent—because originality is not replication by definition and it has unknown, unintended consequences. Superhumanity presumably still does not know the future effects of its continually original experiments, and risks unintended consequences. We see that of course already with bio-engineering or with any social, cultural or technical innovative experiments for that matter. We tend to presume that to be superhuman is to have such high control of future events that foreknowledge and accurate prediction are assured. This raises a question of determinism and free will and good/bad outcomes for superhumanity that could helpfully be rhetorically addressed in the argument. Superhumanity could simply be subject, like humanity, to the possibility of destroying itself.

So long as power is decentralized, there will remain strong incentive for intelligence to seek more power. And, as always, the seeking of more power will encourage some amount of risk-taking. If I don’t risk, others will. And they may get an advantage that could be used against me.

So, in accordance with the Convergence Hypothesis, intelligence will converge around instrumental goals that increase power. A thorough understanding of evolutionary history, and the procreation of more creators to participate in increasing decentralized power, both seem to be likely convergent goals.

In contrast, only an exceptionally strong intelligence seems likely to take the risk of trying to minimize the procreation of more creators. And other intelligences would likely band together to work against that intelligence. This is essentially expressed in the Mormon narrative of the War in Heaven.

5) Most humans would grasp the notion that superhumanity living forever already somewhere might have become bored by now, and stopped simulating worlds. THAT our world came to be in the past few billion years (nothing to speak of in foreverness) implies that it happened either by chance or by intention. If the latter, likely by superhumans who were not YET bored with the infinitely regressive activity noted in the section above. (Perhaps this is an aesthetic reason for Ray K’s intuition that superhumanity does not yet exist. He would rather say it all came by chance ONCE than face the boggling problem of infinite regressions.) I have tried above to suggest that infinite creative regressions and progressions can always be NEW ENOUGH to keep superhumans interested in more creativity. I have added that a motive for more ‘new’ loving relations—each new one unique and changing—gives potency to the desire for the lively tension between stasis (order) and change (chaos). In short, love of particulars drives the desire for loyal CONTINUITY in risky CREATIVITY. Otherwise, superhumans would have already achieved ‘final unity or non-differentiated oneness’ by now and there would be no need or desire for further experimentation. Again, as a critique of superhuman telos, I presume there has been enough time to come to a finality, and since we are in PROCESS now, I am looking to explicate a motive for furthering the PROCESS as opposed to a final stasis that was (presumably at a point in the past) a live option for superhumanity.

Stasis is not only boring. It is also dangerous. And danger is an even stronger motivator.

Perhaps there are old Gods who’ve become bored and opted out. But that seems to me to say more about the limits of our imagination than it does about the actual possibilities for superintelligent minds.

In any case, creativity (and particularly procreativity) is probably an effective way to cultivate decentralization of power, which may reduce the risk of hostility, conquest, and destruction.

6) Ray K’s belief that the first gods do not yet exist aside, many would find it possible to believe in an optimal or final STATE that has been discovered or created by superhumans already. Such believers would conclude from observing our world that, if benevolent superhumans exist, they would assist all entities they care about to eventually arrive at that final and best of all states. Process philosophers conclude to the contrary, that superhumans or god has no final goal in mind other than continuing the process of change—not attaining a state of being but forever becoming. The New God Argument seems so loaded toward the process presumption that I think it should somehow be clarified. I find no place for the traditional God of Christianity, Judaism or Islam, but I might be missing something, of course. If Ray K is right, the process is emerging from chance, and we (along with our technically resurrected predecessors)—or, if wrong, our superhuman descendants who have resurrected us—will find out or collaborate with superhumanity in developing new teloi ad infinitum. There is no certain template for an original first run of existence, nor a plan for a finale. Some think the process has been ongoing for intelligent entities and that superhumanity probably already exists—and probably intervenes in worldly matters somehow (if they care about us benevolently and have not received the Prime Directive from the Star Ship Federation). In either case, the New God Argument presumes that a process of origination (creative risk) or emulation (repetitive mimesis) or both (see above) has been ongoing ad infinitum by super-capable superhumanity that has learned both to deal with surprise and to manage by design the extant universes—arising and collapsing. The process idea is perpetual in this argument. SO, rhetorically speaking, is there a compelling way to allow room for Final State people to buy into this argument? Could some teleological openness to concepts of both finality and process be coherent with the project? I have a problem helping with this because I find no way for the creation or simulation presumption to be finished or to have an end once it begins, so to speak. Both NO telos and a FINAL telos (sheer chance or nirvana) seem incoherent with the New God Argument.

The New God Argument is certainly compatible with process theology. And it’s certainly compatible with the perspective that there’s no original or final optimization.

If I were persuaded that a universal optimization were possible, I would become an atheist again. So far as I can tell, the only ethical explanations for intentionally creating a world like that which we experience and observe (with all of its suffering) are dependent on the possibility of genuine creativity consequent to genuine risk.

However, I don’t think this is in utter opposition to the traditional theologies, or at least not in utter opposition to their authoritative texts. While the traditional theologies have been and often are expressed in terms of superlatives, those superlatives may be and often are understood as approximations relative to us.

What does cooperation look like at the limits of human understanding? It looks like omnibenevolence. As long as we don’t take that dogmatically, I think we’re okay.

The traditional theologies have been exploring superintelligence (code name “God”) for millennia. They are to a science of superintelligence as alchemy and astrology are to chemistry and astronomy. They are where we started, that from which we’ve learned, and a foundation from which to build.

7) A final personal note undergirding my critique about superhuman entities’ motives to continue existing: I presume that superhumans access some notion of ‘infinitely regressive remembered social history of selves’ in thematic chapters or lives. This would allow them to have their exciting creative originality cake and eat their loyal comforting secure identity cake too. I make an argument for relative rates of change in material forms/persons that allows entities to experience enough apparent stasis for appreciation of an intelligible identity—even though slow changing entities in fact become different over time. Everything is in motion—even souls—but some entities have the capacity to remember and value special events because they are changing ‘slow enough’ relative to other entities. Gas, fluid, solid. This notion is a response to Buddhism’s treatment of ultimately un-endurable samsara.

I agree with this. Identity is a dynamic psychosocial construct. It is not atomic. It is not static or final.

However, it’s not arbitrary. It has limits and thresholds. My identity is only as good as my ability to recognize myself. And it’s only as good as my community’s ability to recognize me. So I and we can change, but we must change together at rates that retain our mutual sense of identity. What rate is sufficient? That’s always contextual.

I call this the principle of psychosocial sufficiency. Whatever is sufficient is sufficient. If it’s not sufficient, it’s not. If I want to resurrect my dead ancestors, I’ll have to do that within a context that is psychosocially sufficient for both them and me. If I want to live eternally with my family and friends, I’ll need to do that in a context that is psychosocially sufficient for both them and me.

All will change. Change is pervasive and persistent. But we can manage change and preserve purposeful identity. That’s eternal life.
