Religion and Superintelligence
12 September 2015
Writing for the Daily Dot, Dylan Love recently put together a good piece on religion and superintelligence. The title has changed since the original posting. The new one is, in my estimation, inaccurate, but it probably improved clicks. Here’s the original title: “Will we be able to convert robots to Christianity?” He interviewed me for the article, and my complete answers to his questions are below.
“Are you aware of any religious suggestion (Biblical or otherwise) that humanity will face something like the singularity?”
Sure. Of course, people often see what they want to see in religious writings, but there are religious texts that do seem to lend themselves to readings consistent with Transhumanist interpretations of the Technological Singularity. After all, it’s not a random accident that some refer to the Singularity as the “rapture of the nerds”.
For example, the Bible contains many ideas that religious Transhumanists tend to associate with emerging and future technology risks and opportunities. On the negative side, some see the risk of unfriendly superintelligence in prophecies related to the antichrist (2 Thessalonians 2: 1-4), and various related technological risks in the apocalyptic prophecies of destruction (Revelation 13). On the positive side, some see the opportunity of friendly superintelligence in prophecies related to the return of Christ and theosis (1 John 3: 1-2 and many others), and various related technological opportunities in the millenarian prophecies of transfiguration (1 Corinthians 15: 51-53).
Mormon scripture elaborates further on these ideas, describing our present time as one of accelerated learning (D&C 121: 26-33), and our future potential as being beyond present notions of poverty, suffering, and death (D&C 101: 26-34) – on a transformed Earth (D&C 130: 9-11) in transformed bodies (D&C 88: 27-33) like the body of God (D&C 130: 22), who may be described as “superintelligent posthumanity”.
“How realistic do you personally think the arrival of some sort of superintelligence is? How ‘alive’ would it seem to you?”
For practical and moral reasons, I trust in our opportunity and capacity, as a human civilization, to evolve intentionally into compassionate superintelligence. I don’t think it’s inevitable, and I do think there are serious risks. But I do trust it’s possible, particularly if we put aside passive, escapist, and nihilistic attitudes about our future, and work to mitigate the risks while pursuing the opportunities.
Some imagine machine superintelligence to be more likely (easier to achieve) than enhanced-human superintelligence. I’m not sure they’re right, but I think the possibility could have grave ramifications and merits serious consideration, if for no other reason than the fact that even really stupid machine intelligence, given enough power, can be incredibly dangerous. But there may be some things we can do about that, like instrumenting decentralized reputation into the Internet.
Either way, I anticipate that humanity or its evolutionary descendants will eventually create machine intelligence equivalent to and surpassing human intelligence, and that the line between machine and non-machine intelligence will blur to the point of meaninglessness. In fact, there’s reason to suppose we may already be living in a computed world, so maybe we’re already indistinguishable from machine intelligence (see the New God Argument).
“Assuming we can communicate with such a superintelligence in our own natural human language, what might be the thinking that goes into preaching to and ‘saving’ it? On what other levels might we try to interact with it?”
By definition, human efforts to interact with superintelligence, machine or otherwise, would be at best something like ape efforts to interact with humans. However, so long as machine intelligence approximates human intelligence, and so long as there’s sufficient overlap and intelligibility in our respective cognitive spaces, I’m sure humans will attempt to persuade machines to just about all of our vying ideas, and machines will do the same in return.
The interactions could of course take traditional forms, like persuasion through presentation or discussion. But we should also anticipate new and unfamiliar forms of interaction enabled by whatever technological interfaces become available, such as brain to computer interfacing.
“Are you aware of any ‘laws’ or understandings of computer science that would make it impossible for software to hold religious beliefs?”
No. Of course there are some naive voices among the anti-religious that would like to imagine a technical incompatibility between machine intelligence and religious beliefs, but humans are already proof of concept. I do think we can identify some limits to the possibility space of intelligence in general, based on logic and physics, but religiosity remains clearly within the possibility space.
It’s worth pointing out, perhaps, that some of us conceive of religion too narrowly to account for how it’s actually functioned from deep history to the present. For example, Transhumanism is not atheism, and a strong case can be made that often (but not always) Transhumanism manifests itself as a religion, even if misrecognized.
“How might a religious superintelligence operate? Would it be benign?”
Religious superintelligence may be either the best or the worst kind of superintelligence – sublimely compassionate or horribly oppressive. I like to think of religion as applied esthetics, the most powerful social technology for provoking strenuous action toward a common goal. As such, it’s not inherently good or evil. Religion is just power, to be used for good or evil, as it clearly has been used for both historically.
While the particular forms of religion will continue to evolve, the general function of religion seems unlikely to go away. So, as we do with all powerful technologies, we should aim to mitigate the risks of religion while pursuing its opportunities. Religion already isn’t benign, and any religion worthy of a superintelligence certainly would be even less so.