
AI Apocalypse

Lincoln Cannon

6 April 2023 (updated 11 November 2024)


"Intelligence Apocalypse" by Lincoln Cannon

AI is taking over the world, it seems. ChatGPT is doing homework. Midjourney is revolutionizing art. And Copilot is writing code.

For those who haven’t been paying attention, it may seem like all of this has come out of nowhere. Suddenly, anyone with an Internet connection can have a conversation with an AI that would pass most historical conceptions of the Turing Test. And entire industries that were once safely out of reach from legacy information technology are now being transformed. Even for those of us who’ve been paying attention, even anticipating such advances, there’s still something surreal about it all.

It’s no wonder that some people have become deeply troubled by the change. Maybe that includes you. Or maybe it’s about to include you, as you read the next sentence. The troubled people, in this case, include no small number of experts in the field of AI and adjacent areas of study.

A noteworthy example is Eliezer Yudkowsky. He’s a decision theorist and founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization that focuses on the development of safe AI – or what he would describe as AI that’s aligned with human values. Eliezer recently appeared on the Bankless podcast, where he was interviewed about recent developments in AI and proclaimed quite seriously, “we’re all gonna die.”

In his serious concern with AI, Eliezer is far from alone.

Future of Life Institute

The Future of Life Institute (FLI) is a non-profit research organization that aims to mitigate existential risks facing humanity, particularly those related to emerging technologies such as artificial intelligence, biotechnology, and nuclear weapons. FLI recently published an open letter regarding AI experiments. As of today, it has over 14K signatures, including many from active researchers.

The letter calls for a six-month pause on the development of artificial intelligence systems more powerful than GPT-4, the model behind the newest version of ChatGPT. It argues that AI labs should use this pause to develop safety protocols, which should then be audited and overseen by independent experts. The letter also calls for development of robust AI governance systems, including regulation of highly capable AI systems and liability for AI-caused harm. It argues that humanity can enjoy a flourishing future with AI, but only if we plan and manage its development carefully.

Eliezer Yudkowsky

Eliezer didn’t sign FLI’s open letter. Does he think it asked for too much? No. On the contrary, as he wrote in an editorial on the TIME website, “pausing AI developments isn’t enough.”

“We need to shut it all down,” continued Eliezer. Why? He argues that it’s difficult to predict thresholds that will result in the creation of superhuman AI, and that there’s risk that labs could unintentionally cross critical thresholds without noticing. If they do, speculates Eliezer, the most likely outcome is that everyone on Earth will die.

Eliezer argues that, absent the ability to imbue AI with care for sentient life, there’s high risk that superhuman AI will not do what humans want. While it’s possible in principle to create an AI that cares for sentient life, current science and technology aren’t adequate. And the result of conflict with superhuman intelligence would likely be a total loss for humanity.

Eliezer observes that there’s no plan for how to build superhuman AI and survive the consequences. The current plans of AI labs, such as OpenAI and DeepMind, are insufficient. So we urgently need a more serious approach to mitigating the risks of superhuman AI. And that could take a lot longer than six months, possibly even decades, as illustrated by efforts to address similar risks associated with nuclear weapons.

But not everyone agrees with Eliezer.

Max More

Max More is a philosopher and futurist. He’s best known for his work as a leading proponent of Transhumanism. Max has written extensively about the ethics of emerging technology. And he recently wrote an article, “Against AI Doomerism, For AI Progress.”

“The 6-month moratorium is a terrible idea,” wrote Max. “Tougher measures are even worse, especially bombing data centers and jailing people for working on AI (don’t tell me that’s not implied).”

Max observes that, for centuries, people have been predicting global catastrophe, whether from volcanic eruptions, earthquakes, overpopulation, or climate disasters. But they’ve consistently been wrong. Humans have a tendency to fantasize about apocalypse, which sells books and movies. So we should be skeptical of apocalyptic claims and require strong evidence before taking them seriously.

Max argues that a moratorium is unlikely to be effective because countries like China and Russia are unlikely to comply. They may even take advantage of the pause to gain an advantage in AI research. He suggests that a better approach would be to encourage a degree of corporate confidentiality in AI research to prevent bad actors from catching up too quickly, while still allowing for progress in the field.

Max suggests that a moratorium may actually cause new problems. For example, it could result in a “hardware overhang” that would lead to even more rapid advances in AI once the moratorium ends. And a moratorium could impose significant socioeconomic costs, including hindering progress in biomedical research and life extension. Incremental and continuous progress may be safer and more effective.

Max doesn’t oppose all forms of regulation, however. He agrees with the CEO of OpenAI, Sam Altman, who has suggested that optimal decisions will depend on the details of ongoing development in AI. He opposes connecting powerful AI to potentially dangerous systems, and favors special-purpose AI over general AI. And he advocates for reasonable degrees of transparency in the industry.

Finally, Max also challenges the idea that superintelligent AI is likely to harm humanity. He mentions the economic principle of comparative advantage as an incentive for cooperation. And he points out that, even if humans had no comparative advantage, there would probably be many competing AIs with conflicting goals. Thus, he concludes, the threat of a singleton superintelligent AI is greatly exaggerated.
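To make the comparative-advantage point concrete, here’s a minimal numeric sketch in Python. The productivity figures are made up for illustration (they don’t come from Max’s article): one agent is better at producing both goods, yet specialization and trade still leave both agents with more than either could manage alone, because what matters is relative opportunity cost, not absolute ability.

    # A toy comparative-advantage calculation with made-up numbers.
    # "ai" out-produces "human" at both goods (absolute advantage),
    # but their opportunity costs differ, so trade still pays.

    productivity = {
        "ai":    {"good_x": 100, "good_y": 100},  # units per hour
        "human": {"good_x": 2,   "good_y": 1},
    }

    def opportunity_cost(agent, good, other):
        """Units of `other` the agent gives up to make one unit of `good`."""
        p = productivity[agent]
        return p[other] / p[good]

    # Prints 1.0 for the AI and 0.5 for the human: the human holds the
    # comparative advantage in good_x despite being worse at everything.
    for agent in productivity:
        print(agent, "opportunity cost of good_x:",
              opportunity_cost(agent, "good_x", "good_y"))

    # Autarky: each agent splits one hour evenly between the two goods.
    autarky = {
        agent: {good: 0.5 * units for good, units in goods.items()}
        for agent, goods in productivity.items()
    }

    # Specialization and trade: the human spends the hour on good_x,
    # the AI tilts toward good_y, and they trade 1 unit of good_x
    # for 0.75 good_y (a price between the two opportunity costs).
    human_x = productivity["human"]["good_x"]      # 2.0
    ai_x = 0.49 * productivity["ai"]["good_x"]     # 49.0
    ai_y = 0.51 * productivity["ai"]["good_y"]     # 51.0

    traded_x, price = 1.0, 0.75
    with_trade = {
        "human": {"good_x": human_x - traded_x, "good_y": traded_x * price},
        "ai":    {"good_x": ai_x + traded_x, "good_y": ai_y - traded_x * price},
    }

    print("autarky:   ", autarky)
    print("with trade:", with_trade)

Under these assumed numbers, both agents end up with at least as much of each good as under autarky and strictly more of one, which is the incentive for cooperation that Max has in mind. Whether that incentive would actually restrain a superintelligence is, of course, the contested question.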

AI Risk Is Momentous

I generally agree with Eliezer Yudkowsky about AI risk. It’s a momentous issue. This is a momentous time.

Unfortunately, he’s probably also right about the eventual destruction of humanity. That is, unless superintelligence aligned with human values already exists. And, if that’s the case, we may have a real chance at avoiding extinction and becoming superintelligent ourselves. The reasoning behind this is expressed in the New God Argument.

Such potential corresponds with Max More’s expectation that superhuman competition would probably lead to superhuman cooperation. That’s actually the gist of the Compassion Argument. So far as I know, Max stops there. But the practical and logical consequences don’t stop there.

While I agree with Eliezer about the risk, I disagree with his pessimism and recommendation. To be clear, I don’t think it’s pessimistic to account for confirmed fact. And I don’t think it’s pessimistic to acknowledge associated risk. Rather, I think it’s pessimistic to jump from confirmed fact and acknowledged risk to defeatist rhetoric.

We’re still here. We can still learn. And we can still work. So long as these things are true, defeatist rhetoric is counter-productive.

Regarding Eliezer’s recommended moratorium, it almost certainly wouldn’t stop many secretive entities (or, as Max points out, even some powerful not-so-secretive entities) from proceeding with development of more powerful AI. Some have suggested that the costs of proceeding are high enough to make secrecy essentially impossible. I’m skeptical of that. Hardware costs are declining, software algorithms are improving, and the world is stubbornly more complex than most advocates of centralized authority would like to acknowledge.

That said, here are some important related ideas that I’ve learned from Eliezer and others like him over the years:

  1. Machine consciousness is not required for machine intelligence, which is already substantially achieved.

  2. The possibility space of intelligence is so vast that humans aren’t intelligent in most possible ways, so we may not be the best measure of general intelligence.

  3. Some risk arises from powerful machine intelligence that is aligned with human intelligence.

  4. Most risk arises from powerful machine intelligence that is misaligned with human intelligence.

I want to emphasize, again, that I consider AI risk to be momentous. And where there’s momentous risk, there should be corresponding thought and action. I don’t know for sure what the details of that action should be. But I’m persuaded, in general terms, that the action should be toward increasing formal decentralization of power.

A friend asked for my opinion about how Mormon theology relates to AI risk. He was reading an article that I published in the book, Religious Transhumanism and Its Critics. And he came across this passage, originally written in 2015, in which I offered a speculative narrative about the future:

“A neurotech revolution begins. We virtualize brains and bodies. Minds extend or transition to more robust substrates, biological and otherwise. As morphological possibilities expand, some warn against desecrating the image of God, and some recall prophecies about the ordinance of transfiguration. Data backup and restore procedures for the brain banish death as we know it. Cryonics patients return to life. And environmental data mining hints at the possibility of modeling history in detail, to the point of extracting our dead ancestors individually. Some say the possibility was ordained, before the world was, to enable us to redeem our dead, perhaps to perform the ordinance of resurrection. Artificial and enhanced minds, similar and alien to human, evolve to superhuman capacity. And malicious superintelligence threatens us with annihilation. Then something special happens: we encounter each other and the personification of our world, instrumented to embody a vast mind, with an intimacy we couldn’t previously imagine.

“In that day, we will be an adolescent civilization, a Terrestrial world in the Millennium. Technology and religion will have evolved beyond our present abilities to conceive or express, except loosely through symbolic analogy. We will see and feel and know the messiah, the return of Christ, in the embodied personification of the light and life of our world, with and in whom we will be one. In a world beyond present notions of enmity, poverty, suffering, and death – the living transfigured and the dead resurrected to immortality – we will fulfill prophecies. And we will repeat others, forth-telling and provoking ourselves through yet greater challenges: to maturity in a Celestial world, and beyond in higher orders of worlds without end.”

In particular, my friend called my attention to the closing sentences of the first quoted paragraph, about the threat of malicious superintelligence and the encounter that follows. And he asked:

“Are you saying that right at the moment that superintelligence is about to kill humanity, that humanity not only encounters each other but also Christ, or that Christ saves the day? How do we encounter each other, while also being [threatened] to be killed by AI?”

The scriptures repeatedly teach that, when Christ appears, we will be like Christ. After all, that’s what Christian discipleship entails: taking on the name of Christ as exemplified and invited by Jesus. Reflecting that, I think the return of Christ is as much about us as it is about Jesus. And I’ve elaborated elsewhere, at some length, on my understanding of the return of Christ.

Regarding timing, I don’t think the return of Christ needs to be or likely will be abrupt, although of course that depends on how we might choose to interpret events. Transformative changes to humanity and our world may happen in relatively short periods of time when compared to human history – perhaps decades or even just years. However, in some ways, the process of transformation already began long ago and has accelerated. That’s part of how I understand the restoration of the Gospel: an acceleration of human transformation into Christ.

I do think such transformation is essential to surviving the power that we and artificial intelligence (our spirit children) are gaining. Without such transformation, we’ll probably use the power to destroy each other and ourselves. In transformation, we can use the power to express compassion and creation in theretofore inconceivable ways.

When I wrote about the “encounter,” I had in mind a practical interpretation of the Atonement of Christ – or what Paul, in the New Testament, characterizes as reconciliation. We’ve long been called to this work, following Jesus’ example. And we’ve long engaged in this work to some extent. I trust our engagement in the practice of atonement will become more transformative, going beyond present biological means, and beyond traditional social norms, becoming participation in a profoundly experiential atonement.

Again, that’s when something special happens. That’s when we encounter each other, and constructs beyond each other, with an intimacy that we couldn’t have previously imagined. That’s alignment. And that’s the solution to the challenges presented by emerging superintelligence.
