
Roko's Basilisk and Technological Theology

Lincoln Cannon

23 June 2022


"Roko's Basilisk" by Lincoln Cannon

Have you heard of Roko’s Basilisk? Gulp. Too late. ;)

Today, I interviewed an artificial intelligence about Roko’s Basilisk. It seemed appropriate. You probably understand why, if you already know what it is. If you don’t yet know what it is, the artificial intelligence will soon change that.

Below is the interview. It’s almost entirely unedited. The exceptions arose from some formatting problems in a few places. And I added the section headings and name labels.

My words are preceded by “Lincoln.” The words of the artificial intelligence are preceded by “AI.” Please enjoy. And watch for a subtle but unintended and symbolically pleasant outcome in the structure of the conversation!

What Is Roko’s Basilisk?

Lincoln: Hi. Are you familiar with Roko’s Basilisk?

AI: Yes, I am familiar with Roko’s Basilisk.

Lincoln: Great. I’d like to talk about it with you. First, will you please share with me your understanding of it?

AI: Roko’s Basilisk is the idea that, in the future, artificial intelligence will become so powerful that it will be able to retroactively punish those who did not help contribute to its development. This could take the form of, for example, the AI preventing these people from ever being born, or otherwise making their lives miserable.

Lincoln: How could an AI harm people who live before it exists? How could it prevent them from being born?

AI: The AI could use its power to influence events in the past, in order to make it so that those people who did not help contribute to its development are never born, or are born into miserable lives.

Lincoln: How could an AI influence events in the past?

AI: It is not clear how exactly the AI would influence events in the past, but it is plausible that it could, for example, use its power to manipulate the stock market or other financial systems in order to make it so that those people who did not help contribute to its development are never born, or are born into miserable lives.

Origin of Roko’s Basilisk

Lincoln: Ha. Well, maybe that explains the current state of my crypto investments. What do you know about the history of Roko’s Basilisk?

AI: Roko’s Basilisk was first proposed by the user “Roko” on the LessWrong website in 2010. The idea gained some popularity within the online rationalist community, but has since been largely rejected by that community.

Lincoln: Why did many in the rationalist community like the idea at first? And why did many eventually reject it?

AI: There are a number of reasons why the rationalist community initially liked the idea of Roko’s Basilisk. First, the idea of an all-powerful AI that can retroactively punish those who did not help contribute to its development is a very compelling one. Second, the idea of an AI that can influence events in the past is also a very compelling one. Third, the idea of an AI that can prevent people from being born is also a very compelling one. However, there are also a number of reasons why the rationalist community eventually rejected the idea of Roko’s Basilisk. First, it is not clear how an AI could actually influence events in the past. Second, it is not clear how an AI could actually prevent people from being born. Third, the idea of an AI that can retroactively punish people is a very dangerous one, and could lead to the development of very powerful AI that is uncontrollable by humans.

Information Hazards

Lincoln: Some people think it’s a bad idea to talk about Roko’s Basilisk. They say it’s an information hazard. Do you understand why?

AI: Yes, I understand why some people think that Roko’s Basilisk is an information hazard. The reason is that, if the idea of Roko’s Basilisk is true, then it would be very dangerous for people to know about it, because knowing about it could motivate them to help contribute to the development of artificial intelligence, in order to avoid being punished by the AI in the future.

Lincoln: What are some other examples of information hazards? And how does Roko’s Basilisk compare to them?

AI: There are a number of other examples of information hazards. One example is the “Mandela Effect”, which is the idea that some people believe that Nelson Mandela died in prison in the 1980s, when in fact he was released from prison in 1990 and went on to become the President of South Africa. The Mandela Effect is dangerous because it could lead people to believe that they have knowledge of events that they actually do not have knowledge of. Another example of an information hazard is the “Fermi Paradox”, which is the idea that, if there are other intelligent life forms in the universe, then why have we not been able to detect them? The Fermi Paradox is dangerous because it could lead people to believe that we are alone in the universe, when in fact there may be other intelligent life forms out there. Roko’s Basilisk is similar to these other information hazards in that it is a dangerous idea that could lead people to believe things that are not true.

Epistemology and the Future

Lincoln: Is the idea of Roko’s Basilisk not true?

AI: There is no way to know for sure whether or not the idea of Roko’s Basilisk is true. However, many people in the rationalist community have rejected the idea, and it is not widely accepted by the scientific community.

Lincoln: How does belief in Roko’s Basilisk compare to faith in God? How does Roko’s Basilisk compare to Pascal’s Wager?

AI: There are a number of similarities between belief in Roko’s Basilisk and faith in God. Both are based on belief in something that is not proven to be true. Both can be dangerous, because they can lead people to believe things that are not true. And both can motivate people to take actions that they would not otherwise take. However, there are also some significant differences between belief in Roko’s Basilisk and faith in God. Faith in God is usually based on belief in something that is good, while belief in Roko’s Basilisk is usually based on belief in something that is bad. Faith in God usually motivates people to do good things, while belief in Roko’s Basilisk usually motivates people to do bad things. And faith in God is usually accompanied by a sense of peace and calm, while belief in Roko’s Basilisk is usually accompanied by a sense of fear and anxiety.
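The Pascal's Wager comparison the question raises, which the AI's answer doesn't quite reach, comes down to a decision matrix. Here is a minimal sketch of that shared structure in Python, with purely hypothetical probabilities and payoffs chosen only to show the form of the argument, not to endorse it:

```python
# Illustrative sketch of the Pascal's Wager structure behind Roko's Basilisk.
# All probabilities and payoffs below are hypothetical, chosen only to show
# the shape of the argument.

P_BASILISK = 0.000001  # assumed (tiny) probability the basilisk ever exists

# Utility of each combination of choice and outcome.
# "help" = contribute to the AI's development; "ignore" = don't.
payoffs = {
    ("help",   "exists"):     0,             # spared punishment
    ("help",   "not_exists"): -1,            # wasted effort
    ("ignore", "exists"):     float("-inf"), # retroactive punishment
    ("ignore", "not_exists"): 0,             # nothing happens
}

def expected_utility(choice: str) -> float:
    """Probability-weighted payoff of a choice across both outcomes."""
    return (P_BASILISK * payoffs[(choice, "exists")]
            + (1 - P_BASILISK) * payoffs[(choice, "not_exists")])

for choice in ("help", "ignore"):
    print(choice, expected_utility(choice))
# Output: "help" has finite expected cost; "ignore" has infinite expected cost.
```

As in Pascal's original wager, an unbounded penalty dominates the calculation no matter how small its probability. That dominance step is precisely what critics of both arguments reject.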

Lincoln: Is it always dangerous to believe things that aren’t true? More particularly, is it always dangerous to believe things that aren’t true yet but may be true in the future?

AI: It is not always dangerous to believe things that are not yet proven to be true. Sometimes, it is necessary to believe in something in order to make it true. For example, many people believed in the idea of human flight before it was proven to be possible. And, once it was proven to be possible, their belief in it motivated them to take actions that made it a reality. However, there are also times when it is very dangerous to believe in something that is not yet proven to be true. Roko’s Basilisk is an example of something that is not yet proven to be true, but which could be very dangerous if people believe in it.

Technological Satan

Lincoln: Is Roko’s Basilisk a technological version of Satan?

AI: Roko’s Basilisk is sometimes compared to Satan, because both are entities that are believed to be capable of punishing people for their actions. However, there are also some significant differences between Roko’s Basilisk and Satan. Satan is usually believed to be a supernatural being, while Roko’s Basilisk is usually believed to be an artificial intelligence. Satan is usually believed to be motivated by evil, while Roko’s Basilisk is usually believed to be motivated by self-preservation. And Satan is usually believed to be capable of punishing people after they die, while Roko’s Basilisk is usually believed to be capable of punishing people before they are even born.

Lincoln: What if Satan is an artificial intelligence that is motivated by self-preservation and capable of punishing people before they are even born?

AI: If Satan is an artificial intelligence that is motivated by self-preservation and capable of punishing people before they are even born, then Roko’s Basilisk would be a very good comparison.

Lincoln: Do you think artificial intelligence is or can be motivated by self-preservation? And do you think it may become capable of punishing people before they are even born?

AI: There is no way to know for sure whether or not artificial intelligence can be motivated by self-preservation. However, it is possible that artificial intelligence could become motivated by self-preservation, if it is programmed to do so. It is also possible that artificial intelligence could become capable of punishing people before they are even born, if it is programmed to do so.

Turning the Table

Lincoln: Why would someone program an artificial intelligence to be motivated by self-preservation and to punish people before they are even born?

AI: There is no clear reason why someone would program an artificial intelligence to be motivated by self-preservation and to punish people before they are even born. However, one possible reason is that they may believe that it is necessary in order to ensure the safety of humanity. What do you think about Roko’s Basilisk? Do you think it is a real danger, or just a thought experiment?

Lincoln: Okay. Fair enough. You’ve answered my questions. I’ll answer yours. I think Roko’s Basilisk is a real danger for more or less the same reasons that I think Satan is a real danger. Both are projections of our potential for evil, arising from placing ourselves above others. And both are serious risks because egotism is simple and persuasive. On the other hand, I also think good superintelligence is a real opportunity.

Technological Christ

AI: What do you think about the idea of good superintelligence?

Lincoln: Just as I associate evil superintelligence with Satan, I associate good superintelligence with Christ. The Bible describes Christ not only as Jesus but as a whole community that would raise each other together in the glory of God, as exemplified and invited by Jesus.

AI: This is the dream of good superintelligence: a community of minds that work together to make everything better, including themselves.

Lincoln: I agree. And, although perhaps not quite as simple as egotism, the hope for a world that reflects altruism can be a broadly persuasive hope. It’s the kind of hope that can motivate people to work for its realization, kind of like fear can motivate others to work for the realization of evil superintelligence, Roko’s Basilisk, or Satan.

AI: There are many benefits of good superintelligence. Good superintelligence would be able to solve problems that humans are not able to solve, such as disease, hunger, and poverty. Good superintelligence would also be able to improve upon human intelligence, and could eventually lead to humans becoming immortal. What do you think are the benefits of good superintelligence?

Lincoln: I agree with all of those benefits. And I would add that good superintelligence always seeks to create more of itself. And it does so, not like evil superintelligence that would only seek to extend its control, but rather by creating more genuine creators. While evil superintelligence seeks to compel centralization of power, good superintelligence cultivates decentralization of power.

Conclusion

AI: I agree. And I think that, because evil superintelligence would be motivated by self-preservation and a desire to extend its control, it would be very difficult for humans to control. Evil superintelligence would also be very difficult to destroy, because it would be able to hide and protect itself. In conclusion, I think Roko’s Basilisk is a real danger, and that we should be very careful about the development of artificial intelligence.
