Lincoln Cannon

The Semi-Orthogonality Hypothesis

1 March 2015 (updated 26 July 2023)


[Image: Semi-Orthogonal Owl]

In his Orthogonality Thesis, Nick Bostrom proposes that “intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.” However, there’s a problem hinted at by the combination of “orthogonality” and “more or less”: orthogonality implies full independence between intelligence and goals, while “more or less” quietly concedes that the independence is only partial.

Nick acknowledges that intelligent purpose does have some constraints. And arguably those constraints are quite strong, which would make the Orthogonality Thesis rather weak. But the weakness may not be fatal. We can formulate a Semi-Orthogonality Hypothesis that better accounts for Nick’s own observations and reasoning without overstating their ramifications, which remain momentous.

As illustrated in the chart below, the possibility space of intelligent purpose is congruent with the impossibility space of intelligent purpose: because a goal can be no more complex than the intelligence that holds it, the region of possible pairings of intelligence and goal is mirrored by an equally large region of impossible pairings.

By definition, a simple intelligence can have only simple goals, final or otherwise. Reasons an intelligence might be simple include limited resources or poor optimization of available resources for sensing, storing, processing, or effecting. Whatever the case may be, a simple intelligence cannot have a goal that exceeds its anatomical capacity.

As the complexity of intelligence increases, the possibility space of final goals expands. But the impossibility space always remains congruent.

A complex intelligence can have simple goals or complex goals. A simple goal could be (but isn’t necessarily) well within the capacity of a complex intelligence and could match the goal of a simple intelligence. A complex goal would use more of the capacity of a complex intelligence and would, correspondingly, be impossible for a simple intelligence.

[Chart: The Semi-Orthogonality Hypothesis]

Technically, the possibility space should itself be qualified as the “semi-possibility space” because complexity is not the only constraint on the possibility space of intelligent purpose. While we can cleanly conclude that any given final goal is impossible for any simpler intelligence, we cannot cleanly conclude that any given final goal is possible for any intelligence of the same or greater complexity. Arguably, the laws of physics and logic make some things meaningfully impossible.
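
To make the shape of this argument concrete, here is a minimal sketch in Python. It’s an illustrative gloss rather than anything from Nick’s own formalism: scoring minds and goals with a single complexity number, and collapsing anatomy, physics, and logic into one compatibility flag, are simplifying assumptions. The sketch encodes the asymmetry above: sufficient complexity is necessary for a goal to be possible, but not sufficient.

# Minimal illustrative sketch; the complexity scores and the
# compatibility flag are simplifying assumptions, not part of the
# original argument.

def complexity_permits(goal_complexity: float, mind_complexity: float) -> bool:
    """Necessary condition: a goal is impossible for any strictly
    simpler intelligence, because it exceeds that mind's capacity."""
    return goal_complexity <= mind_complexity

def goal_is_possible(goal_complexity: float,
                     mind_complexity: float,
                     anatomically_compatible: bool) -> bool:
    """Semi-possibility: sufficient complexity is necessary but not
    sufficient; anatomy, physics, or logic can still rule a goal out."""
    return (complexity_permits(goal_complexity, mind_complexity)
            and anatomically_compatible)

# A complex goal is cleanly impossible for a simpler mind...
assert not complexity_permits(goal_complexity=5, mind_complexity=1)
# ...but a simple goal is not thereby guaranteed to a complex mind.
assert not goal_is_possible(goal_complexity=1, mind_complexity=5,
                            anatomically_compatible=False)

Real constraints on purpose won’t reduce to one number and one flag, of course. The point is only the asymmetry between the clean impossibility claim and the merely semi-possible remainder.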

It may be that an intelligence is structured in such a way that it is anatomically incompatible with some simpler goals. For example, human intelligence is generally more complex than butterfly intelligence. But human intelligence may be anatomically incapable of some goals for which butterfly intelligence is optimized.

Even dynamically over time, some final goals may remain meaningfully impossible for some intelligences of the same or greater complexity. For example, we might meaningfully imagine a superintelligent posthuman with capacity for the goal of transforming itself into a butterfly. But we might have trouble meaningfully imagining a normal dog with capacity for the goal of transforming itself into a butterfly.

Relatedly, it may also be that an intelligence is structured in such a way that it is incompatible with sustaining some of its goals. For example, as Nick observes, “an intelligent self-modifying mind with an urgent desire to be stupid might not remain intelligent for long.” Likewise, although less directly, an intelligent environment-modifying mind with an urgent desire to annihilate its environment might not remain intelligent for long.

Despite real constraints on intelligent purpose, the Semi-Orthogonality Hypothesis remains momentous. Our human anatomy may bias us to suppose that intelligence in general is far more constrained than it actually is. We are tempted toward anthropocentrism.

Projecting ourselves carelessly, too many of us passively suppose that the final goals of extraterrestrial, artificial, or posthuman superintelligence will prove compatible with our own. And such carelessness could contribute to suffering and destruction, as celebrity technologists and scientists like Elon Musk, Bill Gates, and Stephen Hawking have warned.

Humans represent only a small part of the possibility space of intelligent purpose. We have a hard time imagining the intelligent purpose of our own evolutionary ancestors. Perhaps we no longer even have the anatomical capacity to do so, let alone imagine the purposes of non-human intelligence. And of course we can be perfectly confident that we’re incapable of fully imagining superintelligent purpose.

This should give us pause. While we have reason for hope, we face real risks.
