How strong are constraints on intelligent purpose?
Lincoln Cannon
13 May 2012 (updated 26 July 2023)
In “The Superintelligent Will: Motivations and Instrumental Rationality in Advanced Artificial Agents,” Nick Bostrom argues briefly for the idea that the intelligence and the purpose of an agent are mostly independent. He calls this the “orthogonality thesis.” I’m not persuaded by his reasoning.
He acknowledges three constraints on the relation between intelligence and purpose. And although he characterizes them as “weak,” at least two of them seem strong.
1) “It might be impossible for a very unintelligent system to have very complex motivations, since complex motivations would place significant demands on memory.”
In this first constraint, Nick acknowledges that a simple agent cannot have a complex purpose. Thus, for any given agent, there are innumerable purposes beyond its capacity. This doesn’t seem to qualify as a weak constraint on the relation between intelligence and purpose.
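To make the memory demand concrete, here is a toy sketch (my illustration, not Nick's; the memory budget and state sizes are arbitrary). If a motivation is represented as an explicit utility table over n-bit world states, the table needs 2^n entries, so a fixed memory budget rules out every motivation above some complexity:

```python
# Toy sketch of constraint 1: a motivation represented as an explicit
# utility table over n-bit world states needs 2**n entries. An agent
# with a fixed memory budget cannot "have" motivations beyond that.

MEMORY_BUDGET = 1_000  # arbitrary: max utility entries this agent can store

def can_hold_motivation(n_state_bits: int) -> bool:
    """True if a tabular utility over n-bit states fits in memory."""
    return 2 ** n_state_bits <= MEMORY_BUDGET

for n in (4, 8, 10, 20):
    print(f"{n}-bit states: {2 ** n} entries, fits: {can_hold_motivation(n)}")
# 20-bit states already need 1,048,576 entries -- far beyond the budget.
# In this sense, innumerable purposes lie beyond a simple agent's capacity.
```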
2) “In order for an agent to ‘have’ a set of motivations, this set may need to be functionally integrated with the agent’s decision-processes, which again would place demands on processing power and perhaps on intelligence.”
I’m not sure that I understand this second constraint. However, the other two seem strong enough on their own to challenge the “weak” characterization.
3) “For minds that can modify themselves, there may also be dynamical constraints; for instance, an intelligent mind with an urgent desire to be stupid might not remain intelligent for very long.”
Third, Nick acknowledges that if an agent can modify itself, then it may be subject to dynamic constraints. The example he gives is that an intelligent agent with the purpose of making itself stupid will not remain intelligent for long.
This seems to lead to a constraint Nick doesn’t acknowledge. Or, at least, it seems to lead to a stronger formulation of the third constraint: if an agent can modify anything (itself or its environment), then it may be subject to dynamic constraints.
A broader example, then, would be that any agent whose purpose directly or indirectly undermines that agent cannot perpetuate itself as it is. And, logically, there are innumerable self-undermining purposes. Again, this doesn’t seem to qualify as a weak constraint on the relation between intelligence and purpose.
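Here is a toy sketch of that dynamic constraint (my illustration; the numbers and decay rate are arbitrary). An agent whose purpose is, say, to make itself stupider succeeds only by eroding the very capability it needs to keep pursuing any purpose, so it cannot remain an intelligent pursuer of that purpose:

```python
# Toy sketch of the dynamic constraint: a self-undermining purpose --
# here, "make myself stupider" -- erodes the very capability the agent
# needs to keep pursuing it.

def pursue_self_undermining_purpose(intelligence: float, steps: int = 10) -> None:
    for step in range(steps):
        if intelligence < 1.0:  # arbitrary threshold below which the agent cannot act
            print(f"step {step}: too stupid to act; the purpose can no longer be pursued")
            return
        intelligence *= 0.5  # each success at the purpose halves intelligence
        print(f"step {step}: intelligence reduced to {intelligence:.2f}")

pursue_self_undermining_purpose(intelligence=10.0)
# Success at the purpose destroys the capacity to pursue it: the agent
# cannot perpetuate itself as it is.
```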
If you like these thoughts, you might also like “The Semi-Orthogonality Hypothesis.”