
How strong are constraints on intelligent purpose?



In "The Superintelligent Will: Motivations and Instrumental Rationality in Advanced Artificial Agents", Nick Bostrom argues briefly for the idea that the intelligence and the purpose of an agent are mostly independent (which he calls the "orthogonality thesis"). I'm not persuaded by his reasoning. He acknowledges three constraints on the relation between intelligence and purpose, and although he characterizes them as "weak", at least two of them seem strong.

1) "it might be impossible for a very unintelligent system to have very complex motivations, since complex motivations would place significant demands on memory"

With this first constraint, he acknowledges that a very simple agent cannot have a very complex purpose. Thus, for any given agent, there are innumerable purposes beyond its capacity. That doesn't seem to qualify as a weak constraint on the relation between intelligence and purpose.

2) "in order for an agent to 'have' a set of motivations, this set may need to be functionally integrated with the agent's decision-processes, which again would place demands on processing power and perhaps on intelligence"

I don't understand this second constraint well enough to assess it. In any case, the other two seem strong enough on their own to challenge the "weak" characterization.

3) "For minds that can modify themselves, there may also be dynamical constraints; for instance, an intelligent mind with an urgent desire to be stupid might not remain intelligent for very long"

Third, Nick acknowledges that an agent capable of modifying itself may be subject to dynamic constraints. The example he gives is that an intelligent agent with the purpose of making itself stupid will not remain intelligent for long. This seems to point to a constraint Nick doesn't acknowledge, or at least to a stronger formulation of the third constraint: an agent capable of modifying anything (itself or its environment) may be subject to dynamic constraints. A broader example, then, would be that any agent whose purpose directly or indirectly undermines the agent cannot perpetuate itself as it is, and logically there are innumerable such self-undermining purposes. Again, this doesn't seem to qualify as a weak constraint on the relation between intelligence and purpose.

[Thanks for reading! You might also like "The Semi-Orthogonality Thesis".]

Lincoln Cannon