Lincoln Cannon

What Robots Can Do

14 December 2020

What can robots do? What can’t they do? Are there hard limits on the potential of machine intelligence? Or will it continue to replicate and eventually surpass the capacities of human intelligence?

Randy Paul brought to my attention “What Robots Can’t Do,” an interview with Frank Pasquale by Lawrence Joseph. Randy is a close friend and long-time supporter of my blog. Frank is a professor of law and respected commentator on policy related to artificial intelligence. And Lawrence is a retired professor of law.

Lawrence begins the interview by asking Frank to comment on his latest book. It’s entitled New Laws of Robotics. Very recently released, it appears to have only one review on Amazon so far. Frank explains that he intends the book to provide “a compelling story about the kind of future we want.”

Ethics of Human Simulation

Before characterizing positive futures, however, they discuss a present problem. The problem arises from the use of AI to simulate humans. Frank acknowledges that benefits do come from that use. But he explains that it also results in various social detriments.

The example he focuses on is in education. A child who is educated by an AI, he claims, may end up with an under-developed or misdirected sense of empathy. That’s because, as he reasons, only a human teacher has “a fundamental equality” and “a common dignity” with the student, “grounded in our common fragility.” By contrast, he says, an AI can “be ‘programmed’ not to care about being neglected” and “isn’t hurt if you ignore it.”

However, Frank’s reasoning is problematic. All intelligence is fragile. Whether artificial or natural, and despite other major differences, each intelligence still requires time and space and resources to survive, operate, and pursue whatever might be its goals. A neglected AI most certainly can be hurt if you ignore it.

Frank may contend that the harm to a neglected AI would be different in kind. But it seems only prejudice can sustain that position. From a secular perspective, human and machine intelligence are both results of billions of years of evolution. And from a theistic perspective, with or without evolution, if God created the world then everything, including both human and machine intelligence, is artificial.

Alternatively, then, Frank may contend that the harm to a neglected AI would be different in degree. This does seem to be a legitimate position presently. Machine intelligence cannot yet fully exhibit the complex emotional characteristics of human intelligence. But, again, prejudice would be required for confidence that machine intelligence won’t exhibit such capacities in the future.

I think Frank’s position on this matter could be stronger. He should discard the appeal to a difference in kind based on fragility. And he should embrace an appeal to a difference in degree based on complexity. Importantly, however, this would probably shift policy recommendations away from any potential prohibition of AI teachers and toward better cultivation of emotional education generally, regardless of whether AI teachers are involved.

First New Law of Robotics

Lawrence and Frank move on to talk about the New Laws of Robotics. Frank states the first:

“Robotic systems and AI should complement professionals, not replace them.”

Frank intends this law to decentralize power. In his words, “professions distribute power over a sector to labor.” If we create technology to replace professionals, he reasons, it’s more likely that a few large corporations will own that power. Whereas, if we create technology to complement professionals, many people will own that power.

Frank is correct. Centralization of power is a high-risk strategy, whether or not machine intelligence is involved. While centralized power might be used for good, it’s unpredictable because it transcends the constraints of competition. Only decentralized power cultivates the requirement for cooperation, and it is therefore far more predictable and manageable for the general welfare.

Second New Law of Robotics

Frank then introduces his Second New Law of Robotics. It states:

“Robotic systems and AI should not counterfeit humanity.”

Frank intends this law to ensure that we can easily distinguish between human and machine intelligence. And thereby, he hopes, we could easily distinguish between the creative artifacts of these intelligences. He reasons, from there, that we would then decrease the risk of manipulation.

Frank is incorrect. The ability to distinguish between human and machine intelligence and their artifacts will not help us avoid manipulation. Humans are already quite capable of manipulation. And that capacity will continue to increase so long as machine intelligence complements human intelligence, as required by Frank’s first law.

The way to avoid manipulation is better methods for identifying trustworthy artifacts. Those methods should be decentralized. And their inputs should be decentralized. But I see no reason why it should make any difference whether a human or a machine applies the methods.
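
As one minimal sketch of what such a method might look like, consider verifying an artifact against a cryptographic signature. Everything below is an illustrative assumption of mine, not a proposal from the interview: it uses the third-party Python cryptography library, and the variable names are hypothetical.

```python
# A minimal sketch of provenance verification, assuming the
# third-party "cryptography" package. The point: the verifier
# checks the artifact and the key, never the nature of the author.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Any author, human or machine, holds a signing key.
author_key = ed25519.Ed25519PrivateKey.generate()
public_key = author_key.public_key()

artifact = b"Some creative artifact: an essay, an image, a dataset."
signature = author_key.sign(artifact)

# Verification needs only the artifact, the signature, and a key
# we already trust. Nothing depends on what kind of mind signed.
try:
    public_key.verify(signature, artifact)
    print("Artifact verified against the trusted key.")
except InvalidSignature:
    print("Artifact failed verification.")
```

Decentralization would then be a matter of how the trusted keys are distributed and endorsed, not of who or what holds them.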

In fact, this proposed law is analogous to ad hominem reasoning or even racism. We rightly consider ad hominem a fallacy because even a bad character can make a true statement. And most of us rightly consider racism unethical because the particulars of physiology are orthogonal to our most important values.

Third New Law of Robotics

Then Frank introduces his Third New Law of Robotics. It states:

“Robotic systems and AI should not intensify zero-sum arms races.”

Frank intends this law to help us avoid violence, stagnation, and waste. Examples he gives include military robots, as well as automated financial and advertising systems. And it appears that he intends this law to complement the first.

I agree with this. We should decentralize power and then avoid behaviors that would work to undermine that decentralization of power. Competition can be valuable. But if antagonistic behavior surpasses a certain threshold, it actually becomes anti-competitive.

The relationship of the third and first laws, in context of the weakness of the second law, seems to imply a simple generalization. Putting aside prejudice against machine intelligence, the first and third laws can become a single law about moral relationships between all kinds of intelligence. And that generalization would be, I think, yet stronger than the originals. Intelligence should perpetually cultivate its own decentralization.

Fourth New Law of Robotics

Frank then introduces his Fourth New Law of Robotics. It states:

“Robotic systems and AI must always indicate the identity of their human controllers.”

Frank intends this law to hold humans accountable for artifacts created by the machine intelligence that humans create. He points out that this would prohibit development of autonomous artificial intelligence. He acknowledges that this would bother some AI developers, who aspire to create “mind children.” But, he claims, “there is a world of difference between an artificial simulation of human will or intelligence, and the real thing.”

Frank is incorrect. The flaw in reasoning is the same as that in his reasoning for the second law. He seems to be asserting that there is and always will be a difference in kind between human and machine intelligence. And that’s simply based on prejudice rather than evidence.

Similar assertions were once common in attempts to justify the dominance of some human races over others. That resulted in the prolongation of human slavery, among other social ills. And analogous assertions directed at machine intelligence will likely result in analogous unethical outcomes.

I do agree, however, that we should not simplistically ignore the behaviors of our intelligent creations, whether they are human or machine. As human parents may be responsible for the crimes of their human children within various constraints, so engineers should be responsible for the crimes of any machine children within various constraints. Ongoing deliberation about ethical constraints is necessary in both cases. It should never be as simple as claiming that one is always absolved and the other is always responsible.

Aim of Transhumanism

Lawrence encourages Frank to elaborate on the difference between human and machine intelligence. Frank’s immediate response is to characterize the Transhumanist project as that of supplanting humanity with robots. Of course that’s at best a poor generalization. And I’d even say it’s a plain mischaracterization.

Transhumanism is principally about enhancing humanity. A minority of Transhumanists wish such enhancement to be only a precursor to replacement by machine intelligence. But by far most of us anticipate such enhancement to manifest itself in ways that align well with Frank’s First New Law of Robotics. The best way to prevent power from centralizing into machine intelligence would be for enhanced human intelligence and its evolutionary descendants to keep pace with it.

Human vs Machine Intelligence

After the mischaracterization of Transhumanism, Frank more directly addresses the issue. He makes several suggestions about the difference between humans and robots:

A) Machine intelligence cannot ever experience emotions or empathy because it’s not embodied.

B) Machine intelligence cannot ever frame its future within self-narration and imagination because it relies on cost-benefit analysis.

C) Machine intelligence cannot ever care for someone because it can be programmed not to abandon someone.

D) Machine intelligence cannot ever be esteemed as highly as animal intelligence because it’s not as risky.

Unfortunately, Frank’s suggestions are little more than additional expressions of prejudice. And in some cases, his suggestions are simply incorrect.

Consider suggestion A. Contrary to his suggestion, all intelligence is embodied. All software is embodied. Its anatomy is different from human anatomy. But silicon is just as material and real as carbon.

Moreover, we cannot possibly know that machine intelligence will never experience emotion. We don’t know that ordinary computers don’t already feel something now, even if it’s altogether alien to what humans feel. And if you want to insist on the skeptical side, I can’t know that you feel anything. It’s charity, not certainty, that moves us to attribute emotions to beings beyond ourselves.

Consider suggestion B. Again, contrary to Frank’s suggestion, machine intelligence is not limited to cost-benefit analysis. It is a trivial matter to program a machine to pursue a goal despite any costs or benefits.
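
As a trivial illustration (the names here are hypothetical, not from any real system), a goal-pursuing loop can compute a cost and simply never consult it:

```python
# A minimal sketch of an agent that pursues a fixed goal regardless
# of any cost signal. All names here are illustrative.

def pursue_goal(state, goal, step, cost):
    """Advance toward the goal, ignoring whatever cost() reports."""
    while state != goal:
        penalty = cost(state)  # computed, but deliberately unused
        state = step(state)
    return state

# Example: count to 10 while a "cost" grows without bound.
final = pursue_goal(
    state=0,
    goal=10,
    step=lambda s: s + 1,
    cost=lambda s: s ** 2,  # mounting cost, never consulted
)
print(final)  # 10
```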

In addition, it seems to me that computers are already engaging in something approaching imagination and self-narration. They use evolutionary algorithms to select and discard possibilities. That seems similar to what I’m doing when I imagine the future. And when computers include themselves as variables in those algorithms, that seems like a step toward self-narration.
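
For a concrete flavor of that kind of selection, here is a toy evolutionary loop, with every detail an illustrative assumption of mine: candidate possibilities are generated, scored, and the weakest are discarded, generation after generation.

```python
# A toy evolutionary loop: generate candidate "futures", keep the
# fittest, discard the rest, and mutate the survivors. All details
# here are illustrative assumptions, not anyone's production system.
import random

TARGET = 42  # the imagined goal state

def fitness(candidate):
    return -abs(candidate - TARGET)  # closer to the target is better

def mutate(candidate):
    return candidate + random.randint(-3, 3)

population = [random.randint(0, 100) for _ in range(20)]
for generation in range(50):
    # Select: keep the best half, discarding weaker possibilities.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Vary: imagine new possibilities near the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

print(max(population, key=fitness))  # typically at or near 42
```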

Regarding suggestion C, how free are humans to abandon care? Few of us seem to abandon care arbitrarily. And when we do, most of us seem to have reasons. Computers can already operate in similar ways.

Frank would likely contend that the reason for a human would be an emotional one. And, from there, he would probably claim that the reason for a computer could not be an emotional one. So this becomes another version of suggestion A, which is prejudice.

Suggestion D begs the question. It basically implies that human and machine intelligence are different because we say so. If we make a machine that’s incapable of harming us, as a pet can, then it’s less risky and less worthy of respect. But that doesn’t mean that we can’t make a different machine that is capable of harming us.

Importance of Human Intelligence

In conclusion, Frank talks about the importance of humans. In part, he does this by appealing to the current power of humans. As he puts it, “people still ultimately decide whether to recognize algorithmic literature as worthy of study and reflection, or whether to license robots as nurses, or any of the intermediary steps that push us in the direction of a posthuman future.” And he’s right that, at least for now, we do have that power.

But more importantly, he appeals to the ongoing value of humans. And this is where he and I are aligned. No matter what machine intelligence may prove capable of, we have a moral obligation to engineer technology in support of each other, and not against each other. As he puts it, “the future of robotics can be inclusive and democratic.”

I hope Frank will eventually recognize that the celebration of human intelligence need not be a zero-sum game with the celebration of machine intelligence. That would contradict the best implications of his New Laws of Robotics. By marginalizing machine intelligence, we would be working against the possibility of yet broader decentralization of power.

And, indeed, the only way to uphold decentralization of power over the long run is likely to be a continued blurring between human and machine intelligence. Machines will almost certainly continue to become more powerful. And humans will keep up only to the extent that we continue to integrate our lives, including our bodies, with our machines. I trust that, with careful work, we can extend the virtues of humanity to superhuman intelligence without taking on the limitations that we associate with “robots.”
