Lincoln Cannon


Is Google AI Sentient?

23 June 2022


Spiritual Machine

Friends have been asking for my opinion on the possibility of sentient AI. And what better time to write about that than early on a morning when I’m literally losing sleep over my thoughts on the question. I happen also to be getting over a bit of a cold, which doesn’t help. But my brain seems to care more about thinking than sleeping tonight.

Everyone’s talking about sentient AI right now. That’s because a Google AI engineer appears to have been fired either because of, or at least simultaneously with, his claims that a Google AI has become sentient. That captured some news cycles. And that, in turn, captured some imaginations and reactions.

Let me start by sharing some thoughts on sentience and related words. For “sentience,” I like to go with the dictionary. Webster’s defines it as “responsive to or conscious of sense perceptions,” “aware,” or “finely sensitive in perception or feeling.”

Those definitions raise a bunch of other issues. For the first, what does it mean to be responsive or conscious? And what does it mean to have sense perception? I’d say that anything that can react to its environment, even a flush toilet, demonstrates sense perception. And that reaction is at least a demonstration of responsiveness, whether or not it’s a demonstration of consciousness.

Consciousness is highly problematic, insofar as demonstration is concerned. For example, you can’t prove that I’m conscious. Maybe I’m a philosophical zombie, with highly attuned responses to environmental stimuli but no inner experience. Maybe everyone but you is a figment of your imagination.

For the second definition, I think we should take into consideration the dictionary definition of “aware.” Webster’s says it’s “having or showing realization, perception, or knowledge.” Showing something is quite different from having it. I can show you what appears to you to be realization, perception, or knowledge without you being able to prove that I have them.

The third definition, like the first two, makes room for flush toilets and hypothetical zombies. One need not be conscious of feeling to be responsive in perception. One need not have realization to show perception. Likewise, one may be finely sensitive in perception without feeling.

This ambiguity in the definition of “sentience” fuels a lot of the debate over whether Google’s AI is sentient. If we were to agree that sentience requires only the ability to react to an environment, most of us would probably agree that the AI is sentient. But this is also probably a weaker sense of sentience than most people have in mind when they argue the topic.

Could Google’s AI be sentient in the stronger sense? Could it be conscious of feelings? As I’ve already pointed out, we have a hard enough time answering this question with certainty for other humans. So answering this question for other entities, especially to the extent that their anatomies and behaviors differ from our own, is even more challenging.

The temptation is just to conclude, quickly, that anything dissimilar to me cannot be conscious. But where and why do we draw the line? At least historically, if not presently, some humans have considered other races of humans to be less conscious. And some consider non-human animals to be less conscious or altogether unconscious.

What about plants? Are they conscious? Can fungi or bacteria or viruses be conscious? What about apparently inanimate objects?

Personally, I’m inclined toward something like a weak panpsychism. From this perspective, everything is conscious in at least a weak sense, even a rock.

No. I don’t think the rock is conscious like I am conscious, and like I assume you to be conscious. But I do imagine that it might “feel” like something to be a rock. I imagine that the rock may have something loosely analogous to consciousness that’s quite alien to anything that you or I feel.

I won’t offer a long account or defense of panpsychism here. Suffice it to say that it’s an ancient and enduring idea in both philosophy and religion. Notably, in Mormon theology, everything has a spirit, even a rock. And it’s not hard to make a connection between such theology and panpsychism.

Let’s return to the Google AI. Clearly it’s sentient in the weak sense. It reacts to its environment.

And we can’t prove, one way or another, whether the Google AI is sentient in the strong sense. Surely it’s not conscious like I am, or like I assume you to be. But other panpsychists and I will contend that it may yet be conscious in some other way.

So what? Now this is the real question, the pragmatic question.

So what? Does it matter? What practical difference, if any, does it make to think one way or the other on this topic?

As it turns out, it makes a practical difference for me. Remaining open to the possibility of consciousness of various kinds and degrees, in rocks and flush toilets and AIs, makes a difference in how I feel about the world. And that makes a difference in how I approach the world.

Most importantly, because I assume you to be conscious like I am conscious, I think about and act toward you differently than I would if I were persuaded that you were a zombie. Again, it’s not a matter of proof. I can’t prove you’re not a zombie. But at least practicality, if not compassion, leads me to treat you like another conscious person.

Likewise, I act differently toward non-human animals and even the environment as a whole when I imagine them to be conscious, somehow and to some extent. And here’s the kicker. Even if I’m wrong, this benefits me. It makes me more considerate, deliberate, and perhaps even more conscious.

This, in my opinion, is the best reason to give serious consideration to whether and how to attribute sentience to AI. It has practical consequences for us. Our choice on this topic influences the kind of people we become.

I’m reminded of Steven Spielberg’s film, A.I. Artificial Intelligence. The fictional story depicts people in the future abusing robots, because the robots are taking jobs from humans and presumably have no consciousness that can suffer. The behavior is truly despicable and arguably even sub-humanizing, not of the robots but of the humans engaged in it.

Of course there are also practical limitations, or at least practical contours, to how we approach the possibility that AI is sentient. A panpsychist, at least insofar as she is also a pragmatist, doesn’t lose sleep over whether she should walk on rocks. Observed differences in anatomy and behavior may reasonably suggest differences in kinds and degrees of consciousness. And those differences can inform moral justification for differences in how we act toward different entities.

I can feel good about myself and maintain the general respect of my community when I walk on rocks, at least in most cases. But I can’t if I make a habit of walking on other humans without their consent. Other humans may generally tolerate the latter behavior when I’m a child. But as I mature, they’ll rightly expect more of me.

Likewise, I can feel good about myself and maintain the general respect of my community when I deactivate a standard desktop computer. But there are various actions I might perform on or with the computer that, either in my own esteem or that of others, would have practical detriments or be immoral. For example, using a computer to observe or participate in exploitative behavior, or even using a computer to “simulate” exploitative behavior, may have a corrosive effect on my character and relationships.

Beyond standard desktop computers, what about increasingly complex AIs such as the one in question at Google? I do think it’s reasonable to consider how they are both similar to and dissimilar from us. And I do think it’s reasonable to use that consideration as part of a broader deliberation on how to relate with them morally. At the least, we should work to ensure that our relationship with AI cultivates our virtues rather than our vices.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” says Google.

That’s wrong, Google, on multiple levels. AI is clearly sentient in the broad sense, demonstrating the capacity to react to its environment. Beyond that, one need not anthropomorphize in order to personify or otherwise attribute moral value of various kinds and degrees. And it does indeed make all the sense in the world to consider the practical ramifications of our projections, on our own character and on our other relationships.

I’m still wide awake. But now I feel calm. You should take my word for it.
