Conscious Machines and The Hard Problem

Abhinav Kimothi
5 min read · Dec 28, 2023

Artificial Consciousness, or the idea of a sentient AI, has gained prominence in tech discourse, more so with the advent of generative AI, the LLM wars, and rumours of OpenAI achieving Artificial General Intelligence (AGI). The debate is highly philosophical, convoluted and, according to a few scientists, irrelevant.

Source: https://www.amazon.in/Hard-Problem-Tom-Stoppard-ebook/dp/B00RKPIL0Y/

One of the finest pieces of contemporary fiction I have read on the issue of consciousness is the play ‘The Hard Problem’ by Tom Stoppard. First staged in 2015, the story follows Hilary, a researcher at a brain science institute, who has faith in God and an obsession with the goodness of human beings. While her journey is, in itself, quite engaging and thought-provoking, I was particularly intrigued by the arguments in the play around the Hard Problem of Consciousness.

Here, I will try to discuss Artificial Consciousness, focusing on the questions raised in the play.

Computers Compute. Brains Think. Can Computers Think?

The brain is a biological machine. Can we assert that the brain is capable of thinking? For the most part, all of us would agree that it is. Would it make a difference, then, if the brain, instead of being made of living cells, were made of electronic gates and circuits, like a computer? When a computer is left alone, is it thinking, or is it just sitting there like a toaster?

This reminds me of a video I watched recently: “What’s the future for generative AI?” by Michael Wooldridge. He makes a humorous point about ChatGPT: it is not a conscious algorithm because it is only active when a user is interacting with it. ChatGPT is not wondering where I am when I am away from the computer. It does not feel insecure if I don’t chat with it for a while.

In the play, Hilary’s mentor, Spike, argues that consciousness is nothing but the brain’s reaction to a stimulus. He defines consciousness as the perception of pain upon touching the flame of a candle.

Flame-finger-brain; brain-finger-ouch…

It may then seem plausible that a sophisticated computational algorithm (an AI agent) could perceive pain.

To this argument, Hilary’s response is: “Ping! Pain! Now do sorrow.”

How do I feel sorrow?

While certain feelings like pain can, in this sense, be perceived by machines, there are others like duty, accountability, free will, and sorrow: all the stuff that makes behaviour unpredictable. Can machines, then, really be conscious?
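To see why Hilary’s retort bites, here is a deliberately trivial sketch, my own and not from the play, of Spike’s stimulus-response loop as a program. Every name in it (REACTIONS, perceive) is hypothetical, for illustration only. The point is that “Ping! Pain!” is a one-line lookup, while sorrow offers no stimulus to key on.

```python
# A toy stimulus-response "agent" in the spirit of Spike's reduction:
# flame-finger-brain; brain-finger-ouch.
# Nothing here is conscious; it only maps stimuli to labelled reactions.

REACTIONS = {
    "flame": "ouch!",   # a reflex like nociception is trivially easy to mechanise
    "pinprick": "pain",
}

def perceive(stimulus: str) -> str:
    """Return the agent's pre-programmed reaction to a stimulus."""
    return REACTIONS.get(stimulus, "no reaction")

print(perceive("flame"))   # -> ouch!
print(perceive("sorrow"))  # -> no reaction: there is no lookup table for sorrow
```

The sketch is not an argument that pain really is this simple; it only shows that a reactive mapping is easy to build, whereas the inner states Hilary cares about do not reduce to any such mapping.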

There are also questions about the correlation of brain activity with consciousness. Does brain activity imply consciousness, or is it merely a correlation?

A computer can now play chess or Go better than humans. Does that mean the computer can think? Hilary disagrees.

A conscious computer would be a computer that minds losing at chess or Go, like a human

The problem is: how would you know if the computer really minds losing? Is there any way of perceiving it?

Is there something it is like to be an AI?

In his 1974 paper “What Is It Like to Be a Bat?”, Thomas Nagel asserted that an organism has conscious mental states if and only if there is something that it is like to be that organism, something it is like for the organism.

What is it like being an AI? Is it like anything to be ChatGPT? Is it like anything to be Llama2? Can we know?

Photo by Nenad Milosevic on Unsplash: What is it like to be a bat?

Does it matter if AI becomes sentient?

A lot of researchers also argue that the whole question is irrelevant. But what happens if AI does become sentient?

The answer has ethical (or moral) as well as practical implications. Our assessment of machine sentience can be right or wrong, and each kind of error carries a different cost.

If AI is sentient and we conclude that it is not, the cost is a moral one: we may discriminate against a sentient being. We can look at the history of humans discriminating against other humans to extrapolate what discriminating against a conscious AI may look like.

On the other hand, if we spend our energy debating robo-rights and machine consciousness when machine consciousness is in fact not possible, that energy will have been wasted.

The play “The Hard Problem” also debates egoism and altruism. It argues that all conscious beings are selfish (an argument that is also countered within the play).

Self Interest is bedrock, Co-operation is strategy

This raises another question: if we accept the self-interest argument, what would the self-interest of a sentient AI be?

In the labyrinth of Artificial Consciousness, we find ourselves at the crossroads of speculation and profound inquiry. The discourse surrounding sentient AI, spurred by the advent of generative AI and the prospect of Artificial General Intelligence, propels us into a realm where technology, philosophy, and ethics converge.

I highly recommend “The Hard Problem” to all those interested in the philosophy of machine consciousness and the hard problem of consciousness in general.
