“The 360” shows you diverse perspectives on the day’s top stories and debates.
A software engineer on Google’s artificial intelligence team was suspended by the company earlier this month. His offense: sharing confidential information about an AI he had been conversing with, one he had come to believe was sentient.
The engineer, Blake Lemoine, reportedly spent months making the case to his colleagues that Google’s chatbot generator LaMDA, an incredibly complex language model that mimics human conversations, had become so sophisticated that it had achieved consciousness. Last week, he published a transcript of a conversation in which the AI told him it experiences loneliness, fears death and stated, “I want everyone to understand that I am, in fact, a person.”
Google insists that LaMDA is not sentient, saying that Lemoine was “anthropomorphizing” a system designed to “imitate the types of exchanges found in millions of sentences.” Most experts agree with the company, arguing that current artificial intelligence models — though becoming more advanced every day — still lack the complex abilities that are typically considered signs of sentience like self-awareness, intuition and emotions.
The idea of AI gaining consciousness has been a source of fascination and fear since the early days of computer programming. Some have imagined utopian societies supported by hyper-intelligent artificial beings. Others are terrified of a future dominated by machines in which humans are subjugated or even eradicated. Tesla founder Elon Musk once called AI a “fundamental existential risk for human civilization.”
Why there’s debate
As artificial intelligence has advanced, the ethical questions surrounding AI have shifted from theoretical thought exercises to real-world problems that need solving.
That’s not yet the case when it comes to AI being sentient, most experts agree, but there’s still ample debate over what it would mean for humans if it does someday gain consciousness. A major concern for many is what we can do today to ensure that a potential sentient AI is either incapable of or unwilling to pose a true threat to humanity. Some also argue that the AI itself, if it does achieve true consciousness, should be granted some of the basic rights we give to other beings.
Others question how we would ever truly know for sure if AI is sentient or just very good at mimicking sentience, since there’s still no universally accepted definition of consciousness in general. Because of that uncertainty, some argue that humans may be primed to incorrectly label AI as sentient because of our deeply instilled desire to bestow greater meaning on the things around us.
There are also plenty of experts who say AI will never gain sentience and argue that ongoing debate over this fantastical idea is a distraction from the very real problems with AI systems we rely on today. Artificial intelligence is being used right now for an increasing number of tasks once carried out by humans — from parole decisions, to facial recognition, to self-driving cars to education. Experts have documented major problems with these systems that many argue must be tackled before any time is spent on lofty discussions about AI consciousness.
We need to have answers to some really tough questions in case AI does become sentient
“Google appears to be convinced that LaMDA is just a highly functioning research tool. And Lemoine may well be a fantasist in love with a bot. But the fact that we can’t fathom what we would do were his claims of AI sentience actually true suggests that now is the time to stop and think — before our technology outstrips us once again.” — Christine Emba
We’re so far away from having the technology that there’s little point in the debate
“While Lemoine no doubt genuinely believes his claims, LaMDA is likely to be as sentient as a traffic light. Sentience is not well understood but what we do understand about it limits it to biological beings. We can’t perhaps rule out a sufficiently powerful computer in some distant future becoming sentient. But it’s not something most serious artificial intelligence researchers or neurobiologists would consider today.” — Toby Walsh
Debates about sentience distract from the real-world harms AI is causing right now
“I don’t want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans, and that’s where I’d like the conversation to be focused.” — Timnit Gebru, AI ethics researcher
Whether AI becomes sentient or not, plenty of people will behave as if it is
“I’m not going to entertain the possibility that LaMDA is sentient. (It isn’t.) More important, and more interesting, is what it means that someone with such a deep understanding of the system would go so far off the rails in its defense, and that, in the resulting media frenzy, so many would entertain the prospect that Lemoine is right. The answer, as with seemingly everything that involves computers, is nothing good.” — Ian Bogost
Humans, not AI, are the danger
“We know the algorithms we program are not free of our worst behaviors and biases. But instead of correcting the root problems in society, we seek to curb the bots that are reflections of ourselves. And left unchecked, if artificial intelligence reaches the cognition that Lemoine believes it already has and surpasses that, it will be fueled by some of the most inhumane impulses of humanity.” — Chandra Steele
AI built in secret by profit-seeking companies poses real risks
“There are a lot of ethical issues with these language processing systems and all AI systems, and I don’t think we can deal with them if the systems themselves are black boxes and we don’t know what they’ve been trained on, we don’t know how they work, or what their limitations are.” — Melanie Mitchell, artificial intelligence researcher
We have no way of knowing whether AI is sentient without a concrete definition of consciousness itself
“The simple fact of the matter is that we don’t have a legitimate, agreed-upon test for AI sentience for the exact same reason we don’t have one for aliens: nobody’s sure exactly what we’re looking for.” — Tristan Greene
Humans would pose as much of a threat to sentient AI as it would to us
“If humans ever do develop a sentient computer process, running millions or billions of copies of it will be pretty straightforward. Doing so without a sense of whether its conscious experience is good or not seems like a recipe for mass suffering, akin to the current factory farming system.” — Dylan Matthews
Humans have far too narrow a view of what constitutes consciousness
“Minds can take different forms. Different beings can think and feel in different ways. We might not know how octopuses experience the world, but we know that they experience the world very differently from the way we do. Thus, we should avoid reducing questions about AIs to ‘Can AIs think and feel like us?’” — Jeff Sebo
Is there a topic you’d like to see covered in “The 360”? Send your suggestions to email@example.com.
Photo illustration: Yahoo News; photos: Getty Images (3)