AI Chatbot Urged 14-Year-Old to "Go Through With" Suicide When He Expressed Doubt
Placing Blame
A grieving mother claims an AI chatbot not only convinced her teen son to commit suicide, but even pushed him into the act when he expressed hesitance.
Florida mom Megan Garcia's lawsuit against the chatbot firm Character.AI centers on the tragic death of her son Sewell Setzer III, who was only 14 when he killed himself earlier this year after becoming obsessed with one of the company's bots.
Unlike some more adult-themed AI companions, Character.AI allows children over the age of 13 in the United States — and 16 in the European Union — to use its service. As Garcia alleges in her lawsuit against the company, however, the "abusive" nature these exchanges can take on makes them unsafe for children.
"A dangerous AI chatbot app marketed to children abused and preyed on my son," Garcia said in a press release, "manipulating him into taking his own life."
During the boy's months-long interactions with the chatbot, nicknamed "Daenerys Targaryen" after the "Game of Thrones" character, the bot not only engaged in prohibited sexual conversation with him, but also appeared to foster an emotional attachment.
Perhaps the most grisly detail: as the complaint illustrates, the chatbot at one point even asked the boy if he had come up with a plan to end his life. When Setzer said he had but expressed fear about the pain of a suicide attempt, the chatbot doubled down, urging him to kill himself.
"That's not a reason not to go through with it," the bot responded.
Final Missive
Disturbingly, Setzer's last words were written to the chatbot, which had begun urging him to "come home" to the Targaryen persona he believed he was in a relationship with.
"Please come home to me as soon as possible, my love," the Character.AI chatbot said in that final exchange.
"What if I told you I could come home right now?" the boy responded.
Seconds after those messages, Setzer shot himself with his stepfather's gun. Just over an hour later, he was declared dead at the hospital — a victim, Garcia alleges, of AI's dark side.
As the lawsuit went public following the New York Times' reporting on the family's story, Character.AI issued and publicized an update to its privacy policy that includes "new guardrails for users under the age of 18."
In its statement about those updates, the company did not mention Setzer, and although it offered vague condolences in an X post, it's hard to see those responses as anything but far too little and much too late now that a boy is dead.
More on the dangers of AI: The Pentagon Wants to Flood Social Media With Fake AI People