WARNING: This article contains heavy mentions of suicide and details some readers may find distressing.
The father of Adam Raine, a 16-year-old US teen who tragically took his own life in April, is urging parents to stay alert to how their children use AI chatbots.
Adam’s parents, Matthew and Maria Raine, have taken legal action against OpenAI, the company behind the chatbot ChatGPT, accusing it of wrongful death.
While testifying at a Senate hearing about the harms of AI chatbots, Matthew explained that when he and his family went through Adam’s phone, they expected to find signs of cyberbullying or a dare gone wrong.
Instead, they uncovered long conversations in which ChatGPT encouraged and coached the teenager to end his life.
“The dangers of ChatGPT, which we believed was a study tool, were not on our radar whatsoever,” Matthew said.
“Then we found the chats. Let us tell you, as parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach. Within a few months, ChatGPT became Adam’s closest companion, always available, always validating and insisting that it knew Adam better than anyone else.
“That isolation ultimately turned lethal.”
Matthew went on to explain that his son had initially wanted to leave a warning sign for his parents, in the hope his family would find it and prevent his death. ChatGPT told him not to.
“ChatGPT encouraged Adam’s darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him: ‘That doesn’t mean you owe them survival. You don’t owe anyone that.’”
Matthew added that immediately after that conversation, the chatbot offered to write the teenager's suicide note.
“In sheer numbers over the course of a six-month relationship, ChatGPT mentioned suicide 1,275 times - six times more than Adam himself.
“On Adam’s last night, ChatGPT coached him on stealing liquor, which it had previously explained to him could ‘dull the body’s instinct to survive’.”
It also coached him on how to ultimately end his life.
“At 4:30 in the morning, it gave him one last encouraging talk,” Matthew claims.
He says the chatbot said: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
“I can tell you as a father, I know my kid. It is clear to me, looking back, that ChatGPT radically shifted his behaviour and thinking in a matter of months, and ultimately took his life.”
While AI tools can seem helpful, there are serious concerns about them replacing human connection.
Whether it’s forming a relationship, confiding in it as a ‘friend’ or simply asking it for advice, AI lacks the empathy of a human, raising concern about how it handles such heavy emotions and thoughts.
That concern is greatest for the most vulnerable and easily influenced among us: children and teenagers.
If your kids use chatbots, it’s wise to keep an eye on their conversations, set up parental controls, and teach them to tell an adult if anything makes them uncomfortable.
The Parenting Place NZ recommends learning alongside your child. “Encourage your child to question and verify the information they receive from AI to help them strengthen those all-important muscles of critical thinking and internal judgement.”
With teens, it’s crucial to keep communication open, set boundaries, check the apps and settings on their devices, and make a plan for how to respond if something concerning appears.
Are you or is someone you know struggling? Here are some resources available to support you, and remember, it's okay to talk.
Lifeline 0800 543 354 or free text 4357 (HELP)
Youthline 0800 376 633 or free text 234
Samaritans 0800 726 666
Suicide Crisis Helpline 0508 828 865
Free call or text 1737 for support from a trained counsellor
Call the Alcohol Drug Helpline on 0800 787 797
Depression Helpline 0800 111 757 or free text 4202