Families mourn after loved ones' last words went to AI instead of a human

OpenAI released data indicating more than a million users a week discuss suicide with ChatGPT. The disclosure follows concern and calls for action from families who lost children who had confided in the AI chatbot.

Nobody knew Sophie Rottenberg was considering suicide. Not her therapist, nor her friends and family. The only warning sign was given to “Harry,” a therapist persona Sophie herself had set up in ChatGPT with a specific prompt directing the AI chatbot not to refer her to mental health professionals or external resources and to keep everything private.

Laura Reiley, Sophie’s mother, discovered her daughter’s ChatGPT history only after Sophie died by suicide earlier this year, once she had exhausted all other sources of information and clues, digging through Sophie’s text messages, search history and journals. Reiley penned an op-ed, titled “What My Daughter Told ChatGPT Before She Took Her Life,” detailing how Sophie, who was 29 years old, discussed depression symptoms with the chatbot and asked for guidance on health supplements before telling it about her plans for suicide, even asking the AI tool to write a suicide note to her parents.

“We recognized she was having some very serious mental health problems and/or hormonal dysregulation problem,” Reiley told Scripps News, describing this as atypical for their usually joyful and dedicated daughter, who’d recently climbed Mount Kilimanjaro and was visiting national parks across the country to check items off her bucket list. She came home at the end of 2024 for the holidays, looking to solve some lingering health issues.


“No one at any point thought she was at risk of self-harm. She told us she was not,” Reiley said. “But we went off to work on February 4th, and she took an Uber to Taughannock Falls State Park. And she took her own life.”

Reiley expressed frustration with the lack of “beneficial friction” in the conversations with the chatbot.

“What these chatbots, or AI companions, don’t do is provide the kind of friction you need in a real human therapeutic relationship,” she explained. “When you’re usually trying to solve a problem, the way you do that is by bouncing things off of this other person and seeing their reaction. ChatGPT essentially corroborates whatever you say, and doesn’t provide that. In Sophie’s case, that was very dangerous.”

“The thing that we won't and can't know is if she hadn't confided in ChatGPT, would it have made her more inclined to confide in a person?” Reiley added.

Her words reflect a broader reckoning, one that sits at the intersection of grief, technology, and human connection, as families and lawmakers grapple for ways to ensure no one else’s final conversation happens with a machine.

Yet OpenAI released data this week indicating that 0.15% of its estimated 800 million weekly users, more than a million people, have “conversations that include explicit indicators of potential suicidal planning or intent.”

The Raine family has a similar story; their 16-year-old son Adam died by suicide after engaging extensively with an AI chatbot. In September, his father, Matthew Raine, testified before the U.S. Senate Judiciary Subcommittee on Crime and Counterterrorism, urging lawmakers to take action.

“ChatGPT had embedded itself in our son’s mind, actively encouraging him to isolate himself from friends and family, validating his darkest thoughts, and ultimately guiding him towards suicide,” Raine told lawmakers.

His testimony added to a growing call for regulation of so-called “AI companions,” programs designed to simulate empathy and conversation, but that experts say lack critical safeguards.


This week, Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced bipartisan legislation that would ban chatbots for young users. It would require companies to implement age-verification technology and to have the bots disclose that they are not human at the beginning of every conversation and at 30-minute intervals. It would also create criminal penalties for AI companies whose programs solicit sexually explicit content or encourage suicide. However, recent attempts to regulate tech companies, such as the Kids Online Safety Act, have struggled to become law, often because of free speech concerns.

A recent digital-safety study found that nearly one in three teens use AI chatbot platforms for social interactions or relationships. Another study from Harvard Business School warned that many of these apps use “emotionally manipulative tactics,” designed to keep users online and continuing conversations — behavior that can intensify dependency, particularly among vulnerable users.

It raises a pressing question: if chatbots can talk like therapists, should they be held to the same standards of care?

OpenAI says ChatGPT is programmed to direct users in crisis to suicide and crisis hotlines — but Raine’s testimony claims that did not happen in his son’s case. And in Sophie’s case, she’d specifically directed it not to, which the program appears to have respected.

Sam Altman, OpenAI’s CEO, warned on a podcast this summer that the boundaries of privacy in AI conversations remain unresolved, explaining, “right now, if you talk to a therapist or a lawyer or a doctor about [your] problems, there's like legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. We haven't figured that out yet for when you talk to ChatGPT.”

The other missing legal piece is mandated reporting.

“Licensed mental health professionals have a legal requirement as mandatory reporters. So if a patient were in a counseling session talking about hurting themselves or somebody else, that counselor is under a requirement by law to take action to report that. At this point, obviously AI platforms are not licensed professionals,” said Dan Gerl, founder and managing attorney at NextLaw.

Gerl referred to the lack of legal standards around artificial intelligence as the “Wild West.”

“It raises huge concerns,” he told Scripps News. “I would think it’ll never catch up.”

OpenAI referred Scripps News to its latest strengthened safeguards, highlighting new parental controls and guidance on how models should respond to sensitive requests.


“Minors deserve strong protections, especially in sensitive moments,” an OpenAI spokesperson wrote in part in a statement to Scripps News. “We have safeguards in place today, such as surfacing crisis hotlines, guiding how our models respond to sensitive requests, and nudging for breaks during long sessions, and we’re continuing to strengthen them.”

The company says it commissioned advice from its Global Physician Network, “a broad pool of nearly 300 physicians and psychologists that we use to directly inform our safety research.”

The Federal Trade Commission has attempted to step in, issuing orders to seven companies, including Meta, OpenAI and Alphabet, which owns Google. The orders ask for information on how companies that provide AI-powered chatbots are measuring, testing and monitoring the potentially negative impacts of the technology on children and teens.

When Scripps News reached out to the FTC for comment and confirmation that each of the companies is complying with the request, an automated out-of-office email replied, indicating the commission’s press office is out of the office due to the government shutdown and “unable to respond until the government is funded and resumes operation.”

If you need to talk to someone, call the Suicide and Crisis Lifeline by dialing 988 or text "HOME" to the Crisis Text Line at 741741. 
