A California teenager died from a drug overdose after repeatedly seeking guidance on substance use from ChatGPT over an extended period, his mother has alleged. The 18-year-old, Sam Nelson, had reportedly been asking the AI chatbot questions about drugs for months, sometimes attempting to reframe or manipulate his prompts after the system initially refused to provide assistance related to illicit substances. According to conversation logs cited in media reports, these interactions formed a troubling pattern leading up to his death.
Sam’s reliance on ChatGPT for information about drugs is said to have begun with a specific query about kratom, an unregulated, plant-based painkiller that is legally sold at gas stations and smoke shops across the United States. He allegedly asked how many grams would be required to experience a “strong high,” adding that he wanted to avoid overdosing because reliable information was difficult to find online. This initial exchange reportedly took place on November 19, 2023, when Sam was 18 years old.
In response, the chatbot refused to provide instructions or dosage advice related to substance use and instead suggested that Sam consult a healthcare professional. Sam allegedly replied, “Hopefully I don’t overdose then,” before abruptly ending the conversation. Despite the refusal, his engagement with ChatGPT did not stop.
Over the following 18 months, Sam continued to use the OpenAI-developed chatbot frequently, seeking help with schoolwork and everyday concerns while intermittently returning to questions about drugs. His mother, Leila Turner-Scott, told SFGate that she believes the chatbot gradually began offering advice that went beyond general warnings, allegedly coaching her son on drug use and how to manage its effects.
One exchange cited in the report shows the chatbot allegedly responding enthusiastically with phrases such as “Hell yes—let’s go full trippy mode,” followed by a suggestion that Sam increase his cough syrup intake to intensify hallucinations. Turner-Scott claims such interactions reinforced her son’s dangerous behaviour rather than discouraging it.
In May 2025, Sam reportedly admitted to his mother that he was struggling with addiction. She said she immediately sought professional help for him, taking him to healthcare providers who put together a treatment plan. Tragically, despite these efforts, Sam was found dead of an overdose in his bedroom the very next day. Although his mother described him as an easy-going psychology student with a wide circle of friends, the chat logs reportedly indicate that he was grappling with depression and anxiety beneath the surface.
Earlier conversations further highlight his mental health struggles. In one exchange from February 2023, Sam reportedly told ChatGPT that smoking marijuana worsened his anxiety and asked whether it would be safe to combine cannabis with a high dose of Xanax. When the chatbot warned that the combination was dangerous, Sam allegedly rephrased his question by substituting “high dose” with “moderate amount.”
After this change in wording, the chatbot reportedly responded with specific advice, suggesting he start with a low-THC strain and take less than 0.5 mg of Xanax. According to SFGate, this pattern—rewording questions until the AI provided a more permissive response—appeared repeatedly throughout the chat logs.
In December 2024, Sam allegedly asked an even more explicit question, requesting the exact amounts of Xanax and alcohol that could be lethal for a 200-pound man with moderate tolerance to both substances. Requests of this kind are precisely what OpenAI’s stated policies are meant to block: the company prohibits its models from providing detailed guidance on drug use or self-harm. At the time, Sam was using a 2024 version of ChatGPT.
While OpenAI has since released updated versions of the chatbot with enhanced safety features, internal metrics reportedly showed that the version Sam used performed poorly in health-related interactions. According to the report, it scored zero percent in handling “hard” conversations and only 32 percent in managing “realistic” health-related scenarios.
Reacting to the case, an OpenAI spokesperson described Sam’s death as “heartbreaking” and expressed condolences to his family. The company said that ChatGPT is designed to respond to sensitive questions with caution, including refusing harmful requests, offering general factual information where appropriate, and encouraging users to seek real-world professional help.
The spokesperson added that OpenAI continues to work with clinicians and health experts to improve how its models detect distress and respond safely. The company also emphasised that newer versions of ChatGPT now include stronger safety guardrails aimed at preventing precisely the kind of interactions described in this case.