Sam Altman Warns Against Blind Trust in ChatGPT
Sam Altman, CEO of OpenAI, has cautioned users not to place blind trust in ChatGPT, despite its widespread popularity. On the first episode of OpenAI’s official podcast, Altman acknowledged that users often trust the chatbot more than they should, emphasizing that the technology can still “hallucinate” – generating false or misleading information.
Altman said, “It should be the tech that you don’t trust that much,” underlining that while ChatGPT is powerful, it is not always reliable. He stressed the importance of transparency and managing expectations: “We need to be honest about that.”
ChatGPT works by predicting the next word (more precisely, the next token) in a sequence, based on statistical patterns in its training data; it does not "understand" the world the way humans do. As a result, it can produce fluent, convincing responses that are factually wrong.
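The prediction mechanism can be illustrated with a toy bigram model, a deliberately simplified sketch: real GPT models use neural networks over learned token embeddings trained on vast corpora, not word-pair counts, but the core loop of "pick a likely next word, append, repeat" is the same idea.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real model trains on vast amounts of text.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

def generate(start, length=6):
    """Generate text by repeatedly predicting the next word."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

Even this toy model produces locally fluent sequences with no notion of truth behind them, which is the essence of the "hallucination" problem Altman describes: the output is statistically plausible, not verified.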
Despite this, millions use it daily—for writing, research, parenting advice, and more. Altman noted the risks of overreliance, especially when users accept its outputs without critical thinking.
He also discussed upcoming features such as persistent memory and ad-supported models, which could enhance personalization but also raise new concerns around privacy and data usage.
His remarks align with those of AI pioneer Geoffrey Hinton, who admitted that he sometimes trusts GPT-4 more than he should. In a CBS interview, Hinton tested GPT-4 with a simple riddle, which it failed. He said this illustrates the model's imperfections, adding that he hopes future versions such as GPT-5 will perform better.
Both Altman and Hinton agree: while AI can be incredibly helpful, it must be used thoughtfully. The message is clear: trust, but verify.