Sam Altman, the CEO of OpenAI, has warned users about the legal risks of discussing deeply personal or sensitive matters with ChatGPT. In a recent interview, Altman explained that conversations with AI models like ChatGPT do not currently enjoy the same legal protections as those with professionals such as therapists, doctors, or lawyers, which means anything a user types into ChatGPT could be used as evidence in court if requested. Many people, particularly young users, have turned to ChatGPT for emotional support or as a sounding board for life advice, but Altman emphasized that no confidentiality laws cover AI conversations.
Unlike interactions with licensed professionals, where confidentiality is guaranteed by law, AI chats remain in a legal grey area. Altman acknowledged that the AI industry has not yet matched the privacy safeguards of traditional professional services. This creates a significant risk, especially in legal proceedings where courts might demand access to stored conversations, which in some cases could include deleted chats.
Altman’s remarks come at a time when OpenAI is fighting a copyright lawsuit brought by The New York Times that has raised questions about data usage and user privacy. In that case, the plaintiffs have asked the court to require OpenAI to preserve all user conversations, including deleted ones, as potential evidence. OpenAI argues that the request is excessive and would set a dangerous precedent for future legal actions seeking user data from AI platforms.
Although OpenAI states that deleted user chats are purged from its systems within 30 days unless they must be retained for legal or security reasons, the company also acknowledges that its staff can access chats for model improvement and misuse detection. That access, while important for technical reasons, is a potential vulnerability for users who assume their chats are private. Unlike secure messaging apps such as WhatsApp, which offer end-to-end encryption, ChatGPT’s infrastructure does not yet provide that level of data protection.
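To make that contrast concrete, here is a minimal sketch of the end-to-end encryption idea: each party keeps a private key on its own device, the two sides derive a shared secret, and a relaying server only ever handles ciphertext. This is an illustrative toy built on the Python cryptography library, not a description of WhatsApp's actual Signal-protocol implementation or of any OpenAI system.

```python
# Toy end-to-end encryption sketch: the relay never holds a decryption key.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet
import base64

# Each endpoint generates its own key pair; private keys never leave the device.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides compute the same shared secret from their own private key and
# the other party's public key (Diffie-Hellman key agreement).
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Stretch the raw shared secret into a usable symmetric key.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"chat-session").derive(alice_shared)
fernet = Fernet(base64.urlsafe_b64encode(key))

# Only ciphertext crosses the wire: a server (or a subpoena served on it)
# that captures this blob cannot recover the message without an endpoint's
# private key.
ciphertext = fernet.encrypt(b"deeply personal message")
assert fernet.decrypt(ciphertext) == b"deeply personal message"
```

The design point is that the operator of the relay never possesses a key that can decrypt the traffic, so there is nothing readable to hand over; a service that stores chats in a form its own staff can read, by contrast, can be compelled to produce them.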
In an age where digital privacy is a growing concern, Altman’s statements underscore the need for new regulations that address the unique issues posed by AI. People are already using ChatGPT in ways that mimic human relationships, such as seeking mental health support or advice on personal challenges. However, without clear legal safeguards, these conversations remain exposed to scrutiny in ways that professional, confidential communications are not.
Altman stressed the urgency of establishing a privacy framework that treats AI chats with the same respect and protection as conversations with human professionals. Until that legal clarity exists, users are advised to be cautious about what they share with AI tools, especially personal or sensitive details. While AI can feel like a trusted friend, the law does not currently recognize it as one, and that gap could have serious consequences for users who assume their digital conversations are truly private.