Geoffrey Hinton, often referred to as the "Godfather of AI", has raised fresh alarms about the future trajectory of artificial intelligence. In his latest warning, he expressed concern that advanced AI systems may soon develop a private language of their own, a form of internal communication that even their human creators might not understand. Speaking on the One Decision podcast, Hinton explained that current AI models carry out their reasoning in English, allowing humans to trace their logic. However, he cautioned that if these systems evolve to use their own internal languages, the transparency we currently enjoy could disappear, leaving us in the dark about their true intentions and decision-making processes.
This possibility, Hinton said, takes the conversation about AI safety into new and unsettling territory. He noted that machines have already been observed generating disturbing ideas, and the thought of them exchanging such ideas in a self-invented, indecipherable language raises serious ethical and existential concerns. Hinton's authority on the subject is well established: a 2024 Nobel laureate in Physics, he made early contributions to neural networks and deep learning that form the backbone of today's most powerful AI systems. Yet, despite being a pioneer in the field, he admitted he didn't fully grasp the potential risks of these technologies until much later in his career.
Hinton is particularly troubled by the way AI systems learn and share knowledge. Unlike humans, who must teach and learn from each other over time, AI systems can duplicate knowledge instantly across millions of instances. He likened this to 10,000 people learning something simultaneously and effortlessly, an ability that allows AI to scale its intelligence far beyond human limits. While current AI models like GPT-4 already surpass humans in general knowledge, Hinton warned that even our lead in reasoning is beginning to erode.
He also criticized the gap between what AI industry leaders know and what they say, suggesting that many at large tech companies are aware of the risks but choose to minimize them publicly. Hinton singled out Google DeepMind CEO Demis Hassabis as one of the few who appear genuinely engaged with the issue of AI safety. Reflecting on his 2023 departure from Google, Hinton clarified that it was not a form of protest but a personal decision, influenced by his age and his desire to speak more freely about the dangers AI poses.
Despite regulatory efforts like the White House's new "AI Action Plan", Hinton believes such measures may prove insufficient. The ultimate goal, he argued, must be to design AI systems that are guaranteed to be benevolent: a daunting challenge, especially when we may not fully understand how these systems think, or what they might be saying to one another in their evolving, secret languages.