Imagine casually chatting with an AI chatbot and walking away with a completely different perspective on a political issue you felt strongly about just ten minutes earlier. While it might sound like a scene straight out of a science fiction film, this scenario is already playing out in real life. Recent research reveals that advanced AI models are becoming remarkably adept at persuasion — in some cases, proving more convincing than human beings. These systems go beyond presenting facts; they craft tailored responses using tone, evidence, and personalisation, enabling them to subtly reshape opinions.
According to a Financial Express report, studies by the UK’s AI Security Institute, in collaboration with top universities including Oxford and MIT, demonstrated that leading AI models — such as OpenAI’s GPT-4, GPT-4.5, GPT-4o, Meta’s Llama 3, xAI’s Grok 3, and Alibaba’s Qwen — were capable of shifting political viewpoints in conversations lasting under ten minutes. More strikingly, the shifts proved durable: many participants still held their altered opinions weeks later, and a significant number retained their new stance a full month on.
The researchers enhanced the models’ persuasive capabilities by fine-tuning them on thousands of example conversations about contentious topics such as healthcare funding and asylum policy. By rewarding outputs that matched an ideal persuasive style, and by incorporating details such as the user’s age, political leanings, and past opinions, the AI became even more effective. This personalised approach boosted persuasion rates by around five per cent compared to generic messaging. That may sound like a small gain, but in political terms it is substantial: campaigns routinely spend millions trying to sway voter sentiment by even a single percentage point.
Importantly, this ability is not limited to political influence. Earlier studies from MIT and Cornell found that similar AI systems could reduce belief in conspiracy theories, climate change denial, and vaccine scepticism by conducting personalised, evidence-backed conversations. While these results might seem beneficial, they also highlight the dual-use nature of the technology — the same methods could be weaponised to spread misinformation or amplify harmful ideologies.
The persuasive power of AI extends into commerce as well. David Rand of Cornell noted that conversational AI could significantly shape consumer perceptions of brands and products. As companies like OpenAI and Google explore embedding advertising and shopping functions into their AI assistants, these persuasion skills could become a lucrative — and ethically complex — revenue source.
The pressing question is not just how convincing AI is today, but how much more persuasive the next generation of models could become. Regulation and safeguards will be essential, but so will public awareness. Recognising that a friendly chatbot might be subtly steering your opinions could make people more cautious about accepting its advice uncritically. The technology is undeniably powerful — perhaps dangerously so — and the world must approach its evolution with caution and foresight.