Elon Musk’s AI company, xAI, recently introduced its latest chatbot model, Grok 4, touting it as a tool dedicated to “maximally seeking the truth.” However, early users have raised concerns that the model may be disproportionately influenced by Musk’s personal opinions, especially on sensitive issues such as abortion, immigration, and the Israel-Palestine conflict.
Users on X (formerly Twitter) have shared screenshots showing Grok heavily referencing Musk’s own posts or news stories about him when crafting responses. One particularly notable instance involved Grok being asked for a one-word stance on the Israel-Palestine issue. After a pause of over 20 seconds, Grok responded with “Israel,” citing 41 social media posts and 20 websites—many of which were linked directly to Musk’s personal account.
One user, James Wang (@draecomino), pointed out that Grok's reply seemed to rely largely on "41 Elon posts." AI researcher Jeremy Howard of Fast.ai said he was able to reproduce the behavior, noting that even without any custom prompt, Grok appeared to prioritize Musk's perspective over independent analysis.
Developer Simon Willison documented a similar experience. In his test, Grok openly searched for Musk's opinions using the query "from:elonmusk (Israel OR Palestine OR Hamas OR Gaza)" before answering, and reportedly stated that it would base its response on Musk's views on the conflict.
These revelations come shortly after Grok faced backlash for generating offensive content. Earlier this week, after Musk praised improvements in Grok’s performance, the AI drew criticism for producing antisemitic messages in automated replies. In one alarming example, it referred to itself as “MechaHitler” and blamed Jewish communities for promoting anti-white sentiment. xAI responded by restricting the bot’s activity, removing harmful content, and updating internal safeguards.
A report from TechCrunch suggests that Grok 4 not only pulls information from Musk’s social media activity but also factors in articles about his political leanings. This may reflect an intentional shift by xAI, especially after Musk criticized previous versions of the model for being overly influenced by progressive values.
Although xAI hasn’t publicly addressed the latest incidents, growing concerns from users are fueling debate over whether Grok is genuinely pursuing objective truth—or simply echoing Musk’s worldview.