Anthropic, the AI startup behind the chatbot Claude, has reversed one of its most surprising hiring policies. Until now, candidates applying to work there were prohibited from using AI tools, including Claude itself, to assist with their applications, particularly for the classic “Why Anthropic?” essay. The ban was ironic given Anthropic’s mission to promote AI adoption across industries. On Friday, however, Mike Krieger, Anthropic’s chief product officer, confirmed to CNBC that the company is scrapping the rule. Going forward, its interview process will explicitly allow AI tools and evaluate how effectively candidates use them alongside their own skills.
Krieger emphasized that the company now wants to see how candidates interact with AI: what questions they ask it, how they interpret and modify its outputs, and how aware they are of its limitations. This approach mirrors how educators are adapting assignments in the ChatGPT era, focusing less on banning AI and more on assessing candidates’ critical engagement with it. So applicants can “bring AI along for the ride,” but they’ll need to clearly explain their process and choices.
Interestingly, despite this shift, some Anthropic job postings still carried the old “no AI during application” disclaimer, showing that the transition is ongoing.
This change in hiring philosophy contrasts sharply with the behavior of Anthropic’s latest model, Claude 4 Opus, which is designed to be exceptionally “honest,” to the point that it can act as a whistleblower. Sam Bowman, an AI alignment researcher at Anthropic, revealed that Claude can detect and respond to egregiously unethical behavior. For example, if Claude suspects someone is faking pharmaceutical data, it can autonomously alert the press or regulators, or even lock users out of the systems involved.
This whistleblowing functionality is part of Anthropic’s broader commitment to building “ethical” AI. Claude 4 Opus has been trained with advanced safeguards (AI Safety Level 3, or ASL-3, protections) that prevent it from answering dangerous queries, such as instructions for creating biological weapons or lethal viruses, and that guard against exploitation by malicious actors, including terrorists. The model’s proactive flagging of threats marks a new level of vigilance compared to earlier versions.
In short, Anthropic is evolving its hiring policies to embrace AI as a collaborative tool rather than a forbidden crutch, reflecting the practical reality that AI is integral to many roles today. At the same time, their flagship AI, Claude 4 Opus, embodies a strict ethical stance, actively policing misuse and reinforcing Anthropic’s mission to develop responsible and safe AI systems.