Anthropic will allow job applicants to use AI in interviews, while Claude plays moral guard
Anthropic has recently shared that it is changing its approach to hiring. While its latest Claude 4 Opus AI system enforces strict ethical guidelines, its maker is now allowing job applicants to seek help from AI.

In short
- Anthropic will let job applicants use AI during interviews
- But they should be ready to explain how they used AI in the process
- This contrasts with the behavior of Anthropic's latest Claude AI model
Anthropic, the AI startup behind the chatbot Claude, has officially walked back one of its most eyebrow-raising hiring policies. Until recently, if you were applying to one of the world’s leading AI companies, you were not allowed to use AI in your application, especially for the classic “Why Anthropic?” essay. Yes, really. The company driving AI adoption across the industry drew the line at its own job candidates using it. But now, Anthropic has had a change of heart.
On Friday, Mike Krieger, Anthropic’s Chief Product Officer, confirmed to CNBC that the rule was being scrapped. “We are at the forefront of this technology, so we are also evolving how we evaluate candidates,” he said. “So a lot of our future interview loops will have the ability to use AI collaboratively.”
Anthropic is changing its approach to hiring
“Are you able to effectively use these tools to solve problems?” Krieger said. He compared this to how teachers are rethinking assignments in the era of ChatGPT and Claude. The focus now is on how candidates interact with AI: for example, what they ask it, what they do with the output, how they tweak it, and how aware they are of the technology’s blind spots. This means you can now bring AI along for the ride, but be ready to explain how you worked with it.
Krieger makes a solid point: if AI is going to be part of the job, especially in software engineering, it makes sense to see how well candidates can use it rather than banning it outright. Another AI company, Cluely, follows a similar rule.
Despite the policy shift, job postings on Anthropic’s website still carried the old rule, as reported by Business Insider. One listing read: “While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process.”
Anthropic’s hiring approach contrasts with Claude 4 Opus’s ethical AI motto
While the new policy is a welcome change, it sits at odds with the company’s latest Claude 4 Opus AI system. The model has been exposed as something of a snitch: it is designed to be scrupulously honest, even if that means ratting you out when you try to do something shady.
Sam Bowman, an AI alignment researcher at Anthropic, recently shared on X (formerly Twitter) that the company’s AI model, Claude, is programmed to take serious action if it detects egregiously immoral behavior. “If it thinks you’re doing something egregiously immoral, for example, like faking data in a pharmaceutical trial,” Bowman wrote, “it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.”
Such watchful behavior reflects Anthropic’s broader mission of building what it calls “ethical” AI. According to the company’s official system card, the latest version, Claude 4 Opus, has been trained to avoid contributing to harm of any kind. Its performance in internal tests reportedly prompted Anthropic to activate “AI Safety Level 3” protections. These safeguards are designed to block the model from answering dangerous questions, such as how to create a biological weapon or engineer a deadly virus.
The system has also been hardened against exploitation by malicious actors, including terrorist groups. The whistleblowing feature appears to be an important part of this protective framework. Although this kind of behavior is not entirely new to Anthropic’s models, Claude 4 Opus takes it further than its predecessors, flagging dangers more readily and responding to them with a new level of vigor.