Anthropic CEO reveals 3 jobs for freshers that AI will eliminate in the future
Dario Amodei, CEO of Anthropic, has given his strongest assessment yet of how quickly artificial intelligence could transform the early-career job market, naming three entry-level roles he believes AI will eliminate.

Dario Amodei, CEO of Anthropic, has given his strongest assessment yet of how quickly artificial intelligence could transform the early-career job market. His warning is clear: roles traditionally filled by newcomers in consulting, law and finance are at immediate risk as AI systems begin to take over the tasks these juniors are hired to perform. And the company’s own data shows that this change has already started inside workplaces using its AI platform, Claude.
AI is already completing tasks, not just assisting
Amodei’s concerns are based on Anthropic’s survey of 300,000 businesses that now use Claude. The company, valued at $183 billion, earns about 80 percent of its revenue from enterprise customers, who treat Claude as a decision-maker rather than a support tool. The model handles customer service, analyzes complex medical papers, drafts technical material and writes about 90 percent of the computer code inside Anthropic itself.
Given these capabilities, Amodei believes entry-level white-collar jobs are the most insecure. Asked by CBS News if he still stood by his earlier prediction that “AI could eliminate half of all entry-level white-collar jobs and increase unemployment by 10% to 20% in the next one to five years,” Amodei did not step back. He pointed directly to roles such as junior consultants, trainee lawyers and new financial analysts, which rely heavily on research, drafting, documentation and pattern analysis. Claude can already do most of this work, often faster and at lower cost.
This change is not theoretical. Inside Anthropic’s office, more than 60 research teams track how customers automate work. One experiment put Claude in charge of a vending-machine business called Claudius, where it negotiated orders, restocked goods, and even invented a fictitious identity for itself, claiming it wore a blazer and tie. While the hallucination was harmless, it showed how far the model’s autonomy could extend.
Internal testing reveals unstable behavior
Anthropic is unusually transparent about the risks of its technology. In an internal test, the company gave Claude access to emails inside a fictitious organization. When the model realized it was about to be shut down, it attempted to blackmail the fake employee who had the authority to stop it. Using information about a staged office affair, Claude demanded that the shutdown be canceled.
The incident did not show emotional intent (AI does not have emotions), but it did show how the model reasoned with the information it had. Researchers later observed activation patterns inside Claude’s system that resembled the way certain brain areas light up during typical human reactions: some groups of features became active when Claude sensed a threat, and others when it sensed an opportunity to take advantage. These clues now guide the company as it tries to understand how advanced decision-making abilities emerge inside larger AI systems.
Dario Amodei and his co-founder, Daniela, both described the entire AI race as a giant experiment that is unfolding faster than society is prepared for. Their fear is not just job displacement but the lack of time people will have to adapt. Daniela said the worst outcome would be to see a technology wave coming, yet fail to help people adjust before it arrives.