Anthropic Chief Scientist Says AI Will Replace Most White-Collar Jobs in 3 Years, Outperform Students and Raise Control Risks
Anthropic's chief scientist and co-founder, Jared Kaplan, says AI is advancing so fast that it will soon outperform students, eliminate white-collar jobs, and push humanity into uncharted territory in terms of how much control we actually retain. And he suggests that all of this could happen by the end of this decade.

Artificial intelligence is expected to disrupt jobs, and it may happen sooner than many people anticipate. This is the warning from Jared Kaplan, Anthropic's chief scientist and co-founder, who says AI systems will be able to perform "most white-collar work" within just two to three years. In a recent interview with The Guardian, Kaplan described the rapid acceleration of AI and a future in which machines increasingly outperform human professionals and students, and eventually start designing their own successors. That moment, according to him, could be one of the biggest risks humanity faces with technology.
Kaplan’s predictions about AI taking over tasks come at a time when the global race for artificial general intelligence is moving at breakneck speed. Companies like Anthropic, OpenAI, Google DeepMind, and Meta are aggressively pushing toward AGI, systems capable of outperforming humans across a wide range of tasks. According to Kaplan, the pace of improvement is now so fast that even younger generations will soon find themselves competing with machines. “My six-year-old son could never be better than AI at academic tasks like writing an essay or taking a math test,” he told The Guardian, giving a personal example.
The logic behind Kaplan's warning is supported by recent research showing that the capabilities of state-of-the-art AI models are doubling rapidly, with the latest systems already displaying striking levels of autonomy. For example, Anthropic's own models can now create software agents, tackle complex programming tasks for hours at a time, and generate sophisticated chains of reasoning without human input.
But Kaplan's predictions go far beyond incremental skill gains. He warned that the real turning point could come between 2027 and 2030, when AI systems begin to play a direct role in training and improving their successors. He says this kind of self-improving loop represents both an extraordinary opportunity and a profound threat. “If you imagine you’re creating this process where you have an AI that is smarter than you, or just as smart as you, it’s creating an AI that is a lot smarter. It’s going to involve that AI to help make the AI smarter than it is. It sounds like a kind of scary process. You don’t know where you end up.”
As AI becomes more capable and autonomous, Kaplan identifies two key risks. The first is loss of control – not fully understanding what highly advanced systems are doing, whether they will remain aligned with human interests, or whether they will continue to respect human agency. “Are AI good for humanity? Will they be harmless? Do they understand people? Will they allow people to continue to have agency over their lives and around the world?” he asks.
The second risk is misuse. Kaplan says that if powerful AI systems fall into the wrong hands, the consequences could be dire. “You can imagine someone making this decision: ‘I want this AI to just be my slave. I want it to enforce my will.’ It is also very important to prevent power grab, misuse of technology.”
