Sam Altman is looking to appoint a head of preparedness at ChatGPT maker OpenAI, warning the role will be stressful
OpenAI CEO Sam Altman is looking for a Head of Preparedness to address AI risks and security. The role aims to prevent misuse of advanced AI systems amid rapid technological development.

OpenAI CEO Sam Altman is looking for a head of preparedness, a new role designed to probe increasingly capable artificial intelligence systems and figure out what could go wrong before it actually happens. Altman announced the opening in a post on social media over the weekend, signaling that the AI race is entering a new and slightly more troubling chapter. “We have a strong foundation for measuring incremental capabilities, but we are entering a world where we need a more nuanced understanding and measurement of how those capabilities can be misused, and how we can limit those negative aspects in both our products and the world, so we can all enjoy the tremendous benefits,” he wrote.

He added, “These questions are difficult and have no precedent; a lot of ideas that sound good have some real edge cases.” The message? OpenAI’s next big appointment requires strong nerves, a sharp technical mind, and possibly a very strong cup of coffee.
A difficult phase for AI
Altman’s comments come at a time when AI systems like ChatGPT and its successors are demonstrating increasingly sophisticated reasoning, coding and analytical skills, along with some troubling side effects. The OpenAI boss said new models are not only getting smarter but also starting to act in ways that humans can’t always predict.
He pointed to examples of AI systems discovering security vulnerabilities, manipulating results or influencing human behavior in subtle, unexpected ways. While these capabilities can be extremely useful, they also open the door to potential abuse.
Altman warned that traditional methods of testing and evaluating AI are no longer sufficient. As models become more autonomous, OpenAI believes there is a need for a deeper and more structured approach to monitoring, which measures not only what systems can do, but also what they might do if left unchecked.
What will the Head of Preparedness actually do?
In a blog post published shortly after Altman’s announcement, OpenAI described what could be one of the most lucrative (and pressure-filled) jobs in tech. The Head of Preparedness will lead the company’s internal framework to identify, evaluate and mitigate risks associated with advanced AI systems.
This means designing capability assessments, building threat models, and developing scalable security mechanisms to ensure that OpenAI’s technologies are not only powerful but also responsibly managed. The role will also involve coordinating efforts between research, engineering, policy and governance teams, essentially ensuring that every part of the organization keeps security top of mind.
The person selected for the role will help make key decisions on when and how new capabilities are released, balancing innovation with caution. As OpenAI says, this means being prepared to make tough calls in rapidly changing circumstances where there may be no clear right answer.
Altman isn’t downplaying the challenge. He described the role as requiring strong technical knowledge, risk-management experience, and the ability to make high-risk decisions “under uncertainty”. In other words, whoever takes the job will have to move forward amid ambiguity, and perhaps get comfortable knowing that they are responsible for keeping the next generation of AI systems from going off the rails.
Who should apply?
OpenAI says candidates with a background in AI security, cybersecurity, or threat modeling will be particularly well suited. The company is also looking for people who can work across multiple teams and disciplines, bridging the gap between technical research and broader governance.
The successful applicant will effectively become the internal custodian of ChatGPT and OpenAI’s future models, ensuring that as the systems become more capable, they remain aligned with human intent. It’s a position that sits somewhere between a chief risk officer and a futuristic security engineer, with the added complexity that the technology is evolving so rapidly that no one can fully predict where it’s headed.
For Altman, the decision to create such a role signals a more serious shift in the company’s approach to innovation. After years of pushing the boundaries and dazzling users with what AI can do, OpenAI is now equally focused on preparing for what it can do next.
Whether the new head of preparedness can actually navigate the risks remains to be seen, but one thing is clear: this is not a job for the faint of heart.