Sam Altman says OpenAI is paying over Rs 4 crore annually for high-stress role

OpenAI CEO Sam Altman has announced that the company is looking for a new Head of Preparedness and is ready to pay more than Rs 4.6 crore annually. Here’s everything we know about it.

OpenAI may be best known for creating ChatGPT, but behind the scenes, the company is now looking for someone to tackle the most uncomfortable side of artificial intelligence. In a recent post, CEO Sam Altman announced that OpenAI is hiring a Head of Preparedness. Here’s everything you need to know about it.

OpenAI is hiring for a stressful job and is willing to pay more than Rs 4 crore annually

Altman left no doubt about the nature of the work. Describing it as “stressful”, he warned that anyone who takes it up will be thrown straight into complex, high-stakes challenges. According to him, the Head of Preparedness role has become critical as AI models advance faster than ever, bringing not only new capabilities but also serious concerns that can no longer be treated as theoretical.

The position falls under OpenAI’s Safety Systems team and focuses on identifying, testing, and mitigating potential harms caused by advanced AI models. While AI tools like ChatGPT have become mainstream for everyday tasks such as writing emails, planning trips, or researching topics, OpenAI believes the risks are growing just as fast as the benefits.

Altman said 2025 has offered an early glimpse of some of these challenges. One area of concern is mental health. As AI systems become more conversational and emotionally responsive, some users have begun treating chatbots as a substitute for therapy. In some cases, this has reportedly worsened mental health issues, including reinforcing delusions or unhealthy patterns of thinking. OpenAI acknowledged these risks last year and said it was working with mental health professionals to improve how ChatGPT responds to users showing signs of distress, self-harm or psychosis.

Another emerging concern is cybersecurity. According to Altman, AI models are becoming skilled enough to identify serious vulnerabilities in computer systems. While this could help improve security, it also raises the risk of such capabilities being misused by malicious actors if not carefully controlled.

The role OpenAI is advertising sits right at the heart of these issues. The listing says the Head of Preparedness will be responsible for building threat models, running capability assessments, and developing mitigation strategies that can scale as AI systems grow more powerful. In simple terms, this person will be tasked with asking the uncomfortable questions about what could go wrong, and making sure OpenAI is prepared for it.

The role pays $555,000 annually, excluding equity, putting it among the highest-paying AI safety jobs in the industry. The compensation also reflects the pressures that come with the position. Altman described it as one of the most important roles at the company at a time when AI’s impact on society is growing rapidly.

The hiring push also comes at a sensitive moment for OpenAI. In the past year, the company has faced criticism from former employees who felt that safety was no longer given the attention it once was. In May 2024, Jan Leike, who co-led OpenAI’s superalignment team before it was dissolved, publicly resigned and accused the company of drifting from its original mission. He wrote that building AI systems smarter than humans carries an enormous responsibility, but that safety culture and processes had begun to take a back seat to product launches.

Around the same time, there were other departures. Former employee Daniel Kokotajlo said he resigned after losing confidence in OpenAI’s ability to act responsibly as it moved closer to artificial general intelligence, or AGI – a still-theoretical form of AI that matches human reasoning abilities. He later said that the team researching AGI safety had been significantly reduced due to resignations.

The Head of Preparedness role was previously held by Aleksander Madry, who moved to a different position within OpenAI in July 2024. Filling that gap now appears to be a priority for the company, especially as it balances rapid innovation with increased scrutiny from regulators, researchers and the public.

– ends
