
OpenAI says US government will get first look at ChatGPT 5


OpenAI has teamed up with the US AI Safety Institute to provide early access to ChatGPT-5, a move aimed at ensuring the safety and reliability of AI models before release. The collaboration comes amid growing concerns over AI safety.


OpenAI has announced a partnership with the US AI Safety Institute. As part of this collaboration, OpenAI will provide the Institute with early access to its next foundation model, ChatGPT-5. According to OpenAI CEO Sam Altman, the partnership with the federal body aims to ensure that upcoming AI models are safe and reliable before public release.

Altman announced the collaboration in a recent post on X, in which he explained that “our team is working on an agreement with the US AI Safety Institute to provide them with early access to our next foundation model so we can work together to advance the science of AI evaluations.”


The US AI Safety Institute, housed within the National Institute of Standards and Technology (NIST), is tasked with developing guidelines and standards for AI measurement and policy, ensuring that safety protocols are robust and empirically validated.

The announcement of the partnership between OpenAI and the US AI Safety Institute comes amid growing concerns about AI safety and the direction of OpenAI’s priorities. In May of this year, OpenAI disbanded its Superalignment team, which was dedicated to ensuring that AI models align with human intentions and do not go ‘rogue’. Following the disbandment, several key team members left OpenAI, including Jan Leike and co-founder Ilya Sutskever. Leike later joined Anthropic, while Sutskever founded a new AI safety startup, Safe Superintelligence Inc.

Leike also expressed disappointment at OpenAI’s failure to deliver promised resources, particularly compute power, to support safety efforts, and criticized OpenAI’s leadership for prioritizing the launch of new products over safety. Responding to these concerns, Altman reiterated the company’s commitment to safety, saying that OpenAI is allocating at least 20 percent of its computing resources to safety efforts, a pledge the company made in July.

In the same announcement, Altman also said that OpenAI has removed non-disparagement clauses from employee contracts. The move is aimed at creating an environment where current and former employees feel free to express their concerns without fear of retribution. “This is important for any company, but especially for us, and is a vital part of our safety plan,” Altman wrote in his post.

Notably, OpenAI’s partnership with the US AI Safety Institute is not its first collaboration with government bodies on AI safety. Last year, both OpenAI and DeepMind agreed to share their AI models with the UK government, underscoring the growing trend of collaboration between AI developers and regulatory bodies.

Additionally, OpenAI has appointed retired US Army Gen. Paul M. Nakasone to its board of directors to focus on security and governance. In a blog post announcing his appointment, OpenAI wrote that “Nakasone’s appointment reflects OpenAI’s commitment to security, and underscores the growing importance of cybersecurity as the impact of AI technology continues to grow.”
