AI startups OpenAI and Anthropic have signed agreements with the United States government to research, test and evaluate their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.
The first-of-their-kind agreements come at a time when companies are facing regulatory scrutiny over the safe and ethical use of AI technologies.
California legislators are set to vote this week on a bill that would broadly regulate the development and use of AI in the state.
“Safe, trustworthy AI is critical to the positive impact of technology. Our collaboration with the US AI Safety Institute leverages their extensive expertise to rigorously test our models before widespread deployment,” said Jack Clark, co-founder and head of policy at Anthropic, which is backed by Amazon and Alphabet.
Under this agreement, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic before and after their public release.
These agreements will also enable collaborative research to evaluate the capabilities of AI models and the risks associated with them.
“We believe the Institute has an important role to play in defining U.S. leadership in responsibly developing artificial intelligence, and hope our work together will create a framework that the rest of the world can build upon,” said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.
“These agreements are just the beginning, but they mark an important milestone as we work to help responsibly advance the future of AI,” said Elizabeth Kelly, director of the US AI Safety Institute.
The institute, which is part of the US Department of Commerce’s National Institute of Standards and Technology (NIST), will also collaborate with the UK’s AI Safety Institute and provide feedback to companies on potential security improvements.
The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden’s administration to evaluate known and emerging risks of artificial intelligence models.