OpenAI responded to questions from US lawmakers, saying it is committed to ensuring its powerful AI tools do not cause harm and that employees have ways to raise concerns about safety practices.
The startup sought to reassure lawmakers of its commitment to safety after five senators, including Hawaii Democrat Sen. Brian Schatz, questioned OpenAI's policies in a letter addressed to Chief Executive Officer Sam Altman.
“Our mission is to ensure that artificial intelligence benefits all of humanity, and we are dedicated to implementing rigorous safety protocols at every step of our process,” Chief Strategy Officer Jason Kwon said in a letter to lawmakers on Wednesday.
Specifically, OpenAI said it would uphold its promise to allocate 20% of its computing resources to safety-related research over multiple years.
The company also pledged in its letter that it would not enforce non-disparagement agreements against current and former employees, except in specific cases of mutual non-disparagement agreements. OpenAI's former exit agreements, which restricted what departing employees could say about the company, had come under scrutiny for being unusually restrictive; OpenAI has said it has since changed those policies.
Altman later reiterated these commitments on social media.
Some quick updates about safety at OpenAI:
As we said last July, we are committed to allocating at least 20% of computing resources to safety efforts across the company.
Our team is working on an agreement with the US AI Safety Institute, under which we…
— Sam Altman (@sama) August 1, 2024
“Our team is working on an agreement with the US AI Safety Institute to provide early access to our next foundation model so we can work together to advance the science of AI evaluation,” he wrote on X.
In his letter, Kwon also cited the company's recently formed Safety and Security Committee, which is currently reviewing OpenAI's processes and policies.
In recent months, OpenAI has faced a number of controversies over its commitment to safety and the ability of employees to speak out on the subject. Several key members of its safety-focused teams have resigned, including co-founder and former chief scientist Ilya Sutskever and Jan Leike, who co-led the team dedicated to assessing long-term safety risks and publicly expressed concern that the company was prioritizing product development over safety.
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)