Sam Altman reveals the real reason why OpenAI partnered with the US military after Trump banned Anthropic


OpenAI officials have provided more details about the AI startup’s contract with the U.S. Department of Defense after facing opposition online. The Sam Altman-led startup has clarified that it will not allow any unethical use of its AI models. Here are the details.

OpenAI rushed to secure a deal after US President Donald Trump ended Anthropic’s AI deal with the US government. (Photos: Reuters/White House)

OpenAI reached a deal with the US Department of Defense for the use of AI just hours after US President Donald Trump ended Anthropic’s deals with the government. OpenAI’s decision to replace Anthropic was criticized online. Following the backlash, Sam Altman and other executives have shared more details about how the contract will work.

On X, OpenAI CEO Sam Altman admitted that the deal was rushed and that the optics for the company “don’t look good”. However, according to Altman, the AI startup wanted to defuse the situation between the US military and the AI industry. He wrote, “If we are right and it reduces tensions between the DoW and industry, we will look brilliant.”


However, Altman acknowledged that the AI startup could face further criticism if things do not go as planned. “If not, we will continue to be considered rash and negligent,” he said.

The Dario Amodei-led Anthropic was labeled a supply chain risk by the US government after it refused to accede to the Pentagon’s demands for unrestricted AI use. Removing Anthropic’s cloud from the classified network will not happen overnight; there will be a six-month transition period.

OpenAI points out 3 red lines on military AI use

The AI startup shared a blog post giving more details. OpenAI claimed that in addition to Anthropic’s red lines against using AI for large-scale domestic surveillance and autonomous weapons systems, the company also mandated that its AI models should not be used for “high-risk automated decisions” (e.g., systems like “social credit”).

The Sam Altman-led AI firm says its models will be deployed via the cloud. According to Katrina Mulligan, OpenAI’s head of national security partnerships, this ensures that the AI is not used for autonomous weapons. “By limiting our deployment to cloud APIs, we can ensure that our models cannot be directly integrated into weapon systems, sensors, or other operational hardware,” she wrote on LinkedIn.

OpenAI claims that deploying its AI models in the cloud, behind its own security stack, allows for a “multilayered approach” to enforcement rather than relying on contract clauses alone.

Sam Altman says OpenAI will not allow unconstitutional uses of AI

OpenAI’s contract with the US Department of Defense allows the use of AI “for all lawful purposes”. When asked whether OpenAI would allow its models to be used for any unconstitutional order, Sam Altman said, “If we believed it was unconstitutional, we would not follow it. The Constitution is more important than any job, or staying out of jail, or whatever.”

The OpenAI website also states that the contract can be terminated by the Pentagon if any of its terms are violated. The company adds, “We do not expect this to happen.”

