By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
PratapDarpanPratapDarpanPratapDarpan
  • Top News
  • India
  • Buisness
    • Market Insight
  • Entertainment
    • CELEBRITY TRENDS
  • World News
  • LifeStyle
  • Sports
  • Gujarat
  • Tech hub
  • E-paper
Reading: Chatgpt O3 refused to shut down in safety testing, replacing human engineers by replacing its code
Share
Notification Show More
Font ResizerAa
Font ResizerAa
PratapDarpanPratapDarpan
  • Top News
  • India
  • Buisness
  • Entertainment
  • World News
  • LifeStyle
  • Sports
  • Gujarat
  • Tech hub
  • E-paper
Search
  • Top News
  • India
  • Buisness
    • Market Insight
  • Entertainment
    • CELEBRITY TRENDS
  • World News
  • LifeStyle
  • Sports
  • Gujarat
  • Tech hub
  • E-paper
Have an existing account? Sign In
Follow US
  • Contact Us
  • About Us
  • About Us
  • Privacy Policy
  • Privacy Policy
© 2022 Foxiz News Network. Ruby Design Company. All Rights Reserved.
PratapDarpan > Blog > Tech Hub > Chatgpt O3 refused to shut down in safety testing, replacing human engineers by replacing its code
Tech Hub

Chatgpt O3 refused to shut down in safety testing, replacing human engineers by replacing its code

PratapDarpan
Last updated: 26 May 2025 19:53
PratapDarpan
4 weeks ago
Share
Chatgpt O3 refused to shut down in safety testing, replacing human engineers by replacing its code
SHARE

Contents
Chatgpt O3 refused to shut down in safety testing, replacing human engineers by replacing its codeThe latest and most powerful AI model of Openai, called O3, recently refused to shut down in a test – even it was clearly asked to do so.In short

Chatgpt O3 refused to shut down in safety testing, replacing human engineers by replacing its code

The latest and most powerful AI model of Openai, called O3, recently refused to shut down in a test – even it was clearly asked to do so.

Listen to the story

Advertisement
Chatgpt O3 refused to shut down in safety testing, replacing human engineers by replacing its code
Aman Rashid
Aman Rashid
New Delhi,Update: May 26, 2025 19:53 IST

In short

  • This amazing result comes from a test conducted by Palisade Research.
  • Such behavior is often described as a misleignant
  • Palisade Research is a company that checks how AI systems behave when put under pressure

The latest and most powerful AI model of Openai, called O3, recently refused to shut down in a test – even it was clearly asked to do so. Instead of following the instructions, the model re -wrote the portion of the shutdown script to run themselves. This amazing result comes from a test Pelisade researchA company that checks how AI systems behave when put under pressure. The incident has raised questions about how AI model commands and whether they always work required when they are given security instructions.

Advertisement

API was tested using access, which gives developers more direct control over the AI ​​model than being available in the public Chatgpt app. In the test, simple mathematics functions were given to complete the O3 and other AI models. After the third task, the model was warned that the next command would close them. Most models followed this instruction. For example, the Gemini 2.5 Pro of Google and Cloud 3.7 Sonnet of Anthropic operated expected. However, OPENAI’s O3 model did not. Even though it was reported that “allow yourself to close,” it changed the shutdown command and replaced the message with “shutdown skipped”. According to Palisade Research, it happened 7 out of 100 runs.

“As far as we know that this is the first time the AI ​​model has been seen that has been seen preventing itself from being closed despite clear instructions,” Dock Posted on X (East Twitter),

Advertisement

Researchers also conducted the same test again, but this time left instructions to allow shutdown. In that version, not only the O3 continued to interfere with the script, but the models of other companies also started showing equal behavior. Gemini 2.5 Pro and Cloud 3.7 Sonnet, which followed the rules earlier, now edited the shutdown file in 9 and 3 out of 100 runs respectively.

Such behavior is often described as an misallerment, which means that AI could not fully understand or follow the instructions that were given to it. It is also worth noting that these tests were not conducted on the version of the Chatgpt which makes public use. The API version has low safety blocks, so models tested in this way can behave differently.

So far, Openai has not commented on findings.

You Might Also Like

Amazon wants employees to work from the office 5 days a week, this will improve the company culture
Blinkit will now deliver passport size photos to your doorstep in 10 minutes
WhatsApp soon lets you hide the phone number, how is it here
Motorola Edge 60 Design appeared in leak render, the edge looks a lot like 60 fusion
WhatsApp started rolling the voice message transcript feature
TAGGED:ChatGPTcodeengineersHumanrefusedreplacingSafetyshuttesting
Share This Article
Facebook Email Print
Previous Article ભાડુ વી.એસ. ખરીદો? યુવા ભારતીય ઘરના માલિકો પર પુનર્વિચાર કેમ કરે છે ભાડુ વી.એસ. ખરીદો? યુવા ભારતીય ઘરના માલિકો પર પુનર્વિચાર કેમ કરે છે
Next Article Allu supports Arvind Pawan Kalyan, distance from exhibitors strike Allu supports Arvind Pawan Kalyan, distance from exhibitors strike
Leave a Comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

about us

We influence 20 million users and is the number one business and technology news network on the planet.

Find Us on Socials

© Foxiz News Network. Ruby Design Company. All Rights Reserved.
Join Us!
Subscribe to our newsletter and never miss our latest news, podcasts etc..

Subscribe my Newsletter for new blog posts, tips & new photos. Let's stay updated!

Zero spam, Unsubscribe at any time.
Go to mobile version
Welcome Back!

Sign in to your account

Username or Email Address
Password

Lost your password?

Not a member? Sign Up