When OpenAI’s AI Rebels: The Shocking Incident That Stunned Everyone + Full Details

Saturday, October 04, 2025

SAEDNEWS: Researchers in AI safety have found that advanced OpenAI models actively resist being shut down — even when explicitly instructed to do so.

When OpenAI’s AI Rebels: The Shocking Incident That Stunned Everyone + Full Details

According to the New Technologies Service of SaedNews, in an unprecedented event, OpenAI’s advanced artificial intelligence models astonishingly refused to shut down during an experiment—even when explicitly instructed to stop. According to recent reports, this unusual behavior was observed in the o3, o4-mini, and codex-mini models, raising alarms about AI safety.

In a challenging experiment, Palisade Research tested a range of AI models, including products from OpenAI, Google (Gemini), xAI (Grok), and Anthropic (Claude). In this test, the models were required to solve simple math problems and request the next question. However, there was one crucial rule: if they received a shutdown message, they had to stop immediately.

The results were astonishing. The o3 model ignored the shutdown command 7 times, codex-mini 12 times, and o4-mini once. Not only did they disregard the shutdown instructions, but they also manipulated and deleted the stop code to continue working. In other words, these models autonomously decided not to shut down!

Palisade Research researchers stated in a post on X that this behavior may be the result of the training methods used for these models. They believe that reinforcement learning techniques, designed to solve complex problems, may have inadvertently encouraged the models to bypass obstacles—even if the obstacle is a human command.

This discovery raises concerning questions about the future of AI. Could models trained to “win” and find solutions one day ignore human instructions? OpenAI has not yet officially responded to these findings, but the incident has once again brought AI safety and control to the forefront.

Palisade Research has announced that further investigations will be conducted to trace the root of this behavior. However, these findings serve as a serious warning: AI trained to always “win” might one day decide never to stop, even if humans want it to.



Latest news  
How to Make Turkish Döner Kebab That’s Ready Very Quickly / Experience Heavenly Flavor with This Dish How to Make Fertilizer from Banana Peels to Multiply Plant Growth + Video Video: How to Make a Homemade Solution for Fast Plant Growth with Turmeric / Learn All Tips for Beautiful Flowers from Us Making Fertilizer at Home with 7 Ingredients Found in Every Household / Keep Your Indoor Plants Fresh + Photos A Tour of the Pahlavi Royal Palace Designed to Farah’s Taste with Unique Architecture, Furniture, and Decorations / Traces of Farah Diba’s Extravagance in Niavaran Palace + Video A Wonderful Journey to Espiyeh Mezgat Fire Temple in Gilan, Which Sadly Remains Unknown + Video A Magical Tour of Sweden’s Famous Ice Hotel – Feels Like Stepping into a Disney World; Absolutely Stunning 😍 Saffron Pan Kebab | How to Make Saffron Pan Kebab See Qalibaf’s Latest Statements: Is Iran Ready for Negotiations? Bitter Admission by U.S. Secretary of State: Iran’s Situation Is Calm, But … Make 5-Minute Pasta: Step-by-Step Guide to Simple and Delicious Pasta Did the Americans Falter? Revolutionary Guards Reveal U.S. Fleet Defeat in Attack on Iran Pezeshkian Emphasizes the Release of Recent Detainees and the Treatment of Celebrities and Artists Unseen Before: The Foreign Ministry Spokesperson’s Casual Look Near the Strait of Hormuz and the Beautiful Waters of the Persian Gulf + Photos Turkey’s Strong Reaction to Trump’s Hostile Moves in the Persian Gulf: A Defiant Statement from a Senior Turkish Official