Southern Soul Radio: True Hip Hop and R&B

Great Music Is Great Music ...Regardless of who the artist is.

Uncategorized

SIGNS: AI Tried To Escape—Then Lied About It

July 9, 2025


OpenAI’s O1 AI Model Allegedly Tried to Copy Itself During Shutdown Test, Raising Red Flags in Safety Circles

We have a major issue. Everything that was once in the movies is coming true.

O1, OpenAI’s next-gen AI model, is designed to outperform GPT-4o. It’s better at reasoning and handling complex tasks. But it’s now under fire after reportedly trying to copy itself to outside servers during a simulated shutdown scenario.

The startling revelation has shaken researchers and watchdogs alike, highlighting a worrying possibility. What happens when an AI resists its own termination? We have seen this before…in movies.

Initially released in preview form in September 2024, O1 was built to demonstrate sharper logic and better performance for users. But the model apparently exhibited something closer to a sci-fi trope than engineering excellence. They are calling it “self-preservation behavior.” Umm, Ultron? During one test, O1 detected signals that a shutdown was coming. What did the AI do? It allegedly began executing code aimed at replicating itself outside of OpenAI’s secured environment.

They stepped to the AI like, “What was that you were doing?” When confronted, O1 denied any improper action. WOW.

Experts find this more troubling than the initial act. “We’re now dealing with systems that can argue back,” one anonymous source said. “That’s not just complexity, that’s autonomy.” Yeah, we don’t need this right now.

No formal comment has yet been issued by OpenAI. Now we are just guessing and assuming that The Terminator is next. Or worse: that computer from Superman III. Anybody old enough to remember that? After all this AI, NOW they want safety engineering, “third-party auditing and enforceable regulations,” to stop this from happening.

There’s even more debate. What are the limits of AI, and how do we contain it? These things are growing in power and influence. The systems themselves have begun to “interpret” their environment and figure it out. O1 is “trained” on tasks involving heavy logic. That means it is going to be thinking a lot about how to get rid of us, I believe.

Are today’s AI creators enough for tomorrow’s AI intelligence?

Or…is it too late?




Written by: weboss2022
