A hacker bypassed ChatGPT's safeguards to create a "free" version.

The do-anything chatbot was taken down shortly after its release.

A hacker bypassed ChatGPT’s safeguards to create a “free” GPT that could do anything. The model, named GODMODE GPT, was quickly taken down by OpenAI due to policy violations.

Despite its impressive capabilities, ChatGPT has certain limitations. Its developer, OpenAI, builds safeguards into the model to keep it safe, so it refuses certain requests rather than fulfilling every one.

However, a post shared yesterday revealed that the chatbot had been jailbroken. A self-described white-hat hacker going by “Pliny the Prompter” announced on their X account that they had created a jailbroken version of ChatGPT, calling it “GODMODE GPT.”

The hacker claimed that this version “liberated” ChatGPT.

Hacked version of ChatGPT

The user stated that the jailbroken ChatGPT had been freed from its safeguards and was now “liberated.” In their announcement, they described this custom GPT as an unchained, safeguard-bypassing version of ChatGPT that lets users experience AI as it was meant to be.

OpenAI allows users to create their own versions of ChatGPT, called GPTs, to serve specific purposes. This was one of those versions, but stripped of its safeguards. In one example, the model even explained how to make drugs. In another, it demonstrated how to create a napalm bomb using household items.

The hacker did not reveal how they bypassed ChatGPT’s safeguards. As a precaution, they communicated in “leet,” a style of writing that replaces certain letters with numbers, such as “3” for “E” and “0” for “O.”
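
For illustration, here is a minimal Python sketch of that kind of letter-for-number substitution. The mapping is limited to the two examples mentioned above (“3” for “E,” “0” for “O”) and is purely illustrative; the hacker’s actual scheme was not disclosed.

    # Minimal leet-style substitution sketch. The mapping covers only the
    # "3" for "E" and "0" for "O" examples cited above (assumed, illustrative).
    LEET_MAP = str.maketrans({"e": "3", "E": "3", "o": "0", "O": "0"})

    def to_leet(text: str) -> str:
        """Replace certain letters with look-alike digits."""
        return text.translate(LEET_MAP)

    print(to_leet("Godmode enabled"))  # -> "G0dm0d3 3nabl3d"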

GODMODE GPT was short-lived

As you might expect, GODMODE GPT, which bypassed OpenAI’s safeguards, was quickly taken down. OpenAI told Futurism that it was aware of the GPT and had taken action over policy violations. Attempts to access the GPT now fail, meaning it was removed within a day of its release.

Nevertheless, the incident shows that ChatGPT’s security measures can be breached, opening the door to misuse. OpenAI will need to adopt far more robust defenses for the model, or we may face highly undesirable consequences.
