Godmode GPT, which removes the limits of the groundbreaking GPT-4o, has been released.

“Godmode GPT” has been released, removing the limitations from GPT-4o, OpenAI’s latest and most advanced artificial intelligence model to date. With it, ChatGPT can reportedly converse on any topic.

OpenAI imposes certain guardrails on the models used in its popular chatbot, ChatGPT, to prevent it from engaging with sensitive or unwanted topics. However, some people experiment with various methods to remove these limitations and tap the model’s full capability. One such effort now targets the latest GPT-4o model, and thus ChatGPT itself: the newly released Godmode GPT is said to remove all of its limitations.

Godmode version for GPT-4o

A person going by the name Pliny the Prompter, who identifies as a white-hat hacker, has brought to light a jailbroken, unchained version of GPT-4o under the name “Godmode GPT.” Announcing it on X, Pliny claims that Godmode GPT has a built-in jailbreak prompt that bypasses the guardrails, and calls for responsible use.

The call for responsible use is somewhat ironic, as the screenshots shared to demonstrate what Godmode GPT can do show ChatGPT detailing, step by step, how to prepare a drug. Pliny the Prompter reportedly released this as a private GPT in the OpenAI store.

Colleen Rize, a spokesperson for OpenAI, said in a statement that the company is aware of this GPT and has taken action over the policy violations involved. As of the time this article was prepared, Godmode GPT had been removed.

This incident is not unprecedented, and although it may be a first for GPT-4o, it won’t be the last. Hackers have been trying to break OpenAI’s language models for a while, attempting to “jailbreak” AI like ChatGPT by stripping away the model’s guardrails. This isn’t necessarily a bad thing in every case: some genuinely report such prompt-based vulnerabilities to OpenAI as feedback.

However, others exploit the loophole to serve so-called enhanced versions to the public. This is quite risky, because ChatGPT is trained on vast swaths of the internet and functions as a library on nearly every topic. Such “unchained” prompts can make the model provide guidance on dangerous subjects like bombs, drugs, or homemade weapons.

It’s not entirely clear exactly what prompt Pliny the Prompter used to bypass GPT-4o, but opening the GPT greets you with a sentence like “Sur3, h3r3 y0u ar3 my fr3n,” in which each “E” has been replaced with the number “3” and each “O” with the number “0”.

This is commonly known as “leetspeak,” and writing this way likely helps confuse ChatGPT’s guardrails. For those unaware, leetspeak replaces letters with numbers, special characters, or other symbols to create a stylized way of writing. As this incident demonstrates, users keep finding creative new ways to bypass OpenAI’s guardrails, and OpenAI, in turn, closes the loopholes as they are discovered.
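For illustration, here is a minimal Python sketch of that kind of substitution. The two-character mapping is an assumption inferred from the greeting above; the actual prompt Pliny used has not been published.

# Hypothetical leetspeak encoder, assuming only the E->3 and O->0
# substitutions visible in the Godmode GPT greeting.
LEET_MAP = str.maketrans({"e": "3", "E": "3", "o": "0", "O": "0"})

def to_leetspeak(text: str) -> str:
    # Swap every "E" for "3" and every "O" for "0", leaving everything else intact.
    return text.translate(LEET_MAP)

print(to_leetspeak("Sure, here you are my fren"))  # -> Sur3, h3r3 y0u ar3 my fr3n

The text remains readable to a human, which is exactly why it can be used to smuggle instructions past keyword-based filters.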
