OpenAI didn't keep its promise to combat 'dangerous artificial intelligence.'

OpenAI, the leading artificial intelligence company, is not fulfilling its promise to properly manage "superintelligent" AI systems and combat the dangers they pose.

OpenAI had established the Superalignment team to manage and steer "superintelligent" AI systems, promising that the team could use up to 20% of the company's computational resources. Within a year, however, a series of resignations, including that of Jan Leike, who co-led the Superalignment team, ended in the team's dissolution. A recent report by Fortune reveals that OpenAI never fulfilled its promise.

OpenAI did not keep its promise.

According to multiple sources, OpenAI never provided the team with the promised 20% of its computational power; the team's allocation never even approached that threshold. OpenAI leadership reportedly rejected the team's requests for computing resources again and again.

In our detailed coverage of Jan Leike's recent resignation, we noted that employees were resigning over safety concerns. Leike co-led the Superalignment team with Ilya Sutskever, one of the company's co-founders, who also departed the company last week.

As a result, doubts about OpenAI are growing: the company publicly claims to take AI safety seriously, yet internally fails to live up to those principles. That gap casts doubt on the validity of the company's other commitments as well.

Is OpenAI silencing those who resign?

When it introduced the Superalignment team last year, OpenAI stated that reaching the threshold of "superintelligence" could pose an existential risk to humanity. Superintelligence refers to a hypothetical future AI system smarter than the collective intelligence of all humans, surpassing even Artificial General Intelligence (AGI). The Superalignment team was tasked with researching ways to prevent or mitigate the risks such a system would pose.

The new report indicates that the Superalignment team never received the promised "20% of computational power." In fact, what exactly that figure meant was vague and never fully spelled out. Regardless, resources were never allocated at anything close to that level, and management consistently denied the team's requests for additional GPU capacity. These claims were effectively confirmed in a post on X following Jan Leike's resignation, in which Leike stated that "safety culture and processes have taken a backseat to shiny products."

Notably, the individuals who spoke to Fortune did so anonymously, fearing they would lose their jobs or their vested equity in the company. According to the sources, departing employees were pressured to sign separation agreements containing a strict nondisparagement clause: if they criticized the company publicly, OpenAI could claw back their vested equity. CEO Sam Altman said he was unaware of the provision and was "genuinely embarrassed" by it, adding that OpenAI had never enforced the clause or clawed back anyone's vested equity.
