The OpenAI Files: Ex-staff claim profit greed betraying AI safety


‘The OpenAI Files’ report, which assembles the voices of concerned ex-staff, claims the world’s most prominent AI lab is betraying safety for profit. The lab that began with a noble quest to ensure AI would serve all of humanity is now, they argue, teetering on the edge of becoming just another corporate giant, chasing immense profits while leaving safety and ethics in the dust.

At the heart of the complaint is a plan to rip up the original rulebook. The most important promise OpenAI made at its founding was a cap on the returns investors could earn. It was a legal guarantee that, if the company succeeded in building world-changing AI, the vast benefits would flow to humanity as a whole rather than to a handful of billionaires. That promise is now on the verge of being scrapped, to appease investors who demand unlimited returns.

For the people who built OpenAI, this pivot away from AI safety feels like a profound betrayal. “The non-profit mission was a promise to do the right thing when the stakes got high,” says former staff member Carroll Wainwright. “Now that the stakes are high, the non-profit structure is being abandoned, which means the promise was ultimately empty.”

Deepening crisis of trust

Many of these deeply worried voices point to one person: CEO Sam Altman. The concerns are not new. Reports suggest that even at his previous companies, senior colleagues tried to have him removed for what they called “deceptive and chaotic” behaviour.

That same mistrust followed him to OpenAI. Ilya Sutskever, a co-founder who worked alongside Altman for years before launching his own startup, reached a chilling conclusion: “I don’t think Sam is the person who should have the finger on the button when it comes to AGI.” He found Altman to be untruthful and chaos-inducing, a frightening combination in someone who could end up in control of our collective future.

Mira Murati, the former CTO, shared similar misgivings. “I don’t feel good about Sam taking us towards AGI,” she said. She described Altman as toxic: he would tell people what they wanted to hear, then tear them down when they stood in his path. It is a pattern of manipulation that Tasha McCauley, a former OpenAI board member, says should not be accepted when the stakes of AI safety are this high.

This crisis of trust has had real-world consequences. Insiders say the culture at OpenAI has shifted, with the crucial work of AI safety taking a backseat to releasing “shiny products”. Jan Leike, who led the team responsible for long-term safety, said they were “sailing against the wind,” struggling to get the resources they needed to do their vital research.

Another former employee, William Saunders, even gave a terrifying testimony to the US Senate, revealing that for long periods, security was so weak that hundreds of engineers could have stolen the company’s most advanced AI, including GPT-4.

A desperate plea to prioritise AI safety at OpenAI

Yet those who have walked away are not simply giving up. They have proposed a roadmap to pull OpenAI back from the brink, one final attempt to salvage the original vision.

They want the company to restore the nonprofit’s central role, giving it back the real power it once held, including genuine authority over safety decisions. They are calling for clear, honest leadership, starting with a new and thorough investigation into Sam Altman’s conduct.

They demand genuine, independent oversight, because OpenAI cannot be allowed to grade its own paper on AI safety. And they are pleading for a culture in which people can raise concerns without fearing for their jobs and savings, underpinned by strong whistleblower protections.

Finally, they are demanding that OpenAI stay faithful to its original financial promise: the profit caps must remain. The company’s success should serve the public good, not limitless private gain.

This isn’t just about the internal drama at a Silicon Valley company. OpenAI is building a technology that could reshape our world in ways we can barely imagine. The question its former employees are forcing us all to ask is a simple but profound one: Who do we trust to build our future?

As former board member Helen Toner warned from her own experience, “internal guardrails are fragile when money is on the line”.

Right now, the people who know OpenAI best are telling us those safety guardrails have all but broken.
