With the firing of Sam Altman, the pause that a large group of experts requested six months ago was, in effect, implemented for five days. OpenAI’s huge success, owed in part to a non-profit governance structure that oozes goodness, was briefly put on hold, leading many to ask whether it is possible to work towards Artificial General Intelligence (AGI) and serve humanity at the same time. The OpenAI affair reveals an almost sectarian obsession with profit and progress, with Sam Altman, and with AGI.
“Our mission is to ensure that artificial general intelligence benefits all of humanity.”
OpenAI’s mission, which Sam Altman constantly preaches and refers to, sounds sensible and responsible, reminiscent of Google’s old motto: Don’t Be Evil. Such a responsible mission has certainly played a role in the huge success of OpenAI’s commercial arm, especially since the launch of ChatGPT last year.
The mission is part of the governance structure of the non-profit organisation OpenAI, which Elon Musk as an investor helped establish back in 2015, when he, Altman, and others worried that a commercial tech giant would be the first to reach AGI. But in 2019 a commercial subsidiary was established, with Sam Altman, a proud tech optimist, as its figurehead. Elon Musk had left OpenAI the year before.
OpenAI is governed by the non-profit’s board of directors, which has good intentions and a stated purpose of putting humanity above profit – and which oversees CEO Sam Altman, who is extremely profit-oriented. Naturally, this created tensions, and especially in the past year there have been clashes between the non-profit and for-profit sides, according to American media.
A few weeks ago, one of the board members, Helen Toner, Director of Georgetown University’s Center for Security and Emerging Technology, clashed with Sam Altman. She had co-authored a research paper pointing out that OpenAI’s launch of ChatGPT in November 2022 set off an AI race to the bottom, where competitors followed suit and launched similar products without sufficient regard for safety. At the same time, the paper highlighted competitor Anthropic’s approach to AI as better than OpenAI’s. Anthropic’s management are former OpenAI employees who left the company – after a conflict with Sam Altman – to develop something they believe is safe and responsible.
The conflict prompted Altman to consult with another member of the board, Ilya Sutskever, also an employee and co-founder of OpenAI, but known to be concerned about what AI – and especially AGI – could do to humanity. Altman tried to get Toner off the board, but Sutskever chose to stand with the board in getting Altman out, according to The New York Times – a decision Sutskever regretted and apologized for on X a few days later.
The non-profit’s board undoubtedly screwed up in the way it fired Sam Altman, and it underestimated his popularity. Although the board wrote in the dismissal announcement that he had not been honest in his communication with them, that was nowhere near enough to satisfy the horde of worshippers among employees and profit-hungry investors. Massive criticism poured down on the board members, who were called clowns. 95% of OpenAI’s employees demanded their resignation and the return of their Sam.
On X you get the impression of sectarianism, which takes me back to 2010, when I visited Google with a group of Danish media executives. After touring the company, one of the executives exclaimed in dismay that we had been visiting a sect.
When you want to build something big, you have to get everyone on board, and in the US this is often done with lofty promises of doing something good for humanity. That in itself is almost religious. The one-sided, uncritical enthusiasm for the work towards AGI at OpenAI easily invites comparison to a cult. That 800-plus employees were willing to quit their jobs and work for Microsoft instead of OpenAI – which is still called a ‘start-up’ – is in itself deeply puzzling.
Altman’s AGI Aspirations
Over the past few months, I’ve read many interviews with Sam Altman, and I share the concern of Helen Toner and many others: his drive toward AGI is frightening. AGI is the end goal for OpenAI – and for at least seven other companies, according to AI expert Ian Hogarth, who advises the British government, among others. He writes in the Financial Times that AGI, also known as superintelligence, is a God-like machine that can outmaneuver humans in almost all areas and could be a danger to humanity.
Sam Altman tends to voice agreement with the concerned AGI critics, but when you read what he is actually saying, all your warning lights should be flashing wildly. For example, in Wired he says that OpenAI insists on creating a soft landing for the singularity (the merging of man and machine) and that it doesn’t make sense to build AGI in secret and then just throw it out into the world.
Altman is also interviewed in a podcast by the head of Norway’s large oil fund, where he fires off a cliché about the people who risk unemployment and meaninglessness because of AGI: they can just sit on a beach.
In the US, Profit is God
Tech giant Microsoft plays a major role in the ‘good’ start-up OpenAI, which would never have been able to launch ChatGPT without Microsoft’s billion-dollar investments – including massive computing power. Shortly before his firing, Sam Altman was in negotiations with Microsoft for more billions, and shortly after his firing, Microsoft CEO Satya Nadella entered the scene as the big hero, praising him for his ‘genius’ on social media. He publicly promised positions at Microsoft to Sam Altman, Greg Brockman, and anyone else who wanted to leave OpenAI.
What the people behind OpenAI worried about in 2015 has happened: the commercial tech giant Microsoft gained control over those who were furthest along in the development of AGI. It says something about Sam Altman’s ‘good’ intentions that he was ready to join Microsoft.
From the sidelines, Elon Musk has made fun of all parties – for instance over the fact that OpenAI’s employees are heavy users of Google’s video service rather than that of major investor Microsoft. When Nadella announced that he would hire the two, Elon Musk wrote that they would now be forced to use Teams.
The winner in all this was profit. Even Mel Brooks couldn’t have written a better script for a farce. And Microsoft undoubtedly had a big say in it. On Wednesday morning, OpenAI wrote on X that it had reached an agreement with Sam Altman to bring him back and that changes had been made to the board. The two women – Tasha McCauley, a researcher at the Rand Corporation, and safety expert Helen Toner – are out. Only Adam D’Angelo, CEO of Quora, stays, and in come two men: Bret Taylor, former head of Salesforce, and Larry Summers, former US Treasury Secretary.
The answer to the question of whether you can work towards AGI and serve humanity at the same time must be a resounding no.
I hope to meet Helen Toner one day – or, failing that, to read an in-depth interview with her about what she has been through in the last seven days. She has probably had to shut off all incoming messages to avoid ending up in a psychiatric ward. And she was just doing her duty. Her damn important duty.
Timeline. Milestones of the Week
Friday 17th November
Sam Altman is fired by the board, and Mira Murati is appointed interim CEO. Co-founder Greg Brockman, who also sits on the board, resigns.
Speculation that Sam Altman is coming back. He poses on X with a photo of an OpenAI guest badge.
Late Sunday night, the board announces that it has hired a new interim CEO (Mira Murati was on Sam’s side): Emmett Shear, former head of Amazon-owned Twitch and a critic of AGI.
Microsoft announces that it will hire Sam Altman and Greg Brockman to lead an advanced research lab, but will still support OpenAI, of which it owns 49%.
A memo from Emmett Shear tries to reassure the employees that everything will be solved, but over 90% of them sign a statement and ultimatum: Sam and Greg must come back and the board must resign. Otherwise, they will quit and go to Microsoft.
OpenAI board member and co-founder Ilya Sutskever expresses regret and apologizes on X.
US media outlet The Information reveals that board member Helen Toner, a safety expert at Georgetown University, has published research arguing that Anthropic’s way of working with AI is safer than OpenAI’s. According to the outlet, Sam Altman had been trying to get her kicked off the board.
At 6:00 am European time, OpenAI announces on X that an agreement has been reached: Sam Altman will return, and the board will be replaced except for one member.
The illustration was made with OpenAI’s DALL-E using the prompt: “An illustration depicting the scene in a non-profit organization’s boardroom with two female board members and a profit-eager Chief Executive Director.”