A custody battle in the court of public opinion is pitting ChatGPT creator OpenAI against its former (and possibly future) chief executive Sam Altman, with coded appeals over the weekend from employees and Silicon Valley bigwigs sending signals to industry observers.
Since an OpenAI blog post announced Altman's departure Friday afternoon, followed soon after by President Greg Brockman's resignation, the room has been spinning.
A memo to OpenAI employees from Chief Operating Officer Brad Lightcap, obtained Saturday by CNBC, said that Altman’s firing was not related to any misconduct.
“We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices,” the COO wrote. “This was a breakdown in communication between Sam and the board.”
Those board members (OpenAI Chief Scientist Ilya Sutskever, Quora CEO Adam D'Angelo, ex-GeoSim Systems CEO Tasha McCauley and Helen Toner of Georgetown's Center for Security and Emerging Technology) initiated the split in a pair of conference calls.
Altman and Brockman had previously served on the board as well.
Altman and Brockman have reportedly been weighing options, including starting a new firm of their own, returning to OpenAI under different governance standards, or securing the departure of key board members unwilling to diverge from strongly held views about AI's potential.
On Sunday, The Information reported that other AI companies were seeing some upticks in resume submissions from OpenAI employees seeking new jobs, while Bloomberg reported that Altman had been seeking to raise billions in funding for a new chip venture.
Several OpenAI employees took to X/Twitter over the weekend with cryptic heart emojis, which reporters for the tech publication The Verge interpreted as hints of support for Altman over the board of directors. The Information followed with word that Altman and Brockman had been invited to visit OpenAI's headquarters Sunday as part of a reinstatement push.
Altman posted a selfie on social media Sunday afternoon, apparently making the visit while wearing a lanyard with a guest pass attached.
Drawing on a wide array of large-language-model technology and an unknown amount of built-in moderation, ChatGPT allowed The Standard to write a typical San Francisco news story. Farther afield, with more serious implications, teachers and professors in higher education were weighing the technology's potential effects on curriculums, pedagogies and assignments.
At OpenAI's office in San Francisco's Mission District, which a San Francisco Standard reporter once tried to visit, scrutiny has focused on the compensation and amenities provided to engineers. The company has also drawn lawsuits over alleged misuse of personal and copyrighted data in its models.
Earlier this year, Altman testified before Congress and at local tech summits about the need for AI regulation from both within and outside the industry. He continued those calls as recently as this week in an onstage appearance at the Asia-Pacific Economic Cooperation gathering.
“We don’t need heavy regulation here, probably not even for the next couple of generations,” Altman said. “But at some point—when the model can do the equivalent of a whole company, and then a whole country, and then the whole world—maybe we do want some collective global supervision of that and some collective decision-making.”
In September, reports surfaced that the company was planning to sell existing shares in a so-called tender offer that would put its valuation at over $80 billion.
A month later, the company officially signed a new lease in San Francisco's Mission Bay neighborhood, subleasing part of Uber's office space and spurring speculation about how its staff might affect other smaller nearby businesses and ventures.
Some observers suggested that the board struggled to keep up with OpenAI’s rapid growth and increasing complexity.
At the company's first developer conference this month, Altman touted the service's popularity, saying it had more than 100 million weekly users and 2 million developers building on its APIs.
Some of Silicon Valley's boldfaced names, no doubt drawing on their own experiences with leadership challenges and intra-firm turmoil, offered support for Altman but also for a systemic resolution of a situation they seemed to suggest couldn't simply be copied and pasted over.
"OpenAI investors (like [Microsoft]) need to step up and demand that the governance weaknesses at [OpenAI] be fixed," former Yahoo (and current Sunshine) CEO Marissa Mayer said late Saturday. "Yes, they should do right by [Altman]. But, regardless, decisions around leadership & direction in a technology as important as fire and electricity can't be decided by 2-4 people on a bad day."
"Sam should get his job back, the board should be replaced by founders and investors who have skin in the game, the nonprofit should be converted to a C-corp, and [Elon Musk] should get shares for putting in the first $40M+. In other words, undo all the shenanigans."
Musk concurred in a reply: "Given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such drastic action."
George Kelly can be reached at firstname.lastname@example.org