Here’s what Sam Altman said at APEC, a day before he got fired from OpenAI

Sam Altman spoke at the APEC CEO Summit about AI a day before his ouster from the company. | Source: Justin Katigbak/The Standard

To say that Sam Altman’s firing from ChatGPT maker OpenAI Friday came as a shock would be an understatement.

The last few weeks had looked wildly lucrative for OpenAI, fresh off its inaugural DevDay, where the company launched GPT-4 Turbo, a beefed-up version of the large language model behind ChatGPT, and previewed a future of custom AI agents. Reports also hinted at the possibility of OpenAI snagging an $80 billion valuation.

Altman looked to be on top of the world, boasting at DevDay that ChatGPT had reached 100 million weekly active users in the year since the chatbot set off an AI arms race.

Little is known for now about the abrupt firing: According to a blog post shared on the OpenAI website Friday, Altman was let go after a review process by the company’s board of directors that found “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Mira Murati, the company’s chief technology officer, will serve as interim CEO while the company searches for a permanent successor to Altman.

Altman was cryptic in a post on X, formerly Twitter, about his ouster. “i loved my time at OpenAI. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later. 🫡”

But in one of his final public appearances before the ouster, Altman joined a panel at the APEC CEO Summit on Thursday with high-level executives from Google and Meta, moderated by Laurene Powell Jobs. (He also spoke at an event in Oakland on Thursday evening, according to the New York Times.)

What felt like a victory lap at the time now reads like tea leaves for anyone trying to make sense of Altman’s sudden sacking.

Here is a transcript of nearly everything Altman said during the CEO Summit panel, where he said that generative AI “will be the most transformative and beneficial technology humanity has yet invented,” that AI technology will likely not need heavy regulation “for the next couple of generations,” and that enough “societal antibodies” have been built up to address misinformation, especially ahead of the 2024 elections around the globe.

The transcript has been lightly edited.

Jobs: [Meta Chief Product Officer] Chris [Cox], [Google senior vice president] James [Manyika] and Sam, why are you devoting your lives to this work?

Altman: It’s definitely my life’s work and what I’d always wanted to work on since I was a little kid; I studied it in school. I got sidetracked for a while, but as soon as it looked like we had an attack vector, it was very clear that this was what I wanted to work on. I think this will be the most transformative and beneficial technology humanity has yet invented.

I think more generally, the 2020s will be the decade where humanity as a whole begins the transition from scarcity to abundance. We’ll have abundant intelligence that far surpasses expectations; same thing for energy, for health, and for a few other categories, too. The technological change happening now is going to change the constraints on the way we live, the shape of the economy, the social structures, and what’s possible.

I think this is going to be the greatest step forward that we’ve had yet, the greatest leap of any of the big technological revolutions so far. So I’m super excited. I can’t imagine anything more exciting to work on. And on a personal note, just in the last couple of weeks, I have gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime. So it’s just so fun to get to work on that.

Jobs: I’d like each of you to talk a little bit about how you think about some of the existential threats you and others have articulated, as well as the state of regulations, what’s proper and what’s too much. How do we get it right now and then be open to evolving as the technology evolves?

Altman: I had dinner with Yuval [Harari, an Israeli author and public intellectual] in Tel Aviv in early June of this year. He was very concerned.

And I understand it. I really do understand why, if you have not been closely tracking the field, it feels like things just went vertical. Sure, people were doing some of this before; there was a paper here, a model there, a neural net somewhere else. But people who use machine translation don’t really feel like they’re using AI.

All of a sudden, there was this perception of something that had qualitatively changed. You know, now I can talk to this thing. It’s like the Star Trek computer I was always promised, and I didn’t expect it to happen. Why this year? Why not a year ago? Why not in 10 years? What happened? So I think a lot of the world has collectively gone through a lurch this year to catch up. Now, as humans do with many other things, people are like, “Yeah man, where’s GPT-5? What have you done for me lately?” We already moved on, and that’s great. I think that’s awesome; that’s a great human spirit I hope we never lose. But the first time you hear about this or use it, it feels much more creature-like than tool-like. Then you get to use it more, and you see how it helps you, and you see what the limitations are.

And it’s, like, another thing on the technology tree that has been unlocked. Now, I do think this time it’s different in important ways. This is maybe the first tool that can self-improve in the way that we understand it. And we need new ideas: I think we’re on a path of self-destruction as a species right now. We need new ideas and new technology if we want to flourish for tens and hundreds of thousands and millions of years more, and I think a lot of people see the potential of that in AI. But it’s not a clean story of victory. We do have to mitigate the downsides. In the short term, it does all these wonderful things to help us; in the medium term, it can help us cure diseases and find new ways to solve our most pressing problems. But on the other hand, how do we make sure it is a tool that has proper safeguards as it gets really powerful?

Today, it’s not that powerful, not that big of a deal. But people are smart; they see where it’s going. And even though we can’t intuit exponentials well as a species, we can tell when something’s going to keep going, and this is going to keep going. So you get this question of, “How do we get as much of the benefit as possible and not unduly slow it down?” An AI tutor for everyone on Earth? Yes, please. Sounds amazing. An AI medical advisor? Yes, cure every disease, great.

Jobs: But in the hands of bad actors, there could also be very, very negative consequences.

Altman: What kind of limits are we going to put in place? Who’s going to decide what those are? How are we going to enforce them? What are the rules of the road going to be nationally, where we have to have some agreement? People realize the challenge of that. That said, this has been a significant chunk of my time over the last year. I really do think the world is going to rise to the occasion, and everybody wants to do the right thing.

Jobs: Mm-hmm. And what about the executive order [on AI signed by President Biden]? How close does that get to getting it right?

Altman: There are lots of things in there that are worth quibbling with and lots of areas to improve, but as a way of saying, “We’re going to do something here,” I think it’s a good start. The real concern of the industry right now, to paraphrase, is how we make sure we get thoughtful guardrails on the real frontier models without it all turning into regulatory capture that stops open-source models and smaller companies. I think open source is awesome. I’m thrilled you all are doing it. I hope we see more of it.

Jobs: I think we should have a conversation about that. But keep going, because you have some elements that are open source.

Altman: It’s a hard message to explain to people: Current models are fine; we don’t need heavy regulation here, probably not even for the next couple of generations. But at some point, when a model can do the equivalent of a whole company, and then a whole country, and then the whole world, maybe we do want some collective global supervision of that and some collective decision-making.

We’re not telling you that you have to totally ignore personal harms. We’re not saying you should go after small companies and open-source models. We are saying, “Trust us, this is going to get really powerful and really scary, and you’ve got to regulate it later.” It’s a very difficult needle to thread through all of that.

Jobs: Sam, let’s talk about next year’s elections and what you anticipate. 

Altman: I really do think we underrate how many societal antibodies have already been built up, though it’s imperfect. Also, the dangerous thing there is not what we already understand, the existing images and videos, but all the new stuff: the known unknowns and the unknown unknowns that are going to come from this. We talked recently a little bit about this idea of personalized one-on-one persuasion and how we don’t quite know how that’s going to go, but we know it’s coming. And there’s a whole bunch of other things that we don’t know because we haven’t all seen what, you know, generative video or whatever can do. That’s going to come fast and furious during an election year, and the only way we’re going to manage through it is a very tight feedback loop: we, collectively, the industry, society, everything.

Jobs: Hmm. I suppose the problem is that often the damage is done, and then we notice, and then we correct. I also understand about broad antibodies at the societal level, because we’ve now been swimming in a sea of propaganda and misinformation. However, we still have a lot of people in this country and elsewhere who believe in conspiracy theories that are easily debunked; nevertheless, they believe in them. And that has to do with human nature and the way the brain latches onto information, and that’s something that we can’t quickly evolve past.

Altman: We’ve struggled with that for a long time in human history. Conspiracy theories are always that thing somebody else believes. But I don’t want to discount that problem at all. That is something deep about human psychology, but it’s not new with this technology. It may be amplified more than before.

Relative to what we’ve already gone through with the internet, it’s not clear that AI-generated images are going to amplify it much more.

Jobs: What is the most remarkable surprise that [you believe] will have happened in your field or in your company in 2024?

Altman: The model capability will have taken such a leap forward that no one expected.

Jobs: Wait, say it again? 

Altman: The model capability, like what these systems can do, will have taken such a leap forward that no one expected that much progress.

Jobs: And why is that a remarkable thing? Why is it brilliant?

Altman: Well, it’s just different from expectations. I think people have in their mind an idea of how much better the model will be next year, and it’ll be remarkable how different it actually is.