The second day of the APEC CEO Summit was marked by last-minute cancellations of some of the highest-profile scheduled speakers.
Chinese President Xi Jinping, set to give a talk Thursday morning, canceled at the last minute, instead offering a statement on the CEO Summit’s website. He was present at the main APEC leaders' meeting.
Elon Musk, too, pulled out at the 11th hour after being slated to speak with Salesforce CEO Marc Benioff. He also attended an event for Xi on Wednesday evening. Hours before, Musk had replied in support of a post on X widely perceived as antisemitic. Former U.S. Secretary of State John Kerry was tapped to fill in for Musk, and a conversation originally planned to focus on AI was redirected toward climate change efforts.
AI was a major topic of conversation in the bilateral talks between Xi and President Joe Biden at the Filoli estate Wednesday. That theme was carried by some of the biggest names in Silicon Valley, including OpenAI CEO Sam Altman and Google CEO Sundar Pichai.
Mayor London Breed has for months now proclaimed San Francisco the world’s capital of artificial intelligence.
And what better way to emphasize that distinction than by introducing San Francisco’s eminent AI leaders and two of the region’s biggest tech companies to the dignitaries and executives who make up the CEO Summit’s audience? If you missed the address, don’t worry: Breed’s office released a fact sheet with many of the data points she referenced in the speech.
The panel, moderated by Emerson Collective founder Laurene Powell Jobs, featured AI leaders at Meta and Google alongside Sam Altman, CEO of ChatGPT maker OpenAI. Naturally, the technologist-heavy panel focused a fair bit on more esoteric matters, like the pros and cons of open-sourcing AI software.
But much of the conversation boiled down to the future of regulating AI, given President Biden’s executive order on AI regulations and a major AI safety summit held in the United Kingdom.
“We all are totally down for regulation,” said Meta Chief Product Officer Chris Cox.
Cox and Google Senior Vice President James Manyika, perhaps by virtue of working at established Big Tech firms, were more cautious about completely unleashing the technology.
The executive order put out by the Biden administration is “a good start,” according to Altman, although he said there are points to quibble on. But he brushed off the short- and medium-term ramifications of artificial intelligence and expressed a more accelerationist sentiment.
“We don’t need heavy regulation here, probably not even for the next couple of generations,” Altman said. “But at some point—when the model can do the equivalent of a whole company, and then a whole country, and then the whole world—maybe we do want some collective global supervision of that and some collective decision-making.”
But Powell Jobs took the conversation back to short-term matters, asking the pressing question of what role AI will play in next year’s elections in the United States and abroad.
“How do we as consumers trust what they’re doing?” she asked, specifically taking Meta to task for the way it handled a doctored video of Nancy Pelosi in 2020.
Cox touted the 90 fact-checkers Meta employs across 60 countries to review viral content, while Manyika pointed to SynthID, which watermarks and identifies images made by Google’s generative image tools on Vertex AI.
Altman, meanwhile, said the world largely already understands visual misinformation created by existing generative AI technologies. “The dangerous thing is the new stuff—the known unknowns, the unknown unknowns—that are going to come from this,” he said.
For better or worse, it seems he’s thinking ahead—some might say too far ahead.
To give credit where it’s due, Bloomberg’s Emily Chang held Google CEO Sundar Pichai’s feet to the fire on all manner of topics: the company’s ongoing federal antitrust lawsuit, criticism from Google workers over its involvement in Project Nimbus as the Israel-Hamas conflict intensifies, and concerns that Google is “missing the boat on AI” despite pumping up to $2 billion into San Francisco artificial intelligence startup Anthropic.
Pichai largely sidestepped the tough line of questioning, seeking to steer the conversation back to the topic promised by the panel’s title: “Innovation That Empowers.”
He positioned Google as a partner to governments and a technology provider to Anthropic, even as Chang described Google as wielding “geopolitical power.” He called for a “global consensus on smart regulations” surrounding AI while asserting the need for “innovation that’s bold and responsible.”
When it comes to regulating AI, Pichai compared the problem to that of combating climate change, saying all countries have a “shared incentive to solve for safety.”
China, he said, has a major role to play in that work, even as Pichai acknowledged that the company’s presence in that country is limited.
Pichai, curiously, underplayed Google’s significant market share in search—and, consequently, its influence on how information is spread. “I generally don’t think we have that position,” he said. “We are not 90% of users’ information needs.” (He pointed to his kids’ social media use, which likely includes TikTok, as a factor in that.)
Asked about Google’s involvement in Project Nimbus—an Israeli government cloud computing project—Pichai said Google partners with “like-minded governments that share democratic values around the world.”
“Project Nimbus is—this was a [request for proposals] from Israel’s Ministry of Finance to modernize their digital infrastructure,” Pichai explained of the controversial project. “We are proud to be doing Project Nimbus like we do with many governments around the world.”
As their conversation came to an end, Chang asked about Google revising its motto from “Don’t be evil” to “Do the right thing” in 2015. “What does it mean to do the right thing in an AI-powered world?” she asked.
Pichai responded with a quip about AI helping to steer the company in the right direction.
“But, look, it has to be grounded in the fundamental values of humanity, human rights and universal human values we all agree on,” he said. “It’s part of the foundation for it, and we will build it up from there.”