Oppenheimer’s grandson: AI is ‘probably not’ as big a risk as nuclear weapons

Charles Oppenheimer, grandson of J. Robert Oppenheimer, discusses artificial intelligence Wednesday at Dreamforce. | Source: Amanda Andrade-Rhoades/The Standard

The grandson of J. Robert Oppenheimer — the physicist known as the father of the atomic bomb — said the existential threat artificial intelligence poses to humanity has not reached the level posed by nuclear weapons, but that world leaders should be on guard.

“There’s this question of whether [AI] is going to improve itself and be as powerful as fission and kill us all,” Charles Oppenheimer said during a panel Wednesday at Salesforce’s Dreamforce conference. “Probably not, at least as far as we know.”

The conference bills itself as the “largest AI event in the world,” with three days of speakers, demos, and even decor overwhelmingly obsessed with the technology.

Oppenheimer is a San Francisco-based investor who has worked with several companies in the software industry, including Salesforce. He also founded the Oppenheimer Project, a nonprofit with a stated goal of advancing technology safely through international cooperation.

Lately, he said, he’s been focused on ensuring that nuclear energy is used to help society.

Oppenheimer this year signed a letter calling on world leaders to consider the risks AI could pose with “the wisdom and urgency required.” Other signatories included Richard Branson and pioneering AI scientist Geoffrey Hinton.

“We do not yet know how significant the emerging risks associated with Artificial Intelligence will be,” the letter reads. “We are at a precipice.” 

But unlike nuclear weapons, whose threat to humanity was clear from the beginning, the future of AI is opaque. When fission was discovered, Oppenheimer said, the potential to make bombs was immediate. With AI, we don’t “even fully understand if we’ve reached a level of intelligence.”

When it comes to using AI to inform decisions around topics like healthcare, there’s a level of risk that should be taken seriously, he said.

His preferred solution for keeping society safe from the harms of AI is to learn from the world’s failure to deal with nuclear proliferation and, this time, hold international discussions.

When asked in a follow-up interview whether he supports SB 1047, state legislation that would create regulation to prevent AI systems from causing catastrophes, Oppenheimer demurred, saying he wasn’t up to date on the measure.

Although he believes it might be too early for legislation, Oppenheimer said that when the time comes, governments should heed scientists’ advice — something he said didn’t happen in his grandfather’s era.

“I want people to exercise that judgment. The further up the stack you get from the base science, the more choices you’re making in an organization,” Oppenheimer said. “AI is pretty far up there. It’s not creating itself. We’re creating it.”

SB 1047 has passed out of the state Legislature and awaits a signature or a veto from Gov. Gavin Newsom.

When asked about the bill during his Dreamforce appearance Tuesday, Newsom echoed critics of the legislation, saying he was concerned about the “chilling effect” it may have on innovation in the state.

“We dominate this space, and I don’t want to lose that,” Newsom said.