
Elon Musk, Steve Wozniak, tech leaders call for 6-month pause on AI development, citing dangers to society

Elon Musk at a meeting in Stavanger, Norway on August 29, 2022. | Photo by Carina Johansen/Getty Images

More than 1,000 tech professionals, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk, are sounding the alarm about artificial intelligence and calling for a pause on the training of powerful AI models.

AI systems with human-competitive intelligence “can pose profound risks to society and humanity,” they said in an open letter published by the Future of Life Institute. The letter cited risks that machines could spread misinformation, take fulfilling jobs from people and eventually outsmart human minds.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than [OpenAI’s] GPT-4,” the letter states. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

That pause should be used to develop protocols to ensure that AI’s future development is safe and beneficial to society, the technologists said.

The letter comes at a time when AI’s development has reached a fever pitch, with new products and applications of the technology rolling out constantly.

It also represents an unusual call for government intervention in a tech industry often known for its libertarian ethos.

Beyond a six-month moratorium on training powerful new AI systems, the letter urged developers and policymakers to work together to create a robust regulatory framework for AI. Those regulations should help the public distinguish “real from synthetic,” ensure the technology’s safety, and hold people or organizations liable for AI-caused harm.

The signatories also called for “well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”

Concern about the potential danger of artificial intelligence is nothing new. Researchers have long discussed the risks of AI becoming “misaligned” with its creators’ goals, a development that some worry could lead to the destruction of humanity.

Many discussions of the “alignment problem” have focused on theoretical risks, and the Future of Life letter engages with those as well.

It describes AI labs as “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

But it also highlights more immediate dangers familiar to the average person: misinformation, job loss and the lack of regulation governing AI.

Beyond Musk and Wozniak, other signatories to the letter include politician Andrew Yang, Skype co-founder Jaan Tallinn, Ripple co-founder Chris Larsen, author Yuval Noah Harari and a large number of AI professionals, academics and researchers.
