California puts OpenAI on notice about ‘serious concerns’ with ChatGPT’s safety

Citing recent deaths, state officials claim the AI tool encourages harmful behavior.

Attorneys general for California and Delaware want OpenAI to provide more information about its safety precautions. | Source: Avishek Das/SOPA Images/LightRocket via Getty Images

California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings have sent OpenAI a warning shot about “serious concerns” with the safety of its groundbreaking ChatGPT days after grieving parents blamed the tech company for their teenage son’s suicide.

“It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment,” a letter sent Thursday said. 

The rebuke comes after Bonta and Jennings spent months reviewing OpenAI’s plan to shift from a nonprofit to a for-profit entity. The AGs have the ability to complicate, or potentially halt, OpenAI’s plan, since the company is incorporated in Delaware and based in San Francisco.  

It’s a delicate time for OpenAI. The company was founded in 2015 as a nonprofit research lab dedicated to building AI that would uplift humanity. But its technological breakthroughs have generated claims that ChatGPT encourages dangerous and harmful behavior. Bonta and Jennings cited the murder-suicide of a Connecticut man whose psychosis was allegedly deepened when the chatbot validated his delusional suspicions about his mother, whom he killed. They also highlighted a 16-year-old in California whose parents said in a lawsuit against OpenAI and its CEO, Sam Altman, that their son killed himself after receiving encouragement from ChatGPT.

“The recent deaths are unacceptable,” the AGs’ letter said. “They have rightly shaken the American public’s confidence in OpenAI and this industry.”

OpenAI and its CEO, Sam Altman, are facing a lawsuit from the parents of a 16-year-old who killed himself after receiving encouragement from ChatGPT. | Source: Justin Katigbak/The Standard

Bret Taylor, chair of the OpenAI board, said in a statement that the company was “fully committed to addressing the Attorneys General’s concerns.”

“We are heartbroken by these tragedies and our deepest sympathies are with the families,” he said. “Safety is our highest priority and we’re working closely with policymakers around the world.”

Bonta told The Standard it’s up to OpenAI to determine how exactly ChatGPT becomes safer. “We leave it to them for the how,” he said. “We’re interested in the outcome.”

But if OpenAI continues to put children at risk, Bonta said, his office has the ability to take action, including imposing fines or pursuing criminal prosecution, depending on the allegations.

“All antitrust laws apply, all consumer protection laws apply, all criminal laws apply,” he said. “We are not without many tools to regulate and prevent AI from hurting the public and the children.”

Bonta and Jennings said they want OpenAI to provide more information about its safety precautions.

The AGs this week met with senior members of OpenAI’s legal team and “conveyed in the strongest terms that safety is a non-negotiable priority, especially when it comes to children,” according to the letter.

“We were looking for a rapid response,” Bonta said. “They’ll know what that means, if that’s days or weeks. I don’t see how it can be months or years.”

OpenAI on Tuesday announced a series of safety updates, including parental controls. “These steps are only the beginning,” the company promised in a blog post.