Regulators in Italy have ordered San Francisco-based artificial intelligence company OpenAI to stop processing Italian citizens’ data or face a fine of up to $22 million.
On March 31, the Italian Data Protection Authority temporarily banned the company’s viral chatbot ChatGPT, saying that OpenAI collects user data in violation of privacy laws and does not shield children from inappropriate content.
OpenAI did not respond to a request for comment by publication time.
The move marks the second time in recent months that the regulator has blocked a popular AI platform over privacy and content concerns.
In February, the agency hit Replika, an AI-powered “virtual friend,” with a data ban, saying that it endangered children and emotionally fragile people, Reuters reported at the time.
The bans come at a moment of heightened concern about the rapid development of AI technology.
Just days before Italy moved on ChatGPT, a group of tech leaders—including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak—published an open letter calling for a six-month pause on the training of advanced AI models. That time should be used to develop a regulatory framework for the technology and ensure its future development is beneficial to humanity, they said.
But Italy stands out among Western countries in its attempts to limit the reach of AI.
In a statement on its website, the Italian Data Protection Authority raised concerns both about the legality of OpenAI’s operations in the country and the security of the data it collects.
“[T]here appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” the agency said.
It also said that ChatGPT lacks an age verification mechanism, which means it could expose children to inappropriate content.
But that concern may be partially overstated: Since a Dec. 15 update, ChatGPT largely refuses to respond to prompts it perceives as featuring inappropriate or sexual content or promoting violations of the law, multiple tests by The Standard showed.
The Data Protection Authority also said that a March 20 data breach, which led OpenAI to briefly pull ChatGPT offline, had exposed users’ payment information and conversations with the chatbot.
The agency has initiated an inquiry into the company.
According to the statement, OpenAI must notify the Data Protection Authority within 20 days of its steps to comply with the data ban or face a fine of up to 20 million euros (roughly $22 million) or 4% of its total worldwide turnover.
The Italian regulator made similar arguments to support banning Replika last month. However, unlike ChatGPT, the virtual friend app has gained a reputation for sexual content.
Although Replika is marketed as an “AI companion who cares,” users have regularly complained that the chatbot was sexually harassing them.
Following the Italian ban, Luka—the San Francisco-based company behind Replika—removed erotic content from the app. That delivered a heavy blow to users, who had formed deep personal and romantic bonds with the AI companions and sometimes entered into virtual marriages with them.
The company later restored those features for legacy users.