The San Francisco City Attorney’s Office is filing a groundbreaking lawsuit against 16 of the world’s most-visited websites that allow users to generate deepfake pornography of real women and children.
The lawsuit, filed on behalf of the People of the State of California, says the sites violate state and federal laws, including prohibitions against revenge pornography, by producing explicit images without the victims’ consent.
“This investigation has taken us to the darkest corners of the internet, and I am absolutely horrified for the women and girls who have had to endure this exploitation,” City Attorney David Chiu said in a statement shared with The Standard. “We have to be very clear that this is not innovation—this is sexual abuse.”
Experts estimate millions of deepfake pornographic images are created annually, with the vast majority targeting women and girls, the lawsuit said.
At a press conference Thursday at City Hall, Chiu and two other city attorneys are expected to outline the sites’ privacy-rights violations and ways they have inflicted harm on victims.
The lawsuit asks a judge to shut down the websites and the services that generate such images, award monetary damages to victims, and allow additional defendants to be added as they are identified. The suit also seeks to permanently restrain the defendants from operating such websites and to recover civil penalties and legal costs.
The complaint targets several companies based in the U.S. and abroad as well as 50 unnamed John Doe defendants who operate popular “nudifying” websites that let users submit images of clothed victims. The city attorney alleges most of the sites offer limited free “undressing” trials before requiring payment for additional images, often accepting cryptocurrency.
Chiu emphasized the broader implications, adding that “we all need to do our part to crack down on bad actors using AI to exploit and abuse real people, including children.”
San Francisco’s legal action comes as lawmakers nationwide grapple with regulating rapidly advancing AI technology and its potential for abuse. Last October, President Joe Biden announced a sweeping executive order on AI regulation using the nation’s Defense Production Act. The order relies in part on voluntary commitments from AI developers, as well as guidance from the Commerce Department on distinguishing AI-generated content from authentic content.