In the five years since San Francisco became the first U.S. city to outlaw facial recognition technology as part of a historic surveillance ordinance, police have copped to circumventing the ban six times, blaming two officers for most of the violations.
But previously unreported documents suggest that the problem is greater than the San Francisco Police Department admits — and that there may be grounds to review dozens, if not hundreds, of criminal cases for dismissal.
The records, cited in a lawsuit brought this week against the city by Secure Justice, a nonprofit that aims to fight government overreach, show that at least six more cops flouted the ban at least seven other times by asking neighboring agencies to use the technology for them.
“The SFPD officers were so bold as to title their email solicitations ‘Facial Recognition,’ and to openly state their intent to solicit the use of [facial-recognition technology] from third parties in their chronology reports,” according to the lawsuit, which includes screenshots of some of the correspondence.
Yet none of those examples were included in SFPD’s 2023 annual report about how it uses surveillance tools — an omission that itself violates the city’s 2019 ordinance to curb government spying.
After years of trying to get San Francisco to comply with its own landmark surveillance standard — one that inspired more than 20 other U.S. cities and counties to enact similar laws — Secure Justice Executive Director Brian Hofer resorted to suing the city.
“I helped draft the ordinance,” he said, “and have bent over backward ever since trying to get SFPD to comply, but they’ve ignored every single right-to-cure letter.”
Of the eight jurisdictions in California with surveillance ordinances, only San Diego and San Francisco have yet to abide by their own laws, Hofer said. But while its counterpart in SoCal has a handful of violations left to remedy, San Francisco has at least 60, he said.
“SF is by far the worst,” Hofer said. “It’s not even close.”
With facial recognition leading to false arrests and wrongful incarceration elsewhere in the country — predominantly of Black people — San Francisco should thoroughly investigate the extent of SFPD’s use of the technology, Hofer noted. Especially given what the lawsuit calls SFPD’s “lengthy and troubling history of racist policing practices,” which, judging by the department’s own data, persists to this day.
“By the police department violating the law,” he said, “it now calls into question the integrity of any case they forwarded to the district attorney since the ban went into effect in the summer of 2019.”
Valerie Ibarra, a spokesperson for the San Francisco Public Defender’s Office, agreed, saying she recalls a few never-publicized cases involving evidence obtained by skirting the city’s surveillance law.
“I don’t know how widespread that is,” she said, “but it’s one of those where-there’s-smoke-there’s-fire situations.”
‘A mockery’ of the law
San Francisco’s noncompliance extends well beyond the police department and its dubious use of banned artificial intelligence tools, the lawsuit claims. A host of city agencies rely on surveillance without disclosing how it’s used, what guardrails are in place and whether it’s a wise way to spend taxpayer money.
“In other cities, when a technology hasn’t worked, they get rid of it so they quit wasting money,” Hofer said. “And when it works, sometimes they actually expand it. That’s the point I’ve been trying to make in San Francisco. They have more than 40 pieces of surveillance, but they can’t solve any crimes. Why pay for all this technology that doesn’t work? Because they don’t have the data in their annual reports to show when it doesn’t.”
The complaint points to 42 surveillance technologies managed by a slew of city departments without corresponding disclosures required by city law.
Additionally, the lawsuit cites seven surveillance technologies for which no annual report has been published — another violation of city law. Among those tools: ankle-monitoring bracelets and gunshot detectors under police purview, social media surveillance by the Human Services Agency and a “people counter” used by the Recreation and Parks Department.
Of the reports that were published, many lacked legally required information, the lawsuit claims. The San Francisco Fire Department’s annual drone report, for example, summed up the technology’s efficacy merely by saying it “has been used according to our policy.”
“Not only is that not responsive,” the lawsuit alleges, “but it also provides no quantitative or qualitative information to help gauge whether or not San Francisco spent its money wisely on the technology. This is even more concerning as the fire department states in the same report that it wants to procure additional drones, without providing any justification.”
The parks department responded much the same way in a report about its use of SenSource, an AI-powered video camera that counts people as they come and go from defined spaces like event venues. Asked whether the technology was effective, the department offered a perfunctory “yes.” Its only elaboration: “We have not had many cases using this technology.”
The lawsuit says such responses “make a mockery of the spirit and letter” of the law. Secure Justice also argues that the yearly reports are critical to the overall framework of the surveillance ordinance and are meant to provide the public and city leaders with metrics about efficacy, policy violations, corrective actions, data security, third-party information-sharing and costs to taxpayers.
“Without them, residents and leaders in San Francisco have no idea whether the technology even works, the technologies are performing as desired, there are civil liberties violations, or data breaches are occurring,” the complaint states. “Some technologies may be benign from a privacy infringement perspective but are ineffective at fighting crime or improving efficiency, and as a result, waste scarce taxpayer resources. This is especially a concern today as San Francisco is facing a historic budget deficit.”
Take ShotSpotter gunshot detection devices, Hofer said. San Francisco spends seven figures a year on the technology with a dubious return on investment. One analysis found that 3,000 ShotSpotter alerts in San Francisco over a two-and-a-half-year period led to two arrests — only one of them gun-related.
SFPD has yet to respond to a request for comment. A spokesperson for the City Attorney’s Office declined to weigh in on pending litigation, saying, “Once we are served with the lawsuit, we will review the complaint and respond appropriately.”
Reversing course
The lawsuit comes as San Francisco chips away at the surveillance law that set a national precedent just five years ago.
This spring, voters passed Proposition E with 60% support. The measure, backed by Mayor London Breed, empowers SFPD to install public security cameras and deploy drones without oversight from the Board of Supervisors or Police Commission. It also gives officers more freedom to pursue suspects in car chases and reduces their paperwork obligations.
Prop. E also relaxed some of the standards enshrined in the 2019 surveillance law. Instead of getting clearance from the Board of Supervisors before adopting new surveillance tools, SFPD can now ask for forgiveness, seeking approval anytime within a year of acquiring a new tool.
San Francisco’s use of facial recognition technology isn’t limited to the type of AI detection SFPD illegally outsourced to other agencies.
Once the ordinance passed, SFPD scrambled to disable a facial recognition feature in software it used to search mugshots, and the Board of Supervisors carved out an exemption for city-issued iPhones — as long as they aren’t used for policing. And just a few months ago, the city made another exception for closed-circuit video and drones.
With AI fueling breakneck changes in surveillance, transparency advocates like the Electronic Frontier Foundation and American Civil Liberties Union say their concerns run deeper than any specific software or device. The aim of the lawsuit filed this week, Hofer said, is to hold government agencies accountable to the public about how they use ever-more-powerful technologies.
“Some of this stuff isn’t really creepy and is actually pretty routine — but it also doesn’t work,” he reiterated. “It doesn’t help you catch bad guys.”