
Silicon Valley says this AI bill will kill tech in SF. Some mayoral candidates agree

Mayor London Breed addresses the crowd during the second mayoral debate at the University of California College of the Law, San Francisco, on Monday, June 17, 2024. | Source: Adahlia Cole for The Standard

Aaron Peskin and Mark Farrell represent opposite ends of the ideological spectrum in the mayor’s race, but there’s one thing they agree on: A state bill proposing to regulate AI development in the name of safety is bad news for San Francisco.

State Sen. Scott Wiener said his bill, SB 1047, is aimed at ensuring that the largest AI companies implement safeguards to prevent their technology from being used to cause serious harm. Many in Silicon Valley say the bill could threaten the budding AI industry that has clustered in San Francisco, at a time when the city is still struggling to recover from the pandemic.

For a wonky piece of legislation that applies only to companies spending $100 million or more on training AI models, SB 1047 has become a salient local issue, reflecting how much of the San Francisco economy revolves around tech. For his part, Wiener said he’s “a big fan of AI” and “very proud that San Francisco is the heart of AI innovation.” 

Peskin was more circumspect.

“The bill’s good intentions are overshadowed by speculative regulations that will chill innovation before it begins,” the Board of Supervisors president said. “We should approach the regulation of AI just as we approach any industry that impacts how San Franciscans live, work and play. That means safeguarding against known harms, like discrimination or market manipulation, while promoting concepts like open source or open weights that can allow engineers and entrepreneurs to innovate in ways that actually benefit workers and consumers.”

Aaron Peskin speaks at a mayoral debate in June. He has been critical of SB 1047, which some say is bad news for San Francisco. | Source: Adahlia Cole for The Standard

Farrell went further, writing on Medium that the bill “has the potential to kill our nascent AI ecosystem in San Francisco and stop the one kernel of hope our fragile local technology economy has today.” 

“Everyone in San Francisco should be alarmed about SB 1047,” he wrote. “Let’s leave AI regulation to the federal government, and locally, instead focus on fostering technology innovation, including AI, rather than giving the industry a reason to leave us.”

While her challengers have made an issue of the bill and its potential impact on the economy, Mayor London Breed has opted to straddle the fence, noting that Wiener’s office is actively amending the bill based on feedback from technology companies.

Mayor London Breed is “working diligently on this bill with a goal of embracing and supporting AI, while safeguarding against possible abuses,” a spokesman said. | Source: Juliana Yamada for The Standard

Breed’s spokesman, Jeff Cretan, called AI regulation “complicated and important,” praising Wiener’s efforts to put forward legislation but not explicitly endorsing the bill. 

“Mayor Breed knows that Sen. Wiener is working diligently on this bill with a goal of embracing and supporting AI, while safeguarding against possible abuses,” Cretan said. “AI is and will continue to be an important part of San Francisco’s present and future.” He noted that Wiener’s office is working with Anthropic, a leading AI company, to amend the bill.

Some of Silicon Valley’s most prominent voices have criticized the legislation. Y Combinator’s Garry Tan — who has encouraged startups to headquarter in San Francisco — said SB 1047 could “gravely harm California’s ability to retain its AI talent and remain the location of choice for AI companies.” Venture capital firm Andreessen Horowitz said the bill would set “an impossible standard to meet,” arguing that “most startups will simply have to relocate to more AI-friendly states or countries.”

Y Combinator’s Garry Tan said SB 1047 could “gravely harm California’s ability to retain its AI talent.” | Source: Noah Berger for The Standard

In a city where some 18.7% of workers held tech jobs as of 2021, SB 1047 casts a long shadow. One entrepreneur cited the bill to rail online against crime in the city, after she was injured by a homeless man while sitting in a coffee shop. She argued that state leaders should focus more on personal safety than on regulating innovation. “Focus on keeping us ACTUALLY safe,” she wrote. 

Wiener called concerns that his bill could quash San Francisco’s innovation economy “fear-mongering.” He said in an interview that Farrell’s post misinterpreted what the proposed legislation would do. Its focus on companies spending $100 million or more to train AI models limits its scope to the largest firms, such as Anthropic, OpenAI and the tech giants, Wiener said.

Anthropic, which develops the Claude chatbot, told the senator’s office in a late-July letter that it does not support the bill in its current form but is open to doing so if the company’s suggested amendments are addressed. “We’re actively working with Anthropic and making amendments to the bill,” Wiener said. 

Those changes include narrowing the scope of the bill to focus on ways to hold companies responsible for causing disasters, “instead of deciding what measures companies should take to prevent catastrophes (which are still hypothetical and where the ecosystem is still iterating to determine best practices),” according to the letter.

OpenAI, led by Sam Altman, is one of the companies spending at least $100 million on training AI models that would be affected by the bill. | Source: Justin Katigbak/The Standard

In its current form, the bill would allow the attorney general to sue a company whose AI technology causes catastrophic harm if the company failed to implement adequate safeguards.

Among the “critical harms” the bill hopes to prevent are the development of AI that makes it easier to create or use “a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties” and attacks that cause more than $500 million in damage to critical infrastructure. 

Like Breed, mayoral candidate Daniel Lurie, the nonprofit executive and Levi Strauss heir, attempted to walk a middle path when asked for his thoughts on the bill. “We have to manage the risks posed by AI, and to do that government needs to know what this industry is doing — much like we track autonomous vehicles to ensure they are operating safely,” he said in a statement. “However, we have to do that in a way that ensures California remains competitive and attractive for these businesses.” Lurie said the state “must avoid regulatory schemes that undermine the investment and jobs that this emerging industry is poised to bring to our struggling city.”

A spokesperson for Supervisor Ahsha Safaí, another mayoral candidate, did not respond to a request for comment on SB 1047.

Wiener compared the outcry over the bill to pushback against California’s efforts to pass data privacy laws in 2018. “The tech sector told us, ‘You’re going to push tech out of California if you pass this law,’ ” he said. “Tech is still here and growing strong, and we now have a data privacy law that protects consumers.”