Mayor London Breed has called San Francisco “the artificial intelligence capital of the world,” and she’s not wrong: Companies like OpenAI, Scale AI and Anthropic have built a scene for the technology in the city that puts it far ahead of any other locale in attracting capital and talent.
More than $11 billion in venture funding went to San Francisco-based AI and machine learning startups in the first quarter of this year, data shows, accounting for nearly half of worldwide investment in the sector. Breed and other San Francisco political leaders are counting on the city’s status as an AI-industry magnet to revive other sectors of the local economy.
But in terms of actually using AI—and writing guidelines on how to use it responsibly—San Francisco appears to be lagging behind some other locales. Cities like Boston, Seattle and, most recently, San Jose have written guidelines governing its use by local government.
In July, San Jose published a 20-plus-page document that includes five baseline rules on the use of generative AI by local government. Those include a provision that anything entered into an AI tool like ChatGPT can be made subject to a public records search. Indeed, the rules say outright: “Presume anything you submit could end up on the front page of a newspaper.”
The rules also state that output from the tool must be fact-checked against multiple sources, that any use of the tool must be cited, and that the person using the tool is accountable for the content generated and included in any document.
Rep. Ro Khanna, who represents parts of the East Bay and Silicon Valley in Congress, recently introduced a bill to require the federal Office of Management and Budget to draft guidelines for how federal agencies use AI for website searches. His office drafted the bill using ChatGPT.
Last month, Breed finally asked City Administrator Carmen Chu “to take the lead in developing guidelines for appropriate uses of AI so that we can best incorporate this new technology in how we serve the public,” according to Sophie Hayward, Legislative and Public Affairs Director at the City Administrator’s Office. Hayward added that “the process will take some time.”
The lack of guidelines, however, does not mean that San Francisco hasn’t dipped its toes into use of the tech.
San Francisco International Airport recently implemented a system featuring “smart cameras” that can help travelers find parking spots in the airport garage and even help find their car upon returning from their trip, via a mobile app. The city is also piloting customer service chatbots similar to those now widely used in the private sector, according to the City Administrator’s Office.
That said, use of the technology doesn’t yet appear to be widespread at City Hall. Asked what use there was of tools like ChatGPT at the Board of Supervisors, President Aaron Peskin told The Standard: “I have not nor am I aware of its use by my staff or my colleagues or their staff.”
Generative AI tools can be especially useful to local governments for handling complex or repetitive tasks, such as urban planning, and for making more informed choices in areas like purchasing, health service allocation and regulatory compliance monitoring. New York City, for example, is using AI to track fare evasion on the subway.
But those same uses can become especially problematic when it comes to protecting the confidential information of public employees or crime victims, or preventing activities that could violate people’s rights.
Those problems are far more likely to happen than, say, a paper clip apocalypse played out with traffic bollards.
East Bay Assemblymember Rebecca Bauer-Kahan introduced a state bill in January that seeks to prevent “algorithmic discrimination” by “automated decision tools,” which the bill defines as “a system or service that uses artificial intelligence that has been developed, marketed, or modified to make or influence consequential decisions.”
“We know that the field is evolving rapidly and that we need to better understand what AI means, particularly when it comes to generative AI,” Hayward told The Standard. “This is exciting and certainly creates new opportunities, but also requires thoughtful guidelines.”