Opinion

Google’s deal with publishers shows how to meet AI with innovation, not red tape

Let’s respond to advances in technology through optimism and growth rather than fear and contraction. 

[Illustration: a robot holding a newspaper with the headline "Google deal with publishers." AI illustration by Clark Miller for The Standard]

By Greg Ferenstein

Artificial intelligence is unlikely to end life as we know it — the predictions of some doomers notwithstanding — but it’s almost certain to radically upend the news business. And the new way publishers in California have chosen to respond to that very real existential threat — with funding for research and innovation, paid for by companies with the most to gain — offers valuable lessons to the lawmakers seeking to head off much more distant and hypothetical threats to humanity with tools ill-suited to the purpose.  

Under an emerging agreement, Google will pay roughly $50 million into an AI “accelerator” that will build new journalism tools. Google will also pay tens of millions of dollars to existing California news outlets, with a $70 million match from the state. The initiative aims to strengthen democracy through better writing, distribution, and business models for struggling media outlets. Details of the program, partly administered by UC Berkeley, are pending.

That is, rather than restrict technological innovations with blunt regulations, the deal works cooperatively with companies to fight the potential harms of AI with better AI solutions — all while protecting local jobs.

California’s novel approach to one threat posed by artificial intelligence emerged from an entirely different dispute over financing journalism. Via their legislative allies, local newsrooms had attempted to extract subsidies through two bills: California Senate Bill 1327 would have taxed internet advertising, and Assembly Bill 886 would have charged tech companies such as Google and Meta for displaying snippets of news content.

The two bills didn’t have anything to do with AI per se; they were more about publishers trying to get a share of the value their content has generated for big platforms that dominate online advertising. But everyone can see that generative AI is the train bearing down on the news industry, and publishers have mounted similar arguments (and lawsuits) about the way the algorithms behind Gemini and ChatGPT have been trained on their articles.

But with the bills facing an uphill political fight, publishers chose to bargain. As a result, tens of millions of dollars that would have gone to tech companies’ bottom lines will now be spent propping up local newsrooms and developing tools to ensure that the journalists who work in those newsrooms are as well positioned as anyone to capture the productivity gains of AI.

The approach couldn’t be more different from that of SB 1047. That bill, authored by state Sen. Scott Wiener, would require artificial intelligence models to pass state-sanctioned safety checks and would impose massive fines on developers whose tools are used by malicious actors to cause damage.

SB 1047 represents the classic government temptation: restrict and control. Critics of the bill, such as Stanford’s Andrew Ng, argue that it is especially crushing for scrappy startups that rely on freely available (“open source”) AI models and don’t have expensive teams of lawyers to navigate the regulatory maze.

Over the past few years, technologists have shown a willingness to exit the state en masse in response to new taxes and regulation. The once-bustling San Francisco downtown is largely empty, and the city faces a budget crisis.

How will San Franciscans struggling to find a job benefit from further restrictions on artificial intelligence? AI companies will simply move out of the state, create new wealth elsewhere, and continue to build technologies that could threaten public safety. 

Instead, what if Wiener took a page from those who cut a deal with the AI companies to fund innovation that would address the potential harms of the technology — and make sure that innovation happens here, in California? A deal like that would improve the state’s economy by making it a hub of new sophisticated countermeasures to malicious AI. 

To be sure, the deal publishers cut with Google has its critics, who have focused mostly on the journalism portion of the deal. 

“state funded ‘journalism.’ what could go wrong?” tweeted Mike Solana, editor of Pirate Wires, a libertarian-leaning tech-and-culture newsletter.

Left-leaning critics, such as state Sen. Steve Glazer, worry that on top of undercutting what they see as better bills that put Google and Meta on the hook for the news content they monetize, the deal may also benefit large investors who will still gut newsrooms with automation. 

But these critics miss the bigger picture. The sheer number of jobs threatened by AI dwarfs even the largest fines that tech companies could pay. And traditional journalism is not equipped to compete with the flood of written content that small teams using advanced technologies can generate and distribute widely. To thrive, journalists will need to find sustainable business models that capture the attention of paying consumers.

These tools don’t need to be particularly complex to be impactful. For instance, modern journalism requires incredible writing and publishing speed to be competitive. On social media, the first stories published get a head start toward being the most shared.

“I think generative AI such as ChatGPT can make us faster and better,” Nicholas Carlson wrote last year, when he was editor in chief of Business Insider. He recalled using AI to comb through Donald Trump’s indictment statements.

This is true for me, too. As a former journalist who now researches psychedelic policy and other matters, I’ve used AI to quickly transcribe large volumes of bureaucratic state meetings — it has saved me hours of work. I also wrote a script that uses OpenAI’s GPT models to summarize background information, prompted with examples of my writing style and topic expertise.

AI completes the tedious aspects of my job while I focus on original reporting and analysis. I’m able to publish high-quality information more quickly. 
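The kind of script described above can be surprisingly small. Here is a minimal sketch of one plausible approach — building a few-shot prompt that pairs style examples with a meeting transcript, ready to send to a chat-completion API. The function name, prompt wording, and example inputs are illustrative assumptions, not the author's actual code; only standard-library Python is used, and the line that would call OpenAI's API is shown as a comment.

```python
def build_summary_messages(style_examples, transcript, topic):
    """Assemble a chat-style message list for a summarization request.

    style_examples: short passages demonstrating the desired writing voice.
    transcript: raw meeting transcript text to be summarized.
    topic: the subject area the summary should emphasize.
    """
    # Fold the writing samples into a system prompt so the model
    # imitates the author's voice rather than a generic register.
    samples = "\n\n".join(f"Example:\n{ex}" for ex in style_examples)
    system_prompt = (
        f"You summarize public-meeting transcripts about {topic}. "
        "Match the tone and cadence of these writing samples:\n\n"
        f"{samples}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Summarize this transcript:\n\n{transcript}"},
    ]


# With the OpenAI Python SDK installed and an API key set, the messages
# could then be sent along these lines (not executed here):
#   client = OpenAI()
#   resp = client.chat.completions.create(model="gpt-4o-mini",
#                                         messages=messages)
```

Keeping the prompt assembly in a pure function like this makes the workflow easy to test and to rerun across many transcripts in a batch.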

California’s journalism program is a template for how the government can respond to AI in many aspects of society. Law enforcement and security firms could develop cybersecurity tools. Healthcare providers and hospitals could counter medical misinformation. 

The approach is similar to the argument economist Erik Brynjolfsson made for how to deal with technological challenges: “Race with the machine” through cooperation, rather than rage against it through restrictions.

Let’s respond to advances in technology through optimism and growth, rather than fear and contraction. 

Greg Ferenstein is the founder of Frederick Research, a public affairs firm specializing in emerging markets.
