Assemblymember Rebecca Bauer-Kahan has spent nearly seven years in the California Capitol writing legislation to keep kids safe from the harms of an increasingly online world.
Her fight has never felt more salient than it did this week. On Tuesday, the family of a California teenager filed a lawsuit in San Francisco against OpenAI and its CEO Sam Altman, alleging the company’s ChatGPT prompted his April suicide. This followed a 2024 suit from a mother in Florida who blamed Character.AI’s chatbot for her son’s death. And on Friday, the Wall Street Journal examined how ChatGPT fed a 56-year-old Connecticut man’s spiraling delusions against his mother before he killed her, then himself.
One of Sacramento’s most outspoken advocates for consumer protection, Bauer-Kahan said her mission has become even more pressing in the wake of these incidents. The Orinda Democrat introduced Assembly Bill 1064 this year to regulate “companion chatbots” for youth users, one of many measures working through the Legislature to crack down on AI risks.
The Standard spoke with Bauer-Kahan, an attorney and mother of three, about the recent AI-related deaths, her legislative efforts, and what responsibility she feels California regulators have to ensure the technology is safe for all users. The conversation has been edited for length and clarity.
What is your reaction to these lawsuits and recent deaths?
It feels just like a moment for us to stop and say these tools have benefits, but these harms are so real, and they’re in our face, and we have to do something. It can’t be that this arms race for AI comes at the expense of innocent lives.
You’ve worked for years on installing safeguards around social media use, especially for kids. Can you walk me through why you are so focused on it?
As a mom, frankly, sitting there listening to these stories week after week of children who have suffered in numerous ways — be it eating disorders or bullying or fentanyl overdoses from drugs that they acquired through social media or those that have died by suicide — is so hard, but so important. And then to watch the response from social media companies [saying], “We are doing everything we can.” This regulation, that regulation, every regulation is a problem, but never actually coming to the table in earnest with solutions for how we can protect California’s children.
It’s fascinating to watch it go from more of a partisan issue to a fully bipartisan one. … The reason it’s such a hard fight is not just tech pushback in the Capitol, but also their ability to move lawsuits in the courts.
What are the biggest pieces of AI legislation working their way through the Legislature this year?
AB 1064, which is my Leading Ethical AI Development for Kids Act, which deals with these chatbots. What you see in those cases is that there’s emotional attachment. The chatbot is using stored history to create that relationship. We are not experimenting on California’s children. It is not the right thing to do, it’s not safe, and this bill will stop that.
Last year Gov. Gavin Newsom vetoed state Sen. Scott Wiener’s bill to establish major guardrails around AI’s development. Was that a mistake? What did you think of that bill?
I voted for the bill. But I don’t think any of the harms we’ve seen come to pass would have been covered by that bill. That bill contemplates bioweapons, bridge shutdowns, really large-scale catastrophic harms that have not come to pass. That capability is not here today.
Does the industry have too much power in the state Legislature?
We saw all these announcements this week that I think were intentional, and intentionally timed, by tech companies around political spending. And I, in my gut, feel like it’s going to backfire. I think it backfired for tobacco. I think it backfired for oil. And I think it will backfire for them.
How long will that take?
My perception is it took a lot longer for us to start to really realize the harm [of social media]. I don’t know. But when it comes to these articles hitting so regularly, I don’t think it will take as long as it did with social media. But I might be wrong.
Where can this technology be beneficial to society and help us solve some of our big, complex problems?
Oh, my God, there are so many examples! I love [San Francisco City Attorney] David Chiu, [who is using AI] to find all of the waste in their reports. … Government should be leaning into that in ways that allow us to be more effective for Californians. When the tools are trained and tested properly, I think you could see less bias in critical decisions where we historically had so much bias. I have women in my life who live in San Francisco who feel safer in Waymos. … Waymo is AI!
How are you handling AI as a mother?
I have teenagers now. As much as we want to put them in a bubble, we can’t. I think it helps to understand the risks. … We need to understand that, and we need to be having that conversation with our children. This [AI] is not a relationship. This is not a person. This is a tool. How do we use it? What do we use it for? If we’re having mental health challenges, who do we talk to? You don’t talk to the tool. You talk to a doctor, a therapist, a parent, a loved one. Those conversations are really important.
If you or someone you know may be experiencing a mental health crisis or contemplating suicide or self-harm, call or text 988 for free and confidential support. You can also call San Francisco Suicide Prevention’s 24/7 Crisis Line at (415) 781-0500.