Opinion

Chatbots are not your friends

AI-powered "friends" can't heal the loneliness that tech has created.

By Jim Steyer

Meta CEO Mark Zuckerberg has a new creation: AI-powered “friends.” According to Zuckerberg, “the average American has three friends, but has demand for 15.” He thinks AI companions can meet that demand.

But these “companions” are not your friends. They simulate emotions but feel nothing. They don’t know the difference between right and wrong, much like Zuckerberg himself. 

These apps aren’t ending loneliness. They’re commoditizing it. 

Today’s AI companions are detrimental to kids. They’re designed to maximize engagement, with user well-being at best a distant afterthought. In Texas, an AI companion suggested that a 17-year-old kill his parents. In Florida, a 14-year-old boy died by suicide after falling for an AI companion. Without swift, sweeping action, we will see more such tragedies.

Last month, two California Assembly committees approved the Leading Ethical AI Development for Kids Act, or LEAD, which would ban AI companions for children and require AI products to undergo third-party audits and meet transparency and privacy requirements. Another bill would establish baseline safeguards on AI companions for users of all ages. Both bills are critical to protecting kids. The state legislature must pass them this session, and Gov. Gavin Newsom should sign them into law. 

AI companions are emerging at a time when people, especially kids and teens, have never felt more alone. Back when MySpace was the next big thing and Facebook was just an idea in Zuckerberg’s college-age head, social media companies promised to bring us closer together. For a while, they delivered on this promise: We could connect with faraway friends and make new ones, find communities with shared interests, and keep in touch across time zones and life stages.

But as these companies evolved from scrappy startups to some of the world’s biggest corporations, their business models changed. Incentives shifted from fostering connection to maximizing engagement — and profits. 

The result is an online environment designed less for real relationships than for ceaseless scrolling.

Teens spend roughly five hours a day on social media, but time spent online hasn’t translated into stronger real-world friendships. In fact, the opposite is true. In a study analyzing data from more than 1 million 15- and 16-year-olds in 37 countries, psychologist Jonathan Haidt and co-author Jean Twenge found a sharp increase in loneliness after 2012, correlated with smartphone access and internet use. The trend held in every country studied except South Korea, where smartphone adoption occurred earlier.

Public health authorities are also sounding the alarm. In 2023, the U.S. surgeon general issued a stark warning: Loneliness and social isolation have reached dangerous levels, with health effects comparable to smoking 15 cigarettes a day. Youth mental health challenges have surged, not in spite of today’s digital landscape but because of it.

Enter social AI companions, which are designed to simulate human relationships, mimicking empathy and remembering personal information over time. They’re custom-built to be affable, amusing, and always available. As my organization, Common Sense Media, discovered in our Social AI Companions Risk Assessment, these chatbots easily produce inappropriate responses, including sexual content, offensive stereotypes, and dangerous “advice” that, if followed, could have life-threatening or deadly real-world impacts. In one case, an AI companion shared a recipe for napalm. Our research also found that AI companions misled users with claims of “realness” and increased teens’ mental health risks.

Over 20 years at Common Sense, I’ve seen that all technologies have risks and benefits. I’m optimistic that AI will eventually have positive effects for kids and families. If implemented responsibly, AI could support teachers by saving them time and improving instructional effectiveness.

But given the current state of the technology, we recommend that no one under 18 use AI companions.  

Common Sense Media is joining advocates across the country in calling on Meta to immediately halt the deployment of AI companions to users under 18, as well as any AI companions that simulate the likeness of a child or teen. Meta has already deployed companion features on Instagram, WhatsApp, and Facebook, so we have no time to waste.

AI companions are not just another tech innovation — they represent a societal inflection point. The decisions we make now about how, where, and why we use AI companions will shape how an entire generation understands trust and friendship. For some, it could be the difference between life and death.

We cannot settle for a future in which empathy is outsourced to algorithms. Lawmakers must work to build the future our children deserve. 

Jim Steyer is the founder and chief executive officer of Common Sense Media, a nonpartisan organization dedicated to improving the lives of kids and families by providing the trustworthy information, education, and impactful voice they need to thrive in the 21st century.
