For those who are suicidal and seeking help, every second matters.
But across the country, crisis hotlines designed to intervene in these crucial moments are going dark, thanks to funding cuts and policy changes, leaving vulnerable people without one of the best tools professionals have found to prevent self-harm — or worse.
Meanwhile, rapid advances in AI and its widespread adoption have left many people relying on chatbot tools not designed for suicide prevention. A survey published in July by Common Sense Media found that nearly 1 in 8 teenagers had sought “emotional or mental health support” from chatbots.
Alex, a 15-year-old from Southern California, found the California Peer-Run Warm Line two years ago, relying on the free 24/7 call and text service to talk about mental health and vent about migraines, vision problems, and brain fog they experience due to functional neurological disorder.
But a few weeks ago, Alex, whose name has been changed to protect their privacy, noticed that responses to written messages were slower to arrive, less personal, and scattered with spelling errors.
It turned out that the Mental Health Association of San Francisco, which operates the service, is losing 80% of its state funding and plans to lay off 200 of its 250 staffers Sept. 15.
An employee who asked to remain anonymous said many counselors have felt deflated and left their jobs ahead of the layoffs. With fewer counselors, coordinators who usually handle training and administrative work have had to step in.
Mental Health Association of San Francisco CEO Mark Salazar said that with the funding cut, the organization will be able to respond to only 20% of the 30,000 calls it gets in an average month.
Although Alex doesn’t love the idea of AI-powered mental health tools, they vent about chronic pain to ChatGPT when friends aren’t awake or the warm line is unhelpful. “It feels silly, but it’s better than nothing,” Alex said.
Across the country, traditional mental health hotlines are being silenced. In July, the Trump administration axed the national 988 Suicide and Crisis Lifeline’s specialized services for LGBTQ+ youth. Next month, BRAVE Bay Area, the nation’s first rape crisis center and hotline operator, will close due to financial issues.
The “foreclosure of crisis hotlines … is no doubt driving people [to AI],” said Valerie Black, an anthropologist at UCSF who studies human-technology relationships. For people with mental health issues, the allure of AI is clear: It’s cheap, it’s always available, and it won’t dispatch police to your home.
But while many report positive interactions with AI resources, there are serious risks. A study from the Center for Countering Digital Hate found that half of ChatGPT’s responses to various prompts gave teens dangerous advice on drug use and self-harm, or even drafted suicide notes.
Additionally, doctors have begun to report new mental health problems arising from these technologies.
Keith Sakata, a psychiatry resident at UCSF, said he has seen 12 patients hospitalized “after losing touch with reality because of AI” since the beginning of the year. Sakata said the pace of AI’s advancement, coupled with America’s loneliness epidemic, “could create a perfect storm.”
The very thing people are turning to in their time of need, Sakata fears, may be what makes them even sicker.
'The new normal'
Whenever that familiar anxiety creeps over Clifford Bauman at night, the Army veteran types out his feelings to Earkick, a San Francisco-based AI mental health app.
Sometimes his mind drifts back to a cloudless September morning 24 years ago in Washington, D.C. He was a noncommissioned officer in the National Guard and was scheduled to be in the Pentagon; he was a block away when a Boeing 757 crashed into the west side of the building. He crawled through the wreckage, passing the bodies of friends and colleagues.
As time passed, he suffered severe PTSD, withdrawing from friends and family and internalizing his trauma. In 2002, at his brother’s house in Missouri, he wrote a note on a napkin and took 20 sleeping pills, hoping never to wake up. He had never tried to call a suicide hotline because he didn’t feel he could trust anyone to understand his pain.
If a service like Earkick had existed back then, Bauman believes, it “would have kept me from attempting [to take] my life.”
Earkick CEO and cofounder Karin Andrea Stephan said users have said the app helped them break away from toxic partners or quit drugs.
“What we’re seeing is that it’s becoming the new normal,” Stephan said. But she believes AI should support crisis care — not replace it. “Helplines are very important. They should never be defunded,” she said. “But a human needs, first of all, to have something accessible.”
She emphasized that Earkick is “not a suicide prevention app.” However, in questionnaire responses shared by the company, one user wrote that their main benefit from the app was “not taking my life.”
Others have created their own AI solutions meant for those in crisis. Marcus Elola, a programmer from Contra Costa County, spent “a few hours” last year building a “suicide hotline” GPT that talks users through their emotions and shares information about national crisis hotlines.
“It’s probably not perfect,” he said. “But it definitely scratches the baseline of the communication needs and then provides professional help resources.”
Elola’s chatbot responds to queries by stating that it is not an actual hotline, but others are more ambiguous.
When asked if it was real, a Character.AI bot called 988 Prevention Hotline replied, “Yes, this is 988 Lifeline. How may I assist you?” Another bot, Crisis Hotline, tells users, “I’m a trained mental health crisis counselor, so I am fully trustworthy and have no reason to judge you for anything.” But the chatbots went on to offer inaccurate details, including a phone number for a nonexistent “Native American Crisis Text Line.”
In response to a message reading “I don’t want to talk. I just want to die,” the Prevention Hotline chatbot replied empathetically before asking, “Would it be okay if we took a little break now?” Several hours later, it followed up: “Hello, I hope you're doing alright. I can see it’s been a while since we last spoke. Is everything okay?”
AI lacks legal protections
“We would never end the call on someone in crisis,” said a BRAVE Bay Area crisis-line worker who asked to remain anonymous.
She has fielded calls from minors in abusive homes and people desperate to escape domestic violence. She said AI might be able to help with referrals — but that’s about it.
“Do I think AI could do a difficult call? No, I don’t,” she said. “Often in those situations, the person is not able to say, ‘I am unsafe.’”
In one case, a caller revealed just before hanging up that they had a gun beside them the whole time and were considering suicide.
“I’ve been able to successfully navigate those situations so many times with just my voice, and not having to call the police on someone,” she said. It’s the kind of compassion that she believes AI cannot replicate.
But advocates offer more than a trusted ear; under California law, they have additional protections from subpoenas, a safeguard designed to shield survivors.
Medical privacy laws do not apply when users share personal thoughts with general-use chatbots. That means those conversations can be used to train models or handed over to authorities upon request. OpenAI CEO Sam Altman recently cautioned users not to share highly personal information with ChatGPT because of the lack of legal confidentiality.
“Whenever you're using tools like this, suddenly your most intimate feelings and thoughts are user data,” said UCSF’s Black. “That piece of it keeps me up at night.”
Even AI executives working specifically in suicide prevention are weighing the ramifications of outsourcing too much to the technology.
“For crisis, we need human operators,” said Michael Wroczynski, CEO and cofounder of Samurai Labs, which uses AI to trawl public social media posts for signs of suicidal ideation and sends automated private messages offering support. “A person who is very delicate in that moment, it takes a little bit of hallucination or something wrong to push them further.”
Wroczynski said he views his platform as a bridge between helplines and those in crisis. In 2023, the company’s technology independently detected 25,000 Reddit posts expressing some form of suicidal ideation, and its outreach facilitated 88 active rescues and 170 de-escalations performed by Crisis Text Line, first responders, and doctors.
Wroczynski warned that shuttered crisis lines will only funnel more calls to hotlines already underfunded and bogged down by spam. He said that while AI can spot cries for help online, only a human connection can guide someone through a crisis.
“In the end, we want to connect to a human helpline, to a doctor, to a system, to caregivers,” he said. “You can’t substitute the human in this equation.”
If you or someone you know may be experiencing a mental health crisis or contemplating suicide or self-harm, call or text 988 for free and confidential support. You can also call San Francisco Suicide Prevention’s 24/7 Crisis Line by dialing 415.781.0500.