Warning, San Francisco: That cute necklace your coworker is wearing might be recording you.
A crop of startups is selling stealthy AI-powered recording devices and software that are becoming increasingly popular across Silicon Valley. Whether you’re in a contentious work meeting, having coffee on a first date, or enjoying the wild abandon of a house party, there’s a growing likelihood that someone is listening.
“My general sense is that we should assume we are being recorded at all times,” said Clara Brenner, a partner at venture capital firm Urban Innovation Fund. “Of course, this is a horrible way to live your life.”
Some of these devices are wearables masquerading as fashionable pendants, like those made by Limitless, or discreet lapel pins, like those by Plaud. Bee has a device that resembles a Fitbit. Others are apps that run quietly in the background of phones and laptops, like Cluely, Granola, and OpenAI’s new ChatGPT Record feature.
It can be hard to know when one is being used. Some devices flash or light up when they’re recording; others glow when they’re switched off. Most automatically generate AI transcripts and audio recordings of every conversation their owner has.
Why would anyone voluntarily wear these roving surveillance devices? It’s not necessarily to catch people saying things they’ll regret. Enthusiasts report that the recorders help them stay “present” in meetings, outsource busywork, and act as a perpetually available collaborator.
But many who work in offices where the devices are becoming the norm report that they have begun to self-censor, worried about every offhand comment being etched into an AI-generated transcript. Meanwhile, lawyers warn that it’s only a matter of time before these nonconsensual records and audio files become liabilities in court.
The always-listening crowd
For many in the tech industry, AI recording tools have become a way of life.
At a Dolores Park picnic this summer, a group of founders chatted over snacks and cans of sparkling water. A closer inspection revealed glowing LED lights from AI note-taking devices — a red burst from a silver clip at someone’s collar, a blue ray from a triangular pendant, and a white blip from yet another neckpiece.
This is the new normal, said Anith Patel, founder of the wearable AI note-taker Buddi, whose own collar accessory flashed blue. “At a picnic, you meet 10-plus people, so it's better to get it documented so you remember,” Patel said. Permission to record is “just assumed,” he added.
Nicholas Lopez said AI recording tools have given him a “second brain” for building his AI “superconsultant” out of tech incubator Founders Garage. His $159 Plaud pin helps detail conversation topics, highlights, and takeaways.
“[It’s] like having a modern-day Rockefeller Rolodex that keeps track of my network, my meetings, my entire life,” he said, referencing the late banker David Rockefeller’s custom-designed 5-foot-high filing system.
Outside of work, Lopez has started taking his Plaud to house parties as a kind of social experiment that allows him to relive nights out. “People come over and say crazy things into it,” Lopez said.
AI wearables have become so ubiquitous that people rarely comment, said Jeff Wilson, a VC and cofounder of No Cap, a Y Combinator–backed startup building an AI-powered VC investing tool.
During a coffee meeting, a Limitless pendant hung from Wilson’s chest; his Meta Ray-Bans were in their case. Beside him, Pat Santiago, founder of Accelr8, a coliving startup, had a Buddi pinned to his collar that he uses to gather intel at networking parties and pitch nights. No Cap processes the data he captures with his collar recorder to surface early-stage investments, Santiago explained. “The AI can see patterns that we can’t.”
When a reporter from The Standard added an old-fashioned audio recorder to the table, there were four devices recording the conversation. “I don’t think people care that much anymore,” Wilson said.
But some users acknowledge that Silicon Valley’s newfound recording culture has them on edge. Even a confidential chat in the back of a coffee shop may not be safe.
“I know a VC who records all in-person meetings on their watch, without telling the other meeting participants,” Brenner said. “It’s an invasion of privacy, and I seriously disapprove of it.”
For online meetings, Granola has become the AI-powered note-taking app of choice for the investor class. Instead of joining meetings publicly as a bot, like Otter.ai does, Granola runs locally on the user’s device. The app syncs directly with your calendar and begins transcribing when you have a meeting. Granola wasn’t intended to be a “stealth” app — its website recommends always asking for consent — but many in the tech world don’t bother.
“Some investors assume everyone is using one, so why be awkward and bring it up?” said Brenner, who always makes it a point to ask for permission.
San Francisco-based human experience researcher Harvin Park has seen firsthand how AI recording changes behavior. When you know the person will refer back to their notes, “it's fundamentally a different conversation,” he said.
“They often start speaking in prompts,” he said. “They talk in a way that has the AI remembering key details.” For example, “One of the important things about me is X,” or “You should remember Y.”
Just as smartphones, Slack, and microwavable salmon have forced the creation of new office policies, so too are AI recorders changing workplace etiquette. Jarad Johnson, CEO of Mostly Serious, a Missouri firm that builds websites and trains businesses to use AI, said around a quarter of his clients are drafting or have implemented AI recording policies.
“It’s a big shift,” he said.
Some of his clients have leaned in, buying AI wearables for entire teams, particularly those in sales. One opted for always-on recording (Missouri is a one-party consent state, meaning a recording is generally legal as long as one participant consents) and sends transcripts to every participant. Another company rewrote its policy to tell employees to assume that AI is recording everything.
Is this even legal?
The rules around recording permission differ by state. California’s wiretapping law requires everyone in a confidential conversation to give explicit consent before being recorded in situations where there’s a “reasonable expectation of privacy.”
“You could potentially be subject to criminal penalties if you record a conversation and all parties haven’t consented,” said Catherine Crump, a technology law expert at UC Berkeley.
Those building AI recording devices hope there’s a gray area. Patel said his Buddi device transcribes but does not record audio for that exact reason.
But Crump is unsure whether that level of hair-splitting would hold up in court, since using a transcribing tool is not so different from hiring someone to secretly listen in and take copious notes. Another open legal question is whether the tech world’s “reasonable expectations of privacy” have changed.
“If these bozos are wearing really obvious devices that clearly signal they are recording, and you speak to them, that could constitute consent, even in a private place,” said Chris Hoofnagle, faculty director of the Berkeley Center for Law & Technology.
In cases where AI-recorded transcripts are found, subsequent lawsuits will determine whether the records themselves violate the law. Then, “the burden is on the person who did the recording to prove that the other side consented,” Crump said.
For now, the responsibility for getting appropriate consent has largely fallen on users, with companies distancing themselves from legal liability. An OpenAI spokesperson told The Standard that users must get consent and obey local laws; the company encourages this by placing a grayed-out reminder — “ask before recording others” — beneath its red “record” button.
Granola cofounder Sam Stephensen said during an interview — which he kicked off by asking permission to use Granola to take notes — that an experimental feature enables users to send an automated message letting others know they’re using note-taking technology.
While many have resigned themselves to the slow erosion of personal privacy, Confident Security CEO Jonathan Mortensen is fighting back. “I’ve had so many calls where I’ve said, like, ‘Please don’t record me,’ and no one knows how to turn off the recording or kick out the note-takers,” Mortensen said.
In response, Mortensen’s team spent July building Don’t Record Me, an open-source browser plugin that can detect illicit recordings and prevent them via adversarial AI in the form of high-frequency ultrasonic sounds that are imperceptible to humans but scramble transcription tools.
“We’re gonna give it away for free,” Mortensen said. “The goal is not to be heard by AI but to be heard by humans.”
Eventually, it should work against wearables as well, and he’s also developing a mobile app.
For some, the last refuge for truly off-the-record conversations is to get stripped down and sweaty in a sauna, ironically a longtime tactic of organized crime members and others up to no good. But eventually, even that may provide little protection against prying ears.
“It will definitely work,” Patel said of his Buddi device. “At temperatures of 40 degrees Celsius [104 degrees Fahrenheit], it’s totally fine.”