The recording is incendiary.
In a muffled clip with the makings of a bombshell leak, Vice President JD Vance appears to lambast Elon Musk.
“Everything that he’s doing is getting criticized in the media,” says a voice that sounds like Vance’s. “He’s making us look bad. He’s making me look bad.”
Then, it takes on a xenophobic tone.
“He’s not even American,” the voice says. “He’s from South Africa. And he’s cosplaying as this great American leader.”
Leaked audio of JD Vance in a jealous rant… Let’s just say JD Vance is no fan of Elon Musk and thinks he makes him look bad. But the truth is—Trump and Vance don’t need Elon to make them look bad. pic.twitter.com/Y1jyZPGUOw

— Christopher Webb (@cwebbonline) March 24, 2025
The explosive recording, which surfaced Sunday on social media, fueled an already fervid discussion of real tensions between Musk and President Donald Trump’s cabinet.
It quickly reached terminal velocity, garnering millions of views on TikTok, X, Reddit, and Instagram in less than 24 hours — much to the delight of the anti-Trump crowd.
But artificial intelligence experts interviewed by The Standard are calling BS.
“We believe this audio is likely fake and possibly generated via artificial intelligence,” wrote V.S. Subrahmanian, a Northwestern University computer science professor who runs the school’s Global Online Deep Fake Detection System.
Subrahmanian and postdoc Marco Postiglione ran the recording by two trained analysts and 20 deepfake detection algorithms. The most precise algorithms flagged the recording as likely fake.
According to the Northwestern team, it’s hard to authenticate the recording by ear. But one key indicator suggests artificial manipulation: the incessant white noise in the background.
“This could indicate deliberate manipulation of the signal, potentially misleading both human listeners and automated deepfake detectors,” the Northwestern researchers wrote. “We have previously seen the use of background noise to confound automated deepfake detectors.”
The team isolated the speaker’s voice to conduct its analysis.
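For readers curious what that kind of check looks like in practice, the sketch below is illustrative only, not the Northwestern team's actual pipeline. It shows one common open-source approach: estimating how loud the constant background hiss is relative to the speech, then stripping it out with spectral gating. The file name is a placeholder.

```python
# A minimal sketch (not the researchers' actual tooling) of two steps described
# in the article: measuring the persistent background noise in a clip and
# isolating the speaker's voice before analysis.
# Assumes a hypothetical local file "vance_clip.wav" and the open-source
# librosa and noisereduce packages.
import numpy as np
import librosa
import noisereduce as nr

audio, sr = librosa.load("vance_clip.wav", sr=16000, mono=True)

# Split the clip into voiced segments; the leftover frames approximate
# the constant noise floor the researchers flagged as suspicious.
voiced_intervals = librosa.effects.split(audio, top_db=25)
voiced = np.concatenate([audio[s:e] for s, e in voiced_intervals])
mask = np.ones(len(audio), dtype=bool)
for s, e in voiced_intervals:
    mask[s:e] = False
noise = audio[mask]
if noise.size == 0:
    noise = audio  # fall back if no quiet frames are found

# Rough speech-to-background-noise ratio in decibels.
snr_db = 10 * np.log10(np.mean(voiced**2) / (np.mean(noise**2) + 1e-12))
print(f"Approximate speech-to-noise ratio: {snr_db:.1f} dB")

# Spectral-gating noise reduction: one common way to "isolate the speaker's
# voice" before handing the audio to detectors or human analysts.
cleaned = nr.reduce_noise(y=audio, sr=sr)
```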
Manjeet Rege, director of the University of St. Thomas’s Center for Applied Artificial Intelligence, agreed with the assessment.
It is “reasonable to conclude that the audio clip is not authentic and was likely created using AI-based voice synthesis technologies,” he said. Rege cited a low authenticity score returned by Hiya’s deepfake detector and the recording’s questionable origins: a TikTok video that misspelled Vance’s name as “Vence.”
The TikTok account did not respond to a request for comment.
“AI tools are super good at catching subtle things that our ears might not pick up,” Rege said. “They analyze the rhythm and pacing of audio, looking for weird pauses or sudden changes that don’t sound like real human speech.”
They also check for tiny distortions often created by synthetic voices and study changes in pitch and volume.
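As a rough illustration of those cues, the snippet below (a toy sketch, not any vendor's detector, again assuming a placeholder file name and the open-source librosa package) pulls out the pitch contour, loudness track, and pause lengths that such tools typically examine before a trained model weighs in.

```python
# Toy feature extraction illustrating the cues Rege describes: pacing, pauses,
# and pitch/volume variation. Real detectors feed features like these into
# trained models; this only computes the raw measurements.
import numpy as np
import librosa

audio, sr = librosa.load("vance_clip.wav", sr=16000, mono=True)

# Pitch track: unnaturally flat or erratic contours can hint at synthesis.
f0, voiced_flag, _ = librosa.pyin(audio, fmin=65, fmax=300, sr=sr)
pitch_std = np.nanstd(f0)

# Loudness over time: synthetic speech often shows oddly uniform energy.
rms = librosa.feature.rms(y=audio)[0]

# Pause structure: gaps between voiced intervals approximate speech rhythm.
intervals = librosa.effects.split(audio, top_db=25)
gaps = [(intervals[i + 1][0] - intervals[i][1]) / sr for i in range(len(intervals) - 1)]

print(f"Pitch variation (Hz, std dev): {pitch_std:.1f}")
print(f"Loudness variation (RMS std dev): {np.std(rms):.4f}")
print(f"Mean pause length: {np.mean(gaps):.2f}s" if gaps else "No pauses detected")
```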
The analyses, though, underscore an alarming reality: It is harder for a layperson to spot a fake audio recording than a doctored photo or video. And while some algorithms can competently estimate the probability that a recording is a deepfake, definitively proving one is fake remains exceedingly difficult.
Musk has yet to comment on the recording publicly.
Still, the expert opinions lend credence to what Vance himself wrote on X in response to a viral tweet of the recording.
“It’s a fake AI-generated clip,” Vance said on X in a repost of the clip before effectively calling its poster an idiot. “I’m not surprised this guy doesn’t have the intelligence to recognize this fact, but I wonder if he has the integrity to delete it now that he knows it’s false.”