Late last month, Nathan Calvin, the general counsel for a small AI governance nonprofit called Encode, was visiting family at his mother's house in Washington, D.C., when he got a call from someone trying to get into his apartment building across town. The person said they needed to serve him with legal papers. “I was just like, ‘What is happening?’” he recalled. “What are they here for?”
Calvin gave the caller his mother’s address and waited until 10 p.m. for them to arrive, but no one showed up. Two nerve-wracking days later, he was making dinner with his wife when he got a knock on the door from a sheriff’s deputy carrying a thick stack of papers. Calvin, who joined Encode in 2024, two years after graduating from Stanford Law School, was being subpoenaed by OpenAI. “I was just thinking, ‘Wow, they're really doing this,’” he said. “‘This is really happening.’”
The subpoena was filed as part of the ongoing lawsuits between Elon Musk and OpenAI CEO Sam Altman, in which Encode had filed an amicus brief supporting some of Musk's arguments. It asked for any documents relating to Musk’s involvement in the founding of Encode, as well as any communications between Musk, Encode, and Meta CEO Mark Zuckerberg, whom Musk reportedly tried to involve in his OpenAI takeover bid in February.
Calvin said the answer to these questions was easy: The requested documents didn’t exist.
The subpoena, variations of which have been served to at least two other AI governance groups in recent months, was part of OpenAI’s emerging attack on what it believes is a billionaire-backed conspiracy to halt its progress. In court filings and official complaints, OpenAI’s lawyers allege that groups opposed to its conversion to a for-profit company may be funded by Musk or are working with Zuckerberg. In media interviews, representatives for an OpenAI-affiliated super PAC have described a “vast force” working to slow down AI progress and steal American jobs.
And in a statement to The Standard, OpenAI lawyer Ann O’Leary indicated that the company is seeking to unmask “funders [who] hold direct equity stakes in competitors” — a likely reference not just to Musk and Zuckerberg, but to billionaire backers of archrival Anthropic, like Facebook co-founder Dustin Moskovitz and eBay founder Pierre Omidyar.
“This is about transparency in terms of who funded these organizations,” O’Leary said. “They can engage in all the spin they want, but the one thing they continue to do is duck, dodge, bob, and weave on who is really funding them. That is the more-than-million-dollar question.”
It’s not surprising that OpenAI would go on the offensive against those opposed to its aims, and there are undoubtedly deep-pocketed interests on all sides of this fight. But groups like Calvin’s say the company is bending reality to claim that all criticism of the world’s most powerful AI company is part of a billionaire-backed cabal of commercial interests — and that grassroots groups like theirs are getting caught in the middle.
“They’re in this kind of paranoid bubble,” Calvin said of OpenAI. “They’re under siege from Meta, who’s trying to poach their employees, and Elon, who seems genuinely out to get them. I think they're just seeing conspiracies and echoes of their enemies in places where [they aren’t].”
He added: “They seem to have a hard time believing that we are an organization of people who just, like, actually care about this.”
‘Very paranoid’
For years, OpenAI and Altman were seen as the adults in the room, the rational actors in the wild west of artificial intelligence. OpenAI’s founding as a nonprofit research lab in 2015 was driven by concerns about the existential risks of the technology. As recently as 2023, Altman was lobbying Congress for more regulations. He warned that the technology his company and others were creating could cause “significant harm to the world” if it went unchecked, earning praise from even the biggest of tech critics in Washington.
Rumman Chowdhury, the founder of the AI governance group Humane Intelligence, described Altman as having been “a darling of D.C.,” adding: “The Biden administration loved him.”
But the company raised eyebrows last year when it announced its plans to become a for-profit entity — a move that company representatives say is necessary to continue funding its already wildly expensive models. Opponents, meanwhile, fear a conversion would push the company to prioritize profits over safety, violating its founding principles while taking its nonprofit earnings with it.
One of the earliest and loudest of those opponents is Musk, who last March filed a lawsuit against the company he helped cofound. (Musk’s relationship with OpenAI soured after he launched a competitor, xAI, in 2023, sent off a slew of fiery tweets assailing the company, and began referring to its CEO as “Scam Altman.”) OpenAI responded with a countersuit against Musk, accusing him of a “campaign of harassment” and “bad-faith tactics” — not the least of which was the failed $97 billion takeover bid. Both suits are making their way through a federal court in California.
Around the time Musk mounted his legal fight, advocacy groups began to voice their opposition to the transition plan, too. Earlier this year, groups like the San Francisco Foundation, Latino Prosperity, and Encode organized open letters to the California attorney general, demanding greater scrutiny of OpenAI’s move to a for-profit. One group, the Coalition for AI Nonprofit Integrity (CANI), helped write a California bill introduced in March that would have blocked the transition. (The assemblymember who introduced the bill suddenly gutted it less than a month later, saying the issue required further study.)
In the ensuing months, OpenAI leadership seems to have decided that these groups and Musk were working in concert. At a meeting in June between OpenAI representatives and a group of advocacy organizations opposed to its restructuring, a representative from the company suggested that some of the groups were “funded by our competitors” and asked that they “reveal themselves,” according to two participants. One participant recalled O’Leary, the OpenAI lawyer, defusing the situation somewhat, saying she knew none of the group members were being “paid to be here.”
Catherine Bracy, the founder of Tech Equity and a participant on the call, described the suggestion that the groups were funded by OpenAI competitors as “a really unproductive way to start the conversation, and — as someone who has devoted my career to equity in tech and progressive politics — a little insulting.”
“Based on my interaction with the company, it seems they’re very paranoid about Elon Musk and his role in all of this, and it’s become clear to me that that’s driving their strategy,” she added.
OpenAI representatives seemed especially suspicious of CANI, the group behind the bill to block its for-profit transition. Lawyers for the company asked repeatedly to meet with the group, and sent letters to its attorneys asking for a list of its funders. When they got no response, the lawyers tracked down and called CANI’s president, Jeffrey Gardner, then looked through his social media following for any connections to other anti-OpenAI organizations. OpenAI subpoenaed the group in May, then filed a complaint with the California Fair Political Practices Commission, suggesting CANI might be a front for Musk.
Some of the allegations raised in the complaint are indeed odd: Gardner is a self-employed attorney and LSAT instructor living in New York, who has no record of having engaged with California officials on the restructuring — or with any public officials, anywhere, on any political issue at all — according to the complaint. Furthermore, Gardner is not listed anywhere on CANI’s website, nor as a signatory to CANI’s open letter supporting its legislation.
In every sense, he is a strange choice to lead an active AI advocacy group. But further raising OpenAI’s suspicions? The fact that he rents a home owned by an entity called Tesla Place, LLC.
“OpenAI has reason to believe that Mr. Gardner is not actively involved in the management of CANI and is simply being used as a prop in an attempt to hide the true identity of the officers and funders of CANI,” lawyers for OpenAI wrote in their complaint.
In a letter to CANI’s lobbyist, the lawyers made their suggestion even clearer: “Elon Musk has engaged in a coordinated campaign via bad-faith tactics, including multiple lawsuits, to disrupt our operations for his own personal benefit. ... We raise this context because several of your client’s public positions echo themes and language found in these ongoing efforts.”
The Standard reached out to CANI with questions about its membership and funding, neither of which is disclosed on its website. A founding member of the group, who asked not to be named, said it operates largely anonymously because its members are terrified of retribution from OpenAI. He said the group, which was officially formed this year, emerged out of conversations with individuals concerned about OpenAI’s transition to a for-profit model and interested in pursuing a legislative fix. And he insisted that they had never spoken with Musk or taken money from him or anyone associated with him.
Gardner, the person added, serves as CANI’s president because he is the member least concerned about being publicly identified. “He's an attorney who has nothing to lose,” the member said. “He really cares about the mission. That's why he's backing it.” (Reached by phone, Gardner deferred comment to CANI’s lawyers.)
The Standard also reached out to the owners of Tesla Place, LLC, who are a married couple in their 70s living in Maryland. The couple asked not to be identified but explained that the husband inherited the New York house when his father died, and that they formed an LLC to rent the property. They named the LLC after the former name of the street where the house is located: Tesla Place, in honor of famed scientist Nikola Tesla.
“If you go back into history, before those streets were numbered, one was called Edison Street, one was called Tesla,” the wife said. “We chose to honor the historic name of the street. I can assure you Elon has no idea about the LLC.”
The FPPC dismissed OpenAI’s complaint against CANI on Friday, finding there was “insufficient information provided to substantiate a lobbying reporting violation.”
But the CANI complaint was only the beginning. In a follow-up letter to the California FPPC, OpenAI mentioned Encode, noting that it shares a lobbyist and at least one group member with CANI. Days later, it sent the subpoenas to Calvin and the Encode offices, requesting documents related to Musk and Meta, as well as the names of all their funders and documentation of their work on Senate Bill 53, a California AI safety bill.
In her statement, O’Leary said the company was seeking “honesty and transparency” — the same thing OpenAI’s critics demanded of them. “We welcome legitimate debate about AI policy,” she said. “But it is essential to understand when nonprofit advocacy is simply a front for a competitive commercial interest.”
And there are some connections between Encode and Musk: The nonprofit filed one of the first amicus briefs in the billionaire’s lawsuit against OpenAI, in support of some of his claims, and has taken money from the Future of Life Institute, where Musk is an advisor. But the group was founded in 2020, long before Musk’s public feud with the company began, by a high school junior concerned about the use of algorithms in the criminal justice system.
The founder, Sneha Revanur, told The Standard that the closest she’s ever come to Elon Musk was “passing a Tesla on the highway.”
“Fundamentally, we respect OpenAI and want its mission to succeed,” Revanur said in a statement. “Our involvement here is about making sure that the most powerful technology ever is developed transparently and responsibly; any suggestion to the contrary is false and a distraction.”
OpenAI also subpoenaed Legal Advocates for Safe Science and Technology (LASST), a nonprofit that focuses on using the courts to minimize existential risk from things like artificial intelligence and man-made pandemics. The group was founded by — and largely consists of — Tyler Whitmer, a longtime commercial litigator from Connecticut who told The Standard he quit his job to pursue something more meaningful with his law degree. The subpoena against LASST similarly asks for any connections between the group and Musk, as well as any work the organization did on CANI’s failed bill. (The answer to both, Whitmer said, is none.)
Last week, someone — both Encode and LASST deny it was them — leaked the Encode subpoena to Politico, which ran a story on its contents. Whitmer said he was initially unbothered by the subpoena against his group, chalking it up to a “good-faith misunderstanding” of the situation. But when he saw the Politico story, he said, he reconsidered.
“If it was OpenAI that sent Politico that subpoena, that just seems like, transparently, a way to try to launder an allegation,” he said. “It's like a pretty underhanded way of making the allegation without actually making the allegation — and that feels really unfair.”
Fears of a ‘vast force’
Musk and Meta aren’t the only enemies OpenAI is concerned about. Last week, leaders from the company announced their participation in a super PAC called Leading the Future, which plans to take on legislators it sees as pushing excessive regulation.
The group, which has already raised $100 million from donors like the venture fund Andreessen Horowitz and OpenAI president Greg Brockman, positioned itself as the counterweight to a sprawling network of groups opposed to AI progress, which it claimed would slow U.S. innovation and steal American jobs.
According to the Wall Street Journal, the PAC is in part the brainchild of Chris Lehane, OpenAI’s vice president of global affairs. Lehane, who joined the company last year, is perhaps best known for coining the term “vast right-wing conspiracy” to dismiss the allegations against Bill Clinton during the Monica Lewinsky scandal — a line that seems to have seeped into Leading the Future’s messaging, too.
In a statement to the Journal, representatives from the PAC decried a “vast force out there that’s looking to slow down AI deployment, prevent the American worker from benefiting from the U.S. leading in global innovation and job creation, and erect a patchwork of regulation.” Leading the Future, they said, “is going to be the counterforce going into next year.”
In an interview with The Washington Post, PAC representatives made it clear that this “vast force” included the effective altruist movement, or “EAs” — a collection of tech enthusiasts who believe that AI poses an existential risk to humanity. The movement has poured hundreds of millions of dollars into AI policy work and made significant inroads in Washington under the last administration. The RAND Corporation, which is helmed by a prominent effective altruist, was deeply involved in the drafting of the Biden Administration’s executive order on AI; one biosecurity researcher described the group’s reach as “an epic infiltration” of the capital.
Chief among the movement’s proponents is Dustin Moskovitz, the billionaire Facebook cofounder who has dumped millions into D.C. think tanks and research organizations pushing the idea that unchecked AI could be our doom. Moskovitz’s Open Philanthropy foundation has given at least $15 million to RAND and even funded a Senate fellowship program that placed staffers at the Department of Defense, the Department of Homeland Security, and other key offices, according to Politico.
Another tech billionaire occasionally involved in these conversations is eBay founder Pierre Omidyar, a major antitrust advocate and critic of big tech. His collection of nonprofits and investment vehicles, the Omidyar Network, pledged $30 million to fund the “inclusive and responsible development of generative AI” in 2023, and recently endorsed a slate of 17 bills regulating AI in California.
Of particular interest to OpenAI is the fact that both Omidyar and Moskovitz are investors in Anthropic — an OpenAI competitor that claims to produce safer, more steerable AI technology. And both men are also interwoven in the fight against OpenAI’s transition to a for-profit: The Omidyar Network signed on to some of the early letters to California’s AG, and has issued grants to other signatories, including Encode and the Tech Oversight Project. Moskovitz’s Open Philanthropy foundation, meanwhile, has given to a raft of groups connected to signatories to the open letters, including the Effective Ventures Foundation, where Whitmer, the LASST founder, once worked.
In her statement, O’Leary questioned whether organizations backed by figures like Moskovitz and Omidyar could be seen as impartial.
“Groups backed by competitors often present themselves as disinterested public voices or ‘advocates’, when in reality their funders hold direct equity stakes in competitors in their sector - in this case worth billions of dollars,” she said. “Regardless of all the rhetoric, their patrons will undoubtedly benefit if competitors are weakened.”
But the two billionaires are far from aligned on their vision for AI regulation: Moskovitz wants to prioritize avoiding existential risk, while Omidyar’s network has explicitly stated it finds issues like racial bias, deepfakes, and protecting intellectual property more pressing. And both say their stakes in Anthropic do not guide their policy work: Moskovitz, a Series A investor in the company, has since donated his shares to a nonprofit unaffiliated with Open Philanthropy; the Omidyar Network split some 50,000 shares — currently worth about $8 million — with two other nonprofits.
In a statement, a spokesperson for Open Philanthropy pointed the finger back at OpenAI.
“Reasonable people can disagree about the best guardrails to set for emerging technologies, but right now we’re seeing an unusually brazen effort by some of the biggest companies in the world to buy their way out of any regulation they don’t like,” the spokesperson said. “They’re putting their potential profits ahead of U.S. national security and the interests of everyday people.”
‘A strategic opportunity’
The groups OpenAI is targeting with subpoenas say the company’s crusade against them is equally misplaced. Both LASST and Encode have spoken out against Musk and Meta — the entities OpenAI is accusing them of being aligned with — and advocated against their aims: Encode recently filed a complaint with the FTC about Musk’s AI company producing nonconsensual nude images; LASST has criticized the company for abandoning its structure as a public benefit corporation. Both say they have not taken money from Musk nor talked to him. “If anything, I’m more concerned about xAI from a safety perspective than OpenAI,” Whitmer said, referring to Musk’s AI company.
Both Whitmer and Calvin say they understand OpenAI’s paranoia, and think higher-ups there could be legitimately misguided about their organizations’ goals and funders. But Bracy, the founder of Tech Equity, had a less forgiving explanation: Some staffers might truly believe Musk and other billionaires are behind the entire resistance, she said, but there is also “a strategic opportunity in making Musk the villain and setting OpenAI up as the good guy.”
“I think some of the staffers have conveniently bought into it because it gives them some sense of the moral high ground,” she said.
OpenAI's push to highlight possible conflicts of interest from its critics has already had some silencing effects. The Omidyar Network has stopped signing on to letters to the state attorneys general, acknowledging the appearance of a conflict of interest due to its Anthropic holdings, and has removed from its website a list of the AI organizations it funds. A spokesperson for the network said its Anthropic holdings had “zero bearing in our engagement with the coalition,” but that they felt it was “important to remove any doubt about the coalition’s motives.”
CANI, meanwhile, has not posted on Instagram, X, or Facebook since the bill it backed mysteriously disappeared in April. The group’s lead spokesperson, Becky Warren, left after her firm was acquired by an influential consultancy where an OpenAI adviser works. The group’s new PR adviser, Moffat, told The Standard she is working for free.
Encode has fired back in court, filing a rebuttal to the subpoena that calls its requests overly broad and irrelevant, and noting it does not have any documents related to Musk or Meta. But the strain of fighting the court battle has worn on the organization, which has lately been directing its efforts toward fighting a Congressional proposal for a 10-year moratorium on state-level AI regulation.
“It’s taxing for a small organization, and me as an individual, to be spending a lot of my time dealing with lawyers and responding to reporters about these allegations that are silly and false,” Calvin said.
“Insofar as the goal was to make [our work] harder,” he added, “I'm not gonna lie and say it didn’t have any success.”