Silicon Valley leaders including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon caused a stir online this week with their comments about groups promoting AI safety. In separate cases, they argued that certain AI safety advocates are not as virtuous as they appear and are acting either in their own interests or in the interests of billionaire puppet masters behind the scenes.
AI safety groups that spoke to TechCrunch say the claims by Sacks and OpenAI are Silicon Valley’s latest attempt to intimidate its critics, but hardly the first. In 2024, some venture capital firms spread rumors that SB 1047, a California AI safety bill, would send startup founders to prison. The Brookings Institution called the rumor one of many “misrepresentations” of the bill, but Governor Gavin Newsom ultimately vetoed it anyway.
Whether or not Sacks and OpenAI intended to scare critics, their actions have sufficiently spooked several AI safety advocates. Many nonprofit executives reached by TechCrunch last week asked to speak on condition of anonymity to spare their groups from retaliation.
The controversy underscores Silicon Valley’s growing tension between building AI responsibly and building it into a massive consumer product—a theme that my colleagues Kirsten Korosec, Anthony Ha, and I unpack in this week’s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots and OpenAI’s approach to eroticism in ChatGPT.
On Tuesday, Sacks wrote a post on X arguing that Anthropic — which has raised alarms about AI’s potential to contribute to unemployment, cyberattacks, and catastrophic harm to society — is simply fearmongering to get laws passed that would benefit it and bury smaller startups in paperwork. Anthropic was the only major AI lab to endorse California Senate Bill 53 (SB 53), a bill requiring safety reporting from large AI companies, which was signed into law last month.
Sacks was responding to a viral essay by Anthropic co-founder Jack Clark about his fears of artificial intelligence. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, I found it to be a genuine account of a technologist’s reservations about his products, but Sacks didn’t see it that way.
Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned itself “consistently as an enemy of the Trump administration.”
Also this week, OpenAI’s chief strategy officer, Jason Kwon, wrote a post on X explaining why the company sent subpoenas to AI safety nonprofits such as Encode, which advocates for responsible AI policy. (A subpoena is a court order requiring documents or testimony.) Kwon said that after Elon Musk sued OpenAI — over concerns that the ChatGPT maker has strayed from its nonprofit mission — OpenAI found it suspicious that several organizations also raised objections to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits publicly spoke out against OpenAI’s restructuring.
“This raised transparency questions about who was funding them and whether there was any coordination,” Kwon said.
NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits critical of the company, asking for their communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.
A prominent AI safety leader told TechCrunch that there is a growing rift between OpenAI’s governance team and its research organization. While OpenAI’s safety researchers frequently publish reports on the risks of AI systems, OpenAI’s policy unit lobbied against SB 53, saying it would prefer uniform rules at the federal level.
OpenAI’s head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.
“With what is possibly a risk to my entire career, I would say: this doesn’t seem great,” Achiam said.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not received a subpoena from OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. He argues that this isn’t the case, and that much of the AI safety community is quite critical of xAI’s safety practices, or lack thereof.
“On OpenAI’s part, this is intended to silence critics, to intimidate them, and to deter other nonprofits from doing the same,” Steinhauser said. “For Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”
Sriram Krishnan, the White House’s senior policy adviser for AI and a former a16z general partner, weighed in this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to “real-world people who are using, selling, adopting AI in their homes and organizations.”
A recent Pew survey found that about half of Americans are more worried than excited about artificial intelligence, but it’s unclear exactly what worries them. Another recent study went into more detail and found that US voters care more about job losses and deepfakes than about the catastrophic risks posed by AI, on which the AI safety movement is largely focused.
Addressing these safety concerns could come at the expense of the AI industry’s rapid growth — a trade-off that worries many in Silicon Valley. With AI investment propping up a large share of America’s economy, the fear of overregulation is understandable.
But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-focused groups may be a sign that they’re working.