New lawsuit reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared relationship kaput

Dario Amodei, co-founder and chief executive officer of Anthropic

Anthropic filed two sworn statements in a California federal court late Friday afternoon, pushing back on the Pentagon’s claim that the AI company poses an “unacceptable risk to national security” and arguing that the government’s case is based on technical misunderstandings and allegations that were never actually raised during the months of negotiations that preceded the dispute.

The statements were filed with Anthropic’s response brief in its lawsuit against the Department of Defense and come ahead of a hearing this coming Tuesday, March 24, before Judge Rita Lin in San Francisco.

The dispute can be traced back to late February, when President Trump and Defense Secretary Pete Hegseth publicly stated that they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.

The two people who submitted the statements are Sarah Heck, Anthropic’s head of policy, and Thiyagu Ramasamy, the company’s head of public affairs.

Heck is a former National Security Council official who worked in the White House during the Obama administration before moving to Stripe and then Anthropic, where she runs the company’s government relations and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and Under Secretary Emil Michael.

In her affidavit, Heck calls out what she describes as a central falsehood in the government’s filings: that Anthropic demanded some sort of approval role over military operations. That claim, she says, is simply not true. “At no time during Anthropic’s negotiations with the department did I or any other Anthropic employee state that the company wanted that kind of role,” she wrote.

She also claims that Pentagon concerns about Anthropic potentially disabling or altering its technology mid-operation were never raised during negotiations. Instead, she says, the concern surfaced for the first time in the government’s court filings, giving Anthropic no opportunity to respond.


Another detail in Heck’s statement that is sure to draw attention is that on March 4 — the day after the Pentagon finalized its supply chain risk designation against Anthropic — Under Secretary Michael emailed Amodei to say the two sides were “very close” on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on weapons use and mass surveillance.

The email, which Heck attached as an exhibit to her affidavit, is worth reading along with what Michael said publicly in the days that followed. On March 5, Amodei released a statement saying the company had had “productive conversations” with the Pentagon. The day after that, Michael wrote on X that “there is no active War Department negotiation with Anthropic.” A week after that, he told CNBC that there was “no chance” of renewed talks.

Heck’s point seems to be: If Anthropic’s position on these two issues is what makes it a national security threat, why did the Pentagon’s own official say that the two sides were nearly aligned on exactly these issues right after the designation was finalized? (She stops short of saying the government used the designation as a bargaining chip, but the timeline she lays out leaves the question hanging.)

Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he is credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer.

His statement addresses the government’s claim that Anthropic could theoretically disrupt military operations by disabling the technology or otherwise changing how it behaves, which Ramasamy says is not technically possible. By his account, once Claude is deployed into a government-secured, “air-gapped” system operated by a third-party vendor, Anthropic has no access to it; there is no remote switch, no backdoor and no mechanism to push unauthorized updates. Any kind of “operational veto” is a fiction, he suggests, explaining that a change to the model would require the Pentagon’s explicit approval and action to deploy.

Anthropic, he says, can’t even see what public users type into the system, let alone extract that data.

Ramasamy also disputes the government’s claim that Anthropic’s employment of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone the U.S. government’s security clearance process — the same background checks required for access to classified information — and adds in his statement that, “to my knowledge,” Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.

Anthropic’s lawsuit alleges that the supply chain risk designation — the first ever applied to a U.S. company — amounts to government retaliation for the company’s publicly stated views on AI security, in violation of the First Amendment.

The government, in a 40-page filing earlier this week, completely rejected that framing, saying Anthropic’s refusal to allow all legitimate military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call and not a punishment for the company’s views.
