The trap that Anthropic built for itself

On Friday afternoon, just as this interview was underway, a news alert flashed across my computer screen: The Trump administration had cut ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth had invoked a national security law to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s technology to be used for mass surveillance of US citizens or for autonomous armed drones that could select and kill targets without human input.

It was a remarkable chain of events. Anthropic is at risk of losing a contract worth up to $200 million and could be barred from working with other defense contractors after President Trump posted on Truth Social directing all federal agencies to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court.)

Max Tegmark has spent the better part of a decade warning that the race to build ever more powerful AI systems is outstripping the world’s ability to control them. The MIT physicist founded the Future of Life Institute in 2014 and helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development.

His view of Anthropic’s crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament. Tegmark’s argument begins not with the Pentagon but with a decision made years earlier, a choice shared across the industry to resist regulation. Anthropic, OpenAI, Google DeepMind, and others have long promised to manage themselves responsibly. Yet just this week, Anthropic dropped the central tenet of its own safety pledge: its promise not to release increasingly powerful AI systems until the company was confident they wouldn’t cause harm.

Now, in the absence of regulations, there is little to protect these players, says Tegmark. Here’s more from that interview, edited for length and clarity. You can hear the entire conversation this coming week on TechCrunch’s StrictlyVC Download podcast.

When you just saw this news about Anthropic, what was your first reaction?

The road to hell is paved with good intentions. It’s so interesting to think back to a decade ago when people were so excited about how we were going to do artificial intelligence to cure cancer, to increase the prosperity of America and make America strong. And here we are now, with the US government mad at this company for not wanting artificial intelligence to be used for domestic mass surveillance of Americans, nor for wanting to have killer robots that can autonomously — without any human input whatsoever — decide who gets killed.

Anthropic has staked its entire identity on being a safety-first AI company, yet it partnered with defense and intelligence agencies [dating back to at least 2024]. Do you think that is contradictory at all?

It is contradictory. If I may give a slightly cynical take on this: yes, Anthropic has been very good at marketing itself as being about safety. But if you actually look at the facts instead of the claims, you’ll see that Anthropic, OpenAI, Google DeepMind, and xAI have all been very vocal about how much they care about safety. Yet none of them has come out in support of binding safety regulation of the kind we have in other industries. And all four of these companies have now broken their own promises. First we had Google with its big slogan, “Don’t be evil.” Then they dropped it. Then they dropped a longer-standing commitment that basically promised to do no harm with AI, so that they could sell artificial intelligence for surveillance and weapons. OpenAI just dropped the word safety from its mission statement. xAI shut down its entire safety team. And now, earlier this week, Anthropic dropped its most important safety commitment: the promise not to release powerful AI systems until it was sure they wouldn’t cause harm.

How did companies that made such prominent safety commitments end up in this position?

All of these companies, notably OpenAI and Google DeepMind, but also to some extent Anthropic, have persistently lobbied against regulation of AI, saying, ‘Just trust us, we’ll regulate ourselves.’ And they have lobbied successfully. So right now we have less regulation of AI systems in America than of sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches. I’m going to sell AI girlfriends to 11-year-olds, even though they’ve been linked to suicide in the past. And then I’m going to release something called superintelligence that might bring down the US government, but I have a good feeling about it’ — nobody comes to inspect anything.

So there is food safety regulation, but no AI regulation.

And I feel like all of these companies really share the blame for that. Because if they had taken all these promises they made back in the day about being so safe and good, gotten their act together, and then gone to the government and said, ‘Please take our voluntary commitments and turn them into American law that binds even our sloppiest competitors,’ this would have happened. Instead, we are in a complete legislative vacuum. And we know what happens when corporations operate with impunity: you get thalidomide, you get tobacco companies pushing cigarettes on children, you get asbestos causing lung cancer. So it’s kind of ironic that their own resistance to laws saying what is and isn’t okay with AI is now coming back to bite them.

There is no law right now against building AI to kill Americans, so the government can simply demand it. If the companies themselves had come out earlier and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot.

The companies’ counterargument is always the race with China – if American companies don’t do such and such, Beijing will. Does that argument hold?

Let’s analyze it. The most common talking point from the AI companies’ lobbyists, who are now better funded and outnumber the lobbyists from the fossil fuel industry, the pharmaceutical industry, and the military-industrial complex combined, is to respond to any proposed regulation with ‘But China.’ So let’s look at it. China is moving to ban AI boyfriends outright. Not just age limits; they’re looking at banning all anthropomorphic artificial intelligence. Why? Not because they want to please America, but because they feel this is destroying Chinese youth and making China weak. Of course, it makes American youth weak too.

And when people say we need to race to build superintelligence so we can win against China – when we actually don’t know how to control superintelligence so the default result is humanity losing control of Earth to alien machines – guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping will tolerate a Chinese AI company building something that will overthrow the Chinese government? No way. It is also obviously very bad for the US government if it is overthrown in a coup by the first US company to build superintelligence. This is a national security threat.

It’s compelling framing — superintelligence as a national security threat, not an asset. Do you see that view gaining ground in Washington?

I think if people in the national security community listen to Dario Amodei describe his vision — he’s given a famous speech where he says we’ll soon have a country of geniuses in a data center — they might start thinking, ‘Wait, did Dario just use the word country? Maybe I should put that country of geniuses in a data center on the same threat list I keep an eye on, because it sounds threatening to the US government.’ And I think, pretty quickly, enough people in the US national security community will realize that uncontrollable superintelligence is a threat, not a tool. This is completely analogous to the Cold War. There was a race for dominance, economic and military, against the Soviet Union. We Americans won it without ever running the second race, which was to see who could make the most nuclear craters in the other superpower’s territory. People realized it was just suicide. Nobody wins. The same logic applies here.

What does all this mean for the pace of AI development more broadly? And how close do you think we are to the systems you describe?

Six years ago, almost every AI expert I knew predicted that we were decades away from having AI that could master human-level language and knowledge — maybe 2040, maybe 2050. They were all wrong, because we already have it now. We’ve seen AI develop quite rapidly from high school level to university level to PhD level to, in some fields, university professor level. Last year, AI won a gold medal at the International Mathematical Olympiad, which is about as difficult as human tasks get. I co-authored a paper with Yoshua Bengio, Dan Hendrycks, and other top AI researchers just a few months ago in which we provided a rigorous definition of AGI. By that measure, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it might not be that long.

When I lectured to my students at MIT yesterday, I told them that even if it takes four years, it means that by the time they graduate, they may not be able to get a job. It is certainly not too early to start preparing for it.

Anthropic is now blacklisted. I’m curious to see what happens next. Will the other AI giants stand by and say, ‘We don’t want to do this either’? Or does someone like xAI raise their hand and say, ‘Anthropic didn’t want that contract; we’ll take it’? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]

Last night Sam Altman came out and said he stands with Anthropic and shares the same red lines. I admire him for having the courage to say that. Google hadn’t said anything when we started this interview. If they just stay silent, I think it’s incredibly embarrassing for them as a company, and many of their employees will feel the same. We haven’t heard anything from xAI yet either. So it will be interesting to see. Basically, this is a moment where everyone has to show their true colors.

Is there a version of this where the result is actually good?

Yes, and that’s why I’m actually optimistic in a weird way. There is so obviously an alternative here. If we just start treating AI companies like any other company and end their impunity, they would clearly have to do something like a clinical trial before releasing something so powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the benefits of AI, without the existential angst. That’s not the path we’re on right now. But it could be.
