While Washington’s break with Anthropic exposed the complete lack of coherent rules for artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far refused to produce: a framework for what responsible AI development should actually look like.
The Pro-Human statement was completed before last week’s Pentagon-Anthropic standoff, but the collision between the two events was not lost on anyone involved.
“There is something quite remarkable that has happened in America over the last four months,” Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, told this publication. “Suddenly polls [are showing] that 95% of all Americans oppose an unregulated race for superintelligence.”
The newly published document, signed by hundreds of experts, former officials and public figures, opens with the no-nonsense observation that humanity is at a crossroads. One path, which the statement calls the “race to replace,” leads to people being displaced first as workers, then as decision-makers, as power falls to unaccountable institutions and their machines. The other leads to AI that massively expands human potential.
The latter scenario hinges on five key pillars: holding people accountable, avoiding concentration of power, protecting the human experience, preserving individual freedom, and holding AI companies legally accountable. Among its more muscular provisions are an outright ban on the development of superintelligence until there is scientific consensus that it can be done safely and genuine democratic buy-in; mandatory off-switches on powerful systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or shutdown resistance.
The release of the statement coincides with a period that makes its urgency much easier to understand. Last Friday, Defense Secretary Pete Hegseth designated Anthropic — whose AI already runs on classified military platforms — a “supply chain risk,” a label usually reserved for firms with ties to China, after the company refused to give the Pentagon unrestricted use of its technology. Hours later, OpenAI finalized its own agreement with the Department of Defense, one that legal experts say will be difficult to enforce in any meaningful way. What it all laid bare is how costly congressional inaction on AI has become.
As Dean Ball, a senior fellow at the Foundation for American Innovation, told The New York Times afterward, “This is not just a dispute over a contract. This is the first conversation we’ve had as a country about controlling AI systems.”
When we spoke, Tegmark reached for an analogy most people can understand. “You never have to worry that a drug company is going to release some new drug that causes massive harm before people figure out how to make it safe,” he said, “because the FDA won’t allow them to release anything until it’s safe enough.”
Washington turf wars rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to break the current stalemate. The statement calls for mandatory testing of AI products before deployment — especially chatbots and companion apps aimed at younger users — covering risks including increased suicidal thoughts, worsening mental health conditions, and emotional manipulation.
“If some creepy old man texts an 11-year-old pretending to be a young girl and tries to talk this boy into killing himself, the guy could go to jail for that,” Tegmark said. “We already have laws. It’s illegal. So why is it any different if a machine does it?”
He believes that once the principle of pre-release testing is established for children’s products, the scope will almost inevitably expand. “People will come and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”
It is no small thing that former Trump adviser Steve Bannon and Susan Rice, President Obama’s national security adviser, have signed the same document – along with former Joint Chiefs Chairman Mike Mullen and progressive faith leaders.
“What they agree on, of course, is that they are all human,” says Tegmark. “If it comes down to whether we want a future for humans or a future for machines, of course they’ll be on the same page.”
