
From $1 Deal to Banishment: The Full Story of Anthropic’s Dramatic Fallout with the U.S. Government

February 28, 2026 | By @mritxperts

What began as one of the most ambitious AI-government partnerships in American history ended — at least for now — in a stunning public confrontation between one of Silicon Valley’s most safety-conscious AI companies and the most powerful government on earth. In just six months, Anthropic went from offering Claude to the entire U.S. federal government for $1 to being labeled a “supply chain risk to national security” by the Secretary of Defense.

Here is the full story.


The Beginning: A $1 Deal That Made History

On August 12, 2025, Anthropic made a headline-grabbing announcement: it would offer Claude for Enterprise and Claude for Government to all three branches of the U.S. government — the executive, legislative, and judicial — for just $1 per agency for a year.

The deal, structured through a OneGov agreement with the General Services Administration (GSA), was unprecedented in scope. Every federal civilian agency, congressional office, and federal court would have access to Anthropic’s frontier AI model, including the FedRAMP High-certified Claude for Government platform designed for sensitive, unclassified work.

“America’s AI leadership requires that our government institutions have access to the most capable, secure AI tools available,” Anthropic CEO Dario Amodei said at the time.

The move was clearly strategic. Competition for government AI contracts had grown fierce. OpenAI had made a similar $1 offer just weeks earlier, and both Google DeepMind and Elon Musk’s xAI were also vying for federal business. Anthropic, backed by billions in investment and valued at around $380 billion, was positioning itself as the responsible, safety-first alternative — the AI partner the government could trust.

That trust was about to be tested.


Building a Defense Portfolio

Even before the $1 deal, Anthropic had been quietly cementing its status as a serious player in national security AI.

In July 2025, the U.S. Department of Defense awarded contracts of up to $200 million each to Anthropic, OpenAI, Google DeepMind, and xAI to “prototype frontier AI capabilities that advance U.S. national security.” By the end of 2025, Anthropic had become the only AI company whose models were actively deployed on classified networks, operating through a partnership with data analytics giant Palantir. It had reportedly become the first AI firm ever cleared to handle classified information.

At Lawrence Livermore National Laboratory alone, 10,000 scientists and researchers were using Claude daily to accelerate scientific discovery. The District of Columbia’s Department of Health had deployed Claude to help residents access health services in multiple languages. Anthropic’s footprint in government was growing rapidly.

Then, in January 2026, something changed.


The Venezuela Incident and a Line in the Sand

According to reports from the Wall Street Journal and Axios, Anthropic’s Claude systems were used — through its contract with Palantir — during a U.S. military operation in January 2026 that resulted in the capture of Venezuelan President Nicolás Maduro. Exactly how the AI was used remains unclear, but the incident alarmed Anthropic’s leadership.

CEO Dario Amodei wrote to the Pentagon reiterating two “bright red lines” the company had maintained since its founding: Claude would not be used for mass domestic surveillance of U.S. persons, and it would not be deployed in fully autonomous lethal weapons systems. These were not new positions — they were baked into Anthropic’s usage policies and, crucially, into the terms of its Defense Department contract, which the Pentagon had originally agreed to.

But the political winds in Washington had shifted. Defense Secretary Pete Hegseth had issued a memo in January directing that all Pentagon AI contracts incorporate standard “any lawful use” language within 180 days — a direct collision with Anthropic’s restrictions.


Hegseth’s Ultimatum

The dispute came to a head in late February 2026. On Tuesday, February 24, Hegseth met with Amodei directly and issued a stark ultimatum: agree by 5:01 p.m. on Friday, February 27, to allow Claude to be used for “all lawful purposes” — or face consequences.

Hegseth threatened to invoke the Defense Production Act, a powerful wartime statute that could theoretically compel Anthropic to provide its technology on the government’s terms. He also raised the possibility of labeling the company a “supply chain risk to national security” — a designation typically reserved for companies with ties to adversary nations like China or Russia.

Behind the scenes, negotiations continued. Anthropic had already made significant concessions. In December 2025 contract talks, the company had agreed to allow its AI systems to be used for missile defense and cyber defense purposes. A Pentagon undersecretary told Bloomberg that talks had been at “final stages” just before the deadline. Anthropic said every version of its proposed contract language would enable support for missile defense and similar national security applications.

But there were two lines Anthropic would not cross: mass domestic surveillance and fully autonomous weapons. On those two points, Amodei was immovable.

“These threats do not change our position,” Amodei wrote in a letter to the Pentagon on Thursday, February 26. “We cannot in good conscience accede to their request.”


The Fallout: Banned, Labeled, Replaced

The response was swift and punishing.

On Friday, February 27, President Trump posted on Truth Social calling Anthropic “woke” and “leftwing,” accusing the company of endangering troops and jeopardizing national security. He directed every federal agency to “immediately cease all use of Anthropic’s technology.” He added a personal warning: if Anthropic refused to comply, he would use “the Full Power of the Presidency to make them comply.”

The General Services Administration announced it would remove Anthropic from USAi.gov, the federal government’s centralized AI testing platform — effectively wiping away the $1 deal that had been celebrated just six months earlier.

Defense Secretary Hegseth went further. He officially designated Anthropic a “Supply-Chain Risk to National Security” — an extraordinary label with sweeping consequences. The designation means that any company doing business with the Pentagon must now prove it has no connection to Anthropic’s technology in its defense work. Given how widely Claude is used across the enterprise sector, this puts a large portion of Anthropic’s customer base at risk.

As one analyst put it: “It means that Anthropic’s existing customer base, some large portion of it might evaporate, either because they have government contracts or might want them in the future.”

The Pentagon did grant one concession: a six-month phase-out period for agencies currently relying on Anthropic’s tools, acknowledging that Claude is the only AI cleared for classified operations and that an abrupt transition could disrupt critical missions.


OpenAI Moves In

Within hours of the Anthropic announcement, Sam Altman hosted an all-hands meeting at OpenAI. By Friday evening, he announced on X that OpenAI had reached an agreement with the Pentagon.

Remarkably, the deal reportedly included some of the same protections Anthropic had been fighting for. Altman told employees the government agreed to include OpenAI’s own “red lines” in the contract — prohibiting use of AI for autonomous weapons, domestic mass surveillance, and critical decision-making without human oversight. The government also agreed that if the model refused a task, it would not force OpenAI to override it.

Altman publicly called on the Pentagon to “offer these same terms to all AI companies” — a statement that underscored the question many observers were already asking: why were terms that were acceptable coming from OpenAI unacceptable when Anthropic asked for them?


What’s at Stake

The dispute raises questions that go far beyond one company’s government contracts.

For AI safety: Anthropic has long positioned itself as the safety-first AI company — the firm that would not compromise its principles for commercial gain. Its stand against unrestricted military use is perhaps the clearest real-world test of that identity. The company held its line. Whether that proves to be a principled stand or a costly miscalculation remains to be seen.

For the AI industry: The Pentagon’s actions send a clear message to every AI company with government ambitions: comply with “any lawful use” demands, or face consequences. xAI’s Grok reportedly agreed to those terms almost immediately after receiving its $200 million contract. Now that OpenAI has negotiated a deal with safety carve-outs, the dynamic may shift — but wielding a supply-chain risk designation against a domestic AI company is unprecedented, and it alarms civil liberties advocates.

For civil liberties: The Electronic Frontier Foundation and other advocacy groups have been vocal in supporting Anthropic’s position. The two red lines at the center of this dispute — autonomous weapons and domestic mass surveillance — are not fringe concerns. They represent the difference between AI as a tool of human decision-making and AI as an autonomous instrument of lethal or authoritarian power.

For the government itself: Federal agencies now face the task of migrating away from an AI system embedded across classified and unclassified networks — within six months. That is a significant operational challenge, and it’s one the government brought on itself.


What Happens Next

As of this writing, Anthropic has not backed down. The company faces the prospect of losing not just its $200 million Pentagon contract but potentially a substantial portion of its enterprise customer base if the supply-chain risk designation is broadly enforced.

Legal challenges are possible. Some experts believe the use of a supply-chain risk designation against a domestic company with no foreign adversary ties could be legally contested. The Defense Production Act threat, meanwhile, raises profound questions about the limits of executive power over private AI companies — questions that courts have not yet had to answer.

What is certain is that this dispute has changed the landscape. It has forced every AI company to decide where its lines are. It has shown that the U.S. government is willing to use significant economic and political leverage to shape how AI is developed and deployed. And it has put AI safety — not as an abstract principle, but as a practical business and policy question — at the center of America’s national security debate.


Timeline at a Glance

July 2025: DoD awards $200M contracts to Anthropic, OpenAI, Google, xAI
August 12, 2025: Anthropic offers Claude to all 3 government branches for $1 via GSA OneGov deal
Late 2025: Anthropic becomes first AI company cleared for classified network use
January 2026: Claude reportedly used in Venezuela operation; Amodei reiterates red lines
January 9, 2026: Hegseth memo directs “any lawful use” language for all Pentagon AI contracts
February 24, 2026: Hegseth meets Amodei, issues Friday deadline
February 26, 2026: Amodei refuses in public letter
February 27, 2026: Trump orders all agencies to cease use of Anthropic; Hegseth labels it supply-chain risk; OpenAI reaches deal with Pentagon

This story is developing. The outcome of Anthropic’s legal options, the enforcement of the supply-chain risk designation, and the broader industry response are all still unfolding.
