Microsoft Sides With Anthropic Against Trump Admin’s Supply Chain Risk Designation

Decrypt

In brief

  • Microsoft backed Anthropic in court to protect billions tied to Claude and Azure.
  • The Pentagon blacklist could ripple across the entire AI contractor ecosystem.
  • Microsoft argued the DoD used a foreign-adversary security designation in an “unprecedented” way.

Microsoft has up to $5 billion invested in Anthropic, while Anthropic has committed to buy $30 billion in Azure compute under the partnership. That context makes its decision to file an amicus curiae brief in support of Anthropic’s lawsuit against the U.S. Department of Defense look less like altruism and more like financial self-defense. The brief, filed March 10 in San Francisco, argues that a temporary restraining order blocking enforcement of the Pentagon’s “supply chain risk” designation would serve the public interest. Microsoft itself is a major DoD contractor, and that designation puts its own products at risk. Defense Secretary Pete Hegseth directed that no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic—a sweep potentially broad enough to catch Microsoft’s own Copilot and Azure products, which ship with support for Claude.

The brief highlights a procedural contradiction that has received little attention in mainstream coverage: the Department of Defense gave itself a six-month phase-out period to transition away from Anthropic's tools, but applied the designation to contractors immediately, with no equivalent runway. Microsoft's lawyers called this out directly, noting that tech suppliers must now scramble to audit, re-engineer, and reprocure products on a timeline the government didn't impose on itself. Microsoft also raised an alarm that cuts to the heart of the legal dispute. The supply chain risk authority invoked—10 U.S.C. § 3252—has historically been reserved for foreign adversaries. Only one such designation has ever been issued publicly under related statutes, and that was against Acronis AG, a Swiss software firm with Russian ties. Using it against a San Francisco AI startup is, as Microsoft put it, "unprecedented."

The brief’s most pointed argument is structural. If a contract dispute between one agency and one company can trigger a national-security blacklist, then every company doing business with the federal government just inherited a new category of existential risk. Microsoft’s lawyers described an industry model built on interconnected services, where one banned component can freeze entire product lines. There’s an irony here that’s hard to ignore. Microsoft is simultaneously OpenAI’s biggest backer—with investments valued at approximately $135 billion—and now one of Anthropic’s loudest courtroom defenders. OpenAI, for its part, rushed to sign a deal with the DoD hours after the Anthropic blacklist dropped, a move that drew internal backlash and led to public acknowledgment from OpenAI CEO Sam Altman that the announcement “looked opportunistic and sloppy.” Microsoft backed both horses.

Here is a repost of Altman's statement:

We have been working with the DoW to make some additions in our agreement to make our principles very clear.

  1. We are going to amend our deal to add this language, in addition to everything else:

"• Consistent with applicable laws,…

— Sam Altman (@sama) March 3, 2026

The brief stops short of endorsing Anthropic’s specific AI safety positions on autonomous weapons and mass surveillance—the two red lines that triggered the standoff. Instead, it frames the case in terms any government contractor can understand: due process, orderly transitions, and the effects of weaponizing procurement law over policy disagreements. Microsoft’s request is a temporary restraining order, not a verdict. The tech giant wants the clock slowed down enough for the parties to negotiate—and for its own products to stay legally deployable while they do. What’s at stake goes beyond one company’s contract. If courts allow the Pentagon’s move to stand, then every AI company selling into the government just learned that safety guardrails can be reframed as national security threats. Microsoft’s brief makes clear that lesson isn’t lost on the broader tech industry—and that the company isn’t willing to learn it quietly.
