
Pentagon Declares AI Firm Anthropic a Supply Chain Risk Amid Heated Military AI Dispute

The U.S. Department of Defense has designated artificial intelligence company Anthropic as a "supply chain risk to national security," escalating a public dispute over the use of its AI models by the military. This move follows Anthropic's refusal to allow its AI, Claude, to be used for mass domestic surveillance or fully autonomous weapons, a stance that clashes with the Pentagon's demand for "all lawful uses."

Key Takeaways

  • The Pentagon has officially designated Anthropic as a "supply chain risk."

  • This designation prohibits military contractors from conducting commercial activity with Anthropic.

  • The dispute centers on Anthropic's ethical guardrails against mass surveillance and autonomous weapons.

  • President Trump has ordered all federal agencies to phase out Anthropic's technology within six months.

  • Anthropic vows to challenge the designation in court, calling it "legally unsound."

Escalating Conflict Over AI Ethics

The conflict ignited after Anthropic publicly stated its contracts should not facilitate mass domestic surveillance or the development of autonomous weapons. The company argued that such uses are incompatible with democratic values and that current AI technology cannot safely support them. The Pentagon, however, insisted on the ability to use AI for "all lawful purposes," asserting that existing laws and policies already restrict problematic uses.

Pentagon's Directive and Anthropic's Response

Secretary of Defense Pete Hegseth announced the supply chain risk designation, stating that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" effective immediately. President Trump followed with a directive for all federal agencies to cease using Anthropic's technology within six months, with a transition period for departments like the Department of War.

Anthropic has vowed to challenge the designation in court, deeming it "legally unsound" and a dangerous precedent for companies negotiating with the government. The company maintains that the supply chain risk designation, under 10 USC 3252, should only apply to direct Department of Defense contracts and not affect its other customers.

Industry Reactions and OpenAI's Deal

The dispute has polarized the tech industry. Hundreds of employees from companies like Google and OpenAI have signed an open letter supporting Anthropic. Meanwhile, xAI CEO Elon Musk sided with the Trump administration, stating "Anthropic hates Western Civilization."

In a notable development, OpenAI announced it has reached an agreement with the Department of Defense to deploy its models within classified environments, with similar prohibitions against domestic mass surveillance and autonomous weapons. This deal comes as OpenAI's CEO Sam Altman urged the DoD to extend these terms to all AI companies.

Broader Implications and Legal Challenges

Experts have raised concerns that the Pentagon's actions could have a chilling effect on the broader frontier AI industry, potentially discouraging companies from working with the government if they fear their ethical safeguards could be penalized. The designation, typically reserved for foreign adversaries, marks an unprecedented move against an American company. Legal experts suggest Anthropic is likely to pursue litigation, which could take years to resolve, potentially impacting the company's business in the interim.

Sources

  • Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute, The Hacker News.

  • Hegseth declares Anthropic a supply chain risk, restricting military contractors from doing business with AI giant, CBS News.

  • Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’, WIRED.

  • Experts raise questions and concerns about Pentagon’s threat to blacklist Anthropic amid AI spat, DefenseScoop.

  • Pentagon moves to designate Anthropic as a supply-chain risk, TechCrunch.
