US Department of Defense Tags AI Firm Anthropic as a Potential Risk to Supply Chain

In an unprecedented move, the U.S. Department of Defense has officially designated the artificial intelligence company Anthropic a supply chain risk, the first time such a label has been applied to an American technology company. The designation, typically reserved for foreign adversaries such as China’s Huawei, was confirmed on March 5, 2026.

The conflict stems from Anthropic’s refusal to grant the Pentagon unrestricted access to its Claude AI model. CEO Dario Amodei has imposed two core restrictions: the technology may not be used for mass surveillance of Americans, and it may not power fully autonomous weapons without human oversight. The Pentagon, however, demanded access for “all lawful purposes,” arguing that the company should not limit military applications.

As a result of the supply chain risk designation, defense contractors must now certify that they do not use Anthropic’s AI models in Pentagon-related work. Ironically, Claude is currently in use in U.S. military operations in Iran, where it assists with intelligence analysis and operational planning through Palantir’s Maven Smart System.

Anthropic has announced its intention to challenge the designation in court, calling it “legally unsound.” Major partners including Microsoft, Google, and Amazon have clarified that they can continue working with Anthropic on non-defense projects. The dispute has sparked a debate about government overreach and the role of private companies in setting ethical boundaries for AI in warfare.

Source: TechCrunch
