Anthropic Files Lawsuits Against US Department of Defense Over AI Chatbot Restrictions

The Financial Express

The artificial intelligence firm Anthropic has filed lawsuits against the administration of Donald Trump, challenging a decision by the United States Department of Defense to classify the company as a “supply chain risk.” The move follows a dispute over restrictions placed by the company on how its AI chatbot, Claude, can be used by the military.

The San Francisco-based AI developer approached federal courts on Monday with two separate legal challenges—one in a California federal court and another in the federal appeals court in Washington, D.C. The filings contest different elements of the Pentagon’s actions, which the company says unfairly penalise it.

The controversy intensified after the Pentagon formally labeled the company a supply chain risk last week. The designation came after a public disagreement between the government and the AI firm over whether its technology should be allowed for unrestricted military applications. According to reports, the decision effectively blocks the company from participating in certain defence-related work. Officials cited national security concerns, while the company maintains that the action goes beyond the intended scope of such designations.

“These actions are unprecedented and unlawful,” Anthropic’s lawsuit says. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”

The Defense Department declined to comment on the matter, saying it does not discuss ongoing litigation.

The company has argued that its technology should not be deployed for certain sensitive uses. Specifically, it sought limits on the use of its AI systems for mass surveillance of Americans and fully autonomous weapons.
Officials in the administration, including Defence Secretary Pete Hegseth, had reportedly insisted the company must allow “all lawful uses” of the chatbot by the military. Authorities also warned of potential penalties if those conditions were not met.

The supply chain risk label has broader implications because it is typically used to prevent foreign adversaries from infiltrating national security systems. According to reports, this is the first known instance of the designation being applied to a U.S.-based technology company.

The dispute has also drawn in other federal agencies. The company’s lawsuit names departments such as the United States Department of the Treasury and the United States Department of State after officials reportedly instructed employees to stop using the firm’s services.

Despite the legal battle, the company says it remains committed to national security cooperation while defending its business interests. Anthropic said in a statement, “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”


