Key takeaways:
- A federal judge blocked the Trump administration from labeling AI company Anthropic as a “supply chain risk” and banning federal use of its technology, citing likely unlawful and arbitrary government actions.
- The dispute arose after Anthropic restricted military use of its AI for domestic surveillance and autonomous weapons, leading to the Pentagon’s designation and a federal ban on Anthropic’s AI services.
- The ruling restores the status quo by preventing enforcement of the ban but does not require federal agencies to use Anthropic’s products; the government may appeal within seven days.
A federal judge in California has blocked the Trump administration from designating artificial intelligence company Anthropic as a “supply chain risk” to national security and from banning all federal use of its technology. U.S. District Judge Rita Lin granted the preliminary injunction on Thursday in response to a lawsuit Anthropic filed against the Department of Defense and other federal agencies. The company argued that the government’s actions constituted an “unprecedented and unlawful” attempt to punish it for exercising its First Amendment rights.
The dispute centers on the Pentagon’s designation of Anthropic as a supply chain risk, a label that effectively prohibited the Defense Department and its contractors from using Anthropic’s AI services, including its Claude chatbot system. President Donald Trump also issued an order directing all federal agencies to immediately cease using Anthropic’s technology. The government’s actions followed Anthropic’s efforts to restrict military use of its AI for domestic surveillance and fully autonomous weapons, demands the administration opposed, asserting the need to employ AI for “all lawful purposes.”
In her ruling, Judge Lin described the government’s designation as likely unlawful and arbitrary, noting that Anthropic was not given meaningful notice or an opportunity to contest the designation prior to the ban. She wrote, “The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.” The judge’s order restores the status quo by preventing enforcement of the supply chain risk label and the federal ban, but it does not compel the government to use Anthropic’s products. Agencies remain free to transition to other AI providers, provided they comply with applicable laws and regulations.
Anthropic, which is the only AI company approved for use on the Defense Department’s classified networks, filed two lawsuits—one in Northern California and another in Washington, D.C.—alleging that the government’s actions went beyond a typical contract dispute and amounted to retaliation following months of negotiations over military use restrictions. In a statement following the ruling, an Anthropic spokesperson expressed gratitude for the court’s swift action and emphasized the company’s commitment to working productively with the government to ensure safe and reliable AI benefits for all Americans. The judge’s order is stayed for seven days, allowing the administration time to appeal. The Defense Department and White House have not yet commented on the ruling.