US court declines to block Pentagon's Anthropic blacklisting for now

By Jack Queen | Wed, April 8, 2026 at 11:30 PM UTC

FILE PHOTO: Anthropic logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

NEW YORK, April 8 (Reuters) - A Washington, D.C., federal appeals court on Wednesday declined to block the Pentagon's national security blacklisting of AI company Anthropic for now, a win for the Trump administration that comes after another appeals court reached the opposite conclusion in a separate legal challenge by Anthropic.

Anthropic, developer of the popular Claude AI assistant, alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated the company a national security supply-chain risk over its refusal to remove certain usage guardrails on its products. The designation blocks Anthropic from Pentagon contracts and could trigger a government-wide blacklisting.

Anthropic executives have said the designation could cost the company billions of dollars in lost business and reputational harm.

A panel of judges of the U.S. Court of Appeals for the District of Columbia Circuit denied Anthropic's bid to pause the designation while the case plays out. The decision is not a final ruling.

An Anthropic spokeswoman said in a statement following Wednesday's ruling that the company is confident the court will ultimately agree the supply-chain risk designation is unlawful.

Acting Attorney General Todd Blanche hailed the ruling as a victory for military readiness in a social media post on Wednesday.

"Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company," Blanche said, using Trump's new name for the Defense Department.

The lawsuit is one of two Anthropic filed over Hegseth's unprecedented move, which came after Anthropic refused to allow the military to use its AI chatbot Claude for U.S. surveillance or autonomous weapons due to safety and ethics concerns.

Hegseth issued orders designating Anthropic under two different laws, and Anthropic is challenging each of them separately.

A California federal judge blocked one of the orders on March 26, saying the Pentagon appeared to have unlawfully retaliated against Anthropic for its views on AI safety.

Anthropic's designation marked the first time a U.S. company had been publicly named a supply-chain risk under obscure government-procurement statutes aimed at protecting military systems from enemy sabotage or infiltration.

In its lawsuits, Anthropic says the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company also said it was not given a chance to dispute its designation, in violation of its Fifth Amendment right to due process.

The lawsuits say the designations were unlawful, unsupported by facts and inconsistent with the military's past praise of Claude.

In a court filing, the Justice Department said Anthropic's refusal to lift the restrictions could create uncertainty within the Pentagon over how it could use Claude and risk disabling military systems during operations.

The government said its decision stemmed from Anthropic’s refusal to accept contractual terms, not its views on AI safety.

The D.C. case concerns a law that could lead to the blacklist widening to the broader civilian government following an interagency review process.

The California case deals with a narrower statute that excludes Anthropic from Pentagon contracts related to military information systems.

(Reporting by Jack Queen in New York; Editing by Noeleen Walder, Cynthia Osterman and Lincoln Feast.)

Source: “AOL Money”
