
The Pentagon cut ties with Anthropic because use of its artificial intelligence models would “pollute” the US military’s supply chain, a top War Department official claimed Thursday.
Emil Michael, the War Department’s chief technology officer, said Anthropic’s Claude AI chatbot was trained using a fundamentally different ideology from what the Pentagon wants for its systems.
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael said in an interview with CNBC.
“That’s really where the supply chain risk designation came from,” he added.
Anthropic is currently suing the Pentagon after it became the first US company to be formally labeled a “supply chain risk” – a tag typically reserved for foreign entities that effectively requires defense contractors to stop using its technology.
Michael said the designation was “not meant to be punitive” and denied Anthropic’s allegations that the Trump administration has been telling companies outside the defense sector not to work with the firm.
Trump officials had long been concerned that Anthropic had wacky ideological leanings – including its ties to the cult-like “Effective Altruism” movement and Democratic megadonors like LinkedIn co-founder Reid Hoffman.
Earlier this month, The Post exclusively reported on oddball blog posts penned by Amanda Askell, Anthropic’s in-house “philosopher” who helped craft the “soul document” that governs Claude.
The Trump administration’s rocky relationship with Anthropic reached a peak last month after the company and its CEO Dario Amodei refused to remove safeguards blocking its AI models from being used to power autonomous weapons or mass surveillance of Americans.
President Trump blasted Anthropic’s leaders as “leftwing nut jobs” while ordering all federal agencies to stop working with the firm, though he allowed a six-month transition period.
At the time, Anthropic’s Claude was the only model approved for work on the Pentagon’s classified systems. OpenAI has since struck a deal to take over the bulk of that work.
Amodei, himself a Democratic donor, responded by blasting Trump in an internal memo, claiming the Pentagon had targeted Anthropic because it hadn’t given the president “dictator-style praise.” The exec later apologized.
Anthropic’s new suit against the Trump administration claims that the supply chain risk designation and other actions by the US government are “unprecedented and unlawful.”
Meanwhile, fellow defense contractor Palantir is still using Claude for the time being – including for operations linked to the conflict with Iran, its CEO Alex Karp told CNBC.