
The Pentagon has reportedly asked Boeing and Lockheed Martin to detail their reliance on Anthropic’s Claude chatbot ahead of a Friday deadline for the AI firm to either relax its safeguards or face blacklisting.
The request, first reported by Axios, came after the Pentagon threatened last week to declare Anthropic a “supply chain risk” – a rare rebuke generally reserved for foreign firms like China’s Huawei. The designation would cancel Anthropic’s existing contracts and force other defense contractors to stop doing business with the company.
Boeing said it has no active contracts with Anthropic, while Lockheed confirmed it was contacted by the Pentagon and declined further comment, according to the report.
As The Post reported, Defense Secretary Pete Hegseth warned Anthropic boss Dario Amodei at a tense meeting earlier this week that he has until Friday at 5:01 pm ET to remove restrictions on how the US military can use its chatbot.
Hegseth said if that doesn’t happen, the Pentagon could use the Defense Production Act to effectively force Anthropic to tailor Claude for its use. Some critics have pointed out that the two threats could be seen as contradictory: the “supply chain risk” designation would sever the Pentagon’s ties to Anthropic, while the DPA would compel the company to keep supplying it.
Representatives for the Pentagon and Anthropic did not immediately return requests for comment.
Amodei reiterated that Anthropic would not support the use of its technology to enable mass surveillance of Americans or to power weapons that can fire without human oversight. He insisted the company’s red lines have never impacted a military operation.
The Tuesday meeting between Amodei and Hegseth was described as cordial despite the friction, with the defense secretary praising Claude’s capabilities even as he delivered the ultimatum.
Claude is the only chatbot currently used by the US military in classified situations. However, a senior Defense official previously told The Post that Elon Musk’s Grok AI model recently received clearance, and chatbots from other major companies are close — giving the military a plausible alternative to Claude.
Earlier this week, an Anthropic spokesperson said the firm had “continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
In July, the Pentagon awarded Anthropic a $200 million contract that included an agreement to “prototype frontier AI capabilities that advance US national security.”
Tensions between Anthropic and the Trump administration have been on the rise for months. The Post first reported in November that the company’s ties to the cult-like Effective Altruism movement and Democratic megadonors like LinkedIn cofounder Reid Hoffman were on the White House’s radar.