In a post on Truth Social, President Trump directed federal agencies to cease use of all Anthropic products after the company's public dispute with the Department of Defense. The president allowed a six-month phase-out period for departments using the products, but emphasized that Anthropic was no longer welcome as a federal contractor.
"We don't want it, we don't need it, and won't do business with them again," the president wrote in the post.
Notably, the president's post didn't mention any plans to designate Anthropic as a supply chain risk, as had previously been floated as a consequence. However, a subsequent tweet from Secretary of Defense Pete Hegseth made good on the threat.
"In addition to the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security," Secretary Hegseth wrote. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
The Pentagon dispute centered on Anthropic's refusal to allow its AI models to be used to power either mass domestic surveillance or fully autonomous weapons, restrictions that Secretary Hegseth found unduly limiting.
CEO Dario Amodei reiterated his stance in a public post on Thursday, refusing to compromise on the two points.
"Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place," Amodei wrote at the time. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
OpenAI reportedly came out in support of Anthropic's decision. Per the BBC, CEO Sam Altman sent a memo to employees on Thursday saying he shared the same "red lines" and that any OpenAI-related defense contracts would also reject uses that were "unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons."
OpenAI co-founder Ilya Sutskever, who very publicly fell out with Altman in November 2023 and has since co-founded his own AI company, also waded into the conversation on Friday, writing on X: "It's extremely good that Anthropic has not backed down, and it's important that OpenAI has taken a similar stance."
But within hours of the Trump administration ordering federal agencies to cut ties with Anthropic, OpenAI moved to fill the void, announcing a deal with the Pentagon that Altman said preserved the same core principles Anthropic had fought for: prohibitions on domestic surveillance and autonomous weapons.
According to the New York Times, OpenAI and the government began meeting about a potential tie-up on Wednesday of this week.
Indeed, there will be more twists to come.
Anthropic, OpenAI, and Google each received contract awards from the U.S. Defense Department last July. While some Google employees have come out in support of Anthropic, Google and its parent company have yet to comment.
Update: This story has been updated with additional reporting.