(Bloomberg) - The Pentagon escalated its ongoing dispute with Anthropic PBC on Thursday, making public a threat to effectively ban the artificial intelligence startup from the US military’s vast supply chain.
In a social media post, the Defense Department’s main spokesman, Sean Parnell, gave Anthropic a deadline of 5:01 pm Friday in Washington to allow the Pentagon unfettered use of the company’s Claude Gov AI tools, after Anthropic had previously insisted on some safeguards.
“This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk,” Parnell wrote. A senior Pentagon official confirmed Thursday that the Defense Department had sent its final offer to Anthropic on Wednesday.
In its discussions with the Pentagon, Anthropic has asked US officials to refrain from using its products to create weapons that autonomously target enemy combatants or conduct mass surveillance of US citizens, according to people familiar with the matter.
The Pentagon has pushed back and demanded the ability to use Claude, one of the only AI tools cleared for classified cloud work, without any restrictions from the company. The Defense Department has also threatened to use the Cold War-era Defense Production Act to use Anthropic’s software anyway.
Parnell’s X post on Thursday represented the department’s first on-the-record statement spelling out potential consequences.
The Pentagon has no interest in mass surveillance or developing “autonomous weapons that operate without human involvement,” Parnell wrote.
“We will not let ANY company dictate the terms regarding how we make operational decisions,” he continued. “They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk.”
By Iain Marlow
With assistance from Katrina Manson