The realization struck the Pentagon with a jolt, according to Emil Michael, the department’s under secretary for research and engineering. After a U.S. military operation in Venezuela earlier this year led to the capture of Nicolas Maduro, Anthropic, the AI company whose technology was deeply integrated into defense systems, asked whether its AI had been used in the raid. Anthropic characterized the inquiry as standard procedure, but the Pentagon and its contractor Palantir read it as a stark warning about a vulnerability in their AI supply chain.
Michael recalled the moment vividly: “I’m like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?” The concern rippled through the Pentagon’s leadership, underscoring a troubling reliance on a single software provider with no immediate alternative. Until recently, Anthropic’s Claude model was the only AI authorized for use in classified government settings, a reflection of its capabilities and the trust it had earned within the defense establishment.
Anthropic, a San Francisco-based startup, has publicly stated its commitment to U.S. national interests, but it maintains clear boundaries on how its technology may be used, drawing the line at mass domestic surveillance and the development of autonomous weapons. The Pentagon, for its part, insisted it would deploy the AI only in lawful contexts and refused to accept any company-imposed limits beyond what the law already requires. That disagreement ultimately led to an impasse.
The standoff culminated last week in President Donald Trump’s directive for the federal government to stop using Anthropic’s services, with the Pentagon given a six-month window for a complete phase-out. Defense Secretary Pete Hegseth went further, designating Anthropic a supply-chain risk and effectively barring defense contractors from using its AI for military-related projects. Even so, Anthropic’s AI continues to play a role in ongoing operations, notably in the U.S. war on Iran, where its rapid target-identification capabilities remain instrumental for warfighters.
Michael raised another concern during his podcast appearance: a “poisoned model.” He worried that a rogue developer could deliberately compromise the AI, rendering it ineffective, inducing purposeful hallucinations, or programming it to ignore instructions, a vulnerability that underscores the risks of relying on proprietary AI systems for critical defense applications. In response, the Pentagon has moved to diversify its AI partnerships.
The Pentagon subsequently reached an agreement with OpenAI similar to the one it previously held with Anthropic. Elon Musk’s xAI has also been brought into classified operations, and efforts are underway to add Google’s AI to the secure fold. Michael emphasized that he wants redundancy and a level playing field among providers: “I’m not biased. I just I want all of them. I want to give them all the same exact terms because I need redundancy.” He acknowledged Anthropic’s earlier deep integration within the department, attributing it to the company’s practice of providing forward-deployed engineers, a level of engagement other AI firms had not matched.

The episode underscores a broader cultural collision between the defense sector and Silicon Valley, where technological innovation, often rooted in military research, now confronts ethical questions about its use in warfare. Caitlin Kalinowski, a prominent robotics engineer at OpenAI, recently resigned, echoing Anthropic’s earlier concerns about ethical boundaries in AI deployment, particularly around surveillance and lethal autonomy.
