OpenAI Secures Pentagon Contract Hours After Trump Blacklists Rival Anthropic
Washington, 28 February 2026
OpenAI has struck a deal with the US Department of Defense just hours after President Trump branded rival AI company Anthropic a ‘supply chain risk’ and ordered all federal agencies to cease using its technology. The break came after Anthropic refused Pentagon demands to remove ethical safeguards preventing its AI from being used for domestic mass surveillance and fully autonomous weapons systems. Yet OpenAI’s agreement includes safety restrictions similar to those Anthropic had insisted upon, raising questions about the Pentagon’s selective enforcement of AI ethics standards.
Pentagon Ultimatum Forces Industry Realignment
The confrontation between the Pentagon and Anthropic came to a head on 27 February 2026, when Defense Secretary Pete Hegseth designated Anthropic a ‘Supply-Chain Risk to National Security’ [1]. The escalation followed months of negotiations that collapsed over Anthropic’s insistence on maintaining safeguards preventing its Claude AI models from being used for autonomous weapons or mass surveillance [1]. The Pentagon had issued a stark ultimatum earlier that week, demanding Anthropic remove these ethical restrictions by 17:01 EST on Friday, 28 February 2026, or face termination of its partnership and potential invocation of the Defense Production Act [3][5]. President Trump reinforced this position by directing every federal agency to ‘immediately cease’ all use of Anthropic’s technology [1], whilst labelling the company a ‘RADICAL LEFT, WOKE COMPANY’ on his Truth Social platform [2].
OpenAI Capitalises on Rival’s Downfall
Within hours of Anthropic’s blacklisting, OpenAI CEO Sam Altman announced on Friday evening that his company had reached an agreement with the Department of Defense to deploy its models on classified networks [1][4]. The timing proved particularly striking, as OpenAI’s deal incorporates safety principles virtually identical to those that had led to Anthropic’s exclusion from government contracts. Altman explicitly stated that OpenAI’s agreement includes ‘prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems’ [1][4]. The OpenAI chief noted that ‘the DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement’ [1], underscoring the apparent contradiction in the Pentagon’s selective enforcement of ethical standards. OpenAI committed to building ‘technical safeguards’ and deploying personnel to ensure model safety [1], measures that mirror Anthropic’s own proposed approach.
Industry Backlash Reveals Deep Divisions
The Pentagon’s differential treatment of the AI companies sparked significant opposition within the technology sector. More than 430 employees from Google and OpenAI signed a letter supporting Anthropic’s position, alleging that the Pentagon was attempting to coerce AI companies into agreeing to applications they deemed unethical [3]. Over 100 Google AI team employees specifically wrote to Jeff Dean, Google DeepMind’s chief scientist, expressing concerns about military access to Google’s Gemini AI for surveillance and lethal weapons [3]. The letter urged Google leadership to ‘do everything in your power to stop any deal which crosses these basic red lines’ [3]. Notably, even OpenAI’s Sam Altman voiced criticism of the Pentagon’s tactics, stating ‘I don’t personally think the Pentagon should be threatening DPA against these companies’ [3], despite his company ultimately benefiting from Anthropic’s exclusion.
Financial Stakes and Market Implications
The dispute carries substantial financial implications, with the Pentagon having awarded contracts worth up to £160 million each to Anthropic, Google, xAI, and OpenAI in summer 2025 [5]. Anthropic faced the potential loss of its £160 million contract, alongside the broader reputational damage of being designated a supply chain risk, a label typically reserved for companies from adversarial nations [6]. As of 17 February 2026, classified versions of Anthropic’s Claude AI had been available to Defense Department personnel through Amazon and Palantir [5], making the company’s sudden exclusion particularly jarring for existing users. The Defense Production Act, which the Pentagon threatened to invoke against Anthropic, grants the federal government broad authority to compel private companies to prioritise national defence needs [8]. Originally signed by President Harry S. Truman in 1950 [8], the Act has been used during wartime and national emergencies, though legal experts note it has never been employed to force a company to produce a product it deems unsafe [8].
Sources
- www.cnbc.com
- www.politico.com
- thehill.com
- www.bloomberg.com
- breakingdefense.com
- www.dw.com
- www.cbs19news.com
- www.bastillepost.com