Regulation · 31d ago

OpenAI’s “compromise” with the Pentagon is what Anthropic feared

Source: MIT Technology Review AI · Thu, 5 Mar 2026, 12:50 am UTC

AI Summary

On February 28, OpenAI announced a deal allowing the US military to use its technologies in classified settings, with CEO Sam Altman acknowledging the negotiations were "definitely rushed" and began only after the Pentagon publicly reprimanded Anthropic. OpenAI published a blog post stating the agreement prohibits use for autonomous weapons and mass domestic surveillance, citing existing laws, including a 2023 Pentagon directive on autonomous weapons and the Fourth Amendment, rather than standalone contractual prohibitions. However, Jessica Tillipman, associate dean for government procurement law studies at George Washington University, noted the published contract excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use," meaning the Pentagon can use OpenAI's technology for any lawful purpose.

Defense Secretary Pete Hegseth had issued a sharp rebuke of Anthropic on a Friday evening, calling the company's stance "a master class in arrogance and betrayal," and announced measures that could classify Anthropic as a supply chain risk and bar any Pentagon contractor from doing commercial business with the company, a threat Anthropic has said it will challenge legally. According to MIT Technology Review, Claude remains the only AI model actively deployed by the Pentagon in classified operations, including in Venezuela, and was reportedly used in strikes on Iran hours after the ban was issued, raising questions about the feasibility of a six-month phase-out in favor of models from OpenAI and Elon Musk's xAI.

OpenAI employee Boaz Barak stated that the company maintains control over the safety rules embedded in its models and will not provide the military a version stripped of safety controls, though the company has not specified how its military-use safety rules differ from those governing civilian users.

Why it matters

The Pentagon's divergent treatment of OpenAI and Anthropic signals a significant competitive and regulatory inflection point for the AI sector. Government defense contracts represent a major and growing revenue opportunity, and access to them may now hinge on a company's willingness to defer to existing law rather than impose independent ethical constraints. Anthropic's potential classification as a defense supply chain risk, if legally upheld, could have broad commercial consequences: the restriction would extend to any contractor, supplier, or partner that does business with the US military, potentially pressuring enterprise clients to reconsider their commercial relationships with the company. The episode also highlights a broader tension emerging across the AI industry between safety-focused governance frameworks and government demands for unrestricted lawful access, a dynamic likely to shape how AI companies structure future public-sector partnerships and communicate their compliance postures to both investors and employees.

Scoring rationale

This article directly covers major AI companies (OpenAI, Anthropic, xAI) negotiating classified military contracts, with significant regulatory, ethical, and competitive market implications, including Anthropic facing a potentially existential government supply-chain ban that could devastate its revenue.

78/100

Impacted tickers

ANTH (private) · OPENAI (private)

This summary was generated by AI from the original article published by MIT Technology Review AI. AIMarketWire does not provide trading advice. Always refer to the original source for complete reporting.
