Timeline of the Anthropic–U.S. Department of Defense contract negotiations

From Timelines

This is a timeline of the Anthropic–U.S. Department of Defense contract negotiations, documenting the development of collaboration, conflict, and policy disputes over the military use of frontier AI systems. It traces key events including defense contracts, disagreements over safeguards such as bans on mass surveillance and autonomous weapons, government retaliation, and broader debates about AI governance, national security, and corporate responsibility.

Full timeline

Year Event type Details
2025 (June 6) Prelude A report describes “Claude Gov,” a specialized version of Anthropic’s AI models designed for U.S. national security agencies and capable of handling classified information. Developed with government feedback, the system supports tasks such as intelligence analysis, strategic planning, and operational assistance within secure environments. The models are customized to better process defense documents and languages relevant to security operations. The initiative reflects growing competition among AI companies for defense contracts, while experts caution that AI systems may produce inaccurate analyses if not carefully supervised.[1]
2025 (July 15) Prelude The U.S. Department of Defense awards Anthropic, along with Google, OpenAI, and xAI, contracts worth up to $200 million each to develop advanced AI tools for national-security applications. The initiative aims to accelerate the Pentagon’s adoption of generative and agentic AI across mission areas, including analysis and operational support. Anthropic’s inclusion highlights its role as a major frontier-AI provider working with defense agencies. The contracts reflect growing competition among leading AI companies to supply the U.S. military with advanced artificial-intelligence capabilities.[2]
2026 (February 26) Company statement Anthropic CEO Dario Amodei states that the company strongly supports deploying AI to defend the United States and democratic allies, and notes that it has already supplied models to U.S. defense and intelligence agencies. Anthropic says it restricts access by adversaries and supports export controls to preserve democratic technological advantage. However, it refuses to allow two uses: mass domestic surveillance and fully autonomous weapons, citing democratic values and technical safety concerns. The Department of War demands unrestricted lawful use and threatens sanctions if the safeguards remain, a demand Anthropic rejects.[3]
2026 (February 27) Government action The Trump administration orders federal agencies and military contractors to cease doing business with Anthropic after the company refuses to remove restrictions on Pentagon use of its AI systems. Anthropic prohibits the use of its Claude model for autonomous weapons and mass surveillance of U.S. citizens, conditions the Pentagon rejects. Agencies are given six months to phase out Anthropic products, with the company risking being labeled a “supply chain risk.”[4]
2026 (February 27) Government action President Donald Trump states on Truth Social that Anthropic is a “radical left, woke company” and directs all federal agencies to immediately cease use of its technology.[5]
2026 (February 27) Competition OpenAI CEO Sam Altman announces that the company has reached an agreement with the U.S. Department of War to deploy its AI models on classified government networks. Altman says the department supports OpenAI’s safety principles, including prohibitions on domestic mass surveillance and maintaining human responsibility in the use of force. The agreement includes technical safeguards, deployment on cloud networks, and field deployment engineers to oversee model safety. Altman also urges the government to extend similar terms to other AI companies.[6]
2026 (February 27) Government action In a public statement, U.S. Secretary of War Pete Hegseth sharply criticizes Anthropic and its CEO Dario Amodei, accusing the company of arrogance and of attempting to impose ideological restrictions on the U.S. military’s use of AI. Hegseth says the Pentagon requires unrestricted access to AI systems for all lawful defense purposes and rejects Anthropic’s safeguards. He announces that the Department of War will designate Anthropic a national-security supply-chain risk, ordering federal agencies and military contractors to cease business with the company and transition away from its technology within six months.[7]
2026 (February 28) Company statement In a CBS News interview, Anthropic CEO Dario Amodei explains the company’s dispute with the U.S. Department of War after being labeled a national-security supply-chain risk. Amodei says Anthropic supports national defense and already provides AI to intelligence and military systems, but refuses two uses: domestic mass surveillance and fully autonomous weapons without human oversight. He argues current AI systems are unreliable for lethal autonomy and that surveillance capabilities may outpace legal safeguards. Amodei describes the Pentagon’s ultimatum as punitive but says Anthropic remains willing to cooperate under its “red line” restrictions and would continue supporting U.S. security.[8]
2026 (February 28) Commentary A publication argues that Anthropic was justified in refusing Pentagon demands to remove safeguards on its AI systems. The author argues that the company’s safeguards—banning domestic mass surveillance and fully autonomous weapons—were modest and reasonable. According to the writer, the Pentagon’s refusal to accept even limited constraints demonstrates why military institutions should not be trusted with unrestricted AI authority. The episode is cited as evidence that stronger regulation and oversight of military AI are urgently needed.[9]
2026 (February 28) Analysis An interview discusses the dispute between Anthropic and the U.S. Department of War over restrictions on military uses of artificial intelligence. Experts Kat Duffy and Amos Toh explain that the conflict reflects broader tensions between rapid military AI adoption and safety safeguards. The experts warn that escalating the conflict—such as labeling Anthropic a supply-chain risk—raises legal questions, damages global trust in U.S. technology, and highlights the need for stronger congressional oversight of military AI.[10]
2026 (February 28) Analysis A publication reports on an interview with MIT physicist Max Tegmark about the conflict between Anthropic and the U.S. government over limits on AI use. Tegmark argues the dispute reflects a broader failure by major AI companies—including Anthropic, OpenAI, and Google DeepMind—to support binding regulation. According to Tegmark, their reliance on voluntary self-governance created a regulatory vacuum that now leaves them vulnerable to political pressure. He warns that rapid AI development without strong oversight could create severe societal and security risks.[11]
2026 (March 1) Public reaction Anthropic’s standoff with the U.S. government culminates in a dramatic public response. After refusing Pentagon demands for unrestricted AI use and being labeled a “supply chain risk,” CEO Dario Amodei defends the decision in a CBS interview. That same day, Claude becomes the world’s most downloaded app, topping App Store rankings in over 20 countries. The surge reflects widespread public and industry support, while ChatGPT faces backlash, including sharp spikes in uninstalls and negative reviews.[12]
2026 (March 2) Operational use A publication reports that Claude AI was used by U.S. Central Command during airstrikes on Iran for intelligence analysis, target identification, and battle simulations, even after President Donald Trump had ordered federal agencies to phase out the company’s technology. The report highlights how deeply AI systems are embedded in military infrastructure, making rapid replacement difficult.[13]
2026 (March 2) Public reaction A report states that OpenAI faces a backlash after announcing a Pentagon deal, prompting some users to cancel ChatGPT accounts and switch to Anthropic’s Claude. Online campaigns such as “Delete ChatGPT” and “CancelChatGPT” spread across social media, criticizing OpenAI’s military partnership. At the same time, Claude rises to the top of the Apple App Store productivity rankings in the United States, Canada, and Germany.[14]
2026 (March 2) Analysis A column argues that Anthropic’s AI policies may align better with Europe than with the United States. The dispute arose after President Donald Trump criticized the company’s “woke” restrictions, which prohibit domestic mass surveillance and fully autonomous weapons. Despite political attacks and a planned government phase-out, the U.S. military reportedly continued using Anthropic’s Claude during strikes on Iran. The author suggests that Europe’s regulatory framework, particularly the EU AI Act emphasizing oversight and limits on surveillance, is more compatible with Anthropic’s safety-focused approach than the U.S. political environment.[15]
2026 (March 2) Analysis In a Stratechery analysis, Ben Thompson argues that the conflict between Anthropic and the U.S. government reflects a deeper issue about power and control over advanced AI. Anthropic seeks to restrict uses such as mass surveillance and fully autonomous weapons, while the Pentagon insists on “any lawful use.” Thompson contends that decisions about military capabilities should ultimately belong to elected governments, not private companies. If AI becomes strategically comparable to nuclear weapons, he argues, states will not tolerate independent corporate control over such technology and will assert authority over it.[16]
2026 (March 3) Government action Defense Secretary Pete Hegseth announces that the Pentagon will move to designate Anthropic as a “supply chain risk to national security,” escalating tensions after failed negotiations over military AI use. The dispute centers on Anthropic’s refusal to allow applications such as autonomous lethal weapons and mass domestic surveillance. The announcement signals an imminent cutoff of Pentagon business with the company. Anthropic rejects the move as legally unfounded and prepares to challenge the designation in court.[17]
2026 (March 5) Negotiation Dario Amodei resumes negotiations with the U.S. Department of Defense over the terms governing military access to the company’s AI models. By reopening dialogue, both sides signal a willingness to explore a compromise. The negotiations focus on defining acceptable uses of advanced AI systems while addressing concerns about national security, operational needs, and safety safeguards in defense applications.[18]
2026 (March 6) Government action The United States Department of Defense formally designates Anthropic as a “supply chain risk to national security,” prohibiting its use across military operations and requiring federal agencies and contractors to discontinue its AI systems. CEO Dario Amodei contests the decision as legally unfounded and states that the company will challenge it in court, citing potential adverse effects on U.S. AI innovation and investment.[17]
2026 (March 7) Government action The administration of Donald Trump drafts new federal guidelines regulating civilian artificial-intelligence procurement amid its ongoing dispute with Anthropic. Prepared by the U.S. General Services Administration, the proposed rules would require companies seeking federal civilian contracts to permit “any legal use” of their AI systems and to grant the U.S. government an irrevocable license to deploy them for lawful purposes. The initiative forms part of broader efforts to standardize federal procurement of AI technologies.[19]
2026 (March 9) Legal action Anthropic files a lawsuit against the executive branch of the United States and multiple federal agencies after the Department of Defense designated the company a “supply chain risk.” Anthropic alleges that the government’s action is unlawful retaliation that violates constitutional free-speech protections, and it seeks judicial invalidation of the designation rather than monetary damages.[20]
2026 (March 17) Government action The U.S. government labels Anthropic an “unacceptable” national security risk, arguing that its AI systems could be manipulated or altered in wartime, thereby undermining trust. The government denies the claims in Anthropic’s lawsuit, which alleges ideological retaliation and violations of First Amendment rights, and maintains that it is exercising standard vendor-selection discretion. The designation is expected to cost Anthropic billions of dollars and affect its customer base. The dispute remains under judicial review, with industry and civil liberties groups supporting the company.[21]
2026 (March 26) Legal ruling A U.S. federal judge blocks the attempt by the administration of Donald Trump to designate Anthropic a national-security “supply chain risk” and to prohibit federal contractors from using its technology. The ruling, issued by U.S. District Judge Rita Lin, halts a presidential directive requiring federal agencies to stop using Anthropic’s Claude AI model. Judge Lin finds that the government presented no evidence supporting the “supply chain risk” claim and that the action appears to be retaliation for the company’s public advocacy regarding ethical limits on the use of its technology.[22]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Sebastian.

Check Detail construction for full timeline in timelines, Inclusion criteria for full timeline in timelines, and Representativeness of events in timelines.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

  • FIXME

What the timeline is still missing

Timeline update strategy

See also

References

  1. Edwards, Benj (6 June 2025). "Anthropic releases custom AI chatbot for classified spy work". Ars Technica. Retrieved 4 March 2026.
  2. Brodkin, Jon (15 July 2025). "Grok's "MechaHitler" meltdown didn't stop xAI from winning $200M military deal". Ars Technica. Retrieved 4 March 2026.
  3. "Statement from Dario Amodei on our discussions with the Department of War". anthropic.com. 26 February 2026. Retrieved 4 March 2026.
  4. Gold, Hadas (27 February 2026). "Trump administration orders military contractors and federal agencies to cease business with Anthropic". CNN. Retrieved 4 March 2026.
  5. Pignol, Gil (10 March 2026). "Everyone Thinks Anthropic Took a Moral Stand. The Timeline Says Otherwise". Medium. Retrieved 20 March 2026.
  6. Altman, Sam (27 February 2026). "Post by Sam Altman on agreement with the Department of War to deploy OpenAI models on classified networks". X (formerly Twitter). Retrieved 4 March 2026.
  7. Hegseth, Pete (27 February 2026). "Statement on Anthropic and Department of War policy". X (formerly Twitter). Retrieved 4 March 2026.
  8. "Read the full transcript of our interview with Anthropic CEO Dario Amodei". CBS News. 28 February 2026. Retrieved 4 March 2026.
  9. Brown, Hayes (28 February 2026). "Anthropic was right not to trust Pete Hegseth". MSNBC (MS Now). Retrieved 4 March 2026.
  10. Hendrix, Justin (28 February 2026). "How to Think About the Anthropic-Pentagon Dispute". Tech Policy Press. Retrieved 4 March 2026.
  11. Loizos, Connie (28 February 2026). "The trap Anthropic built for itself". TechCrunch. Retrieved 4 March 2026.
  12. "Anthropic Refuses Pentagon's AI Demands, Gets Blacklisted — Then Claude Becomes the World's Most Downloaded AI App". Elephas. Retrieved 20 March 2026.
  13. V, Vismaya (2 March 2026). "Anthropic's AI Used in Iran Strikes After Trump Moved to Cut Ties: WSJ". Decrypt. Retrieved 4 March 2026.
  14. Steinschaden, Jakob (2 March 2026). "Users Cancel ChatGPT Over Pentagon Deal as Claude of Anthropic Tops App Store Charts". Trending Topics. Retrieved 4 March 2026.
  15. ten Houten, Merien (2 March 2026). "Anthropic is a better fit for Europe than for the US". IO+. Retrieved 4 March 2026.
  16. "About". Stratechery. Retrieved 4 March 2026.
  17. "Pentagon declares Anthropic a national security risk". National Today. 6 March 2026. Retrieved 20 March 2026.
  18. Ghaffary, Shirin (5 March 2026). "Anthropic's Amodei Reopens AI Discussions with Pentagon, FT Says". Bloomberg. Retrieved 6 March 2026.
  19. "Amidst the standoff with Anthropic, the United States is drafting strict new guidelines for civilian AI". MEXC News. PANews. 7 March 2026. Retrieved 28 March 2026.
  20. Hays, Kali (9 March 2026). "Anthropic sues US government for calling it a risk". BBC News. Retrieved 12 March 2026.
  21. Frenkel, Sheera (17 March 2026). "U.S. Says Anthropic Is an 'Unacceptable' National Security Risk". The New York Times. Retrieved 20 March 2026.
  22. "US District Judge blocks government ban on Anthropic AI". JURIST. 28 March 2026. Retrieved 28 March 2026.