Trader consensus on Polymarket reflects a 92.5% implied probability against Anthropic striking a deal with the Pentagon, driven by the Department of Defense's March 2026 designation of the AI firm as a supply-chain risk after protracted negotiations collapsed over usage restrictions on Claude large language models. Anthropic refused to lift safeguards prohibiting applications in autonomous weapons or domestic surveillance, prioritizing AI safety commitments that clashed with the Pentagon's demand for "all lawful purposes" access, a tension that echoes OpenAI's more permissive compromise. The blacklisting bars Anthropic from DoD contracts and subcontracting work, and an April 8 appeals court loss upheld the designation. While recent productive White House meetings under the Trump administration signal thawing relations and potential indirect influence, a direct Pentagon reversal remains unlikely absent major policy shifts or new leadership, though executive intervention could catalyze surprise negotiations.
Experimental AI-generated summary referencing Polymarket data. This is not trading advice and plays no role in how this market resolves. · Updated
$67,437 Vol.
This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by April 30, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.
A commercial agreement between Anthropic and a broader set of the US government that grants DOD employees usage of Anthropic AI models will count. Agreements or designations that merely allow Anthropic to offer its services to the DOD, without constituting an effective agreement for it to do so, will not count (e.g., inclusion of Anthropic on a Master Service Agreement or an Indefinite Delivery Indefinite Quantity contract would not count).
An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.
Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.
Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g., during a 6-month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply-chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.
The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Market Opened: Mar 6, 2026, 1:33 PM ET
Resolver
0x65070BE91...