Will Anthropic reach a deal with the Pentagon?


Yes

7% chance
Polymarket

$66,954 Vol.


Trader consensus on Polymarket reflects strong conviction that Anthropic will not strike a deal with the Pentagon, driven by a bitter public standoff over AI safety guardrails in Claude and the new Mythos large language model. After months of failed negotiations, Anthropic rejected the Department of Defense's February 2026 ultimatum demanding unrestricted military access, citing risks of autonomous weapons and mass surveillance that are core to its Constitutional AI framework. The Pentagon responded by designating Anthropic a national security supply chain risk, ordering system removals and barring future contracts, prompting Anthropic's lawsuit. A recent appeals court denial and White House talks over non-Pentagon access underscore the impasse, though a surprise policy shift or legal settlement could reopen the door before resolution.

In February 2026, the Pentagon announced it would designate Anthropic as a national security supply chain risk after Anthropic refused to remove AI safety restrictions from its acceptable use policy. Donald Trump subsequently directed all federal agencies to cease using Anthropic's technologies, with a six-month phase-out period for agencies such as the Department of Defense which are actively using Anthropic's products.

This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by April 30, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.

A commercial agreement between Anthropic and a broader set of the US government that grants usage of Anthropic AI models to DOD employees will count. However, agreements or designations which allow Anthropic to offer its services to the DOD, but do not constitute an effective agreement for Anthropic to do so, will not count (e.g., the inclusion of Anthropic on a Master Service Agreement or Indefinite Delivery Indefinite Quantity contract would not count).

An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.

Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.

Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g., during a six-month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.

The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Volume
$66,954
End date
Apr 30, 2026
Market opened
Mar 6, 2026, 1:33 PM ET


Frequently asked questions

“Will Anthropic reach a deal with the Pentagon?” is a prediction market on Polymarket with 2 possible outcomes, where traders buy and sell shares based on what they think will happen. The current leading outcome is “Yes” at 7%. Prices reflect the community's real-time probabilities. For example, a share priced at 7¢ implies that the market collectively assigns a 7% probability to that outcome. These odds change continuously. Shares of the correct outcome are redeemable for $1 each when the market resolves.

To date, “Will Anthropic reach a deal with the Pentagon?” has generated $67K in total trading volume since the market launched on Mar 6, 2026. This level of activity reflects strong engagement from the Polymarket community and ensures that the current odds are driven by a broad pool of participants. You can follow live price movements and trade on any outcome directly on this page.

To trade on “Will Anthropic reach a deal with the Pentagon?”, browse the 2 outcomes available on this page. Each outcome shows a current price representing the market's implied probability. To take a position, select the outcome you consider most likely, choose “Yes” to trade in its favor or “No” to trade against it, enter your amount, and click “Trade”. If your chosen outcome is correct at resolution, your “Yes” shares pay out $1 each. If it is incorrect, they pay out $0. You can also sell your shares before resolution.
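The price-to-probability mapping and the $1 redemption mechanics described above can be sketched in a few lines. This is a minimal illustration of the arithmetic only; the helper functions are hypothetical and are not Polymarket's actual API.

```python
# Minimal sketch of the price/probability/payout arithmetic described above.
# Hypothetical helper functions for illustration, not Polymarket's API.

def implied_probability(price_cents: float) -> float:
    """A share priced at p cents implies a market probability of p percent."""
    return price_cents / 100.0

def redemption_value(shares: int, outcome_correct: bool) -> float:
    """Shares of the correct outcome redeem for $1.00 each; others for $0."""
    return shares * (1.0 if outcome_correct else 0.0)

# A "Yes" share quoted at 7 cents implies a 7% chance.
assert implied_probability(7) == 0.07

# Buying 100 "Yes" shares at 7 cents costs $7.00; if "Yes" resolves
# correctly they redeem for $100.00, a profit of about $93.00.
cost = 100 * implied_probability(7)  # dollars paid up front
profit = redemption_value(100, outcome_correct=True) - cost
print(round(profit, 2))  # 93.0
```

Selling before resolution simply replaces the $1/$0 redemption with the prevailing share price at the time of sale.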

Traders currently price “Yes” on “Will Anthropic reach a deal with the Pentagon?” at just 7%, reflecting strong conviction toward “No”. These odds update in real time, so bookmark this page to follow how sentiment shifts.

The resolution rules for “Will Anthropic reach a deal with the Pentagon?” define exactly what must happen for each outcome to be declared the winner, including the official data sources used to determine the result. You can review the full resolution criteria in the “Rules” section on this page, above the comments. We recommend reading the rules carefully before trading, as they spell out the exact conditions, edge cases, and sources.