Will Anthropic strike a deal with the Pentagon?


Yes

8% chance
Polymarket

$67,439 Vol.


Trader consensus on Polymarket reflects a 92.5% implied probability of no deal between Anthropic and the Pentagon, driven by the February 2026 collapse of their prior $200 million Department of Defense contract over irreconcilable differences on Claude AI usage restrictions, specifically Anthropic's prohibitions on mass domestic surveillance and fully autonomous weapons. The Pentagon's subsequent supply chain risk designation, contract termination, and pivot to OpenAI, reinforced by an April 8 appeals court ruling denying Anthropic's request for an injunction, signal entrenched DoD resolve amid national security priorities. While recent White House signals of openness and CEO Dario Amodei's productive April 18 meetings hint at thawing relations, core AI safety red lines and litigation timelines make a timely compromise unlikely absent major policy shifts or executive intervention.

In February 2026, the Pentagon announced it would designate Anthropic as a national security supply chain risk after Anthropic refused to remove AI safety restrictions from its acceptable use policy. Donald Trump subsequently directed all federal agencies to cease using Anthropic's technologies, with a six-month phase-out period for agencies such as the Department of Defense which are actively using Anthropic's products.

This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by April 30, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.

A commercial agreement between Anthropic and a broader set of the US government that grants usage of Anthropic AI models to DOD employees will count. However, agreements or designations which allow Anthropic to offer its services to the DOD, but do not constitute an effective agreement to do so, will not count (e.g., inclusion on a Master Service Agreement or an Indefinite Delivery/Indefinite Quantity contract would not count).

An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.

Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.

Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g., during a six-month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.

The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Volume
$67,439
End date
Apr 30, 2026
Market opened
Mar 6, 2026, 1:33 PM ET


Frequently Asked Questions

“Will Anthropic strike a deal with the Pentagon?” is a prediction market on Polymarket with 2 possible outcomes, where traders buy and sell shares based on their assessment. The “Yes” outcome currently trades at 8%. Prices reflect the community's real-time probabilities: a share price of 8¢ means the market assigns that outcome an 8% chance. These odds shift continuously as traders react to new developments. Shares of the correct outcome can be redeemed for $1 each when the market resolves.
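The pricing rule above (price in cents equals implied probability in percent, winning shares redeem at $1) can be sketched as a small worked example. The function names here are illustrative, not part of any Polymarket API:

```python
# Minimal sketch of the Polymarket share-pricing arithmetic described above.
# A share price in cents corresponds to the market-implied probability in
# percent, and each correct share redeems for $1.00 at resolution.

def implied_probability(price_cents: float) -> float:
    """Convert a share price in cents to an implied probability (0..1)."""
    return price_cents / 100.0

def cost(num_shares: int, price_cents: float) -> float:
    """Up-front cost in dollars of buying shares at the quoted price."""
    return num_shares * price_cents / 100.0

def payout_if_correct(num_shares: int) -> float:
    """Each winning share redeems for $1.00 at resolution."""
    return num_shares * 1.00

# "Yes" trading at 8 cents implies an 8% chance.
print(implied_probability(8))        # 0.08

# Buying 100 "Yes" shares at 8 cents costs $8 and pays $100 if correct.
print(cost(100, 8))                  # 8.0
print(payout_if_correct(100))        # 100.0
```

Note the symmetry: the same 8¢ “Yes” price implies a 92¢ “No” price, which is where the roughly 92% “No” probability quoted elsewhere on this page comes from.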

As of today, “Will Anthropic strike a deal with the Pentagon?” has generated a total trading volume of $67.4K since the market launched on Mar 6, 2026. This level of activity reflects strong engagement from the Polymarket community and ensures that the current odds are shaped by a broad pool of participants. You can track live price movements and trade any outcome directly on this page.

To trade on “Will Anthropic strike a deal with the Pentagon?”, browse the 2 available outcomes on this page. Each outcome shows a current price representing the market's implied probability. To take a position, select the outcome you consider most likely, choose “Yes” to back it or “No” to bet against it, enter your amount, and click “Trade”. If your chosen outcome is correct at resolution, your “Yes” shares pay out $1 each; if it is wrong, they pay $0. You can also sell your shares at any time before resolution.

This is an open market. “Yes” on “Will Anthropic strike a deal with the Pentagon?” currently stands at just 8%. Odds update in real time as traders react to news, so bookmark this page to follow them.

The resolution rules for “Will Anthropic strike a deal with the Pentagon?” define exactly what must happen for each outcome to be declared the winner, including the official data sources used to determine the result. You can review the full resolution criteria in the “Rules” section on this page, above the comments. We recommend reading the rules carefully before trading, as they set out the exact conditions, edge cases, and sources.