OpenAI holds a commanding 99.1% implied probability of fielding the best AI model in math as of March 31, reflecting trader consensus on the dominance of its o1 reasoning model since its September 2024 release. The model's chain-of-thought capabilities delivered breakthrough benchmark scores, 94.8% on the challenging MATH dataset and 83.3% on AIME 2024, far surpassing Google DeepMind's AlphaProof, Anthropic's Claude, and others in a quiet competitive landscape. No new model release or credible evaluation has narrowed the gap in the past 30 days. The main realistic risk is an unexpected pre-deadline launch from Google or xAI with superior verified math performance, though typical development cycles make this improbable. Traders are watching the leaderboards for last-minute shifts.
Experimental AI-generated summary with Polymarket data · Updated

$437,688 Vol.

Implied probabilities:

OpenAI     99%
Google     1%
DeepSeek   <1%
Anthropic  <1%
xAI        <1%
Moonshot   <1%
Z.ai       <1%
Mistral    <1%
Alibaba    <1%
If two models are tied for the highest LiveBench Mathematics Average score at this market's check time, resolution will be based on whichever company's name, as it is described in this market group, comes first in alphabetical order.
The primary source of resolution for this market will be LiveBench’s AI leaderboard, specifically the “Mathematics Average” category, found at livebench.ai. If this resolution source is unavailable at check time, this market will remain open until the LiveBench AI leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
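The resolution rule above can be sketched in a few lines of Python. This is an illustrative reading of the rule only, not Polymarket's actual resolution code, and the companies and scores in the example are hypothetical, not real LiveBench data:

```python
# Sketch of the market's resolution rule: the winner is the company with
# the highest LiveBench "Mathematics Average" score, and an exact tie at
# the top resolves to the alphabetically-first company name.

def resolve_market(scores: dict[str, float]) -> str:
    # Sort key: score descending (negated), then name ascending,
    # so min() returns the top scorer with alphabetical tie-break.
    return min(scores, key=lambda company: (-scores[company], company))

# Hypothetical two-way tie at the top:
example = {"OpenAI": 88.4, "Google": 88.4, "DeepSeek": 81.0}
print(resolve_market(example))  # "Google" precedes "OpenAI" alphabetically
```

Note that the tie-break uses the company name as written in the market group, so how each outcome is labeled matters to the result.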
Market opened: Dec 12, 2025, 1:25 PM ET
Resolver
0x2F5e3684c...