OpenAI's GPT-5.4 Pro commands near-unanimous trader consensus, at 100% implied probability, to be the top AI model in math as of March 31, 2026, driven by dominant performance on key benchmarks such as MATH-500 (99.4%), AIME 2026 (100%), and USAMO 2026 (95%), saturating frontiers that were once challenging for large language models. Recent March releases, including GPT-5.4's "Thinking" variant with enhanced reasoning chains and a 1M-token context, outpaced rivals such as Claude Opus 4.6 (97.6% on weighted math scores) and DeepSeek V4 amid a wave of 20+ frontier-model launches. Traders' skin-in-the-game positioning reflects verified leaderboard supremacy on MathArena and BenchLM. Realistic risks include last-minute competitor updates or revised evals revealing ties, though rapid benchmark saturation limits the room for surprises.
Experimental AI-generated summary referencing Polymarket data · Updated

Implied odds: OpenAI 100.0% · Google <1% · Z.ai <1% · DeepSeek <1%
Volume: $499,119

Outcomes:
Google: No
OpenAI: Yes
Z.ai: No
DeepSeek: No
Mistral: No
Anthropic: No
Alibaba: No
xAI: No
Moonshot: No
If two models are tied for the highest LiveBench Mathematics Average score at this market's check time, resolution will be based on whichever company's name, as it is described in this market group, comes first in alphabetical order.
The primary source of resolution for this market will be LiveBench’s AI leaderboard, specifically the “Mathematics Average” category, found at livebench.ai. If this resolution source is unavailable at check time, this market will remain open until the LiveBench AI leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
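The resolution rule above amounts to a simple two-level sort: highest "Mathematics Average" score wins, with an alphabetical tiebreak on the company name. A minimal sketch in Python, using illustrative scores rather than real LiveBench data:

```python
# Sketch of the resolution rule described above: pick the company with
# the highest LiveBench "Mathematics Average" score, breaking ties by
# whichever company name comes first alphabetically.
# The scores below are hypothetical, not real leaderboard values.

def resolve_market(scores: dict[str, float]) -> str:
    """Return the winning company under the highest-score rule,
    with an alphabetical tiebreak on the company name."""
    # min() over the tuple (-score, name): a higher score gives a more
    # negative first element, and ties fall through to name order.
    return min(scores, key=lambda company: (-scores[company], company))

example_scores = {
    "OpenAI": 99.1,
    "Anthropic": 99.1,  # tied with OpenAI on score
    "DeepSeek": 97.0,
}
print(resolve_market(example_scores))  # prints "Anthropic" (alphabetical tiebreak)
```

The `(-score, name)` key keeps the whole rule in a single comparison instead of a separate tie-detection pass.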
Market Opened: Dec 12, 2025, 1:25 PM ET
Resolver: 0x2F5e3684c...
Outcome proposed: No
No dispute
Final outcome: No


