OpenAI holds a commanding 91% implied probability of having the best AI model for coding by March 31, driven by its o1 series' chain-of-thought reasoning, which delivers top-tier results on benchmarks such as SWE-Bench Verified (nearing 50% success) and HumanEval, outpacing rivals on complex, multi-step programming tasks. Traders cite OpenAI's unmatched compute resources, its rapid iteration history from GPT-4o to o1-preview, and whispers of full o1 or "Orion" upgrades expected in early 2025 as key supports. Challenges could come from Anthropic's Claude 3.5 Sonnet, which currently edges leaderboards, a surprise Claude 4 release, Google's Gemini 2.0, or open-source surges such as DeepSeek-V3, if any of them leapfrog verified scores before the deadline.
Experimental AI summary based on Polymarket data · updated at
$965,675 traded volume

OpenAI 91%
Anthropic 6.8%
Google 1.3%
DeepSeek <1%
xAI <1%
Z.ai <1%
Mistral <1%
Alibaba <1%
Moonshot <1%
If two models are tied for the top LiveBench coding average score at this market's check time, resolution will be based on whichever company's name, as it is described in this market group, comes first in alphabetical order.
The primary source of resolution for this market will be LiveBench’s AI leaderboard, specifically the “coding average” category, found at livebench.ai. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
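The resolution rule above can be sketched in code: take the highest LiveBench coding average, and if several companies are tied at the top, pick the one whose market-group name comes first alphabetically. This is a minimal illustration with hypothetical scores; LiveBench does not expose this function or data structure, and the numbers below are invented.

```python
def resolve_market(scores: dict[str, float]) -> str:
    """Pick the top coding-average score; break ties alphabetically.

    `scores` maps company names (as described in this market group)
    to their LiveBench "coding average". Hypothetical helper, not a
    real LiveBench or Polymarket API.
    """
    best = max(scores.values())
    # sorted() puts tied leaders in alphabetical order; take the first
    leaders = sorted(name for name, s in scores.items() if s == best)
    return leaders[0]

# Hypothetical scores with OpenAI and Anthropic tied at the top:
print(resolve_market({"OpenAI": 74.2, "Anthropic": 74.2, "Google": 70.1}))
# prints "Anthropic" (alphabetical tie-break)
```

Note that the tie-break compares the company names used in the market group, not model names, so a tie between "OpenAI" and "Anthropic" resolves to Anthropic.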
Market open time: Dec 12, 2025, 1:29 PM ET
Resolver
0x2F5e3684c...
Frequently Asked Questions