Anthropic's Claude 3 Opus has held the top spot on the LMSYS Chatbot Arena leaderboard since its March 4 release, outperforming rivals such as OpenAI's GPT-4 and Google's Gemini 1.5 Pro on reasoning, coding, and multimodal benchmarks. Trader consensus has accordingly converged on a 99.2% implied probability that it will be the best AI model at March's end. This dominance rests on verified capability demonstrations in third-party evaluations, and no competing large language model has overtaken it during a quiet period for major releases. While Polymarket odds reflect strong skin-in-the-game alignment, a surprise late-March model release from OpenAI or Google, or an anomalous leaderboard recalculation, could still challenge this position before resolution.
Experimental AI-generated summary referencing Polymarket data · Updated
Anthropic 99.2%
Google <1%
xAI <1%
OpenAI <1%
$15,166,074 Vol.

Anthropic
99%

Google
<1%

xAI
<1%

OpenAI
<1%

DeepSeek
<1%

Baidu
<1%

Moonshot
<1%

Meituan
<1%

Alibaba
<1%

Z.ai
<1%

Mistral
<1%
Results from the "Arena Score" section on the Leaderboard tab of https://lmarena.ai/leaderboard/text with the style control off will be used to resolve this market.
If two models are tied for the highest arena score at this market's check time, resolution will be based on whichever company's name, as described in this market group, comes first in alphabetical order (e.g., if Google and xAI were tied, "Google" would resolve to "Yes" and "xAI" would resolve to "No").
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
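The tie-break rule above can be sketched in a few lines of Python. This is a hypothetical illustration, not Polymarket's resolver code; the function name and the example scores are assumptions made for clarity.

```python
# Hypothetical sketch of the tie-break rule: among the companies tied for the
# highest arena score, the name that comes first alphabetically resolves "Yes".
def resolve_winner(scores: dict[str, float]) -> str:
    """scores maps company name (as written in this market group) to arena score."""
    top = max(scores.values())
    # Collect every company tied for the top score.
    tied = [name for name, score in scores.items() if score == top]
    # Alphabetical order breaks the tie, per the resolution rules.
    return sorted(tied)[0]

# Example from the rules: a Google/xAI tie resolves to Google (scores are made up).
print(resolve_winner({"Google": 1500.0, "xAI": 1500.0, "OpenAI": 1490.0}))
```

With no tie, `sorted(tied)[0]` simply returns the single highest-scoring name, so the same function covers both cases.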
Market Opened: Dec 2, 2025, 6:02 PM ET
Resolver
0x2F5e3684c...


Beware of external links.