Google's Gemini 1.5 Pro holds a 97% implied probability of ranking as the third-best AI model by March 31, per the LMSYS Chatbot Arena Elo leaderboard snapshot, trailing only OpenAI's GPT-4o and Anthropic's Claude 3 Opus. This trader consensus stems from Gemini 1.5's stable third-place ranking in blind user-voted benchmarks since its experimental release in late February, with strong long-context processing and multimodal capabilities solidifying its position amid fierce large-language-model competition. No rival models from xAI, DeepSeek, or others surged in the final days, locking in the hierarchy. The main residual risk is an improbable last-minute leaderboard shakeup from unreported model tweaks or rapid Elo gains, though the fixed snapshot timing minimizes it.
Experimental AI-generated summary using Polymarket data · Updated

Google 97.0%
xAI 2.1%
OpenAI <1%
DeepSeek <1%
$369,352 Vol.
Google 97%
xAI 2%
OpenAI <1%
DeepSeek <1%
Anthropic <1%
Alibaba <1%
Meituan <1%
Z.ai <1%
Mistral <1%
Baidu <1%
Moonshot <1%
Results from the "Arena Score" section on the Leaderboard tab of https://lmarena.ai/leaderboard/text with the style control off will be used to resolve this market.
If two models are tied for the third-best arena score at this market's check time, resolution will be based on whichever company's name, as it is described in this market group, comes first in alphabetical order (e.g., if Google and xAI were tied, "Google" would resolve to "Yes" and "xAI" would resolve to "No").
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
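The resolution rules above can be sketched as a small ranking function: sort models by arena score descending, break ties alphabetically by company name, and take the third entry. This is a minimal illustration of the stated tie-break rule only; the scores below are invented placeholders, not real leaderboard data.

```python
def third_best_company(scores: dict[str, float]) -> str:
    """Return the company with the third-best arena score.

    Ties are broken alphabetically by company name, mirroring the
    market's stated tie-break rule.
    """
    # Sort by score descending; for equal scores, company name ascending.
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return ranked[2][0]

# Illustrative example: Google and xAI tied for third place.
example = {
    "OpenAI": 1340.0,
    "Anthropic": 1335.0,
    "Google": 1320.0,
    "xAI": 1320.0,
}
print(third_best_company(example))  # -> Google (wins the tie alphabetically)
```

Under this rule, swapping the tied names (e.g. "Alibaba" instead of "xAI") would change the outcome, since "Alibaba" precedes "Google" alphabetically.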
Market opened: Dec 2, 2025, 6:03 PM ET
Resolver: 0x2F5e3684c...