Google's Gemini 3.1 Pro, released in February 2026, achieved approximately 37% accuracy on Epoch AI's FrontierMath benchmark—comprising tiers of unpublished, expert-level math problems including open research challenges—matching prior Gemini 3 Pro performance while solving a novel Tier 4 problem. This keeps Gemini competitive but trailing leaders like OpenAI's GPT-5.4 Pro, which hit 50% on Tiers 1-3 and 38% on Tier 4 in March evaluations. Trader sentiment hinges on Google's rapid iteration amid fierce rivalry from Anthropic's Claude and xAI's Grok models. Potential Gemini 4 previews at May's Google I/O, or interim developer updates, could boost scores before the June 30 deadline, though the history of delayed frontier AI releases underscores execution risk.
Experimental AI-generated summary based on Polymarket data. This is not trading advice and has no bearing on how this market resolves. · Updated
$127,692 Vol.
40% or higher: 92%
45% or higher: 41%
50% or higher: 33%
60% or higher: 18%
This market will resolve according to Epoch AI's FrontierMath benchmark leaderboard (https://epoch.ai/frontiermath) for Tiers 1-3. Studies not included in the leaderboard (e.g. https://x.com/EpochAIResearch/status/1945905796904005720) will not be considered.
The primary resolution source will be information from EpochAI; however, a consensus of credible reporting may also be used.
Market start date: Feb 6, 2026, 6:03 PM ET
Resolver
0x65070BE91...
Be careful with external links.
Frequently Asked Questions