Trader consensus prices "Yes" at 80.5% for AI securing a gold medal at the 2026 International Mathematical Olympiad. The pricing reflects rapid advances in AI mathematical reasoning: DeepMind's AlphaProof posted a silver-medal performance on IMO 2024 problems, and in July 2025 multiple systems (Gemini Deep Think, OpenAI's general-purpose large language models, and others) achieved gold-standard scores of 35/42 points, solving five of six problems, on the IMO 2025 set. Subsequent results, including Gemini Deep Think's 90% on IMO-ProofBench in February 2026 and new models such as Nemotron-Cascade claiming similar feats, underscore continued improvement in formal proof generation and olympiad-level problem-solving. With IMO 2026 slated for summer, traders anticipate iterative releases from leading labs amid competitive AI-for-math initiatives, though novel problem difficulty and strict time constraints pose residual risk to a full gold result.
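The scoring arithmetic cited above (42 total points, gold at 35) can be sketched as a minimal check. This is an illustration only: the cutoff used here is the one cited for IMO 2025, and actual medal cutoffs are set anew each year by the jury.

```python
# IMO scoring arithmetic: 6 problems, each graded out of 7 points.
PROBLEMS = 6
MAX_PER_PROBLEM = 7
TOTAL = PROBLEMS * MAX_PER_PROBLEM  # 42

# Gold-medal cutoff cited for IMO 2025 (varies year to year).
GOLD_CUTOFF_2025 = 35

def qualifies_for_gold(score: int, cutoff: int = GOLD_CUTOFF_2025) -> bool:
    """True if a total score meets or exceeds the gold-medal cutoff."""
    return score >= cutoff

# Five fully solved problems exactly meet the cited 2025 cutoff:
assert qualifies_for_gold(5 * MAX_PER_PROBLEM)      # 35 points
assert not qualifies_for_gold(4 * MAX_PER_PROBLEM)  # 28 points
```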
Experimental AI-generated summary based on Polymarket data · Updated
Did AI win an IMO gold medal in 2026?
Yes
The resolution source is the IMO Grand Challenge (https://imo-grand-challenge.github.io/) and the Artificial Intelligence Math Olympiad (AIMO, https://aimoprize.com/). If either source demonstrates that an AI has won the challenge/prize before the resolution date, this market will resolve to "Yes".
Market opened: Nov 12, 2025, 5:08 PM ET
Resolver
0x65070BE91...
Do not trust external links.
Frequently Asked Questions