Will a dLLM be the best AI model before 2027?
Trader consensus heavily favors "No", at an 89.5% implied probability, that a diffusion large language model (dLLM), a model that generates text via diffusion or iterative denoising rather than left-to-right autoregressive decoding, will hold the top score on the LMArena (Chatbot Arena) text leaderboard before 2027. Frontier autoregressive models from OpenAI, Anthropic, and Google dominate the Arena, while published diffusion text models such as Inception Labs' Mercury and Google's experimental Gemini Diffusion have so far been positioned around decoding speed and efficiency rather than peak quality, and none has yet ranked near the top. With resolution due by December 31, 2026, traders are skeptical that the quality gap closes on a roughly thirteen-month timeline.
Experimental AI-generated summary based on Polymarket data · Updated
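For readers new to prediction-market pricing: under standard $1-payout binary shares, a share's dollar price is (ignoring fees and spread) the market's implied probability for that outcome. A minimal sketch using the 89.5% figure from the summary above:

```python
# Minimal sketch of implied probability in a binary market, assuming
# standard $1-payout shares and ignoring fees/spread. The 0.895 price
# is back-derived from the 89.5% figure in the summary above.

def implied_probability(share_price_usd: float) -> float:
    """A share paying $1 if its outcome occurs, trading at p dollars,
    implies the market assigns probability ~p to that outcome."""
    return share_price_usd

no_price = 0.895
print(f"P(No)  ~ {implied_probability(no_price):.1%}")  # ~89.5%
print(f"P(Yes) ~ {1 - no_price:.1%}")                   # ~10.5%
```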
A Diffusion Large Language Model (dLLM) is any model for which official publicly released documentation, such as a model card, technical paper, or official statements from its developers, clearly identifies diffusion or iterative denoising as a central part of its text-generation or decoding process.
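For intuition about the decoding style this definition names: instead of emitting tokens left to right, a masked-diffusion dLLM starts from a fully masked sequence and iteratively denoises it, committing its most confident predictions at each step. The sketch below is a toy with a random stand-in "denoiser", not any real model's implementation:

```python
import random

# Toy sketch of iterative-denoising ("masked diffusion") decoding.
# The "denoiser" here is a random stand-in: real dLLMs predict all
# masked tokens in parallel with a trained network, keep the most
# confident ones, and revisit the rest on later steps.

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]
MASK = "<mask>"

def fake_denoiser(seq):
    """Stand-in for a trained model: propose a (token, confidence)
    pair for every masked position."""
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(seq) if tok == MASK}

def diffusion_decode(length=8, steps=4):
    seq = [MASK] * length                    # start from all-mask "noise"
    per_step = max(1, length // steps)
    for _ in range(steps):
        proposals = fake_denoiser(seq)
        if not proposals:
            break
        # Commit only the most confident proposals; the rest stay
        # masked and are revisited on the next denoising step.
        best = sorted(proposals.items(), key=lambda kv: -kv[1][1])[:per_step]
        for pos, (tok, _conf) in best:
            seq[pos] = tok
        print(" ".join(seq))
    return seq

diffusion_decode()
```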
Results from the "Score" section on the Leaderboard tab of https://lmarena.ai/leaderboard/text set to default (style control on) will be used to resolve this market.
If two or more models are tied for the top arena score at any point, this market will resolve to “Yes” if any of the joint-top ranked models are Diffusion Large Language Models.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable on December 31, 2026, 11:59 PM ET, this market will resolve based on all published Chatbot Arena LLM Leaderboard rankings prior to the period of lack of availability.
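Putting the rules above together: read the top arena score with style control on, gather every model tied at that score, and resolve "Yes" if any of them meets the dLLM definition. A minimal sketch of that check, with invented placeholder entries rather than real Arena data or a real API:

```python
from dataclasses import dataclass

# Toy sketch of the stated resolution rule. Entries are invented
# placeholders, not real Arena data.

@dataclass
class Entry:
    name: str
    arena_score: float   # "Score" column, style control on (default)
    is_dllm: bool        # per the documentation-based definition above

def resolves_yes(leaderboard: list[Entry]) -> bool:
    top = max(e.arena_score for e in leaderboard)
    joint_top = [e for e in leaderboard if e.arena_score == top]
    return any(e.is_dllm for e in joint_top)

# Hypothetical snapshot: an autoregressive model and a dLLM tied on top.
board = [
    Entry("frontier-ar-model", 1450.0, is_dllm=False),
    Entry("some-dllm", 1450.0, is_dllm=True),
    Entry("runner-up", 1432.0, is_dllm=False),
]
print(resolves_yes(board))  # True: a joint-top model is a dLLM
```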
Market opened: Nov 14, 2025, 3:05 PM ET
Resolver
0x65070BE91...
Do not trust external links.
Frequently asked questions