Trader sentiment on Meta's "Mango" multimodal AI model, focused on high-resolution image and video generation, reflects caution after recent delays eroded early-year optimism. December 2025 reports outlined a first-half 2026 launch alongside the text-based "Avocado" model, with internal milestones hit by January, but March disclosures revealed performance shortfalls against rivals such as OpenAI's Sora and Google's Gemini 3.0, as well as against Anthropic's models on reasoning and generation benchmarks. Meta postponed releases to at least May, opting for interim reliance on external models like Gemini. Key catalysts ahead include Q2 earnings calls and potential developer previews, amid intensifying competition in AI media synthesis, where capabilities and timelines frequently slip.
Experimental AI summary based on Polymarket data
Jun 30 · 30% · $22,346 volume
This market will resolve to "Yes" if Meta makes a new frontier AI model for image and video generation, or any model confirmed by Meta to be the model codenamed “Mango” during development, available to the general public by the listed date, 11:59 PM ET. Otherwise, this market will resolve to "No."
A frontier AI image and video model refers to a newly released Meta model that Meta describes as one of its most capable or next-generation, general-purpose flagship models for both image and video generation.
A qualifying model must be a general purpose model for image and video generation. Models which are focused on a specific aspect of image or video creation (e.g. computer vision or video segmentation) will not qualify.
Upgrades or successors to previous Meta models (e.g. Emu or SAM) will not count unless explicitly confirmed by Meta to be the model codenamed “Mango” during development or described by Meta as a frontier AI model for both image and video generation.
For this market to resolve to "Yes," the relevant model must be launched and publicly accessible, including via open beta or open rolling free waitlist signups. A closed beta or any form of private access will not suffice. The release must be clearly defined and publicly announced by Meta as being accessible to the general public.
A publicly-confirmed integration of a qualifying model into one of Meta's primary AI buttons or portals (e.g. Instagram or WhatsApp) will qualify as a public release.
The primary resolution source for this market will be official information from Meta, with additional verification from a consensus of credible reporting.
Market opened: Dec 22, 2025, 1:23 PM ET
Resolver
0x65070BE91...