Two years ago, Meta was the AI underdog. In 2026 they are arguably the best-positioned lab in the field. Three plays explain why.
The open weights play
Llama 4 shipped in July 2025 with permissive licensing[1]. Most other frontier labs keep their weights closed; Meta lets you download the model and run it on your own GPUs. The cost: Meta collects no inference revenue from Llama users.
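"Run it on your own GPUs" hides a real sizing question. A rough back-of-envelope sketch: weight memory is parameter count times bytes per parameter, plus runtime overhead for the KV cache and activations. The 70B parameter count and the 20 percent overhead factor below are illustrative assumptions, not figures for any specific Llama release.

```python
def vram_gb(params_billion: float, bytes_per_param: float,
            overhead: float = 1.2) -> float:
    """Approximate GPU memory (GB) to load the weights plus runtime overhead.

    The 1.2x overhead factor is an assumed rule of thumb for KV cache and
    activations, not a measured constant.
    """
    return params_billion * bytes_per_param * overhead

# A hypothetical 70B-parameter model at common precisions:
for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{vram_gb(70, bpp):.0f} GB")
# fp16: ~168 GB, int8: ~84 GB, int4: ~42 GB
```

The practical takeaway is the same one driving adoption: quantisation is what puts a large open-weights model within reach of a single high-end workstation card rather than a datacentre node.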
The benefit: every company, university, and indie dev that fine-tunes an open model becomes a de facto Meta partner. Llama now sits in roughly 70 percent of self-hosted LLM deployments globally.
The talent play
OpenAI lost roughly 20 senior researchers to Meta in 2024-2025[2]. The hires include people who actually built GPT-3 and GPT-4. Meta did not just hire warm bodies; they hired the people who know how to build the next generation.
The distribution play
Meta AI is in WhatsApp. Three billion users. Billions of free queries per day. The data flywheel from this dwarfs anything OpenAI or Anthropic can build organically.
What could go wrong
EU AI Act compliance. Open weights raise tricky questions about who is responsible when a model is misused. Meta is litigating multiple cases in 2026.
User backlash. Putting AI in WhatsApp by default annoyed people. Adoption metrics are strong but trust is mixed.
The Manus question. If Meta wants a top-tier agent product, building one organically is slow. Acquiring Manus is the fastest path. The rumour mill has not stopped suggesting it.
Where to watch in 2026
Llama 5 is rumoured for late 2026. If it matches GPT-5 / Claude 5 on standard benchmarks while staying open weights, Meta wins the open-source AI war definitively.
If Meta acquires a major agent product, the consumer AI map redraws.
If neither happens, Llama 4 might be the high-water mark.
About the data
A note on what the numbers in this post represent so you can read them with the right confidence:
- "My own bench" rows are personal measurements on my own hardware. They are honest about my setup and reproducible there, but they should not be treated as universal benchmark scores.
- Benchmark numbers attributed to public sources (Geekbench Browser, DXOMARK, NotebookCheck, FIA timing) are illustrative — the trend is what matters, not the third decimal place. Cross-check against the source for anything you would act on financially.
- Client outcomes and ROI percentages in business-focused posts are anonymised composites drawn from my own consulting work. Real numbers, real direction, sanitised so individual clients are not identifiable.
- Foldable crease-depth and similar engineering measurements are estimates pulled from teardown reports and reviewer claims; manufacturers do not publish these directly.
- Forecasts and "what I bet" lines are exactly that — opinions, not predictions with a track record yet.
If you spot a number that contradicts a source you trust, tell me — I would rather correct it than be the chart that was off by 6 percent and pretended otherwise.