ResearchAudio.io

LeCun Raised $1.03B on a Bet LLMs Are Wrong

12 employees, no product. JEPA vs. transformers explained.

On March 10, 2026, a company with 12 employees and no product closed Europe's largest seed round in history: $1.03 billion at a $3.5 billion pre-money valuation. The company is Advanced Machine Intelligence (AMI) Labs. Its chairman is Yann LeCun, who spent 12 years as Meta's Chief AI Scientist and won the Turing Award. And the thesis behind the raise is that the technology every major AI lab is building right now is fundamentally, architecturally wrong.

This is not a standard AI startup story. Bezos Expeditions, Eric Schmidt, Mark Cuban, and Tim Berners-Lee signed checks. With no product to evaluate, investors are betting on one man's research thesis and track record alone. That thesis: the path to human-level intelligence runs through world models, not language models.

What LLMs Cannot Do

Large language models learn by predicting the next token in a sequence of text. They are, at their core, statistical pattern matchers trained on written human knowledge. LeCun's argument (first formalized in a 2022 position paper) is that this design choice creates a permanent ceiling. Text describes the world, but it does not model it.
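The next-token objective can be made concrete with a toy sketch. This is a hedged illustration, not anything from LeCun's paper: a bigram frequency count stands in for a transformer, and all names here are illustrative. The point is only that the training signal is "maximize the probability of the observed next token."

```python
# Minimal sketch of next-token prediction, the core objective behind LLMs.
# A toy bigram count stands in for a transformer; the objective is the same:
# predict the most likely continuation seen in the training text.
from collections import defaultdict

corpus = "the ball falls the ball bounces the ball falls".split()

# "Training": count next-token frequencies for each context token.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most probable next token under the counted distribution."""
    following = counts[token]
    return max(following, key=following.get)

print(predict_next("ball"))  # "falls": the most frequent continuation in the corpus
```

A model trained this way captures what text tends to say about the world, which is exactly the limitation LeCun's argument targets.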

A 2025 benchmark paper presented at a major AI conference found "striking limitations" in LLMs' basic world-modeling abilities, including "near-random accuracy when distinguishing motion trajectories." A separate MIT/Harvard/Cornell study found LLMs failed to produce realistic maps of New York City when faced with detours. These are not edge cases. They are symptoms of a system that has read everything ever written about physics but has never felt a ball drop.

$1.03B · AMI Labs seed round
$5B · World Labs valuation (Fei-Fei Li)
2M+ · NVIDIA Cosmos downloads

How World Models Work: JEPA Explained

LeCun's proposed architecture is called Joint Embedding Predictive Architecture (JEPA). The key difference from transformers: instead of predicting the next word (or pixel), JEPA predicts an abstract representation of a future state. It does not try to reconstruct raw data. It learns a compressed model of what is likely to happen next in an environment, and then reasons within that compressed space.

The analogy LeCun uses: a child watching objects fall develops an internal model of gravity without anyone explaining Newton's laws. They observe, build a mental model, and then predict. I-JEPA (Image JEPA), the first implementation of this idea published by Meta at CVPR 2023, demonstrated that a model can learn strong visual representations by predicting abstract image regions from other regions, without hand-crafted augmentations and with significantly lower compute than comparable approaches.
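The latent-prediction objective can be sketched in a few lines. This is a hedged toy, not Meta's implementation: linear maps stand in for the encoders, while real I-JEPA uses ViT encoders with an exponential-moving-average target network. What the sketch shows is the defining property: the loss is computed between representations, never against raw pixels.

```python
# Hedged sketch of a JEPA-style training signal, with toy numpy "encoders".
# The loss lives in representation space, not in pixel space.
import numpy as np

rng = np.random.default_rng(0)
D_in, D_latent = 8, 4

W_context = rng.normal(size=(D_latent, D_in))  # context encoder (trained)
W_target = W_context.copy()                    # target encoder (EMA copy in practice)
W_pred = rng.normal(size=(D_latent, D_latent)) # predictor head

def jepa_loss(context_patch, target_patch):
    """Predict the *representation* of the target region from the context."""
    s_context = W_context @ context_patch      # embed the visible region
    s_target = W_target @ target_patch         # embed the masked region (no gradient)
    prediction = W_pred @ s_context            # predict in latent space
    return float(np.mean((prediction - s_target) ** 2))

x_context = rng.normal(size=D_in)
x_target = rng.normal(size=D_in)
print(jepa_loss(x_context, x_target))          # scalar latent-space error
```

Because the model never has to reconstruct every pixel, it can discard unpredictable detail and keep only what matters for anticipating the future state.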

Architecture Comparison

LLM (Transformer)
  Input: Token sequence (text)
  Learns by: Predicting the next token in raw space
  World model: Implicit, statistical, text-only
  Physics reasoning: Near-random on trajectories (2025 benchmark)

JEPA (World Model)
  Input: Sensory observations (any modality)
  Learns by: Predicting abstract representations of future states
  World model: Explicit, updatable, physics-aware
  Physics reasoning: Core design goal (DreamerV3 plans by imagining)

Sources: Meta AI Blog (I-JEPA, CVPR 2023), Nature (DreamerV3, April 2025), 2025 benchmark on LLM world modeling

Who Else Is Building This

AMI Labs is not building alone. In August 2025, Google DeepMind released Genie 3, described as the first real-time interactive general-purpose world model. It generates navigable 3D environments at 24 frames per second from text prompts, maintaining visual consistency for several minutes of real-time interaction. That is a qualitatively different capability from generating a plausible next sentence.

NVIDIA's Cosmos platform, trained on 9,000 trillion tokens from 20 million hours of real-world video, crossed 2 million downloads. Fei-Fei Li's World Labs shipped Marble, its first commercial world model product, and is reportedly in talks to raise $500 million at a $5 billion valuation. An April 2025 Nature paper on DreamerV3 showed that an agent with an internal world model can improve its behavior by "imagining" future scenarios before acting. The research infrastructure for a post-LLM paradigm is forming rapidly.
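The "imagining before acting" idea in DreamerV3 can be caricatured in a few lines. This is a hedged toy under stated assumptions: the `dynamics` and `reward` functions are invented stand-ins for the learned latent models in the paper. The structure is the point: candidate actions are rolled forward inside the model, and the agent commits to the one with the best imagined return.

```python
# Hedged sketch of planning by imagination: roll candidate actions through a
# (here: toy, hand-written) latent dynamics model and pick the best rollout.
import numpy as np

def dynamics(state, action):
    """Toy stand-in for a learned latent transition model."""
    return 0.9 * state + action

def reward(state):
    """Toy reward: stay close to the origin."""
    return -float(np.sum(state ** 2))

def plan(state, candidate_actions, horizon=5):
    """Score each action by imagining a short rollout; return the best action."""
    best_action, best_return = None, -np.inf
    for action in candidate_actions:
        s, total = state.copy(), 0.0
        for _ in range(horizon):
            s = dynamics(s, action)
            total += reward(s)
        if total > best_return:
            best_action, best_return = action, total
    return best_action

state = np.array([1.0, -2.0])
actions = [np.array([0.0, 0.0]), np.array([-0.1, 0.2]), np.array([0.5, 0.5])]
print(plan(state, actions))  # the action that steers the imagined state toward the origin
```

No environment interaction happens during the loop; everything is simulated inside the model, which is what separates this family of agents from pure next-token predictors.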

Key Insights

The architecture debate is now a capital debate. LeCun spent years arguing JEPA's theoretical advantages over transformers. AMI Labs raising $1.03B at a $3.5B pre-money valuation (with 12 employees and no product) means this debate has moved from conference papers to the balance sheet. Investors are pricing the probability that transformers hit a ceiling.

The gap between language and physics is measurable. Benchmark results showing LLMs at "near-random accuracy" on motion-trajectory tasks matter not because LLMs are bad at reasoning in general, but because they show that text-based training is a lossy encoding of physical reality. A model that has read every physics textbook still cannot predict where a ball lands.

AMI Labs' CEO expects world model branding to go viral. Alexandre LeBrun told TechCrunch directly: "In six months, every company will call itself a world model to raise funding." It is the same pattern that played out with "generative AI" in 2022 and "agentic AI" in 2024. The underlying signal worth tracking is Genie 3, Cosmos, and DreamerV3, not the rebrands.

LeCun's advice to academia is a signal, not just an opinion. His statement "don't work on LLMs, the most exciting work on world models is coming from academia" is a direct funding and career incentive for researchers. He is pointing the next generation of graduate students toward JEPA, physical reasoning benchmarks, and hierarchical planning. Watch where the PhD dissertations go in 2026-2027.

The question this raise does not answer: whether JEPA is the right architecture, or whether LeCun has correctly identified the problem (LLMs have a ceiling) while the solution remains open. His own CEO acknowledged that going from JEPA theory to commercial applications could take years. Dario Amodei publicly disagreed with LeCun's thesis at Davos in January 2026, arguing that current architectures will replace software developers within a year. They cannot both be right, and the 2026 benchmark results will start to separate the positions.

ResearchAudio.io — AI research, explained clearly.

Sources: TechCrunch (AMI Labs raise) · MIT Technology Review (LeCun interview) · World Models Survey (arXiv)
