Will world models beat LLMs on the path to AGI?

Jan 23, 2026, 11:37pm UTC

World models are among AI’s buzziest emerging themes of 2026.

On Friday, Bloomberg reported that World Labs, a world model startup founded by the godmother of AI, Dr. Fei-Fei Li, is in talks to raise hundreds of millions at a valuation of $5 billion, up from its $1 billion valuation in 2024.

The news follows World Labs’ November debut of Marble, its first world model, which the company calls the “foundation for a spatially intelligent future.” And last week, the company launched the World API, which lets users generate “explorable 3D worlds” from text, images and video.

World Labs isn’t the only sign that spatial intelligence is gaining traction. AMI Labs, a world model startup founded in December by fellow prominent AI thinker Yann LeCun, is also reportedly in talks for a funding round that would value the nascent company at $3.5 billion. In October, General Intuition, a company focused on “spatial-temporal reasoning,” raised $134 million in seed funding.

These models, which aim to offer realistic representations of the world, have piqued investor interest at a time when the future and profitability of large language models seem uncertain. The funding momentum underscores a growing appetite for AI that interacts with the real world beyond chat, laying the foundation for physical AI and robotics.

It also comes as AI’s foremost thinkers butt heads over whether artificial general intelligence, the lofty and loosely defined goal driving several major AI firms, is within reach. At the World Economic Forum in Davos this past week, that tension came to a head: LeCun and Google DeepMind CEO Demis Hassabis both remarked that today’s LLMs are nowhere near human-level general intelligence, while Anthropic’s Dario Amodei asserted that these systems are nearing “Nobel-level” scientific research.

Li and LeCun have both argued that large language models can’t achieve AGI on their own; they will need an understanding of space, physics and physical interaction to truly be capable of human-level thought. And that assertion makes sense: Without an understanding of physical actions, reactions and consequences, how can a machine truly be on par with the human brain?

Our Deeper View

While giving these machines a sense of the real world could represent a massive breakthrough, it raises the question of what the consequences might be. Giving AI models a broad sense of physical presence and an intelligence that exceeds human comprehension might sound like a recipe for disaster to AI doomers or anyone who has watched The Terminator. But beyond the fear of robots taking over, a more immediate risk is what humans might be capable of with this technology if it’s left unchecked.