World Labs raises $1B as VCs look beyond LLMs

Feb 18, 2026

10:57pm UTC

The world of AI is moving well beyond language.

On Wednesday, World Labs, a startup founded by AI pioneer Fei-Fei Li, announced a $1 billion funding round. The round’s investors included AMD, Autodesk, Emerson Collective, Fidelity, Nvidia and Sea, the company said in its announcement.

Though World Labs didn’t disclose its valuation, Bloomberg previously reported that the company was seeking funding at a $5 billion valuation.

“We are focused on accelerating our mission to advance spatial intelligence by building world models that revolutionize storytelling, creativity, robotics, scientific discovery, and beyond,” World Labs said in its press release.

World Labs’ success is the latest sign that researchers are looking for breakthroughs beyond what large language models can provide. Investors, meanwhile, appear to be eyeing world models as their next big bet.

  • Runway, an AI video startup, announced a $315 million Series E funding round that shot its valuation to $5.3 billion, a source told The Deep View. The company intends to use the funding to bolster its world model research, calling it the “most transformative technology of our time.”
  • AMI Labs, a world model startup founded by fellow AI godparent Yann LeCun earlier this year, is also reportedly in talks for funding at a multibillion-dollar valuation.

With their capabilities in real-world perception and action, some developers are touting these models as a catalyst for massive progress in visual and physical AI, including fields such as robotics, self-driving cars and game development. But building these models is no easy feat.

“Simulating reality is simulating a dynamic world,” Anastasis Germanidis, co-founder and CTO of Runway, previously told The Deep View. “The static environment problem is much easier to solve than the dynamic world when you want to simulate and understand … the effects of different actions that you can take.”

Our Deeper View

While world models carry massive promise, they are also far more difficult to build and train than their large language model predecessors. Along with consuming more compute and data, creating a machine that can perceive the world and act on it is a steep challenge: these systems don’t have millions of years of evolutionary biology to fall back on the way humans do. And given that the goal is to put these models in charge of physical actions, their mistakes carry more dire, real-world consequences than, say, an LLM hallucinating a pizza recipe that calls for mixing Elmer’s glue into the cheese.