Will world models beat LLMs on the path to AGI?
January 26, 2026

Welcome back. World models are having a moment, and the AGI debate is full of more conflicting signals than ever. As investors pour billions into spatial intelligence, AI’s biggest thinkers are openly questioning whether LLMs alone will ever get us to general intelligence. Meanwhile, more AI agents are appearing in the real world: Amazon is rolling out personalized health agents in telemedicine, and Synthesia is betting big on video agents to retrain the enterprise workforce. Today's stories point to a clear shift: The next phase of AI isn’t just smarter chatbots. It’s systems that see, act, explain, and increasingly show up where real work happens. —Jason Hiner
IN TODAY’S NEWSLETTER
1. Will world models beat LLMs on the path to AGI?
2. AI agents enter telemedicine via Amazon Health
3. Synthesia preps AI video agents, doubles to $4B
RESEARCH
Will world models beat LLMs on the path to AGI?

World models are among AI’s buzziest emerging themes of 2026.
On Friday, Bloomberg reported that World Labs, a world model startup founded by the godmother of AI, Dr. Fei-Fei Li, is in talks to raise hundreds of millions at a valuation of $5 billion, up from its $1 billion valuation in 2024.
The news follows World Labs' November debut of Marble, its first world model, which the company called the “foundation for a spatially intelligent future.” And last week, the company launched the World API, allowing users to generate “explorable 3D worlds” from text, images and video.
World Labs isn’t the only sign that spatial intelligence is gaining traction. AMI Labs, a world model startup founded by fellow prominent AI thinker Yann LeCun in December, is also reportedly in talks for a funding round that would value the nascent company at $3.5 billion. In October, General Intuition, a company focused on “spatial-temporal reasoning,” raised $134 million in seed funding.
These models, which aim to build realistic representations of the world, have drawn investor attention at a time when the future and profitability of large language models seem uncertain. The trend underscores growing interest in AI that interacts with the real world beyond chat, laying the foundation for physical AI and robotics.
It also comes as AI’s foremost thinkers butt heads on the reality of artificial general intelligence, the lofty and undefined goal that drives several major AI firms. At the World Economic Forum in Davos this past week, that tension came to a head, with LeCun and Google DeepMind CEO Demis Hassabis both remarking that today’s LLMs are not even close to human-level artificial general intelligence. Meanwhile, Anthropic’s Dario Amodei asserted that these systems are nearing “Nobel-level” scientific research.
Li and LeCun have both argued that large language models aren’t going to be able to achieve AGI on their own and will require an understanding of space, physics, and physical interaction in order to truly be capable of human thought. And that assertion makes sense: Without an understanding of physical actions, reactions and consequences, how can a machine truly be on par with the human brain?

While giving these machines a sense of the real world could represent a massive breakthrough, it raises the question of what the consequences might be. Giving AI models a broad sense of physical presence and intelligence that exceeds human comprehension might sound like a recipe for disaster to AI doomers or anyone who watched Terminator. But beyond the fear of robots taking over, a more immediate risk is what humans might be capable of with this technology, if left unchecked.
TOGETHER WITH METICULOUS
Still writing tests manually?
Companies like Dropbox, Notion and LaunchDarkly have found a new testing paradigm, and they can't imagine working without it. Built by ex-Palantir engineers, Meticulous autonomously creates a continuously evolving suite of E2E UI tests that delivers near-exhaustive coverage with zero developer effort, a result impossible to deliver by any other means.
It works like magic in the background:
✅ Near-exhaustive coverage on every test run
✅ No test creation
✅ No maintenance (seriously)
✅ Zero flakes (built on a deterministic browser)
PRODUCTS
AI agents enter telemedicine via Amazon Health

Amazon is bringing AI to One Medical, its primary care service, to help patients better understand their health and connect with appropriate care.
Amazon's new Health AI agentic assistant in the One Medical app can field health questions and manage tasks like booking appointments and tracking medications. The key differentiator: it's personalized for each patient using their medical records, lab results, and current prescriptions.
Users can ask Health AI to explain lab results, answer questions about symptoms and conditions, provide wellness guidance, and more. If Health AI detects that the patient could be better treated by a human clinician, it will recommend the appropriate care and even make an appointment. It can also help renew prescriptions through Amazon Pharmacy.
Because health is such a sensitive matter, Amazon shared additional details regarding security in the blog post to ease user concerns. For instance, Amazon reassures users that their personal health data is protected with HIPAA-compliant privacy and security safeguards.
The company also shared that the app was codeveloped with One Medical’s clinical leadership at “every stage of development, embedding multiple patient safety guardrails and clinical protocols.”
Health AI is available now to Amazon One Medical members in the One Medical app.

Even though 2026 just kicked off, AI applications for health are already among the hottest consumer AI trends of the year. Amazon's announcement follows OpenAI’s launch of ChatGPT Health and Anthropic's launch of Claude for Healthcare earlier this month, both meant to help users better understand their health data. The momentum reflects demand for practical AI applications that can meaningfully improve lives, something developers and companies are racing to deliver amid lingering skepticism about AI's value. Still, it's early days, and early adopters of AI health tech should expect to put up with bugs and growing pains.
TOGETHER WITH YOU
Static training data can’t keep up with fast-changing information, leaving your models to guess. We recommend this technical guide from You.com, which gives developers the code and framework to connect GenAI apps to the live web for accurate, real-time insights.
What you’ll get:
A step-by-step Python tutorial to integrate real-time search with a single GET request
The exact code logic to build a "Real-Time Market Intelligence Agent" that automates daily briefings
Best practices for optimizing latency, ensuring zero data retention, and establishing traceability
Turn "outdated" into "real-time." Download the API Integration Guide.
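The guide's core pattern, connecting a GenAI app to live web search with a single GET request and feeding the results back into a prompt, can be sketched roughly as follows. Note that the endpoint URL, parameter names, and response shape here are illustrative placeholders, not the actual You.com API; the guide itself has the real details.

```python
import json
import urllib.parse
import urllib.request

# Placeholder endpoint: the real API's URL, parameters, headers, and
# response schema are documented in the integration guide.
SEARCH_URL = "https://api.example.com/search"

def build_search_request(query: str, api_key: str, num_results: int = 5) -> urllib.request.Request:
    """Build the single GET request that fetches real-time web results."""
    url = SEARCH_URL + "?" + urllib.parse.urlencode(
        {"query": query, "num_web_results": num_results}
    )
    return urllib.request.Request(url, headers={"X-API-Key": api_key})

def fetch_live_results(req: urllib.request.Request) -> list[dict]:
    """Execute the request; assumes a JSON body like {"results": [...]}."""
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("results", [])

def build_context(results: list[dict]) -> str:
    """Format results into a grounding block to prepend to an LLM prompt."""
    return "\n".join(f"- {r['title']}: {r['snippet']} ({r['url']})" for r in results)
```

The key design point is the last step: search results are flattened into a plain-text context block so the model answers from fresh web data rather than its static training set.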
STARTUPS
Synthesia preps AI video agents, doubles to $4B

While AI video tools like OpenAI's Sora, Google's Veo, and Kling are best known for creating "AI slop" that people text to each other for laughs, real money is being made in the enterprise with AI-generated video.
Case in point: Enterprise AI video platform Synthesia has raised $200 million in Series E funding, bringing its valuation to $4 billion.
Announced Monday, the round was led by Google Ventures, with participation from former Sequoia partner Matt Miller’s fund Evantic and VC firm Hedosophia. The round nearly doubles the company’s previous valuation of $2.1 billion, set just 12 months ago. Synthesia announced in Q2 2025 that it had reached $100 million in ARR.
Unlike other generative video platforms that target a broad audience, Synthesia targets enterprises that need content for upskilling, internal knowledge sharing, and rapid workforce updates on new products, regulations, and safety policies. In short, it's a boon for internal training videos.
In the company’s announcement, Synthesia co-founder and CEO Victor Riparbelli said that the funding will support the company’s core principles: Bringing the cost of content creation “down to zero” and using AI to create videos that are actually engaging for teams.
The next step for Synthesia’s vision is going beyond “static, one-way content,” verging into conversational AI agents designed specifically for organizational upskilling and education, the company said.
These agents, capable of answering user questions, role-playing scenarios and offering personalized explanations, give users a more human-like experience, the company noted.
Imagine an interactive AI of a company executive that can answer questions and respond to employee queries using a database of answers. That's where this is heading.
Synthesia is tapping into two of AI’s biggest markets at once: agents and visual AI. Sora and Veo have captured the industry’s attention by creating increasingly realistic content. Agentic AI, meanwhile, swept the market over the past year, making headlines at every major tech conference and becoming a focal point for practically every Big Tech and AI firm. Breaking these tools out of the pilot phase means assigning them to the right tasks. Synthesia, it seems, sees workforce education as ripe for the picking.
“Market opportunities like this do not come along often,” Riparbelli said. “We are at a unique point in time where technology enables agents that can truly understand and respond, and where enterprises are under unprecedented pressure to reskill and upskill their workforce.”

Many major model providers are dead set on the idea that scaling large language models is the key to success. But some worry there is a point of diminishing returns in developing ever-larger LLMs. As the industry grapples with how exactly to use these models, Synthesia’s raise highlights two major possibilities for AI’s next big thing. Video and visual AI bring these language models beyond words, and agents give them the ability to actually meet an acute need.
LINKS

Apple to roll out phase one of its Siri overhaul in iOS 26.4
OpenAI is pitching CEOs to win enterprise business from Anthropic
Tech CEOs got into a bunch of nerd skirmishes over AI at Davos
Meta pauses access to AI characters for teens amid revamp
AI hospitality firm Mews raises $300 million Series D
Booz Allen Hamilton commits $400 million to a16z’s latest fund

Google AI Ultra Upgrade: Premium subscribers now get increased Gemini limits of 1,500 thinking prompts and 500 pro prompts per day.
Google Personal Intelligence in AI Mode: Google’s personalized responses are now available in its AI-powered search.
Gamma Remix: Regenerate presentations for any customer, stakeholder or context that you need.
Freepik: AI-powered video editing with cinematic quality, now with video color grading.
Odyssey-2 Pro: A frontier world model with long-running, interactive simulations, available in 720p.
Gamma: Easy-to-use tool quickly produces high-quality slide decks, web pages, and other visual content, saving time on design and formatting.

Disney: Lead Software Engineer - AI Core Engineering
ByteDance: Research Scientist, LLM Pretraining (Seed-LLM)
DoorDash: Staff Machine Learning Engineer - Ads Economics
Salesforce: Senior/Lead Applied Scientist, Responsible AI
A QUICK POLL BEFORE YOU GO
Which of the big three AI labs feels like it is in the lead so far in 2026?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.


Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
