
Nat Rubio-Licht
Nat Rubio-Licht is a senior reporter at The Deep View. Nat previously led CIO Upside, a newsletter dedicated to enterprise tech, for The Daily Upside. They've also worked for Protocol, The LA Business Journal, and Seattle Magazine. Reach out to Nat at nat@thedeepview.ai.
Articles

Why world models could be the future of AI
Today’s most popular AI models are great with words.
But when given tasks beyond letters and numbers, these models often fail to grasp the world around them. Conventional AI models tend to flounder when faced with real-world tasks, struggling to understand things like physics and causality. It’s why self-driving cars still struggle with edge cases, resulting in safety hazards and traffic law violations. It’s why industrial robots still need tons of training before they can be trusted not to break the things – or people – around them.
The problem is that these models can’t reconcile what they see with what’s actually real.
And from Abu Dhabi to Silicon Valley, a group of researchers from the Institute of Foundation Models at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) is working to fix that. These researchers have their sights set on world models – AI systems that maintain an internal picture of their environment so they can make decisions and act on the world around them.
“Our world model is designed to let AI understand and imagine how the world works — not just by seeing what’s happening, but by predicting what could happen next,” Hector Liu, director of the Institute of Foundation Models’ (IFM) Silicon Valley Lab, told The Deep View.
As it stands, tech firms are intent on using language to control AI – whether via chatbots, video and image generation, or agents. But conventional large language models lack what Stanford University researcher Dr. Fei-Fei Li calls “spatial intelligence,” or the ability to visualize in the way that humans do. These models are only good at predicting what to say or create based on their training data, and are unable to ground what they generate in reality.
This is the main divide between a world model and a video generation model, Liu said: One renders appearance, while the other simulates reality.
Video generation tools like OpenAI’s Sora, Google’s Veo and xAI’s Grok Imagine can produce visually realistic scenes, but world models are designed to understand and simulate the world at large.
While a video generator creates a scene with no sense of state, a world model maintains an internal understanding of the world around it, and how that world evolves, said Liu.
“It predicts how scenes unfold over time and how they respond to actions or interventions, rather than just what they look like,” Liu said. Rather than just generating a scene, these models are interactive and reactive. If a tree falls in the world model, its virtual stump cracks, and the digital grass is flattened in its wake.
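To make that distinction concrete, here is a toy sketch in Python. It is purely illustrative – a stateless generator that renders from scratch on every call versus a simulator that carries state forward and reacts to interventions – and reflects nothing about PAN’s actual architecture:

```python
# A toy illustration of the distinction; nothing here reflects PAN's design.

def video_generator(prompt: str) -> list[str]:
    """Stateless: every call renders frames from scratch, with no memory."""
    return [f"frame {i}: {prompt}" for i in range(3)]

class ToyWorldModel:
    """Stateful: tracks one object's physics and how actions change it."""

    def __init__(self, height: float, velocity: float = 0.0):
        self.height = height        # internal state persists across steps
        self.velocity = velocity

    def step(self, action: str = "wait", dt: float = 0.1) -> dict:
        """Advance the simulated world one tick, reacting to the action."""
        if action == "push":
            self.velocity -= 2.0    # interventions alter how the world evolves
        self.velocity -= 9.8 * dt   # gravity acts whether or not we intervene
        self.height = max(0.0, self.height + self.velocity * dt)
        return {"height": round(self.height, 2), "velocity": round(self.velocity, 2)}

world = ToyWorldModel(height=10.0)
for t in range(3):
    print(world.step("push" if t == 0 else "wait"))
```

The push in the first step keeps influencing every later tick – the kind of persistent cause and effect a pure frame generator has no mechanism to track.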
There are several companies currently in the running to create models that understand the world around them. Both Google DeepMind and Nvidia released new versions of their world models in August, for example.
But MBZUAI’s PAN world model has several advantages over its competitors, said Liu.
- Rather than working only in narrow domains, MBZUAI’s PAN is trained for generality, said Liu, designed to transfer its knowledge across domains. It does so by combining language, vision and action data into one unified space, enabling broad simulation.
- The structure of PAN separates “reasoning from perception,” meaning seeing is distinct from thinking, said Liu. That separation provides the technical advantage of observability, keeping PAN from drifting away from real-world physics (a rough sketch of the idea follows this list).
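As a loose illustration of that split – with stub functions standing in for real networks, and names and dynamics invented for the example rather than drawn from PAN – a perception-reasoning-rendering pipeline might look like:

```python
# Loose sketch of an encode -> reason -> render split; not PAN's design.
import random

def perceive(observation: str) -> list[float]:
    """Perception: compress raw input into a compact latent state."""
    random.seed(observation)              # deterministic stub for an encoder
    return [random.random() for _ in range(4)]

def reason(state: list[float], action: str) -> list[float]:
    """Reasoning: predict the next latent state from state and action alone."""
    nudge = 0.1 if action == "push" else 0.0
    return [s + nudge for s in state]     # toy dynamics in latent space

def render(state: list[float]) -> str:
    """Rendering: turn the predicted latent state back into an observation."""
    return f"scene with mean activation {sum(state) / len(state):.2f}"

state = perceive("a tree at the edge of a clearing")
state = reason(state, "push")
print(render(state))
```

Because the reasoning step operates only on the latent state, its predictions can be inspected independently of the renderer – one way to read the observability advantage Liu describes.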
To gauge how well PAN understands the world, MBZUAI researchers measure two main factors: long-horizon performance, or the ability to simulate a coherent world over time, and agentic usability. If something is wrong within a world model, the agent working within it goes haywire.
The next step in the development of PAN is to make the model’s “imagination space,” or inner visualization capabilities, richer and more precise. This will allow the model to understand and render worlds in even finer detail. MBZUAI is also expanding beyond vision, researching modalities such as sound and motion signals, as well as using an agent to test and learn from different scenarios.
“That’s how we move from a model that only imagines the world to one that can actually think and act within it,” said Liu.
Though several developers want to build models that see the world for what it is, these systems are still in very early stages.
Progress has been made on visual understanding, but humans have more than one sense. For a world model to be truly complete, developing a system with a strong understanding of audio, touch and physical interaction is crucial. The ideal world model not only understands all those modalities but can also create simulations in any of them. “If a modality is missing, the simulation will always be incomplete,” said Liu.
Creating an AI that understands all of those modalities means creating a model that senses and understands the world much as a human does. But doing so comes with significant technical barriers, including access to substantial amounts of complex training data and potentially the need for entirely new model architectures.
But surpassing those barriers could have far-reaching implications, said Liu.
In robotics, these models could reduce the need for intense monitoring and training, limiting “real-world trial and error,” Liu said. Instead, the models that operate robots could be trained in simulations, perfecting actions and catching mistakes before the robots ever reach factory floors or homes. In self-driving cars, meanwhile, a world model could let an autonomous vehicle rehearse thousands of traffic scenarios before the rubber hits the road.
And the possibilities extend beyond the self-piloted machines available today, with research underway in domains such as sports strategy, to simulate player outcomes, and animation and digital art, to design and create worlds, said Liu. More discoveries could emerge once these models are actually in people’s hands.
“In the end, it’s about creating AI that doesn’t just react to the world but can think ahead.”

OpenAI claims $20 billion ARR by year’s end
Sam Altman is addressing the elephant in the room.
The OpenAI CEO said Thursday in a post on X that the company is on track to end the year at an annualized revenue run rate above $20 billion, with a path to “hundreds of billions” by 2030.
The figure is a sharp jump from the $13 billion in revenue that the company’s CFO, Sarah Friar, cited in September – though a run rate annualizes the most recent period’s sales, so the two numbers aren’t directly comparable. It’s still a far cry from the roughly $1.4 trillion in infrastructure deals OpenAI has struck with massive tech firms, commitments that have raised concerns the company won’t be able to cover them.
Altman said that while “each doubling” of revenue is hard earned, the company is “feeling good about our prospects there,” noting that enterprise offerings, consumer devices and robotics could become promising revenue categories. Altman also alluded to directly selling compute capacity to companies and consumers.
As for the rapid pace and magnitude of the infrastructure buildout itself, Altman said the risks of not aggressively building out AI data centers are greater than having too much compute power.
“We are trying to build the infrastructure for a future economy powered by AI, and given everything we see on the horizon in our research program, this is the time to invest to be really scaling up our technology,” Altman said.
Addressing criticism that OpenAI had eyed a federal backstop for its infrastructure investments, Altman said in his post that “taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market.”
Though Altman has full confidence that OpenAI will be a “wildly successful company,” achieving the kind of exponential growth it’s banking on hinges not just on widespread adoption, but on a large pool of users – both enterprise and consumer – willing to pay for OpenAI’s services.
And despite the fact that OpenAI now has deep financial ties with practically every major tech power player, “if we get it wrong, that’s on us,” Altman said. “If one company fails, other companies will do good work.”

AI complicates cyber threats, Google report finds
AI is making the threat landscape more complex by the day.
On Wednesday, Google’s Threat Intelligence Group published findings showing a shift in how threat actors use AI. Rather than simply leaning on it for productivity gains, attackers are increasingly building AI into the malware itself.
According to Google’s AI Threat Tracker, AI is being used to “dynamically alter behavior mid-execution” of malware.
- The intelligence group identified certain families of malware that use large language models during their execution, generating malicious scripts and hiding their own code to evade detection.
- These threat actors will also use AI models to create malicious functions “on demand,” according to the report. “While still nascent, this represents a significant step toward more autonomous and adaptive malware,” Google said in the report.
AI tools like Gemini are also being abused across the attack lifecycle, including by state-sponsored actors from North Korea, Iran and the People's Republic of China, according to the report.
More illicit AI tools have emerged in underground marketplaces this year, including phishing kits, deepfake generators and vulnerability-spotting tools, “lowering the barrier to entry for less sophisticated actors,” the report said.
But given the wide availability of AI tools, these emerging threats aren’t surprising, Cory Michal, chief security officer of AppOmni, told The Deep View. AI has long powered cybersecurity defenses and threat operations alike; as these tools get better, so do the people using them. “Threat actors are leveraging AI to make their operations more efficient and sophisticated, just as legitimate teams use AI to improve productivity,” he said.
“AI doesn’t just make phishing emails more convincing, it makes intrusion, privilege abuse, and session theft more adaptive and scalable,” Michal noted.

Google targets space-based AI infrastructure
Google is looking towards the cosmos to meet AI’s energy demand.
On Tuesday, the tech giant unveiled Project Suncatcher, an initiative to scale AI compute in space. The goal is to harness the power of the Sun via satellites in orbit and minimize the impact of AI infrastructure on “terrestrial resources,” the company said.
The initiative will equip “solar-powered satellite constellations” with Google’s Tensor Processing Units and inter-satellite links, keeping them in near-constant sunlight in low-earth orbit so they can run on solar power around the clock. The first two prototype satellites are slated to launch in 2027.
Google noted in its research paper that a solar panel in the right orbit can be up to eight times more productive than one on Earth. “In the future, space may be the best place to scale AI compute,” Travis Beals, senior director of Project Suncatcher, wrote in the report.
Google’s initiative isn’t the first time we’ve seen companies seek creative solutions to AI’s energy and resource problems.
- Microsoft, for example, has filed a number of patent applications in recent years for systems that rely on cryogenic power, or cold energy, to power data centers.
- Nvidia, meanwhile, has partnered with Starcloud, a company that aims to deploy data centers in space, which on Monday launched its first satellite equipped with an Nvidia GPU.
Given the increasing competition among tech giants to accelerate AI development, the pressure is on to figure out how to fuel that growth.
“Moving some processing off-planet might ease the load on Earth’s power systems, but it also highlights a larger issue: as AI scales, careful planning for sustainable energy use becomes essential across the whole AI infrastructure,” Roman Eloshvili, founder of AI compliance firm ComplyControl, told The Deep View.
It’s no secret that AI is an energy hog. Global energy demand from data centers – driven largely by AI – is expected to double by 2030, according to the International Energy Agency, with a growth rate of around 15% per year, or four times faster than the growth of total electricity demand from all other sectors. Still, tech companies continue to throw astronomical amounts of cash at massive AI infrastructure buildouts. Given the sheer energy demand of these facilities, whether energy infrastructure – clean or otherwise – can keep up remains unclear.
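The IEA’s two figures are consistent with each other: compounding roughly 15% annual growth over the five or so years to 2030 works out to about a doubling, as a quick back-of-the-envelope check shows (the five-year horizon is our assumption):

```python
# Back-of-the-envelope check on the IEA figures: ~15% annual growth,
# compounded over roughly five years to 2030, is about a doubling.
growth_rate = 0.15
years = 5
print(f"{(1 + growth_rate) ** years:.2f}x")  # prints "2.01x"
```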

Nscale scores Microsoft deal, hints at IPO
Neoclouds – cloud providers specializing in AI computing infrastructure – are soaring sky-high.
On Wednesday, Nscale, an AI infrastructure firm, announced plans to go public toward the end of 2026, according to CNBC. The company also announced an expanded partnership with Microsoft worth $14 billion, under which it will supply the tech giant with about 200,000 Nvidia chips.
“The pace with which we have expanded our capacity demonstrates both our readiness and our commitment to efficiency, sustainability and providing our customers with the most advanced technology available,” Josh Payne, founder and CEO of Nscale, said in the company’s announcement.
The deal comes just weeks after Nscale announced two funding rounds in a single week: a $1.1 billion Series B and an additional $433 million pre-Series C agreement.
The news adds to the growing buzz around so-called neoclouds.
- CoreWeave went public earlier this year, with its share price now sitting more than 230% higher than when it debuted. The company has also inked multi-billion-dollar infrastructure deals with Meta and OpenAI.
- In September, Nebius scored a $19.4 billion partnership with Microsoft, and Lambda inked a deal with Nvidia worth $1.5 billion.
The popularity of this AI-specific infrastructure is only projected to grow. According to Synergy Research Group, neocloud revenues are on track to exceed $23 billion this year and could reach almost $180 billion by 2030.
It makes sense why neoclouds are getting so much attention: demand for AI infrastructure is surging, and traditional cloud offerings simply might not be cutting it anymore. That same demand is why AI giants are investing hundreds of billions in data centers through partnerships and projects like Stargate. The rise of neoclouds, however, means compute isn’t centralized solely in the hands of the biggest names in AI.

How SF Compute corners the offtake market
It pays to be the middleman.
At least that’s what Evan Conrad, CEO of The San Francisco Compute Company, better known as SF Compute, has found to be true. The startup, which helps GPU cluster vendors secure “offtake,” says it has grown rapidly as AI developers clamor for more compute without wanting to get locked into expensive, long-term contracts.
So how does it work? To put it simply, Conrad said, if you want a loan to fund a GPU cluster, the lender needs to know that the organizations you’re selling that cluster’s capacity to will be able to pay for it. This creates a market where GPU vendors want to lock customers into lengthy contracts, known as offtake agreements.
That’s where SF Compute comes in, said Conrad. The company secures offtake for those vendors by sourcing customers for GPU clusters, allowing those customers to “buy long contracts, and … sell back portions of it.” Every time someone sells that compute, Conrad’s firm takes a fee.
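In outline, the mechanic Conrad describes is a resale marketplace with a commission. Here’s a minimal, hypothetical Python sketch – the fee rate, names and numbers are invented for illustration and aren’t SF Compute’s actual terms:

```python
# Hypothetical sketch of an offtake resale marketplace; all terms invented.
from dataclasses import dataclass

FEE_RATE = 0.05  # illustrative marketplace fee; not SF Compute's real rate

@dataclass
class Contract:
    gpu_hours: float       # compute committed under the long-term contract
    price_per_hour: float  # original contracted price

@dataclass
class Marketplace:
    fees_collected: float = 0.0

    def resell(self, contract: Contract, hours: float, price_per_hour: float) -> float:
        """Sell back a slice of a long contract; returns the seller's proceeds."""
        assert hours <= contract.gpu_hours, "can't resell more than you hold"
        contract.gpu_hours -= hours    # the slice changes hands
        gross = hours * price_per_hour
        fee = gross * FEE_RATE         # the middleman earns on every trade
        self.fees_collected += fee
        return gross - fee

market = Marketplace()
deal = Contract(gpu_hours=10_000, price_per_hour=2.00)
proceeds = market.resell(deal, hours=4_000, price_per_hour=2.10)
print(f"seller nets ${proceeds:,.2f}; marketplace keeps ${market.fees_collected:,.2f}")
```

However the real contracts are structured, the key point is that liquidity comes from reselling slices of long commitments, with the marketplace paid on every trade.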
“Something like a trillion dollars is flowing into compute,” said Conrad. “And if you don't have secure offtake, that means there's a bubble.”
SF Compute was born out of necessity, Conrad said, having been “backed into it” while searching for short-term compute for a previous AI startup he was building. Unable to find a way to rent the compute he needed in the short term, his then-company purchased a GPU cluster, using what it needed and “subleasing” the rest, he said.
According to Conrad, SF Compute has struck a chord among AI developers; he claims the company’s revenue has grown to 13 times what it was in July. Though Conrad was tight-lipped about customers, he noted that “people are spending on the order of millions of dollars a month.”
“There is more money than has ever been deployed into any infrastructure project in the history of the world, flowing into compute at the moment,” Conrad noted.
Compute is more valuable now than ever – and traditional cloud offerings may not be cutting it. The search for a solution is evident in the growing popularity of neoclouds, such as Nebius, CoreWeave, Nscale and Lambda, which have, in total, inked more than $33 billion in deals with Microsoft. But as the costs of compute grow higher and higher, SF Compute’s success could signal a desire among smaller AI labs for more affordable alternatives.

CEOs are all in on agents
The C-suite is ready for agentic AI.
A study published on Thursday by the International Data Corporation, in partnership with Salesforce, found that CEOs are overwhelmingly bullish on implementing “digital labor” into their workforces, with 99% of more than 150 surveyed saying they’re prepared for the transformation.
These CEOs, all from organizations with between 100 and 10,000 employees, see agents as a key part of this vision:
- 65% said they are looking at AI agents as a means of transforming their business models entirely, and 73% said that digital labor would transform their company’s structure.
- 72% of respondents believe that most employees will have an AI agent reporting to them in the next five years, and 57% of CEOs reported that digital labor would increase the need for workers in leadership roles.
- Some of the departments that CEOs expect to see the most impact include security, software development and customer service.
Given the nascent nature of this technology, the fact that so many CEOs are sold on agentic AI is “striking,” said Alan Webber, research director at IDC. “They're looking at AI agents to reshape their business, to redo what it is they do, reform workflows and business processes.”
Amid this massive transformation, 80% of CEOs report that the future of work involves humans and agents coexisting rather than complete displacement of jobs, with a projected 4 in 5 employees either remaining in their current roles or being redeployed to new ones, according to the report.
While that theoretical figure still leaves 20% of workers out of luck, Webber noted that there are many roles where the impact of shifting to an “agentic enterprise” is still unknown. For example, Webber said, with the rise of AI-powered coding agents handling development, “we don't know exactly what that augmentation and what the human role there looks like yet.”

Meta to use AI to inform ads, content
Meta will soon use your AI chats to inform your scrolling.
The social media giant announced on Wednesday that it will start personalizing ads and content recommendations using user interactions with its generative AI features. Users will be notified starting October 7, before this change goes into effect in mid-December. This policy won’t apply to users in South Korea, the U.K. and the E.U. due to privacy laws in those regions.
To keep your ads from devolving into a Thanksgiving dinner table argument, Meta noted that topics like “religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership,” discussed with its AI bots, won’t be used in ad algorithms.
Meta claimed that its AI products garner more than 1 billion users monthly. Meta AI users will not have the option to opt out, the company confirmed.
A move like this is par for the course for Meta, which already relies on user interactions to hone its ad experiences and content algorithms. “Soon, interactions with AIs will be another signal we use to improve people’s experience,” the company said in its announcement.
With this move, Meta is operating by the same playbook that it always has: utilizing every tool at its disposal to target advertising to a T, and thereby rake in as much cash as possible. In the most recent quarter, the company’s ad revenue totaled nearly $46.6 billion, representing an increase of over 18% from the same quarter in the previous year.
With fears of an AI bubble creeping in, the stakes are growing higher. Companies are investing billions in developing massive AI models, with little evidence of return on investment. Meta noted in its July earnings call that its expenses are expected to range between $114 billion and $118 billion for 2025 and increase above this for 2026, primarily due to investments in AI.
This isn’t the first time Meta has sought to incorporate AI into its digital ad playbook. The company began rolling out some generative features in its Ads Manager back in 2023, and said this past summer that it’s working on an AI tool to create entire ad campaigns from scratch. Still, it’s unclear whether these additions will be fruitful enough to make these models worth the price tag.

Legal AI frenzy grows as Eve hits $1B
Legal AI startup Eve has joined the unicorn club, raising $103 million in Series B funding led by Spark Capital and reaching a $1 billion valuation.
The investment was supported by Andreessen Horowitz, Lightspeed Venture Partners and Menlo Ventures.
Eve’s platform specializes in plaintiff-side law, managing and automating tasks across a case’s life cycle, including case intake, records collection, document drafting, legal research and discovery.
With the sheer amount of documents and information law firms have to handle, the legal field is ripe for AI automation. Eve joins several startups aiming to bring AI into the field, with legal tech investments reaching $2.4 billion this year, according to Crunchbase.
- Last week, Filevine, which organizes documents and workflows for law firms, raised $400 million in equity financing, reaching a valuation of $3 billion.
- Harvey, a startup that automates legal tasks including contract analysis, due diligence and regulatory compliance, hit a valuation of $5 billion in June and $100 million in annual recurring revenue in August.
Jay Madheswaran, CEO and co-founder of Eve, said in the announcement that the company’s “AI-Native” law movement has attracted more than 350 firms as partners, which have used the tech to process more than 200,000 documents.
Eve’s tech has helped firms recover upward of $3.5 billion in settlements and judgments, including a $27 million settlement won by Hershey Law and a $15 million settlement by the Geiger Legal Group last year.
“AI has fundamentally changed the equation of plaintiff law,” Madheswaran said in the announcement. “For the first time, law firms have technology that can think with them.”
Despite its potential, deploying AI in legal contexts poses several risks. For one, AI still faces significant data security challenges, which can cause trouble when handling sensitive documents or confidential information. Hallucination and accuracy issues also present a hurdle – one that Anthropic’s lawyers faced earlier this year after the company’s models hallucinated an incorrect footnote in its legal battle with music publishers.