
Nat Rubio-Licht
Nat Rubio-Licht is a senior reporter at The Deep View. Nat previously led CIO Upside, a newsletter dedicated to enterprise tech, for The Daily Upside. They've also worked for Protocol, The LA Business Journal, and Seattle Magazine. Reach out to Nat at nat@thedeepview.ai.
Articles

OpenAI claims $20 billion ARR by year’s end
Sam Altman is addressing the elephant in the room.
The OpenAI CEO said Thursday in a post on X that the company is on track to reach an annualized revenue run rate of more than $20 billion this year, with a path to bringing in “hundreds of billions” by 2030.
The figure is a stark jump from the $13 billion in revenue that the company’s CFO, Sarah Friar, cited in September. But it’s still a far cry from the roughly $1.4 trillion in infrastructure deals OpenAI has struck with massive tech firms – commitments that have raised concerns the company won’t be able to cover them.
Altman said that while “each doubling” of revenue is hard earned, the company is “feeling good about our prospects there,” noting that enterprise offerings, consumer devices and robotics could become promising revenue categories. Altman also alluded to directly selling compute capacity to companies and consumers.
As for the rapid pace and magnitude of the infrastructure buildout itself, Altman said the risks of not aggressively building out AI data centers are greater than having too much compute power.
“We are trying to build the infrastructure for a future economy powered by AI, and given everything we see on the horizon in our research program, this is the time to invest to be really scaling up our technology,” Altman said.
Addressing criticism that OpenAI had eyed a federal backstop for its infrastructure investments, Altman said in his post that “taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market.”
Though Altman has full confidence that OpenAI will be a “wildly successful company,” achieving the kind of exponential growth it’s banking on hinges not just on widespread adoption, but also on a large pool of users – both enterprise and consumer – willing to pay for OpenAI’s services.
And even though OpenAI now has deep financial ties with practically every major tech power player, “if we get it wrong, that’s on us,” Altman said. “If one company fails, other companies will do good work.”

AI complicates cyber threats, Google report finds
AI is making the threat landscape more complex by the day.
On Wednesday, Google’s Threat Intelligence Group published findings showing a shift in how threat actors use AI. Rather than simply leaning on it for productivity gains, attackers are increasingly making AI-powered malware part of their strategies.
According to Google’s AI Threat Tracker, AI is being used to “dynamically alter behavior mid-execution” of malware.
- The intelligence group identified certain families of malware that use large language models during their execution, generating malicious scripts and hiding their own code to evade detection.
- These threat actors will also use AI models to create malicious functions “on demand,” according to the report. “While still nascent, this represents a significant step toward more autonomous and adaptive malware,” Google said in the report.
AI tools like Gemini are also being abused across the attack lifecycle, including by state-sponsored actors from North Korea, Iran and the People's Republic of China, according to the report.
More illicit AI tools have emerged in underground marketplaces this year, including phishing kits, deepfake generators and vulnerability-spotting tools, “lowering the barrier to entry for less sophisticated actors,” the report said.
But given the wide availability of AI tools, these emerging threats aren’t surprising, Cory Michal, chief security officer of AppOmni, told The Deep View. AI has long powered cyber defenses and attacks alike, and as these tools get better, so do the people using them. “Threat actors are leveraging AI to make their operations more efficient and sophisticated, just as legitimate teams use AI to improve productivity,” he said.
“AI doesn’t just make phishing emails more convincing, it makes intrusion, privilege abuse, and session theft more adaptive and scalable,” Michal noted.

Google targets space-based AI infrastructure
Google is looking towards the cosmos to meet AI’s energy demand.
On Tuesday, the tech giant unveiled Project Suncatcher, an initiative to scale AI compute in space. The goal is to harness the power of the Sun via satellites in orbit and minimize the impact of AI infrastructure on “terrestrial resources,” the company said.
The initiative will equip “solar-powered satellite constellations” with Google’s Tensor Processing Units and inter-satellite links, keeping them in constant view of the sun in low Earth orbit so they can run on solar power around the clock. The first two prototype satellites for the project are slated to launch in 2027.
Google noted in its research paper that a solar panel in the right orbit can be up to eight times more productive than one on Earth. “In the future, space may be the best place to scale AI compute,” Travis Beals, senior director of Project Suncatcher, wrote in the report.
Google isn’t the first company to seek creative solutions to AI’s energy and resource problems.
- Microsoft, for example, has filed a number of patent applications in recent years for systems that rely on cryogenic power, or cold energy, to run data centers.
- Nvidia, meanwhile, has partnered with Starcloud, a company that aims to deploy data centers in space, which on Monday launched its first satellite equipped with an Nvidia GPU.
Given the increasing competition among tech giants to accelerate AI development, the pressure is on to figure out how to fuel that growth.
“Moving some processing off-planet might ease the load on Earth’s power systems, but it also highlights a larger issue: as AI scales, careful planning for sustainable energy use becomes essential across the whole AI infrastructure,” Roman Eloshvili, founder of AI compliance firm ComplyControl, told The Deep View.
It’s no secret that AI is an energy hog. Global energy demand from data centers – driven largely by AI – is expected to double by 2030, according to the International Energy Agency, with a growth rate of around 15% per year, or four times faster than the growth of total electricity demand from all other sectors. Still, tech companies continue to throw astronomical amounts of cash at massive AI infrastructure buildouts. Given the sheer energy demand of these facilities, whether energy infrastructure – clean or otherwise – can keep up remains unclear.
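The arithmetic behind that projection holds up: growth compounding at 15% a year doubles in roughly five years. A minimal sketch of the math, assuming a normalized 2025 baseline (the start year and baseline value are illustrative assumptions, not IEA figures):

```python
# Back-of-the-envelope check on the IEA projection cited above:
# ~15% annual growth, compounded, roughly doubles demand in five years.
# The 2025 baseline year and normalized starting value are assumptions.
growth_rate = 0.15
demand = 1.0  # normalized 2025 baseline

for year in range(2026, 2031):
    demand *= 1 + growth_rate
    print(f"{year}: {demand:.2f}x baseline")
# The final line prints "2030: 2.01x baseline" -- a doubling by 2030.
```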

Nscale scores Microsoft deal, hints at IPO
Neoclouds – specialized cloud providers focused on AI computing – are soaring sky-high.
On Wednesday, AI infrastructure firm Nscale announced plans to go public toward the end of 2026, according to CNBC. It also announced an expanded partnership with Microsoft, worth $14 billion, to supply the tech giant with 200,000 Nvidia chips.
“The pace with which we have expanded our capacity demonstrates both our readiness and our commitment to efficiency, sustainability and providing our customers with the most advanced technology available,” Josh Payne, founder and CEO of Nscale, said in the company’s announcement.
The deal comes just weeks after Nscale announced two funding rounds in one week: a $1.1 billion Series B and an additional $433 million pre-Series C agreement.
The news adds to the growing buzz around so-called neoclouds.
- CoreWeave went public earlier this year, with its share price now sitting more than 230% higher than when it debuted. The company has also inked multi-billion-dollar infrastructure deals with Meta and OpenAI.
- In September, Nebius scored a $19.4 billion partnership with Microsoft, and Lambda inked a deal with Nvidia worth $1.5 billion.
The popularity of this AI-specific infrastructure is only projected to grow. According to Synergy Research Group, neocloud revenues are on track to exceed $23 billion this year and could reach almost $180 billion by 2030.
It makes sense why neoclouds are getting so much attention: demand for AI infrastructure is surging, and traditional cloud offerings simply might not be cutting it anymore. That same demand is why AI giants are investing hundreds of billions in data centers through partnerships and projects like Stargate. The rise of neoclouds, however, means compute isn’t centralized solely in the hands of the biggest names in AI.

How SF Compute corners the offtake market
It pays to be the middleman.
At least that’s what Evan Conrad, CEO of The San Francisco Compute Company, better known as SF Compute, has found to be true. The startup, which helps GPU cluster vendors secure “offtake,” claims it has experienced rapid growth as AI developers clamor for more compute without wanting to get locked into expensive, long-term contracts.
So how does it work? To put it simply, Conrad said, if you go to get a loan to fund a GPU cluster, the lender needs to know that the organizations you’re selling that cluster’s capacity to will be able to pay for it. This creates a market where GPU vendors want to lock customers into lengthy contracts, known as offtake agreements.
That’s where SF Compute comes in, said Conrad. The company secures offtake for those vendors by sourcing customers for GPU clusters, allowing those customers to “buy long contracts, and … sell back portions of it.” Every time someone sells that compute, Conrad’s firm takes a fee.
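To make those mechanics concrete, here’s a minimal sketch of the resale model Conrad describes. The fee rate, prices and volumes below are illustrative assumptions, not SF Compute’s actual terms:

```python
# Hypothetical sketch of the resale mechanics described above: a customer
# commits to a long compute contract, resells the hours it doesn't need,
# and the marketplace takes a fee on each trade. The fee rate, prices and
# volumes are illustrative assumptions, not SF Compute's actual terms.
MARKETPLACE_FEE = 0.05  # assumed 5% cut per trade

def resell(contract_hours: float, hours_needed: float, price_per_hour: float):
    """Return (seller proceeds, marketplace fee) for the unused surplus."""
    surplus = max(contract_hours - hours_needed, 0.0)
    gross = surplus * price_per_hour
    fee = gross * MARKETPLACE_FEE
    return gross - fee, fee

# A buyer committed to 10,000 GPU-hours but only needs 6,000 this month.
proceeds, fee = resell(10_000, 6_000, price_per_hour=2.50)
print(f"Seller recovers ${proceeds:,.2f}; marketplace earns ${fee:,.2f}")
# -> Seller recovers $9,500.00; marketplace earns $500.00
```

Because the marketplace earns its cut on every trade, the more compute changes hands, the better the middleman does.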
“Something like a trillion dollars is flowing into compute,” said Conrad. “And if you don't have secure offtake, that means there's a bubble.”
SF Compute was born out of necessity, Conrad said, having been “backed into it” while searching for short-term compute for a previous AI startup he was building. Unable to find a way to rent the compute he needed in the short term, his then-company purchased a GPU cluster, using what it needed and “subleasing” the rest, he said.
According to Conrad, SF Compute has struck a chord among AI developers: he claims the company’s revenue has grown to 13 times what it was in July. Though Conrad was close-lipped about customers, he noted that “people are spending on the order of millions of dollars a month.”
“There is more money than has ever been deployed into any infrastructure project in the history of the world, flowing into compute at the moment,” Conrad noted.
Compute is more valuable now than ever – and traditional cloud offerings may not be cutting it. The search for a solution is evident in the growing popularity of neoclouds such as Nebius, CoreWeave, Nscale and Lambda – two of which, Nebius and Nscale, have together inked more than $33 billion in deals with Microsoft. But as the cost of compute climbs higher and higher, SF Compute’s success could signal a desire among smaller AI labs for more affordable alternatives.

CEOs are all in on agents
The C-suite is ready for agentic AI.
A study published on Thursday by the International Data Corporation, in partnership with Salesforce, found that CEOs are overwhelmingly bullish on incorporating “digital labor” into their workforces, with 99% of the more than 150 surveyed saying they’re prepared for the transformation.
These CEOs, all from organizations with 100 to 10,000 employees, see agents as a key part of this vision:
- 65% said they are looking at AI agents as a means of transforming their business models entirely, and 73% said that digital labor would transform their company’s structure.
- 72% of respondents believe that most employees will have an AI agent reporting to them in the next five years, and 57% of CEOs reported that digital labor would increase the need for workers in leadership roles.
- Some of the departments that CEOs expect to see the most impact include security, software development and customer service.
Given the nascent nature of this technology, the fact that so many CEOs are sold on agentic AI is “striking,” said Alan Webber, research director at IDC. “They're looking at AI agents to reshape their business, to redo what it is they do, reform workflows and business processes.”
Even amid this massive transformation, 80% of CEOs say the future of work involves humans and agents coexisting rather than complete displacement of jobs, with a projected four in five employees either remaining in their current roles or being redeployed to new ones, according to the report.
While that theoretical figure still leaves 20% of workers out of luck, Webber noted that there are many roles where the impact of shifting to an “agentic enterprise” is still unknown. For example, Webber said, with the rise of AI-powered coding agents handling development, “we don't know exactly what that augmentation and what the human role there looks like yet.”

Meta to use AI to inform ads, content
Meta will soon use your AI chats to inform your scrolling.
The social media giant announced on Wednesday that it will start personalizing ads and content recommendations using user interactions with its generative AI features. Users will be notified starting October 7, before this change goes into effect in mid-December. This policy won’t apply to users in South Korea, the U.K. and the E.U. due to privacy laws in those regions.
To keep your ads from devolving into the Thanksgiving dinner table, Meta noted that topics like “religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership,” discussed with its AI bots, won’t be used in ad algorithms.
Meta claimed that its AI products garner more than 1 billion users monthly. Meta AI users will not have the option to opt out, the company confirmed.
A move like this is par for the course for Meta, which already relies on user interactions to hone its ad experiences and content algorithms. “Soon, interactions with AIs will be another signal we use to improve people’s experience,” the company said in its announcement.
With this move, Meta is operating by the same playbook that it always has: utilizing every tool at its disposal to target advertising to a T, and thereby rake in as much cash as possible. In the most recent quarter, the company’s ad revenue totaled nearly $46.6 billion, representing an increase of over 18% from the same quarter in the previous year.
With fears of an AI bubble creeping in, the stakes are growing higher. Companies are investing billions in developing massive AI models, with little evidence of return on investment. Meta noted in its July earnings call that its expenses are expected to range between $114 billion and $118 billion for 2025 and increase above this for 2026, primarily due to investments in AI.
This isn’t the first time Meta has sought to incorporate AI into its digital ad playbook. The company began rolling out some generative features in its Ads Manager back in 2023, and said this past summer that it’s working on an AI tool to create entire ad campaigns from scratch. Still, it’s unclear whether these additions will be fruitful enough to make these models worth the price tag.

Legal AI frenzy grows as Eve hits $1B
Legal AI startup Eve has joined the unicorn club, raising $103 million in Series B funding led by Spark Capital and reaching a $1 billion valuation.
The investment was supported by Andreessen Horowitz, Lightspeed Venture Partners and Menlo Ventures.
Eve’s platform specializes in plaintiff-side law, managing and automating tasks across a case’s life cycle, including intake, records collection, document drafting, legal research and discovery.
With the sheer volume of documents and information law firms handle, the legal field is ripe for AI automation. Eve joins several startups aiming to bring AI to the practice of law, with legal tech investments reaching $2.4 billion this year, according to Crunchbase.
- Last week, Filevine, which organizes documents and workflows for law firms, raised $400 million in equity financing, reaching a valuation of $3 billion.
- Harvey, a startup that automates legal tasks including contract analysis, due diligence and regulatory compliance, hit a valuation of $5 billion in June and $100 million in annual recurring revenue in August.
Jay Madheswaran, CEO and co-founder of Eve, said in the announcement that the company’s “AI-Native” law movement has attracted more than 350 firms as partners, which have used the tech to process more than 200,000 documents.
Eve’s tech has helped firms recover upward of $3.5 billion in settlements and judgments, including a $27 million settlement won by Hershey Law and a $15 million settlement by the Geiger Legal Group last year.
“AI has fundamentally changed the equation of plaintiff law,” Madheswaran said in the announcement. “For the first time, law firms have technology that can think with them.”
Despite its potential, deploying AI in legal contexts poses several risks. For one, AI still faces significant data security challenges, which can cause trouble when handling sensitive documents or confidential information. Hallucination and accuracy issues also present a hurdle – one Anthropic’s lawyers faced earlier this year, when the company’s models hallucinated a footnote in its legal battle with music publishers.

ChatGPT gets parental controls
AI and teenagers have something in common: They can be unpredictable.
Looking to rein in both, OpenAI on Monday launched parental controls for ChatGPT, allowing parents and teens to link their accounts to limit, monitor and manage how the chatbot is used. The AI giant rolled out the controls in partnership with Common Sense Media and other advocacy groups, as well as the attorneys general of California and Delaware.
Parents can now control a number of settings on their teens’ accounts, including:
- Setting quiet hours, removing voice mode and image generation capabilities, turning off ChatGPT’s ability to save memories and opting out of model training.
- OpenAI will also automatically limit “graphic content, viral challenges, sexual, romantic or violent role play, and extreme beauty ideals” for teen accounts.
If OpenAI’s tech detects something is “seriously wrong,” such as signs of self-harm or “acute distress,” parents will be notified immediately unless they have opted out. In more serious cases, such as signs of imminent danger, OpenAI is working on a process to contact emergency services.
These guardrails come on the heels of a lawsuit alleging that OpenAI’s ChatGPT is responsible for the death of a 16-year-old boy, whose parents claim he was using the chatbot to explore suicide methods.
These safeguards arrive as an increasing number of teens turn to AI for companionship. A July Common Sense Media survey of more than 1,000 teens found that 72% reported using AI companions, with 33% relying on them for emotional support, friendship or romantic interactions.
Robbie Torney, senior director of AI programs at Common Sense Media, said in a statement that safeguards like these are “just one piece of the puzzle” in safe AI use.
In its announcement, OpenAI said these measures will “iterate and improve over time,” noting that it’s working on an age prediction system that it announced in mid-September. “Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them.”