
Nat Rubio-Licht
Nat Rubio-Licht is a Senior Reporter at The Deep View. Nat previously led CIO Upside, a newsletter dedicated to enterprise tech, for The Daily Upside. They've also worked for Protocol, The LA Business Journal, and Seattle Magazine. Reach out to Nat at nat@thedeepview.ai.
Articles

Perplexity shuns ads for enterprises, devices
As OpenAI jumps headfirst into advertising, another AI leader has taken a stance against it. We're not talking about Anthropic; we're talking about Perplexity.
Last week, Perplexity told reporters at a press briefing that the AI search tool is backing away from its advertising plans and increasing its focus on its subscription business. Though it doesn’t intend to get rid of its free tier, Perplexity plans to pay for it by partnering with device makers moving forward.
The decision follows the heat OpenAI is facing over embedding ads into its popular chatbot. Perplexity executives fear that embedding ads into its platform may diminish consumer trust in its product. Instead, with its subscription business, Perplexity may be eyeing Anthropic’s primary audience: enterprises and developers.
The move is a stark reversal from Perplexity’s previous strategy: CEO Aravind Srinivas said on a podcast last year that advertising would become its core revenue stream over subscriptions and enterprise “if we crack it.” However, ads could have gone either way for the company:
- Given that Perplexity’s flagship product is an AI-powered search engine, leaning into advertising would have made sense, since ads are how Google makes its billions.
- Still, other ad-based businesses have Perplexity beat on user numbers, with the AI search tool sporting roughly 60 million users compared to ChatGPT’s 800 million weekly active users, according to WIRED.
Perplexity doesn’t seem too worried about following in Google’s footsteps, telling reporters in the press briefing that “Google is changing to be like Perplexity more than Perplexity is trying to take on Google.”
Our Deeper View
There might be a reason attendees at the Cerebral Valley conference voted Perplexity a flop risk back in November: The startup does not have an easy road to revenue growth. Though enterprise tech is generally a more lucrative bet than consumer, the startup has Anthropic to contend with, whose Claude Code and Cowork products have seen massive uptake in enterprise and developer circles due to their focus on trust and responsible AI. And among consumers, getting users to latch onto anything but the free tier is a hard sell, with Google providing its own AI search features and ChatGPT cementing itself as a household name. Perplexity will need to become known for something distinct in order to succeed.

Claude Code adds AI-powered security layer
Security has long been seen as one of AI’s biggest pitfalls, and the concerns have only accelerated as agents take on autonomous action. Anthropic is trying to change the narrative.
On Friday, the company introduced Claude Code Security, an AI-powered tool that searches codebases for security vulnerabilities that humans may have missed, and it's now available in a limited research preview.
Built into Claude Code, this tool weeds out security bugs and suggests software patches for human review.
- Anthropic claims that, rather than scanning for known patterns the way traditional code-scanning tools do, Claude Code Security reasons about your code, trying to understand how components interact and how data moves through your systems, “the way a human security researcher would.”
- Before any findings reach human eyes, they go through a multi-stage process that filters out false positives. Everything that survives is then posted to the Claude Code Security dashboard for human approval. (A purely illustrative sketch of this flow follows below.)
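Anthropic hasn’t published implementation details, so treat the following as a minimal, hypothetical sketch of that scan-filter-review flow. Every name, signature and stage boundary here is our own illustration, not Anthropic’s API:

```python
from dataclasses import dataclass

# Hypothetical pipeline mirroring the described flow: scan -> filter -> human review.
# None of these names correspond to Anthropic's actual internals or API.

@dataclass
class Finding:
    file: str
    description: str
    confidence: float  # model-assigned likelihood this is a real vulnerability

def scan(codebase: dict[str, str]) -> list[Finding]:
    """Stage 1: a model reasons over code and data flow, emitting candidate bugs.
    Stubbed here with canned results for illustration."""
    return [
        Finding("auth.py", "token compared with non-constant-time equality", 0.92),
        Finding("utils.py", "unused variable misread as tainted input", 0.31),
    ]

def filter_false_positives(findings: list[Finding], threshold: float = 0.8) -> list[Finding]:
    """Stage 2: a multi-stage filter, reduced here to a single confidence cutoff."""
    return [f for f in findings if f.confidence >= threshold]

def post_for_review(findings: list[Finding]) -> None:
    """Stage 3: surviving findings land on a dashboard for human approval."""
    for f in findings:
        print(f"[NEEDS HUMAN REVIEW] {f.file}: {f.description}")

post_for_review(filter_false_positives(scan({})))
```

The ordering is the notable design choice: filtering runs before anything reaches a human, which is what keeps the review queue from becoming yet another ever-expanding backlog.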
Claude Code Security addresses a prevailing issue for strapped security teams: “Too many software vulnerabilities and not enough people to address them,” Anthropic said. Finding subtle pitfalls “requires skilled human researchers, who are dealing with ever-expanding backlogs.”
Anthropic’s tool adds another feature to its ever-popular suite of enterprise offerings. The feature comes as developers increasingly rely on Claude and other AI tools to do most, if not all, of their coding for them.
But the more we rely on these tools, the more we risk when they fail. For example, a 13-hour Amazon Web Services outage in December reportedly involved the company’s own Kiro agentic AI coding tool, according to the Financial Times. Amazon blamed human error for this outage, claiming that the staffer involved had broader access permissions than expected and that "the same issue could occur with any developer tool or manual action.”
Our Deeper View
Trying to do AI the safe and ethical way is par for the course for Anthropic, yet this tool touches on a particularly pressing need. Developers across enterprises big and small are using Claude Code more than ever, and that reliance is only bound to grow as these companies search for ways to make their AI deployments gain traction. To trust the code they're writing, developers need to trust the codebases Claude is building on. Plus, a tool like this is bound to earn more reputation points for Anthropic if it can effectively prevent security snafus before they happen.

Gemini 3.1 Pro ups performance, lowers cost curve
Just a few months after Gemini 3 shook the industry, Google’s back with another upgrade to its flagship model – and it runs cheaper than rival Anthropic's cutting-edge model.
On Thursday, Google unveiled the preview of Gemini 3.1 Pro, the latest iteration in the Gemini series. Google’s model beats out Anthropic’s Claude and OpenAI’s GPT models in benchmarks related to reasoning, scientific knowledge, agentic terminal coding and tool use, and long-horizon professional tasks.
Gemini 3.1 Pro can handle multimodal inputs, including text, images, audio, and video files, with a context window of up to 1 million tokens. Its outputs, meanwhile, are text-based and capped at 64,000 tokens. In a post on X, Google called Gemini 3.1 Pro its “new baseline for complex problem solving.”
The big news? Google’s model offers frontier capabilities at a lower cost than recent releases from rivals:
- Gemini 3.1 Pro costs $2 per million input tokens and $12 per million output tokens.
- Meanwhile, Claude Opus 4.6, Anthropic’s recently released update to its flagship model, costs roughly $5 per million input tokens and $25 per million output tokens. Gemini also undercuts Sonnet 4.6, Anthropic’s faster, mid-tier model, which sits at $3 per million input tokens and $12 per million output tokens.
- Google’s latest model is competitively priced against OpenAI’s GPT-5.2, which costs slightly less at $1.75 per million input tokens and slightly more at $14 per million output tokens. (A quick worked comparison follows below.)
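For a concrete sense of what these per-token prices mean, here’s a minimal sketch in Python that prices a single hypothetical workload. The workload size (1 million input tokens, 100,000 output tokens) is our assumption; the prices come from the list above:

```python
# Per-million-token prices (USD) taken from the comparison above.
PRICES = {
    "Gemini 3.1 Pro":    {"input": 2.00, "output": 12.00},
    "Claude Opus 4.6":   {"input": 5.00, "output": 25.00},
    "Claude Sonnet 4.6": {"input": 3.00, "output": 12.00},
    "GPT-5.2":           {"input": 1.75, "output": 14.00},
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one job; providers bill per million tokens."""
    p = PRICES[model]
    return input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]

# Hypothetical workload: 1M input tokens, 100k output tokens.
for model in PRICES:
    print(f"{model}: ${job_cost(model, 1_000_000, 100_000):.2f}")
```

On this mix, Gemini 3.1 Pro comes out at $3.20 versus $7.50 for Opus 4.6 and $3.15 for GPT-5.2, which illustrates the point: against Anthropic’s flagship the gap is wide, while against GPT-5.2 the winner hinges on the input-to-output ratio of the workload.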
Currently, Gemini 3.1 Pro is available in the Gemini app and NotebookLM for users on Google AI Pro and Ultra plans. For developers, the model is now available across Google’s suite of enterprise apps, including AI Studio, Antigravity, Vertex AI, Gemini Enterprise, Gemini CLI and Android Studio.
Our Deeper View
Google might shake out to be one of the biggest winners of the AI war, and not just because its models continue to break benchmarks. Though the Anthropic versus OpenAI rivalry is taking up a good deal of airtime, Google’s legacy in both the consumer and enterprise spaces gives it a foundation to better serve a larger number of users. Plus, if it’s able to undercut competitors at a time when AI costs are becoming stifling, Google has the opportunity not only to make models more accessible, but also to foster long-term progress. After all, bending the cost curve down tends to be one of the most powerful, if less flashy, ways to move the needle on innovation.

Accenture pushes AI use, Google unveils cert
AI tools only matter if your employees actually use them, as many companies are learning the hard way.
Consulting giant Accenture is now tying promotions to use of its AI tools, aiming to encourage “regular adoption” of AI by making it a requirement for earning leadership roles, according to a report from the Financial Times.
Confirming the report, Accenture told CNBC that, in order to be the “reinvention partner of choice” for its clients, consultants themselves must adopt “the latest tools and technologies to serve our clients most effectively.”
It’s not the first time Accenture’s AI exuberance has impacted its workforce.
- In September, the company laid off 11,000 employees as part of an AI-focused restructuring program. CEO Julie Sweet said that, as part of “upskilling our reinventors,” those who cannot be reskilled “will be exited.”
- And further buying into the space, Accenture struck a multi-year agreement with Anthropic in December to train 30,000 of its professionals on AI and increase Claude adoption.
Accenture’s decision comes as companies sound the alarm on the gap between AI deployments and AI skills. One survey from Google and Ipsos found that, though 70% of managers see an urgent need for an AI-ready workforce, only 14% of workers have actually been offered AI training.
To address the chasm between what we have and how we use it, Google on Thursday added the Google AI Professional Certificate to its career certificate program. The certificate trains students to practically use Google’s frontier models in enterprise settings, including use cases like building infographics, conducting deep research, turning goals into project plans with timelines, and vibe coding custom apps, the company said in its announcement.
Our Deeper View
AI is creating a barrage of mixed messages in the workforce. Some reports push the narrative that AI is going to completely replace thousands of jobs, and is already primed to do so. Others, meanwhile, claim that AI is actually increasing workloads by expanding the range of tasks that non-technical employees can take on. Even though employers are largely pressuring employees to make good use of this technology, employees may feel as though they’re training their replacements. And while it’s too early to fully understand how AI will reshape the workforce, the pressure from stakeholders and C-suite executives could foster a looming sense of dread that the tech represents an existential threat to employees' livelihoods.

World Labs raises $1B as VCs look beyond LLMs
The world of AI is moving well beyond language.
On Wednesday, World Labs, a startup founded by AI pioneer Fei-Fei Li, announced a $1 billion funding round. The round’s investors included AMD, Autodesk, Emerson Collective, Fidelity, Nvidia and Sea, the company said in its announcement.
Though World Labs didn’t disclose its valuation, previous reports from Bloomberg claim that the company sought funding at a valuation of $5 billion.
“We are focused on accelerating our mission to advance spatial intelligence by building world models that revolutionize storytelling, creativity, robotics, scientific discovery, and beyond,” World Labs said in its press release.
World Labs’ success is the latest sign that researchers are looking for breakthroughs beyond what large language models can provide. Investors, meanwhile, may be eyeing this development as their next big bet.
- Runway, an AI video startup, announced a $315 million Series E funding round that shot its valuation to $5.3 billion, a source told The Deep View. The company intends to use the funding to bolster its world model research, calling it the “most transformative technology of our time.”
- AMI Labs, a world model startup founded by fellow AI godparent Yann LeCun earlier this year, is also reportedly in talks for funding at a multibillion-dollar valuation.
With their capabilities in real-world perception and action, some developers are touting these models as a catalyst for massive progress in visual and physical AI, including fields such as robotics, self-driving cars and game development. But creating these models is no easy feat.
“Simulating reality is simulating a dynamic world,” Anastasis Germanidis, co-founder and CTO of Runway, previously told The Deep View. “The static environment problem is much easier to solve than the dynamic world when you want to simulate and understand … the effects of different actions that you can take.”
Our Deeper View
While world models carry massive promise, they are also far more difficult to build and train than their large language model predecessors. Along with eating up more compute resources and data, creating a machine that can see the world and act on it as humans do is challenging: These machines don’t have millions of years of built-in evolutionary biology to fall back on the way that humans do. And given that the goal is to put these models in charge of training for physical actions, their mistakes have more dire physical consequences than, say, an LLM hallucinating a pizza recipe that calls for mixing Elmer's glue into cheese.

The developer role shifts to orchestrating AI
When AI can create apps from simple prompts, many developers are left wondering what to do with their time.
The tech’s ability to generate practically anything in the digital domain has triggered a number of questions about how these capabilities will change the way tech workers, well … work. Coding tools like Claude Code have entirely automated something that once required a fleet of eager college grads to complete.
The result? Developers are becoming managers, rather than creators. And executives are eating it up:
- Canva’s CTO Jesal Gadhia told Business Insider that most of the company’s senior engineers spend their time reviewing AI-generated code, rather than writing it themselves. As a result, they’ve produced an “unprecedented amount of code” in the last 12 months.
- Meanwhile, Spotify co-CEO Gustav Söderström said in the company’s recent earnings call that its most talented developers haven’t handwritten “a single line of code since December.”
- And Dan Cox, the CTO of Axios, said that the company used AI agents to complete a project in 37 minutes that took one of its best engineers three weeks to complete the previous year.
While this might be fine for senior developers, the question remains how this will impact the green coders just entering the workforce, especially amid the plethora of mixed signals about how AI is impacting the job market.
Some estimates paint a bleak picture: According to data from the Federal Reserve Bank of New York analyzing the degrees with the highest unemployment rate, computer engineering and computer science ranked in the top five, at 7.8% and 7%, respectively.
Others, however, point to AI transforming certain jobs rather than completely killing them. A Gartner study, for instance, said that 50% of the workers laid off as a result of AI will be rehired to do similar work. IBM is taking that sentiment into its own hiring practices: It plans to triple entry-level headcount while shifting focus away from technical tasks that AI can do and toward person-to-person jobs that need human skills.
Our Deeper View
An argument can be made that creation requires humanity. Art, music, literature: these are things that are born as a result of channeling the human experience into an artistic medium so that we can relate to one another. But that argument is a little more difficult to make for technical skills like coding. As these systems become more capable of doing technical work, human creation might become more valuable. As Daniela Amodei said in a recent interview with ABC News: “In a world where AI is very smart and capable of doing so many things, the things that make us human will become much more important."

Open models rise as Moonshot sets $10B target
Open source AI might be catching up to its proprietary rivals.
Chinese AI firm Moonshot, the developer of the Kimi open source model family, is reportedly targeting a $10 billion valuation in an expansion of its current funding round, according to Bloomberg.
The company raised $500 million at a $4.3 billion valuation last month. The round’s existing backers include Alibaba, Tencent and 5Y Capital, which have already committed more than $700 million to the company.
And Moonshot isn’t the only company seeing open source success. Last week, Paris-based Mistral AI, which provides a suite of open-source models, announced a $1.4 billion commitment to AI data centers in Sweden as it hit an annualized revenue run rate of more than $400 million. The company had raised roughly $2 billion at a more than $13.8 billion valuation in September.
Adoption of these models, too, is starting to pick up pace, with Alibaba’s Qwen model family raking in hundreds of millions of downloads on Hugging Face.
Still, these figures are drops in the bucket next to the high-flying valuations of US-based proprietary model developers, with OpenAI eyeing an $830 billion valuation in its upcoming twelve-figure funding round; Anthropic hitting $380 billion after its $30 billion round; and xAI sitting at upwards of $230 billion prior to its acquisition by SpaceX.
However, money might not be everything. Scott Bickley, advisory fellow at Info-Tech Research Group, told The Deep View that these valuations are more a function of structural economic differences, rather than being indicative of model performance.
- That difference is geopolitical: The US relies on massive companies to push frontier AI research via capital concentration, he said.
- Meanwhile, China – where most of the open source AI development is concentrated – favors a large swathe of small companies that conduct research more cheaply, said Bickley.
“This valuation gap belies the fact that open-source models are rapidly closing the intelligence gap,” Bickley told The Deep View.
Our Deeper View
Though there is a common notion that open-source AI still lags behind proprietary competitors, the race may no longer be about who can make the most performant AI model. Rather, it’s a “race to the bottom,” said Bickley, with the winner being the lab that can make the most efficient and affordable model. But the benefits of open source go beyond being cheap: Open ecosystems promote and democratize collaborative innovation, if developers can find ways around the lack of safeguards and the security issues that these models tend to pose.

AI companies court Pentagon, Anthropic resists
AI companies are vying for the Pentagon’s favor.
Elon Musk-owned SpaceX and subsidiary xAI have thrown their hats into a secretive Pentagon contest for a $100 million contract to develop voice-powered autonomous drone swarms, Bloomberg reported on Monday. It marks xAI’s latest effort to collaborate with the government agency, as the AI lab recently signed a contract with the Pentagon to integrate Grok into government sites, as well as a $200 million contract to integrate its tech into military systems.
But the Musk-owned companies aren’t the only ones seeking Pentagon contracts. OpenAI is reportedly supporting Applied Intuition, an autonomous machines startup, with its own submission for the contest, Bloomberg reported.
However, the news comes as a rival’s relationship with the military is reportedly on the rocks: According to Axios, the Pentagon may cut ties with Anthropic over the company’s refusal to relax the safety restrictions in place on its flagship chatbot, Claude.
- The military is currently using Claude only for its classified systems. Though the company is willing to loosen some safety restrictions, it wants to ensure that its chatbot won’t aid the agency in surveilling U.S. citizens or developing autonomous weaponry that can kill without human oversight.
- Defense Secretary Pete Hegseth is reportedly considering designating the company a "supply chain risk," forcing military contractors using Anthropic to also drop the company.
One senior official told Axios that the agency is “going to make sure they pay a price for forcing our hand like this." The Department of War’s spokesperson Sean Parnell told Axios that its relationship with Anthropic is “being reviewed.”
Anthropic’s tension with the Pentagon represents a stark reversal from its previous intentions. Anthropic, along with practically every other major AI company, sought to get in the government’s good graces last year by offering services at steep discounts to support early adoption.

IBM is changing entry-level jobs, not killing them
With even some of the best developers barely writing code anymore, what is there left for an entry-level tech worker to do? According to IBM, a lot.
Last week, the company announced plans to triple entry-level hiring in the US in 2026. However, these positions aren’t going to look like the early career jobs of the past, Nickle LaMoreaux, the company’s HR chief, said at Charter’s Leading with AI Summit.
IBM has overhauled its job descriptions for low-level positions, shifting the focus from tasks that AI can automate to areas where it can’t. That means less coding and admin work and more person-to-person work, such as customer engagement. Though IBM didn’t reveal specific hiring targets, the workforce expansion will be implemented across the board.
“The entry-level jobs that you had two to three years ago, AI can do most of them … you need to be able to show the real value these individuals can bring now,” LaMoreaux said at the summit. “And that has to be through totally different jobs.”
The decision cuts against the common view that AI will demolish the job market for young and early-career workers. It also adds another piece of evidence to the growing pile of conflicting studies and research on AI displacement. For instance:
- A study from Harvard claims that AI tools actually intensify work, rather than lessen it, as people feel more capable of taking on a broader scope of tasks.
- Meanwhile, MIT claims that AI can already automate thousands of hours of work, and make certain jobs obsolete.
- And a study from Gartner splits the difference: While many will lose their jobs as a result of AI-enabled automation, 50% of those workers will be rehired to do similar work.
There’s no doubt that AI automation will have “extraordinary repercussions” for enterprises, Luis Lastras, director of language technologies at IBM, told The Deep View. However, businesses that are seeking to use AI to shave staff and boost the bottom line might be thinking about this technology the wrong way, he said.
If an individual can now do five times as much in one day as they previously could, enterprises shouldn’t be looking at doing the same amount with fewer people. Rather, they should be looking for ways to empower people to do more: more exploration, more experimentation, more creation, he said.
“If I were a business owner, I would focus a lot on very strong people, not on fewer people,” Lastras told me. “Because I would want to scale my ability to experiment.”