Nat Rubio-Licht
Senior Reporter

Nat Rubio-Licht is a Senior Reporter at The Deep View. Nat previously led CIO Upside, a newsletter dedicated to enterprise tech, for The Daily Upside. They've also worked for Protocol, The LA Business Journal, and Seattle Magazine. Reach out to Nat at nat@thedeepview.ai.

The developer role shifts to orchestrating AI

When AI can create apps from simple prompts, many developers are left wondering what to do with their time.

The tech’s ability to generate practically anything in the digital domain has raised a number of questions about how these capabilities will change the way tech workers, well … work. Coding tools like Claude Code have automated routine programming work that once required a fleet of eager college grads.

The result? Developers are becoming managers, rather than creators. And executives are eating it up:

  • Canva’s CTO Jesal Gadhia told Business Insider that most of the company’s senior engineers spend their time reviewing AI-generated code, rather than writing it themselves. As a result, they’ve produced an “unprecedented amount of code” in the last 12 months.
  • Meanwhile, Spotify co-CEO Gustav Söderström said in the company’s recent earnings call that its most talented developers haven’t handwritten “a single line of code since December.”
  • And Dan Cox, the CTO of Axios, said the company used AI agents to finish in 37 minutes a project that had taken one of its best engineers three weeks the previous year.

While this might be fine for senior developers, the question remains how this will affect the green coders just entering the workforce, especially amid a plethora of mixed signals about how AI is reshaping the job market.

Some estimates paint a bleak picture: According to Federal Reserve Bank of New York data on the degrees with the highest unemployment rates, computer engineering and computer science ranked in the top five, at 7.8% and 7%, respectively.

Others, however, point to AI transforming certain jobs rather than killing them outright. A Gartner study, for instance, projected that 50% of the workers laid off as a result of AI will be rehired to do similar work. IBM is taking that sentiment into its own hiring practices: the company plans to triple entry-level headcount while shifting those roles away from technical tasks AI can handle and toward person-to-person work that requires human skills.

Our Deeper View

An argument can be made that creation requires humanity. Art, music, literature: these are born of channeling the human experience into an artistic medium so that we can relate to one another. That argument is harder to make for technical skills like coding. As these systems become more capable of doing technical work, human creation might become more valuable. As Daniela Amodei said in a recent interview with ABC News: “In a world where AI is very smart and capable of doing so many things, the things that make us human will become much more important.”

Open models rise as Moonshot sets $10B target

Open source AI might be catching up to its proprietary rivals.

Chinese AI firm Moonshot, the developer of the Kimi open-source model family, is reportedly targeting a $10 billion valuation in an expansion of its current funding round, according to Bloomberg.

The company raised $500 million at a $4.3 billion valuation last month. The round’s existing backers include Alibaba, Tencent and 5Y Capital, which have already committed more than $700 million to the company.

And Moonshot isn’t the only company seeing open source success. Last week, Paris-based Mistral AI, which provides a suite of open-source models, announced a $1.4 billion commitment to AI data centers in Sweden as it hit an annualized revenue run rate of more than $400 million. The company had raised roughly $2 billion at a more than $13.8 billion valuation in September.

Adoption of these models, too, is starting to pick up pace, with Alibaba’s Qwen model family racking up hundreds of millions of downloads on Hugging Face.

Still, these figures are drops in the bucket next to the high-flying valuations of US-based proprietary model developers, with OpenAI eyeing an $830 billion valuation in its upcoming twelve-figure funding round; Anthropic hitting $380 billion after its $30 billion round; and xAI sitting at upwards of $230 billion prior to its acquisition by SpaceX.

However, money might not be everything. Scott Bickley, advisory fellow at Info-Tech Research Group, told The Deep View that these valuations are more a function of structural economic differences than of model performance.

  • That difference is geopolitical: The US relies on massive companies to push frontier AI research via capital concentration, he said.
  • Meanwhile, China, where most open-source AI development is concentrated, favors a broad swath of smaller companies that can conduct research more cheaply, said Bickley.

“This valuation gap belies the fact that open-source models are rapidly closing the intelligence gap,” Bickley told The Deep View.

Our Deeper View

Though there is a common notion that open-source AI still lags behind its proprietary competitors, the race may no longer be about who can build the most performant model. Instead, it’s a “race to the bottom,” said Bickley, with the winner being the lab that builds the most efficient and affordable model. But the benefits of open source go beyond cost: open ecosystems promote and democratize collaborative innovation, provided developers can work around the weaker safeguards and security issues these models tend to pose.

AI companies court Pentagon, Anthropic resists

AI companies are vying for the Pentagon’s favor.

Elon Musk-owned SpaceX and its subsidiary xAI have thrown their hats into a secretive Pentagon contest for a $100 million contract to develop voice-powered autonomous drone swarms, Bloomberg reported on Monday. It marks xAI’s latest effort to collaborate with the agency: the lab recently signed a contract to integrate Grok into government sites, as well as a $200 million deal to embed its tech in military systems.

But the Musk-owned companies aren’t the only ones seeking Pentagon contracts. OpenAI is reportedly supporting Applied Intuition, an autonomous machines startup, with its own submission for the contest, Bloomberg reported.

However, the news comes as a rival’s relationship with the military is reportedly on the rocks: According to Axios, the Pentagon may cut ties with Anthropic over the company’s refusal to relax the safety restrictions in place on its flagship chatbot, Claude.

  • The military is currently using Claude only for its classified systems. Though the company is willing to loosen some safety restrictions, it wants to ensure that its chatbot won’t aid the agency in surveilling U.S. citizens or in developing autonomous weaponry that can kill without human oversight.
  • Defense Secretary Pete Hegseth is reportedly considering designating the company a “supply chain risk,” which would force military contractors using Anthropic to also drop the company.

One senior official told Axios that the agency is “going to make sure they pay a price for forcing our hand like this.” The Department of War’s spokesperson Sean Parnell told Axios that its relationship with Anthropic is “being reviewed.”

Anthropic’s tension with the Pentagon marks a stark reversal from last year, when the company, along with practically every other major AI firm, sought to get into the government’s good graces by offering services at steep discounts to support early adoption.

IBM is changing entry-level jobs, not killing them

With even some of the best developers barely writing code anymore, what is there left for an entry-level tech worker to do? According to IBM, a lot.

Last week, the company announced plans to triple entry-level hiring in the US in 2026. However, these positions aren’t going to look like the early career jobs of the past, Nickle LaMoreaux, the company’s HR chief, said at Charter’s Leading with AI Summit.

IBM has overhauled its job descriptions for entry-level positions, shifting the focus from tasks that AI can automate to areas it can’t: less coding and admin work, more person-to-person work such as customer engagement. Though IBM didn’t reveal specific hiring numbers, the expansion will be implemented across the board.

“The entry-level jobs that you had two to three years ago, AI can do most of them … you need to be able to show the real value these individuals can bring now,” LaMoreaux said at the summit. “And that has to be through totally different jobs.”

The decision cuts against the common view that AI will demolish the job market for young, early-career workers. It also adds another piece of evidence to the growing pile of conflicting studies and research on AI displacement. For instance:

  • A study from Harvard claims that AI tools actually intensify work, rather than lessen it, as people feel more capable of taking on a broader scope of tasks.
  • Meanwhile, MIT claims that AI can already automate thousands of hours of work, and make certain jobs obsolete.
  • And a study from Gartner splits the difference: While many will lose their jobs as a result of AI-enabled automation, 50% of those workers will be rehired to do similar work.

There’s no doubt that AI automation will have “extraordinary repercussions” for enterprises, Luis Lastras, director of language technologies at IBM, told The Deep View. However, businesses that are seeking to use AI to shave staff and boost the bottom line might be thinking about this technology the wrong way, he said.

If an individual can now do five times as much in one day as they previously could, enterprises shouldn’t be looking to do the same amount of work with fewer people. Rather, they should be looking for ways to empower people to do more: more exploration, more experimentation, more creation, he said.

“If I were a business owner, I would focus a lot on very strong people, not on fewer people,” Lastras told me. “Because I would want to scale my ability to experiment.”

Anthropic raises $30B and leans into ethics

Everybody loves Anthropic.

On Thursday, the company announced a $30 billion Series G funding round at a $380 billion post-money valuation. The company said in a blog post that it would use the funding to fuel its infrastructure buildout, product development and frontier model research.

The round included dozens of investors, with big names such as JPMorganChase, Goldman Sachs, Fidelity and BlackRock on the roster, as well as previously announced investments from Nvidia and Microsoft. The funding is more than triple the $10 billion target the company initially set.

Additionally, the company announced that its revenue run rate hit $14 billion, a figure that’s grown tenfold annually over the past three years. Anthropic attributed this growth to becoming the “platform of choice for enterprises and developers.”

In the wake of its success, Anthropic is sharing the love. On Wednesday, the company announced that it intends to cover the rising costs of electricity stemming from the buildout of AI data centers.

This includes covering 100% of grid updates needed to support data centers, procuring new sources of power to protect consumers from price increases, investing in “curtailment systems” that cut data center power usage, and addressing these data centers’ impacts on communities throughout development. “Done right, AI infrastructure can be a catalyst for the broader energy investment the country needs,” Anthropic said in a blog post.

The decision comes as Anthropic flirts with building out 10 gigawatts of data center capacity, The Information reported earlier this week.

That pledge isn’t Anthropic’s only show of altruism for the week: On Thursday, the AI firm announced plans to donate $20 million to an AI super PAC focused on safety, guardrails and public education about AI. The group, called Public First Action, has reportedly been in talks with Anthropic about a donation since November and aims to ensure that OpenAI doesn’t concentrate too much political power, according to The New York Times.

In a blog post, the company said that the AI policy decisions made over the next few years will “touch nearly every part of public life, from the labor market to online child protection to national security and the balance of power between nations.”

The PAC’s mission also runs counter to that of Leading the Future, the political group OpenAI has thrown its own weight behind, which pushes against state AI regulation in favor of a looser national framework.

Both the grid pledge and the PAC donation signal that the company is seeking to keep its moral compass aligned with true north.

OpenAI, Google sound alarm on model cloning

Major AI firms are sounding the alarm on secondhand models.

On Thursday, OpenAI sent a memo to US lawmakers warning them that Chinese AI firm DeepSeek is using distillation techniques to “free-ride” on the capabilities of OpenAI's models, as well as those of other frontier labs. The firm says DeepSeek is using “obfuscated methods” to undercut OpenAI’s defenses.

The memo claims that Chinese LLM providers and university research labs are using OpenAI’s models in ways that would be “highly beneficial” for creating competitor models through distillation.

  • OpenAI has also observed accounts associated with DeepSeek employees using methods to “circumvent” access restrictions.
  • OpenAI has added safeguards to prevent this distillation, but the company claims the circumvention techniques are growing more sophisticated in response.
  • And although distillation is a commonly used technique in AI training, OpenAI claims that doing it under the radar can produce models that lack key guardrails, leading to “dangerous outputs in high-risk domains.”

“It’s important to note that there are legitimate use cases for distillation … However, we do not allow our outputs to be used to create imitation frontier AI models that replicate our capabilities,” OpenAI said in the memo.
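
For context on what the technique involves: distillation boils down to harvesting a stronger model’s outputs and training a smaller model to imitate them. Below is a minimal, self-contained sketch of that loop in PyTorch; the teacher function, the toy student network and the prompts are all illustrative stand-ins, not anything described in OpenAI’s memo.

    # Illustrative sketch of output-based distillation: a tiny "student"
    # is trained to imitate text produced by a stand-in "teacher". Real
    # pipelines follow the same loop, with API calls to a frontier model
    # as the teacher and a full LLM as the student.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def teacher_generate(prompt: str) -> str:
        # Placeholder for querying a stronger model's API at scale.
        return prompt + " => a long, detailed answer from the stronger model"

    class TinyStudent(nn.Module):
        # Byte-level next-token predictor; a toy proxy for a student LLM.
        def __init__(self, vocab: int = 256, dim: int = 64):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.head = nn.Linear(dim, vocab)

        def forward(self, ids: torch.Tensor) -> torch.Tensor:
            return self.head(self.embed(ids))

    student = TinyStudent()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    for prompt in ["explain distillation", "summarize this memo"]:
        text = teacher_generate(prompt)            # harvest the teacher's output
        ids = torch.tensor([list(text.encode())])  # raw bytes as token ids (0-255)
        logits = student(ids[:, :-1])              # student predicts each next byte
        loss = F.cross_entropy(logits.reshape(-1, 256), ids[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

The harvesting step is the part the labs are objecting to: prompting a live model at scale and keeping its outputs as training data.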

And OpenAI isn’t alone in calling out these risks. On Thursday, Google’s Threat Intelligence Group published a report detailing a flood of “commercially motivated” actors seeking to clone its flagship model, Gemini. The company said in the report that these actors are using “distillation attacks,” prompting Gemini thousands of times to learn how it works and bolster their own models.

Though Google didn’t call out any specific group in its report, the company said it “observed and mitigated frequent model extraction attacks from private sector entities all over the world and researchers seeking to clone proprietary logic.”

Study: AI leads people to work more, not less

Though many workers worry that AI is going to take their jobs, evidence suggests that it’s actually giving AI adopters more work, not less.

In an eight-month study of approximately 200 workers at a US-based tech company, Harvard University researchers discovered that AI tools consistently intensified work, rather than reducing the load. The researchers found that AI tools allowed workers to complete tasks faster, enabling them to take on a broader scope of tasks, thereby extending their work hours.

Though the company being studied offered its employees enterprise subscriptions to AI tools, the researchers noted that the employees were not mandated to use AI. Rather, the workers did so of their own accord.

The problem, however, is that once the excitement over these shiny new AI tools wore off, workers found that their workloads had crept up without their noticing. The researchers identified three main ways that work intensified:

  • AI made tasks that were once out of reach feel achievable to new audiences. For example, coding and engineering tasks are now within reach for non-technical employees.
  • Reduced friction in starting and completing tasks also blurred the boundaries between work and non-work.
  • Finally, these tools made multitasking easier, with the tech treated as a “partner” that could handle more tasks in the background. The consequence, however, was a heavier task load.

Harvard’s study joins a litany of conflicting research detailing how AI will impact the way we work. While some say that AI can already automate thousands of hours of work and make certain jobs obsolete, others argue that AI will create new jobs entirely. This study lands somewhere in the middle: Creating new work in the jobs that we already have, while quietly piling on more right under our noses.

Runway’s $5.3B valuation fuels world models

The AI industry just got another indicator that the future lies beyond words alone.

On Tuesday, video AI firm Runway announced a $315 million Series E funding round. The funding slingshots the startup to a $5.3 billion post-money valuation, a source familiar with the matter told The Deep View. The round was led by General Atlantic, and included participation from investors such as Nvidia, Fidelity, Adobe and AMD.

New York-based Runway specializes in generative video, with its core offering being the Gen series of video models. In December, the company released Gen-4.5, the model’s most recent iteration, capable of handling text and image inputs to produce realistic, cinematic videos with improved motion and prompt adherence over previous versions.

However, with this funding round, Runway has its eyes on a new prize: world models.

  • In the announcement, Runway said it intends to use the funding to “pre-train the next generation of world models and bring them to new products and industries.” The company called world models the “most transformative technology of our time.”
  • It follows a December blog post from the company titled “Universal World Simulator,” detailing its vision to train video models at such a large scale that they become world models. “To predict the next frame, a video model must learn how the world works,” Runway wrote.

Runway’s interest mirrors a broader industry shift toward AI that can work with more than just text. World Labs and AMI Labs, founded by AI godparents Fei-Fei Li and Yann LeCun, respectively, are each in talks to raise funding at multibillion-dollar valuations to build their models. Meanwhile, Google’s Genie world model is already being put to use by Waymo to train for rare driving encounters.

The industry is betting on world models, which can perceive and act on the world, as physical applications of AI become more and more tangible. These models could help robotics systems understand physics, which is crucial to scaling physical AI safely, Anastasis Germanidis, co-founder and CTO of Runway, told The Deep View.

“If you take any self-driving company’s data set, the vast, vast majority is going to be non-accidents,” Germanidis said. “But the place where their models need to perform the best … is exactly in those moments that you don’t have any data. Being able to generate data for those use cases … they become a lot better at reasoning through those scenarios.”
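
Germanidis’s point maps onto a simple data-curation pattern: find the events a corpus barely contains and synthesize more of them. Here is a hypothetical Python sketch of that pattern; generate_scenario() is an imaginary stand-in for a world model, and none of these names reflect Runway’s actual tooling.

    # Hypothetical sketch: upsample rare events in a driving dataset by
    # generating synthetic examples with a world-model stand-in.
    from collections import Counter

    def generate_scenario(label: str) -> dict:
        # Imaginary stand-in for a world model synthesizing a labeled clip.
        return {"label": label, "frames": f"<synthetic {label} clip>"}

    def rebalance(dataset: list, target_share: float = 0.05) -> list:
        counts = Counter(example["label"] for example in dataset)
        augmented = list(dataset)
        for label, n in counts.items():
            # Generate examples for any event class below the target share
            # of the training mix (e.g., near-accidents).
            needed = int(target_share * len(dataset)) - n
            augmented += [generate_scenario(label) for _ in range(max(0, needed))]
        return augmented

    # Toy corpus: overwhelmingly normal driving, a handful of rare events.
    data = [{"label": "normal", "frames": "..."}] * 990 \
         + [{"label": "near_accident", "frames": "..."}] * 10
    balanced = rebalance(data)
    print(Counter(e["label"] for e in balanced))  # near_accident share rises to 5%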

OpenAI brings insurance apps inside ChatGPT

OpenAI wants to help you do the kind of shopping you don’t want to do.

On Monday, the AI giant approved apps from two insurance providers on ChatGPT, allowing users to get insurance quotes within the chatbot. The providers are Tuio, a Spanish home insurance firm, and Insurify, a US-based car insurance comparison agent. The partnerships mark the first time insurance providers have integrated with the platform.

Approving insurers to operate on its platform marks OpenAI’s attempt at capturing the consumer market through niche offerings.

  • Ahead of the holiday season, OpenAI integrated an online shopping assistant called “shopping research” into its flagship chatbot, as well as an “instant checkout” option in the platform, which launched in September.
  • In early January, OpenAI launched ChatGPT Health, a consumer-focused platform dedicated to health and wellness that lets users connect their wearables' data and medical records for personalized health advice.
  • And rumors have also been flying around for months about the company’s first AI-powered device — the latest reports expect the form factor to be earbuds, potentially positioning it as a competitor to Apple.

Though OpenAI has consumer attention, making money from those users can be tricky. Recently, OpenAI has taken the controversial route of monetizing that attention through ads. However, its partnerships with insurance providers could represent a new path in which OpenAI serves as the intermediary between customers and service providers.
