OpenAI leads SMB as Anthropic wins enterprise
January 16, 2026

Welcome back. Two of the six co-founders of Mira Murati's Thinking Machines Lab have left the trendy startup and returned to OpenAI. There's been a lot of drama around the situation and some sharp elbow-swinging on both sides. But don't lose the larger narrative. The battle over AI talent remains fierce in 2026 (flying further under the radar, OpenAI's safety research lead just changed jerseys to play for Anthropic). But the bigger story might be the commoditization of frontier models. It suddenly looks more appealing to focus on building AI to solve specific problems rather than building the world's single best model. —Jason Hiner
IN TODAY’S NEWSLETTER
1. OpenAI leads SMB as Anthropic wins enterprise
2. Microsoft swept into AI data center tsunami
3. IBM warns AI spend fails without AI literacy
ENTERPRISE
OpenAI leads SMB as Anthropic wins enterprise

OpenAI is courting businesses left and right. Data from the billing startup Ramp shows that OpenAI is maintaining a stark lead over Anthropic and Google in AI spending. According to the report, around 36.8% of businesses that use Ramp pay for OpenAI products.
Anthropic holds second place in the Ramp study at 16.7%. Google trails far behind at 4.3%, though that figure likely understates Google's footprint, as many businesses get Gemini services for free through their Google Workspace deployments rather than paying for them separately.
Let's keep in mind that Ramp is especially popular with startups and small businesses, though it reportedly has a growing list of enterprise clients. Still, Ramp has only been around since 2019, while many Fortune 500 companies have long relationships with legacy expense and billing providers like Oracle. So, Ramp's data is likely heavily skewed toward SMBs. That helps explain why the Ramp report contradicts recent research from sources such as Menlo Ventures, which shows Anthropic extending its lead over OpenAI in enterprise AI market share.
Anthropic has also scored a slew of enterprise deals in the last several months, including with IBM, Deloitte, Accenture, Cognizant and Allianz Global. The company announced its latest partnership on Thursday: a deal to bring Claude to nearly 10,000 employees at insurance firm Travelers.
Additionally, the recent launch of Claude Cowork, though still in beta and available only to premium subscribers, could attract even more business customers. “With Claude [Cowork] … Anthropic took the angle [of] delivering value for the enterprise right away,” Brian Jackson, principal research director at Info-Tech Research Group, told The Deep View.
Google, meanwhile, has its own advantages. It too has struck major deals to weave Gemini throughout companies like Walmart, Oracle and PwC. Its workplace suite is also used by millions of employees across enterprises, making it relatively turnkey to switch on Gemini features.

OpenAI’s progress with SMB customers is likely due to two main factors: first-mover advantage and brand recognition. But to maintain that lead, the company will have to focus on reliability and winning enterprise trust. As it stands, OpenAI is trying to do a little bit of everything, whether it’s making health apps, putting out translation models or investing in Neuralink competitors. Meanwhile, Anthropic’s deep focus on enterprise use cases and Google’s existing legacy in workplace software stacks give both companies significant advantages, especially if OpenAI continues to spread itself so thin.
TOGETHER WITH METICULOUS
Tests Are Dead, Meet Meticulous
Companies like Dropbox, Notion and LaunchDarkly have found a new testing paradigm - and they can't imagine working without it. Built by ex-Palantir engineers, Meticulous autonomously creates a continuously evolving suite of E2E UI tests that delivers near-exhaustive coverage with zero developer effort - impossible to deliver by any other means.
It works like magic in the background:
• Near-exhaustive coverage on every test run
• No test creation
• No maintenance (seriously)
• Zero flakes (built on a deterministic browser)
🤨 Curious? Find out how to never write tests again
DATA CENTERS
Microsoft swept into AI data center tsunami

While Microsoft has traditionally been pegged as a software and cloud company, the tech giant is increasingly building a future around the deployment of data centers and the energy infrastructure required to support them.
On Thursday, Microsoft's funding partner BlackRock announced it had raised $12.5 billion to fund the buildout of data centers and their energy sources. This was part of a $30 billion pact the two companies signed in 2024 to collaborate on AI infrastructure. Nvidia also signed on to the partnership, as did xAI.
At the time the original deal was signed, Microsoft president Brad Smith said, “The investment opportunity is real and the investment need is even greater.” In an interview with Bloomberg, he called AI “the next general purpose technology that will fuel growth across every sector of the economy both in the United States and abroad.”
While the race to build AI factories in the US has turned into an all-out frenzy, two problems have emerged.
The first challenge is that the massive data center buildout is fundamentally based on the scaling laws that underpin the current AI boom. One of those tenets is that the more computing power you have, the more breakthroughs and progress you'll achieve. That's why companies are racing to scale up compute with new data centers. However, scaling laws have been called into question over the past six months. And one of the counter-trends in AI in 2026 is building smaller, domain-specific models that are far more efficient and cost-effective, and that can run on less demanding hardware. This could ease the demand for scaling compute.
The second is that AI data centers are facing community and political backlash. There's a growing perception in the U.S. that if giant data centers are built in your community, the cost of their power consumption will be passed on to consumers in the form of higher energy bills. This issue has gotten so intense that U.S. President Donald Trump weighed in this week to say that Microsoft would make "major changes" to guarantee U.S. consumers don't see their utility bills increase because of data centers being built nearby.

It's commendable that Microsoft and the U.S. government acknowledge that the buildout of AI factories and next-gen data centers could raise U.S. consumers' power bills. Ultimately, they needed to take control of the narrative because people across the U.S. with slightly higher monthly bills were likely starting to blame AI, regardless of the actual cause. But the fact is that AI and data centers are driving up overall energy demand while energy supply hasn't kept pace, and that means higher prices. Plans are now in place to deploy more nuclear and clean energy, but those will take years to ramp up. The larger question of whether companies are overbuilding data center capacity, given that new breakthroughs could make AI models far more efficient, is an issue that the free market will have to sort out.
TOGETHER WITH INNOVATING WITH AI
Innovating with AI’s founder was recently interviewed by Fortune Magazine to discuss a crazy stat – AI engineers are being deployed as consultants at $900/hr.
Why did they interview Rob? Because he’s already trained 1,000+ AI consultants – and Innovating with AI’s exclusive consulting directory has driven Fortune 500 leads to graduates.
Want to learn how to turn your AI enthusiasm into marketable skills, clear services and a serious business? Enrollment in The AI Consultancy Project is opening soon – and you’ll only hear about it if you apply for access now.
POLICY
IBM warns AI spend fails without AI literacy

AI literacy is a must to meet the challenges of the moment. But that means far more than just knowing how to write LLM prompts.
At a meeting at the IBM office in Raleigh, North Carolina, Phaedra Boinodiris, an IBM distinguished engineer, and Rachel Levy, executive director of North Carolina State University's Data Science and AI Academy, described a future where AI literacy is no longer a specialist concern but a baseline competency that universities, companies, and governments must build for everyone.
Without that literacy, they warned, organizations will keep pouring money into AI systems that fail to deliver value or, worse, cause real-world harm. As Boinodiris pointed out, the famous MIT report, State of AI in Business, showed that "95% of organizations are getting zero return" from their AI investment. This cannot continue.
Levy said she is “absolutely” advocating “AI literacy for all.” She argued that societies cannot safely or productively adopt AI if understanding remains confined to engineers and computer scientists.
Both speakers repeatedly stressed that AI systems are only as good as their data, objectives, and constraints, and that non‑technical experts are central to getting those right.
Part of the problem is that many people still treat AI as a monolith instead of a suite of distinct technologies already embedded in everyday tools, Levy stated.
Returning to the business AI ROI problem, Boinodiris commented, "AI is failing to bring a return on investment," largely because projects do not solve real problems, organizations either over‑trust or under‑trust AI outputs, and there is "a total lack of literacy."
Despite the risks, both speakers framed the current moment as an opportunity to reinvent education around human judgment and interdisciplinary thinking. Indeed, Boinodiris called it “a Phoenix moment for the Humanities,” arguing that schools need to teach “human judgment,” creativity, and accountability in an AI‑saturated world. For her, the central literacy questions students should grapple with are: “What are those human principles we want to see reflected in AI? Is this even the right use for artificial intelligence? Is this solving the right kind of problem?”
Without AI literacy for everyone, and without everyone having a seat at the table, AI in business will continue to flounder, and neither corporations nor society will reap its potential benefits, they warned.
By Steven J. Vaughan-Nichols, Contributing Writer
LINKS

Anthropic research warns of widening inequality from rich countries’ AI use
Replit in talks to raise $400 million at $9 billion valuation
Symbolic.AI partners with News Corp to offer AI tools to WSJ, Barron's
OpenAI issues request for proposals to US-based robotics manufacturers
AI video startup Higgsfield raises $80 million at $1.3 billion valuation
Supply chain robotics firm Mytra raises $120 million Series C

Open Responses: An open-source spec by OpenAI for interoperable language model interfaces on top of its models.
TranslateGemma: Google’s suite of open translation models, powered by Gemma, that works across 55 languages.
GLM-Image: Z.AI’s latest image generation model, trained entirely on Huawei processors.
Promptless 1.0: An AI agent that automatically updates consumer-facing content, including screenshots and code snippets.
Openwork: An open-source computer use agent that’s fast, cheap and secure.

When priorities shift weekly, you need to signal fast. Athyna connects you with top-tier Data Scientists who support experimentation, modeling, and analysis that stay grounded in the business.
• SQL + Python
• Statistics-first ML
• Fast signal from messy data
A QUICK POLL BEFORE YOU GO
Do you believe that AI data centers have increased your energy bill?
The Deep View is written by Nat Rubio-Licht, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Color of the pagoda seems to be brighter in [this] image. Also, the safety line across the waterfall seems like a detail current AI might not include.”
“The trees looked fake in [this] image and the growth on the rocks near the waterfall seemed too ‘clean.’”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
