Enterprise AI is the surprise star of CES
January 7, 2026

Welcome back. AI is about to test the comfort zone of a lot of people in one US state. Utah is going to allow AI to authorize prescription refills for patients — an activity usually strictly limited to licensed medical professionals. The controversial move is elevating the dialogue about which activities make sense for AI to automate and which need humans in the loop.

The Utah experiment is being administered by healthcare startup Doctronic, which says its study of 500 cases found the AI prescribed the same refills as doctors 99.2% of the time. The American Medical Association, however, has questioned the practice of not having a doctor provide final sign-off in these cases. The result could have wider implications: the US Congress is considering a bill that could make the practice more widespread. —Jason Hiner
IN TODAY’S NEWSLETTER
1. Enterprise AI is the surprise star of CES
2. Nvidia CEO argues speed is key to safer AI
3. AI add-ons steal chat data from 900K users
BIG TECH
Enterprise AI is the surprise star of CES

While AI is still searching for the devices and apps that can win over consumers — and CES proved that the experiments are still all over the map — the journey of AI in business, industry, and the enterprise is racing ahead at a much faster pace and with a lot more clarity.
While enterprise tech used to be a footnote at CES, it now occupies an entire pavilion in the North Hall of the Las Vegas Convention Center. And in another signal of how far the enterprise has come at CES, Siemens CEO Roland Busch headlined the official opening day on Tuesday with a keynote on industrial AI.
And Siemens took full advantage of the spotlight to announce AI advances in six key industrial enterprise areas:
Digital Twin Composer — Siemens' biggest announcement was its new AI-powered platform for creating real-time simulations that go beyond product development and now extend to operations.
Nine Copilots — In partnership with Microsoft, Siemens launched industrial AI assistants that can bring intelligence to enterprise processes that include manufacturing, product lifecycle management, design, and simulation.
Meta Ray-Ban smart glasses in the enterprise — Siemens is partnering with Meta to bring AI smart glasses to the shop floor, giving workers hands-free, real-time audio guidance on processes and procedures, as well as safety insights and feedback loops.
PAVE360 automotive technology — This "system-level" digital twin enables a software-defined vehicle to operate in a simulated environment.
AI-powered life sciences innovation — Feeding research data into digital twins to test molecules and bring important therapies to market up to 50% faster and at lower cost.
Energy acceleration — Siemens' partner, Commonwealth Fusion Systems, was highlighted for using Siemens' design software to develop commercial fusion, which holds promise for creating affordable, clean energy.
Nvidia has long been a key partner for Siemens, and Nvidia CEO Jensen Huang joined Busch on stage for the keynote, calling Siemens "the operating system of manufacturing plants throughout the world." Huang added that "Siemens is unquestionably at the core of every industry we work in."
The two are also partnering on one of the biggest, most ambitious projects of this generation: AI factories. The combination of Nvidia's AI chips and Siemens' digital twins software is creating digital twin simulations to greatly accelerate the development and deployment of these next-generation data centers for running today's most advanced AI.

It's surreal to see the enterprise play such a prominent role at a CES long dominated by TVs and consumer gadgets. The transformation is largely driven by the AI boom, and beyond that by the public's fascination with, and fear of, what advancing AI will mean for the future of work, jobs, and society. Siemens stands at the forefront of that trend and is clearly thoughtful in the way it discusses how its efficiency-driving AI innovations will affect jobs. That offers a glimmer of hope that it will be equally thoughtful about the broader societal impact of its products.

TOGETHER WITH CEREBRAS
20× Faster Inference, Built to Scale
Advanced reasoning, agentic, long‑context, and multimodal workloads are driving a surge in inference demand—with more tokens per task and tighter latency budgets—yet GPU‑based inference is memory‑bandwidth bound, streaming weights from off‑chip HBM for each token and producing multi‑second to minutes-long delays that erode user engagement.
Cerebras Inference removes this bottleneck with its wafer-scale chip architecture, which keeps model weights in on-chip memory that is far faster and sits next to compute, delivering frontier‑model outputs at interactive speed.
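The memory-bandwidth claim can be checked with back-of-envelope arithmetic: in the decode phase, each generated token requires streaming the full set of weights from memory, so the token rate is capped at bandwidth divided by weight bytes. A minimal sketch — the 70B/FP16/3.35 TB/s figures below are illustrative assumptions, not vendor specs:

```python
# Back-of-envelope: memory-bandwidth-bound decode speed.
# Each generated token streams all model weights once, so
# tokens/sec is at most memory bandwidth / weight bytes.

def max_tokens_per_sec(params_billions: float,
                       bytes_per_param: float,
                       bandwidth_tb_s: float) -> float:
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / weight_bytes

# A 70B-parameter model in FP16 (2 bytes/param) on an accelerator
# with 3.35 TB/s of HBM bandwidth:
print(round(max_tokens_per_sec(70, 2, 3.35), 1))  # ~23.9 tokens/s ceiling
```

Keeping weights in faster on-chip memory raises the bandwidth term, which is why it lifts the ceiling on interactive token rates.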
REGULATION
Nvidia CEO argues speed is key to safer AI

Safety advocates have long been urging AI firms to tap the brakes. Nvidia’s Jensen Huang thinks they have the wrong idea.
In a media briefing at CES on Monday, the CEO of the world's most valuable company advocated for unified, US federal regulation that enables rapid progress, arguing that slowing the pace of AI innovation wouldn’t improve the tech’s safety. Rather, Huang said, safer AI will come from more development: “innovation speed and safety goes hand in hand.”
Huang said that the first step in tech innovation is making a product “perform as expected,” such as limiting hallucination and grounding outputs in truth and research. He also compared stymied development to driving a 50-year-old car or flying a 70-year-old plane: “I just don't think this is safe,” said Huang. “It was only a few years ago some people said, ‘let's freeze AI,’ then the first version of ChatGPT would be all we have. And how is that a safer AI?”
Huang’s perspective stands in stark contrast to the common viewpoint among AI ethics and safety advocates: that we shouldn’t forge blindly ahead with tech that could upend humanity before we have a full picture of what it’s capable of.
Several of AI’s most prominent voices have called for model firms to slow down their development to assess risks. Two of AI’s so-called “godfathers,” Yoshua Bengio and Geoffrey Hinton, have warned of the tech’s potential existential threat in recent months.
And in late October, the Future of Life Institute advocated for a full moratorium on the push for superintelligence, releasing a petition that has garnered more than 132,000 signatures to date.
Some of the signatories include Hinton and Bengio; a number of employees from OpenAI, Anthropic and Google DeepMind; and major artists like Joseph Gordon-Levitt, Kate Bush and Grimes.
But Huang isn’t alone in his desire for free rein. Several of AI’s biggest proponents (and beneficiaries) hold the same view, with the likes of OpenAI’s Sam Altman and Greg Brockman, a16z’s Marc Andreessen, and Palantir’s Joe Lonsdale all joining forces in August to launch a pro-AI super PAC called Leading the Future to back candidates calling for unified regulation.

Huang obviously has a bias toward simple, loose AI regulations. The bigger and more powerful AI models get, the more compute and chips they consume, and the more money his multitrillion-dollar company makes. Advocating for slowing down would essentially be talking himself out of a payday. But new capabilities are emerging faster than ever as these firms charge toward AGI, and we may not truly know how to reckon with what's coming, something even OpenAI might be admitting to itself in hiring a “head of preparedness.”

TOGETHER WITH UNFRAME
From Zero to AI-Native
Ready to leverage AI but not sure where to begin? Sign up for a private workshop designed specifically for your team. In just 60 minutes, you’ll learn why so many AI projects fail, and what it actually takes to drive real business outcomes.
Together, we will:
Identify high-value AI opportunities
Select a starting point for your business
Turn a use case into your first AI win (and fast)
You’ll walk away with a clear, focused use case and a practical path forward. Register for Unframe’s AI Fast Track Workshop right here.
SECURITY
AI add-ons steal chat data from 900K users

Looking for an AI extension for your web browser? You may want to think twice.
In late December, cybersecurity firm OX Security identified two Google Chrome plug-ins that secretly siphoned user conversations with popular AI chatbots to attacker-controlled servers. The extensions — “Chat GPT for Chrome with GPT‑5, Claude Sonnet & DeepSeek AI” and “AI Sidebar with Deepseek, ChatGPT, Claude and more” — add a sidebar to Chrome that lets users interact with multiple frontier models directly in their browser. The malware ran silently in the background, extracting browser activity and chatbot conversations every 30 minutes, an attack technique known as data exfiltration.
Together, these extensions have been downloaded more than 900,000 times, exposing a trove of sensitive chatbot conversations, including personal information, company secrets, and customer details, to an unknown threat actor.
“Threat actors holding this information can use it for a variety of purposes like stalking, doxxing, selling information, corporate espionage, and extortion,” Moshe Siman Tov Bustan, a security researcher team lead at OX Security, told The Deep View.
Both extensions, once labeled “Featured” in the Chrome Web Store, are impostors that mimic a legitimate AITOPIA extension with nearly identical names.
According to OX Security’s assessment, the AITOPIA extension keeps user queries private and processes them on Amazon-hosted infrastructure as part of its normal operations. The malicious lookalikes, however, claim to collect “anonymous, non-identifiable analytics data,” but instead exfiltrate user conversations with ChatGPT and DeepSeek.
OX Security reported the extensions to Google on December 29. As of January 6, they remain available on the Chrome Web Store. Bustan urges users to uninstall them immediately.
To avoid malware, he recommends being cautious about extensions that request broad permissions and checking metadata — the developer’s email, website, and privacy policy — to spot anything that doesn’t pass a gut check.
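Part of that permission check can be automated by reading the `manifest.json` that every Chrome extension ships, which declares its `permissions` and `host_permissions`. A minimal sketch, assuming Chrome's default Linux profile layout (the path differs on macOS and Windows) and an illustrative, non-exhaustive list of broad permissions:

```python
import json
from pathlib import Path

# Assumed Linux default; on macOS the profile lives under
# ~/Library/Application Support/Google/Chrome/Default/Extensions.
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

# Illustrative set of permissions broad enough to warrant a second look.
BROAD = {"tabs", "history", "cookies", "webRequest", "<all_urls>"}

def audit(ext_dir: Path = EXT_DIR) -> list[tuple[str, list[str]]]:
    """Return (extension name, flagged permissions) for each extension
    whose manifest requests any permission in BROAD."""
    findings = []
    # Layout is <extension-id>/<version>/manifest.json
    for manifest in sorted(ext_dir.glob("*/*/manifest.json")):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        flagged = perms & BROAD
        if flagged:
            findings.append((data.get("name", manifest.parent.name), sorted(flagged)))
    return findings

if __name__ == "__main__":
    for name, perms in audit():
        print(f"{name}: {', '.join(perms)}")
```

Broad permissions aren't proof of malice — a legitimate sidebar needs host access to inject its UI — but they tell you which extensions deserve the metadata gut check Bustan describes.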

The recent AI exfiltration attacks offer a glimpse into the cybersecurity storms ahead. As AI evolves in lockstep with cyber threats, attackers will continue to find new ways to compromise internet security. Since AI makes developing complex malware easier, Bustan has no doubts that AI-enabled attacks will become “much more common.” Prompt injection, phishing scams, and other emerging exploits threaten both individuals and enterprises. Organizations will need better techniques and stronger defenses to keep up with the proliferation of new attack vectors created by the ease and speed of malware development with AI.

LINKS

Elon Musk’s xAI raises $20 billion in Series E funding
Meta pauses plans to sell Ray-Ban Display glasses outside the U.S.
Photonic raises more than $180 million to commercialize quantum
Accenture to acquire Faculty, a UK-based Palantir competitor
Device firm Razer eyes $600 million AI gaming push
Teleprompter and EMG handwriting features come to Meta Ray-Ban Display

2-b.ai: An AI-powered to-do list that turns your browser or web context into manageable tasks.
Instruct 2.5: An autonomous agent that navigates your apps and immediately executes tasks that need to be done.
LFM2.5: Liquid’s latest model, built for capable edge AI deployment and on-device use.
Okara Reddit Agent: An AI agent that monitors your Reddit account, curates posts and writes engaging comments.

Nvidia: Generative AI Application Engineer
Charles Schwab: Responsible AI Researcher, AI.x
SAP: Senior AI Research Developer
Meta: AI Research Scientist - Safety Alignment Team
POLL RESULTS
Would you be comfortable giving an AI-powered toy to a child?
Yes (25%)
No (50%)
Other (25%)
The Deep View is written by Nat Rubio-Licht, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.


Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.






