
Jason Hiner
Jason Hiner is the Chief Content Officer and Editor-in-Chief of The Deep View. He's an award-winning journalist who has spent his career analyzing how tech has reshaped the world. He covered AI for over a decade at ZDNET, CNET, and TechRepublic and watched it evolve from research labs to enterprise infrastructure to a daily reality for billions of people. He came to The Deep View for the opportunity to cover AI every day and build a next-generation media company.
Articles

Microsoft swept into AI data center tsunami
While Microsoft has traditionally been pegged as a software and cloud company, the tech giant is increasingly building a future around the deployment of data centers and the energy infrastructure required to support them.
On Thursday, Microsoft's funding partner BlackRock announced it had raised $12.5 billion to fund the buildout of data centers and their energy sources. This was part of a $30 billion pact the two companies signed in 2024 to collaborate on AI infrastructure. Nvidia also signed on to the partnership, as did xAI.
At the time the original deal was signed, Microsoft president Brad Smith said, “The investment opportunity is real and the investment need is even greater.” In an interview with Bloomberg, he called AI “the next general purpose technology that will fuel growth across every sector of the economy both in the United States and abroad.”
While the race to build AI factories in the US has turned into an all-out frenzy, two problems have emerged.
The first challenge is that the massive data center buildout is fundamentally based on the scaling laws that underpin the current AI boom. One of those tenets is that the more computing power you have, the more breakthroughs and progress you'll achieve. That's why companies are racing to scale up compute with new data centers. However, scaling laws have been called into question over the past six months. And one of the counter-trends in AI in 2026 is building smaller, domain-specific models that are far more efficient and cost-effective and can run on less demanding hardware. This could ease the demand for scaling compute.
The second is that AI data centers are facing community and political backlash. There's a growing perception in the U.S. that if giant data centers are built in your community, the cost of their power consumption will be passed on to consumers in the form of higher energy bills. This issue has gotten so intense that U.S. President Donald Trump weighed in this week to say that Microsoft would make "major changes" to guarantee that U.S. consumers won't see their utility bills increase because of data centers built nearby.

Nvidia's EDEN targets thousands of incurable diseases
Nvidia has teamed up with UK startup Basecamp Research, a frontier AI lab in life sciences, to announce "programmable gene insertion," a new form of therapy that could reprogram cells and replace genes to treat incurable diseases.
Basecamp Research claims that its breakthrough achieves one of the key goals of genetic medicine over the past several decades: replacing DNA sequences at precise locations in a person's genome. By comparison, therapies based on the popular CRISPR technology make small edits that damage DNA, which limits their use.
- Basecamp Research's AI-Programmable Gene Insertion (aiPGI) uses Nvidia's EDEN AI models that are focused on DNA and biology.
- Nvidia, Microsoft, and a group of academics published a paper with lab results showing how the EDEN AI models successfully achieved insertion in "disease-relevant target sites in the human genome."
- Meanwhile, Basecamp Research demonstrated insertion in over 10,000 "disease-related locations in the human genome," including effectiveness at killing cancer cells.
- Using the same models, AI-designed molecules have also proved effective at targeting drug-resistant "superbugs."
John Finn, Chief Scientific Officer at Basecamp Research, said, "We believe we are at the start of a major expansion of what's possible for patients with cancer and genetic disease. By using AI to design the therapeutic enzyme, we hope to accelerate the development of cures for thousands of untreatable diseases."
NVentures (Nvidia's venture capital arm) also announced an investment in Basecamp Research's pre-Series C funding round to accelerate the development of these new genetic therapies.

OpenAI’s Jony Ive mystery device: An AI earbud?
OpenAI has been accused of a lack of focus for its forays into a wide variety of initiatives, from healthcare to third-party apps to a web browser to its Sora video generator to AI data centers. OpenAI CEO Sam Altman's comments targeting Apple as a bigger long-term competitor than Google certainly haven't quelled those fears of overreach.
Altman reportedly told a group of journalists that the AI battles of the future will be won through devices rather than frontier models. The fact that Apple teamed up with Google rather than OpenAI to power the next version of Siri has likely only fueled Altman's competitive spirit.
Now we have a new report out of China that OpenAI is working with manufacturing giant Foxconn on an AirPods-like audio device codenamed "Sweetpea" that wraps around the ear and functions as an audio-powered AI assistant. The form factor sounds similar to a classic Bluetooth headset or a high-end pair of earbuds.
The report, from supply chain leaker SmartPikachu, says OpenAI is using cutting-edge hardware for the device (including a 2nm processor), plans to manufacture 40-50 million units in the first year, and will release it around September.
The planned volume of devices might be the biggest surprise, if correct. For context, Meta's Ray-Ban AI smart glasses, which have similar features, sold about 2-3 million pairs last year, and Apple currently sells about 65 million pairs of AirPods per year. For OpenAI's device to nearly match AirPods in its first full year would be quite a feat.
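For a rough sense of scale, those reported volumes can be compared directly. All figures are the article's estimates, and the 2.5 million midpoint for Meta's glasses is an assumption for illustration:

```python
# Rough scale comparison of reported device volumes (article's estimates).
openai_low, openai_high = 40e6, 50e6  # OpenAI's rumored first-year target
airpods_per_year = 65e6               # Apple's approximate annual AirPods sales
meta_glasses = 2.5e6                  # assumed midpoint of Meta's 2-3M pairs last year

print(f"{openai_low / airpods_per_year:.0%}-{openai_high / airpods_per_year:.0%} of annual AirPods volume")
print(f"{openai_low / meta_glasses:.0f}x-{openai_high / meta_glasses:.0f}x Meta's smart glasses sales")
```

Even the low end of the rumored target would be roughly 62% of Apple's annual AirPods volume and more than 15 times Meta's smart glasses sales.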
Another interesting insight from the report was that OpenAI is working on a total of up to 5 devices that include a "home style device" (sounds like an Amazon Alexa smart speaker) and a pen. The pen reference would corroborate a recent report that OpenAI was working on an AI device shaped like a traditional writing instrument.

Apple's Google-powered Siri will reshape AI
Apple users can exhale now. You are finally one step away from your long-awaited Siri upgrade. Apple has officially signed a multi-year agreement with Google to use Gemini’s AI models to power its next-gen version of Siri, according to a joint statement the companies released on X.
While Google's technology will make Siri much more capable and usable, Apple's own foundation models will do some of the heavy lifting, and its Private Cloud Compute will still keep things secure. Most Apple users are unlikely to notice, as long as they can reliably use Siri to answer questions and carry out tasks on their iPhones and other Apple devices.
Siri was one of the original voice assistants when Apple integrated it into the iPhone 4S in 2011, but it soon got lapped by Google Assistant, Amazon Alexa, and later ChatGPT Voice Mode.
Apple announced its Siri overhaul powered by Apple Intelligence at WWDC 2024. And while Apple changed Siri's animation and design, it eventually had to delay the 2025 launch of the new AI-powered Siri because it wasn't ready. Because the new Siri was Apple Intelligence's signature feature, the delay reignited the narrative that Apple was falling behind in AI. Multiple members of Apple's AI team leaving for competitors during 2025 only reinforced that point of view.
Nevertheless, only 13% of the world's population — about 1 billion of the 8 billion people on the planet — are regularly using generative AI today. That number is expected to double over the next few years, but that still means only about 1/4 of the world's population will be using AI regularly by the end of the decade. It's still early, in the grand scheme of things.
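A quick back-of-the-envelope check of those adoption figures (the population and growth numbers are the article's rough estimates, not precise data):

```python
# Back-of-the-envelope check of the adoption figures cited above.
world_population = 8_000_000_000  # ~8 billion people
current_share = 0.13              # ~13% regularly using generative AI

current_users = world_population * current_share
print(f"~{current_users / 1e9:.1f} billion regular users today")

# If that share roughly doubles by the end of the decade:
future_share = 2 * current_share
print(f"~{future_share:.0%} of the world, i.e. about 1/4")
```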

6 AI trends that defined CES 2026
The Deep View was on the ground at CES 2026 to sort out the real AI advances from the marketing stunts and hardware slop. These were the ones that made the cut.
1. AI got a speed boost and a price drop
Nvidia gave us the biggest surprise of CES by announcing (about six months ahead of schedule) the release of its next-gen Vera Rubin chip for AI training and inference. It can cut inference costs by up to 10x and train mixture-of-experts models with 1/4 as many GPUs as its current Blackwell chips. AMD announced that its AI chips have achieved a 1,000x performance boost over the past 4 years. —Jason Hiner
2. Robots are now 'Physical AI'
Tech firms are keen to give AI a physical form, and CES was all in on the trend. Nvidia was the star of the show, unveiling the Alpamayo self-driving platform, new Cosmos world models, and the GR00T humanoid models. Meanwhile, Franka Robotics displayed its AI-powered robotic arms; LEM Surgical unveiled a spine surgery robot; and AGIBOT showed off a dancing humanoid for its US debut. Don't get too excited about home robots yet, because enterprise is the physical AI cash cow. —Nat Rubio-Licht
3. Industrial AI enters its hype era
What a tidal wave of industrial AI and enterprise AI announcements we saw at CES 2026. Siemens led the charge with its new digital twin technology that extends beyond product development. It can now use AI to simulate operations and become an engine for optimization and continuous improvement. From the new enterprise pavilion in North Hall came big AI announcements from Caterpillar, Oshkosh, Neural Concept, Lenovo, Hyundai, Hitachi, and others. This is clearly one of the fastest-expanding areas of CES. —Jason Hiner
4. Quantum wants to amp up AI
Quantum scored an entire pavilion called CES Foundry, dedicated to companies like D-Wave, Quantinuum and Quantum Computing Inc. (QCi), signaling that it's no longer a lab project. Mixed opinions abound as to when the tech will reach large-scale deployment, but experts agree that quantum will be a catalyst for AI. CES examples included QCi demoing its quantum machine learning technology for things like fraud detection, drug discovery and financial forecasting. —Nat Rubio-Licht
5. An unexpected winner in consumer AI
Audio emerged as the best consumer AI category of the show, with products often built on Gemini, ChatGPT and Claude models. Some that caught our eye include Elehear and Cearvol’s AI-powered hearing aids, Subtle Computing’s voice-isolating earbuds, and Gyges Labs’ note-taking voice ring and agent companion. The form factor presents a low barrier to entry, but the products will have to niche down to compete with Google, OpenAI, and other giants. —Nat Rubio-Licht
6. AI glasses spark copycat boom
AI glasses aimed at chipping away at Meta's 70% market share were everywhere at CES. Lenovo, XGIMI, Solos, Cellid, Vuzix, LLVision, Asus, and Rokid all introduced new products. Thankfully, none of them are pure knock-offs. Each has a unique focus, from live translation to industrial tasks to lightweight displays to gaming. The jury is still out on whether AI-on-your-face becomes a trend, but it's going to get plenty of chances in 2026. —Jason Hiner

Enterprise AI was the surprise star of CES 2026
While AI is still searching for the devices and apps that can win over consumers — and CES proved that the experiments are still all over the map — the journey of AI in business, industry, and the enterprise is racing ahead at a much faster pace and with a lot more clarity.
While enterprise tech used to be a footnote at CES, it now occupies an entire pavilion in the North Hall of the Las Vegas Convention Center. And in another signal of how far the enterprise has come at CES, Siemens CEO Roland Busch headlined the official opening day on Tuesday with a keynote on industrial AI.
And Siemens took full advantage of the spotlight to announce AI advances in six key industrial enterprise areas:
- Digital Twin Composer — Siemens' biggest announcement was its new AI-powered platform for creating real-time simulations that go beyond product development and now extend to operations.
- Nine Copilots — In partnership with Microsoft, Siemens launched industrial AI assistants that can bring intelligence to enterprise processes that include manufacturing, product lifecycle management, design, and simulation.
- Meta Ray-Ban smart glasses in the enterprise — Siemens is partnering with Meta to bring AI smart glasses to the shop floor. This will give workers hands-free, real-time audio guidance on processes and procedures, as well as safety insights and feedback loops.
- PAVE360 automotive technology — This "system-level" digital twin enables a software-defined vehicle to operate in a simulated environment.
- AI-powered life sciences innovation — Bringing research data into digital twins to test molecules and bring important therapies to market up to 50% faster and at a reduced cost.
- Energy acceleration — Siemens' partner, Commonwealth Fusion Systems, was highlighted for using Siemens' design software to develop commercial fusion, which holds promise for creating affordable, clean energy.
Nvidia has long been a key partner for Siemens, and Nvidia CEO Jensen Huang joined Busch on stage for the keynote, calling Siemens "the operating system of manufacturing plants throughout the world." Huang added that "Siemens is unquestionably at the core of every industry we work in."
The two are also partnering on one of the biggest, most ambitious projects of this generation: AI factories. The combination of Nvidia's AI chips and Siemens' digital twins software is creating digital twin simulations to greatly accelerate the development and deployment of these next-generation data centers for running today's most advanced AI.

8 big AI shifts to watch in 2026
If you thought AI was everywhere in 2025, the next 12 months are only going to accelerate that trend. Last year brought developments we never could have anticipated. And while there will undoubtedly be plenty of surprises again this year — make sure you're subscribed to The Deep View to catch the biggest ones every day — there are plenty of future developments already in sight. Today, Nat Rubio-Licht and I break down the biggest ones to watch. Let's count them down.
1. Can AI glasses make consumers love AI?
The big tech companies — Meta, Google, Apple, Amazon, and others — are betting that AI glasses will convince consumers that AI is good for more than just chatbot searches and laughing at AI slop. Consumers will still need a lot of convincing. As much as the Meta Ray-Ban AI glasses have been a surprise hit, they've still only sold several million pairs in total over the past couple of years. And screenless AI hardware devices like the Humane AI Pin and the Rabbit R1 have been largely rejected by consumers. OpenAI has generated buzz for its screenless Jony Ive-designed device that sounds a lot like the doomed Humane AI Pin. And startups like Pickel have created heat around their forthcoming AI glasses, hyperbolized as an $800 "soul computer" that looks too good to be true from the slick marketing video. Still, I've tried the Meta Ray-Ban Display glasses, and there are some compelling everyday AI features, such as live captions and live language translation. I'm also interested in the forthcoming Brilliant Labs Halo glasses, which will feature a privacy-first, multimodal AI agent to help you remember things — similar to how meeting assistants take notes for you, but for real-life conversations. But make no mistake, companies large and small are coming for your face with AI in 2026.
2. Robotaxis will foreshadow AI's future
Self-driving taxis from Waymo, Tesla, and others will likely become one of the most common ways consumers interact with AI, offering a taste of the future in terms of declining costs, the impact of automation on jobs, and the replacement of previously manual tasks. During 2025, Waymo — owned by Google's parent company Alphabet — transformed from a quirky AI/robotics experiment into an emerging alternative to both Uber and taxis in five US cities: San Francisco, Los Angeles, Atlanta, Phoenix, and Austin. Near the end of the year, it also announced expansion into five more: Miami, Dallas, Houston, San Antonio, and Orlando. By the end of 2026, Waymo's expansion could extend to another dozen US cities and its first international location in London. Meanwhile, Tesla's Robotaxi service launched in Austin, Texas and the San Francisco Bay Area during 2025, and the company has slated its next expansion for Las Vegas, Phoenix, Dallas, Houston, and Miami. Waymo rides are currently about 30% more expensive than standard Uber, Lyft, and taxi rides. I suspect that's because Waymo wants to avoid the negative publicity of undercutting human drivers and potentially putting people out of work, at least for now. Tesla appears to have fewer qualms about that — its rides have been noted to be significantly cheaper than Uber at times. That feels like the inevitable reality here. I've been surprised at how many people have already told me they prefer robotaxis to human drivers, for a variety of reasons. It's a future that's about to take another big step forward in 2026.
3. We enter the post-LLM phase of generative AI
At an event in November, Hugging Face CEO Clem Delangue called it: We’re not in an AI bubble, we’re just in an “LLM bubble.” Major model providers have been on a frenetic mission to make their large language models more formidable, with many chasing after the elusive and ill-defined goal of artificial general intelligence. But some of AI’s foremost thinkers have started to question the validity of this chase, turning their attention beyond the LLM. World models and physical AI have become a major focal point in recent months, with voices like Yann LeCun and Fei-Fei Li calling it the next frontier. And as large language model inference only becomes more costly, enterprises may find more luck with small language models and domain-specific models, or as IBM Research Chief Scientist Ruchir Puri put it, “artificial useful intelligence.” 2026 might be the year that LLMs hit a plateau – not in capability, but in no longer being the sole driving force behind the AI boom.
4. Agents usher in the post-chatbot era
Tech companies went absolutely feral for agents in 2025 as the tech promised to break us out of the call-and-response requirements of conventional chatbots. And as companies figure out what exactly these autonomous little helpers are capable of, the hype will only continue. With that hype, new complexities will emerge. These agents are already creating cybersecurity hiccups for IT teams, and could present new challenges for HR as the “digital employee” impacts workforce operations and morale. But agents might also force some of the biggest players in AI to work together for once, such as with the launch of the open-source Agentic AI Foundation in December, bringing the industry one step closer to interoperable agents that unlock far more value working together than they could working alone.
5. New chips will redefine AI beyond Nvidia
The AI industry’s manic obsession with Nvidia reached such a fever pitch this year that there’s literally a name for it: Jensanity, dubbed after the company’s perpetual leather jacket-wearing CEO, Jensen Huang. But some are starting to recognize that there exist alternatives to Nvidia’s reign. Amazon unveiled upgrades to its Trainium chips at Re:Invent in December, and Google is spreading its TPUs, the company’s custom chips, far and wide, making its seventh-generation Ironwood chip generally available and discussing deals with Anthropic and Meta. Though it’d likely take some kind of catastrophic event to truly dethrone Nvidia at this juncture, AI innovators who want to move rapidly but are constrained by their Nvidia orders will have more alternatives in 2026.
6. Model developers will start to weigh the risks
Though voices like Geoffrey Hinton, Yoshua Bengio and (for some reason) Joseph Gordon-Levitt have been shouting from the rooftops about the risks that AI presents, model developers have been keenly optimistic so far. That, however, might change in 2026 as models become more powerful and new capabilities emerge. OpenAI might already be on guard, with CEO Sam Altman posting on X in late December that the company is seeking a “head of preparedness” to help implement “increasingly complex safeguards.” Given that the world’s most valuable startup has been treated as an industry bellwether thus far, it wouldn’t be surprising if other AI firms followed in its footsteps.
7. Government relationships with AI will get stickier
Last year saw a number of major AI firms seek to get cozy with the US government, from offering steep discounts to the General Services Administration to multi-billion-dollar investments that support the Trump Administration’s AI Action Plan. The Administration, too, is trying to make it easier for AI companies to have free rein by stymying state legislation. But not everyone is pleased. Sen. Bernie Sanders, for example, has pushed for a moratorium on data center development amid an “unregulated sprint to develop & deploy AI.” As AI becomes more and more politicized, don’t expect the regulatory landscape — or the discourse around how it should be governed — to become any less heated. With mid-term elections looming in the US, AI is on course to become a national issue rivaling health care, immigration, and the national debt.
8. We still won’t know what AI is going to do to us
To put it plainly, there’s still plenty of debate about whether AI is good or bad. There are the AI zealots who believe the tech is the key to unlocking a level of growth and prosperity that’s exponential and therefore incomprehensible to most of the human race. On the flip side, there are the doomers who believe AI and robots will diminish our humanity — if not literally rise up and kill us all (they all but view The Terminator as a documentary). And there's abundant data available to support either side of the debate. Conflicting reports are constantly emerging. AI is creating jobs, and AI is killing jobs. It’s going to tank the stock market and make billionaires out of a lot more startup founders. It’s eroding our ability to forge human connections and providing people with an outlet for mental healthcare. It’s destroying our capacity to think critically, and it’s supercharging discovery. As these contradictory notions continue to collide, people are going to do what they’ve always done: Find the data that supports their preconceived biases and cling for dear life.

Disneyland meets AI: A portent for 2026
In 2026, you'll be able to use AI to navigate Disneyland in California in some pretty interesting ways using Meta smart glasses. But there's still a big question around whether consumers will be into it.
When the public thinks about AI today, most people are still asking chatbots the same questions they used to ask Google and texting each other silly photos generated by the latest AI imaging apps.
While businesses and professionals are slowly starting to put AI to work in more powerful ways, most consumers still need a lot of convincing that it will truly do something new and interesting for them.
The big tech companies are betting smart glasses will be the answer — or at least one of the answers — in 2026. Meta is, of course, the AI glasses leader with 73% market share. But Google, Samsung, Amazon, Apple, and others are expected to bring their own AI glasses to market by the end of 2026 or early 2027.
For now, the reality is that Meta's AI glasses still don't do much. They're not bad for listening to music and podcasts, taking POV photos and videos for social media, and a few quick, rudimentary AI searches. But not much more.
That's likely to change soon because Meta has finally given developers the ability to add third-party apps and software to its glasses. And Disney is one of the first big partners that has signed on.
At a developer session at Meta Connect, a short video clip showed how Disney Imagineers developed a variety of "in-the-moment experiences" that Meta smart glasses could soon deliver inside Disneyland. In the demo, Beeta from Disney walked through the theme park and showed the following experiences:
- Looks at a ride, asks what it is and how to ride it
- Asks where to get a gluten-free snack while walking through the park and gets a list of options nearby
- Looks at a souvenir hanging off someone's bag while walking and asks where to purchase one of those — gets the name of the souvenir and the shop to buy it
- Looks at a ride and asks what it is and if it's appropriate for a four-year-old to go on
- While walking, gets prompted that a nearby ride has a short wait time, along with an offer of directions to get in line
- While walking, gets prompted that a Disney character is nearby, along with an offer of directions to get in line and meet them
While there's nothing earth-shaking there — it's basically an audio guide on your glasses — the most impressive part was where the AI prompted you instead of you having to prompt it. That's where it starts to act more like an AI agent. And that's where I could see consumers starting to find unique value in what AI can offer. The other big benefit could be staying immersed in the experience without having to pull out your phone to look up information.
Several other companies have also stated that they intend to release apps and experiences for Meta's AI glasses.
When Google and Apple get into the AI glasses game, we can expect them to bring the Android and iOS ecosystems with them, potentially opening up many more AI-powered experiences for consumers.

AI tool helps diagnose cancer 30% faster
In radiology, a new AI tool is helping fill the gap left by a shortage of radiologists to read CT scans. It's also helping to improve early detection and get diagnosis data to patients faster. It does so not by replacing skilled medical professionals, but by assisting them.
The breakthrough came at the University of Tartu in Estonia, where computer scientists, radiologists, and medical professionals collaborated on a study published in the journal Nature.
The tool, called BMVision, uses deep learning to detect and assess kidney cancer. AI startup Better Medicine is commercializing the software.
"Kidney cancer is one of the most common cancers of the urinary system. It is typically identified using … [CT] scans, which are carefully reviewed by radiologists. However, there are not enough radiologists, and the demand for scans is growing. This makes it more challenging to provide patients with fast and accurate results," said Dmytro Fishman, co-founder of Better Medicine, and one of the authors of the study.
Here's how the study worked:
- The AI software was tested by a team of six radiologists across a total of 2,400 scan readings
- Each radiologist used BMVision to help interpret 200 CT scans
- Each scan was measured twice: once with AI and once without
- Accuracy, reporting times and inter-radiologist agreement were compared
- Using the AI software reduced the time to identify, measure, and report malignant lesions by 30%
- The time for radiologists to read scans was reduced by 33% on average, and as much as 52% in some cases
- Auto-generated reports significantly reduced the time for typing and dictation
- Use of the tool improved sensitivity by about 6%, leading to greater accuracy and agreement between radiologists
- The study said AI wouldn't replace radiologists but would become a valuable assistant
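The study-design numbers above can be sanity-checked with a quick sketch. The counts come from the bullets; the 10-minute baseline reading time is a hypothetical for illustration, not a figure from the study:

```python
# Illustrative arithmetic for the study design described above
# (not the study's actual analysis code).
radiologists = 6
scans_each = 200
readings_per_scan = 2  # each scan read once with the AI tool, once without

total_readings = radiologists * scans_each * readings_per_scan
print(total_readings)  # 2400 readings in total

# Reported effect: average reading time fell by about 33% with the tool.
baseline_minutes = 10.0  # hypothetical unassisted reading time
assisted_minutes = baseline_minutes * (1 - 0.33)
print(f"{assisted_minutes:.1f} minutes")  # 6.7 minutes
```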
In the journal article, the authors of the study concluded, "We found that BMVision enables radiologists to work more efficiently and consistently. Tools like BMVision can help patients by making cancer diagnosis faster, more reliable, and more widely available."