Sabrina Ortiz
Senior Reporter

Sabrina Ortiz is a Senior Reporter at The Deep View. Previously, Sabrina led AI coverage at ZDNET. Sabrina graduated with an M.A. in Journalism, Business and Economics Reporting from the Craig Newmark Graduate School of Journalism at CUNY and a B.A. in Media and Journalism and Political Science from the University of North Carolina at Chapel Hill.

Robot phone hints at stranger AI devices ahead

At MWC, smartphones fold, change colors, and even have robotic arms.

While every manufacturer is touting an AI smartphone, Chinese phone maker Honor is taking things a step further with a camera that physically extends from its Robot Phone via what the company calls the "industry's smallest 4DoF gimbal system" (four degrees of freedom).


Almost resembling a Pixar lamp, a small arm unfolds from the back of the phone and swings out to broaden the phone's view of its surroundings. Honor is pitching four key benefits:

  • AI assistance: A wider field of view provides added context for more useful AI responses.
  • Image stabilization: Like a traditional gimbal, it helps keep photos and videos steady.
  • AI object tracking: The camera can lock onto a moving subject, which is useful for shooting content or video calls.
  • Entertainment: The robot camera can bob its "head" to the beat of music, which is gimmicky but clever.

I demoed all four features, and the best way to describe the experience is simply fun. Object tracking worked as promised, the robot did bust some moves, and the AI assistant performed comparably to most standard chatbots, with the added charm of a Pixar lamp "head" swiveling toward you.

Beyond the fun factor, however, the practical case for the phone is harder to make — particularly at what will likely be a steep price. The exact cost has yet to be announced. What it does illustrate, though, is a broader truth about the smartphone industry: companies are going to extraordinary lengths to stand out in today's AI and robotics hype cycle.

Our Deeper View

Since AI exploded in popularity, companies have raced to incorporate it into their devices, with plenty of big promises and ambitious announcements, but relatively little change to the actual smartphone experience. So it's exciting to see a manufacturer think beyond the screen. I don't expect everyone to be carrying a robot phone anytime soon, but I do hope Honor's bold move inspires other companies to break from the standard slab form factor and seriously explore what AI-driven hardware could look like.

Alibaba makes surprise leap into AI glasses

Forget phones. Smartglasses have taken over Mobile World Congress, and Chinese tech giant Alibaba just made its move.

Alibaba's Qwen has made a name for itself as one of the world's leading AI models, with its appeal rooted in high performance and open-source availability. At MWC, the company unveiled its new AI smartglasses: the Qwen Glasses S1 and the Qwen Glasses G1.

The key distinction between the two is that the S1 features dual in-lens displays, while the G1 does not, making the G1 more like the Meta Ray-Bans and the S1 more like the Even Realities G2.

The feature set covers what most smartglasses now offer as table stakes: AI assistance, photo and video capture, real-time translation, and notifications. Users control the glasses by tapping or swiping along the sides, pressing a button near the camera to shoot, and managing media playback near the end of the stem.


Alibaba announced Qwen AI glasses at MWC 2026. Photo: Sabrina Ortiz

Qwen has yet to issue a press release, but specs gathered from a demo and booth tour include:

  • Audio: Bone conduction mic and 5-mic array
  • Battery: Dual-battery system with swappable 272 mAh packs
  • Chip: Snapdragon AR1 (the same Qualcomm chipset powering the Meta Ray-Bans) plus a coprocessor
  • Form factor: 8mm ultra-slim temples with custom lenses
  • Camera: 12MP POV camera, 3K video, 109-degree ultra-wide FOV, HDR, IMX681, 5P lens

Chatting with Qweenie, the onboard Qwen assistant, testing real-time translation, and previewing photos directly in-lens were all positive experiences. But none of it was new, and at this point, other smartglasses do it in color. What stood out most was the hardware itself.

The glasses were surprisingly light, which is particularly impressive given the built-in displays. Granted, their monochromatic green-tinted HUDs, similar to those on the Even Realities frames, allow for a lighter design than the full-color displays on Meta's frames or the forthcoming Google glasses.

The swappable battery is a genuine differentiator. Battery life remains one of the biggest unsolved problems in smartglasses, since people wear glasses all day and expect them to keep up. Being able to pop out a depleted pack and snap in a fresh one sidesteps that problem.

Our Deeper View

While these glasses are undoubtedly capable and practical, they highlighted just how quickly the space is moving. Had I seen them at CES last year, I would have been incredibly impressed and eager to try them in the real world. But a little over a year later, having now tried smartglasses with full-color in-lens displays, it was harder to be impressed. The addition of color may seem like a minor upgrade, but in my experience, it makes the display feel far more familiar and intuitive, closer to the phones and laptops we already use daily. If smartglasses are going to truly bridge the physical and digital worlds, the experience has to be as compelling as the devices they're meant to displace.

Google AI glasses prepare to take center stage

Nine months after my first demo, Google's AI glasses still feel like they could change everything. And my second demo at MWC 2026 this week only confirmed it.

I wasn't allowed to take photos during the demo since these were prototypes and not the final product. Even so, the promise is clear: like the classic Meta Ray-Bans, they look strikingly similar to regular glasses. The final product will be produced in collaboration with popular eyewear brands Warby Parker and Gentle Monster, likely making them more stylish than the typical geek glasses.


Google demoed AI glasses at MWC 2026. Photo: Sabrina Ortiz

The in-lens display is the biggest highlight, as it opens up a whole new range of capabilities. Smart glasses are gaining momentum largely through AI integration and the ability to fuse the physical and digital worlds, but there are also practical, everyday wins, like reading messages or following turn-by-turn navigation without pulling out your phone.

The in-lens display is well-positioned and easy to read. During the five-minute demo, I asked Gemini multiple questions, watched my words get accurately transcribed and sent to the chatbot, and received responses in real time.

I also tried the Nano Banana integration, which let me ask Gemini to take a photo of what I was looking at and modify it. I asked it to add a space-themed background. While it wasn't the most practical everyday scenario, the image quality was impressive, and the processing was fast (around 15 seconds, I was told). Last time, I demoed Google Maps turn-by-turn navigation and came away equally impressed.

Following the surprise success of the classic Meta Ray-Bans, last year Google announced that it was re-entering the category with its own smart glasses. When worn, Google's AI glasses feel much closer to the original Meta Ray-Bans, which owe their popularity largely to their comfort and the fact that they look like normal glasses. However, Google's version is more functionally similar to the bulkier and more expensive Meta Ray-Ban Displays, which look less like normal glasses and more like a tech product.

Our Deeper View

There's already growing acceptance of AI glasses, and since Google's glasses are so similar to regular glasses and add so much functionality with the in-lens display, I think they are poised to push smart glasses adoption to the next level. Some important details that will play a major role in the appeal are still to be determined, such as battery life and speed. However, all the pieces may be coming together. Qualcomm just unveiled Snapdragon Wear Elite, a platform designed to power next-gen AI wearables with always-on, low-power, on-device AI processing. The next year will be pivotal for the AI wearable category, and Google’s take on smart glasses is likely to redefine the category by making in-lens displays mainstream.

Qualcomm chip preps 2026 AI wearables boom

For AI to be genuinely useful in the real world, it needs to escape the screen and engage with the physical world. That's the idea behind the booming AI wearables market, and Qualcomm's newest platform is positioning itself as the catalyst for the next big leap forward.

Kicking off MWC on Monday, Qualcomm launched its new Snapdragon Wear Elite platform designed to power AI wearables, including pins, watches, earbuds, pendants, and glasses. The Wear Elite platform addresses primary AI wearable needs: low-power, always-on availability and on-device AI processing for low latency and user privacy.

A key aspect of achieving those goals is the inclusion of Qualcomm's Hexagon NPU architecture, which supports models of up to two billion parameters locally on-device (see the back-of-the-envelope sketch after the list below). Other spec highlights include:

  • Improved performance: 5x faster single-core CPU performance and up to 7x faster GPU at max FPS
  • Improved power and charging: multi-day battery life, a 30% longer day of use than previous generations, and rapid charging that reaches 50% in 10 minutes
  • Connectivity: multi-mode support integrating 5G RedCap, Micro-Power Wi-Fi, Bluetooth 6.0, UWB, GNSS, and NB-NTN
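
To put that two-billion-parameter figure in perspective, here is a back-of-the-envelope sketch in Python of the raw weight memory such a model needs at different quantization levels. The bit-widths are illustrative assumptions, not published Snapdragon Wear Elite specs.

    # Approximate weight-only memory for an on-device model.
    # Bit-widths here are illustrative assumptions, not Qualcomm specs.
    def model_memory_gb(num_params: float, bits_per_weight: int) -> float:
        """Weight memory in gigabytes, ignoring activations and KV cache."""
        return num_params * (bits_per_weight / 8) / 1e9

    PARAMS = 2e9  # two billion parameters

    for bits in (16, 8, 4):
        print(f"{bits}-bit weights: ~{model_memory_gb(PARAMS, bits):.1f} GB")

At 16-bit precision that is roughly 4 GB of weights alone, falling to about 1 GB at 4 bits, which is why aggressive quantization is what makes billion-parameter models plausible on RAM- and battery-constrained wearables.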

The chipset is the first to work across Wear OS by Google, Android, and Linux. Global partners supporting the platform include Google, Motorola, and Samsung, and the first Snapdragon Wear Elite-powered devices will be available in the next few months, according to Qualcomm.

The company said the Wear Elite platform is part of Qualcomm’s broader vision for building an ecosystem of wearable devices where multimodal AI agents are tailored to users, understanding their context and anticipating their needs.

Our Deeper View

While the AI wearables race has already started, it's about to get a lot more intense in 2026. Meta's Ray-Bans, which also use Qualcomm chips, sold millions of units in the past year alone, and now nearly every major tech company is following suit. Google and Samsung have an imminent smartglasses launch that looks likely to run on Qualcomm's new platform, Apple is rumored to be developing its own glasses and AI wearables, and Motorola previewed Project Maxwell at CES, a pin that takes agentic actions based on what it sees. All of these devices need substantial processing power and on-device AI in a tiny, battery-constrained package, so step changes like Qualcomm's new chips hold promise that the next wave of 2026 devices will be able to pack in genuinely new capabilities.

Disclosure: Qualcomm covered Sabrina Ortiz’s travel and accommodations to attend MWC. Qualcomm had no editorial input or review of this article.

Agentic Gemini feature debuts on Samsung phone

I've been an AI beat reporter for over three years, but lately, I've found myself at more device launch events than ever, because AI is now being infused into everything. After watching more features come and go than I can count, one of Samsung's latest actually intrigued me.

With the launch of the Samsung Galaxy S26, Samsung introduced a task automation feature in beta, powered by Gemini. The way it works is simple: rather than completing an action outright, it takes all the necessary steps and leaves the final approval to you. To activate it, you simply ask Gemini.

In my demo, all I said was “Call me an Uber to SFO in 15 minutes.” Gemini then got to work in the background, surfacing a blue pill-shaped button labeled “View Progress.” Tapping it is optional, since the aim is for the task to run quietly while you do other things.

When you tap the button, you can watch the agent work in a sandbox environment, entering the address and selecting the vehicle type, before it asks for your final confirmation to act. Both the sandbox and the confirmation step exist to protect users from the agent going rogue.
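
For readers who think in code, here is a minimal sketch of that approve-before-acting pattern. Every name in it (AgentTask, plan, execute) is a hypothetical illustration of the flow described above, not Samsung's or Google's actual API.

    # Hypothetical sketch of the sandbox-then-confirm agent flow.
    # None of these names correspond to a real Samsung or Google API.
    from dataclasses import dataclass, field

    @dataclass
    class AgentTask:
        request: str
        steps: list[str] = field(default_factory=list)

        def plan(self) -> None:
            # The real feature fills out the app in a sandbox in the
            # background: destination, vehicle type, fare review.
            self.steps = [
                "open ride-hailing app in sandbox",
                "enter destination: SFO",
                "select vehicle type",
                "review fare estimate",
            ]

        def execute(self, approved: bool) -> str:
            # Nothing irreversible happens without explicit user approval.
            if not approved:
                return "Task discarded; no booking was made."
            return "Booking confirmed after steps: " + " -> ".join(self.steps)

    task = AgentTask("Call me an Uber to SFO in 15 minutes")
    task.plan()  # runs in the background; "View Progress" is optional
    print(task.execute(approved=True))

The key design choice is that planning and acting are separate phases, with the human approval gate sitting between them.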

This release marks the first time a truly agentic feature has shipped to customers in a consumer device, despite many past attempts, including the most notorious: the failed Rabbit R1. At CES 2026, Motorola's 312 Labs showcased Project Maxwell, an AI-powered pin, as a mere proof of concept with no release timeline, yet it worked much the same way as Samsung's Gemini task automation feature.

During the Unpacked keynote, TM Roh, Samsung's President and CEO of the Device eXperience Division, called the Galaxy S26 lineup the first true "agentic AI phones." It's an ambitious label, but if this feature ships and works as promised, it may well be the clearest glimpse yet at what an agentic smartphone can do.

Our Deeper View

Also notable is that this is a Google feature, meaning its reach will extend far beyond the Galaxy S26 lineup. It's now also shipping to Google's own Pixel 10 and Pixel 10 Pro, and with Google's partnership with Apple to power the new Siri, this capability could soon make its way to iPhones as well. That said, my confidence in the feature isn't high just yet, since every attempt to use it outside of the demo on the Galaxy S26 Ultra has fallen short. Still, this is a day-one beta, and it deserves the benefit of the doubt as it rolls out. I'll be putting it through its paces as I switch to the Galaxy S26 Ultra as my daily driver throughout my review, and I'll keep you updated. You can follow me on X/Twitter at x.com/sabrinaa_ortiz for real-time updates.

Samsung's Galaxy S26 pushes AI from flashy to functional

Smartphones are becoming AI-first devices at a rapid pace, and Samsung's latest flagship is further proof.

The S26 lineup, comprising the S26, S26+, and S26 Ultra, brought upgrades expected of a new smartphone generation, including improvements to form factor, camera system, and display. But the most significant hardware updates and exciting new features were united by a common theme: deeper integration of AI assistance.

For example, the Galaxy S26 lineup is powered by the Snapdragon 8 Elite Gen 5 for Galaxy, which delivers improvements across the NPU, GPU, and CPU, collectively boosting AI performance. But the biggest upgrade of the three was the 39% increase to the NPU, which is specialized for AI workloads.

The Galaxy S26 Ultra also introduces a new thermal architecture with a redesigned vapor chamber for better heat distribution, along with Super-Fast Charging 3.0 for improved battery efficiency: both critical for sustaining demanding AI applications and workflows.


Beyond the hardware, there is a plethora of new AI features. These are the most noteworthy, starting with the ones that impressed me the most in hands-on testing:

  1. Automated app action: With this feature, you can ask Gemini to perform an action in an app, and it will handle the setup, leaving you only to approve. It launches with Uber support: ask Gemini something like, “Call me an Uber to SFO,” and it handles the rest.
  2. Now Nudge: This feature provides real-time suggestions across any messaging app, working within the keyboard. It will feed you proactive information based on the context of your conversation, such as pulling contact information or calendar dates.
  3. Bixby update: Bixby was updated to better understand natural language. Its real appeal lies in its deep integration with device settings, letting users easily adjust them through conversational prompts. Best of all, you don’t need to pick between Perplexity, Gemini, or Bixby, as they can work in tandem.
  4. Call Screening: While Call Screening already existed and has become a standard AI feature across all ecosystems, with this update, Galaxy AI can ask who’s calling and why, and show you the response in real time.
  5. Selfie quality update: An enhanced AI ISP and an Object-Aware model aim to produce selfies with better textures and tones.
  6. Photo Assist: You can now use generative AI right within your Gallery app to add elements to your photos. For example, in a demo, we were able to take a picture of a half-eaten cake and then ask Galaxy AI to make it look whole, which it did seamlessly. Some more fun (yet silly) applications included using it to change my selfie background to the Golden Gate Bridge or add a cap to my head.
  7. Document Scan: While this has always been available, it is now simplified: all users need to do is point their camera at the document, much like scanning a QR code. It also ignores creases, removes distortions, and completes the photo if it has bent corners.
  8. Creative Studio: This serves as a hub for all your generative AI creation needs. It is last on my list because I don’t know how often I will need to create sticker packs, wallpapers, or contact cards, but from the demos I did, the generative outputs are pretty good.

The Galaxy S26 lineup is available for pre-order today and will be generally available on March 11. The Galaxy S26 Ultra starts at $1,299.99, the Galaxy S26+ at $1,099.99, and the Galaxy S26 at $899.99.

Our Deeper View

Since generative AI soared in popularity, phone manufacturers have been racing to add new AI integrations and features. However, these additions need to be done tastefully, as adding features that aren't truly helpful, just for the sake of it, has led companies such as Apple to face significant backlash. Google's Pixel 10 release raised the bar for what an “AI phone” should look like. With the S26 launch, Samsung picked up where Google left off, adding AI features in subtle yet helpful ways that should improve users' everyday experiences. Most importantly, the company isn't overpromising, as seen with the app action integration. For example, Samsung isn't actually advertising that the feature will call an Uber for you, which might have sounded more impressive but could have fallen short in practice.

Conflicting signals: AI investments vs. ROI doubts

AI investments continue surging, evident in headlines, fundraising rounds, stock rallies, and product launches. But executives are having second thoughts.

The AI company Dataiku released a new global report based on a Harris Poll survey of more than 800 CIOs worldwide. It found that CIOs not only regret some of their AI investments but are also anxious about what AI's performance means for their organizations' futures and their own jobs.

A majority of CIOs (74%) said their role will be at risk if their company does not deliver measurable business gains from AI within the next two years. At the same time, many are not yet seeing results, even as they are questioned about them. The report found that:

  • 74% say they regret at least one major AI vendor or platform decision made in the last 18 months.
  • 62% say their CEO has directly questioned or challenged those decisions.
  • Nearly one-third (29%) say they have repeatedly been asked to justify AI outcomes they could not fully explain.

“ROI is a real question, but the honest answer is that we're early. It's normal that measurement frameworks haven't caught up with a technology whose application is still being defined,” Kurt Muehmel, Head of AI Strategy at Dataiku, told The Deep View. “The pressure is real. But the answer isn't to stop investing, it's to stop investing badly.”

To maximize the value of AI investments, Muehmel recommends avoiding reliance on a single model provider. The advantages of this approach include switching to better models as they rapidly evolve, leveraging cheaper alternatives if the AI bubble pops, and avoiding a rebuild of your entire system when swapping out a model.
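
Here is a minimal sketch of the kind of abstraction layer that advice implies, in Python. The interface and provider names are hypothetical, not real vendor SDKs; the point is that application code depends on one interface, so providers can be swapped without a rebuild.

    # Hypothetical provider-agnostic layer; ChatModel, ProviderA, and
    # ProviderB are illustrations, not real vendor SDKs.
    from typing import Protocol

    class ChatModel(Protocol):
        def complete(self, prompt: str) -> str: ...

    class ProviderA:
        def complete(self, prompt: str) -> str:
            return f"[provider A] answer to: {prompt}"

    class ProviderB:
        def complete(self, prompt: str) -> str:
            return f"[provider B] answer to: {prompt}"

    def answer(model: ChatModel, prompt: str) -> str:
        # Application code sees only the ChatModel interface, so a
        # cheaper or stronger provider can be dropped in later.
        return model.complete(prompt)

    print(answer(ProviderA(), "Summarize our AI vendor spend"))
    print(answer(ProviderB(), "Summarize our AI vendor spend"))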

A Gartner report that surveyed more than 300 CFOs and finance leaders also found a willingness to increase AI spending in 2026 based on AI's future promise. The report found that 60% of CFOs plan to increase AI investments in the finance function by 10% or more in 2026, while another 24% expect increases of 4% to 9%.

“This investment surge is driven by a 'Return on the Future' mindset, which prioritizes long-term strategic disruption and competitive parity over immediate financial gains,” said Nauman Raja, Director Analyst at Gartner. “After all, 88% of CFOs view AI as a critical mandate for future efficiency, so its potential of being a disruptive force is too great for CFOs not to invest in and try to achieve gains with.”

Meta’s AI glasses could soon identify people

Smart glasses bring AI into your world. They could also identify anyone in it.

Meta has dominated the AI smart-glasses market, with its Ray-Ban collaboration becoming the world's best-selling AI glasses, selling over 7 million units in the past year. The appeal lies in seamlessly integrating mics, cameras, and speakers into a lightweight design. However, a New York Times report reveals that Meta is exploring using those same cameras for a new facial recognition feature.

The feature, internally called “Name Tag,” would allow the wearer to identify the people around them and surface relevant information through the Meta AI assistant (the same one currently used for general queries), according to four people involved with the plans who spoke to the NYT.

According to two sources, the feature would be limited in scope, potentially recognizing only people the wearer is connected to on Meta platforms or those with public Meta accounts, rather than identifying anyone indiscriminately.

An internal May document obtained by the NYT also reveals Meta planned to pilot the feature with attendees at a conference for the blind before rolling it out more widely, signaling it would first be marketed as an accessibility feature. Beyond accessibility, the feature could deliver benefits for users and Meta alike.

“For consumers, facial recognition removes the barriers and embarrassment of being caught in a situation when you think you know someone, but aren’t 100% sure,” said Ramon Llamas, Research Director, Mobile Devices and AR/VR at IDC. “For Meta, facial recognition on the glasses can help strengthen the connections among its different products and services and drive longer usage of each.”

Notably, the document suggests Meta planned to leverage the "dynamic political environment" in the United States to distract from potential backlash from civil society groups. Meta has experienced similar scrutiny before.

In 2024, two Harvard students paired the glasses with a facial recognition service that allowed them to identify strangers and retrieve personal information. At the time, the company stated that the flashing light on the glasses serves as an indicator to the public that the camera is running. Meta also had to shut down its decade-old Facebook facial recognition technology in 2021 due to privacy concerns.

“It raises many questions as to what would be a reasonable approach to privacy regarding what information can be accessed, to what extent, how reliable that information is, and so much more,” added Llamas. “That’s where Meta has to come up with the right formula for reasonable usage.”

The future of the feature is still not guaranteed, as the company is reportedly evaluating how the feature could be released in a way that addresses “safety and privacy risks,” according to the documents. Meta similarly considered adding facial recognition to the original launch of its AI glasses in 2021, but decided against it.

Apple’s Siri revamp slips as features get split

When will Siri's promised upgrade arrive? Don't hold your breath, at least not yet.

Since last June, Apple users have been waiting for the promised Siri overhaul, one that would make the assistant more conversational and capable of taking actions on your behalf using personal context. Bloomberg previously reported that Apple had set an internal March release target with the iOS 26.4 update, but a new report reveals the features will now be staggered across multiple future updates.

The delay stems from recent testing snags that revealed software problems, including Siri taking too long to handle requests, according to people familiar with the matter cited in the report. However, some new features may arrive as soon as iOS 26.5, as internal versions of that update already include notices about certain Siri upgrades.

Internal test versions of iOS 26.5 reveal it will include two unannounced features: a new web search feature that functions similarly to Perplexity, and custom image generation. Following Apple’s track record with iOS release schedules, the wait between iOS 26.4 and iOS 26.5 will likely be short.

Yet the most anticipated feature might not be included. An internal version of 26.5 lets users “preview” Siri’s ability to reference personal data for added context in prompts, and that “preview” designation is likely a sign that the full feature is not ready to ship just yet.

The report cites a slew of other challenges: development running behind on advanced commands for voice-controlled in-app actions, accuracy issues and bugs flagged by early testers, and Siri defaulting to OpenAI’s ChatGPT instead of Apple’s own technology, which now incorporates AI from Google Gemini.

This isn’t stopping Apple’s ambitions, as the company is reportedly also working on a revamped, chatbot-like Siri for iOS 27, iPadOS 27, and macOS 27, as we previously reported.
