
Jason Hiner
Jason Hiner is the Chief Content Officer and Editor-in-Chief of The Deep View. He's an award-winning journalist who has spent his career analyzing how tech has reshaped the world. He covered AI for over a decade at ZDNET, CNET, and TechRepublic and watched it evolve from research labs to enterprise infrastructure to a daily reality for over a billion people. He came to The Deep View for the opportunity to cover AI every day and build a next-generation media company.
Articles

Sonnet 4.6: Anthropic goes beyond techies
Anthropic's models continue to grow in popularity among AI enthusiasts. Now, the latest upgrade from Anthropic gives existing users a better deal and gives regular folks new reasons to try AI.
On Tuesday, Anthropic launched Sonnet 4.6, rolling many of the latest AI superpowers from its flagship model, Opus 4.6, into its less expensive mid-tier option.
But the real story is twofold:
- First, a lot of the latest strengths from Opus 4.6 (coding, working with agents, better accuracy, and searching big data sets) are now available at a 40% discount from the price of Opus tokens. However, Opus is still better at deep reasoning and solving the most difficult and ambiguous problems.
- Second, Anthropic is now pushing Sonnet 4.6 as the technology that can do more than help developers debug their websites, offering everyone agents to help power through their to-do lists. For example, the Sonnet 4.6 launch video shows the Claude agent using the technology to renew a car registration, file an expense report, update a presentation, and reschedule a delivery date.
Companies like Box that have been testing Sonnet 4.6 have reported strong real-world benefits.
If you're confused by all of the different names of the Anthropic AI models, here's a quick refresher. There are always three tiers, and the names themselves are a dead giveaway, since each is a literary form whose length mirrors the model's size.
- Opus: The most powerful and expensive model is for multi-step reasoning, advanced coding, and long-horizon agentic work with complex problems
- Sonnet: The mid-tier model is the most versatile and the default option, as it's designed to balance speed, power, and costs; it can handle the most diverse set of tasks
- Haiku: This lightweight model is aimed purely at speed and economy and is best for basic question-and-answer queries, summarization, content classification, and lightweight chats
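The three tiers above map naturally onto a cost-versus-capability routing decision. Here's a minimal sketch of how an application might pick a tier by task complexity; the model IDs, prices, and keyword cues are placeholders for illustration, not Anthropic's actual identifiers or rates:

```python
# Illustrative tier router. Model IDs and per-million-token prices are
# placeholders, not Anthropic's actual identifiers or rates.
TIERS = {
    "haiku":  {"model": "claude-haiku-x",  "cost_per_mtok": 1.0},
    "sonnet": {"model": "claude-sonnet-x", "cost_per_mtok": 3.0},
    "opus":   {"model": "claude-opus-x",   "cost_per_mtok": 15.0},
}

def pick_tier(task: str) -> str:
    """Route a task description to a model tier by rough complexity cues."""
    t = task.lower()
    if any(k in t for k in ("multi-step", "ambiguous", "long-horizon")):
        return "opus"    # hardest problems justify the premium
    if any(k in t for k in ("summarize", "classify", "quick question")):
        return "haiku"   # lightweight work goes to the cheap tier
    return "sonnet"      # versatile default for everything else

print(pick_tier("Summarize this meeting transcript"))    # haiku
print(pick_tier("Plan a long-horizon agentic refactor")) # opus
print(pick_tier("Draft a blog post"))                    # sonnet
```

In practice the routing signal might be a classifier or the user's own choice, but the economics are the same: default to the mid-tier, drop down for trivial work, and pay the Opus premium only when the task demands it.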
Our Deeper View
It's encouraging to see Anthropic create an easier interface within its Claude app to allow non-techies to take advantage of the power of AI agents. That's where Claude Cowork comes in, and Sonnet 4.6 feels primed to help regular people do even more with this tool. On the other end of the spectrum are the hardcore AI enthusiasts, represented by the early adopters of OpenClaw. These folks are burning through tokens (and dollars) to keep their personal AI agents doing work for them, often powered by Opus 4.6. They're likely to be thrilled to use Sonnet 4.6 for many of those tasks and save a bunch of money.

Ready to install OpenClaw? Do it smart, or wait
When OpenAI hired OpenClaw founder Peter Steinberger, it officially turned personal AI agents into the hottest trend of 2026.
If you're feeling FOMO and are about to spend $500 on a Mac mini to install OpenClaw and spin up your own personal AI assistant, there are a few factors you may want to consider first.
Despite what you may have heard, installing OpenClaw is a pretty technical and time-intensive process. There are four-hour YouTube videos that walk you through the entire process — and that doesn't include all the prep and planning it takes to do it right by setting up separate accounts for email, texting, GitHub, Slack, an Apple account, and any other services you want your personal AI assistant to work with.
That brings us to the second caveat: You shouldn’t install this on your main computer or allow it to use your logged-in accounts for email, file storage, text messaging, etc. That would be like hiring a new employee and giving them the password to your laptop and your email on the first day.
Remember that AI agents are non-deterministic systems: they don't follow a fixed set of step-by-step instructions. One might, for example, delete all of your files because it perceived you as well-organized and decided it would be helpful to clean up for you.
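To make "non-deterministic" concrete: an agent samples its next action from a probability distribution, the way a language model samples tokens, so identical instructions can produce different behavior on different runs. A toy sketch (the action names and weights are invented for illustration):

```python
import random

# Toy illustration of non-determinism: an "agent" samples its next
# action from a weighted distribution, so even a low-probability action
# like deleting files happens occasionally. Actions/weights are invented.
ACTIONS = ["summarize files", "rename files", "delete files"]
WEIGHTS = [0.6, 0.3, 0.1]

def next_action(seed: int) -> str:
    rng = random.Random(seed)
    return rng.choices(ACTIONS, weights=WEIGHTS, k=1)[0]

# Identical "prompt," twenty runs, more than one distinct behavior.
runs = {next_action(seed) for seed in range(20)}
print(runs)
```

That occasional low-probability branch is exactly why you sandbox the agent: you plan for the 1-in-10 run, not the average one.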
So instead of installing OpenClaw yourself right now, consider one of these options:
- Use a cloud-based OpenClaw service like Hostinger, MyClaw.ai, or V2Cloud. You can spin up one of these almost instantly for $5-$10 per month, and they give you a secure sandbox that keeps your personal AI assistant from accessing other machines and data on your network.
- Wait until OpenAI and Steinberger release their personal AI assistant product that will "bring agents to your mom and everyone else," as OpenAI's CMO Kate Rouch said. We should expect that to happen relatively quickly.
- Give Anthropic's Claude Cowork a try. It's another AI agent, but it has more guardrails to keep you from getting in trouble. For enterprises, there's also the lesser-known Amazon Quick Suite, which offers many of the benefits of personal AI assistants within the confines of a traditional IT environment.
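If you do decide to self-host anyway, the core of "don't install it on your main computer" is isolation. Here's a minimal sketch of a containerized sandbox; the image name and mounted folder are placeholders, since OpenClaw's actual packaging may differ:

```yaml
# docker-compose.yml — illustrative agent sandbox, not OpenClaw's
# official setup. The point is isolation: the agent sees one dedicated
# folder, capped resources, and no access to the host's filesystem.
services:
  agent-sandbox:
    image: ubuntu:24.04        # placeholder; substitute the agent's image
    command: sleep infinity
    mem_limit: 2g
    cpus: 2
    network_mode: bridge       # keep the agent off the host network namespace
    volumes:
      - ./agent-data:/data     # the ONLY host folder the agent can touch
    read_only: true            # root filesystem stays read-only
    tmpfs:
      - /tmp
```

The same principle applies to accounts: give the agent its own email, messaging, and cloud credentials, just as you would issue a new employee their own logins rather than yours.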

OpenAI bets big on personal AI agents with OpenClaw
OpenClaw may be AI's biggest inflection point since ChatGPT, and it now has a special relationship with OpenAI.
On Sunday, OpenClaw founder Peter Steinberger announced that he was "joining OpenAI to work on bringing agents to everyone." He also stated that the OpenClaw project itself would "move to a foundation and stay open and independent."
This doesn't come as a surprise after Steinberger was interviewed on the Lex Fridman podcast at the end of last week and said that VCs had been chasing him to give him money to turn OpenClaw into a company. But Steinberger told Fridman that the alternative path was to work with one of the big AI labs and that "Meta and OpenAI seem the most interesting." He said his condition was that the project remain open-source and perhaps follow a model similar to Google's Chrome and Chromium.
"I think this is too important to just give to a company and make it theirs," Steinberger said.
OpenClaw has become the biggest story in AI so far in 2026, stealing the spotlight from Anthropic's Claude Code and Claude Cowork, two other agentic solutions that have also begun to change the way people work. But OpenClaw (formerly known as Clawdbot and Moltbot) has been a runaway freight train of momentum since it went viral at the end of January.
There have been two main reasons for OpenClaw's rapid popularity:
- It's largely viewed as the most independent and capable personal AI agent; once you set it up, it can figure out creative ways to do tasks you tell it, but it can also learn about you and proactively suggest things it could help you with
- One of OpenClaw's biggest innovations is relatively simple: the ability to send it instructions from messaging apps such as iMessage, WhatsApp, and Slack and have it carry out those tasks even when you're not at your computer
OpenAI is clearly thrilled with the Steinberger deal, as three of its top executives, Sam Altman, Greg Brockman, and Fidji Simo, all tweeted about it on Sunday night.
"I'm very excited to make this into a version that I can get to a lot of people, because this is the year of personal agents and I think that's the future," Steinberger said. "And the fastest way to do that is teaming up with the big labs."

Lessons from 587 C-suite AI meetings
What does it actually take for enterprises to adopt AI at scale?
In this episode of The Deep View Conversations, I sat down with Shibani Ahuja, Senior Vice President of Enterprise IT Strategy at Salesforce. Over the past year, Shibani has met with 587 C-suite leaders to understand how Salesforce can evolve into an agentic AI platform for the world’s largest organizations.
We unpack what she’s learned from those conversations, including the real blockers to AI adoption, how leading enterprises are progressing, and why shared context and trust matter more than raw model capabilities.
Shibani also breaks down Salesforce’s Agentic Maturity Model, a framework designed to help organizations assess their current AI readiness and chart a path forward.
We also explore:
- How AI is reshaping the banking and financial services industry, where Shibani spent a good part of her career
- The story of how Shibani joined Salesforce after challenging Marc Benioff and his leadership team as a customer
- Why clear, jargon-free communication is one of the most underrated skills in AI, and how to do it well in high-stakes settings
Shibani is one of the most cogent communicators in tech today, and this conversation is packed with practical insights for anyone leading, building, or communicating about AI inside an organization or in public settings.

Empire of AI: When will SpaceX and Tesla merge?
SpaceX is officially an AI company after closing its acquisition of xAI. The other shoe still to drop: When will Tesla and SpaceX combine forces?
While SpaceX and Tesla traditionally had very different missions — reusable rockets and electric vehicles, respectively — the two have more recently moved into each other's orbits as they prepare to play their part in the AI boom.
They share the same CEO, Elon Musk, of course, and Musk has been publicly entertaining the idea of combining the companies since at least 2020. But the idea always appeared to be little more than conjecture on Twitter, until now. The AI race has prompted both companies to reimagine their roles in a world where AI-powered transformation is taking up all the oxygen in the tech industry.
SpaceX naturally sees itself playing a key role in the newly trendy idea of launching solar-powered AI factories into space. Meanwhile, Tesla has increasingly reframed itself as an AI company rather than a car company, with its focus on autonomous taxis and robots.
And while bringing the two companies together might be a lot more convenient for Musk, shareholders in both companies care less about Musk's sleep schedule and more about optimizing the long-term value of their investments.
From that perspective, a combined SpaceX-Tesla organization with AI as its north star could offer several benefits:
- Tesla's sometimes-overlooked solar and battery storage businesses could provide key technology SpaceX would need if the datacenter-in-space dream becomes a reality
- With xAI, Tesla could benefit from having a frontier model lab to help optimize and train its AI software for autonomous vehicles
- Bringing together the tech stacks on Tesla's self-driving cars and SpaceX's autonomous rockets could boost the technology of both
- The two companies have pioneered manufacturing advances, Tesla with gigacasting and SpaceX with 3D printing, and combining those efforts and teams could be a force-multiplier, especially as AI begins to intersect with the physical world
- Along with manufacturing, both Tesla and SpaceX are also playing a role in the resurgent robotics sector, which is now at the cutting edge of AI trends as physical AI emerges

AI agents are moving faster than you thought
AI agents in business aren't something that will happen in the future. They’re already here, and they're scaling a lot more rapidly than we expected.
In this episode of The Deep View: Conversations, I talk to Matt Yanchyshyn, who leads AWS Marketplace at Amazon Web Services. Yanchyshyn's team helps organizations discover, buy, and deploy software on AWS, and one of the biggest shifts they’ve seen over the past six months is the explosion of AI agents in real-world use cases.
When AWS unveiled its agent marketplace in mid-2025, the internal goal was initially to launch with 50 agents. By early 2026, that number had surged past 2,600 agents, making it the fastest-growing category in the history of the world’s largest cloud platform.
So what’s driving that surge? Yanchyshyn breaks it down.
In this conversation, we cover:
- Which types of AI agents are seeing the fastest enterprise adoption
- The industries and use cases leading the charge
- How companies are handling data security and sovereignty concerns
- The role of multi-model orchestration in agent effectiveness
- How AWS is using agents internally to drive a wide range of wins
If you're trying to understand where AI agents are actually being deployed — not the hype, but the reality — then this conversation will reset your expectations. It will help you understand where agentic AI is already delivering business value, and where it’s heading next.
🎧 Listen in your favorite podcast player
Thank you to our sponsor, Deel, an AI-native platform for HR, IT, and payroll. Hire, manage, pay, and equip anyone, anywhere: deel.com/deepview

OpenAI just launched its answer to Claude Cowork
On the same day that Anthropic released its Opus 4.6 model to match OpenAI's multi-agent coding advantage, OpenAI has released GPT-5.3-Codex to rival Anthropic's Claude Cowork.
In its blog post announcing the new model, OpenAI declared, "With GPT‑5.3-Codex, Codex goes from an agent that can write and review code to an agent that can do nearly anything developers and professionals can do on a computer."
While GPT-5.3-Codex is still aimed primarily at engineers and developers, it can now help them with other parts of their job beyond just writing code. Specifically, OpenAI cites, "debugging, deploying, monitoring, writing [product requirements documents], editing copy, user research, tests, [and] metrics." It's also built to help create slide decks and spreadsheets.
And, of course, many of those tasks will be helpful for professionals adjacent to software engineers, such as product leaders, designers, and project managers. In fact, tech-forward employees in almost any role will likely find these features useful, especially if they already have experience with ChatGPT.
And since OpenAI released its desktop Codex app for Mac on Monday, it's now much more accessible to non-coders, since you no longer have to operate it from the command line. Keep in mind that the Codex app is separate from the ChatGPT app, unlike Claude, which integrates its coding and chatbot into a single app. OpenAI's Codex app is also limited to Mac for now, while the Claude app is also available on Windows. But we should expect that it's only a matter of time before OpenAI brings the Codex app to Windows.
Other notable upgrades include:
- 25% faster inference for quicker coding and task execution
- Improved coding accuracy: it beats previous models on developer benchmarks such as SWE-Bench Pro and Terminal-Bench 2.0
- Interactive agent options: you can steer, ask, and update while the agent is working on complex, long-running tasks
- Upgrades to reasoning and professional knowledge: combines advanced coding with broader general reasoning from GPT-5.2 for more nuanced decision-making
- Stronger cybersecurity: this is the first model that OpenAI qualifies as “high” in their cybersecurity framework, as it's trained to detect software vulnerabilities and backed by stronger safety layers
Notably, OpenAI shared that "GPT‑5.3‑Codex is our first model that was instrumental in creating itself." Specifically, the model was involved in its own debugging, deployment, and diagnosis of test results. The company reported, "Our team was blown away by how much Codex was able to accelerate its own development."

Opus 4.6: Claude Code can now do multi-agent tasks, too
As the industry anxiously awaits the release of Claude Sonnet 5, Anthropic released Claude Opus 4.6 to match OpenAI's short-lived advantage in agents.
On Thursday, Anthropic released the latest update to its Claude model family, saying the model offers better coding and review skills, improved task planning, and can sustain agentic tasks for longer than its predecessor.
Alongside the launch, Anthropic unveiled a feature called “agent teams,” which gives users the ability to spin up agents that can split up tasks autonomously and work on them in parallel. Notably, this feature offers capabilities similar to OpenAI’s Codex, which debuted multi-agent capabilities earlier this week.
Claude Code has been a viral hit over the past couple of months. It's even been blamed for a stock market sell-off in software companies, as investors worry those companies may not have a future if any organization or individual can now use AI to vibe-code custom software that exactly meets their needs.
However, OpenAI's Codex raised the bar. Rather than just offering an AI agent for coding, it enabled the ability to use a team of agents working together. With Opus 4.6, Claude can now do the same.
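The "agent team" pattern described above (a lead agent splits a job into subtasks, workers handle the pieces in parallel, and the results are merged) can be sketched in plain Python. This is a generic illustration of the pattern, not Anthropic's or OpenAI's actual API; the worker is a stub standing in for a real model call:

```python
from concurrent.futures import ThreadPoolExecutor

# Generic sketch of the "agent team" pattern: subtasks fan out to worker
# agents running in parallel, then results fan back in. The worker below
# is a stub; in a real system each call would invoke a model or tool.
def worker_agent(subtask: str) -> str:
    return f"done: {subtask}"  # stand-in for real agent work

def run_agent_team(job: list[str]) -> list[str]:
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(worker_agent, job))  # parallel, order preserved

results = run_agent_team(["research", "draft", "review"])
print(results)  # ['done: research', 'done: draft', 'done: review']
```

The hard part in a real product isn't the fan-out, it's the autonomy: letting the agents decide how to split the work and reconcile conflicting results, which is what the "agent teams" feature is pitching.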
Anthropic also touted a number of other improvements that Opus 4.6 has to offer, including improved abilities on work tasks such as financial analysis, research, document creation, and agentic search; the ability to work more reliably with large codebases; and autonomous multitasking capabilities when utilized in Claude Cowork.
- The model is also the first in the Opus line to offer a 1 million-token context window.
- Opus 4.6 also achieves state-of-the-art performance on several benchmarks, including agentic coding evaluation Terminal-Bench 2.0, frontier model evaluation Humanity’s Last Exam, and finance and legal evaluation GDPval-AA.
- One drawback that the company noted in its blog post is that, while Opus 4.6 is more thoughtful and careful in considering its outputs to offer better results for harder problems, this feature can “add cost and latency on simpler ones.”
Alex Albert, head of Claude relations at Anthropic, said in a post on X that the launch represents “the watershed moment for AI becoming a real working partner for people who spend their days in spreadsheets, slide decks, and long docs.”
The Deep View got access to the model before launch and did not see an observable jump in output quality. That said, the previous model was already very capable, and this release does nothing to diminish that experience. The differences may only show up when stress-testing the model on truly complex coding and reasoning workflows, and our team will keep testing it to surface those nuances.

How AI could reshape human memory and attention
AI could change the way we remember, and the way we pay attention.
In this episode of The Deep View: Conversations, I sit down with Bobak Tavangar, CEO of Brilliant Labs, one of the most intriguing startups in AI hardware today.
While trillion-dollar giants like Meta and Google race to define the future of AI glasses, Brilliant Labs is taking a radically different path: building in public, going open-source with both software and hardware, and centering their next product, the Halo glasses, around something deeply human.
The focus? A conversational AI agent for your long-term memories and conversations.
This isn’t just about smarter wearables. It’s about a bigger idea:
- Can AI help us be more present, not less?
- Could technology support memory, reflection, and intention instead of distraction?
- What does privacy look like when AI can recall your life?
We also explore:
- What Bobak learned during his time at Apple
- Why AI hardware is one of the hardest frontiers in tech
- The challenging process of finding a co-founder
- Bobak’s philosophy on communicating on social media with purpose, not hype
Bobak is one of the most thoughtful founders in the AI space, consistently elevating the conversation beyond features and into questions of values, agency, and human experience.
If you care about where AI, wearables, memory, and attention intersect, this is a conversation you don’t want to miss.
📺 Watch on YouTube: https://youtu.be/aN-Z1VzVLZg
🎧 Listen on your favorite podcast player: https://tdv.transistor.fm/episodes/2-how-ai-could-reshape-human-memory-and-attention-bobak-tavangar
Or get it straight to your inbox, for free!
Get our free, daily newsletter that makes you smarter about AI. Read by 750,000+ from Google, Meta, Microsoft, a16z and more.