It's clear that 2025 was the year tech companies became obsessed with humans doing less. Practically every big tech firm went crazy over AI agents last year. The constant refrain at major conferences was agentic innovation, as these firms repeatedly touted the astonishing productivity gains that could result from installing digital coworkers alongside existing human workforces.
Enterprise C-suites were all in on breaking agents out of the pilot phase and automating legacy processes. Agents even brought tech rivals like Anthropic, OpenAI, Google, and Microsoft together in a coalition dedicated to developing open-source standards for them. The excitement around the tech has reached such a fever pitch that Salesforce is considering a total rebrand to Agentforce (à la Facebook-to-Meta in 2021).
The big takeaway? "We've built foundational models with formidable intelligence. Agents let us actually do something with them," Steve Zisk, principal data strategist for Redpoint Global, told The Deep View.
“We're finally at a point where the AI pattern recognition engines, if you will, start to actually resemble what people think of as human interactions,” said Zisk. “That has meant that a lot of people on both sides of the equation, the consumers, the big brands, big companies and so on, are reassessing what they can actually hand off to the machine.”
Agents are the natural evolution of where AI and technology broadly are going, Prasidh Srikanth, senior director of product management at Palo Alto Networks, told me. Search engines democratized information, chatbots democratized intelligence, and now agents democratize work, he said.
“It's behaving on behalf of a human being, thinking about our intelligence and actually making sense of what you need to do to fulfill an objective,” Srikanth said.
But despite all the buzz, anticipation and starry-eyed hopes, agents are far from ready to be our actual work companions, Neil Dhar, global managing partner at IBM Consulting, told me. As it stands, we’re still in the “first or second inning of the whole agent race,” he said. And while enterprises are aware that agents have the potential to upend the way we work, “people are just getting in tune with what an agent actually is.”
AI agents have trust issues we can't ignore
While tech companies talk a big game with agents, there are cracks in the foundation that can’t be ignored.
At their core, agents still face all the same problems as their conventional chatbot predecessors. Enterprises struggled to work around the fundamental issues they faced with generative AI, and a new form factor isn’t necessarily the solution. A study from Gartner estimates that more than 40% of agentic AI projects will be canceled by the end of 2027, citing high costs and an inability to control risks.
The throughline problem is trust, said Dhar, in all of its “different flavors.”
- Security and privacy remain major complications. The more autonomy and agency we give these systems, the more access we have to give them, too, said Srikanth. That introduces a new level of risk that enterprises aren’t ready to handle. “Security often comes as an afterthought,” he said.
- On top of security, there’s the quality and accuracy issue, said Zisk. Enterprises must constantly weigh the “hallucination tax,” he said. What are the costs, financially, operationally, and reputationally, of one of these systems flying off the rails?
- Finally, there’s a broader trust issue that employees and executives alike are facing, questioning how this tech will entirely shift their workflows, their processes and potentially their livelihoods. “There's just the fear of the unknown,” said Dhar.
If enterprises can’t trust these systems, they’re effectively stuck. The agents will remain in pilot mode, only being tasked with small and insignificant processes that have little to no effect on a company’s earnings – a far cry from the transformational powers that tech giants are claiming.
But foundational problems may require foundational solutions, said Srikanth. Just as with conventional AI systems, good data is key. One study from Capgemini found that less than 20% of enterprises have a high level of data readiness for AI adoption, and only 9% consider themselves fully prepared.
Having agents explain themselves could help instill trust, but “explainability is a cure for a disease that the AI agents have,” said Zisk. “What you really want is to find a prevention for that disease. The prevention is to make sure that they have the right data, the right prompts, [and] the right controls in the first place.”
Can agents break out of the 'productivity lens'?
With upwards of a trillion dollars being poured into the AI market broadly, the pressure is on to make the investment worth it.
Though cost forecasts vary, one estimate from Bank of America analysts found that agentic AI spending could reach $155 billion by 2030, and could deliver up to $1.9 trillion in value for enterprises as these systems take over more and more workflows.
However, stakeholders are getting antsy, said Dhar.
“Boards are putting pressure on CEOs to actually deliver return on investment,” said Dhar. “And as such, stuff rolls downhill. CEOs are putting a lot more pressure on their CFOs, COOs and CTOs to deliver actual returns.”
As it stands, companies are keen to use agents to cut as much spending as possible, said Dhar. Enterprises are looking at ways to do more with fewer people, with sights set solely on returns from productivity gains. Things like headcount, efficiency, and “bending the cost curve” are top of mind. “I think most boards and CEOs right now are measuring AI through a productivity lens,” he noted.
But the real value isn’t going to be derived from the bottom line, he said. Rather, it’s going to come from the top line: rethinking not just which tasks can be handed to agents in place of human workers, but how processes can be reimagined entirely, he said.
Actually getting there is going to take some trust from the people writing the checks, said Dhar. He pointed out that it's often easier to justify an investment through cost savings than through a bet on future revenue: cutting spending is tangible today, while new revenue requires faith in a payoff that hasn't yet arrived.




