Security has long been seen as one of AI’s biggest pitfalls, and the concerns have only grown as agents take on autonomous action. Anthropic is trying to change the narrative.
On Friday, the company introduced Claude Code Security, an AI-powered tool that searches codebases for security vulnerabilities that humans may have missed, and it's now available in a limited research preview.
Built into Claude Code, this tool weeds out security bugs and suggests software patches for human review.
- Rather than scanning for known patterns the way traditional code-scanning tools do, Claude Code Security, Anthropic claims, reasons about your code: it tries to understand how components interact and how data moves through your systems, “the way a human security researcher would.”
- Before any findings reach human eyes, they pass through a multi-stage process that filters out false positives. Everything that survives is posted to the Claude Code Security dashboard for human approval.
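Anthropic hasn't published the internals of that filtering step, but the workflow described above, candidate findings passing through successive filters before landing on a review dashboard, can be sketched in miniature. Everything here (the `Finding` structure, the stage functions, the thresholds) is hypothetical illustration, not Anthropic's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A candidate vulnerability report (hypothetical structure)."""
    description: str
    confidence: float   # model-assigned confidence, 0.0 to 1.0
    suggested_patch: str = ""

def drop_low_confidence(findings, threshold=0.8):
    """Stage 1: discard candidates the model itself is unsure about."""
    return [f for f in findings if f.confidence >= threshold]

def deduplicate(findings):
    """Stage 2: collapse repeated reports of the same issue."""
    seen, unique = set(), []
    for f in findings:
        if f.description not in seen:
            seen.add(f.description)
            unique.append(f)
    return unique

def triage(findings):
    """Run candidates through each filter stage in order.

    Whatever survives would be what gets queued on a dashboard
    for human approval, per the workflow described in the article.
    """
    for stage in (drop_low_confidence, deduplicate):
        findings = stage(findings)
    return findings
```

The point of the design is the ordering: cheap automated filters run first, so the expensive resource, a human reviewer's attention, is spent only on findings that cleared every earlier stage.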
Claude Code Security addresses a prevailing issue for strapped security teams: “Too many software vulnerabilities and not enough people to address them,” Anthropic said. Finding subtle pitfalls “requires skilled human researchers, who are dealing with ever-expanding backlogs.”
Anthropic’s tool adds another feature to its ever-popular suite of enterprise offerings. The feature comes as developers increasingly rely on Claude and other AI tools to do most, if not all, of their coding for them.
But the more we rely on these tools, the more we risk when they fail. For example, a 13-hour Amazon Web Services outage in December reportedly involved the company’s own Kiro agentic AI coding tool, according to the Financial Times. Amazon blamed human error for this outage, claiming that the staffer involved had broader access permissions than expected and that "the same issue could occur with any developer tool or manual action.”
Our Deeper View
Trying to do AI the safe and ethical way is par for the course for Anthropic, yet this tool addresses a particularly pressing need. Developers across enterprises big and small are using Claude Code more than ever, and that reliance is only bound to grow as companies look for ways to get real traction from their AI deployments. To trust the code they're writing, developers need to trust the codebases Claude is building on. Plus, a tool like this is bound to earn Anthropic more reputation points if it can effectively prevent security snafus before they happen.




