Companies are widely using AI to trudge through mundane coding tasks. The problem is that developers often don’t trust the outputs.
A survey of 1,100 developers from code review firm Sonar found that, while AI accounts for 42% of all committed code, 96% of developers don’t fully trust AI to generate code that’s functionally correct. Despite that distrust, developers are pushing the code forward anyway: only 48% reported that they check their AI-assisted code before committing it to projects, according to the report.
AI-powered coding has emerged as a major point of focus for investors:
- In December, Swedish AI coding startup Lovable raised $330 million, bumping its valuation to $6.6 billion.
- In November, AI coding firm Cursor raised a $2.3 billion series D round, bringing its valuation to $29.3 billion.
And it makes sense why investors are throwing money at it: These coding tools are massively popular. Lovable hit $200 million in annual recurring revenue in November, while Cursor claims to have surpassed $1 billion in revenue and has drawn in more than one million users. And while Anthropic doesn’t reveal user numbers, the company’s ever-popular Claude Code tool reportedly attracted 115,000 developers and processed 195 million lines of code in just one week.
These tools stand to make developers far more productive: one internal study from Anthropic found that developers who used Claude Code saw a 50% gain in productivity. But like any AI tool, coding assistants can mess up. Those productivity gains will only translate into returns if developers can trust the outputs, or have reliable systems in place to ensure that bugs don’t slip through the cracks.