Anthropic’s Claude Code flipped the software world on its head. But the ability to generate code from thin air may be impacting coders’ ability to develop it the old-fashioned way.
On Thursday, Anthropic published research on the “cognitive offloading” that its AI-powered tools enable. Though these tools can speed up tasks by as much as 80%, Anthropic found that reliance on AI-powered coding tools led to a “statistically significant decrease in mastery.”
The company tested 52 software engineers, most of them junior, on coding concepts they had used just minutes before being quizzed. The assessment focused heavily on debugging, code reading and conceptual problems.
The study found:
- Though the group that used AI completed the quiz two minutes faster, they scored 17% lower than the group that coded by hand.
- Those who used AI only to slightly speed up their tasks, however, didn’t score significantly differently from those who coded by hand.
Anthropic said these scores weren’t determined simply by whether AI was used, but rather by how it was leveraged. While those who used AI to unquestioningly generate outputs were less likely to actually learn anything, participants who used the tech to build comprehension, such as by asking follow-up questions or requesting explanations, showed stronger skills.
“Incorporating AI aggressively into the workplace, particularly with respect to software engineering, comes with trade-offs,” Anthropic said in its study. “The findings highlight that not all AI-reliance is the same: the way we interact with AI while trying to be efficient affects how much we learn.”