Nvidia CEO argues speed is key to safer AI

By Nat Rubio-Licht

Jan 7, 2026, 12:30pm UTC

Safety advocates have long been urging AI firms to tap the brakes. Nvidia’s Jensen Huang thinks they have the wrong idea.

In a media briefing at CES on Monday, the CEO of the world's most valuable company advocated for unified US federal regulation that enables rapid progress, arguing that slowing the pace of AI innovation wouldn’t improve the tech’s safety. Rather, Huang said, safer AI will come from more development, claiming “innovation speed and safety goes hand in hand.”

Huang said that the first step in tech innovation is making a product “perform as expected,” such as limiting hallucination and grounding outputs in truth and research. He also compared stymied development to driving a 50-year-old car or flying a 70-year-old plane: “I just don't think this is safe,” said Huang. “It was only a few years ago some people said, ‘let's freeze AI,’ then the first version of ChatGPT would be all we have. And how is that a safer AI?”

Huang’s perspective stands in stark contrast to the common viewpoint among AI ethics and safety advocates: that we shouldn’t blindly forge ahead with technology that could upend humanity before we have a full picture of what it’s capable of.

  • Several of AI’s most prominent voices have called for model firms to slow down their development to assess risks. Two of AI’s so-called “godfathers,” Yoshua Bengio and Geoffrey Hinton, have warned of the tech’s potential existential threat in recent months.
  • And in late October, the Future of Life Institute advocated for a full moratorium on the push for superintelligence, releasing a petition that has garnered more than 132,000 signatures to date.
  • Some of the signatories include Hinton and Bengio; a number of employees from OpenAI, Anthropic and Google DeepMind; and major artists like Joseph Gordon-Levitt, Kate Bush and Grimes.

But Huang isn’t alone in his desire for free rein. Several of AI’s biggest proponents (and beneficiaries) hold the same view, with the likes of OpenAI’s Sam Altman and Greg Brockman, a16z’s Marc Andreessen, and Palantir’s Joe Lonsdale all joining forces in August to launch a pro-AI super PAC called Leading the Future to back candidates calling for unified regulation.

Our Deeper View

Huang obviously has a bias toward simple, loose AI regulations. The bigger and more powerful AI models get, the more compute and chips they eat up, and the more money his multitrillion-dollar company makes. Advocating for slowing down would essentially be talking himself out of a payday. But new capabilities are emerging faster than ever as these firms charge toward AGI, and we may not truly know how to reckon with them, something even OpenAI may be acknowledging with its hiring of a “head of preparedness.”