New pact pushes back on AI replacement race

Mar 5, 2026

6:33am UTC


AI ethicists have put out another plea for the world to pay attention to the tech’s risks.

On Wednesday, a coalition of leaders across industries announced the “Pro-Human AI Declaration,” united by a broad, simple proclamation that AI “should serve humanity, not the reverse.”

“This race to replace poses risks to societal stability, national security, economic prosperity, civil liberties, privacy, and democratic governance,” the statement reads. “It also imperils the human experiences of childhood and family, faith, and community.”

The declaration, which counts names like Yoshua Bengio, Steve Bannon, Susan Rice, Sir Richard Branson and Joseph Gordon-Levitt among its endorsers, proposes five central tenets for creating trustworthy and controllable AI:

  • Keeping humans in charge, which calls for meaningful human controls and override capabilities, an AI “off-switch,” independent oversight and an end to the superintelligence race
  • Avoiding power concentration, such as preventing AI monopolies, ensuring democratic oversight of AI’s major impacts on work, society and civic life, and sharing AI’s benefits broadly
  • Protecting the human experience, which proposes that AI should not be allowed to exploit or stunt children’s growth, should not be addictive and should not “supplant” foundational relationships
  • Preserving human agency and liberty, suggesting that AI should not be granted personhood, that humans should retain rights to their data and privacy, and that AI should not “enfeeble” users
  • Ensuring responsibility and accountability for AI companies, advising that AI should not create a “liability shield” for companies, developers or users, and that all model failures should be made transparent

This is not the first time tech ethicists have implored the industry to pay attention to the dangers that lie ahead on our current AI trajectory. In October, the Future of Life Institute put out a petition calling for a moratorium on developing superintelligence, claiming that the tech harbors “extreme large-scale risks.” The petition garnered more than 135,000 signatures, and many of its signatories also endorsed the Pro-Human AI Declaration.

Our Deeper View

AI is moving so fast that it often breaks out of restraints faster than we can build them. Getting people to pay attention to the risks the tech presents is a huge challenge; the fact is that most people won’t pay attention to responsible AI until AI actually creates a major crisis. So I ask: What will it take? How many wrongful death lawsuits against LLM providers will have to pile up? How many people need to lose their jobs? How many self-driving cars need to crash? The ethos of innovation has long been to move fast and break things, so what will it have to break to get people to act?