UN aims to shape global AI rules

By Nat Rubio-Licht
Sep 29, 2025, 12:00pm UTC


Every region wants to do AI differently.

On Thursday, the United Nations announced the Global Dialogue on AI Governance, an initiative aimed at building “safe, secure and trustworthy AI systems” grounded in human rights, oversight and international law.

The initiative is part of the Global Digital Compact, an agreement introduced by the UN last year focusing on AI governance. Some of its goals include enabling interoperability between governance regimes, encouraging “open innovation” and allowing every nation “a seat at the table of AI.”

Additionally, the UN announced the creation of the International Independent Scientific Panel on AI, a group of 40 experts that will provide an “early warning system” on AI’s risks.

“The question is whether we will govern this transformation together – or let it govern us,” said António Guterres, secretary-general of the UN, in his remarks.

The problem, however, is that three of the biggest contributors to the AI transformation – the U.S., the EU and China – take markedly different approaches to regulating it.

These approaches reflect the “fundamental differences” in governance that already exist among these regions, said Brenda Leong, director of the AI division at law firm ZwillGen.

“AI is going to show up in each of those contexts, in alignment with that context,” said Leong. “Every country is going to use AI as a tool and as political leverage.”

Given that the UN can’t enact or enforce laws itself, the closer it gets to prodding actual regulation of AI systems, “the less and less influence they’re going to have,” said Leong.

However, the UN can still influence areas where there’s “convergence” between regions, said Leong. For example, creating technological standards, setting definitions and promoting interoperability are things that can make “everybody’s lives better.”

Additionally, the UN can represent the interests of the regions that aren’t at the forefront of the AI race, she said, to “keep that gap from growing too big.”

While these three major markets have very different ideas about how AI models should be governed, the impact of that divergence on the market is still playing out. It’s possible that the EU’s large marketplace could push enterprises and model developers to adhere to its particularly stringent rules on matters like privacy and ethics. As Leong noted, “it’s easier to comply with one standard than many, and they’re the tightest.”