AI in the International Arena

International Perspectives on Regulation

 

Early last week, the UN Security Council held its first-ever debate on AI, a landmark session that reflects the growing prominence of AI governance on the international stage. The meeting included presentations by AI experts, among them Jack Clark, co-founder of the major AI safety firm Anthropic.

 

Secretary-General António Guterres noted the rapid growth of platforms like ChatGPT — which, before Meta’s release of Twitter competitor Threads, held the record for the fastest platform to reach 100 million users — and how it underscores the unprecedented speed of AI’s development.

 

Guterres called for the creation of a new international watchdog, modeled after the International Atomic Energy Agency (IAEA), with authority over AI regulation, monitoring, and enforcement. He told reporters: “Alarm bells over the latest form of artificial intelligence – generative AI – are deafening. And they are loudest from the developers who designed it.”

 

However, while many diplomats generally endorsed the idea of a global governance framework, they disagreed on the exact mechanics.

 

The UK’s foreign minister, James Cleverly, who presided over the session, proposed four key pillars of governance: openness, responsibility, security, and resilience. The UK’s position is particularly interesting because it offers a glimpse into its approach to an issue the government has been keen to prioritize. In early June, as President Biden visited Prime Minister Rishi Sunak in London, the latter announced a major global AI summit scheduled for December, with few details currently available.

 

Russia’s ambassador expressed skepticism about the risks AI could pose, claiming that not enough was known to determine whether regulation was necessary. China’s ambassador resisted the establishment of a universal set of laws, emphasizing that international regulatory bodies should be flexible enough to let countries formulate their own rules.

 

Regulating an entity as amorphous and rapidly evolving as AI presents unique challenges. For starters, the landscape is characterized by a lack of understanding and agreement on the nature and magnitude of the risks AI could pose. The skepticism from Russia's ambassador during the UN session epitomizes this concern. Some see AI as the harbinger of the next industrial revolution, while others worry about the "deafening" alarm bells ringing from the sector's pioneers.

 

The international community’s decisive action in the wake of the nuclear threat is a tempting analogy, but the comparison has its limitations. The existential risk that nuclear weapons posed was clear and immediate, making consensus easier to achieve among nation-states. In contrast, the risks associated with AI — such as potential misuse, data privacy issues, or the existential question of super-intelligent AI — are more nebulous and don't translate into immediate, tangible consequences.

 

Another key difference is the actors involved. With nuclear weapons, nation-states were the principal players. In the world of AI, a myriad of actors, including tech giants, start-ups, and individual developers, play a pivotal role. This diffuse landscape complicates the process of drafting universally accepted regulations.

 

Moreover, while the International Atomic Energy Agency (IAEA) serves as a model for potential AI oversight, the circumstances surrounding its formation were distinct. The US played a crucial role in promoting nuclear regulation, driven by a strong consensus within its government and society about the necessity of such measures. Today, the same level of agreement does not exist within the US when it comes to AI regulation. The current divided Congress and the Biden administration's ongoing deliberations on AI policy underscore this lack of a unified stance.

 

The question then arises: If the world's leading AI innovator is still in a state of flux about how to handle AI, how can we expect an international consensus to materialize? It's a difficult question, and the answer is far from clear.

 

While the recent UN Security Council debate on AI is undoubtedly a landmark moment, the process it begins is bound to be complex, iterative, and most likely contentious. Still, the discussion has started, and that is at least a step in the right direction.
