In this piece for PublicTechnology, the University of Birmingham’s Dr Martin Wählisch explores the challenges in getting countries with notably differing aims and approaches to work together on AI governance
Throughout history, transformative technologies have challenged humanity’s ability to balance innovation with responsibility.
From the printing press sparking conflicts over information control, to nuclear technology demanding global non-proliferation treaties, to the internet forcing us to grapple with digital sovereignty – each era has required new frameworks for cooperation.
Today, artificial intelligence stands as perhaps humanity’s most profound technological turning point. The recent Paris AI Action Summit has thrown into sharp relief not just the imperative for international cooperation, but the complexity of achieving it meaningfully.
When the world’s first and third most significant AI powers – the United States and United Kingdom – decline to sign a declaration on AI safety, it forces us to examine what real cooperation looks like. Easy narratives about isolation or reluctance to engage mask a deeper truth: effective international cooperation might not mean uniform regulation but rather finding ways for fundamentally different approaches to coexist and complement each other.
The summit revealed a striking divide between two philosophies of AI governance.
The EU-led coalition champions comprehensive regulation, while the US and UK advocate for more targeted approaches that preserve innovation. But we must be careful not to mistake more regulation for better regulation. The EU’s recent retreat from its ambitious AI liability directive suggests the limitations of overly prescriptive approaches.
This divide reflects deeper cultural and historical perspectives. The UK and US, shaped by decades of managing global security challenges, think beyond regional boundaries. Britain’s unique historical experience with empire has left it acutely aware that western European cultural and ethical frameworks are not universally applicable – a crucial insight for governing a technology that transcends borders.
Even traditional advocates of regulation are reconsidering their positions. The UK Labour Party’s unexpected scepticism toward blanket AI regulation reveals an important shift: after observing the economic and security costs of excessive rulemaking in the EU, pragmatism is trumping ideological instincts.
Meanwhile, tech corporations, traditionally resistant to oversight, appear increasingly aware that some form of regulation might be preferable to the current uncertainty.
However, concerning dynamics persist – such as OpenAI’s approach to national security, with teams “querying the models” as if consulting digital oracles, suggesting precisely the kind of corporate overreach that thoughtful international frameworks must prevent.
Threading the needle
South Korea offers an inspiring counterpoint, demonstrating how nations can pursue technological self-determination while engaging in international cooperation. Its development of indigenous AI capabilities, from foundation models to smart speakers, shows how innovation can thrive within cooperative frameworks without sacrificing national interests.
Our path forward requires threading a difficult needle.
Rather than pursuing uniform global standards, we need interoperable frameworks that respect different cultural and regulatory approaches while ensuring basic safety and ethical standards. This means creating mechanisms for cooperation that acknowledge legitimate differences in how nations balance innovation, security, and ethical considerations.
The alternative – a fragmented approach to AI governance – would indeed be devastating. But fragmentation does not come only from lack of cooperation; it can also arise from forcing artificial consensus where fundamental differences exist. The real challenge is building bridges between different approaches while maintaining the flexibility needed for innovation and cultural adaptation.
At this crossroads, the choice is not between complete regulatory harmony and isolation, but between thoughtful engagement that respects diversity and rigid uniformity that risks breeding resistance.
The future of AI governance depends not on how many countries sign a particular declaration, but on how effectively we can build frameworks that accommodate different approaches while ensuring that the technology benefits humanity.

About the author
Dr Martin Wählisch is the inaugural Associate Professor of Transformative Technologies, Innovation, and Global Affairs at the University of Birmingham. His work explores how emerging technologies and futures thinking can advance international relations, enhance peace processes, and drive sustainable development.
Find out more about the university’s Centre for Artificial Intelligence in Government.