Mustafa Suleyman and Ian Bremmer's compelling article in Foreign Affairs underscores the critical importance of addressing the potential threats arising from the rapid advancement of Artificial Intelligence (AI), alongside its immense benefits. The article conveys a sense of urgency, warning that if governments fail to catch up soon, they may never catch up at all.
The authors shed light on the startling possibility that Artificial General Intelligence (AGI) could become a reality in as little as five years. This is when "brain-scale" models, with over 100 trillion parameters (comparable to the number of synapses in the human brain), are projected to become feasible.
The article emphasises that whether the year 2035 witnesses the positive breakthroughs brought about by AI or the disruptive challenges it presents hinges on the actions policymakers take today. To guide these actions, the authors offer a series of insightful recommendations. These thought-provoking suggestions point us in the right direction and merit thorough consideration and discussion. Here, we'll delve into a few key takeaways.
Too powerful to pause?
In a world where every nation, corporation, and individual seeks to harness the power of AI, Suleyman and Bremmer bring to light its impact on global power dynamics. They suggest that, as far back as March 2023, it was already too powerful to pause. Yet a stark reality emerges: in the coming years, it will become imperative for governments to possess the capability to halt and steer the course of AI development.
An underlying message echoing throughout their article is a pressing concern: the private sector currently maintains a lead over governments in the AI race. The urgency lies in reshaping this narrative, ensuring that the private sector doesn't unilaterally dictate the trajectory of this transformative technology. As one commentator aptly puts it, "We need to switch that around and not let the private sector dictate the growth of the technology."
To execute this shift in control, the responsibility squarely falls on governments. It will necessitate governments prioritising AI safety over immediate economic gains—a monumental task, particularly given the prevailing power balance between government and the tech industry. The authors highlight that big tech companies wield significant power and influence, often shaping regulations to suit their interests. Thus, aligning economic priorities with the imperative of AI safety presents a formidable challenge.
Characteristics an AI Global Governance regime should have
Suleyman and Bremmer support the view that the nature of AI is such that an effective governance regime has no hope unless it is global. They further argue that its significance is such that it merits new dedicated institutions, rather than seeking to adapt existing institutions.
They make the point that the governance of AI cannot succeed if it mirrors traditional approaches: AI Global Governance (AIGG), they suggest, will need to be “precautionary, agile, inclusive, impermeable and targeted”. Several authors have sought to describe the features required of a successful AI Governance regime:
"Precautionary": a term often missing from major sets of AI principles, such as the OECD Principles and the UNESCO recommendation. It typically emerges in relation to Artificial General Intelligence (AGI) and its potential existential risks. The authors rightly argue that developers and owners should bear the responsibility of proving AI safety above a reasonable threshold, rather than governments solely dealing with issues after the fact, even if this raises entry barriers favouring Big Tech. This principle can be underpinned by adding "anticipatory": the need to anticipate swift advances in AI technologies, particularly towards super-intelligence, and to take proactive measures to navigate the evolving AI landscape safely and ethically.
"Agile": underscores the importance of adaptability and responsiveness in the face of AI's continuous and rapid evolution. This principle enjoys widespread acceptance given AI's ever-changing nature, making agility in governance a top priority. As AI remains an emerging field subject to frequent and swift transformation, governance frameworks must not only adapt to technological change but also critically review and update their own policy principles. Extending agility to include being "reflexive" emphasises governance structures that proactively evaluate and adjust their policies while responding swiftly to external events, ensuring their continued relevance and effectiveness in addressing AI's multifaceted challenges.
"Inclusive": emphasises the importance of engaging the developing world and authoritarian countries. This principle, widely recognised as vital, ensures that AI governance is a global endeavour with diverse participation. However, the nature of this inclusivity, especially the involvement of private technology companies, poses challenges such as regulatory capture. Suleyman and Bremmer advocate participation by a broad range of stakeholders, including tech companies, but this could grant them undue influence. Our response underscores the need not only for inclusivity but also for fairness in AI governance, ensuring that all countries and individuals have access to AI advancements. Historically, international bodies have sometimes been skewed in favour of larger economies, exemplifying the need for a more equitable approach. Addressing challenges beyond voting power, such as levelling up AI expertise across all nations, will be essential to achieving this fairness.
“Impermeable”: not common in the current literature, though the concept of “comprehensiveness” is not dissimilar. Impermeability is essential when addressing advanced intelligence. The authors contrast climate change mitigation, where success is determined by the sum of all independent efforts, with AI, where “safety is determined by the lowest common denominator: a single breakout algorithm could cause untold damage.” To achieve the desired level of impermeability, they propose that the scope of the regime be both global and span the entire supply chain.
“Targeted”: not a term often used by other AI regime designers, but the concept appears sound, describing the wide variety of risks and the need for specific regulation targeted at each different type of risk: “a light regulatory touch and voluntary guidance will work in some cases; in others, governments will need to strictly enforce compliance”. As the authors suggest, such governance needs to be well informed.
We embrace these principles and believe they offer a valuable foundation for guiding AI governance. We previously articulated a similar set of principles in our own work, which can be explored further in Section 2 here.
Institutional implications
In recent months, there have been various proposed institutional solutions inspired by established models such as the International Atomic Energy Agency (IAEA), CERN, the International Civil Aviation Organization (ICAO), and the Intergovernmental Panel on Climate Change (IPCC). In line with the principles they put forth, Suleyman and Bremmer advocate a "techno-prudential" approach, comprising three AI governance regimes: (1) a fact-finding advisory body, modelled on the IPCC, designed to provide critical insights for multilateral and multi-stakeholder negotiations on AI; (2) a mechanism to manage tensions among major AI powers and mitigate the proliferation of advanced AI systems, with a particular focus on the complex relationship between the United States and China; and (3) a technocratic entity dedicated to addressing AI risks: a “Geotechnology Stability Board” akin to the Financial Stability Board, focused on "maintaining geopolitical stability amid rapid AI-driven change."
These three regimes all have merit and deserve serious consideration.
Though these regimes are sound, we believe Russia is a significant actor in the realm of AI governance that must not be overlooked. While it may not boast prominent private companies on the scale of China or the United States, Russia nevertheless possesses formidable technical capabilities and has demonstrated a willingness to employ technology for political purposes. Its role as an important player in the technology sphere should be highlighted and addressed within AI governance discussions. This thinking can be further extended to outliers such as North Korea.
Potential elements of an AI Governance regime
Suleyman and Bremmer's proposal offers some key potential elements of an AI governance regime and contributes to the crucial dialogue regarding the overall regime's architecture. That architecture will need to include an effective approach to regulation, such as the recent proposal by Trager et al for a jurisdictional certification approach to the international governance of civilian AI.
It will also need to address a number of other issues including:
1. Collaboration and International Engagement
The different components of a regime complex will need to be complementary and to collaborate effectively with one another, something that is sadly lacking at present in key areas of the United Nations.
2. Ensuring Independence
Independence is a cornerstone of effective governance. The regimes, and those who staff them, must be open to valid inputs but free from improper influence, whether from a government or a corporation. To prevent undue influence and ensure impartiality, the next phase should provide a detailed plan for safeguarding the independence of these and other proposed bodies, securing their credibility and trustworthiness.
3. Leveraging Existing Global Bodies
Creating robust AI governance will involve not only establishing new institutions but also leveraging the capabilities of existing UN agencies and other established international organisations.
4. The Imperative of Enforcement
Enforcement mechanisms are at the core of any functional governance framework, but they are an area of weakness in too many international treaties. While legal enforcement is one avenue, it is essential to consider other means, including internal enforcement mechanisms. The establishment of a World Federal Government, although a distant prospect, should be explored further, especially in the event of a global crisis.
5. An Overarching Convention
The different regimes need an all-encompassing convention, a perhaps surprising omission from the Suleyman and Bremmer proposals. They call for governance with no gaps, yet it is not entirely clear how this would be achieved. The UN Convention on the Law of the Sea (UNCLOS) has been cited as a relevant precedent: an umbrella setting out the main objectives, able to establish the necessary commitments and the institutions required to deliver those objectives. The UNFCCC, with a similar structure, has also been suggested as an appropriate model.
Seizing the Opportunity
The November AI Safety Summit at Bletchley Park has as its end goal a world where it is possible to realise the technology’s huge benefits safely and securely. It provides a unique opportunity to agree on a pathway to an effective AI regime complex with the attributes identified by Suleyman and Bremmer - and potentially including their institutional proposals. It will not be easy; it will require an in-depth understanding of the challenges and a real commitment to address them now. This opportunity must be seized.
Whitfield, R. (2021) AI Global Governance – what are we aiming for. One World Trust, pp 2-6.
Wallach, W., Marchant, G. (2019) Toward the Agile and Comprehensive International Governance of AI and Robotics. Proceedings of the IEEE, Vol. 107, No. 3, March 2019. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8662741
 Miailhe, N. (2018) ‘AI & Global Governance: Why We Need an Intergovernmental Panel for Artificial Intelligence’ United Nations University Centre for Policy Research. 20 December.
Whitfield, R. et al (2020) Effective, Timely and Global – The Urgent Need for Good Global Governance of AI. https://www.wfm-igp.org/publication/effective-timely-and-global-the-urgent-need-for-good-global-governance-of-ai/
arXiv:2308.15514v1. https://doi.org/10.48550/arXiv.2308.15514
Nemitz, P. (2021) Fundamentals of International Law: AI and Digital Remaking the World – Toward an Age of Enlightenment. Boston Global Forum. https://bostonglobalforum.org/publications/remaking-the-world-the-age-of-global-enlightenment-2/
Whitfield, R. et al (2020) ibid