Tilman Räuker

Will the US Government Control the First AGI?—Finding Base Rates

Our 2024 Research Fellow Luise Woehlke, together with her mentor Christian Ruhl, researched how involved the US government may become in the development of AGI.

Overview  

  • This research is exploratory and doesn’t aim to provide precise probabilities. This piece is most useful for other researchers, rather than people interested in bottom-line conclusions.

  • In order to forecast whether the US government (USG) will control the first AGI, it is useful to learn from historical trends in who controls important innovations. This could take the shape of a base rate, which one can then improve upon using considerations specific to the case of AGI.

  • I define ‘control’ as, roughly, the US government’s ability to make at least 50% of important development and deployment decisions.

  • Learning from historical cases is difficult because their similarity to AGI and their historical circumstances vary widely. In this piece, I explore how to do this well by constructing a base rate and exploring four factors that might suggest significantly changing this base rate:

  1. Whether the base rate of USG-controlled innovations has declined over time

  2. Whether the US share of global R&D activity has declined over time (as another proxy for whether USG-controlled innovation has declined)

  3. Whether the base rate of USG-controlled innovations is significantly different in war/cold war than in peacetime

  4. Whether the base rate of USG-controlled innovations is significantly different for billion-dollar technologies than less expensive technologies

  • I find that factors 3 and 4 are most important to inform forecasts on USG control over AGI.

 

Findings

  • I estimate a naive base rate of 28% for USG control over important innovations.

    • I use historical case data, collected by Bill Anderson-Samways, on important innovations and who controlled them. The data covers 27.5 important innovations, such as the computer and the first synthetic virus, spanning the years 1886 to 2023.

    • My data has a limited sample size and some flaws, so my use of it should be read as exploratory, approximate guessing. It is more useful as input into further research than as a basis for immediate conclusions.

  • (Factor 1) I could not find a clear decline in USG-controlled innovation over time.

    • This question arises because the huge USG investments in the Manhattan and Apollo projects seem to lack an analog today. But, plotting my limited case data, I could not see a clear downward trend in USG-controlled innovation over time.

  • (Factor 2) I only found an uncertain, mild decline in the US share of global R&D activity.

    • I further investigated claims of declining USG-controlled innovation by examining whether the US share of global R&D activity has declined (as a proxy).

    • Using R&D funding data starting in 1960, I found only a fluctuating, mild decline in the US share (perhaps 5-10 percentage points), which more and better data could confirm.

  • (Factor 3) USG-controlled innovation may be 1.8x more common in war/cold war than in peacetime.

    • In my case data, the fraction of USG-controlled innovations during war/cold war is higher than in peacetime. I estimate base rates of 35% and 19% respectively.

    • To illustrate what this means for AGI, I estimate the base rates of USG control in example cold war and peaceful scenarios at 30% and 20% respectively.

  • (Factor 4) Billion-dollar innovation projects may be 1.6x more commonly USG-controlled than less expensive innovations.

    • GPT-4 cost $780 million in hardware, and models continue to scale. AGI might be more similar to other high-cost inventions than to, say, low-cost academic research.

    • I estimate USG control base rates of 38% for billion-dollar innovations and 24% for less expensive innovations. I counted an innovation as high-cost if its cost, as a fraction of GDP at the time, would exceed a billion dollars when the same fraction is applied to today’s GDP.

    • If one puts three times more weight on high-cost cases, the overall base rate of USG control over innovations becomes 31% (see the sketch after this list).

  • An Aside: USG-Controlled Innovations May Be Completed by a Contractor in Only 22% of Cases.

    • Contractors are often involved in USG-controlled innovations. This is relevant since it may lead to a different governance structure with the USG having less unilateral control.

    • But I estimated that only 22% of USG-controlled innovations were brought to their final completed form by a contractor, meaning that most of the time, the USG is the only actor immediately capable of producing the final technology.
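
To make the re-weighting in the last finding concrete, here is a minimal Python sketch of the calculation. The split of the 27.5 cases into high-cost and less expensive cases is an assumption back-solved from the 38%, 24%, and 28% figures above, not Anderson-Samways's actual case list, and small rounding differences explain why the result lands near, rather than exactly at, 31%.

```python
# Sketch: how a 3x weight on high-cost cases shifts the overall base rate.
# The case split below is back-solved from the reported rates (38% high-cost,
# 24% less expensive, 28% overall); it is an assumption, not the real dataset.

total_cases = 27.5
rate_high, rate_low = 0.38, 0.24
rate_overall = 0.28

# Back-solve the high-cost share h from: rate_high*h + rate_low*(1 - h) = rate_overall
h = (rate_overall - rate_low) / (rate_high - rate_low)  # ~0.29
n_high = h * total_cases        # ~7.9 high-cost cases
n_low = total_cases - n_high    # ~19.6 less expensive cases

def weighted_base_rate(weight_high: float) -> float:
    """Overall USG-control base rate when high-cost cases receive extra weight."""
    controlled = weight_high * n_high * rate_high + n_low * rate_low
    total = weight_high * n_high + n_low
    return controlled / total

print(f"Unweighted base rate: {weighted_base_rate(1):.0%}")   # ~28%
print(f"3x weight on high-cost: {weighted_base_rate(3):.0%}")  # ~31-32%
```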


Read More
Tilman Räuker

Sharing the AI Windfall: A Strategic Approach to International Benefit-Sharing

Michel Justen, mentored by Matthew van der Merwe and Max Dalton, wrote a post on international benefit-sharing.

Summary

If AI progress continues on its current trajectory, the developers of advanced AI systems—and the governments who house those developers—will accrue tremendous wealth and technological power.

In this post, I consider how and why the US government[1] may want to internationally share some benefits accrued from advanced AI—like financial benefits or monitored access to private cutting-edge AI models. Building on prior work that discusses international benefit-sharing primarily from a global welfare or equality lens, I examine how strategic benefit-sharing could unlock international agreements that help all agreeing states and bolster global security.

Two “use cases” for strategic benefit-sharing in international AI governance:

  1. Incentivizing states to join a coalition on safe AI development

  2. Securing the US’ lead in advanced AI development to allow for more safety work

I also highlight an important, albeit fuzzy, distinction between benefit-sharing and power-sharing:

  • Benefit-sharing: Sharing AI-derived benefits that don't significantly alter power dynamics between the recipient and the provider.  

  • Power-sharing: Sharing AI-derived benefits that significantly empower the recipient actor, thereby changing the relative power between the provider and recipient.

I identify four main clusters of benefits, but these categories overlap and some benefits don’t fit neatly into any category: Financial and resource-based benefits; Frontier AI benefits; National security benefits; and ‘Seats at the table’.

I conclude with two key considerations with respect to benefit-sharing:

  • Credibility will be a key challenge for benefit-sharing and power-sharing agreements. (See more)

  • Benefit sharing strategies should account for potential risks. (See more)

Our 2024 Research Fellow Michel Justen was mentored by Matthew van der Merwe and Max Dalton.

Read More
Tobias Häberli

The Role of AI Safety Institutes in Contributing to International Standards for Frontier AI Safety

Our 2024 Research Fellow Kristina Fort has worked with her mentor Oliver Guest on the role of AI Safety Institutes in shaping international standards for frontier AI safety, proposing models to enhance their involvement in global standardization processes.

Abstract

International standards are crucial for ensuring that frontier AI systems are developed and deployed safely around the world. Since AI Safety Institutes (AISIs) possess in-house technical expertise, a mandate for international engagement, and convening power in their national AI ecosystems while being government institutions, we argue that they are particularly well-positioned to contribute to the international standard-setting processes for AI safety.

In this paper, we propose and evaluate three models for AISIs’ involvement: Model 1 – Seoul Declaration Signatories, Model 2 – US (+ other Seoul Declaration Signatories) and China, and Model 3 – Globally Inclusive. These models are not mutually exclusive; rather, leveraging their diverse strengths, they offer a multi-track solution in which the central role of AISIs guarantees coherence among the different tracks and consistency in their AI safety focus.


Read More
Tilman Räuker

On Labs and Fabs: Mapping How Alliances, Acquisitions, and Antitrust are Shaping the Frontier AI Industry

Our 2024 Research Fellow Tomás Aguirre has worked with his mentor to study the AI supply chain, focusing on strategic partnerships and governance interventions for safe AI development.

As frontier AI models advance, policy proposals for safe AI development are gaining increasing attention from researchers and policymakers. This paper explores the current integration in the AI supply chain, focusing on vertical relationships and strategic partnerships among AI labs, cloud providers, chip manufacturers, and lithography companies. It aims to lay the groundwork for a deeper understanding of the implications of various governance interventions, including antitrust measures. The study makes two main contributions. First, we profile 25 leading companies in the AI supply chain, analyzing 300 relationships and noting 80 significant mergers and acquisitions along with 40 antitrust cases. Second, we discuss potential market definitions and the drivers of integration based on the observed trends. The analysis reveals predominant horizontal integration through natural growth rather than acquisitions, and notable trends of backward vertical integration in the semiconductor supply chain. Strategic partnerships are also significant downstream, especially between AI companies and cloud providers, with large tech companies often pursuing conglomerate integration by acquiring specialized AI startups or forming alliances with frontier AI labs. To further understand the strategic partnerships in the industry, we provide three brief case studies featuring companies like OpenAI and Nvidia. We conclude by posing open research questions on market dynamics and possible governance interventions, such as licensing and safety audits.


Read More
Tilman Räuker

How to design an AI ethics board

Organizations that develop and deploy artificial intelligence (AI) systems need to take measures to reduce the associated risks. In this paper, we examine how AI companies could design an AI ethics board in a way that reduces risks from AI. We identify five high-level design choices: (1) What responsibilities should the board have? (2) What should its legal structure be? (3) Who should sit on the board? (4) How should it make decisions and should its decisions be binding? (5) What resources does it need? We break down each of these questions into more specific sub-questions, list options, and discuss how different design choices affect the board's ability to reduce risks from AI. Several failures have shown that designing an AI ethics board can be challenging. This paper provides a toolbox that can help AI companies to overcome these challenges.

Read More