
Whether Artificial General Intelligence (AGI) is safe will depend on the series of actions carried out by relevant human actors that eventually results in more or less safe AGI. Artificial Intelligence (AI) governance refers to all the actions along that series that help humanity best navigate its transition towards a world with AGI, excluding the actions directly aimed at technically building safe AI systems.[1]

This post compiles and briefly explains arguments for and against the relevance to AI governance of actors based in the European Union (EU), and for and against the value of pursuing work in EU policymaking. It builds on previous attempts to help EAs determine whether to work on AI governance in the EU.[2][3] In doing so, I hope to help fill a gap in the EA community's collective assessment of the importance of the EU, and to initiate a discussion on the various arguments with individuals exploring career, research, or funding opportunities.

The arguments listed have affected career and funding decisions in the past. Note that several of these arguments apply to the field of AI governance in general, but they are included because they are especially salient for EU AI governance. Note also that this post makes no attempt to weigh the arguments against one another, even though I do in fact disagree with several of them (one argument in favor and two against). As a disclaimer: on balance, I believe that the EU is relevant for AGI governance, so please keep me on my toes. I have tried hard to strengthen the arguments against, and the arguments with which I disagree, to provide a solid basis for discussion and reflection.

You can use the table of contents on the left to navigate to the arguments you want to read about (on desktop browsers, not mobile). I welcome additional input! You can help by answering these questions:

  • What are additional arguments (for or against) that have not been included in this list?
  • If you think I don’t do justice to an argument you are familiar with, how would you explain it differently? What are the nuances I miss?
  • Which of the arguments do you think are the strongest?

 

Notes for those unfamiliar with the EU: “Member States” refers to the nations that are part of the EU. The EU has 27 Member States, around 450 million citizens, and accounts for 16% of global GDP in purchasing power standards (PPS). Neither the UK nor Switzerland is part of the EU. For comparison, the US has 332 million citizens and accounts for 16.3% of global GDP (PPS), and China, with 1.423 billion citizens, accounts for 16.4% of global GDP (PPS).[4]

I thank Risto Uuk, Laura Green, Andrea Miotti, Mathias Bonde and Daniel Schiff for their valuable feedback on this post, as well as everyone who has helped me map or better understand these arguments throughout the years. Disclaimer: the post's original title was "Is the European Union relevant for AGI Governance?", but as rightly pointed out in this comment, the post is more accurately described as being about the value of pursuing work in EU governance if you are concerned about AGI. Besides the title and the introduction, the text has remained identical.

Arguments in favor

1. The Brussels Effect

The Brussels Effect refers to EU legislative decisions having impact outside of the EU. There are two types of Brussels Effect: the de facto effect (i.e. market players adapting to the EU’s decisions in non-EU markets even when it is not legally required) and the de jure effect (i.e. foreign regulators adapting their own legislation to match the EU’s decisions). These effects arise through a combination of market forces and the EU’s use of administrative and legal mechanisms with extraterritorial reach. Whether a Brussels Effect also kicks in for AI governance has been explored here and in an upcoming paper by FHI/GovAI. The preliminary answer seems to be yes.

How does it affect the EU’s relevance? 

The Brussels Effect would multiply the impact of EU decisions on regulatory and market players worldwide. This would make EU AI policymakers’ actions more important for safe AGI development, and therefore would imply the EU is more relevant to AGI governance. 


2. The EU is taking AI governance decisions now

The EU institutions (European Commission, Council of the EU, and European Parliament) are developing their landmark AI legislation now. After various forms of public and stakeholder consultation over the past two years, the European Commission submitted a legislative proposal to regulate artificial intelligence in April 2021. As of January 1st, 2022, the latest compromise put forward by the Council of the EU includes, for the first time, the legal notion (to be further defined) of general purpose AI systems.

How does it affect the EU’s relevance? 

The debate underway about this legislation – likely to last until 2023 – will see increased research and advocacy around certain governance concepts and their prospects for achieving certain outcomes, affecting how people think about these concepts. For example, the notions of “accuracy”, “robustness” and “general purpose” in AI systems will have to be defined legally. For better or worse, because of path dependency in policymaking and the inefficiently long lifecycle of legal acts, these concepts could shape the way industry ensures “accuracy” and “robustness” in AI for the next 15-30 years. This suggests that now might be a rare opportunity to influence AI safety governance before the development of AGI.


3. The EU is part of many AI development and governance collaborations

The EU has historically relied on and benefited from multilateralism and transnational partnerships for shaping its international political and economic environment. The EU or its Member States lead or co-lead the EU-US Trade & Technology Council, the OECD AI Observatory and the Global Partnership on AI. The EU is also part of the G7 and G20, and cooperates closely with NATO. EU-China relations are better than US-China relations. In the private sector, there are multiple industry partnerships through which the EU deploys its soft power (GAIA-X, Alliance on Processors & Semiconductor Technologies, InTouchAI.eu, Alliance for Industrial Data, Edge and Cloud) to affect AI governance and standardization.

How does it affect the EU’s relevance? 

The EU aims to impose – through soft power – its approach to AI governance on the rest of the world. If that approach turns out to be sound enough to help reduce existential and catastrophic risk from AGI, it is important to foster its diffusion. If the approach is not sound, it is important to either correct it or help prevent its successful diffusion. While the AI safety governance research field has not yet reached a scientific consensus on what is sound or not, EA-aligned individuals in positions of influence within the EU AI governance field would, over time, become better positioned to make the judgment calls needed to improve AGI governance. Note that this is distinct from the Brussels Effect, which spreads legislation; the present argument instead relies on spreading EU standards, norms, trade agreement clauses, ethical board reviews for research projects, etc. through these partnerships.


4. The EU market and political environment favor AGI safety

The argument is that the EU is more likely to aim for AGI safety than the US and China. The reasoning is based on three claims. First, EU consumers tend to demand more trustworthy goods (in absolute terms) than consumers in other regions, for economic or historical reasons, which incentivizes global industry to supply such trustworthiness. Second, the EU has enshrined the precautionary principle in its constitution (i.e. its Treaties). Third, as it lags behind in the AI race, the EU has an incentive to slow down the leading players by setting high trade barriers, such as trustworthy-AI requirements and legislation.

How does it affect the EU’s relevance? 

If this argument holds, the EU might be the jurisdiction where it is easiest to promote AI safety research and development through AI governance, while still influencing the global supply of AI technologies.


5. Direct influence from inside relevant AI labs is limited

Individuals who consider entering the field of AI governance sometimes believe (i) that the EA community has identified the relevant AI labs (i.e. labs that might develop AGI) and (ii) that it can directly influence them towards a safe outcome. The present argument questions the EA community’s capacity for identification because of the multiplication of AGI projects, the possibly diffuse nature of superintelligent-systems breakthroughs (Comprehensive AI Services as General Intelligence), and the confidential nature of relevant projects due to trade secrecy or national security. The argument also questions the EA community’s capacity for influence because of past developments and the current situation. For example, DeepMind has recently failed in its bid for independence from Google. Key safety personnel have left OpenAI for unknown reasons after OpenAI made a deal with Microsoft. There is an AI race underway in the private sector, and there is also an AI race underway between the US and China (even though the EA community has considered both types of AI race as undermining AI safety at least since Nick Bostrom’s Superintelligence). Finally, there are dozens of AGI projects identified by GCRI with no direct EA contacts. The present argument is that influencing developers directly from within the lab is not as effective as previously thought, and therefore that there is a greater need for policy-based AGI governance to influence these developers.

How does it affect the EU’s relevance? 

This argument, if true, would increase the relevance of AI governance activities outside the relevant AI labs in general, including EU AI governance. AI policy might be the only effective way to shape the culture and normative institutions of the industry and field as a whole. This in turn might be the only way to redirect the market and political forces at play in AGI labs so as to ensure AGI safety.


6. Growing the political capital of AGI-concerned people

Working on AI governance in the EU while it is actively regulating AI is a rare opportunity to increase one’s political capital in this field (through increased credibility, network and expertise). By the time the EA community or the scientific community knows the best way to govern the development of safe AGI, AGI-concerned individuals can spend this accumulated capital on influencing decision-makers or on becoming decision-makers themselves. The argument also relies on the observation that it takes several years to build the network, trust and supporters needed to gain positions of influence. Moreover, every decision taken in AI governance is an opportunity to gain some political capital. If AGI-concerned individuals are not in the room when these decisions are made, they not only fail to increase their own political capital but also enable individuals not concerned about AGI to accumulate it. By the time AGI is about to happen, decision-makers and the relevant advisors to those decision-makers will therefore more likely be individuals who are not concerned about AGI (in the same way that today’s influential people in AI governance are likely to be yesterday’s people who worked on privacy, cybersecurity, or digitalization policy files).

How does it affect the EU’s relevance? 

From a political economy perspective, missing the opportunity to earn political capital today on AI-related decisions makes it more difficult for EAs and longtermists to influence humanity’s transition towards AGI tomorrow.


Other arguments in favor 

  • Exploration value – AI governance is a new field and the EA community does not know what’s most promising, hence more EAs should explore EU AI governance to learn whether it is promising and what types of policies are feasible to recommend there or elsewhere.
  • Personal fit – EU citizens have the best personal fit to work on this, and it is difficult for them to become American/British/Chinese or to gain access to influential positions in the US/UK/China.
  • Low-regret career pathway – due to the way the EU is organised, many roles in EU governance require juggling multiple legislative files, either in parallel (e.g. parliamentary assistants and diplomats) or sequentially (Commission staff and lobbyists). The EU is relevant in development aid (the biggest development aid budget in the world), research & innovation, emergency response, health & science policy, and animal welfare. There is anecdotal evidence from the direct work of three EAs in EU policy institutions supporting this.
  • High personal career capital – because of their influence, the EU institutions are very competitive to enter and are therefore a signal of competence. In addition, the EU institutions offer well-paid and stable jobs. As most experts in industry, civil society or academia who want to influence policy reach out to these institutions to share their views or to invite their staff to events, these roles are also attractive for building the network needed to find, in due course, a valuable professional exit opportunity (e.g. government affairs roles at top AI companies, or director roles at consortia-based AI projects).
  • Neglectedness – there are currently a grand total of ~5 FTE EAs working on EU AI governance, making the next person to work on it potentially very valuable counterfactually.

 

Arguments against

1. The EU is not an AI superpower

This argument against the relevance of the EU is that its industry and its research & innovation ecosystem are not influential in the technical development of AI technologies, and therefore unlikely to be influential in the development of AGI-related technologies. From the number of AI companies and startups, to investment, to the number of researchers, the EU lags behind on many metrics that are used as proxies for relevance to AGI governance (see for example this CDI report, this Elsevier report and this Bruegel analysis). Only 10 out of 74 identified AGI projects are in the EU,[5] and these are far from being the best-funded projects.

How does it affect the EU’s relevance? 

Without a scientific, technological or commercial lead in some aspects of AI, the EU’s “field share” (market share, share of IP, share of publications, share of talent, etc.) declines. If the relative influence of EU policies on research, technology and markets is proportional to this field share, then I expect that influence to decline: these policies, and the EU approach, are less likely to become the norm in the development of AI.


2. EU legislation does not matter enough

When considering the series of actions by various actors along the potential pathways to AGI, the claim here is that EU government officials’ actions – such as legislation – won’t influence the outcome enough relative to the costs of influencing those actions. They therefore won’t shift our level of confidence in a safe AGI outcome enough to be worth the investment of EA time and/or money. This claim is particularly strong for very short timelines, under ~5 years (i.e. before the legislation has had time to be implemented and have a structural effect on industry and R&D), and for very long ones, over ~40 years (i.e. at which point the path-dependency effect fades and is in any case offset by the evolution of the ideological, research, institutional and technological landscape). There are various reasons and circumstances under which EU regulations might not matter in the end: if the EU fails to develop balanced regulations, if it fails to transfer these regulations proactively abroad, or if it fails to develop effective safety-enhancing regulatory requirements. Moreover, if the global digital ecosystem surrounding AI does not remain interoperable (e.g. structural decoupling and polarization of the technological landscape) and sustainable (e.g. an AI winter), EU regulations will matter less to the final outcome.

How does it affect the EU’s relevance? 

One of the main pathways of influence on AGI from the EU relies on its regulations on AI to promote a safe outcome. If regulations do not matter enough, the EU’s relevance in AGI governance will be greatly diminished.


3. EU governance would slow down US research towards AGI more than it would Chinese research

This argument is particularly strong if the AGI alignment problem is easy to solve. It suggests that safety-enhancing policy recommendations emanating from the EU would affect US companies and labs more than Chinese ones, as the US AI industry derives more revenue in the EU than the Chinese AI industry does. This is because the Chinese industry relies heavily on its domestic market (where it does not have to comply with existing EU laws for many digital technologies), while the US technology industry relies on exports, notably to the EU (where it does have to comply). Assuming safety-enhancing regulations slow down innovation and research in raw AI capabilities, this would lead to a bigger slowdown in the US than in China. In a world where AGI alignment is easy to solve, the slowdown of US industry relative to Chinese industry would therefore make China more likely to get controllable AGI first. The additional assumption is that Chinese AGI developers would be less likely than US AGI developers to use a controllable AGI in ways that align with the EA community’s values.

How does it affect the EU’s relevance? 

If the argument holds, then the priority of AI governance as a cause area would be to ensure the first controllable AGI systems are developed by entities that would use them for the flourishing of humanity as a whole. This would likely require the ability to control the supply of goods necessary to build AGI (cutting-edge chip designs, lithography systems, highly skilled researchers, …) so as to favor some players’ progress towards AGI over others’. The EU does have some industrial capacity at some of the “AGI supply chain” bottlenecks, notably ASML’s near monopoly on cutting-edge lithography for semiconductors. However, given its commitment to open trade and multilateralism, and its internal divisions, it is unclear whether the EU would ever be able to manipulate supply chains meaningfully in this way.
 

4. The EU is irrelevant to military and national security AGI pathways 

The EU has little military power compared to the US and China, and its national security apparatus is negligible (it mostly exists at Member State level, in an uncoordinated fashion). It is also unclear to what extent foreign military projects on advanced AI or AGI technologies would actually be affected by the EU, but presumably very little. If the critical pathway to AGI goes through military or national security research, I therefore expect EU decisions to be much less relevant to AGI safety. This is not a temporary but a structural situation: the EU does not have security and military institutions equivalent to the US Department of Defense (DoD), National Security Agency (NSA), Defense Advanced Research Projects Agency (DARPA) or Intelligence Advanced Research Projects Activity (IARPA), which are candidates for hosting AGI research projects.

How does it affect the EU’s relevance? 

While there is growing support for, and work towards, establishing a coordinated EU military and security system, notably for advanced research projects, even the most optimistic experts do not foresee this happening in any meaningful way within the next 10 years. Even then, it does not follow that it could significantly influence Chinese, Russian or US military projects on AGI. There could be some slight influence from the EU if the civilian R&D and quality assurance norms it sets become strong industry/academic field norms in the coming years and transfer to technicians and project managers in military or security labs, but there is no precedent for this. If AGI gets developed through military or national security research, the EU would therefore be quite irrelevant to AGI governance.

 

5. The EU is fragile and could become irrelevant 

Although the EU institutions and their predecessors have progressively gained power over the past 70 years (in terms of the level at which various topics are decided), the EU could disintegrate. Brexit has been an example of this process in action. Crises like COVID-19 or the 2010 sovereign debt crisis have generally resulted in more power and resources being concentrated at the EU decision-making level, but the leadership just happened to rise to the occasion. There are recurrent issues that the EU has yet to resolve – migration policy, the coordination of foreign policy among Member States, Member States’ rule of law and populism, etc. Nothing guarantees that the EU will survive future crises related to these issues.

How does it affect the EU’s relevance? 

If the EU institutions dissolve or if their power significantly weakens, investment in gaining influence at the EU level would be much less valuable. The influence of EU laws would persist through national legislation (because national laws have to integrate EU laws), but the collapse would reduce the ability of EAs with EU governance experience to influence future decisions on AGI governance. 


6. Policy-related AI governance could be net negative for AGI safety

This argument refers to AI governance activities that involve policy debate or policy changes. There are various ways in which policy-related AI governance could result in a net negative outcome for AGI safety.

First, there might be an information hazard: a strong version of the information-hazard argument assumes that many policymakers are already familiar with the concept of AGI, but suggests that a better explanation of AGI’s implications (offered to justify safety-enhancing policies) could trigger decision-makers to switch towards a race for AGI. This argument assumes that AGI governance requires explicit discussion of AGI.

Second, it could lead to the politicization of AGI safety. As AI safety turns into a policy topic, stakeholders will have incentives to fund/publish/amplify research aimed mostly at swaying the political debate rather than at solving the control problem. At the extreme, this could result in dangerous signalling tactics and games (e.g. industry underreporting the risks of its AGI research, or publicly releasing a powerful proto-AGI programme just to show confidence in the algorithm being safe). This argument assumes that AGI governance requires treating AGI safety as a separate policy topic. There are precedents where such politicization has occurred to the detriment of public health: research on the scientific hypotheses of anthropogenic climate change, of the causal link between cigarettes and cancer, and of the anthropogenic ozone hole has in each case suffered from organized actions to delegitimise the scientific evidence or the suggested solutions.

Finally, policy-related AI governance could waste AI safety resources: there is significant uncertainty about what to recommend to policymakers. For example, no one can demonstrate with certainty that any EA recommendation integrated into the EU AI legislation would be a net positive compared to the default pathway taken by the EU. Regardless of the uncertainty, the staff or financial resources required to achieve a given unit of impact on the likelihood of safe AGI through AI policy might be higher than those required to achieve the same impact through technical AI research (or through different cause areas). Both this uncertainty and the cost of achieving change could make AI governance work wasteful.

How does it affect the EU’s relevance? 

This argument reduces confidence in the impact of AI governance pursued through policymakers. As current EU AI governance approaches rely almost exclusively on policymaking (rather than norm-setting in e.g. industry, the military, or academia), this would reduce the relevance of the EU.


Other arguments against

  • Higher returns on EA investment in the US and China AI governance space than in the EU’s – there might still be too few resources invested in AI governance in the US and China to expect diminishing returns relative to the EU AI governance space, so it makes sense to continue prioritizing the US and China.
  • The EA EU AI governance space is not mature enough for personal career progression and impact – not enough resources have been spent on the EU AI governance space yet to expect that the next resources spent would accomplish much: unlike the US and UK EA AI policy spaces, the EU EA AI policy space currently stands at ~5 FTEs. There is therefore no “gravity effect” or “network effect” similar to the US and UK, where well-funded CSET and FHI/GovAI have enabled stable jobs for EAs to specialize and get integrated into the AI governance space, making it much less risky career-wise to migrate there. There are two perspectives on this argument. On one hand, in terms of career decisions, it is not worth EAs’ time to work on EU AI governance because there is no significant investment by EA funders into EU AI governance that could de-risk the approach. On the other hand, in terms of funding decisions, it is not worth EA donors’ investment because there are too few EAs in EU AI policy to ensure an informed and effective use of the funds “on the ground”.
  1. ^
  2. ^
  3. ^
  4. ^
  5. ^ Aleph Alpha, Animats (formerly Alice In Wonderland), Curious AI (though the core team now seems to be at Apple), EleutherAI, Fairy Tale AGI solutions, FLOWERS, GoodAI, Mauhn, SingularityNET and Xephor Solutions – based on GCRI’s 2020 landscape, adding Aleph Alpha & EleutherAI.

Comments (20)

If anyone reading this post thinks that the arguments in favor outweigh the arguments against working on EU AI governance, then consider applying for the EU Policy Analyst role that we are hiring for at the Future of Life Institute: https://futureoflife.org/2022/01/31/eu-policy-analyst/. If you have any questions about the role, you can participate in the AMA we are running: https://forum.effectivealtruism.org/posts/j5xhPbj7ywdv6aEJc/ama-future-of-life-institute-s-eu-team.

Intuitive reaction: I think these are all valuable arguments to have explicitly laid out, thanks for doing so. I think they don't quite capture my main intuitions about the value of EU-directed governance work, though; let me try to explain those below.

One intuition draws from the classic distinction between realism and liberalism in international relations. Broadly speaking, I see the EU as being most relevant from a liberal perspective, whereas it's much less relevant from a realist perspective. And although I think of both sides as having important perspectives, the dynamics surrounding catastrophic AGI development feel much better described by realism than by liberalism – it feels like that's the default things will likely fall back to if the world gets much more chaotic and scary, and there are potential big shifts in the global balance of power.

Second intuition: when it comes to governing AGI, I expect that acting quickly and decisively will be crucial. I can kinda see the US govt. being able to do this (especially by spinning off new agencies, or by presidential power). I have a lot more trouble seeing the EU being able to do this, even in a best-case scenario (does the EU even have the ability in theory, let alone in practice, to empower fast-moving organisations with specific mandates?).

Compared with your arguments, I think these two intuitions are more focused on working backwards from a "theory of victory" to figure out what's useful today (as opposed to working forwards towards gaining more influence). Our overall thinking about theories of victory is still so nascent, though, that it feels like there's a lot of option value in having people go down a bunch of different pathways. Plus I have a few other intuitions in favour of the value of EU-directed governance research: firstly, I think people often overestimate the predictability of AGI development. E.g. a European DeepMind popping up within the next decade or two doesn't seem that much less plausible than the original DeepMind popping up in England. It might just take a few outlier founders to make that happen. Secondly, separate from progress on AI itself, it does seem plausible that the EU will have significant influence over the chip supply chain going forward (right now most notably via ASML, as you mention).

Overall I do think people with a strong comparative advantage should do EU-governance-related things, I'm just very uncertain how strong the comparative advantage needs to be for that to be one of the best career pathways (although I do know at least a few people whose comparative advantage does seem strong enough for that to be the correct move).

I think liberalism vs realism is an interesting lens but the conclusion doesn't seem right to me. You say you're working backwards from a theory of victory, but at least that argument was working backwards from a theory of catastrophe. I think this is an is-ought problem, and if we want things to go well then we might want to actively encourage more cooperative IR, whilst also not ignoring the powerful forces.

[anonymous]:

I believe “Victory” here means avoiding catastrophic AGI, which could require encouraging more cooperative international relations.

As for your point on Liberalism vs Realism, Richard, I think it is captured by at least 6 of the arguments listed in the post (Brussels Effect, AI development/governance collaborations, growing political capital, AI superpower, US-China differential, military pathways). Indeed, when I read:

"the dynamics surrounding catastrophic AGI development feel much better-described by realism than by liberalism - it feels like that's the default that things will likely fall back to …”

I decompose “dynamics” and “things” ultimately as actions taken by relevant actors (if necessary, here is my post-length explanation of what I mean by that). At the risk of frustrating generations of IR schools of thought, I'd push the framework further by saying that “Realism” could very roughly be reduced to “the way relevant actors think when they have a low level of trust in relevant actors from different nations”. And to make sure the other half of IR scholars also shriek, "Liberalism" would be reduced to "the way relevant actors think when they have a high level of trust in relevant actors from different nations". (I would also reduce "nations" to "groups of actors", but that's not necessary to the main point.)

The implication of that lower level of international trust (aka Realism) for the debate on whether the EU is relevant is that the validity of these 6 arguments changes (e.g. in a more Realist world, the Brussels Effect is weaker than in a Liberal world). Let me know, Richard, if you think there is a separate, independent causal link between international trust levels and the relevance of the EU that doesn't rely on these 6 arguments/factors, and I can add it to the main post.

As for the concrete impact of governance on AGI, which you, tamgent, seem to allude to, the implications of Realism are deeper: mistrust definitely alters all the “game-theoretical” results. This reduces the range of options that an AGI-concerned decision-maker has. A plausible example is his/her inability to establish a joint alignment-testing protocol/international standard. Is there anything that can be achieved with mistrust/realism that cannot be achieved with trust/liberalism? I don't think so. (That doesn't mean that increasing this level of international trust is a cost-effective intervention, though.)
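To make the game-theoretic point concrete, here is a toy payoff model (a minimal sketch; the payoff numbers and trust probabilities are invented assumptions for illustration, not taken from any source):

```python
# Toy model: two states decide whether to adopt a joint alignment-testing
# standard ("cooperate") or go it alone ("defect"). Payoff numbers are
# invented for illustration only.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,   # shared safety standard works
    ("cooperate", "defect"): -2,     # you comply while the rival races ahead
    ("defect", "cooperate"): 2,      # you race ahead unilaterally
    ("defect", "defect"): 0,         # no standard, everyone races
}

def expected_payoff(my_action: str, trust: float) -> float:
    """Expected payoff of my_action when the other side cooperates with probability `trust`."""
    return (trust * PAYOFFS[(my_action, "cooperate")]
            + (1 - trust) * PAYOFFS[(my_action, "defect")])

# Compare a high-trust ("liberal") world with a low-trust ("realist") one.
for trust in (0.9, 0.3):
    best = max(("cooperate", "defect"), key=lambda a: expected_payoff(a, trust))
    print(f"trust={trust}: best response is to {best}")
# trust=0.9 -> cooperate: the joint standard is reachable
# trust=0.3 -> defect: the cooperative option drops out of the feasible set
```

Under low trust, "cooperate" stops being a best response, which is the sense in which mistrust shrinks the decision-maker's option set.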

[anonymous]:

Thank you Richard – I think your second intuition is a great point. Does this rephrasing capture it? I included the related notion of decisiveness as well. If so, and with your permission, I will add it to the main post.
 

+++
7. EU-level policymaking is slower to react to events like AGI and/or less decisive
Compared to the US or China, the EU has little centralized power and few centralized resources. All 27 EU Member States preserve significant control over EU decision-making, as embodied by veto powers and “executive” procedures that still require the Member States’ collective approval. This is occasionally made more difficult by the European Parliament, which includes 7 political groups and 705 members and whose power has been growing over the years. As a result, most decisions by the European Commission require consultations with many stakeholders and therefore take time. Moreover, the EU-level public budget represents ~1.5% of GDP, compared to ~20% of GDP in the US – so even when there is agreement, it is unclear whether the EU can garner the resources to whip up a decisive response. This structure also prevents the EU from producing the equivalent of US executive orders.

How does it affect the EU’s relevance?

If the development of AGI requires a quick or well-resourced policy response from government, EU-level policymaking might not be as influential as American or Chinese policymaking.

+++

My opinion:
This factor is definitely relevant, so thanks for bringing it up; I think it crucially depends on takeoff speeds, though. (Anyone with a better understanding of US policymaking should correct me if I am wrong here.) If AGI requires strong, one-off government interventions within a 3-6 month window, US policymaking offers a considerable advantage through its strong and quick executive powers. However, for interventions with time horizons over 6 months, where urgency matters less, the case for executive orders fades as far as I understand. Decision-making then falls to the legislative branch, which has been deadlocked for the past 12 years. On the EU’s side, however, legislative power has been particularly strong, producing tech policies with relative ease. Even though the EU procedure is slow, my intuition is that its impact is more structural than that of US executive orders (please correct me?). In cases where AGI safety requires a government intervention within 3-6 months, my hope would be that institutions are already in place to guarantee that this quick intervention happens – e.g. a regulatory agency that has been mandated to do AI code probing and auditing for >20 years before takeoff, and for which alignment auditing of AI labs would be common practice.

This conversation makes me realize a more constructive version of this post would list “important factors” for the relevance of the EU rather than arguments in favour and against. Ah well.

I think there is one argument I really want to back, but I also want to provide a different angle: “Growing the political capital of AGI-concerned people”

I think that even if you believe there are substantial odds that the EU won’t play an important role in the regulation of AGI, having political capital could still be useful for other (tech-related) topics. Quite often there is a “halo effect” attached to being perceived as a tech expert in government: if you are perceived as a tech expert because you know a lot about AI, people will also perceive you as an expert on other technologies (where the EU might be more relevant).

This is also one of the reasons I advise people to work in AI/tech regulation, even when it’s not (solely) on the long-term consequences of AI we care about, but e.g. on short-term risks, or even on the side of economic stimulus of AI development. It will often provide EAs with the political capital and credibility to deal with long-term / x-risk-relevant issues later on, when there is an opportunity to switch roles.

If you believe, however, that the EU will become irrelevant (argument 5 against), all policy careers for EAs in mainland Europe suddenly become quite unappealing. This makes me think: if you believe the EU market and political environment favor AGI safety (argument 4 in favor), shouldn’t it be a priority for European EAs to keep the EU a relevant political force?

[anonymous]:

I think that's a very indirect intervention whose cost-effectiveness is probably lower than that of many other priorities (given the political forces and the many other stakeholders' strong interests, neglectedness is quite low) – but maybe I am missing something? It definitely sounds relevant as a "byproduct" of one's career, though. Would it therefore be a good principle to, ceteris paribus, push for the interventions that strengthen the EU while working on one's own priority?

[anonymous]:

Agreed on the halo effect, but besides one's prestige, I think one's region-specific knowledge and network matter a lot and do not transfer. As a result, if the EU is less relevant, building up prestige in the EU might not be as efficient as building up prestige in China or the US, given that in parallel you'd be developing a network and region-specific knowledge that would be more helpful for having an impact overall.

(That being said, even though I wanted to avoid anchoring the reader by expressing my opinion in the post, I expect the EU to be most relevant right now for AGI governance given the institutional precedents it sets. I believe the lack of investment of EA time & money there is an unfortunate mistake. So the "if the EU is less relevant" scenario should, in my opinion, be disregarded.)

Thanks for this! I appreciate the clear outline of arguments around an important question.

It sounds like this post might be centered around the question, "Should people who want to improve AGI governance work in the EU?", rather than the question of the title, "Is the EU Relevant for AGI Governance?". The focus on the former question seems right to me, since that's the more decision-relevant question. So I wonder if explicitly reframing (e.g., retitling) this post so that it's centered on the former question would make this post more clear?

After all, a bunch of the arguments presented seem to address the value of EU AI governance work, rather than its relevance to AI. I'll try to show that in a subcomment.

Here's the in-the-weeds subcomment:

4. The EU market and political environment favor AGI safety

I agree this magnifies the importance of the EU, but just if we assume that the EU is relevant.

5. Direct influence from inside relevant AI labs is limited

This seems like an argument about relative tractability, rather than relevance.

6. Growing the political capital of AGI-concerned people

This point only seems like an argument for the EU's relevance if we assume (a) that the EU is relevant, or (b) that political capital transfers well across policy communities. (And that seems like a nontrivial assumption, since (a) is the desired conclusion and (b) seems overstated at best.) The point is much more plausible on its own as an argument for EU AI governance work in the near term being valuable.

Personal fit [...] Low-regret career pathway [...] High personal career capital [...] Neglectedness

These all seem like arguments for EU AI governance work being in some way valuable; I don't see how any of these are arguments for the EU being relevant to AI's trajectory.

EU governance would slow down US research towards AGI more than it would Chinese research

This seems to me like an argument for the EU's relevance to AI (and also for the value of working in EU AI governance – if the EU might take some actions that would be harmful for AI's trajectory, then working in the EU and preventing these actions would be a way to positively contribute to AI governance).

Higher returns on EA investment in the US and China AI governance space than in the EU’s [...] The EA EU AI governance space is not mature enough for personal career progression and impact

These also strike me as arguments about career value rather than institutional relevance.

[anonymous]:

I think there is a bit of confusion on several fronts here and in your main comment. 
On title/framing:
The post aims to be descriptive of factors relevant for decision-making, rather than prescriptive about a decision (hence the absence of "should" and the absence of judgment of the arguments). As mentioned in the intro of the post, I am hoping to inform or trigger a conversation and further research, rather than to directly presume that people should make a decision about it. I trust readers to derive the decision-relevant aspects themselves (is that what I am wrong about?)

Relevance vs value & importance: 

I agree this magnifies the importance of the EU, but just if we assume that the EU is relevant.

I am not sure I understand. The way I see it, the value of EU governance work and the importance of the EU are a function of the relevance of the EU for AGI governance.

The concept of relevance:
I am confused by "relevance" in your comment:

This point only seems like an argument for the EU's relevance if we assume (a) that the EU is relevant 

[...]

These all seem like arguments for EU AI governance work being in some way valuable; I don't see how any of these are arguments for the EU being relevant to AI's trajectory.

Given your comment mentioning AI's trajectory, is it possible you understood the post as being about AI's trajectory rather than about the way we govern that trajectory? Also, to be sure we are on the same page: in the post, relevance is not a binary concept and is directly related to the actions leading up to AGI governance.

If you have time, please let me know what I misunderstood. 

Hm, now I'm also a little confused. I agree with a bunch of the points you clarify (and I think they're in line with how I had originally interpreted this post). Specifically, I think we're on the same page about all of these things, among others:

  • This post aims to outline decision-relevant considerations rather than to make an overall prescription
  • The value of EU governance work and the importance of the EU is a function of [among other variables] the relevance of the EU for AGI governance
  • This post is about the way people govern the trajectory of AI
  • Relevance is not a binary concept

--

I'll try to clarify my earlier point. I'm trying to draw a distinction between these two things:

  • (a) the relevance of EU policy to the trajectory of AI, and
  • (b) the value of pursuing (AI-related) work in EU policymaking.

I agree that arguments about (a) are, by extension, arguments about (b). For example, the Brussels Effect (as the post notes) is an argument for the relevance of EU AI policy to AI's trajectory, and this makes it an argument for the value of EU AI policy work.

But I don't think it necessarily goes the other way; some argument can be an argument about (b) without being an argument about (a). For example, this post raises the point that EU policy matters for animal welfare. This option value may be a reason why someone should work in EU policy, but it tells us nothing about whether the EU's AI policy will influence the global trajectory of AI. So it is an argument about (b), but not about (a).

More broadly, my understanding is that roughly all of this post's arguments are arguments regarding (b), but they are not all arguments regarding (a). So it may be clearer to frame the post as a collection of considerations about (b), rather than a collection of considerations about (a), even if the post doesn't propose an overall judgement.

Or maybe I've misunderstood?

[anonymous]:

Thank you for taking the time to explain this so clearly – I understand now ✓ I will edit the title and link to this comment as a disclaimer on framing.

Thanks for this post!

Prompted in part by this post, I've now made a Collection of work on whether/how much people should focus on the EU if they’re interested in AI governance for longtermist/x-risk reasons, which might be of interest to the author and/or to readers. I'd also be keen for people to note other relevant work in comments on that shortform so the collection can become more comprehensive (which I hope will in turn make it easier for other people to get up to speed, make decisions, and make novel contributions building on the existing work).

[anonymous]:

Thanks Michael, that's a great idea! 

Thanks, this is great!

1. The EU is not an AI superpower

When I was thinking about how important this argument is, my thought was that I'd still expect any generically influential international body to have a lot of influence if it wants to and is seen as a productive party to negotiations. What do you think? Maybe there are similar situations today where parties without much relevant economic power can influence negotiations by showing interest in, and ability to contribute to, finding better solutions?

Though this would only hold in a cooperative atmosphere, I suppose... it would also be interesting to consider how cooperative we expect the atmosphere of international politics around AI to be in the coming years and decades.

[anonymous]:

Interesting – that seems to converge with the discussion on the liberalism vs realism atmosphere above. On your specific point, if "productive party" = "party able/motivated to contribute to finding better solutions", I would agree. From my limited experience of political negotiations at many levels, there is an odd phenomenon about the relevance of expertise/motivation: expertise/motivation doesn't matter if everyone has roughly the same amount, but it starts to matter significantly as soon as a subset of parties has a lot of it (because motivation can be converted into further expertise, or into proactive control of the negotiation procedure, such as sending other parties reminders to comment, etc.).

I have observed this phenomenon between nations (e.g. economically small Estonia mattering much more in e-government or digital discussions than you would expect) and between policymakers (e.g. for multiple dossiers, the opinion of the Member of the European Parliament (MEP) or diplomat who cares/knows most about the issue matters more and is more respected by other MEPs/diplomats).

I imagine low-international-trust environments would make this difficult, though (as knowledge/expertise can be manipulated in one's own interest).

Yes, that was also what I intended to communicate with "productive party". Thanks for the response and especially the examples about Estonia and expert parliamentarians. 

A related thought: the more the parties are able to contribute productively to discussions, the more productive the discussions will be and feel, and the more likely a cooperative atmosphere can be maintained. This might be another reason why increasing the EU's AI governance expertise could be really helpful, as it seems very likely to me that the EU will be involved in many important discussions on AI governance.

[anonymous]:

That's a good point
