Around the end of Feb 2024 I attended the Summit on Existential Risk and EAG: Bay Area (GCRs), during which I did 25+ one-on-ones about the needs and gaps in the EA-adjacent catastrophic risk landscape, and how they’ve changed.

The meetings were mostly with senior managers or researchers in the field who I think are worth listening to (unfortunately I can’t share names). Below is how I’d summarise the main themes in what was said.

If you have different impressions of the landscape, I’d be keen to hear them.

  • There’s been a big increase in the number of people working on AI safety, partly driven by a reallocation of effort (e.g. Rethink Priorities starting an AI policy think tank); and partly driven by new people entering the field after its newfound prominence.
  • Allocation in the landscape seems more efficient than in the past – it’s harder to identify especially neglected interventions, causes, money, or skill-sets. That means it’s become more important to choose based on your motivations. That said, here are a few ideas for neglected gaps:
  • Within AI risk, it seems plausible the community is somewhat too focused on risks from misalignment rather than mis-use or concentration of power.
  • There’s currently very little work going into issues that arise even if AI is aligned, including the deployment problem, Will MacAskill’s “grand challenges” and Lukas Finnveden’s list of project ideas. If you put significant probability on alignment being solved, some of these could have high importance too; though most are at the stage where they can’t absorb a large number of people.
  • Within these, digital sentience was the hottest topic, but to me it doesn’t obviously seem like the most pressing of these issues. (Though doing field building for digital sentience is among the more shovel-ready of these ideas.)
  • The concrete entrepreneurial idea that came up the most, and seemed most interesting to me, was founding orgs that use AI to improve epistemics / forecasting / decision-making (I have a draft post on this – comments welcome).
  • Post-FTX, funding has become even more dramatically concentrated under Open Philanthropy, so finding new donors seems like a much bigger priority than in the past. (It seems plausible to me that $1bn in a foundation independent from OP could be worth several times that amount added to OP.)
  • In addition, donors have less money than in the past, while the number of opportunities to fund things in AI safety has increased dramatically, which means marginal funding opportunities seem higher value than in the past (as a concrete example, nuclear security is getting almost no funding from the community, and perhaps only ~$30m of philanthropic funding in total).
  • Both points mean efforts to start new foundations, fundraise and earn to give all seem more valuable compared to a couple of years ago.
  • Many people mentioned comms as the biggest issue facing both AI safety and EA. EA has been losing its battle for messaging, and AI safety is in danger of losing its battle too (with both a new powerful anti-regulation tech lobby and the more left-wing AI ethics scene branding it as sci-fi, doomer, cultish and in bed with labs).
  • People might be neglecting measures that would help in very short timelines (e.g. transformative AI in under 3 years), though that might be because most people are unable to do much in these scenarios.
  • Right now, directly talking about AI safety seems to get more people in the door than talking about EA, so some community building efforts have switched to that.
  • There’s been a recent influx of junior people interested in AI safety, so it seems plausible the biggest bottleneck again lies with mentoring & management, rather than recruiting more junior people.
  • Randomly: there seems to have been a trend of former leaders and managers switching back to object level work.


 

Comments (46)

there seems to have been a trend of former leaders and managers switching back to object level work

Guess: people who enjoy object level work ended up doing leadership and management because EA catastrophic risk work was so short on experienced people there. As this crunch has somewhat resolved, those people are able to go back to the work they like and are good at?

(But you probably know more than I do about whether the management/leadership experience crunch has actually lessened)

Another guess: people who were competent in individual contributor roles got promoted into people management roles because of issues I mentioned here:

My guess is Ben's referring to people like Holden Karnofsky, who went from working in finance to co-founding and -running GiveWell and then Open Phil to now doing research at a think tank.

Also Nick Bostrom, Nick Beckstead, Will MacAskill, Ben Todd, some of whom have been lifelong academics.

Probably different factors in different cases.

I'd love to hear from other people whether the management/leadership crunch has lessened?

nuclear security is getting almost no funding from the community, and perhaps only ~$30m of philanthropic funding in total.

Do we know why OP aren't doing more here? They could double that amount and it would barely register on their recent annual expenditures.

I want to be clear it's not obvious to me OP is making a mistake. I'd lean towards guessing AI safety and GCBRs are still more pressing than nuclear security. OP also have capacity constraints (which make it e.g. less attractive to pursue smaller grants in areas they're not already covering, since it uses up time that could have been used to make even larger grants elsewhere). Seems like a good fit for some medium-sized donors who want to specialise in this area.

I don't know if they're making a mistake - my question wasn't meant to be rhetorical.

I take your point about capacity constraints, but if no-one else is stepping up, it seems like it might be worth OP expanding their capacity.

I continue to think the EA movement systematically underestimates the x-riskiness of nonextinction events in general and nuclear risk in particular, by ignoring much of the increased difficulty of becoming interstellar after the destruction or exploitation of key resources. I gave some example scenarios of this here (see also David's results) - not intended to be taken too seriously, but nonetheless incorporating what I think are significant factors that other longtermist work omits (e.g. in The Precipice, Ord defines x-risk very broadly, but when he comes to estimate the x-riskiness of 'conventional' GCRs, he discusses them almost entirely in terms of their probability of making humans immediately go extinct, which I suspect constitutes a tiny fraction of their EV loss).

For what it's worth, my working assumption for many risks (e.g. nuclear, supervolcanic eruption) was that their contribution to existential risk via 'direct' extinction was of a similar level to their contribution via civilisation collapse. e.g. that a civilisation collapse event was something like 10 times as likely, but that there was also a 90% chance of recovery. So in total, the consideration of non-direct pathways roughly doubled my estimates for a number of risks.
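Concretely, a minimal sketch of that working assumption (illustrative numbers only, not precise estimates):

```python
# Working assumption: collapse is ~10x as likely as direct extinction,
# but civilisation recovers from collapse ~90% of the time.
p_direct = 1.0                 # direct extinction risk, normalised to 1 unit
collapse_multiplier = 10       # collapse ~10x as likely as direct extinction
p_no_recovery = 0.10           # ~10% chance civilisation never recovers

via_collapse = collapse_multiplier * p_no_recovery * p_direct   # = 1 unit
total = p_direct + via_collapse                                 # ~2x the direct-only estimate
print(total / p_direct)        # 2.0
```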

One thing I didn't do was to include their roles as risk factors. e.g. the effect that being on the brink of nuclear war has on overall existential risk even if the nuclear war doesn't occur.

Thanks Toby, that's good to know. As I recall, your discussion (much of which was in footnotes) focussed very strongly on effects that might be extinction-oriented, though, so I would be inclined to put more weight on your estimates of the probability of extinction than your estimates of indirect effects. 

E.g. a scenario you didn't discuss that seems plausible to me is approximately "reduced resource availability slows future civilisations' technical development enough that they have to spend a much greater period in the time of perils, and in practice become much less likely to ever successfully navigate through it" - even if we survive as a semitechnological species for hundreds of millions of years.

I discuss something similar to that a bit on page 41, but mainly focusing on whether depletion could make it harder for civilisation to re-emerge. Ultimately, it still looks to me like it would be easier and faster the second time around. 

I'd be interested to reread that, but on my version p41 has the beginning of the 'civilisational virtues' section and end of 'looking to our past', and I can't see anything relevant. 

I may have forgotten something you said, but as I recall, the claim is largely that there'll be leftover knowledge and technology which will speed up the process. If so, I think it's highly optimistic to say it would be faster:

1) The blueprints leftover by the previous civilisation will at best get us as far as they did, but to succeed we'll necessarily need to develop substantially more advanced technology than they had.

2) In practice they won't get us that far - a lot of modern technology is highly contingent on the exigencies of currently available resources. E.g. computers would presumably need a very different design in a world without access to cheap plastics.

3) The second time around isn't the end of the story - we might need to do this multiple times, creating a multiplicative drain on resources (e.g. if development is slowed by the absence of fossil fuels, we'll spend that much longer using up rock phosphorus), whereas lessons available from previous civilisations will be at best additive and likely not as good as that - we'll probably lose most of the technology of earlier civilisations when dissecting it to make the current one. So even if the second time would be faster, it would move us one civilisation closer to a state where it's impossibly slow.

Thanks for the context, Toby!

For what it's worth, my working assumption for many risks (e.g. nuclear, supervolcanic eruption) was that their contribution to existential risk via 'direct' extinction was of a similar level to their contribution via civilisation collapse

I was guessing you agreed the direct extinction risk from nuclear war and volcanoes was astronomically low, so I am very surprised by the above. I think it implies your annual extinction risk from:

  • Nuclear war is around 5*10^-6 (= 0.5*10^-3/100), which is 843 k (= 5*10^-6/(5.93*10^-12)) times mine.
  • Volcanoes is around 5*10^-7 (= 0.5*10^-4/100), which is 14.8 M (= 5*10^-7/(3.38*10^-14)) times mine.

I would be curious to know your thoughts on my estimates. Feel free to follow up in the comments on their posts (which I had also emailed to you around 3 and 2 months ago). In general, I think it would be great if you could explain how you got all your existential risk estimates shared in The Precipice (e.g. decomposing them into various factors as I did in my analyses, if that is how you got them).

Your comment above seems to imply that direct extinction would be an existential risk, but I actually think human extinction would be very unlikely to be an existential catastrophe if it was caused by nuclear war or volcanoes. For example, I think there would only be a 0.0513 % (= e^(-10^9/(132*10^6))) chance of a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, being existential. I got my estimate assuming:

  • An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such a catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:
    • An exponential distribution with a mean of 66 M years describes the time between:
      • 2 consecutive such catastrophes.
      • i) and ii) if there are no such catastrophes.
    • Given the above, i) and ii) are equally likely. So the probability of an intelligent sentient species evolving after human extinction in such a catastrophe is 50 % (= 1/2).
    • Consequently, one should expect the time between i) and ii) to be 2 times (= 1/0.50) as long as it would be if there were no such catastrophes.
  • An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
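To make the arithmetic above easy to check, here is a minimal sketch (my own illustration; the 5.93*10^-12 and 3.38*10^-14 annual extinction risks are my estimates linked above, and the 0.5 factor reflects reading roughly half of the per-century existential risk estimates in The Precipice as direct extinction):

```python
import math

# Annualised direct extinction risk implied by reading ~half of the
# per-century existential risk estimates in The Precipice as direct extinction.
toby_nuclear_annual = 0.5 * 1e-3 / 100   # 5e-6 per year
toby_volcano_annual = 0.5 * 1e-4 / 100   # 5e-7 per year

# Ratios against my own annual extinction risk estimates (linked above).
print(toby_nuclear_annual / 5.93e-12)    # ~843 k
print(toby_volcano_annual / 3.38e-14)    # ~14.8 M

# Chance that human extinction in a Cretaceous-Paleogene-style catastrophe is
# existential: no intelligent sentient species re-evolves within the ~1 billion
# habitable years left, with the re-evolution time modelled as exponential
# with mean 132 M years (= 2 * 66 M years).
print(math.exp(-1e9 / 132e6))            # ~5.13e-4, i.e. ~0.0513 %
```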

It seems plausible to me that $1bn in a foundation independent from OP could be worth several times that amount added to OP.

How can $1B to a foundation other than OP be worth more than $2B to OP, unless OP is allocating grants very inefficiently? You would have to believe they are misjudging the EV of all their grants, or poorly diversifying against other possible futures, for this to be true.

Intellectual diversity seems very important to figuring out the best grants in the long term.

If the community currently has, say, $20bn to allocate, you only need a 10% improvement to future decisions to be worth +$2bn.

Funder diversity also seems very important for community health, and therefore our ability to attract & retain talent. It's not attractive to have your org & career depend on such a small group of decision-makers.

I might quantify the value of the talent pool at around another $10bn, so again, you only need a ~10% increase here to be worth a billion, and over-centralisation seems like one of the bigger problems.

The current situation also creates a single point of failure for the whole community.

Finally it still seems like OP has various kinds of institutional bottlenecks that mean they can't obviously fund everything that would be 'worth' funding in abstract (and even moreso to do all the active grantmaking that would be worth doing). They also have PR constraints that might make some grants difficult. And it seems unrealistic to expect any single team (however good they are) not to have some blindspots.

$1bn is only 5% of the capital that OP has, so you'd only need to find 1 grant for every 20 that OP makes that they've missed, at 2x the effectiveness of marginal OP grants, in order to get 2x the value.

One background piece of context is that I think grants often vary by more than 10x in cost-effectiveness.
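To make this concrete, a minimal sketch of the three pathways (using my rough numbers above; they are not precise estimates, and since the pathways are mostly independent the sum is an optimistic upper bound):

```python
# Rough value of an independent $1bn foundation via the three pathways above.
community_capital = 20e9                       # ~$20bn the community can allocate
talent_pool_value = 10e9                       # rough value placed on the talent pool

better_decisions = 0.10 * community_capital    # ~$2bn from a ~10% improvement in decisions
healthier_talent = 0.10 * talent_pool_value    # ~$1bn from a ~10% boost to attracting & retaining talent

# Filling OP's gaps: matching 1 in 20 OP-sized grants that OP missed, at ~2x the
# effectiveness of OP's marginal grants, makes the $1bn worth ~$2bn on its own.
direct_grants = 2 * 1e9

print(better_decisions / 1e9, healthier_talent / 1e9, direct_grants / 1e9)  # 2.0 1.0 2.0
```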

I might quantify the value of the talent pool at around another $10bn, so again, you only need a ~10% increase here to be worth a billion, and over-centralisation seems like one of the bigger problems.

I find it plausible that a strong fix to the funder-diversity problem could increase the value of the talent pool by 10% or even more. However, having a new independent funder with $1B in assets (spending much less than that per year) feels more like an incremental improvement.

$1bn is only 5% of the capital that OP has, so you'd only need to find 1 grant for every 20 that OP makes that they've missed, at 2x the effectiveness of marginal OP grants, in order to get 2x the value.

You'd need to do that consistently (no misses, unless counteracted by >2x grants) and efficiently (as incurring similar overhead as OP with $1B of assets would consume much of the available cash flow). That seems like a tall order. 

Moreover, I'm not sure if a model in which the new major funder always gets to act "last" would track reality very well. It's likely that OP would change its decisions, at least to some extent, based on what it expected the other funder to do. In this case, the new funder would end up funding a significant amount of stuff that OP would have counterfactually funded.

It might take more than $1bn, but around that level, you could become a major funder of one of the causes like AI safety, so you'd already be getting significant benefits within a cause.

Agree you'd need to average 2x for the last point to work.

Though note the three pathways to impact - talent, intellectual diversity, OP gaps - are mostly independent, so you'd only need one of them to work.

Also agree in practice there would be some funging between the two, which would limit the differences, that's a good point.

nuclear security is getting almost no funding from the community

For reference, I collected some data on this:

Supposedly cause neutral grantmakers aligned with effective altruism have influenced 15.3 M$[17] (= 0.03 + 5*10^-4 + 2.70 + 3.56 + 0.0488 + 0.087 + 5.98 + 2.88) towards efforts aiming to decrease nuclear risk[18]:

Many people mentioned comms as the biggest issue facing both AI safety and EA. EA has been losing its battle for messaging, and AI safety is in danger of losing its battle too (with both a new powerful anti-regulation tech lobby and the more left-wing AI ethics scene branding it as sci-fi, doomer, cultish and in bed with labs).

 

My sense is more could be done here (in the form of surveys, experiments and focus groups / interviews) pretty easily and cheaply relative to the size of the field. I'm aware of some work that's been done in this area, but it seems like there is low-hanging fruit, such as research like this, which could easily be replicated in quantitative form (rather than a small-scale qualitative form) to assess which objections are most concerning for different groups. That said, I think both qualitative research (like focus groups and interviews) and quantitative work (e.g. more systematic experiments to assess how people respond to different messages and what explains these responses) are lacking.

Caveat: conflict of interest

I agree. However, I also think that doing more surveys does not prevent the failure mode of EAs treating "doing comms" as doing more surveys rather than actual interventions to align general opinion with more rational takes on this particular topic. Shyness and low socio-emotional skills among leaders seem commonplace in EA, far more so than in the rest of the world, to the point where the best interventions targeting communication skills seem neglected to me.

Skills in communications, and funds for paying skilled individuals responsible for communications in any particular org, are, imho, generally lacking. I have eavesdropped on a certain number of (non-sensitive) meetings of a small AIS org, and the general level of knowledge on how to convey any given message (especially outside of EA, especially to an opponent) is, in my opinion, insufficient, despite good knowledge of the surveys and their results. People in this org mostly generated their own ideas, judged them using their intuition, and acted on them, rather than using established knowledge or empirical expertise to pick the best ideas. Most of the people in the process are AIS researchers with a background in CS, rather than people with both a background in AIS and communications who are also excellent communicators (to a non-EA audience). One person I met openly shared their concern about not having enough funding to pay someone responsible for PR and comms, as well as growing tired of managing something they have no background in. Surveys didn't really help with this bottleneck.

My fear is that there is not enough money, and that most people don't care enough because they trust their intuitions too much / are afraid to actually remedy this lack of skills and would rather do surveys (on my side, I definitely feel fear and worry about talking to journalists or carefully balancing epistemics whilst not hurting common sense).

My only not-so-real data point is this (compare karma on LW vs EAF for a better sense). In a world where people saw a technical problem in communications, I would have expected this post to have more success. In short, I'd bet that most communications-skills-related interventions/hires are usually considered with reluctance.

I do acknowledge that surveys could be an even lower-hanging fruit, of course. But I think that they should not distract us from improving skills per se.

It seems possible that both of these are neglected for similar reasons. 

It seems surprising that funding would be the bottleneck (which would mean you can't just have more of both). But that has been my experience surprisingly often: core orgs are willing to devote many highly valuable staff hours to collaborating on survey projects, but balk at ~$10,000 survey costs.

All but 3 bullet points were about AI. I know that AI is the number one catastrophic risk but I'm dyin' for variety (news on other fronts).

Here is the non-AI content:

  • Allocation in the landscape seems more efficient than in the past – it’s harder to identify especially neglected interventions, causes, money, or skill-sets. That means it’s become more important to choose based on your motivations.
  • Post-FTX, funding has become even more dramatically concentrated under Open Philanthropy, so finding new donors seems like a much bigger priority than in the past. (It seems plausible to me that $1bn in a foundation independent from OP could be worth several times that amount added to OP.)
  • Both points mean efforts to start new foundations, fundraise and earn to give all seem more valuable compared to a couple of years ago.

(My bad if there were indications that this was going to be AI-centric from the outset, I could have easily missed some linguistic signals because I'm not the most savvy forum-goer.)

My impression is that of EA resources focused on catastrophic risk, 60%+ are now focused on AI safety, or issues downstream of AI (e.g. even the biorisk people are pretty focused on the AI/Bio intersection).

AI has also seen dramatic changes to the landscape / situation in the last ~2 years, and my update was focused on how things have changed recently.

So for both reasons most of the updates that seemed salient to me concerned AI in some way.

That said, I'm especially interested in AI myself, so I focused more on questions there. It would be ideal to hear from more bio people.

I also briefly mention nuclear security, where I think the main update is the point about lack of funding.

 

I think there is more value in separating out AI vs bio vs nuclear vs meta GCR than in having posts/events marketed as GCR but mainly about one topic. Both the minor causes and the main cause would get more relevant attention that way.

Also, the strategy/marketing of those causes will often be different, so it doesn't make as much sense to lump them together unless it is about GCR prioritisation or cross-cause support.

Within AI risk, it seems plausible the community is somewhat too focused on risks from misalignment rather than mis-use or concentration of power.


My strong bet is that most interventions targeted toward concentration of power end up being net-negative by further proliferating dual-use technologies that can't adequately be defended against.

Do you have any proposed interventions that don't contain this drawback?

Further, why should this be prioritised when there are already many powerful actors dead set on proliferating these technologies as quickly as possible - the large open-source labs, plus all of the money that governments are spending on accelerating commercialization (which dwarfs spending on AI safety), plus all the efforts by various universities and researchers at commercial labs to publish as much as possible about how to build such systems?

I am extremely confused (theoretically) how we can simultaneously have:

1. An Artificial Superintelligence

2. It be controlled by humans (therefore creating misuse or concentration-of-power issues)

The argument doesn't get off the ground for me

It doesn't seem too conceptually murky. You could imagine a super-advanced GPT, which when you ask it any questions like 'how do I become world leader?' gives in-depth practical advice, but which never itself outputs anything other than token predictions.

Hi Arepo, thanks for your idea. I don't see how it could give advice so concrete and relevant for something like that without being a superintelligence, which makes it extremely hard to control.

You might be right, but that might also just be a failure of imagination. 20 years ago, I suspect many people would have assumed that by the time we got AI at the level of ChatGPT, it would basically be agentic - as I understand it, the Turing test was basically predicated on that idea, and ChatGPT has pretty much nailed that while having very few characteristics that we might recognise in an agent. I'm less clear, but also have the sense that people would have believed something similar about calculators before they appeared.

I'm not asserting that this is obviously the most likely outcome, just that I don't see convincing reasons for thinking it's extremely unlikely.

I should maybe have added that several people mentioned "people who can practically get stuff done" is still a big bottleneck.

  • People might be neglecting measures that would help in very short timelines (e.g. transformative AI in under 3 years), though that might be because most people are unable to do much in these scenarios.

There's a lot that can be done, especially in terms of public and political advocacy. PauseAI is really gaining momentum now as a hub for the slow/Pause/Stop AGI/ASI movement (which is largely independent of EA). Lots of projects happening in the Discord, and see here for a roadmap of what they could do with more funding.

There’s currently very little work going into issues that arise even if AI is aligned, including the deployment problem

The deployment problem (as described in that link) is a non-problem if you know that AI is aligned.

Hi Benjamin! Thanks for your post. Regarding this comment: "Right now, directly talking about AI safety seems to get more people in the door than talking about EA, so some community building efforts have switched to that."

What do you mean by "in the door"?

I mean "enter the top of the funnel".

For example, if you advertise an event as being about it, more people will show up to the event. Or more people might sign up to a newsletter.

(We don't yet know how this translates into more intense forms of engagement.)

Thanks for the update, Ben.

nuclear security is getting almost no funding

You mean almost no philanthropic funding? According to 80,000 Hours' profile on nuclear war:

This issue is not as neglected as most other issues we prioritise. Current spending is between $1 billion and $10 billion per year (quality-adjusted).

I estimated the nearterm annual extinction risk per annual spending for AI risk is 59.8 M times that for nuclear risk. However, I have come to prefer expected annual deaths per annual spending as a better proxy for the cost-effectiveness of interventions which aim to save lives (relatedly). From this perspective, it is unclear to me whether AI risk is more pressing than nuclear risk.

I think the 80K profile notes (in a footnote) that their $1-10 billion guess includes many different kinds of government spending. I would guess it includes things like nonproliferation programs and fissile materials security, nuclear reactor safety, and probably the maintenance of parts of the nuclear weapons enterprise -- much of it at best tangentially related to preventing nuclear war. 

So I think the number is a bit misleading (not unlike adding up AI ethics spending and AI capabilities spending and concluding that AI safety is not neglected). You can look at the single biggest grant under "nuclear issues" in the Peace and Security Funding Index (admittedly an imperfect database): it's the U.S. Overseas Private Investment Corporation (a former government funder) paying for spent nuclear fuel storage in Maryland... 

A way to get at a better estimate of non-philanthropic spending might be to go through relevant parts of the State International Affairs Budget, the Bureau of Arms Control, Deterrence and Stability (ADS, formerly Arms Control, Verification, and Compliance), and some DoD entities (like DTRA), and a small handful of others, add those up, and add some uncertainty around your estimates. You would get a much lower number (Arms Control, Verification, and Compliance budget was only $31.2 million in FY 2013 according to Wikipedia -- don't have time to dive into more recent numbers rn).

All of which is to say that I think Ben's observation that "nuclear security is getting almost no funding" is true in some sense both for funders focused on extreme risks (where Founders Pledge and Longview are the only ones) and for the field in general.

Thanks for the comment, Christian!

I think the 80K profile notes (in a footnote) that their $1-10 billion guess includes many different kinds of government spending. I would guess it includes things like nonproliferation programs and fissile materials security, nuclear reactor safety, and probably the maintenance of parts of the nuclear weapons enterprise -- much of it at best tangentially related to preventing nuclear war. 

Most of these are relevant to preventing nuclear war (even if you do not think they are the best way of doing it):

  • More countries having nuclear weapons makes nuclear war more likely.
  • Fissile materials are an input to making nuclear weapons.
  • Malfunctioning nuclear systems/weapons could result in accidents.

So I think the number is a bit misleading (not unlike adding up AI ethics spending and AI capabilities spending and concluding that AI safety is not neglected). You can look at the single biggest grant under "nuclear issues" in the Peace and Security Funding Index (admittedly an imperfect database): it's the U.S. Overseas Private Investment Corporation (a former government funder) paying for spent nuclear fuel storage in Maryland... 

According to the footnote in the 80,000 Hours' profile following what I quoted, the range is supposed to refer to the spending on preventing nuclear war (which is not to say the values are correct):

The resources dedicated to preventing the risk of a nuclear war globally, including both inside and outside all governments, is probably $10 billion per year or higher. However, we are downgrading that to $1–10 billion per year quality-adjusted, because much of this spending is not focused on lowering the risk of use of nuclear weapons in general, but rather protecting just one country, or giving one country an advantage over another. Much is also spent on anti-proliferation measures unrelated to the most harmful scenarios in which hundreds of warheads are used. It is also notable that spending by nongovernment actors represents only a tiny fraction of this, so they may have some better opportunities to act.


A way to get at a better estimate of non-philanthropic spending might be to go through relevant parts of the State International Affairs Budget, the Bureau of Arms Control, Deterrence and Stability (ADS, formerly Arms Control, Verification, and Compliance), and some DoD entities (like DTRA), and a small handful of others, add those up, and add some uncertainty around your estimates. You would get a much lower number (Arms Control, Verification, and Compliance budget was only $31.2 million in FY 2013 according to Wikipedia -- don't have time to dive into more recent numbers rn).

For reference, the range mentioned by 80,000 Hours suggests the spending on nuclear risk is 4.87 % (= 4.04/82.9) of the 82.9 billion $ spent on nuclear weapons in 2022.

All of which is to say that I think Ben's observation that "nuclear security is getting almost no funding" is true in some sense both for funders focused on extreme risks (where Founders Pledge and Longview are the only ones) and for the field in general 

I think one had better assess the cost-effectiveness of specific interventions (as GiveWell does) instead of focussing on spending. You estimated doubling the spending on nuclear security would save a life for 1.55 k$, which corresponds to a cost-effectiveness around 3.23 (= 5/1.55) times that of GiveWell's top charities. I think corporate campaigns for chicken welfare are 1.44 k times as cost-effective as GiveWell's top charities, and therefore 446 (= 1.44*10^3/3.23) times as cost-effective as what you got for doubling the spending on nuclear security.
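To make the comparison explicit, here is a minimal sketch of the arithmetic (illustrative only; the ~5 k$ per life saved for GiveWell's top charities is the round number behind the 3.23 factor):

```python
givewell_cost_per_life = 5_000    # ~5 k$ per life saved (round assumption)
nuclear_cost_per_life = 1_550     # estimate above for doubling nuclear security spending

nuclear_vs_givewell = givewell_cost_per_life / nuclear_cost_per_life   # ~3.23
chicken_vs_givewell = 1_440       # my estimate for corporate campaigns for chicken welfare
chicken_vs_nuclear = chicken_vs_givewell / nuclear_vs_givewell         # ~446

print(round(nuclear_vs_givewell, 2), round(chicken_vs_nuclear))
```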

From what I understand, the MacArthur foundation was one of the main funders of nuclear security research, including at the Carnegie Endowment for International Peace, but they massively reduced their funding of nuclear projects and no large funder has replaced them.  https://www.macfound.org/grantee/carnegie-endowment-for-international-peace-2457/

(I've edited this comment, I got confused between the MacArthur foundation and the various Carnegie philanthropic efforts.) 

Thanks, Abby. I knew MacArthur had left the space, but not that Carnegie Endowment had recently decreased funding. In any case, I feel like discussions about nuclear risk funding often implicitly assume that a large relative decrease in philanthropic funding means a large increase in marginal cost-effectiveness, but this is unclear to me given it is only a small fraction of total funding. According to Founders Pledge's report on nuclear risk, "total philanthropic nuclear security funding stood at about $47 million per year ["between 2014 and 2020"]". So a 100 % reduction in philanthropic funding would only be a 1.16 % (= 0.047/4.04) relative reduction in total funding, assuming this is 4.04 G$, which I got from the mean of a lognormal distribution with 5th and 95th percentile equal to 1 and 10 G$, corresponding to the lower and upper bound guessed in 80,000 Hours’ profile on nuclear war.
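For transparency, here is a minimal sketch of how the 4.04 G$ figure follows from treating 80,000 Hours' 1 to 10 G$ range as the 5th and 95th percentiles of a lognormal distribution:

```python
import math
from statistics import NormalDist

# Lognormal with 5th and 95th percentiles at 1 and 10 (G$).
z = NormalDist().inv_cdf(0.95)               # ~1.645
mu = (math.log(1) + math.log(10)) / 2        # mean of the log
sigma = (math.log(10) - math.log(1)) / (2 * z)
mean_spending = math.exp(mu + sigma**2 / 2)  # ~4.04 G$

philanthropic = 0.047                        # ~47 M$/year of philanthropic funding (Founders Pledge)
print(mean_spending, philanthropic / mean_spending)  # ~4.04, ~0.0116 (i.e. ~1.16 %)
```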

Just to clarify:

  •  MacArthur Foundation has left the field with a big funding shortfall
  • Carnegie Corporation is a funder that continues to support some nuclear security work
  • Carnegie Endowment is a think tank with a nuclear security program
  • Carnegie Foundation is an education nonprofit unrelated to nuclear security

Thanks for the clarification, too many Carnegies! 

It looks to me like the nuclear security space isn't in dire need of funding, despite MacArthur ending its nuclear security program. Nuclear Threat Initiative (NTI) ran a deficit in 2022 (they reported $19.5M in expenses versus $14M in revenues), but they had net assets of $79M, according to their Form 990 which can be found here. Likewise, Carnegie Endowment has no shortage of major funders. Is it important for the EA movement to make up for the funding shortfall?

Thanks for the comment! I commented below that:

In any case, I feel like discussions about nuclear risk funding often implicitly assume that a large relative decrease in philanthropic funding means a large increase in marginal cost-effectiveness, but this is unclear to me given it is only a small fraction of total funding. According to Founders Pledge's report on nuclear risk, "total philanthropic nuclear security funding stood at about $47 million per year ["between 2014 and 2020"]". So a 100 % reduction in philanthropic funding would only be a 1.16 % (= 0.047/4.04) relative reduction in total funding, assuming this is 4.04 G$, which I got from the mean of a lognormal distribution with 5th and 95th percentile equal to 1 and 10 G$, corresponding to the lower and upper bound guessed in 80,000 Hours’ profile on nuclear war.

More importantly, I believe the global catastrophic risk community had better assess the cost-effectiveness of specific interventions (as GiveWell does) instead of focussing on spending. Christian Ruhl from Founders Pledge estimated doubling the spending on nuclear security would save a life for 1.55 k$, which corresponds to a cost-effectiveness around 3.23 (= 5/1.55) times that of GiveWell's top charities. I think corporate campaigns for chicken welfare are 1.44 k times as cost-effective as GiveWell's top charities, and therefore 446 (= 1.44*10^3/3.23) times as cost-effective as what Christian got for doubling the spending on nuclear security.

I meant from the EA catastrophic risk community, sorry for not clarifying.

I see. I think it is better to consider spending from other sources because these also contribute towards decreasing risk. In addition, I would not weight spending by cost-effectiveness (and much less give 0 weight to non-EA spending), as this is what one is trying to figure out when using spending/neglectedness as a heuristic.
