jackva

Climate Research Lead @ Founders Pledge
Comments

Thanks, this updates me; I had cached something more skeptical on chicken welfare campaigns.

Do you have a sense of what "advocacy multiplier" this implies? Is this >1000x relative to helping animals directly?

I suspect that the relative results between causes are driven -- to a significant degree -- not by differences between the causes themselves but by comfort with risk and the kinds of multipliers that are assumed to be feasible.

FWIW, I also do believe that marginal donations to help farmed animals will do more good than marginal climate donations.

While I think it is a mistake to motivate this estimate with a 2017 BOTEC (here we agree!), it is also mistaken to claim that such a range – spanning more than two OOMs, from fairly low to quite high cost-effectiveness – is implausible as an admittedly uncertain best guess.

As discussed many times, CCF grantmaking does not rely on 2017 BOTECs, and neither does my best guess on cost-effectiveness (Vasco operationalized it in one specific way that I am not going to defend here; I am just defending the view that expected cost-effectiveness is roughly in the 0.1 USD/t to 10 USD/t range).

Why an estimate in this range seems plausible

This seems plausible for several reasons, none of which depends on the specific BOTEC:

I. Outside-view multiplier reasoning

  • (1) It is clearly possible to reduce emissions at USD 100/tCO2eq through direct, high-certainty action.
  • (2) If you then apply only a conventional advocacy multiplier – of the form many EA orgs (e.g. OP) assume when modeling policy work, and which is well-substantiated by empirical political science research and many studies of philanthropic successes – you get a 10x multiplier, i.e. USD 10/t.
  • (3) You now “only” need another 10x multiplier to get to USD 1/t, and there seem to be many plausible mechanisms to get there – e.g. focusing on actions with transformative potential such as innovation, avoiding carbon lock-in, etc., or – more meta – crowding in additional funding from other donors/foundations when supporting early-stage organizations.
  • (4) Obviously, one also needs to discount for things like funding additionality, execution risk, etc.
  • (5) This results in a very uncertain range, but one that is reasonably approximated by the way Vasco has chosen to model it (a minimal sketch of this arithmetic follows below).

Note that these are overall quite weak assumptions and, crucially, if you do not buy them you should probably also not buy the cost-effectiveness analyses on corporate campaigns for chicken welfare.
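
To make the chain of assumptions above explicit, here is a minimal sketch of the arithmetic; the anchor cost, the two multipliers, and the discount are the illustrative numbers from the list above, not outputs of any FP model:

```python
# Minimal sketch of the outside-view chain above; all inputs are the
# illustrative assumptions named in the list, not outputs of any FP model.
direct_cost_per_ton = 100.0   # USD/tCO2eq for direct, high-certainty action (step 1)
advocacy_multiplier = 10.0    # conventional policy-advocacy leverage (step 2)
mechanism_multiplier = 10.0   # innovation focus, avoiding lock-in, crowding in funders (step 3)
discount = 0.5                # hypothetical haircut for additionality, execution risk (step 4)

total_multiplier = advocacy_multiplier * mechanism_multiplier * discount
expected_cost = direct_cost_per_ton / total_multiplier
print(f"~{expected_cost:.1f} USD/tCO2eq")  # ~2.0 with these inputs
# The 0.1-10 USD/t range corresponds to an all-in multiplier of roughly
# 10x to 1000x on the USD 100/t anchor.
```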


II. Observations of grants and inside-view modeling

  • (1) While I generally put less stock in them than in comparative analysis, we also do more inside-view cost-effectiveness analyses, which often land in a range close to USD 0.1-10/tCO2e.
  • (2) While the CCF has not existed long enough to be confident in long-run emissions outcomes – we generally invest in theories of change that need time – there is a lot of reason to expect that some of those bets will pay off at very high cost-effectiveness:
    • (a) Many of the charities supported – such as TerraPraxis, Future Cleantech Architects, etc. – have crowded in multiples of the funding we allocated to them, often as a direct result of our recommendation and/or the organizational development we enabled with early grants.
    • (b) While hard to disentangle, they also play key roles in many policy changes – e.g. Carbon180 was a leading advocacy org on carbon removal in the IRA/IIJA window, two of our grantees are pushing a conversation on repowering with advanced heat sources (nuclear or geothermal), and one of our grantees (FCA) had several policy wins in the EU (not all of which they can talk about).
    • (c) More nascent work is focused on electricity market liberalization to advance renewables in emerging economies (Energy for Growth), a stronger climate civil society on the right (DEPLOY/US), as well as geothermal innovation in Canada (Cascade).
    • (d) This is a diversified set of bets that leverages different mechanisms, with the uniting themes of leveraging advocacy and focusing on actions/spaces that are neglected, that have the potential to change trajectories, and that have a risk-reducing quality (hedging).


III. Learning from other areas of philanthropy

Most areas of philanthropy seem structured such that, if one is comfortable with risk neutrality and leveraged theories of change, one can achieve significant multipliers.

For example, I am quite confident that the implied multiplier for chicken welfare campaigns compared to direct action is similarly large to what we are assuming for the Climate Fund. I also do not think any nuclear risk grantmaker would find it implausible that they could reduce nuclear risk 100x more cost-effectively (in expectation) than whatever the direct-action equivalent would be. Nor would a global health grantmaker find it implausible that their grants could be 100x more cost-effective by funding advocacy for government investment in vaccine RD&D rather than buying equipment for a local hospital.


Bottom line: this cost-effectiveness range, as a risk-neutral best guess, does not depend on a 2017 BOTEC; it can be motivated via several distinct streams of reasoning and evidence.

(I also think the critique of the 2017 BOTEC is way over-confident, but that would be a separate comment.)
 

@Vasco Grillo would be well-placed to do the math here, but I have the strong intuition that, under most views that give some weight to animal welfare, the marginal climate damage from additional beef consumption will be outweighed by the reduction in animal suffering by a large margin.

 

My sense is that it is not a big priority.

However, I would also caution against the view that expected climate risk has increased over the past years.

Even if impacts are arriving faster than predicted, most GCR-relevant climate risk probably does not come from developments in the 2020s but from emissions paths over this century.

And the big story there is that expected cumulative emissions have decreased substantially (see e.g. here).

As far as I know no one has done the math on this, but I would expect that the decreased likelihood of high-warming futures dominates the somewhat higher-than-anticipated warming at lower levels of emissions.
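
As a purely hypothetical illustration of this last point (the scenario probabilities and damage weights below are invented for the sake of the argument, not estimates anyone has published), a two-scenario expected-value comparison shows how a large drop in the probability of high-emissions paths can dominate a moderate increase in per-path damage:

```python
# Hypothetical illustration only -- these probabilities and damage weights are
# invented for the sake of the argument, not estimates anyone has published.

# Stylized scenarios: (probability, relative GCR-relevant damage)
old_view = {"low_emissions": (0.5, 1.0), "high_emissions": (0.5, 20.0)}
new_view = {"low_emissions": (0.8, 1.3), "high_emissions": (0.2, 26.0)}
# new_view: high-emissions paths much less likely, but per-path damage ~30%
# higher to reflect faster-than-expected impacts at a given emissions level.

expected_old = sum(p * d for p, d in old_view.values())
expected_new = sum(p * d for p, d in new_view.values())
print(expected_old, expected_new)  # 10.5 vs ~6.2: the probability shift dominates
```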

Even if one is skeptical of the detailed numbers of a cost-effectiveness analysis like this (as I am), I think it is nonetheless pretty clear that this 1M spent was a pretty great bet:

  1. When I talked to ITIF in 2020, they were pretty clear about how transformative the Let's Fund campaign had been for their fundraising.
  2. Given the amount of innovation-related decision making that occurred in the run-up to and early in the Biden administration -- what became the IIJA, CHIPS, and IRA, probably the largest expansion of energy innovation activity in decades -- significantly strengthening one of the most respected voices on energy innovation seemed clearly very good.
  3. ITIF literally co-authored the most detailed blueprint for the Biden energy innovation agenda (Energizing America) and had clear ties into the White House, so, conditional on them being funding-constrained (which they perceived themselves to be; see (1)), it seems hard to think there wasn't a pretty useful way to spend this additional funding.
  4. Even if one thinks ITIF shifted zero dollars towards innovation (from other areas), just marginally improving a single decision would quickly make this a great investment (see the rough sketch at the end of this comment).
  5. We have lots of evidence from other areas that this kind of philanthropy works and often has large impacts via legislative subsidy and other mechanisms.
  6. The ~2000 smallish donors would not have spent their money better otherwise, given how most small climate donors allocate their funds (Big Green etc.).

I think there's a failure mode of looking at a cost-effectiveness model like this and rightly thinking "this is really crude and unbelievable!" while, in this case, wrongly concluding that this wasn't a great bet, even though it is one that is hard to put into a credible BOTEC.
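
To spell out the arithmetic behind point 4, here is a rough sketch with hypothetical numbers; the size of the funds at stake and the improvement fraction are illustrative assumptions, not claims about ITIF's actual influence:

```python
# Hypothetical numbers: the funds at stake and the improvement fraction are
# illustrative assumptions, not claims about ITIF's actual influence.
grant = 1_000_000              # the ~1M spent via the campaign (currency assumed USD)
funds_at_stake = 50e9          # illustrative order of magnitude of energy-innovation funding decisions
improvement_fraction = 1e-4    # improving the allocation's value by 0.01%

value_created = funds_at_stake * improvement_fraction
print(value_created / grant)   # 5.0 -> even a 0.01% improvement repays the grant several times over
```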

"Pyramid scheme" has a new meaning.

I am also just beginning to think about this more, but some initial thoughts:

  • Path dependency from self-amplifying processes -- thinking about model generations as forks where significant changes in the trajectory become possible (e.g. crowding in a lot more investment, as has happened with ChatGPT/GPT4, but also, as has also happened, a changed Overton window). I think this introduces a dynamic where the extremes of the scenario space become more likely, with social dynamics such as a strong increase in investment or, on the other side, stricter regulation after a warning shot having self-amplifying effects (a toy sketch follows after this list). As the sums get larger and the public and policy makers pay far more attention, I think the development process will become a lot more contingent (my sense is that you are already thinking about these things at Convergence).
  • Modeling domestic politics and geopolitics -- e.g. the Biden and Trump AI policies probably look quite different, as does the outlook for race dynamics (essentially all mentions of artificial intelligence by Project 2025, a Heritage-backed attempt to define priorities for an incoming Republican administration, are about science dominance and/or competition with China; there is no discussion of safety at all).
  • Modeling more direct AI progress > AI politics > AI policy > AI progress feedback loops, i.e. asking, based on what we know from past examples or theory: what kind of labor displacement would one need to see to expect serious backlash? What kind of warning shots would likely lead to serious regulation? And similar questions.
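
As a toy illustration of the first point (my own sketch, not anything Convergence uses), a simple process in which deviations from a baseline partly reinforce themselves ends up with far more probability mass at the extremes of the scenario space than the same process without feedback:

```python
import random
import statistics

# Toy model only: a quantity of interest (e.g. investment/attention in frontier AI)
# evolves over model generations; deviations from the baseline are partly
# self-amplifying. The feedback strength and noise scale are invented numbers.
def run(generations=20, feedback=0.3, noise=0.2):
    level = 1.0
    for _ in range(generations):
        shock = random.gauss(0, noise)
        # positive feedback: movement away from the baseline reinforces itself
        level += feedback * (level - 1.0) + shock
    return level

with_feedback = [run(feedback=0.3) for _ in range(5_000)]
no_feedback = [run(feedback=0.0) for _ in range(5_000)]
print(statistics.pstdev(with_feedback), statistics.pstdev(no_feedback))
# The feedback case has a much wider outcome distribution, i.e. far more
# probability mass at the extremes than the no-feedback case.
```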

     

I agree with you that the 2018 report should not have been used as primary evidence for CATF cost-effectiveness in WWOTF (and, IIRC, I advised against it and recommended an argument based more on landscaping considerations, with leverage from advocacy and induced technological change). But this comment is quite misleading with regard to FP's work, as we have discussed before:

  1. I am not quite sure what is meant by "referencing it", but this comment from 2022, in response to one of your earlier claims, already discusses that we (FP) have not been using that estimate for anything since at least 2020. This was also discussed in other places before 2022.
  2. As discussed in my comment on the Rethink report you cite, correcting the mistakes in the REDD+ analysis was one of the first things I did when joining FP in 2019, and we stopped recommending REDD+-based interventions in 2020. Indeed, I have been publicly arguing against treating REDD+ as cost-effective ever since, and the thrust of my comment on the RP report is that they were still too optimistic.

It seems like this number will increase by 50% once FLI (Foundation) fully comes online as a grantmaker (assuming they spend 10%/year of their USD 500M+ gift)

https://www.politico.com/news/2024/03/25/a-665m-crypto-war-chest-roils-ai-safety-fight-00148621
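
To spell out the implied arithmetic (assuming "this number" refers to annual grantmaking): 0.10 × USD 500M ≈ USD 50M/yr, and for that to be a 50% increase, the current annual total would have to be roughly USD 100M/yr.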

Interesting, thanks for clarifying!

Just to fully understand -- where does that intuition come from? Is it that there is a common structure to high impact? (e.g. if you think APs are good for animals you also think they might be good for climate, because some of the goodness comes from the evidence of modular scalable technologies getting cheap and gaining market share?)
