
The Effective Altruism movement often uses a scale-neglectedness-tractability framework. Because of that framework, when I discovered issues like baitfish, fish stocking, and rodents fed to pet snakes, I thought it was an advantage that they are almost maximally neglected (seemingly no one is working on them). Now I think it’s also a disadvantage, because there are set-up costs associated with starting work on a new cause. For example:

  • You first have to bridge the knowledge gap. There are no shoulders of giants to stand on; you have to build up knowledge from scratch.
  • Then you probably need to start a new organization where all employees will be beginners. No one will know what they are doing because no one has worked on this issue before. It takes a while to build expertise, especially when there are no mentors.
  • If you need support, you usually have to somehow make people care about a problem they’ve never heard of before. And it could be a problem which only a few types of minds are passionate about, which may be part of why it was neglected all this time (e.g. insect suffering).

[Graph]

Now let’s imagine that someone did a cost-effectiveness estimate and concluded that some well-known intervention (e.g. suicide hotline) was very cost-effective. We wouldn’t have any of the problems outlined above:

  • Many people already know how to effectively do the intervention and can teach others.
  • We could simply fund existing organizations that do the intervention.
  • If we do found new organizations, it might be easier to fundraise from non-EA sources. If you talk about a widely known cause or intervention, it’s easier to make people understand what you are doing and probably easier to get funding.

Note that we can still use EA-style thinking to make interventions more cost-effective. E.g. fund suicide hotlines in developing countries because they have lower operating costs.

Conclusions

We don’t want to create too many new causes with high set-up costs. We should consider finding and filling gaps within existing causes instead. However, I’m not writing this to discourage people from working on new causes and interventions. This is only a minor argument against doing it, and it can be outweighed by the value of information gained about how promising the new cause is.

Furthermore, this post shows that if we don’t see any way to immediately make a direct impact when tackling a new issue (e.g. baitfish), it doesn’t follow that the cause is not promising. We should consider how much impact could be made after set-up costs are paid and more resources are invested.

Opinions are my own and not the views of my employer.

Comments

I didn’t mention it in the post because I wanted to keep it short, but there was a related discussion on a recent 80,000 Hours podcast with Will MacAskill with some good points:

Will MacAskill: So take great power war or something. I’m like great power wars are really important? We should be concerned about it. People normally say, “Oh, but what would we do?”. And I’m kinda like, I don’t know. I mean policy around hypersonic missiles is like one thing, but really I don’t know. We should be looking into it. And then people are like, “Well, I just don’t really know”. And so don’t feel excited about it. But I think that’s evidence of why diminishing marginal returns is not exactly correct. It’s actually an S curve. I think if there’d never been any like investment and discussion about AI and now suddenly we’re like, “Oh my God, AI’s this big thing”. They wouldn’t know what to do on Earth about this. So there’s an initial period of where you’re getting increasing returns where you’re just actually figuring out like where you can contribute. And that’s interesting if you get that increasing returns dynamic because it means that you don’t want really spread, even if it’s the case that–

Robert Wiblin: It’s a reason to group a little bit more.

Will MacAskill: Exactly. Yeah. And so I mean a couple of reasons which kind of favor AI work over these other things that maybe I think are just as important in the grand scheme of things is we’ve already done all the sunk cost of building kind of infrastructure to have an impact there. And then secondly, when you combine with the fact that just entirely objectively it’s boom time in AI. So if there’s any time that we’re going to focus on it, it’s when there’s vast increases in inputs. And so perhaps it is the case that maybe my conclusion is perhaps I’m just as worried about war or genetic enhancement or something, but while we’ve made the bet, we should follow through with it. But overall I still actually would be pretty pro people doing some significant research into other potential top causes and then figuring out what should the next thing that we focus quite heavily on be

Robert Wiblin: I guess especially people who haven’t already committed to working on some other area if they’re still very flexible. For example, maybe they should go and think about great power conflict if you’re still an undergraduate student.

Will MacAskill: Yeah, for sure and then especially different causes. One issue that we’ve found is that we’re talking so much about biorisk and AI risk and they’re just quite weird small causes that can’t necessarily absorb large numbers of people perhaps who don’t have… Like I couldn’t contribute to biorisk work, nor do I have a machine learning background and so on, whereas some other causes like climate change and great power war potentially can absorb just much larger quantities of people and that could be a strong reason for looking into them more too.

If we're being careful, should these considerations just be fully captured by Tractability/Solvability? Essentially, the marginal % increase in resources only solves a small % of the problem, based on the definitions here.

Yes, the formula cancels out to good done / extra person or dollar, so the framework remains true by definition no matter what. Although if we are talking about a totally new cause, then the % increase in resources is infinite and the model breaks.
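For readers who haven’t seen the cancellation spelled out, this is the 80,000 Hours decomposition (paraphrasing their definitions; the exact wording on their site may differ slightly):

```latex
\underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{scale}}
\times
\underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{solvability}}
\times
\underbrace{\frac{\%\text{ increase in resources}}{\text{extra person or \$}}}_{\text{neglectedness}}
=
\frac{\text{good done}}{\text{extra person or \$}}
```

The "infinity" case above sits in the neglectedness factor: when current resources are zero, any extra person or dollar is an infinite percentage increase, even though the product as a whole stays finite.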

But the way I see it, the framework is useful partly because it’s easy to use intuitively. Maybe it’s just me, but when I now try to think about newish causes with accelerating returns in terms of the model, I find it confusing. It’s easier just to think directly about what I can do and what impact it could end up having. Perhaps the model is not useful for new and newish causes.

Also, I think this definition is different from how EAs casually use the model, and I was making this point for people who use it casually.

It seems to me that there are two separate frameworks:

1) the informal Importance, Neglectedness, Tractability framework best suited to ruling out causes (i.e. this cause isn't among the highest priority because it's not [insert one or more of the three]); and

2) the formal 80,000 Hours Scale, Crowdedness, Solvability framework best used for quantitative comparison (by scoring causes on each of the three factors and then comparing the total).

Treating the second one as merely a formalization of the first one can be unhelpful when thinking through them. For example, even though the 80,000 Hours framework does not account for diminishing marginal returns, it justifies the inclusion of the crowdedness factor on the basis of diminishing marginal returns.

Notably, EA Concepts has separate pages for the informal INT framework and the 80,000 Hours framework.

Caspar Oesterheld makes this point in Complications in evaluating neglectedness:

I think many interventions initially face increasing returns from learning/research, creating economies of scale, specialization within the cause area, etc. For example, in most cause areas, the first $10,000 are probably invested into prioritization, organizing, or (potentially symbolic) interventions that later turn out to be suboptimal.

(I strongly recommend this neglected (!) article.)

Ben Todd makes a related point about charities (rather than causes) in Stop assuming ‘declining returns’ in small charities:

Economies of scale are a force for increasing returns, and they win out while still at a small scale, so the impact of the 5th staff member can easily be greater than the 4th.
Economies of scale are caused by:
1. Gains from specialisation. In a one person organisation, that person has to do everything – marketing, making the product, operations and so on. In a larger organisation, however, you can hire a specialist to do each function, which is more efficient.
2. Fixed costs. Often you have to pay the same amount of money for a service no matter what scale you have e.g. legally registering as an organisation costs about the same amount of time no matter how large you are; an aircraft with 100 passengers requires the same number of pilots as one with 200 passengers. As you become larger, fixed costs become a smaller and smaller fraction of the total.
3. Physical effects. Running an office that’s 2x as large doesn’t cost 2x as much to heat, because the volume increases by the cube of the length, while the surface area only increases by the square of the length. A rule of thumb is that capital costs only increase 50% in order to double capacity.
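As an aside, one common formalisation of that last rule of thumb (my gloss; the quote itself doesn't give an exponent) is the engineering "six-tenths rule": capital cost scales roughly with capacity to the power 0.6, which works out to about a 50% cost increase for each doubling of capacity:

```latex
\text{cost} \propto \text{capacity}^{0.6}
\quad\Longrightarrow\quad
\frac{\text{cost}(2C)}{\text{cost}(C)} = 2^{0.6} \approx 1.52
```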

I think this suggests the cause prioritization factors should ideally take the size of the marginal investment we're prepared to make into account, so Neglectedness should be

"% increase in resources / extra investment of size X"

instead of

"% increase in resources / extra person or $",

since the latter assumes a small investment. At the margin, a small investment in a neglected cause has little impact because of setup costs (so Solvability/Tractability is low), but a large investment might get us past the setup costs and into better returns (so Solvability/Tractability is higher).
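One way to formalise this (my sketch, not an official 80,000 Hours formula): let B(R) be total benefit as a function of resources invested and R_0 the amount invested so far. The value per unit of an investment of size X is then the average slope over that interval rather than the instantaneous slope at the current margin:

```latex
\text{value per unit of an investment of size } X
\;=\;
\frac{B(R_0 + X) - B(R_0)}{X}
```

On an S-shaped B, this average can be much larger for an X big enough to clear the set-up region than the instantaneous marginal return B'(R_0) at the current level of investment.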

As you suggest, if you're spreading too thin between neglected causes, you don't get far past their setup costs, and Solvability/Tractability remains lower for each than if you'd just chosen a smaller number to invest in.

I would guess that for baitfish, fish stocking, and rodents fed to pet snakes, there's a lot of existing expertise in animal welfare and animal interventions (e.g. corporate outreach/campaigns) that's transferable, so the setup costs wouldn't be too high. Did you find this not to be the case?

I think some of the set-up work does transfer. For corporate campaigns to work, you need the involvement of public-facing corporations that have at least some customers who care about animal welfare, and an ask that wouldn't bankrupt them if implemented. I don't think that any corporations involved in baitfish or fish stocking are at all like that. Maybe you could run a corporate campaign pressuring pet shops to stop selling pet snakes. But snakes are also sold at specialized reptile shops, and I don't think such a campaign against them would make sense.

However, animal advocates also do legislative outreach and that could be used for these issues as well. Also, they have an audience who cares about animals and that could be very useful too.

I'm sure I've heard this idea before - maybe in the old 80k career guide?

Yeah, probably someone mentioned it before. Before posting this, I wanted to read everything that had been written about neglectedness to make sure that what I’m saying is novel. But there is a lot of text written about it, and I got tired of reading it. I think that’s why I didn’t post this text on the EA forum when I initially wrote it a year ago. But then I realized that it doesn’t really matter whether someone mentioned it before or not. I knew that it’s not common knowledge within EA because the few times I mentioned the argument, people said that it’s a good and novel point. So I posted it.

I find this comment relatable, and agree with the sentiment.

I also think I’ve come across the sort of idea in this post somewhere in EA before. But I can’t recall where, so for one thing this post gives me something to link people to.[1] Plus this particular presentation feels somewhat unfamiliar, so I think I’ve learned something new from it. And I definitely found it an interesting refresher in any case.

In general, I think reading up on what’s been written before is good (see also this), but that there’s no harm in multiple posts with similar messages, and often there are various benefits. And EA thoughts are so spread out that it’s often hard to thoroughly check what’s come before.

So I think it’s probably generally a good norm for people to be happy to post things without worrying too much about the possibility they’re rehashing existing ideas, at least when the posts are quick to write.

[1] Also, I sort of like the idea of often having posts that serve well as places to link people to for one specific concept or idea, with the concept or idea not being buried in a lot of other stuff. Sort of like posts that just pull out one piece of a larger set of ideas, so that that piece can be easily consumed and revisited by itself. I try to do this sort of thing sometimes. And I think this post could serve that function too - I’d guess that wherever I came across a similar idea, there was a lot of other stuff there too, so this post might serve better as a reference for this one idea than that source would've.

ETA: A partial counter-consideration is that some ideas may be best understood in a broader context, and may be misinterpreted without that context (I'm thinking along the lines of the fidelity model). So I think the "cover just one piece" approach won't always be ideal, and is rarely easy to do well.

I agree! I think it was worth posting.

I list a couple of possible sources.

I think this is a potentially very large problem with a lot of EA work. With AI research, for example, it seems like we're still at the far left end of the graph, since no one is actually building anything resembling dangerous AI yet, so all research papers do is inform more research papers.

We should also really stop talking about neglectedness as a variable independent of importance and tractability - IMO it's more like a function of the two that you can use as a heuristic to estimate them. If a huge-huge problem has had a merely huge amount of resources put into it (e.g. climate change), it might still turn out to be relatively neglected.

Perhaps EA's roots in philosophy lead it more readily to this failure mode?

Take the diminishing marginal returns framework above. Total benefit is not likely to be a function of a single variable, 'invested resources'. If we break 'invested resources' out into its constituent parts, we'll hit the buffers the OP identifies.

Breaking it into constituent parts would mean envisaging the scenario in which the intervention was effective and adding up the concrete things one would spend money on to get there: does it need new PhDs minted? There's a related operational analysis about timelines: how many years for the message to sink in?

Also, for concrete functions, it is entirely possible that the sigmoid curve is almost flat up to an extraordinarily large total investment (and regardless of any subsequent heights it may reach). This is related to why ReLU functions are preferred over sigmoids in neural networks: the near-zero gradients of a saturated sigmoid prevent learning.
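A minimal numerical sketch of that point, assuming a logistic total-benefit curve with made-up parameters (the shape is what matters, not the numbers):

```python
import numpy as np

def total_benefit(resources, scale=100.0, midpoint=50.0, steepness=0.2):
    """Toy S-curve: total benefit as a function of resources invested.

    The functional form and parameters are arbitrary; they just illustrate
    a curve that stays nearly flat until set-up costs have been paid.
    """
    return scale / (1.0 + np.exp(-steepness * (resources - midpoint)))

current = 0.0      # a totally new cause: nothing invested yet
small_step = 1.0   # a small marginal investment
large_step = 60.0  # a large investment that clears the set-up region

# Benefit per unit for the small investment at the current margin.
small_return = (total_benefit(current + small_step) - total_benefit(current)) / small_step

# Average benefit per unit for the large investment from the same starting point.
large_return = (total_benefit(current + large_step) - total_benefit(current)) / large_step

print(f"benefit per unit, small investment: {small_return:.3f}")  # ~0.001
print(f"benefit per unit, large investment: {large_return:.3f}")  # ~1.5
```

With these parameters the large investment returns roughly a thousand times more per unit invested than the small one, which is exactly the "almost flat until a large total investment" regime described above.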

Thanks for this! Well-written and an important point.

This dynamic is probably at play for psychedelics: see "Whether or not psychedelics are an EA cause area".
