Crossposted from the Global Priorities Project

This post discusses the question of how we should seek to compare human- and animal-welfare interventions. It argues: first, that indirect long-term effects mean that we cannot simply compare the short term welfare effects; second, that if any animal-welfare charities are comparably cost-effective with the best human-welfare charities, it must be because of their effect on humans, changing the behaviour of humans long into the future.

In the search for the most cost-effective interventions, one major area of debate is whether the evidence favours focussing on human or animal welfare. Some have argued for estimates showing animal welfare interventions to be much more cost-effective per unit of suffering averted, with an implication that animal welfare should perhaps therefore be prioritised. However, this reasoning ignores a critical difference between the two types of intervention: the long-term impact. There are good arguments that the long-term future has a large importance, and that we can expect to be able to influence it.

The intention here is not to attack the cost-effectiveness estimates, which may well be entirely correct as far as they go. However, like most such assessments they only consider the immediate, direct impact of interventions and compare these against each other. For example, disease relief schemes would be compared by looking at the immediate welfare benefits to the humans or animals cured of disease.

What this process misses is that interventions to improve human welfare have ongoing positive effects throughout society. It has been argued that healthy humans with access to education, food and clean water are far more likely to be productive, and contribute to the economic development of their society, with knock-on improvements for everyone who comes after them. Also, not having to worry about their basic needs may free them up to spend more time considering and improving the circumstances of others around them.

The upshot of this is that interventions in human welfare, as well as being immediately effective at relieving suffering and improving lives, also tend to have a significant long-term impact. This is often more difficult to measure, but the short-term impact can generally be used as a reasonable proxy.

By contrast, no analogous mechanism ensures that an improvement in the welfare of one animal results in improvements in the welfare of other animals. This is primarily because animals lack societies, or at least the sort of societies capable of learning and economic development. Even though many animals are able to learn new skills, they are extremely limited in their ability to share this knowledge or even pass it on to their offspring.

The result is that short-term welfare benefits to animals cannot be used as even a rough proxy for long-term improvements in the same way as they can for humans. So even if the short-term estimates suggest that animal welfare interventions are more cost-effective, it is certainly questionable whether this aspect dominates when considering the overall benefits.

This does not, of course, rule out the possibility that the most effective interventions could have an animal welfare element. For example, a shift in society towards vegetarianism would reduce the number of animals kept in poor conditions today, as well as improving human welfare in numerous ways (such as using fertile land more efficiently to grow crops rather than raise cattle). Moreover, if it could achieve a lasting improvement in societal values, it might have a large benefit in improved animal welfare over the long-term.

A push towards vegetarianism is one sort of value-shifting intervention. It is possible that this or another such intervention could be more effective than direct improvements to human welfare, but in order to assess this we need to model how changing societal values today will influence the behaviour of future generations. This should be a target for further research.

Humans are uniquely placed to pass on the benefits of interventions to the rest of society and to future generations, and if we ignore these future-compounding effects we may achieve less good than we could have. For many types of human welfare intervention, we can use the short-term benefits to humans as a proxy for ongoing improvements in a way that is not possible - and may be misleading - when it comes to improvements to animal welfare. Although it is difficult to quantify, this hidden benefit may be enough to make the best human-focussed interventions more cost-effective than the best animal-focussed ones, even if the reverse seems true looking at the short run.

Comments

Thanks for the post. :)

It's far from obvious that short-term human development is a good metric for far-future trajectories. Indeed, some believe the opposite. I'm personally extremely ambivalent on the matter ( http://foundational-research.org/publications/differential-intellectual-progress-as-a-positive-sum-project/#economic-growth ).

As Nick Bostrom says in "Astronomical Waste," what matters is the safety / wisdom with which we approach the future, not the speed. A lot of arguing needs to be done to say that speeding human development in the short run improves the safety of the future. I personally expect that many interventions are much better than human development for the far future, and short-term helping of humans may not be a very good proxy at all.

I agree that short-term helping of animals is also not a great proxy of long-term helping of animals, though the two may correlate because of memetic side effects. Memes might help make human development good for the far future too, though probably the effect is less than for animals because it's already widely accepted that human suffering matters.

Thanks Brian. The argument was supposed to be that short-term human welfare effects are a reasonable proxy for long-term effects (after multiplying by a factor on whose size and sign I don't draw a conclusion, although I did point to people claiming it is positive), and that it's harder to find such a corrective factor for comparing different kinds of animal welfare interventions.

Your and Peter's comments did persuade me to change the title of the post, which was slightly misleading in focusing attention on welfare.

Fair enough about citing others who claim it's a positive correlation. :)

The idea that the quality of the far future is strongly influenced in a compounding fashion by human empowerment strikes me as a rather specific and controversial model. From the outside, it looks like anchoring to human-poverty charities. If I were to come up with a list of variables to push on that I thought would causally improve the far future, third-world poverty or economic growth probably wouldn't make the top 10.

Of course, other variables that I would care about (e.g., degree of international cooperation, good governance, philosophical sophistication, quality of discourse, empathy, etc.) might happen to correlate well with poverty reduction or growth, but causation matters. Even if the welfare of elderly patients is correlated with a good far future, working to improve the welfare of elderly people probably isn't the best place to push.

The question of where to push to make the far future better seems to me inadequately discussed, with different people assuming their own particular views. (Hence part of the importance of GPP. :) )

I'm really glad the Global Priorities Project exists and I look forward to seeing more research. I think this piece was also particularly well-written in a very accessible yet academic voice.

That being said, I'm not sure of the intention of this piece, but it feels neither novel nor thorough. I'm excited that my calculator is linked in this piece, but to clarify, I no longer hold the view that those cost-effectiveness estimates are to be taken as the be-all and end-all of the impact, and I don't think any EAs still do.

Furthermore, many people now argue that the main impact of working on animals is a long-term shift in societal views that helps not present-day humans, but future animals. Ending factory farming, for example, would have a large compounding effect on all the future animals that would no longer be factory farmed, and attitude change is the only way to make that happen.

Likewise, some people (though I'm unsure) think that spreading anti-speciesism might be a critical gateway toward helping people expand their moral concern to wild animals or computer programs (e.g., suffering subroutines) in the far future too.

It's not just that this piece doesn't address this possibility; it seems to ignore it entirely by focusing (somewhat dogmatically) on humans.

Thanks for the comments. I actually agree that it's neither novel nor thorough. It was written not as a research piece but to fill a (perceived) gap in the recorded EA conversation on this. I think we have cases where thinking outstrips accessible accounts of the output, and we need to make sure that there's a good route into these things for people coming at it anew.

I didn't want to spend too long looking at the ways that animal interventions today could help future animals, although I agree that this is an important route to impact (which I did flag: "Moreover, if it could achieve a lasting improvement in societal values, it might have a large benefit in improved animal welfare over the long-term.")

I guess overall the tone of the piece is not quite right for you -- which makes sense as you're a little too informed to be the target audience. The takeaway is supposed to be that the routes to impacting the far future are by impacting humans today. I'm not trying to draw any conclusions about the nature of those interventions (although I use human welfare interventions as an easy-to-grok example and because their long-term effects have been discussed elsewhere).

Although I generally encourage dissenting opinions in the EA community, I think the idea expressed by this post is harmful and dangerous for similar reasons as those expressed by Brian and Peter.

1) "Some have argued for estimates showing animal welfare interventions to be much more cost-effective per unit of suffering averted, with an implication that animal welfare should perhaps therefore be prioritised."

This seems to be a misrepresentation of the views held by many EAs. Cost-effectiveness calculations are employed by every EA prioritization organization, and nobody is claiming they necessarily imply a higher priority. They are only one of many factors we consider when evaluating causes.

2) "Moreover, if it could achieve a lasting improvement in societal values, it might have a large benefit in improved animal welfare over the long-term."

I am glad this sentence was included, but it is relatively deep in the post and is one of the strongest reasons EAs advocate against factory farming and speciesism. I posted my thoughts on the subject here: ( http://thebestwecan.org/2014/04/29/indirect-impact-of-animal-advocacy/ )

3) "The upshot of this is that it is likely interventions in human welfare, as well as being immediately effective to relieve suffering and improve lives, also tend to have a significant long-term impact. This is often more difficult to measure, but the short-term impact can generally be used as a reasonable proxy. ... "

I could replace "human" with "non-human animal" welfare here and the argument would be just as valid. It's a grand assumption to think this applies to human-focused causes but not others. If you have further justification for this, I think that would be an interesting post.

4) "For many types of human welfare intervention, we can use the short-term benefits to humans as a proxy for ongoing improvements in a way that is not possible - and may be misleading - when it comes to improvements to animal welfare."

I think we'd all be happy for you to defend this assertion, since it is quite controversial within EA and the broader community.

Hi Jacy,

Thanks for your comment and the link to your own post, which I'd not read. I'm glad to see discussion of these indirect effects, and I think it's an area that needs more work for a deeper understanding.

I'm a bit confused by your hostility, as it seems that we are largely in agreement about the central point, which is that the route to long-term benefits flows in large part through short-term effects on humans (whether those are welfare improvements, value shifts, or some other category). I'm aware that this is not a novel claim, but it's also one that is not universally known.

I'm particularly confused by your opening sentence. Could you explain how this is harmful or dangerous?

A couple of replies to specific points follow, so that we can thread the conversation properly if need be.

Owen,

I appreciate that you're thinking about flow-through/long-term effects and definitely agree we need more discussion and understanding in the area.

My "hostility" (although it isn't that extreme =] really) is primarily due to the propagation of the assumption that "human-focused causes have positive significant flow-through effects while non-human animal-focused causes do not." We have a lot more research to do into questions like this before we have that sort of confidence.

So the danger here is that impact-focused people might read the post and say "Wow, I should stop trying to support non-human animal-welfare since it doesn't matter in the long-run!" I realize that your personal view is more nuanced, and I wish that came across more in your post. Possible flow-through effects such as (i) promoting antispeciesism, (ii) increasing scope sensitivity, and (iii) reducing cognitive dissonance, among many others, seem perfectly viable.

Hope that makes sense.

Sure, that makes sense. I think that the post would only be likely to elicit that immediate response in someone whose major reason for supporting animal welfare was the large amount of short-term suffering that it could avert, but I will make sure to pay attention to the possible take-home messages when writing blog posts.

I'm happy to hear you state your views outside the post. They seem reasonable and open-minded, which was not my original impression. I look forward to reading more of your work. Always feel free to send me articles/ideas for critique/discussion.

> I think we'd all be happy for you to defend this assertion, since it is quite controversial within EA and the broader community.

I think you must be misreading my assertion, because I don't think it's very controversial.

I am here saying -- and not here defending (though I link to others saying this) -- that many short term welfare benefits to humans are likely to compound in a way that means that the size of short term benefit tracks the size of long term benefit.

I'm also claiming that, in contrast, with animal welfare interventions it matters much more how the benefit was achieved, because most of the indirect benefits will come through the same channel (better welfare outcomes from human value shifts, for example, may be much better than similarly sized better welfare outcomes from the invention of a comfier cage for battery hens).

If that is your assertion, I feel the post misrepresents your view as something much stronger (i.e. human-focused causes have significant positive impact that non-human animal-focused causes do not, therefore human-focused causes are better). This is disingenuous and caused our negative reactions.

I've just re-read the post and I don't think it misrepresents the view. But it is clearly the case that people reading it can come away with an erroneous impression, so something has gone wrong. Sorry about that.

"that many short term welfare benefits to humans are likely to compound in a way that means that the size of short term benefit tracks the size of long term benefit."

I think this is actually controversial in the EA community. My impression is that Eliezer Yudkowsky and Luke Muehlhauser would disagree with it, as would I. Others who support the view are likely to acknowledge that it's non-obvious and could be mistaken. Many forms of short-term progress may increase long-term risks.

> Cost-effectiveness calculations are employed by every EA prioritization organization, and nobody is claiming they necessarily imply a higher priority.

Cost-effectiveness is one of the classic tools used in prioritisation, and at least in theory a higher level of cost-effectiveness should exactly imply higher priority. Now the issue is that we don't trust our estimates, because they may omit important consequences that we have some awareness of, or track the wrong variables. But when people bring cost-effectiveness estimates up, there is often an implicit claim to priority (or one may be read in even if not intended).

"Cost-effectiveness is one of the classic tools used in prioritisation, and at least in theory a higher level of cost-effectiveness should exactly imply higher priority. Now the issue is that we don't trust our estimates, because they may omit important consequences that we have some awareness of, or track the wrong variables."

I totally agree.

"But when people bring cost-effectiveness estimates up, there is often an implicit claim to priority (or one may be read in even if not intended)."

I would agree with the point in parentheses, but often it's just brought up as one factor in a multitude of decision-making criteria. And I think that's a good place for it, at least until we get better at it.

"By contrast, no analogous mechanism ensures that an improvement in the welfare of one animal results in the improvements in the welfare of other animals."

Do you count ensuring an animal who would have lived a net negative life never comes into existence as "an improvement in the welfare of one animal"? If so, it's possible that an improvement in the welfare of one animal could result in improvements in the welfare of other animals by reducing greenhouse gas emissions. Climate change mitigation may also help animals living in the far future.

I think superintelligence/FAI is a critical far-future factor that could very much benefit from animal advocacy. Caring about animals (and lesser minds in general) is very important from the perspective of FAI. If such an AI ends up taking some "Coherent Extrapolated Volition" of humanity as the basis of its utility function, then we need "care for lesser minds" to be a strong component of it, else we are doomed.