I'm a Senior Research Manager at Rethink Priorities, an Associate Professor of Philosophy at Texas State University, and the Director of the Society for the Study of Ethics & Animals.
Thanks for your discussion of the Moral Weight Project's methodology, Carl. (And to everyone else for the useful back-and-forth!) We have some thoughts about this important issue and we're keen to write more about it. Perhaps 2024 will provide the opportunity!
For now, we'll just make one brief point, which is that it’s important to separate two questions. The first concerns the relevance of the two envelopes problem to the Moral Weight Project. The second concerns alternative ways of generating moral weights. We considered the two envelopes problem at some length when we were working on the Moral Weight Project and concluded that our approach was still worth developing. We’d be glad to revisit this and appreciate the challenge to the methodology.
However, even if it turns out that the methodology has issues, it’s an open question how best to proceed. We grant the possibility that, as you suggest, more neurons = more compute = the possibility of more intense pleasures and pains. But it's also possible that more neurons = more intelligence = less biological need for intense pleasures and pains, as other cognitive abilities can provide the relevant fitness benefits, effectively muting the intensities of those states. Or perhaps there's some very low threshold of cognitive complexity for sentience, beyond which all variation in behavior is due to non-hedonic capacities. Or perhaps cardinal interpersonal utility comparisons are impossible. And so on. In short, while it's true that there are hypotheses on which elephants have massively more intense pains than fruit flies, there are also hypotheses on which the opposite is true and on which equality is (more or less) true. Once we account for all these hypotheses, it may still work out that elephants and fruit flies differ by a few orders of magnitude in expectation, but perhaps not by five or six. Presumably, we should all want some approach, whatever it is, that avoids being mugged by whatever low-probability hypothesis posits the largest difference between humans and other animals.
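To make the "mugging" worry concrete, here's a toy expected-value calculation. The probabilities and ratios are invented purely for illustration (they're not from the Moral Weight Project or anywhere else):

```python
# Toy numbers only: a made-up probability mix over hypotheses like those
# sketched above, showing how a straight expectation gets dominated by the
# single hypothesis that posits the largest elephant/fruit-fly gap.

hypotheses = [
    # (probability, elephant-to-fruit-fly intensity ratio under that hypothesis)
    (0.2, 1e6),  # more neurons -> more compute -> far more intense states
    (0.4, 1.0),  # rough equality past a low sentience threshold
    (0.4, 0.1),  # more intelligence -> less need for intense states
]

assert abs(sum(p for p, _ in hypotheses) - 1.0) < 1e-9  # sanity check

expected_ratio = sum(p * r for p, r in hypotheses)
print(f"expected elephant:fly ratio = {expected_ratio:,.2f}")
```

Even at only 20% credence, the large-ratio hypothesis contributes essentially all of the expectation (200,000 out of roughly 200,000.44 here), which is exactly why an approach built on bare expectations can be "mugged" by whichever hypothesis posits the biggest gap.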
That said, you've raised some significant concerns about methods that aggregate over different relative scales of value. So, we’ll be sure to think more about the degree to which this is a problem for the work we’ve done—and, if it is, how much it would change the bottom line.
Nope, not assuming neartermism. The report has the details. Short version: across a range of decision theories, chickens look really good.
That said, I totally agree that from a purely conceptual perspective, we should "be more open-minded about how we should think of the different 'buckets' in a Worldview-Diversified portfolio, and cautious of completely dismissing common-sense priorities (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas)."
Admittedly, we weren't factoring in the (ostensible) ripple effects, but our modeling indicates that if we're interested in robust goodness, we should be spending on chickens.
Also, for the reasons that @Ariel Simnegar already notes, even if there are unappreciated benefits of investing in GHD, there would need to be a lot of those benefits to justify not spending on animals. Could work out that way, but I'd like to see the evidence. (When I investigated this myself, making the case seemed quite difficult.)
Hi Ramiro. No, we haven't collected the CURVE posts as an epub. At present, they're available on the Forum and in RP's Research Database. However, I'll mention your interest in this to the powers that be!
I agree with Ariel that OP should probably be spending more on animals (and I really appreciate all the work he's done to push this conversation forward). I don't know whether OP should allocate most neartermist funding to AW as I haven't looked into lots of the relevant issues. Most obviously, while the return curves for at least some human-focused neartermist options are probably pretty flat (just think of GiveDirectly), the curves for various sorts of animal spending may drop precipitously. Ariel may well be right that, even if so, the returns probably don't fall off so much that animal work loses to global health work, but I haven't investigated this myself. The upshot: I have no idea whether there are good ways of spending an additional $100M on animals right now. (That being said, I'd love to see more extensive investigation into field building for animals! If EA field building in general is cost-competitive with other causes, then I'd expect animal field building to look pretty good.)
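The flat-versus-steep contrast can be sketched with made-up return curves. The functional forms and constants below are purely illustrative, not real cost-effectiveness data:

```python
import math

# Hypothetical marginal-value curves (invented shapes): one stays nearly
# flat as spending scales; the other starts higher but drops off quickly.

def flat_returns(spend_millions):
    # e.g. a GiveDirectly-style option: marginal value barely declines with scale
    return 1.0 / (1.0 + 0.01 * math.log1p(spend_millions))

def steep_dropoff(spend_millions):
    # e.g. a niche intervention: high initial returns that fall off fast
    return 5.0 / (1.0 + spend_millions)

for spend in (1, 10, 100):
    print(spend, flat_returns(spend), steep_dropoff(spend))
```

At small scale the steep curve dominates, but by $100M it has fallen well below the flat one. Whether animal spending's actual curves drop that fast is precisely the empirical question I haven't investigated.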
I should also say that OP's commitment to worldview diversification complicates any conclusions about what OP should do from its own perspective. Even if it's true that a straightforward utilitarian analysis would favor spending a lot more on animals, it's pretty clear that some key stakeholders have deep reservations about straightforward utilitarian analyses. And because worldview diversification doesn't include a clear procedure for generating a specific allocation, it's hard to know what people who are committed to worldview diversification should do by their own lights.
Thanks for all this, Hamish. For what it's worth, I don't think we did a great job communicating the results of the Moral Weight Project.
Thanks for your question, Moritz. We distinguish between negative results and unknowns: the former are those where there's evidence of the absence of a trait; the latter are those where there's no evidence. We penalized species where there was evidence of the absence of a trait; we gave zero when there was no evidence. So, not having many negative results does produce higher welfare range estimates (or, if you prefer, it just reduces the gaps between the welfare range estimates).
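For concreteness, the distinction between negative results and unknowns could be scored like this (the traits, species, and values are hypothetical, not our actual model):

```python
# Hypothetical scoring sketch: evidence of absence ("absent") is penalized,
# whereas an unknown contributes nothing either way.

def trait_score(finding):
    if finding == "present":
        return 1
    if finding == "absent":   # negative result: evidence of absence
        return -1
    return 0                  # unknown: no evidence, so no penalty

species_findings = {
    "species_A": ["present", "absent", "unknown"],
    "species_B": ["present", "unknown", "unknown"],
}

scores = {name: sum(trait_score(f) for f in findings)
          for name, findings in species_findings.items()}
print(scores)
```

Here species_B comes out higher purely because it has fewer negative results, which is the effect described above: a lack of negative results lifts the estimate (or, equivalently, shrinks the gaps between estimates).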
Good question, Keyvan. This was pragmatic: our main goal was to make a point about welfare ranges, not p(sentience), so we wanted to discuss things that way in the key takeaways. But knowing people would want a single number per species to play with in models, we figured we should give people placeholders that are already adjusted.