
By longtermism I mean "the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future."

I want to clarify my thinking about longtermism as an idea, and to better understand why some aspects of how it is used within EA make me uncomfortable despite my general support for the idea.

I'm doing a literature search, but because this is primarily a concept I know from within EA, I'm mostly familiar with work by advocates of the position (e.g. Nick Beckstead). I'd also like to understand what the leading challenges and critiques of this position are (if any). I know of some from within the EA community (Kaufmann), but not what the state of the debate is in academic work or outside the EA community.

Thanks!


“The Epistemic Challenge to Longtermism” by Christian Tarsney is perhaps my favorite paper on the topic.

Longtermism holds that what we ought to do is mainly determined by effects on the far future. A natural objection is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.

“How the Simulation Argument Dampens Future Fanaticism” by Brian Tomasik has also influenced my thinking, but it has a narrower focus.

Some effective altruists assume that most of the expected impact of our actions comes from how we influence the very long-term future of Earth-originating intelligence over the coming ~billions of years. According to this view, helping humans and animals in the short term matters, but it mainly only matters via effects on far-future outcomes.

There are a number of heuristic reasons to be skeptical of the view that the far future astronomically dominates the short term. This piece zooms in on what I see as perhaps the strongest concrete (rather than heuristic) argument why short-term impacts may matter a lot more than is naively assumed. In particular, there's a non-trivial chance that most of the copies of ourselves are instantiated in relatively short-lived simulations run by superintelligent civilizations, and if so, when we act to help others in the short run, our good deeds are duplicated many times over. Notably, this reasoning dramatically upshifts the relative importance of short-term helping even if there's only a small chance that Nick Bostrom's basic simulation argument is correct.

My thesis doesn't prove that short-term helping is more important than targeting the far future, and indeed, a plausible rough calculation suggests that targeting the far future is still several orders of magnitude more important. But my argument does leave open uncertainty regarding the short-term-vs.-far-future question and highlights the value of further research on this matter.

Finally, you can also conceive of yourself as one instantiation of a decision algorithm that probably has close analogs at different points throughout time, which makes Caspar Oesterheld’s work relevant to the topic. There are a few summaries linked from that page. I think it’s an extremely important contribution but a bit tangential to your question.

My essay on consequentialist cluelessness is also about this: What consequences?

Thanks! The top paper in particular seems very relevant.

This is not exactly what you're looking for, but the best summary of objections I'm aware of is from the Strong Longtermism paper by Greaves and MacAskill.

Thanks - I’ve read the summaries of this but hadn’t twigged that it had been developed into a full paper.

Most people don't value not-yet-existing people as much as people who are already alive. I think it is the EA community that holds the fringe position here, not the other way around. Nor is total utilitarianism a majority view among philosophers. (You might want to look into critiques of utilitarianism.)

If you pair this value judgement with the belief that, for affecting people this century, existential risk is less valuable to work on than other issues, you will probably want to work on "non-longtermist" problems.

I don't think longtermism depends on either (i) valuing future people equally to presently alive people or (ii) total utilitarianism (or utilitarianism in general), so I don't think these are great counterarguments unless further fleshed out. Instead it depends on something much more general like 'whatever is of value, there could be a lot more of it in the future'.

[Not primarily a criticism of your comment; I think you probably agree with a lot of what I say here.]

Instead it depends on something much more general like 'whatever is of value, there could be a lot more of it in the future'.

Yes, but in addition your view in normative ethics needs to have suitable features, such as:

  • A sufficiently aggregative axiology. Else the belief that there will be much more of all kinds of stuff in the future won't imply that the overall goodness of the world mostly hinges on its long-term future. For example, if you think total value is a bounded function of whatever the sources of value are (e.g. more happy people are good up to a total of 10 people, but additional people add nothing), longtermism may not go through.
  • [Only for 'deontic longtermism':] A sufficiently prominent role of beneficence, i.e. 'doing what has the best axiological consequences', in the normative principles that determine what you ought to do. For example, if you think that keeping some implicit social contract with people in your country trumps beneficence, longtermism may not go through.

(Examples are to illustrate the point, not to suggest they are...)

Denise_Melchin
That's very fair, I should have been a lot more specific in my original comment. I have been a bit disappointed that within EA longtermism is so often framed in utilitarian terms. I have found the collection of moral arguments in favour of protecting the long-term future presented in The Precipice a lot more compelling, and I wish they came up more frequently.
Benjamin_Todd
I agree!
Max_Daniel
I also like the arguments in The Precipice. But per my above comment, I'm not sure if they are arguments for longtermism, strictly speaking. As far as I recall, The Precipice argues for something like "preventing existential risk is among our most important moral concerns". This is consistent with, but neither implied nor required by longtermism: if you e.g. thought that there are 10 other moral concerns of similar weight, and you choose to mostly focus on those, I don't think your view is 'longtermist' even in the weak sense. This is similar to how someone who thinks that protecting the environment is somewhat important but doesn't focus on this concern would not be called an environmentalist.
Benjamin_Todd
Yes, I agree with that too - see my comments later in the thread. I think it would be great to be clearer that the arguments for xrisk and longtermism are separate (and neither depends on utilitarianism).
Comments (15)

Not academic or outside of EA, but this Forum comment and this Facebook post may be good starting points if you haven't seen them already.

As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk as well as responses to those arguments. The post itself is currently at 26,000 words and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process but I now think that is good because there have been several new and important arguments that have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.

Judging from the comment, I expect the post to be a very valuable summary of existing arguments against longtermism, and am looking forward to reading it. One request: as Jesse Clifton notes, some of the arguments you list apply only to x-risk (a narrower focus than longtermism), and some apply only to AI risk (a narrower focus than x-risk). It would be great if your post could highlight the scope of each argument.

Strongly agree - I think it's really important to disentangle longtermism from existential risk from AI safety. I might suggest writing separate posts.

I'd also be keen to see more focus on which arguments seem best, rather than having such a long list (including many that have a strong counter, or are no longer supported by the people who first suggested them), though I appreciate that might take longer to write. A quick fix would be to link to counterarguments where they exist.

Thanks Pablo and Ben. I already have tags below each argument for what I think it is arguing against. I do not plan on doing two separate posts as there are some arguments that are against longtermism and against the longtermist case for working to reduce existential risk. Each argument and its response are presented comprehensively, so the amount of space dedicated to each is based mostly on the amount of existing literature. And as noted in my comment above, I am excerpting responses to the arguments presented.

FWIW I'd still favour two posts (or if you were only going to one, focusing on longtermism). I took a quick look at the original list, and I think they divide up pretty well, so you wouldn't end up with many reasons that should appear on both lists. I also think it would be fine to have some arguments appear on both lists.

In general, I think conflating the case for existential risk with the case for longtermism has caused a lot of confusion, and it's really worth pushing against.

For instance, many arguments that undermine the case for working on existential risk actually imply we should focus on (i) investing and capacity building, (ii) global priorities research, or (iii) other ways to improve the future, but instead get understood as arguments for working on global health.

Thanks Ben. There is actually at least one argument in the draft for each alternative you named. To be honest, I don't think you can get a good sense of my 26,000 word draft from my 570 word comment from two years ago. I'll send you my draft when I'm done, but until then, I don't think it's productive for us to go back and forth like this.

Any updates on how this post is going? I'm really curious to see a draft!

While I have made substantial progress on the draft, it is still not ready to be circulated for feedback.

I have shared the draft with Aaron Gertler to show that it is a genuine work in progress.

I've completed my draft (now at 47,620 words)! 

I've shared it via the EA Forum share feature with a number of GPI, FHI, and CLR people who have EA Forum accounts.

I'm sharing it in stages to limit the number of people who have to point out the same issue to me.

That sounds fantastic. I'd love to read the draft once it is circulated for feedback.

Hmm, I remember seeing a criticism somewhere in the EA-sphere that went something like:

"The term "longtermism" is misleading because in practice "longtermism" means "concern over short AI timelines", and in fact many "longtermists" are concerned with events on a much shorter time scale than the rest of EA."

I thought that was a surprising and interesting argument, though I don't recall who initially made it. Does anyone remember?

This sounds like a misunderstanding to me. Longtermists concerned with short AI timelines are concerned with them because of AI's long lasting influence into the far future.
