
This post is in 6 parts, starting with some basic reflections on suffering and ethics, and ending with a brief project description. While it might seem overly broad-ranging, it’s meant to set out some basic arguments and explain the rationale for the project initiative in the last section, for which we are looking for support and collaboration. I go into much greater detail about some of the core ethical ideas in a new book about to be published, which I will present soon in a separate post. I also make several references here to Will MacAskill’s What We Owe the Future, because many of the ideas he expresses are widely shared among EAs; while I agree with much of what he says, there are some important stances I disagree with, which I explain in this post.

My overall motivation is a deep concern about the persistence of extreme suffering far into the future, and the possibility of taking productive steps now to reduce the likelihood of that happening, thereby increasing the likelihood that the future will be a flourishing one.

Summary:

  1. Suffering has an inherent call to action, and some suffering literally makes non-existence preferable.
  2. For various reasons, there are mixed attitudes within EA towards addressing suffering as a priority.
  3. We may not have the time to delay value lock-in for too long, and we already know some of the key principles.
  4. Increasing our efforts to prevent intense suffering in the short term may be important for preventing the lock-in of uncompassionate values.
  5. There’s an urgent need to research and promote mechanisms that can stabilise compassionate governance at the global level.
  6. OPIS is initiating research and film projects to widely communicate these ideas and concrete steps that can already be taken, and we are looking for support and collaboration.
     

1. Some reflections on suffering

  • Involuntary suffering is inherently bad – one could argue that this is ultimately what “bad” means – but extreme, unbearable suffering is especially bad, to the point that non-existence is literally a preferable option. At this level, people choose to end their lives if they can in order to escape the pain.
  • We probably cannot fully grasp what it’s like to experience extreme suffering unless we have experienced it ourselves. To get even an approximate sense of it requires engaging with accounts and depictions of it; otherwise, we may underestimate its significance and attribute much lower priority to it than it deserves. As an example, a patient I have supported, who suffers from a terrible condition called SUNCT and at one point attempted suicide, described in a presentation we recently gave together in Geneva the utter hell he experienced, and said that no one should ever have to go through what he did.
  • Intense suffering has an inherent call to action – we respond to it whenever we try to help people in severe pain, or animals being tortured on factory farms.
  • There is no equivalent inherent urgency to fill the void and bring new sentient beings into existence, even though this is an understandable desire of intelligent beings who already exist.
  • Intentionally bringing into existence a sentient being who will definitely experience extreme/unbearable suffering could be considered uncompassionate and even cruel.

I don’t think the above reflections should be particularly controversial. Even someone who would like to fill the universe with blissful beings might still concede that the project doesn’t have an inherent urgency – that is, that it could be delayed for some time, or even indefinitely, without harm to anyone (unless you believe, as do some EAs, that every instance of inanimate matter in space and time that isn’t being optimally used to create bliss isn't just a waste of resources but actually represents a morally compelling call to action). On the other hand, anyone screaming in agony, in the present or future, has an urgent need for their pain or suffering to be relieved.

Perhaps more controversial is determining how much suffering is actually “acceptable” against a background of otherwise happy or blissful sentient beings. The classical utilitarian solution is to posit a relative weighting of happiness and suffering, by which even the most horrible experiences are acceptable to create if there is enough additional bliss going around. I don’t believe that this comparative weighting is objectively justified, as I argue in detail in my upcoming book. For example (excuse the graphic nature, but this is just one of the many concrete, real-life scenarios in question here), I don’t think that a child being raped and killed in front of their parents is objectively justified by any number of sentient beings experiencing bliss, whether pill-induced, virtual-reality-triggered, digitally-generated or otherwise.
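To make the structure of this disagreement explicit, here is one minimal way to formalise it (the notation is my own illustration, taken neither from my book nor from MacAskill). Classical utilitarianism ranks outcomes by an aggregate welfare function with a fixed exchange rate $k > 0$ between units of happiness $h_i$ and units of suffering $s_j$:

$$W = \sum_i h_i - k \sum_j s_j$$

Under this aggregation, suffering of any total magnitude $S$, however extreme, is offset by bliss of magnitude at least $kS$. The lexical alternative I defend instead posits a threshold $s^*$ of unbearable suffering such that no finite sum of happiness outweighs even a single experience with $s_j > s^*$. The disagreement is thus not about the value of $k$ but about whether a single trade-off scale exists at all.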

In the introduction to What We Owe the Future, Will MacAskill urges us to imagine living through all the lives that have ever been lived. He refers to the wide range of positive and negative experiences one would have, and the description reads like a rollercoaster-style adventure. What he doesn’t explicitly mention is that countless lives along the way would contain the most brutal torture and unbearable suffering. Anyone re-experiencing these lives would scream for the experiment to end.

But I acknowledge that we have a desire to thrive and to see sentient life continue, and this strong intuition has to have a place in any realistic ethical framework. I would also add that even the philosopher Derek Parfit was apparently torn between his recognition of the significance of suffering and his desire for a flourishing future: “Some of our successors might live lives and create worlds that, though failing to justify past suffering, would have given us all, including those who suffered most, reasons to be glad that the Universe exists.”

Regardless of one’s precise ethical views, I think that most people would agree that the lower the amount of extreme suffering that occurs in the future, the better. And that there are scenarios that are clearly worse than non-existence.

2. Mixed attitudes towards suffering within EA

While the archetypal EA intervention is saving lives from malaria – a benchmark for cost-effectiveness – many cause areas involve relieving suffering, which is often explicitly mentioned as one of the goals of EA. Preventing malaria itself prevents both direct suffering from the disease and suffering experienced by those who lose a child, and the same can apply to other disease-related interventions. Some proposed interventions that are framed as improving human happiness or wellbeing, such as the Happier Lives Institute’s recommended group therapy for depression, are actually directly about alleviating suffering. Animal welfare and ending factory farming have been key cause areas within EA since early on. And wild animal suffering and possible interventions to reduce it, which could be viewed as radical among much of the general public, are considered a legitimate cause area within EA, and are even taken more seriously by some prominent EAs than by many animal rights or vegan activist groups.

On the other hand, direct relief of pain and suffering in humans remains a neglected cause area within EA – perhaps in part because the obstacles are often legal or regulatory, and the path to success is often uncertain and can be difficult to demonstrate. Reducing even extreme suffering also cannot be directly compared with saving lives without making questionable assumptions about a common metric of value, which complicates cost-effectiveness analyses. And with the same resources, potentially more non-human animal suffering can be prevented – which on the one hand reflects essential anti-speciesist thinking, but which also leaves important human-related cause areas under-considered.

Furthermore, the dominance of x-risk prevention and AI safety as perhaps the highest-profile cause areas within EA has arguably led to a sidelining of direct concern about suffering in both the present and the long term. This is despite the obvious fact that no one wishes for a future filled with suffering – indeed, Will paints a utopian vision of a future in which life for everyone is better than the very best lives today. While risks of extreme suffering on an astronomical scale (see for example Tobias Baumann’s new book on s-risks) are more readily recognised as important to avoid, smaller-scale risks are more easily viewed as acceptable, even though the awfulness and inherent urgency of the experienced suffering is the same. If we are thinking about how to optimise the long-term future at potentially cosmic scales, then we could presumably be more ambitious than just trying to reduce s-risks, and aim to prevent any extreme suffering from occurring, to the extent that this is possible.

Longtermism has been criticised by some for its shift in emphasis away from those in need in the present. If that shift could be expected to result in less suffering overall, it could much more easily be justified. But much of the focus is on our survival as a species, and less on preventing future suffering. This suggests a possible imbalance in priorities, one that could make large-scale suffering more likely because fewer resources are spent on trying to prevent it.

3. We may need compassionate value lock-in sooner rather than later

Many x-risk events would cause widespread suffering, whether or not they would wipe out humanity. And no one wants to die in a catastrophe. So preventing x-risks is itself compatible with preventing short- and medium-term suffering, along with respecting the intuitions and preferences of humans alive today.

But if extinction is avoided, one notable way that extreme suffering could persist far into the future is through the lock-in of an uncompassionate totalitarian system – not necessarily one controlled purely by AI, but one employing AI for this purpose. It’s entirely plausible, for example, to imagine Russia or China, or even the US if events took a turn for the worse, entrenching totalitarianism while instrumentalising AI to do so. Promoting both principles and concrete mechanisms for entrenching compassionate governance and global cooperation therefore seems essential. While the scale and enormous complexity of the challenge are obvious, I don’t see how we can secure a flourishing future without trying to tackle it, using creative approaches.

I believe there is little time to lose. Will has argued that it would be better to wait until we have reflected longer – even for many centuries – to make sure that we get the ethics right, noting, for example, that value lock-in a century ago would have gotten many things wrong. He writes, “you might conclude that we should aim to lock in the values we, today, think are right, thereby preventing dystopia via the lock-in of worse values. But that would be a mistake. While the lock-in of Nazism and Stalinism would have been nightmarish, the lock-in of the values of any time or place would be terrible in many respects.” He also argues that “the attempt to lock in values through AGI would run a grave risk of an irrecoverable loss of control to the AGI systems themselves,” whereas “transparently removing the risk of value lock-in altogether” has the benefit that “by assuring everyone that this outcome is off the table, we remove the pressure to get there first—thus preventing a race in which the contestants skimp on precautions against AGI takeover or resort to military force to stay ahead.”

But given the state of the world and the threats we face, I don’t think we can afford to wait a few hundred years to further refine our ethical thinking before a possible value lock-in occurs. Lock-in could occur much sooner, and even a partial lock-in could be difficult to escape. By the time we have achieved a greater consensus and settled on a precise set of values and principles, it might be too late. Furthermore, how do we avoid lock-in while ensuring compassionate governance? Wouldn’t we want that kind of lock-in? And if an irrecoverable loss of control might happen anyway, we need to ensure we have programmed in the values in advance (provided, of course, that this is technically possible).

The “right” kind of values aren’t necessarily that difficult to formulate, and we already know some of the key principles. We know that intense and especially extreme suffering is terrible and needs to be avoided wherever possible, no matter who or what is experiencing it. We know that people have physical and emotional needs to be fulfilled, and that diverse, blissful experiences make life feel meaningful and worthwhile. We also know that causing or concentrating harm, even for utilitarian reasons, runs up against strong moral intuitions. And we know that cooperation rather than confrontation or excess competition tends to be the best way of ensuring everyone's wellbeing. This doesn’t mean there is an objectively correct, non-arbitrary process for making and carrying out decisions. But the core ethical principles already seem robust enough that we would risk little by trying harder, starting now, to entrench them.

Will writes that “there are so many ethical questions to which we know we haven’t yet figured out the answer. Which beings have moral status: just Homo sapiens, or all primates, or all conscious creatures, including artificial beings that we might create in the future?” I admit I find this question puzzling – at least from a suffering-focused perspective. It seems clear to me that any sentient being capable of suffering – including an artificial one – deserves moral concern. Will himself talks about the possibility of a digital civilisation; surely the beings who compose it must be protected from suffering too?

He also mentions that “the Golden Rule, if true at all, is true across all times and places. Promotion of that principle would stay relevant and, if true, have robustly positive effects into the indefinite future. ... This suggests that, as longtermists, when trying to improve society’s values, we should focus on promoting more abstract or general moral principles or, when promoting particular moral actions, tie them into a more general worldview. This helps ensure that these moral changes stay relevant and robustly positive into the future.” I believe that the Golden Rule is, in fact, a very strong approximation of what we would ideally be aiming for, provided it explicitly applies to all sentient beings and prioritises actions by degree of urgency, including how urgently we would want to be rescued if we ourselves were being tortured or experiencing another form of extreme suffering.

4. Preventing intense suffering now may positively influence value lock-in

It seems reasonable to me that one of the important ways to lock in compassionate values is to start implementing them now so as to normalise them. There are few direct causes of suffering whose alleviation isn’t technically within our near-term reach. These include better access to effective pain medications, more effective societal support mechanisms to ensure that people’s needs are met, and an end to the abuse and torture of animals. Wild animal suffering is the big exception – the elephant in the forest, so to speak. But there are already ways to help some wild animals, and if we take the issue seriously, we may be able to address it more comprehensively in the medium-to-long term. Interventionism in nature is controversial; it can be risky and shouldn’t be rushed. But in principle, if one is counting on a future filled with galaxies’ worth of digital/artificial sentient beings, one can hardly object to helping the biological beings still being born on our planet avoid unnecessary suffering.

If there is eventually a lock-in of values through an AGI designed to align itself with human values, then how we treat humans and non-humans today might have a monumental effect on the values it learns. And if society’s actions to improve the world are perceived as being future- rather than present-oriented, relieving present suffering may appear to be deprioritised as a value. I’m not saying that this is the most likely scenario. But to the extent that an AGI will have learned our values from our behaviours, it is essential that society’s behaviours be aligned with our ideal values – most importantly, in how we respond to sentient beings in agony.

5. Global coordination

While object-level interventions can help create a model on which the future could be based, preventing large-scale suffering far into the future requires that our global governance mechanisms embody these values and be designed for long-term stability. Whether or not governance is ultimately executed by an AGI, this will require both value spreading and large-scale coordination in the present. Even if there is an eventual AGI takeover, global coordination will be necessary to reduce x-risks until it happens; and if there is no such takeover, coordination will be essential for a long-term solution. The coordination problem, even if potentially solvable, may be extremely complicated, as explained by social philosopher and The Consilience Project co-founder Daniel Schmachtenberger in various online interviews (e.g. In Search of the Third Attractor, part 1 and part 2). Decentralisation makes catastrophes more likely, while highly centralised power can easily become dystopian. We need to solve the problem of multipolar traps that lead to arms races, large-scale tragedies of the commons and other catastrophic risks, without depending on a centralised dictatorship or a government that isn’t ultimately controlled by special interests. The strategy, in his words, “has to make some kinds of destructive game theory obsolete, while winning at some other kinds of game theory.”
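As a toy illustration of the multipolar-trap structure Schmachtenberger describes – a standard two-player arms-race game, with illustrative payoff numbers of my own rather than anything from his interviews – consider two states that can either maintain costly safety precautions (“cooperate”) or cut them to get ahead (“race”):

```python
# Toy arms-race payoff matrix (illustrative numbers only).
# Each entry maps (A's choice, B's choice) to (A's payoff, B's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # both keep precautions: best shared outcome
    ("cooperate", "race"):      (0, 4),  # the racer gains a decisive edge
    ("race",      "cooperate"): (4, 0),
    ("race",      "race"):      (1, 1),  # both cut corners: worst shared outcome
}

# Racing strictly dominates for each player (4 > 3 against a cooperator,
# 1 > 0 against a racer), so unilateral incentives push both toward
# (race, race), even though (cooperate, cooperate) is better for everyone.
for a in ("cooperate", "race"):
    for b in ("cooperate", "race"):
        print(f"A: {a:9}  B: {b:9}  ->  payoffs {payoffs[(a, b)]}")
```

A stable solution has to change the payoffs themselves – through verification, enforcement or shared infrastructure – so that cooperation becomes the individually dominant choice; this is one way of reading the quote above about making destructive game theory “obsolete”.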

If an AGI really does take over, then I believe we need it to embody all the characteristics of the most compassionate benevolent dictator, so that it strives to eradicate extreme suffering without posing a threat to humans or unduly constraining their liberties. (Whether it can truly be benevolent is another question; Schmachtenberger describes this idea as messianic.) But even if an AGI doesn’t actually take over, we still need to find a way to design a multipolar system in which all players are stably incentivised to cooperate and malign urges are thwarted.

6. OPIS and projects to help embed compassion in governance

This brings me to the last section, which is about a set of planned OPIS projects that may help further the above aims. I am just presenting the general idea here, but I look forward to discussing details with anyone who is inspired by it.

Our long-term goal since our founding has been to promote a compassionate ethics that prioritises the prevention of intense suffering of all sentient beings. Until now we’ve mostly focused on projects to help ensure that people in severe pain can get access to effective medications, which has meant advocating for better access to morphine in lower-income countries (ref 1, ref 2, ref 3) and communicating the dramatic effectiveness of certain psychedelics for treating horrible conditions like cluster headaches (ref 4, ref 5). These are relatively narrow cause areas we have understandably become associated with. But we think that we can have far more impact in the long term by addressing the very principles of governance, ensuring that all significant causes of intense suffering receive adequate attention, and also promoting strategies to prevent locked-in totalitarianism. These may appear to be only distantly related cause areas, but I think they are closer to one another than they appear, because they can be addressed by invoking a common though frequently neglected underlying principle and strategy: explicitly addressing people’s needs. I think that this approach, which is a core principle of conflict resolution, is also key to long-term solutions for global governance.

The goals of the projects are two-fold:

  1. Promote a concrete vision of what the world could look like in the not-too-distant future if we adopted a more comprehensive approach to governance and meeting needs, especially the prevention of intense suffering.
  2. Promote some of the best current ideas available for how this could come about, and provide concrete steps that people and organisations can take.

Some but not all of the ideas will come from the knowledge base and experience of the EA community, and they will be researched, solicited and packaged as a report with concrete recommendations. An essential element of this project is a full-length film to set out the vision, inspire people with it, and explain steps people can take. We will promote the film creatively to try to reach a large worldwide audience.

We are looking for support from within the EA community and beyond, in the form of both donations and people willing to devote some significant time on a regular basis to working with us. It is probably reasonable to support us if:

  1. you think that suffering really matters and generally agree with the ideas presented in this post;
  2. you see a need for ambitious, creative communication projects to promote the vision of a world that aims to phase out intense suffering;
  3. you agree that there are concrete steps individuals, organisations and governments can take to bring us closer to this vision; and
  4. you agree that there’s a reasonable chance that we will end up doing something interesting and especially impactful with this project, even if it is difficult to provide an accurate quantitative estimate.

Will wrote that the “British antislavery movement was a historical accident, a contingent event”. It’s possible that a worldwide movement for compassionate governance could also represent a contingent event, and that we can play a role in promoting it.

Critical comments on all of the above are, of course, welcome. But I am especially interested in inspired, constructive ideas about how we can take these projects forward with maximum impact. I encourage anyone who would like to get involved to contact me directly.
 

Many thanks to Marieke de Visscher, Alex “Nil” Shchelov, Manu Herrán, Robert Daoust, Jean-Christophe Lurenbaum, Sorin Ionescu and Nell Watson for providing feedback on the draft.
 

Comments

What do you mean by "compassionate"?

The definition I use is caring about suffering – others' and also one's own – and being motivated to prevent or alleviate it.

Post summary (feel free to suggest edits!):
Some suffering is bad enough that non-existence is preferable. The lock-in of uncompassionate systems (e.g. through AI or AI-assisted governments) could cause mass suffering in the future.

OPIS (Organisation for the Prevention of Intense Suffering) has until now worked on projects to help ensure that people in severe pain can get access to effective medications. In future, they plan to “address the very principles of governance, ensure that all significant causes of intense suffering receive adequate attention, and promote strategies to prevent locked-in totalitarianism”. One concrete project within this is a full-length film to inspire people with this vision and lay out actionable steps. They’re looking for support in the form of donations and / or time.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Thanks Jon. I agree on all fronts. Looking forward to reading your book.

In addition to normalisation, and any "lock-in" being based on sentiocentric, compassionate values, would baking in a broadly naturalistic epistemology also be desirable?

I describe the Sentientism worldview as "evidence, reason and compassion for all sentient beings" in part because I don't think compassion alone is sufficient.

Thanks, Jamie. Yes, I entirely agree, assuming of course that this epistemology encompasses subjective experience. In other places I consistently refer to the combination of compassion and rationality as core values. In fact, one could argue that compassion is a consequence of rationality if one takes into account the content of all current and potential subjective experiences/mind states as the most relevant part of reality to act upon, and one also takes a metaphysically accurate view of personal identity. In this post I didn't focus on rationality because it is already a strong given within the EA community (although I dispute the rationality of some widely held principles), whereas concern for suffering is more variable.
