
(Note: this is not about recent events. At least, not directly.)

Introduction and summary

I recently read two interesting articles that identify problems that afflict Effective Altruism (EA) when it works at the kind of large scales that it is commonly intended (and probably obliged) to consider. Those problems are (a) a potentially paralysing “cluelessness” resulting from uncertainty about outcomes and (b) a “crazy train” to absurd beliefs from which it seems impossible to escape. I argue below that mental models or habits developed by conservatives provide workable answers to these two problems. I would also suggest that this is indicative of a wider point, namely that EA can benefit from the accumulated wisdom of conservatism, which has experience of operating over the kinds of scale and time periods with which EA is now concerned, not least if EA-ers want to achieve or maintain widespread acceptability for their beliefs and practices. 

Tyler Cowen has recently recommended that EA-ers should exercise more social conservatism in their private lives. No doubt Cowen is right, but that is not my concern here. I am concerned in this article only with the professional lives of EA-ers.

While I think, as many people do, that EA should spend more time just getting on with making the world better and less time considering science fiction scenarios and philosophical thought experiments, I make no apology for adding to the ‘theology’ rather than the practice of EA in this article. (If you want my suggestion for a practical project then consider the domestication of the zebra here.) One way or another, EA is getting publicity and with that comes scrutiny: my argument is intended to help EA survive that scrutiny (and reduce the potential for ridicule) by giving it some tried and tested techniques for converting good intentions into action.

The problems

The two articles I am concerned with are Peter McLaughlin’s article “Getting on a different train: can Effective Altruism avoid collapsing into absurdity?” and Hilary Greaves’ paper “Cluelessness”. I suggest that you read both of them if you are at all interested in these topics: they make their arguments in good and interesting ways, and both touch on areas outside the scope of this essay. For present purposes, a brief summary of the concerns raised by each will suffice.

“Cluelessness” identifies a particular worry that people engaged in EA face when wondering whether a particular project under consideration will result in net good or net evil. Greaves suggests that the EA practitioner will be “clueless”, in an important sense, when facing (to take one of the author’s examples) “the arguments for and against the claim that the direct health benefits of effective altruists’ interventions in the end outweigh any disadvantages that accrue via diminished political activity on the part of citizens in recipient countries”. There are various consequences – perfectly foreseeable consequences – that could follow from the various alternatives before us and we just don’t know what those consequences will be: there are good and plausible arguments for all outcomes. But the kinds of big decisions that EA seems to entail (and tends to involve) are ones that typically face uncertainty of this kind. Are we “clueless” as to what to do?

“Getting on a different train” looks at the kinds of problems that EA faces when dealing with problems of very large scale. One aspect of EA that is appealing to many people, whatever their moral beliefs, is that it opens up the possibility of doing a lot of good. You can save thousands of lives! Whatever your particular religious or moral beliefs, you are likely to think that’s a good thing to do. But if saving thousands of lives is better than saving hundreds of lives (which it is) then surely saving millions or billions of lives will be even better? Which means that, if we are being serious about doing good, we need to think about very large numbers and the very long term. But once you start thinking in those terms then you quickly find yourself on the ‘train to crazy town’, i.e. endorsing bizarre or repugnant moral conclusions. (You know the sort of thing: you are morally compelled to torture someone to death if there is a small but non-zero chance that this would do some enormous good for enormous numbers of people as yet unborn.) But if you try to escape the train to crazy town then what principles do you have? Don’t you have to abandon the project of doing good for large numbers of people? Aren’t you forced to discount some people, to reintroduce myopias or biases you tried so hard to shed?

Why these are problems of scale

The two articles, which I will call “Cluelessness” and “Crazy Train”, both take the approach that we do not need to pin down our theory of morality very precisely: we can simply agree that there are better and worse states of affairs and that it is generally morally preferable to bring about the better states of affairs. Greaves says that our decisions “should be guided at least in part by considerations of the consequences that would result from the various available actions”, while McLaughlin refers to “broadly consequentialist-style” reasoning. We can probably simply think in terms of utilitarianism.

At a small scale, pure “all consequences matter” utilitarianism is simply not an intuitive moral approach. The reason for that might be summarised as follows: we have an intuitive sense of the consequences of our actions for which we are responsible and those for which we are not responsible. Doctors don’t have to consider whether it would be better, all things considered, if their patients recover or if their organs instead were used for donations: in fact, they should not even ask themselves that question. Their job – their responsibility – is simply to make their patients better. Lawyers should not try to work out whether it would be in the best interests of everyone if they ‘threw the case’ and let their clients get convicted. That’s how common-sense morality works, and that’s how common-sense morality is (generally) codified in law.

Or let’s take an even more homely example. If you have just bought an ice cream on a hot summer’s day then you don’t have to look around at all the people within “melting distance” of the ice cream vendor and work out which would get the most pleasure from the ice cream: just give it to your child, who is your responsibility, and let everyone else buy their own ice creams. Only weirdos are utilitarian in their private lives. So much the better for weirdos, you might say. But if EA wants to gain wider approval then I suggest that it steers away from encouraging people to give ice creams to strangers' children.

None of these examples is a knock-down argument against utilitarian or consequentialist, maximising-type thinking. My point is simply to say that common sense morality finds such thinking repugnant at this scale. (And this is true even if you try to ‘reverse-engineer’ common sense morality via utilitarian-type chains of reasoning: Bernard Williams famously suggested that the man who is faced with the choice of saving his drowning wife or a stranger is not only justified in saving his wife, but should do so with no thought more sophisticated than “that’s my wife!” because thinking, for example, “that’s my wife and in such situations it’s permissible to save one’s wife” would be one thought too many. The man who is presented with his drowning wife and starts thinking “it is optimal for society overall if individuals are able to make binding, life-long commitments to each other and such commitments can be credible only if …” will end up having about thirty-seven thoughts too many.)

But when we change the scale then our intuitions also change. If the average person were forced to govern a small country then they would almost inevitably approach the questions that arise by applying broadly utilitarian thinking. Should the new hospital go here or there? Should taxes on X be raised or taxes on Y lowered? People automatically reach for some kind of consequentialist, utilitarian calculus to answer these questions: they look to familiar proxies for utility (GDP, quality-adjusted life years, life expectancy, satisfaction surveys and so on). Common sense morality might suggest a couple of non-utilitarian considerations (would the glory of the nation be best served by a new museum? What if my predecessor has promised that the hospital should go here even though it would save more lives if it went there?) but, even so, no one would be shocked to subject even these kinds of consideration to utilitarian scrutiny, certainly not in the way that they are shocked when considering (for example) whether doctors should slaughter innocent hospital visitors in order to provide healthy organs for others.

I would suggest that the difference derives from the same underlying sense of responsibility. The doctor, parent or lawyer can, quite properly, say “I am only responsible for outcomes of a certain kind, or for the welfare of certain people. The fact that my actions might cause worse outcomes on other metrics or harm to other people is simply not my responsibility.” But the government of a country cannot say that. If our notional ruler decides to place a hospital here, in a vibrant city, because it would be easier to recruit and retain medical staff than if the hospital were there, s/he cannot reply, when pressed with the problem that the roads here are congested and too many people will die in ambulances before they reach the hospital, “not my problem – I was only looking at the recruitment side of things”. The ruler is responsible for everything – all the consequences – and can’t wash their hands of some subset of them. (I leave aside the case of foreigners: the ruler is responsible for their own country, not its neighbours.)

In Crazy Train, McLaughlin quotes a suggestion from Tyler Cowen that utilitarianism is only good as a “mid-scale” theory, i.e. the small country scale I have described, the scale between, on the one hand, the small, personal level of doctors, lawyers and ice cream vans and, on the other, the mega-scale beloved of these kinds of theoretical discussions, consisting of trillions of future humans spread across the galaxy. Samuel Hammond makes a similar point here. Perhaps that’s right, perhaps not. But the fact is that EA is interested in operating at this mid-level; we are talking about the kinds of things that states do: public health projects, for example, or criminal justice reform, or just plain crime reduction. That means that EA needs to deal with cluelessness and craziness at this kind of scale.

Solutions

As I said, I think taking a leaf out of conservatives’ book can help here. What do I mean? Let’s think again about running a small country.

In broad terms, the left-wing view of the politics of running a country is that it consists of two parts: (1) raising money through taxation and (2) spending money on government programmes. The taxation part is a mixture of good and bad: there are some taxes that are bad (ones paid by poor people or ones that cause resentment) but there are some that are good (ones that reduce inequality, say, or ones that penalise anti-social activities like polluting or smoking). Overall, the taxing side of things might be about neutral. But the spending side is positive: each time the government spends money on X then it helps X. 

That’s a simple and pleasant model to believe in. It leaves room for some tricky decisions (should high earners be taxed more or should smokers? Should more money go on welfare or education?), but the universe of those decisions is constrained and familiar.

The conservative model is more complicated. First, and most familiar to left-wingers, the taxing side is more fraught; taxation might best be summarised as a necessary evil that should be minimised. But more significantly for present purposes, the spending side is also fraught: it is by no means obvious to right-wingers that when the government spends money on X then it helps X. It might instead be making X worse off, if not now then in the long run, or – and even more confusingly – it might be making Y worse off.

I think that left-wingers often see these kinds of right-wing objections to government programmes as being objections made in bad faith: for example, “you don’t want to see welfare payments to single mothers because, really, you just don’t like single mothers”. But the “cluelessness” that Greaves refers to might, I hope, explain why this is not so. Government spending on a social evil in some sense ‘rewards’ that evil and might encourage more of it; or it might (to take Greaves’ own example) diminish the ability of people to solve the problem themselves. 

That means that conservatives are perpetually “clueless” in just the way Greaves describes. They can see both benefits and dangers from spending government-sized amounts of money on government-style programmes. And much of EA, at least to the conservative, looks like government-style programmes - specifically the kind of foreign aid or international development projects on which Western governments have lavished billions over many years and about which conservatives are typically very sceptical. Why, the conservative asks, should we believe that EA practitioners will achieve outcomes so much better than those of the myriad of well-meaning aid workers who have come before? Are the 'Bright Young Things' of EA really so much cleverer, or better-intentioned, or knowledgeable than, say, the Scandinavian or Canadian international development establishments?

Yet conservatives do support some government spending programmes. Indeed, a notable feature of the intellectual development of the Right in English-speaking countries since c.2008 is its increased willingness to support government spending programmes. How do they do it?

In short: by exercising the old and well-established virtues of prudence, good judgment and statesmanship. 

I’m afraid that sounds vague and unhelpful, and a far cry from the kind of quantitative, data-driven, rapidly scalable maximising decision-making processes that EA practitioners would like. But it’s true. These virtues are the best tools that humans have yet found for navigating the cluelessness inherent in making big decisions that affect the future. If you are not cultivating and endorsing these virtues then you are not thinking seriously about how to run something resembling a government-sized operation.

Now let’s turn to Crazy Train. What contribution can people as notoriously averse to strict rules and principles as conservatives make to the problem of drawing theoretical distinctions? Quite a useful one, I would suggest: prioritising the avoidance of negative outcomes. 

The virtue of statesmanship is perhaps best exemplified in practice by the likes of Abraham Lincoln or Winston Churchill. Each left office with his country in a bad way. Churchill lost the 1945 election, at which time the country he led was still at war, impoverished and bombed. Lincoln was assassinated at a time when the country he led was devastated by civil war. Neither of them has the obvious record of a utility-maximiser. Yet they are renowned because their actions contributed to the avoidance of far greater evils.

There are other examples too. Nelson Mandela’s great achievement was not bringing majority rule to South Africa: that was FW De Klerk’s achievement. Mandela’s achievement was to ensure that the process was peaceful – it was his magnanimity in victory, his avoidance of vindictiveness and violence, that earned him his plaudits. Pitt the Younger was lauded as the ‘Pilot that weathered the storm’, the man who, although he died with Napoleonic France not yet defeated, had ensured that England would not fall to the threat it presented.

I would suggest that this asymmetry between the value we ascribe to achieving positive outcomes and the value we ascribe to avoiding negative ones is a feature, not a bug. It’s how we found the greatest heroes of the large-scale social projects we have yet seen, and it’s our best chance of finding more such heroes in the future.

Or let me put it another way. Perhaps, as Toby Ord has suggested, we are walking along the edge of a precipice which, if badly traversed, will lead to disaster for humankind. What kind of approach is the right one to take to carrying out such an endeavour? Surely there is only one answer: a conservative approach. One that prioritises good judgment, caution and prudence; one that values avoiding negative outcomes well above achieving positive ones. Moreover, not only would such an approach be sensible in its own terms, but it would also help EA to acquire the kind of popular support that would help it achieve its outcomes.

We have to be realistic here. EA likes to talk about helping all of the sentient beings that will ever exist, but it’s a human institution, likely to fall far short of its aims and fail in the ways that other human institutions have done. But that is no reason to be downhearted: with statesmanship, cautious good judgment and a keen aversion to negative outcomes, a lot of good can yet be done. If EA were to vanish from the face of the Earth having done no more than avoided humankind being eliminated from existence in the next 100 years then it would earn the gratitude of billions and rank among our greatest achievements. A deeply conservative achievement of that kind would be truly admirable. Achieving it would be effective altruism of the best kind.

Comments

Tyler Cowen has recently recommended that EA-ers should exercise more social conservatism in their private lives. No doubt Cowen is right

Er, I doubt this! And "how should EAs conduct their private lives?" doesn't really seem like my business, and is the sort of question that strikes me as easy to get wrong. So I'd want to believe in a pretty large effect size here, with a lot of confidence (and ideally some supporting argument!), before I started asserting this as obvious.

(Raising it as a question to think about is another matter.)

While I think, as many people do, that EA should spend more time just getting on with making the world better and less time considering science fiction scenarios and philosophical thought experiments

This is a weird sentence to my eyes. It reads to me like "Of course we all believe that EAs should spend more time just getting on with making the world better, and less time thinking about Hollywood movie scenarios like 'pandemics' or adding numbers together."

Pandemics don't parse to me as a silly Hollywood thing, and if you disagree, I'd much rather hear specifics than just an appeal to fictional evidence ("Hollywood says p, so p is low-probability"). And I could imagine that people are doing too much adding-numbers-together or running-thought-experiments if they enjoy it or whatever, but if you think the very idea of adding numbers together is silly (/ the very idea of imagining scenarios in order to think about them is silly), then I worry that we have very deep disagreements just below the surface. Again, specifics would help here.

Doctors don’t have to consider whether it would be better, all things considered, if their patients recover or if their organs instead were used for donations: in fact, they should not even ask themselves that question.

I think there are three different things going on here, which are important to distinguish:

  1. Many doctors think of making the future go well overall as "not their responsibility", such that even if there were a completely clear-cut way to save thousands of lives at no cost to anyone but the doctor, the doctor might still go "eh, not my job".
  2. Doctors correctly recognize that it's not worth the time and effort to micro-evaluate so many ordinary decisions they make day-to-day. Expected utility maximization is about triaging your attention and cognition, as much as any other resource -- it's in large part about deciding what topics are most useful to think about.
  3. Doctors correctly recognize that it's just plain wrong to deceive people and let them die after you said you'd help them, for the sake of helping somebody else. (And they recognize that openly telling patients "we're going to let you die at random if we think your organs will do more good elsewhere" would cause more harm than good, in real life, via drastically reduced trust in doctors; and they recognize that the general policy of routinely lying to and manipulating people in such a drastic way won't work well either. So ordinary evaluation of consequences is a big part of the story here, even if it's not the full story.)

2 and 3 seem good to me, but mostly seem to fit fine into garden-variety consequentialism, or maybe consequentialism subject to a few deontology-like prohibitions on specific extremely-unethical actions.

1 seems more relevant to your argument, and appears to me to come down to some combination of:

  • People just aren't maximally altruistic.
  • In many cases people are altruistic, but unendorsed laziness, akrasia, and blindly-following-local-hedonic-gradients cause them to lose touch with this value and pursue it less than they'd reflectively want to. ("Someone is just optimizing for what looks Normal or Professional or Respectable, and trying to save the world is not the optimal way to achieve those social goals" is usually a special case of this: the person may not deeply endorse living their life that way, but it's not salient to them moment-to-moment that they're not following their values.)

Professional specialization is useful; not every human should try to be a leading expert in moral philosophy or cause prioritization. But that doesn't mean that doctors don't have a responsibility to make decisions well if they get themselves into a weird situation where they face a dilemma similar to what heads of state often face. It just means that it doesn't come up much for the typical doctor in real life.

Bernard Williams famously suggested that the man who is faced with the choice of saving his drowning wife or a stranger is not only justified in saving his wife, but should do so with no thought more sophisticated than “that’s my wife!” because thinking, for example, “that’s my wife and in such situations it’s permissible to save one’s wife” would be one thought too many.

I think the "save my wife" instinct is a really good part of human nature. And since time is of the essence in this hypothetical, I agree that in the moment, for pragmatic reasons, it's important not to spend too much time deliberating before acting; so EAs will tend to perform better if they trust their guts enough to act immediately in this sort of situation.

From my perspective, this is an obviously good reason to stay in the habit of trusting your gut on various things, as opposed to falling into the straw-rationalist error of needing to carefully deliberate about everything you do. (There are many other reasons it's crucial to be in touch with your gut.)

That said, I don't think it's best for most people to go through life without ever reflecting on their values, asking "Why?", questioning their feelings and society's expectations and taboos, etc. And if the reason to be unreflective is just to look less weird, then that seems outright dishonorable and gross to me.

(And I think it would look that way to many others. Following the correct moral code is a huge, huge deal! Many lives are potentially at stake! Choosing not to think about the pros and cons of different moral codes at any point in your life because you want to seem normal is really bad.)

I think the right answer to reach here is to entertain the possibility "Maybe I should value my friends and family the same as strangers?", think it through, and then reach the conclusion "Nope, valuing my friends and family more than strangers does make more sense, I'll go ahead and keep doing that".

Being a true friend is honorable, noble, and good. Unflinchingly, unhesitatingly standing for the ones you love in times of crisis is a thing we should praise.

Going through your entire life refusing to think "one thought too many" and intellectually question whether you're doing the right thing in all this, not so much. That strikes me either as laziness triumphing over the hard work of being a good person, or as someone deciding they care more about signaling their virtues than about actual outcomes.

(Including good outcomes for your loved ones! Going through life without thinking about hard philosophy questions is not the ideal way to protect the people you care about!)

I think intuitions like Bernard Williams' "one thought too many" are best understood through the lens of Robin Hanson's 80,000 Hours interview. Hanson interprets "one thought too many" type reasoning as an attempt to signal that you're a truer friend, because you're too revolted at the idea of betraying your friends to apply dispassionate philosophical analysis to the topic at all.

You choose to live your life steering entirely by your initial gut reactions in this regard (and you express disgust at others who don't do the same), because your brain evolved to use emotions like that as a signal that you're a trustworthy friend.

(And, indeed, you may be partly employing this strategy deliberately, based on consciously recognizing how others might respond if you seemed too analytical and dispassionate.)

The problem is that in modern life, unlike our environment of evolutionary adaptedness, your decisions can potentially matter a lot more for others, and making the right decision requires a lot more weird and novel chains of reasoning. If you choose to entirely coast on your initial emotional impulse, then you'll tend to make worse decisions in many ways.

In that light, I think the right response to "one thought too many" worries is just to stop stigmatizing thinking. Signaling friendship and loyalty is a good thing, but we shouldn't avoid doing a modicum of reflection, science, or philosophical inquiry for the sake of this kind of signaling. Conservatives should recognize that there are times when we should lean into our evolved instincts, and times when we should overrule them; and careful reflection, reasoning, debate, and scholarship is the best way to distinguish those two cases.

Some specific individuals may be bad enough at reflection and reasoning that they'll foreseeably get worse outcomes if they try to do it, versus just trusting their initial gut reaction. In those cases, sure, don't try to go test whether your gut is right or wrong.

But I think the vast majority of EAs have more capacity to reflect and reason than that.

I would suggest that the difference derives from the same underlying sense of responsibility. The doctor, parent or lawyer can, quite properly, say “I am only responsible for outcomes of a certain kind, or for the welfare of certain people. The fact that my actions might cause worse outcomes on other metrics or harm to other people is simply not my responsibility.” But the government of a country cannot say that.

I could buy that EA should think more about role-based responsibilities on the current margin; maybe it would help people with burnout and coordination if they thought more about "what's my job?", about honest exchanges and the work they're being paid to do, etc.

Your argument seems to require that role-based thinking play a more fundamental role than it actually does, though. I think "our moral responsibility to strangers is about roles and responsibility" falls apart for three reasons:

  1. People correctly intuit that "I was just doing my job" is no excuse for the atrocities committed by rank-and-file Nazis. This seems like a super clear case where most people's moral intuitions are more humanistic, universal, and "heroic", rather than being about social roles.
  2. The main difference between the Nazi case and ordinary cases seems to be one of scale. But EAs face tough choices on a scale far larger than WW2. Saying "it's not my job" seems like a very weak excuse here, if you personally spot an opportunity to do enormous amounts of good.
  3. There's no ground truth about what someone's "role" is, except something like social consensus. But social consensus can be wrong: if society has tasked you with torturing people to death every day, you shouldn't do it, even if that's your "role".

My own approach to all of this is:

  • Utilitarianism is false as a description of most humans' values -- we aren't literally indifferent to improving a spouse's welfare vs. a stranger's welfare. We aren't perfectly altruistic, either.
    • Nor, honestly, should we want to be. I like the aspect of human nature where we aren't completely self-sacrificing, where we take a special interest in our own welfare.
  • But the ways that utilitarianism is false tend to be about mundane individual-scale things that we evolved to care about preferentially.
  • And these ordinary individual-scale things don't tend to have much relevance to large-scale decisions. Heads of state are basically never making a decision where their family's survival directly depends on them disregarding the welfare of the majority of humanity.
  • Our idiosyncratic personal values tend to become especially irrelevant to large-scale decisions once we take into account the consequentialist benefits of "being the sorts of people who keep promises and do the job they said they'd do". Societies work better when people stick to their oaths. If your oath of office involves setting aside your idiosyncratic personal preferences in a circumscribed set of decisions, then you should do that because of the consequentialist value of oath-keeping.
  • The above points are sufficient to explain the data. I don't have to be a head of state in order to want societal outcomes to be good for everyone (and not just for my family). People aren't perfectly altruistic and impartial, but we do care a great deal about strangers' welfare, which is why this is a key component of morality. And it's a key component regardless of the role you're playing, though in practice some jobs involve making much more consequential decisions than other jobs do.

In Crazy Train, McLaughlin quotes a suggestion from Tyler Cowen that utilitarianism is only good as a “mid-scale” theory, i.e. the small country scale I have described, the scale between, on the one hand, the small, personal level of doctors, lawyers and ice cream vans and, on the other, the mega-scale beloved of these kinds of theoretical discussions, consisting of trillions of future humans spread across the galaxy.

I don't see any reason to think that. If it's a prediction of the "roles" theory, it seems to be a totally arbitrary one: society happened to decide not to hire anyone for the "save the world" or "make the long-term future go well" jobs, so nobody is on the hook. I don't think my moral intuitions should depend crucially on whether society forgot to assign an important role to anyone!

If the fire department doesn't exist, and I see a house on fire, I should go grab buckets, not shrug my shoulders.

The alternative theory I sketched above seems a lot simpler to me, and predicts that we'd care about galaxy-level outcomes for the same reason we care about planet- or country-level ones. People love their family, but the state of policymakers' family members doesn't tend to matter much for setting social policy; and smart consequentialists ought to be trustworthy and promise-keeping. These facts make sense of the same moral intuitions as the ice cream, doctor, and policymaker examples, without scrabbling for some weird reason to carve out an exception at really large scales.

(We should probably have more moral uncertainty when we get to really extreme cases, like infinite ethics. But you aren't advocating for more moral uncertainty, AFAICT; you're advocating that we concentrate our probability mass on a specific dubious moral theory centered on social roles.)

Maybe I'd find your argument more compelling if I saw any examples of how cluelessness or 'crazy train' actually bears on a real-world decision I have to make about x-risk today?

From my perspective, cluelessness and 'crazy train' don't matter for any of my actual EA tactics or strategy, so it's hard for me to get that worked up about them. Whereas 'stop caring about large-scale things unless society has told me it's my Job to do so' would have large effects on my behavior, and (to my eye) in directions that seem worse on reflection. I'd be throwing out the baby, and there isn't even any bathwater I thereby get rid of, as far as I can tell.

Why, the conservative asks, should we believe that EA practitioners will achieve outcomes so much better than those of the myriad of well-meaning aid workers who have come before? Are the 'Bright Young Things' of EA really so much cleverer, or better-intentioned, or knowledgeable than, say, the Scandinavian or Canadian international development establishments?

Yep, at least somewhat more. (It doesn't necessarily take a large gap. See Inadequate Equilibria for the full argument.)

I think EAs are pretty often tempted to think "no way could we have arrived at any truths that weren't already widely considered by experts, and there's no way that the world's expert community could have failed to arrive at the truth if they did consider it".

But a large part of the answer to this puzzle is, in fact, the mistaken "roles" model of morality. (Which is one part of why it would be a serious mistake for EA to center its morality on this model.)

There's no one whose job it is to think about how to make civilization go well over the next thousand years.

There's no one whose job it is to think about priority-setting for humanity at the highest level.

Or, of the people whose job is nominally to think about things at that high a level of abstraction, there aren't enough people of enough skill putting enough effort in to figuring out the answer. The relevant networks of thinkers are often small, and the people in those networks are often working under a variety of political constraints that force them to heavily compromise on their intellectual inquiry, and compromise a second time when they report the findings from their inquiry.

There's no pre-existing field of generalist technological forecasting. At least, not one with amazing bona fides and stunning expertise that EAs should defer to.

Etc., etc.

People have said stuff about many of the topics EAs focus on, but often it's just some cute editorializing, not a mighty edifice of massively vetted expert knowledge. The world really hasn't tried very hard at most of the things EA is trying to focus on these days. (And to the extent it has, those are indeed the areas EA isn't trying to reinvent the wheel, as far as I can tell.)

In short: by exercising the old and well-established virtues of prudence, good judgment and statesmanship. I’m afraid that sounds vague and unhelpful, and a far cry from the kind of quantitative, data-driven, rapidly scalable maximising decision-making processes that EA practitioners would like. But it’s true. These virtues are the best tools that humans have yet found for navigating the cluelessness inherent in making big decisions that affect the future.

The dichotomy here between "good judgment" and being "quantitative" doesn't make sense to me. It's pretty easy in practice to assign probabilities to different outcomes and reach decisions that are informed by a cost-benefit analysis.

Often this does in fact look like "do the analysis and integrate it into your decision-making process, but then pay more attention to what your non-formalized brain says about what the best thing to do is", because your brain tends to have a lot more information than what you're able to plug into any spreadsheet. But the act of trying to quantify costs and benefits is in fact a central and major part of this whole process, if you're doing it right.
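A minimal sketch of that combination might look like the following. This is only an illustration of the general idea, not anything from the post: the options, probabilities, values and judgment adjustments are all invented.

```python
# Toy cost-benefit sketch: quantify first, then let broader judgment adjust.
# Every option, probability and value here is invented for illustration.

options = {
    # option name -> list of (probability, value) pairs over possible outcomes
    "project_A": [(0.7, 100), (0.3, -50)],
    "project_B": [(0.4, 300), (0.6, -20)],
}

def expected_value(outcomes):
    """Probability-weighted sum of outcome values."""
    return sum(p * v for p, v in outcomes)

# Step 1: the explicit, quantified analysis.
analysis = {name: expected_value(outs) for name, outs in options.items()}

# Step 2: an adjustment for considerations the spreadsheet can't capture --
# a stand-in for the extra information in your "non-formalized brain".
judgment_adjustment = {"project_A": +20, "project_B": -40}

adjusted = {name: ev + judgment_adjustment[name] for name, ev in analysis.items()}
decision = max(adjusted, key=adjusted.get)

print(analysis)   # project_A ~ 55, project_B ~ 108
print(adjusted)   # project_A ~ 75, project_B ~ 68
print(decision)   # project_A: the judgment step can overturn the raw numbers
```

The point of the sketch is only that the quantified step and the judgment step are parts of one decision process, not rivals.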

Or let me put it another way. Perhaps, as Toby Ord has suggested, we are walking along the edge of a precipice which, if badly traversed, will lead to disaster for humankind. What kind of approach is the right one to take to carrying out such an endeavour? Surely there is only one answer: a conservative approach. One that prioritises good judgment, caution and prudence; one that values avoiding negative outcomes well above achieving positive ones. Moreover, not only would such an approach be sensible in its own terms, but it would also help EA to acquire the kind of popular support that would help it achieve its outcomes.

Every time you sprinkle in this "moreover, it would help you acquire more popular support!" aside, it reduces my confidence in your argument. :P Making allies matters, but I worry that you aren't keeping sufficiently good bookkeeping about the pros and cons of interventions for addressing existential risk, and the separate pros and cons of interventions for making people like you. At some point, humanity has to actually try to solve the problem, and not just play-act at an attempt in order to try to gather more political power. Somewhere, someone has to be doing the actual work.

That said, a lot of your point here sounds to me like the old maxipok rule (in x-risk, prioritize maximizing the probability of OK outcomes)? And the parts that aren't just maxipok don't seem convincing to me.
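(For readers who haven't met maxipok, here is a toy sketch of how it differs from straight expected-value maximization; the options and all the numbers are invented purely for illustration.)

```python
# Toy contrast between expected-value maximization and the maxipok rule
# ("maximize the probability of an OK outcome"). All numbers are invented.

options = {
    # option -> probability of catastrophe and payoff if things turn out OK
    "bold_plan":     {"p_catastrophe": 0.10, "value_if_ok": 1000},
    "cautious_plan": {"p_catastrophe": 0.01, "value_if_ok": 400},
}

CATASTROPHE_VALUE = -5000  # assumed disvalue of the catastrophic outcome

def expected_value(o):
    p_ok = 1 - o["p_catastrophe"]
    return o["p_catastrophe"] * CATASTROPHE_VALUE + p_ok * o["value_if_ok"]

def p_ok(o):
    return 1 - o["p_catastrophe"]

ev_choice = max(options, key=lambda k: expected_value(options[k]))
maxipok_choice = max(options, key=lambda k: p_ok(options[k]))

print("expected value picks:", ev_choice)       # bold_plan (EV 400 vs 346)
print("maxipok picks:       ", maxipok_choice)  # cautious_plan (99% OK vs 90%)
```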

Lots of good points here - thank you.

I'm happy to discuss moral philosophy. (Genuinely - I enjoyed that at undergraduate level and it's one of the fun aspects of EA.) Indeed, perhaps I'll put some direct responses to your points into another reply. But what I was trying to get at with my piece was how EA could make some rough and ready, plausibly justifiable, short cuts through some worrying issues that seemed to be capable of paralysing EA decision-making. 

I write as a sympathiser with EA - someone who has actually changed his actions based on the points made by EA. What I'm trying to do is show the world of EA - a world which has been made to look foolish by the collapse of SBF - some ways to shortcut abstruse arguments that look like navel-gazing, avoid openly endorsing 'crazy train' ideas, resolve cluelessness in the face of difficult utilitarian calculations and generally do much more good in the world. Your comment "Somewhere, someone has to be doing the actual work" is precisely my point: the actual work is not worrying about mental bookkeeping or thinking about Nazis - the actual work is persuading large numbers of people and achieving real things in the real world, and I'm trying to help with that work.

As I said above, I don't claim that any of my points above are knock-down arguments for why these are the ultimately right answers. Instead I'm trying to do something different. It seems to me that EA is (or at least should be) in the business of gaining converts and doing practical good in the world. I'm trying to describe a way forward for doing that, based on the world as it actually is. The bits where I say 'that's how you get popular support' are a feature, not a bug: I'm not trying to persuade you to support EA - you're already in the club! - I'm trying to give EA some tools to persuade other people, and some ways to avoid looking as if EA largely consists of oddballs.

Let me put it this way. I could have added: "and put on a suit and tie when you go to important meetings". That's the kind of advice I'm trying to give. 

I enjoyed this read, and agree with the vibes of it. I am not sure what specifically is being recommended. I do think it would be good if EA avoids alienating people unnecessarily. That has two implications:

(a) the movement avoid identifying with the political left any more than is strictly entailed by the goals of EA, because it will alienate people on the political right who could contribute;

(b) being more conservative, in the non-political sense of holding new ideas lightly and giving a lot of weight to conventional ideas. This could be framed in "moral parliament" terms--you give a lot of weight in decision-making to the question, "what would conventional, common-sense morality say about this decision/cause area?".

I think EA has generally achieved (a), but I probably have blind spots there so could be wrong.

I don't have a good sense of (b). It's hard to say. EA leaders like Will MacAskill have said they waited several years, after being told that longtermism should be the overriding concern for EAs, before publicly advocating for that stance. Plenty of EA funding still goes to global health and development, even though philosophers like Will MacAskill and Hilary Greaves seem to think longtermism is the overriding concern.

Presumably that comes from a reluctance to go all-in on EA thinking and a desire for some sort of compromise with more conventional norms. I don't have a good sense of whether we give conventional morality too much weight, or not enough.

Thanks for your comments!

What specifically is being recommended? Good question. I would say two things.

(1) Think about issues of recruitment, leadership, public messaging, public association with an eye to virtues such as statesmanship & good judgment. There’s no shortage of prophets in EA; it’s time for some kings.

But that’s really vague & unhelpful too! Ok, true. I’m no statesman but how about something like this:

(2) Choose one disease and eliminate it entirely. People will say that eliminating disease X is doing less good than reducing disease Y by 40% (or something like that). Ignore them. Eliminating disease X shows the world that EA is a serious undertaking that achieves uncontroversially good things. Maybe disease X would have mutated and caused something worse; maybe not – who knows! We’re clueless! But it would show that EA is not just earnest young people following a fad but a real thing that really matters. That’s the kind of achievement that could take EA to the next level.

(Obviously, don’t give up on the existential risks & low-chance/high-impact stuff. I just mean that concrete proof of effectiveness is a great recruiting & enthusing tool.) 

On whether EA appeals enough to conservatives

(1) It’s not bad, but could be a lot better. Frankly, EA is a good fit with major world religions that encourage alms-giving (Christianity & Islam spring to mind) and ought to be much bigger there than it is. 

(2) This anecdote from Tyler Cowen’s talk: “And after my talk, a woman went up to me and she came over and she whispered in my ear and she said, you know, Tyler, I actually am a social conservative and an effective altruist, but please don't tell anyone.” Hmm.

What kind of approach is the right one to take to carrying out such an endeavour? Surely there is only one answer: a conservative approach. One that prioritises good judgment, caution and prudence; one that values avoiding negative outcomes well above achieving positive ones.

 

Really interesting read!

Would you agree that an underlying assumption of conservatism is that continuing 'business as usual' is the safe option?

In bioterrorism and AI safety, the assumption is that we're on course for disasters that result in billions of deaths unless we do something radical to change course.

Whether you agree about the risks of bioterrorism and AGI shouldn't be about a general vibe you pick up of "science fiction scenario(s)" or being on "the crazy train to absurd beliefs". I think it should be about engaging with those arguments on the object level. Sam Harris / Rob Reid's podcast and Robert Miles' YouTube channel are great ways in if you're interested.

That's an interesting point. There's a lot of thinking about how we judge the output of experts in other fields (and I'm not an expert in that), but I'll give you my thoughts. In short, I'm not sure you can engage with all the arguments on the object level. Couple of reasons:

(1) There are lots of people who know more about X than I do. If they are trying to fool me about X, they can; and if they are honestly wrong about X then I've got no chance. If some quantum physicist explains how setting up a quantum computer could trigger a chain reaction that could end human life, I've got no chance of delving into the details of quantum theory to disprove that. I've got to go with ... not just vibes, exactly, but a kind of human approach to the numbers of people who believe things on both sides of the argument, how plausible they are and so on. That's the way I deal with Flat Earth, Creationism and Global Warming arguments: there are guys out there who know much more than me, but I just don't bother looking at their arguments.

(2)  People love catastrophes and apocalypses! Those guys who keep moving the doomsday clock so that we are 2 seconds to midnight or whatever; the guys who thought the Cold War was bound to end in a nuclear holocaust; all the sects who have thought the world is going to end and gathered together to await the Rapture or the aliens or whatever - there are just too many examples of prophets predicting disaster. So I think it's fair to discount anyone who says the End is Nigh. On the other hand, the civilisation we have behind us has got us to this state, which is not perfect, but involves billions of decently-fed people living long-ish lives, mostly in peace. There's a risk (a much less exciting risk that people don't get so excited about) that if you make radical changes to that then you'll make things much worse.

I appreciate the thoughtful reply.

(1) I don't think that the engineered pandemics argument is of the same type as the Flat Earther or Creationist arguments. And it's not the kind of argument that requires a PhD in biochemistry to follow either. But I guess from your point of view there's no reason to trust me on that? I'm not sure where to go from there.

I've got to go with ... not just vibes, exactly, but a kind of human approach to the numbers of people who believe things on both sides of the argument, how plausible they are and so on.

Maybe one question is: why do you think engineered pandemics are implausible?

(2) I agree that you should start from a position of skepticism when people say the End is Nigh. But I don't think it should be a complete prohibition on considering those arguments. 

And the fact that previous predictions have proven overblown is a pattern worth paying attention to (although as an aside: I think people were right to worry during the Cold War — we really did come close to full nuclear exchange on more than one occasion! The fact that we got through it unscathed doesn't mean they were wrong to worry. If somebody played Russian Roulette and survived, you shouldn't conclude "look, Russian Roulette is completely safe."). Where I think the pattern of overblown predictions of doom has a risk of breaking down is when you introduce dangerous new technologies. I don't expect technology to remain roughly at current levels. I expect technology to be very different in 25, 50, 100 years' time. Previous centuries have been relatively stable because no new dangerous technologies were invented (nuclear weapons aside). You can't extrapolate that pattern into the future if the future contains, for example, easily available machines that can print Covid-19 but with 10x transmissibility and a 50% mortality rate. Part of my brain wants to say "We will rise to the challenge! Some hero will emerge at the last moment and save the day" but then I remember the universe runs on science and not movie plot lines.

Thank you for your comments.

I wouldn't say that I believe engineered pandemics or AI mis-alignment or whatever are implausible. It’s simply that I think I’ll get a better handle on whether they are real threats by seeing if there's a consensus view among respected experts that these things are dangerous than if I try to dive into the details myself. Nuclear weapons are a good example because everyone did agree that they were dangerous and even during the Cold War the superpowers co-operated to try to reduce the risks (hotline, arms treaties), albeit after a shaky start, as you say.

I also agree with you that there is no prohibition on considering really bad but unlikely outcomes. In fact, I think this is one of the good things EA has done – to encourage us to look seriously at the difference between very very bad threats and disastrous, civilisation-destroying threats. The sort of thing I have in mind is: “let’s leave some coal in the ground in case we need to re-do the Industrial Revolution”. Also, things like seed banks. These kinds of ‘insurance policies’ seem like really sensible – and also really conservative – things to think about. That’s the kind of ‘expect the best, prepare for the worst’ conservatism that I fully endorse. Just like I recommend you get life insurance if your family depend on your income, although I have no reason to think you won’t live to a ripe old age. Whatever the chances of an asteroid strike or nuclear war or an engineered pandemic are, I fully support having some defences against them and/or building capacity to come back afterwards. 

I suppose I’d put it this way: I’m a fan of looking out for asteroids, thinking about how they could be deflected and preparing a space craft that can shoot them down. But I wouldn’t suggest we all move underground right now – and abandon our current civilisation – just to reduce the risk. I’m exaggerating for effect, but I hope you see my point.
