
I want to start by thanking everyone for the warm welcome in response to my first post introducing myself on the EA forum last week. In it, I wrote the following:

To be clear, I think cause neutrality is probably EA's greatest innovation, and I am not at all suggesting it be abandoned. But EA has tended to treat cause specificity as the enemy of cause neutrality in a battle to the death, whereas I see a future in which they coexist peacefully and indeed advance each other's goals…

And also:

I hope to share a more fleshed-out case for this idea in a future post.

Here is that case.

* * *

I claim that EA’s insistence upon cause neutrality for its adherents, while sensible up until now, will drastically limit the potential of the movement as it grows. More specifically, I claim that effective altruism will very likely achieve dramatically more good if it actively encourages the adoption of effective altruist principles within domains.

This claim is based on the following premises:

  • Emotional considerations play a large role in motivating decisions for most donors and do-gooders, and this is unlikely to change during our lifetimes.

  • Convincing donors and do-gooders to value impact within causes and geographies they care about is likely to be easier than convincing them to value both impact and causes or geographies they don’t currently care about.

  • Domain-specific effective altruism does not have to compete with or cannibalize cause-neutral effective altruism. It is a way to engage people who would otherwise reject or ignore EA.

  • Domain-specific effective altruism has the potential to increase the total good accomplished by the effective altruism movement via a number of mechanisms, including: 1) increasing the effectiveness of a greater number of donors and do-gooders; 2) mitigating the risk of EA having made poor choices in its initial portfolio of cause areas; 3) decreasing coordination challenges between cause areas; 4) building stronger bridges between effective altruism and institutional philanthropy; and 5) encouraging better allocation of resources in the most wasteful domains.

Let’s go through each of these in turn.

 

Most people don’t care about EA causes

A wide swath of research on donor motivation indicates that giving typically has much more to do with factors like having a connection to a specific cause, a feeling of obligation to “give back,” and cultivating social relationships than with dispassionate exploration of the best opportunities to make a difference. According to a landmark 2010 donor segmentation study focusing on affluent individuals in the United States, only 4% of donors consider the effectiveness of an organization the key driver of a gift, and just 3% actually research organizations’ effectiveness in order to choose which one to support. The Hewlett Foundation decided to shut down a program intended to increase donors’ demand for information in large part because it found these results, along with a subsequent evaluation of its efforts in this arena, so discouraging.

Put simply, changing donor behavior is really hard. Effective altruism’s growth in recent years has been impressive, and the Open Philanthropy Project is likely to give away hundreds of millions of dollars a year in the not-too-distant future. But there’s little evidence to indicate that the overall dynamics of the donor marketplace have changed much in the decade that GiveWell has been around. Even if donations to EA-approved charities were to reach as much as $1 billion annually, that figure would represent less than one-third of one percent of total donations in the United States alone. We should not take Elon Musk’s favor as a sign of anything other than a good start down a long, long road.

Effective altruism has been very smart to target people while they’re still in college. At that point, the road ahead is wide open and everything seems possible. But the later in life one comes to these ideas, the more commitments, emotional and otherwise, one has to contend with and find a way to balance.

And when we extend the view to encompass the institutions that shape our lives and those around us – the mechanisms for impact most readily available to us – it is extremely rare to find powerful entities that not only operate at a truly global scale, both geographically and topically, but do so in an integrated rather than siloed fashion. Unless effective altruism becomes the dominant social movement of our time, which (and I really can’t emphasize this enough) would place it among the extreme outliers of social movements across history, I very much doubt these dynamics are going to change.[1]

 

EA is leaving impact opportunities on the table

Motivating donors to care about impact is hard enough. It stands to reason that motivating them to care about impact outside of domains they already care about is even harder.

Consider my wife and me. When we combined our bank accounts several years ago right after we got married, I already knew that I would want a large portion of our philanthropy to go to GiveWell-recommended charities, to which I’d been giving for several years. What I didn’t anticipate was how my wife’s cultural background and spiritual beliefs would affect her view of charity. To her, charity begins locally – with the people whose lives we cross every day. To do nothing to acknowledge our privilege and our neighbors’ lack of it would not only be callous but dehumanizing. It was very important to her that, whatever other decisions we made, we reserve some of our money for those in our city who need it.

In any marriage you pick your battles, and this was not going to be one I was going to win. But in fairness, my wife was not the only one who had special preferences for our giving. As I’ve written before, my professional life has been spent entirely in arts and culture – hardly a priority for EAs. For the past nine years, I’ve operated a website devoted to finding the most important issues in the arts and what we can do about them. To abandon the arts in my charitable budget would have felt like a massive denial of a core element of my background and identity. If someone as motivated and passionate about the arts as I am wouldn’t make them a giving priority, then who would?

In the end, we allocated our charitable giving roughly this way: 50% to GiveWell-recommended charities, 25% to combating homelessness in Washington DC, and 25% to the arts. That allocation has remained fairly consistent in the years since.

My wife and I are ideal targets for EA in many ways. We are both highly educated professionals who work in the nonprofit sector. Not only that, I had both familiarity and a donation history with GiveWell at the time when we began giving as a family. And yet, even with all of those advantages, effective altruism – in its current form – has nothing to say about fully half of our donation budget. Had there been a GiveWell-like resource for either of the domains that occupied that half, we certainly would have taken advantage of it. But there is not, so we do the best we can with the limited capacity for research that we have.

How many more people like us are out there? I’d bet it’s a lot more than the number of people who are willing to commit their entire donation budgets to GiveWell (or GWWC, or TLYCS)-recommended charities. Especially considering that even some of GiveWell’s own employees don’t commit their entire donation budgets to GiveWell-recommended charities.

 

Embracing domain-specific doesn’t mean abandoning cause-neutral

Crucial to my argument is the idea that cause-neutral effective altruism can coexist with domain-specific EA, rather than be threatened by it.

I believe the notion that domain-specific EA poses a threat is rooted in a misunderstanding of EAs’ agency in the world. In Doing Good Better, Will MacAskill tells the story of James Orbinski, head of a Red Cross hospital inundated with injured victims of the Rwandan genocide in 1994, to illustrate the necessity of doing the most good. MacAskill writes:

With so many casualties coming in, Orbinski knew he could not save everyone, and that meant he had to make tough choices: whom did he save, and whom did he leave to die? Not all could be helped, so he prioritized and engaged in triage. If it were not for that cold, calculating, yet utterly necessary allocation of 1s, 2s, and 3s, how many more lives would have been lost?

This analogy situates Orbinski as the effective altruist within his hospital, faced with tradeoffs about how to do the most good. However, there is one crucial difference between Orbinski and actual effective altruists: Orbinski was the boss. He had the authority to set up the triage system that MacAskill so enthusiastically praises, and could trust that his staff would carry it out. A more accurate analogy to the real world would be to imagine our effective altruist as one out of fifty employees in a leaderless hospital. Patients coming into the ER are being treated largely on the basis of which ones are loudest or friendliest rather than their actual condition. The effective altruist has ideas for a triage system that could save more lives, but there is no one she can talk to who has the authority to put that system in place. Most of her fellow employees don’t even know she’s there, and the few she comes into contact with think she’s a little crazy. She is confident she can get one of her colleagues to adopt the triage system, maybe in a best-case scenario two or even three. She knows, however, that a much larger number will refuse to treat patients that they don’t already know, no matter what she says to them.

Our effective altruist faces a dilemma. She knows the triage system can save lives. She also knows most of her colleagues will only treat patients they already know, and she has no authority to compel them otherwise. If she wants to accomplish the most good she can, the clearest strategy available to her is to encourage those colleagues to apply the triage system among the patients that they already know. That will result in more lives saved. And as an added bonus, it does much more to raise the awareness and status of the triage system across the entire population of hospital employees. Potentially, this opens the door to a more serious conversation down the road about treating all patients according to need.

In making this request of her reluctant colleagues, there is nothing stopping our effective altruist from first asking them to apply the triage system to everyone. But the difference is that, if they say no, she has a backup plan that has a much better chance of actually working.

 

All causes are EA causes

The effective altruism movement will have to grow roughly two to three orders of magnitude in size[2] before it captures even the scant 3-4% of donors who engage in EA-aligned behaviors now. But what about everyone else? Which is better: having donors and do-gooders seeking to make the most possible impact within domains they care about (but that may not have the highest potential for impact overall), or donors and do-gooders not only sticking to lower-potential domains, but not making a difference in them either?

This is not the same thing as simply letting people define effectiveness or good in their own terms (the so-called “thin” version of effective altruism). One of the greatest benefits of domain-specific effective altruism is that it can provide an epistemic bridge between individual domains and the global perspective that EAs care so much about. For example, Createquity, the publication I mentioned earlier that researches the most important issues in the arts, makes an explicit connection between the arts and a broader conception of collective good or overall wellbeing – the same thing that most EAs are trying to maximize through their efforts. Thus, any success Createquity has in identifying promising issues and motivating productive action on them will make the arts field more efficient from a global effective altruist perspective.

Should other fields adopt a similar approach, they would similarly move toward greater efficiency in delivering impact. And that is potentially no small thing. To see why, consider a simple math problem. Which does more to reduce carbon emissions: replacing a 7-mpg truck with a 10-mpg truck, or replacing a 30-mpg compact car with a zero-emission Tesla Model S? The winner here is the gas guzzler, because the even-worse gas guzzler it’s replacing was so wasteful. Fostering and cultivating domain-specific effective altruism could actually achieve more aggregate impact than encouraging investment in only the most promising domains, if the ratio between potential domain-specific EAs and potential cause-neutral EAs is large enough.
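To make the fuel arithmetic concrete, here is a quick back-of-the-envelope check. It assumes, purely for illustration, that each vehicle is driven 10,000 miles a year; the mpg figures are the ones from the example above.

```python
# Back-of-the-envelope check of the truck-vs-Tesla example above.
# Assumption (illustrative only): each vehicle is driven 10,000 miles per year.
MILES_PER_YEAR = 10_000

def gallons_used(mpg: float) -> float:
    """Annual gallons of fuel consumed at a given fuel economy."""
    return MILES_PER_YEAR / mpg

# Replacing a 7-mpg truck with a 10-mpg truck:
truck_savings = gallons_used(7) - gallons_used(10)   # ~428.6 gallons/year saved

# Replacing a 30-mpg compact car with a zero-emission Tesla (0 gallons):
tesla_savings = gallons_used(30) - 0.0               # ~333.3 gallons/year saved

print(f"Truck upgrade saves ~{truck_savings:.0f} gallons per year")
print(f"Tesla swap saves   ~{tesla_savings:.0f} gallons per year")
```

The more wasteful the starting point, the more each incremental improvement is worth, which is exactly the logic being applied to wasteful domains here.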

Domain-specific effective altruism has other potential benefits as well. For one thing, it opens up the potential for greater dialogue with the professional establishment in institutional philanthropy, which has invested heavily over the past 20 years in in-house evaluation and learning capabilities. There are numerous foundations that care deeply about the effective altruist values of critical thinking and empathy, but have committed to top-level restrictions on their giving, sometimes subject to unbreakable legal covenants. That might be one reason why there are indications that EA has yet to gain widespread influence or recognition even among this ostensibly friendly audience. Yet some high-profile foundations that fund in multiple cause areas, such as Ford and Irvine, are starting to experiment with an integrated, cross-team approach that would benefit immensely from epistemic bridges between domains. What’s more, these institutional funders are in a position to play a crucial role in creating the research that forms the evidence base for effective altruist interventions, so it behooves everyone to be speaking the same language.

Finally, embracing domain-specific effective altruism diversifies the portfolio of potential impact for effective altruism. Even within the EA movement currently, there are disagreements about the highest-potential causes to champion. Indeed, one could argue that domain-specific effective altruist organizations already exist. Take, for example, Animal Charity Evaluators (ACE) or the Machine Intelligence Research Institute (MIRI), both of which are considered effective altruist organizations by the Centre for Effective Altruism. Animal welfare and the development of “friendly” artificial intelligence are both considered causes of interest for the EA movement. But how should they be evaluated against each other? And more to the point, if it were conclusively determined that friendly AI was the optimal cause to focus on, would ACE and other animal welfare EA charities shut down to avoid diverting attention and resources away from friendly AI? Or vice versa?

The reality, as most EAs will admit, is that virtually all estimates of the expected impact of various interventions are rife with uncertainty. Small adjustments to core assumptions or the emergence of new information can change those calculations dramatically. Even a risk-friendly investor would be considered insane to bank her entire asset base with a single company or industry, and if anything, the information available in the social realm is far less plentiful and precise than is the case in business. Particularly as the EA movement seeks to grow in influence, the idea of risk mitigation is going to become increasingly applicable.

* * *

Maybe you're not convinced by what I've written so far because you think cause neutrality is so central to EA identity that no true EA leader would ever countenance any distraction from it. Well, don't just take my word for it. Two years ago, I had the privilege of interviewing GiveWell co-founder Elie Hassenfeld for Createquity about the relationship between effective altruism and the arts. And this is what Elie said when I asked him if there was anything contradictory about trying to combine the two:

While I think people will reach different conclusions about which causes they are excited to work on, there is nothing that seems particularly problematic to me about someone saying, "the way in which I think that I can best contribute to the world is via the arts and, therefore, I’m going to try and maximize in some broad sense the impact that I have in that domain."

Effective altruism is a truly transformative idea that has the potential to improve billions of lives – but the movement’s rhetoric and ideology are currently limiting that potential in very significant ways. The few, wonderful people who are prepared to embrace any cause in the name of global empathy should be treasured and cultivated. But relying solely on them to change the world is very likely a losing strategy. If effective altruists can come up with ways to additionally engage those who want to maximize their impact but are not prepared to abandon causes and geographies they care about deeply, that could be the difference between EA ending up as a footnote to history and EA becoming the world-changing social force it seeks to be.

 

Notes:

1. Arguably, a dramatic, imminent, and undeniable threat to human existence could unify human beings behind the cause of defeating the common enemy and thereby motivate vast social and structural change in a short period of time. Some might believe that certain existential threats, such as the development of unfriendly artificial general intelligence, may both fit these criteria and happen within most of our lifetimes. I personally think this is unlikely.

2. Based on inputs of 250m adults in the US, 69% of Americans donating to charity, and a generous estimate of 10,000 US participants in the EA community. Rates for other countries assumed to be roughly analogous.
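For transparency, here is a minimal sketch of the arithmetic behind the “two to three orders of magnitude” figure, using the inputs above together with the 3-4% share of effectiveness-minded donors cited in the body text.

```python
# Rough arithmetic behind the "two to three orders of magnitude" claim.
# Inputs: note 2 above, plus the 3-4% effectiveness-minded share cited in the body text.
US_ADULTS = 250_000_000
DONOR_RATE = 0.69          # share of Americans who donate to charity
EA_PARTICIPANTS = 10_000   # generous estimate of US EA community size

us_donors = US_ADULTS * DONOR_RATE   # ~172.5 million donors

for share in (0.03, 0.04):
    effectiveness_minded = us_donors * share
    growth_factor = effectiveness_minded / EA_PARTICIPANTS
    print(f"{share:.0%}: ~{effectiveness_minded / 1e6:.1f}M donors, "
          f"~{growth_factor:.0f}x current EA community size")
# Prints values around 500x-700x, i.e. between two and three orders of magnitude.
```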

Comments

If more people adopted an EA mindset within the causes they already care about, that would probably be a good thing. But my goal isn't to get more people to adopt EA ideas; my goal is to do whatever's most effective. Is focusing on movement building targeted at cause-specific people the most effective thing to do?

You don't really argue for this claim and I don't know if you believe it; but I'm pretty sure it's false. Consider, for example, that the most effective global poverty charities are about 100 to 1000 times more effective than the most effective charities in other popular sectors of giving. The difference probably matters even more in some causes—I would posit that SCI probably does 10,000 to a million times more good than the best arts charity. That means if you can convince one person to give to SCI, that's as good as convincing 10,000 arts enthusiasts to make donations more effectively within the arts. One of these sounds a lot easier than the other.

This post seems to argue that we should prefer the world in which cause-committed people still care about effectiveness over the one in which they don't. It's not that you're wrong; I agree with most of what you say. But I don't think you're looking at this in the right way. The question is not "what world is best?" but "what can I do that has the greatest impact?" I don't get to choose what world we live in, I just get to choose what I do. And I don't think I should spend my time trying to convince people in suboptimal causes to donate to better organizations within those causes.

Maybe you're not convinced by what I've written so far because you think cause neutrality is so central to EA identity that no true EA leader would ever countenance any distraction from it.

I'm quoting this sentence because I believe it's a great example of how you're not framing this correctly. EA is not about holding onto an identity and rejecting anything that challenges it. EA is about doing the most good. It's not about having a centrally defined mission that you must adhere to. It's about doing the most good. EA is a question, not an ideology.

Consider, for example, that the most effective global poverty charities are about 100 to 1000 times more effective than the most effective charities in other popular sectors of giving.

Citation, please. I believe this claim is false. E.g. one of the most popular foundation grant areas is life sciences research. This GiveWell post ballparks generic cancer research (a very heavily funded field that has gotten relatively weak results) as being less than 100x, and suggests that particularly effective biomedical research could be orders of magnitude more effective than that.

I would posit that SCI probably does 10,000 to a million times more good than the best arts charity.

I say that this is quite unlikely. GiveWell estimates the benefits of SCI as about 5-10x the direct benefits of cash transfers. Cash transfers might give a 30x multiplier, but not 100x, I think.

And I am confident that the best arts projects generate more benefits for the beneficiaries than simply handing over the cash would. Consider something like loosening the Mickey Mouse copyright stranglehold and releasing old works into the public domain. The consumer surplus from such increased access to and creation of art could be quite large relative to the investment.

There is also the intersection of art and other important causes and issues. The arts (especially mass media like film-making) have played important roles in drawing attention to a variety of problems, and such art can be specifically supported.

From a cause-neutral point of view these are unlikely to be the very top priorities but the numbers you give above are not plausible to me.

It does quack like one though.

People frequently behave as though EA is an ideology. I believe they ought not behave this way; we will do more good if we focus on doing good and not on the dogmas that inevitably arise out of the EA community. I myself am guilty of this: when responding to OP, I originally wanted to "defend my tribe" and say OP is bad and EA is good. I re-wrote my comment several times to focus on the fact that OP is actually good and saying valuable things, and to formulate a constructive and cooperative comment, instead of just defending my tribe. I believe this is a good thing and I ought to try to do this more often, and I can probably do better.

  1. People often behave as though EA is an ideology, not a question; this is generally harmful.
  2. I can probably mitigate the effects of (1) by reminding people to think of EA as a question, not an ideology; which is why I did so in my previous comment.

These are great points.

If we have more modest numbers we might get to a similar conclusion though.

e.g. suppose the distribution of cost-benefit ratios looks like this:

  1. Typical US charity: 1
  2. Good US charity: 10
  3. Typical international charity: 20
  4. GiveDirectly: 30
  5. Biomedical research, US policy advocacy: 100
  6. Best international charities (i.e. AMF): 300
  7. Good meta-charity, xrisk, advocacy etc: 1500+

Then, moving someone from a typical US focused charity to a good one produces an extra 9 units of impact per dollar; whereas moving them to the best international charity produces 299.

So, for every person you persuade to switch to the best international development charity, you would need to persuade about 33 times as many people (299/9) to switch to the best thing within the US to achieve the same impact.
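To spell out that arithmetic, here is a minimal sketch using the illustrative cost-benefit ratios in the list above (they are not real estimates).

```python
# Illustrative cost-benefit ratios from the list above (not real estimates).
typical_us_charity = 1
good_us_charity = 10
best_international_charity = 300

# Extra impact per dollar from each kind of switch, starting from a typical US charity.
gain_within_us = good_us_charity - typical_us_charity               # +9
gain_cross_cause = best_international_charity - typical_us_charity  # +299

# Number of within-US switches needed to match one cross-cause switch.
switches_needed = gain_cross_cause / gain_within_us
print(f"~{switches_needed:.0f} within-cause switches per cross-cause switch")  # ~33
```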

At the current margin, it seems substantially easier to me to persuade one person to change cause towards international health and support the best charity in the area, than to persuade 33 US focused donors to choose the best thing in their area.

So, it seems like finding people willing to switch cause should be the community's key priority until we're at least 1-2 orders of magnitude bigger.

There are some other important strategic priorities that tell in favor of focus, such as (i) community building: it's easier and safer to form the community around a well-coordinated, dedicated core of people rather than a wider base that is more vulnerable to dilution and fracturing; and (ii) delayability: it seems possible to add in cause-specific efforts in the future.

However, I think there are good arguments for putting a small amount of resources into a wider range of causes, such as (i) information value (i.e. learning about a wider range of areas); (ii) building expertise (we want to learn about lots of areas and skills since we'll need this knowledge in the future); (iii) improving the brand (having some material for a wide range of areas makes it clear that we really do care about all ways of doing good); and (iv) creating stepping stones (you might be able to get some people involved with cause-specific content who will switch cause later on). Fortunately, this is already happening to some degree.

Edited to change some of the numbers

(First, really stupid question - not sure I understand the math here? Why wouldn't switching from typical US to good US produce 9 extra units of impact per your assumption, not 4?)

Anyway, regarding this:

At the current margin, it seems substantially easier to me to persuade one person to change cause towards international health and support the best charity in the area, than to persuade 49 US focused donors to choose the best thing in their area.

I think one thing you're not taking into account is that not all EA community members are interchangeable; different people have different leverage within their communities. It would be trivial for me to motivate 49 arts enthusiasts to switch donations to a better US charity in the arts, given that I run a publication with roughly 10k total followers and several hundred "true fans." There are analogous people in other domains across the spectrum. So one approach to outreach could largely involve finding and forming partnerships with aligned, influential individuals in those domains.

Sorry, I changed some of the numbers mid-way through writing and then forgot to change the others to match. They're updated now.

I think one thing you're not taking into account is that not all EA community members are interchangeable; different people have different leverage within their communities. It would be trivial for me to motivate 49 arts enthusiasts to switch donations to a better US charity in the arts, given that I run a publication with roughly 10k total followers and several hundred "true fans." There are analogous people in other domains across the spectrum. So one approach to outreach could largely involve finding and forming partnerships with aligned, influential individuals in those domains.

I agree - I was focusing on what the core focus of the community should be, but if people have a comparative advantage in an area that gives them 10-100x more leverage, it could outweigh. I can also see we might have underweighted the size of these differences in the past.

That's interesting, Carl, I wouldn't have necessarily thought of copyright reform as one of the highest-impact arts interventions, but the consumer surplus angle is intriguing. This exchange is actually a great example of how domains can benefit from participation in this community.

I also want to throw another thought out there: it's not inconceivable to me that we might find the most effective way to support the arts in the world is to, say, give cash transfers to poor people in Africa. Or put resources towards some other broad, systemic issue that affects everyone but is disproportionately relevant in the domain of the arts. If people in the effective altruist community said that, everyone would freak out and think they're just throwing stuff at a wall to get people to switch donations away from the arts. But if an entity with authentic roots in the arts said that, the reaction would be quite different. See, for example, this: http://creativz.us/2016/02/02/what-artists-actually-need-is-an-economy-that-works-for-everyone/ Furthermore, Createquity would only come to that conclusion after researching the other major interventions and causes within the arts that people already care about, so we would have a much more concrete comparative case to make.

As always, everything I'm saying here potentially applies in other cause areas as well. I know we're talking about the arts a lot in this thread because that's my background and what I know best, but I don't think any of this is less true for, e.g., higher education or local social services.

I also want to throw another thought out there: it's not inconceivable to me that we might find the most effective way to support the arts in the world is to, say, give cash transfers to poor people in Africa. Or put resources towards some other broad, systemic issue that affects everyone but is disproportionately relevant in the domain of the arts.

For a related take on advancing science, see this.

If we mean 'the arts' in general and over time, I think this is extremely likely. Basically that would mean working to reduce existential risk, in my view. The long-run artistic achievements of civilization (provided that it survives and retains any non-negligible interest in art) will be many orders of magnitude more numerous, and much higher in peak quality, than those we have seen so far.

I was taking you to mean 'the arts' as something like a constraint on the degree of indirectness, patience, etc, that one accepts. E.g. 'the arts for my community now, not for foreigners or future times, via methods that are connected reasonably closely to the arts world.'

Regarding loosening copyright, it's not just letting people enjoy the old works, but enabling new creation.

Michael, a large part of my argument rests on the premise that the EA community has grown to the point where it is capable of walking and chewing gum at the same time. You seem to be viewing this through an individual scarcity lens where we only have one choice to make about which action we're going to take, and it has to be the most effective one. I disagree. I see EA as a diverse, multifaceted movement with many assets that can be deployed toward the collective good. This piece is about how those resources can be collectively deployed most effectively, which is a different question from "how can I do the most good."

Imagine chewing gum is an unbelievably effective cause: its life-saving impact is many orders of magnitude higher than walking's. If we want to maximise chewing gum to the fullest we cannot have any distractions, not even potential or little ones. Walking has opportunity costs and prevents us from extremely super effective gum chewing.

This piece is about how those resources can be collectively deployed most effectively, which is a different question from "how can I do the most good."

Michael's post still applies. Collective resources are just the sum of many individuals' resources, and everyone/every group contemplating their marginal impact ideally includes other EAs' work in their considerations. The opportunity cost bit applies both to individuals and to groups (or the entire movement).

Any unit of EA resources, no matter how many people spend it, has opportunity costs.

Can you walk me through your reasoning of why the marginal value of encouraging the practice of effective altruism within domains is not likely to be greater than the marginal opportunity cost of doing so?

Because we could work on more effective causes with these resources. See Michael's point above:

The difference probably matters even more in some causes—I would posit that SCI probably does 10,000 to a million times more good than the best arts charity. That means if you can convince one person to give to SCI, that's as good as convincing 10,000 arts enthusiasts to make donations more effectively within the arts. One of these sounds a lot easier than the other.

Spreading EA thinking within domains is an idea for an intervention in the EA outreach cause. I don't think the good per unit of time invested (= impact) can compete with already existing EA interventions.

So, are you arguing that investing in EA outreach in domain-specific ways can't compete or that investing in EA outreach at all can't compete? Your last paragraph sounds like you're saying the latter, but I find that to be a rather nonsensical position if you think that correctly targeted donations are so highly leveraged.

If the claim is that domain-specific EA outreach is less effective per unit invested than cause neutral EA outreach, keep in mind that I argue domain-specific EA outreach will grow the movement faster/more than the alternative, which in turn creates more resources that can be deployed toward further outreach (or other helpful functions, like operations or research). Depending on your assumptions about the ratio between the total ceiling of cause-neutral people and domain-specific people out there, that growth factor could be extremely significant to EA's total impact on the world.

The former; outreach is great. It would probably be better if you argued in the thread above to collect your thoughts in one place, since I share Ben Todd's opinion and he put it much better than I could. I enjoyed reading your well thought out post by the way!

Maybe I am missing something here, but -- given your post and your arguments -- how does it follow that the EA movement should not endorse case-specific effective altruism?

If I understand the "EA mission" correctly, it is about doing the most good in total. The original poster seems to believe that EA endorsing case-specific effective altruism will do more good than if they don't (overall). Hence, if you disagree, you should argue why it would be better for EA to not endorse this. Where am I making a mistake in this logic?

My own intuition (which I tried to hint at in my first post) is that any official endorsement of case-specific effective altruism on behalf of EA would take away too much from the core of EA to be worth it. YES, the world would be better if everyone applied the EA core values to their own field, BUT resources are too tight -- or it might be too distracting -- to devote any attention to such "secondary" causes. (That being said, I am very much aware that my intuition might be wrong!)

What does it mean for the EA movement to endorse something? If that just means that I should say cause-specific effective altruism is a good thing, then okay, I hereby declare that cause-specific effective altruism is a good thing. But if you mean that I should spend my limited time campaigning to convince people in causes like the arts to focus on more effective interventions within their own cause, then I think it's pretty clear that I should not do that.

Michael, I'll clarify what actions from the EA community I am specifically making a case for. I am arguing two things:

1) people who are already invested in EA outreach ought to consider strategies that reach and activate people invested in specific domains; and

2) people who are invested in EA in general, but not in EA outreach specifically, ought to recognize the value of 1).

Now, those "ought tos" are of course contingent upon your agreement with the specific arguments and assumptions that I lay out in the piece. But I am not trying to convince you, specifically, to campaign for domain-specific EA except to the extent that you're campaigning for EA already and not 100% successful in those efforts.

Your figures sound too high. Remember we are comparing the best arts charity, not the average arts charity, to SCI. In order to make such a comparison, we'd have to write down believable impacts from the arts and from SCI for a particular amount of money invested; then we'd actually be able to make this kind of comparison.

I'm not saying that I'm in favour of Effective Altruism engaging in domain-specific effective altruism, just that we would need a more nuanced comparison to evaluate this claim. Anyway, even if we were to loosen up EA, it seems that allowing arts to count as EA would be going too far, to the point of damaging our credibility.

Thanks for writing this Ian. I read both this and your last post with interest, and like your point about the small fraction of donations we currently capture.

I actually think you understate your case: it's not just that most people care more about "fuzzies" than "utilons", but even the most hard-core EA cares deeply about fuzzies. (E.g. there's basically no one who would not hold open a door for an old lady walking by, even if that time might be better spent spreading bed nets.)

However, the classic answer is to optimize for these things separately, which is exactly what you and your wife have done:

we allocated our charitable giving roughly this way: 50% to GiveWell-recommended charities, 25% to combating homelessness in Washington DC, and 25% to the arts

I'm curious why you don't like the strategy that you and your wife have followed? It seems preferable to me to have someone give half their donations to a top charity rather than have them give 100% of their donations to a moderately effective charity.

Hi Ben, the answer to that is simple. EA currently says to donors, "give 10% (or whatever amount you're willing) to the top charities that we recommend. Then do whatever you want with the rest, we don't care."

My claim is simply that EA should care about "the rest," if the goal is to maximize total wellbeing improvement. For many donors, I believe it is not as simple as having two pots, one for which you use your head and one for which you use your heart. In my family's case, we are interested in maximizing the good we do within the other 50%, subject to those top-level restrictions. That is also true of a large portion of grantmaking foundations with professional staff. It all comes back to my point about EA leaving opportunities for impact on the table.

Right, to phrase my question another way: suppose we could either:

  1. Convince someone to give 11% of their income, instead of 10%, to the top charities
  2. Convince someone to make their "non-EA" donations slightly more effective

It seems to me that (1) is both easier and more impactful.

Edit: Ben Todd said the same thing as me, but better.

I would question whether 1 is in fact easier. In the case of most of the people I know, I would guess that it's not.

Hi Ian,

Great and thought-provoking post. Thank you very much for taking the time to write it!

I will think about it and might respond at length later, but for now, let me ask you this: How do you propose the EA movement go about introducing "case-specific effective altruism"? Do you imagine several official sub-groups, each dedicated to a specific cause?* Or do you simply want EA to acknowledge that case-specific effective altruism is a good thing, so that people can set up their own domain-specific EA groups if they like?

In sum, a few words on your thoughts for actual implementation going forward would be nice! :)

*It seems that if you are advocating that the EA movement actively pursue domain-specific effective altruism in a number of different domains, this would require a large amount of work from the group/community -- work that will therefore not go into the (cause-neutral) traditional EA domain. For this reason alone, one could argue against this implementation (i.e. one could acknowledge that case-specific effective altruism would be a good thing, but still decline to actively do something about it, since the workload would be too high compared to what you get out of it).

Sure thing. I don't have a fully-fleshed-out plan to offer you, but here is an initial thought.

My main suggestion is to implement a kind of chapter-based network (let's assume for the sake of argument that we can figure out a way to avoid confusion with the existing EA-based local chapter system). This is similar to your suggestion of sub-groups dedicated to specific causes and geographies. I think the difference between what I'm envisioning and what you're suggesting, though, is that I am not thinking that the talent and resources for these organizations would primarily come from the existing EA community. For example, in Createquity's case, we are all people in the arts and I am the only one who even borderline considers myself an effective altruist. Yet, the work we do is very aligned. Similarly, there is a large if somewhat unorganized community of evaluators, scientists, philanthropists, think tanks, and service organizations dedicated to effective practice in various domains. (I use the term "domain" here rather than "cause" since I am considering geography to be a potential domain.) It is possible that some of those folks could be converted to working on more global EA issues, but for those who can't be, the domain-specific groups would be a way for them to plug in and put to good use the knowledge that the larger network is generating.

So it would not be a huge drain on existing EA resources, but neither am I advocating that EA take a completely hands-off approach. I think there is a ton of value to be realized from coordination and spread of the EA brand to individual domains. As long as it's always recognized that domain-specific is subordinate to cause neutral, the brand need not be harmed. It's almost like the domain-specific organizations are the farm team for EA's major leagues, both in terms of recommended interventions/actions and potentially for talent as well.

Thanks IanDavidMoss, I (unlike many other commenters here) also support the existence of what you call domain-specific EA.

If this "domain-specific EA" involves supporting existing charities to do good better, I would NOT be in favour of the EA community doing this. Not because it's a bad thing, but because there are already people doing this (here's 3 examples off the top of my head: http://www.thinknpc.org/, https://giving-evidence.com/, http://www.ces-vol.org.uk/)

If "domain-specific EA" involves providing guidance on which charities to donate to in a specific field, I agree that there is a gap in the market for this. I wouldn't call it EA, but I think it would be valuable if it were possible. I even tried to do it - and now I'm doubtful about whether it is feasible. I promise I will write up on thoughts on this and share it here before long.

I note that this is a discussion about a view which we have essentially one person arguing for and already many people arguing against, and so in the interests of not burning Ian out, I suggest pro-status-quo people put more effort than usual into being concise, and perhaps consider letting existing threads play out a little before adding more balls to be juggled :)

I realise this might come across as patronising or unwelcoming or something. There's an unfortunate social norm that "organizational" often correlates with "authoritative". I explicitly disclaim authority on this matter, just trying to make some commons less tragic.

That's very kind of you, Ben. I'm not getting burned out, but I am getting a little frustrated that some of my more substantive responses and clarifications are getting spread out across multiple threads when it would be easier for everyone if they were collected in one place. Not sure if you have any suggestions for that...maybe I could update the OP?

I would be really pleased to see a drive for greater effectiveness within specific domains. At the same time, it isn't clear that Effective Altruism as a movement should get involved in this push, as opposed to individual Effective Altruists.

Firstly, while weakening our commitment to cause neutrality would almost certainly lead to growth in the short term, it could prove devastating in the longer term because a community needs shared values to bind it together. Although we would still have domain-specific effectiveness as a shared value, I fear that this will be insufficient, as the person who is concerned with existential risk will not feel that the person who is only passionate about the arts is "like them". People who are passionate about the arts and want to support the arts can already join the community; it is just that if they truly want to be part of the community, then they should be supporting Effective Altruist causes as well, just as you can't become an artist merely by hanging around artists: you have to occasionally make things.

People who are passionate about the arts and want to support the arts can already join the community; it is just that if they truly want to be part of the community, then they should be supporting Effective Altruist causes as well, just as you can't become an artist merely by hanging around artists: you have to occasionally make things.

I think this is reasonable. I guess the way I would put it is this. We need people in the effective altruist community who can serve as bridges between EA and domains, to help make those domains more effective. The people who are serving that bridge function should really understand EA and buy into its core concepts, including the basic logic of cause neutrality. That said, the people they're bridging to, in the domains, don't necessarily need to consider themselves effective altruists or be active in the EA community in order to do effective things within their domains. They just need to be willing to work with the person who is serving as the bridge.

I've always thought the same as you, Ian. Great point about foundations, BTW. Very few people are willing to give only to the highest-EV charity across all causes and countries, therefore they might as well give as effectively as possible within whatever criteria they have (i.e. domestic, homeless). The only argument to the contrary is the counterfactual: if the all-or-nothing purist form of EA is broken, some people who would otherwise have given to the best cause-neutral charity might donate to top domain-specific charities instead. I doubt there is much of a counterfactual because the EA community can still promote cause-neutral charities while passively recommending domain-specific charities on the internet. It can even be a baby step on the way to practicing a more strict EA – people who wouldn't consider neutrality get used to the idea of effectiveness within their favourite domain. After having accepted the effectiveness doctrine, they start to open up to cause neutrality, increasing donation size, etc.

While it is good to have top domain-specific charity and intervention suggestions available for those who seek them, there remain two questions in my mind: 1) whether they should be branded as “effective altruism” even if they are in a low-potential cause, and 2) how much EA outreach and research should focus on them.

Ideally, those within the domain would find and promote the top charities in the domain themselves, so there is no counterfactual from the EA community doing it. I wouldn't want the limited resources within EA going to low- to medium-potential domain-specific research and promotion; however, perhaps EA leaders could encourage those within the domain to do it themselves (particularly foundations, as you mentioned). That seems like it could be quite high impact. Another point is that perhaps domain-specific discussion will bring new people into EA who otherwise aren't interested. These new people could then promote effectiveness within their preferred domains. I think this is very plausible.

Regarding the responses about it being more impactful to persuade someone to give slightly more to a high-impact cause-neutral charity than significantly more to a domain-specific one, I think that depends on the situation. For a person in Europe, for instance, I would think it would be more impactful to put effort into persuading them to give to a top overall charity than to the best charity in their favourite personal cause. However, for people in poor countries, like India, I think outreach would have the greatest impact by persuading them to give to the most effective Indian charities. I find that people from poor countries are primarily concerned about the poverty and problems in their home country.

Finally, embracing domain-specific effective altruism diversifies the portfolio of potential impact for effective altruism.

There is no need for a more diverse portfolio. There is no evidence to suggest that there are causes higher in expected value than those already being worked on. If anything, the most effective way to maximise the EA portfolio is by doing cause prioritisation research, but this is already one of the most impactful causes.

Even within the EA movement currently, there are disagreements about the highest-potential causes to champion. Indeed, one could argue that domain-specific effective altruist organizations already exist.

People have different values and draw different conclusions from evidence, but this is hardly an argument for branching out to further causes that most people agree have little evidence of high impact.

Take, for example, Animal Charity Evaluators (ACE) or the Machine Intelligence Research Institute (MIRI), both of which are considered effective altruist organizations by the Centre for Effective Altruism. Animal welfare and the development of “friendly” artificial intelligence are both considered causes of interest for the EA movement. But how should they be evaluated against each other? And more to the point, if it were conclusively determined that friendly AI was the optimal cause to focus on, would ACE and other animal welfare EA charities shut down to avoid diverting attention and resources away from friendly AI? Or vice versa?

If it were conclusively determined (unrealistic) that X (in this case AI) is better than Y (in this case animals), then yes everyone who can should switch, since that would increase their marginal impact.

If you don't believe that there are other valuable causes out there, or that cause X can be conclusively determined to be better than cause Y, then why do you think cause prioritization research is a valuable use of EA resources?

Yes, I should have phrased these things more clearly.

a) The evidence we currently have in this world suggests that the usual EA causes have an extraordinarily higher impact than other causes. That is the entire reason EA is working on them: because they do the most good per unit time invested.

Indeed there might be even better causes but the most effective way to find them is, well, to look for them in the most efficient way possible which is (cause prioritisation) research. Spreading EA-thinking in other domains doesn't provide nearly as much data.

b) I just meant that we probably won't be 100% sure of anything, but I agree that we could find overwhelming evidence for an incredibly high-impact opportunity. Hence the need for cause prioritisation research

Spreading EA-thinking in other domains doesn't provide nearly as much data

I really disagree with this. I think it would result in dramatically more data compared to the alternative, especially if each of those domains is doing its own within-cause prioritization.
