
This is part 3 of my attempt to disentangle and clarify some of what comprises Effective Altruism, in this case, the community. As I’ve written earlier in this series, EA is first a normative philosophical position, one that is nearly universally accepted, along with some widely accepted ideas about maximizing good that are compatible with most moral positions. It’s also, as I wrote in the second post, a set of causes, in many cases contingent on unclear or deeply debated philosophical claims, along with a set of associated ideas which inform specific funding and prioritization decisions; these are not necessary parts of the philosophy, yet are accepted by (most of) the community for other reasons.

The community itself, however, is a part of Effective Altruism as an applied philosophy, for two reasons. The first, as noted above, is that it impacts prioritization and funding decisions. It affects them both because of the philosophical, political, and similar views held by those within the community, and because of directly social factors, such as knowledge of projects, the benefits of interpersonal trust, and the far less beneficial conflicts of interest that occur. The second is that EA promotes community building as a cause area in itself, as a way to grow the number of people donating to and working directly on other high-priority cause areas.

Note: The posts in this sequence are intended primarily as descriptive and diagnostic, to help me, and hopefully readers, make sense of Effective Altruism. EA is important, but even if you actually think it’s “the most wonderful idea ever,” we still want to avoid a Happy Death Spiral. Ideally, a scout mindset will allow us to separate the different parts of EA, feel comfortable accepting some things while rejecting others, and help people keep their identities small while still embracing ideas. That said, I have views on different aspects of the community, and I’m not a purely disinterested writer, so some of my views are going to be present in this attempt at dispassionate analysis - I've tried to keep those to the footnotes.

What is the community? (Or, what are the communities?)

The history of Effective Altruism involves a confluence of different groups which overlap or run parallel. A complete history is beyond the scope of this post, but it’s clear that there was a lot happening. Utilitarian philosophers started with doing good, as I outlined in the first post, but animal rights activists pushed for taking animal suffering seriously, financial-analyst donors pushed for charity evaluations, extropians pushed for a glorious transhuman future, economists pushed for RCTs, rationalists pushed for Bayesian viewpoints, libertarians pushed for distrusting government, and so on. And in almost every case I’m aware of, central people in effective altruism belonged to several of these groups simultaneously.

Despite the overlap, at a high level some of the key groups in EA as it evolved are the utilitarian philosophers centered in Oxford, the global health economists, the LessWrong rationalists and AI-safety groups centered in the Bay, and the biorisk community[1]. Less central, but relevant or related at some point, are the George Mason libertarians, the animal suffering activists[2], former extropians and transhumanists, the EA meme groups, the right-wing and trad LessWrong splinter groups, the leftist AI fairness academics, the polyamory crowd, the progress studies movement, the Democratic Party funders and analysts, post-rationalist mystics, and AI safety researchers. Untangling the relationships between all of these groups would be several careers’ worth of research and writing as a modest start, but there are a few important things I think are worth noting.

First, they are all distinct, even if they overlap. Individuals affiliated with some groups shouldn’t be painted with a broad brush because other people are affiliated with different groups. (Unfortunately, confusing the boundaries is inevitable, since the insane multidimensional Venn diagram has almost every possible overlap, with some polyamorous progress-studies-supporting global health activists, and some animal-suffering-reduction libertarian AI safety researchers.)

Second, the groups all compete for attention, funding, people, and impact. Unfortunately, time and allocated funding from each funding source are basically zero-sum. On the other hand, Effective Altruism is much more high-trust, collaborative, and cooperative than the vast majority of movements. These two effects are in tension - and it’s to Effective Altruism’s tremendous credit that it mostly works on growing the pie and being effective with limited resources, rather than fighting over them.

Third, the groups influence one another socially and in terms of worldviews. Extropian ideas influence people working on biorisk[3], libertarians at George Mason influence animal rights activists[4], and EA meme groups influence large funders[5]. This is all understandable, and often mutually beneficial, but it’s worth noticing whether the assumptions and assertions being made are fundamental or imported. Even though I mostly disagree with Cremer and Kemp’s viewpoints, they are clearly correct to notice that many debatable transhumanist or techno-optimist ideas slip into existential risk discussions via often-unquestioned assumptions. People - both supporters and critics - should be very careful to distinguish between criticism or dispute about ideas, attacks on the community, and attacks on effective altruism cause areas or philosophy.

How do the communities work?

Unlike many shared communities of interest, which often splinter based on differences in approach, Effective Altruist groups tend to be collaborative across organizations and approaches. This is particularly notable because the cooperation isn’t just within cause areas. Biorisk and animal suffering advocacy might have shared goals like reducing factory farming, but they aren’t natural allies. It’s tempting to say that this is just because the correct answers are obvious and widely shared, and this reaction is fairly common. But in fact, the critical areas of focus for improving the world are deeply debated outside of Effective Altruism, and the uniformity within the movement is therefore remarkable. This might be partly due to the cooperative and collaborative spirit, but it’s more clearly attributable to the fact that all of these groups are paid for by a fairly small group of foundations and funds.

The interactions between communities aren’t incidental. The different cause areas share spaces, both virtual, especially including the EA Forum, and physical, at shared offices, group houses, and events like EA Global. Social groups and socialization reinforce the sense of community. And because cause areas are also communities[6], there are interpersonal dynamics that take on outsized roles[7].

There is also a competitive dynamic between cause areas and organizations due to limited funding and attention. This has been somewhat muted, both because it was unclear whether Effective Altruism was funding-constrained at all, and because sufficient funding can hide many problems. The feeling of competition is also muted because it is routed through the somewhat opaque[8] decisions of funders and community leaders.

Which brings us to a critical part of the communities - their structure and leadership.

Explicit and Implicit Structure

Most communities of interest have some formal or informal structure. Often, they have community or professional groups that elect leaders, or they have clear intellectual leaders. In the case of EA, though, much of the intellectual and formal leadership is closely related to funders. There are few explicit structural mechanisms, so most of the leadership is implicit and informal[9].

Holden Karnofsky is a good writer, and I really appreciate his Cold Takes posts, but they clearly wouldn’t be nearly as influential if there weren’t a few billion dollars in the foundation where he is co-CEO. (It takes being a much better writer, over the course of a decade or more, to have anything like the same level of influence without having funds to allocate.) On the other hand, Peter Singer and Will MacAskill are intellectual leaders, but each also has fairly clear ways to allocate funding[10].

And the community has tried to remain disaggregated and not enforce any type of intellectual orthodoxy. In fact, central organizations like Open Philanthropy, the Global Priorities Institute, and the Future of Humanity Institute are all fairly welcoming of dissent, and each employs individuals who are publicly critical of effective altruism in various ways. Outside of those organizations, intellectual dissent is somewhat less acceptable, despite significant efforts by leaders to encourage it[11]. This is a failure, according to the stated goals of EA. It’s also a predictable one[12].

Jo Freeman wrote a fairly important 1970 essay about this problem, The Tyranny of Structurelessness. She was writing about a different movement, but I think the quote below is directly relevant:

If the movement is to grow beyond these elementary stages of development, it will have to disabuse itself of some of its prejudices about organization and structure. There is nothing inherently bad about either of these. They can be and often are misused, but to reject them out of hand because they are misused is to deny ourselves the necessary tools to further development. We need to understand why "structurelessness" does not work.

The lack of explicit structure in EA had large benefits at one point. Still, based on Freeman, I think it’s clear that there is an impossibility result here: Structurelessness, Scale, Effectiveness - pick 2[13] (at most!). Effective Altruism is already too big to be a single large community without centralized and legible leadership.

And the problem with lacking explicit structure, as Freeman points out, is that it doesn’t mean there is no leadership; instead, it means that the responsibilities and structure are implicit. And there is more than one structure.

The problem with illegibility

Illegibility isn’t just a problem for outsiders. Obviously, heads of large EA orgs are influential. But I suspect that most people who are very important within EA don’t have a clear understanding of quite how influential they are.

Some people have lots of informal communication influence. A very rough proxy for this is EA Forum karma - some people are listened to more than others, and this is reflected in how much they get to influence the dialogue. There are 100 users with over 2,500 karma, who get to give strong upvotes worth +6 or more. Many are just very active, but all have many high-karma posts. Other loci of informal influence via communication are large EA orgs, especially 80,000 Hours. There are also people on EA Twitter and in EA Facebook groups - Qualy the Lightbulb might be a meme, but it has a lot of informal influence. These types of influence are mostly different from (semi-formal) leadership - Will MacAskill isn’t in the top 25 for EA Forum karma, and Toby Ord isn’t in the top 250. Speakers at EA Global have influence, as do the people running the events. The Centre for Effective Altruism runs EA Global, funds much of the community building work, and runs the EA Forum - but is not usually in the spotlight.

Some people have lots of influence over funding. I don’t know what percentage of the largest EA donors are on the Forum, because many give indirectly or less publicly, but Dustin has made only a few comments, Cari is absent, Jaan Tallinn isn’t there, and the once-leadership of FTX was never much involved. On the other hand, almost everyone running one of the CEA funds is high on the EA Forum karma lists[14], but almost no one at Open Philanthropy has high forum karma, with the exceptions of Aaron Gertler, the communications officer, and Holden Karnofsky.

So, who is in charge? 

Nobody. EA is a movement, not an organization. As mentioned in previous posts, the concrete goals are unclear and disputed on philosophical grounds, and there is no central strategy. This is part of why no one is sure who to go to with problems - they end up posted on the Forum[15]. (Which also functions as a way to appeal to Open Philanthropy, to coordinate, and to disagree.)

But like Feminism, or the Civil Rights movement, or political parties, there are clear elites[16]. Still, as Freeman points out [with adaptations to EA in brackets]:

Elites are not conspiracies... Elites [in EA] are nothing more, and nothing less, than groups of friends who also happen to participate in the same… activities… These friendship groups [and local communities] function… as the [primary] networks of communication. Because people are friends, because they usually share the same values and orientations, because they talk to each other socially and consult with each other when common decisions have to be made, the people involved in these networks[17] have more power in the group than those who don't.

But as discussed in previous posts, the small networks which end up in control have specific values beyond the actual philosophy of EA, and that ends up mattering. And the broader community’s perception of the people in control, and the illegibility of the structures, are critical.

Community Building in Illegible Communities

If effective altruism is good, and doing good in the world, then it seems likely that having a larger community is beneficial. And growing a community is far cheaper than saving the world directly, so it makes sense that this is itself a cause area[18].

Unfortunately, scaling is hard, and it’s very easy for people to push for growth in ways that warp culture[19]. Of course, the easiest way to encourage growth is to fund disparate community building groups, and to “align” these efforts with metrics for community size and engagement. This will create a larger community, but it’s likely to succumb to Goodhart’s Law and a loss of cohesion. Community growth requires not just convincing people of the value of the community, but socializing them into its norms. That takes the form of fellowships, leadership training with senior EAs, attendance at EA Global conferences, and similar.

But this ends up reinforcing the parts of the community which are incidental, as discussed in the previous post, and also reinforces uniformity of ideas. The alternative, however, is having the community drift away from its norms - about cause-neutrality, about the importance of object-level concerns, about communication, about actually trying to have an impact, and so on. And these norms are incredibly valuable, even vital. Unfortunately, as the community scales, they are getting harder to transmit - because communicating complex ideas requires feedback and personal interaction with those who already understand the ideas and norms.

What this means is that, by default, we either devolve into a large community that regresses to the mean in terms of impact, or we accidentally over-enforce conformity. Concretely, both of these have happened, to different extents.

Conclusion

This has been helpful for me to think out loud about what is happening within EA, and to separate the philosophy, cause areas, and community more clearly in my mind. I certainly embrace EA as a philosophy, and believe prioritizing several cause areas is worthwhile as a key part of my life, but have realized that it’s hard to mentally separate that from identity and community[20]. I don’t know if it has been helpful for others, but it certainly helped me iron things out personally - and if reading this wasn’t useful to you, I still think that figuring out how to break down EA into its components is an exercise that is worth doing for yourself.

 

  1. ^

    Feel free to debate which groups I am ignoring or unfairly omitting or grouping anywhere except in the comments to this post. (Twitter seems like a good place to do that, so you can’t write a novel for me to read before I respond.)

  2. ^

    These should be more central, given how prominent the ideas are in EA, but in practice it seems the groups focused on this aren't central. For example, as noted below, the CEA fund focused on it has more people but fewer central EAs than other funds.

  3. ^

    About questions of disease eradication, desirability of extended lifespans, and expectations about technological progress.

  4. ^

    About using market forces as the critical way to reduce animal agriculture.

  5. ^

    I’m not saying Dustin has a twitter-crush on Qualy the Lightbulb, but...

  6. ^

    I’ve discussed the dynamics and wisdom of having a global community before.

  7. ^

    Several months ago, the example which was most obvious to me was how much the political activism of SBF was due to and guided by his mother and brother. Similar dynamics exist with romantic relationships between EAs working in different cause areas, and polyamory has made the power structures in some EA communities far more metaphorically incestuous and complex.

  8. ^

Opaque decision-making is normal, and in this context this isn’t a criticism.

  9. ^

    This makes it nearly impossible to quantitatively measure interconnection, and this relates to the later point I make about illegibility. (I also discussed illegibility in terms of that type of metric in an old ribbonfarm post.) 

  10. ^

    Will’s brief stint helping to run FTX Foundation is the most obvious example, as shown by his relationship to the Forethought Foundation and to CEA. Singer’s The Life You Can Save serves a similar role. None of this is any indication of anything self-serving, but it does show that power has accumulated.

  11. ^

I think the discussion about the EA criticism contest with Dustin Moskovitz, and a note near the end from one of the judges, is indicative of this to some extent.

  12. ^

    I don’t know how many times I’ve said that institutional design is ignored in places where it really matters, and we can and should know better, but I’ll say it again here.

  13. ^

I’ve pointed out that I think EA needs to be larger, and that it shouldn’t be a single global community. I obviously don’t think we should give up on being effective, but in neither post did I explicitly note that the tradeoff forces us to do both, or neither - so I’ll point it out here, at least.

  14. ^

    It’s notable that the Animal Welfare fund managers are significantly lower than the others, which probably reflects the relative emphasis in EA.

  15. ^

Separately, but for the same reason, this is why EA seems to overlap with essentially whichever outgroup or ingroup you want to claim: anyone can be considered affiliated based on some subset of philosophical or social overlap.

  16. ^

And given who the members of Effective Altruism are - a movement which already explicitly targets the global rich, and within that group heavily overrepresents Ivy League / Oxbridge / Silicon Valley - these are elites within the elite, within the elite.

  17. ^

    I think that the polyamorous communities and extended polycules are particularly worth noting here, because they definitionally exclude people who don’t embrace that lifestyle, and they have unusually important impacts on the perception of the inner groups in EA. 

  18. ^

    As noted earlier, I’ve written elsewhere about the wisdom of this, but this post is attempting to be positive, not normative.

  19. ^

     I discuss this in another old ribbonfarm post, here.

  20. ^

I knew this would be the case, but it’s still helpful to actually split things out to make sure I’ve thought through the question.

Comments

Thanks for writing this. I’ve not commented on the previous two posts because I didn’t have much to add. However, I want you to know that I found all three to be quite well laid out and concise for the amount of information and clarity packed into them. This one in particular I think I’ll share when disambiguation is necessary (as it often is).

I read The Tyranny of Structurelessness because it was mentioned in this post, and I found it very applicable to EA groups, and to other non-structured groups I've been a part of. I'm not a sociologist, but I enjoy adopting the lens of sociology to look at social psychology and group dynamics. So I wanted to thank you for sharing a reference to something that I found interesting and useful.

Thanks.

Some more suggestions for longer / more advanced reading you might enjoy, in rough order of how strongly I recommend them:

Peter Senge, The Fifth Discipline

James Q. Wilson, Bureaucracy

Hofstede, Cultures and Organisations

Blau and Scott, Formal Organizations: A Comparative Approach

Graham Allison, Essence of Decision
