tldr: The more important this century is, the more important it is to increase EA’s representativeness of the human population this century by recruiting members from developing countries who can contribute to priority setting. This is due to the risk of value lock-in and the variance in moral values across cultures.

Holden Karnofsky argues that we are living in the most important century, and that “[w]e, the people living in this century, have the chance to have a huge impact on huge numbers of people to come.”

He also says that “there is a chance of ‘value lock-in’ here: whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.”

He is specifically talking about space expansion, but I think the same possibility applies to other long-termist projects such as AI alignment and pandemic preparedness.

If this is true, then I believe it increases the moral imperative to ensure that the Effective Altruism community is representative of humanity’s many different cultures and value systems.

According to the 2019 EA survey, “74% of EAs in the survey currently live in [a] set of 5 high-income English-speaking western countries.” Additionally, “EAs living outside of the USA and Europe reported the largest shares of non-engaged or only mildly engaged EAs, possibly stemming from their obstacles to participating in ‘high engagement activities’.” Demographics from CEA events and the 2020 EA survey put members of the EA community at ~50-75% white.

EA tends to draw members from - and at times organizations explicitly focus recruitment on - elite schools in countries like the US and UK. For example, India is the only developing country included in CEA’s list of locations eligible for Community Building Grants. The only eligible university group outside the US and UK is the University of Hong Kong.

The brief reasoning for this selectivity is that these locations are “especially high priority in terms of growing EA presence globally” - developing countries are not a priority. And universities are prioritized based on their “track record of having highly influential graduates (e.g. Nobel prize winners, politicians, major philanthropists).”

There would be a reasonable justification for this if EA were focused only on attracting high earners in rich countries to donate money to people in poor countries. Or if the EA community had already figured out all the answers to the most important moral questions, and just needed to attract highly influential people to implement successful change.

But EA is not just about sending money to the global poor, and we do not have all the answers yet. An important part of the Effective Altruism project is a search for and discussion about which moral values are most important so that we can maximize those values. Long-termism requires making value judgments about what is best for humanity in the long term. It involves grappling with questions like:

  • What principles should govern space exploration and potential colonization of other worlds?
  • What role should AGI play in human societies?
  • How should tradeoffs between medical privacy and the ability to respond quickly to global pandemics be made?

Answers to these sorts of questions of value vary from culture to culture. And to the extent that values determined by the EA community now may “lock in”, it is critical that the values of the community reflect the values of humanity as a whole. This is made even more important by demographic trends: Pew projects that in 2100 half of babies will be born in Africa, which is one of the areas of the world least represented in EA.

What do we do about this? The one potential tactical recommendation I would offer is this: Anecdotally, I’ve heard that many students in Nairobi have a hard time connecting with EA ideas because so much introductory EA material is pitched at people from rich countries. Perhaps creating new intro-to-EA materials, and tweaking existing ones, could help EA community organizers attract new members who can contribute to EA community discussions.

I don’t have any clear answers here, but would be very curious to see if others here agree with the assessment that the more important this century is, the more important it is to increase EA’s representativeness of the human population this century.


 

Comments

I'm trying to understand whether you mean one of two different propositions:
 

  1. The more important this century is, the more important in absolute terms it is to raise the representativeness of EA (in this century).
    1. That is, if we increase our probability that this is the most important century there is, we should expect more utility from increasing the representativeness of EA
  2. The more important this century is, the more important in relative terms it is to raise the representativeness of EA (in this century).
    1. That is, if we increase our probability that this is the most important century there is, this raises the relative importance of increasing the representativeness of EA, compared to the importance of other EA activities (e.g., recruiting scientific talent and researching ways to decrease existential risk)

The first proposition is a useful note, but does not by itself demand additional action. The second proposition suggests that we should change our prioritization. I think your post here is arguing for the second proposition, but I'm not sure and would like to get some clarity.

Ah good distinction! Agree I was not clear on that in my post (and to be honest, my thinking on it wasn't very clear either before you pointed out this distinction).

In part I am arguing for proposition 2. If it is the most important century, all long-term causes become more important relative to near-term causes. So at the very least, if it is the most important century, raising the representativeness of EA increases in importance relative to e.g., distributing bednets (1).

But what I'm really arguing for is that representativeness is more important for long-termism than most people in EA seem to think it is. And if the importance of raising EA's representativeness has been underrated (as I think it has been by the EA community), additional action is demanded. I look through the lens of "if this is the most important century, representativeness is urgent" to illustrate the point.

I could just as well, and maybe more accurately, have called this article "A long-termist argument for the importance of EA's representativeness based on value lock-in"

 

1.

I think it's a thornier question whether raising the representativeness of EA becomes more important relative to other long-term cause areas. The answer here would depend on the timelines of different long-termist issues, and the degree of lock-in each of them has.

  • Lock-in: If lock-in is stronger in decisions driven by value judgments than in decisions driven by scientific understanding, then representativeness increases in importance relative to recruiting scientific talent. Or the converse.
  • Timelines: Imagine that in an "EA business as usual" approach (e.g., not the most important century) it takes 30 years to attract the best scientific talent and 300 years to make EA representative. But in a "most important century" approach it takes 10 years to attract the best talent, and 10 years to make EA representative. Then "making EA representative" has likely increased in importance relative to "attracting the best scientific talent" as a result of it being the most important century. (My sense is that something like this is the case.)

I don't have a strong view on this, and it could make for some interesting analysis!

I could say much more about this topic, but I'll keep my comment short for lack of time:

I agree that it would be better if EA were more representative and had more geographic and ethnic diversity. I think that's true regardless of whether this century is the most important century. I still strongly upvoted this post, though, because it highlights how EA's diversity is important from a longtermist perspective too.

If you or anyone else is interested in the topic of growing EA in developing countries, you can watch this Q&A I did with the Hispanic EA community about how we started and grew EA Philippines, and some advice of mine that might cross-apply to other groups starting or operating in developing countries.

Thank you so much for sharing! Agreed that it's important regardless of whether this century is the most important. If you're interested, see my response to Linch above on this.

I watched the Q&A and wrote up notes as I was watching - thought I would make them shareable in case anyone else involved in community organising wants to see the main points but doesn't have time to watch! Notes here.

Thanks so much for taking these notes! I've added some suggestions. We normally abbreviate EA Philippines as EA PH rather than EAP, but that's a minor thing. And it's a community building grant I'm on, not a community grant.

Could I ask the Hispanic EA community to link to your notes in the YouTube video's description as well? That might help people find them.

Another thing I would have liked for the video is timestamps for certain questions/topics, so people wondering what my answers were can jump to those. But anyway, I think these notes work too.

No problem, thanks for doing the Q&A and for the suggestions! Happy if you want to share it with the Hispanic EA community.

I would love to see intro-to-EA materials that are more applicable for people living outside of high income countries.

I'm not sure if it would make sense to A) make the current intro-to-EA materials less targeted (and therefore more inclusive), B) have a parallel set of intro-to-EA materials for developing or other non-high-income countries, or C) have various regions/countries create their own intro-to-EA materials.

I lean towards A, but I think that it would be a massive undertaking. Maybe a good first step would be to take a single piece of intro-to-EA material and alter/tweak/update it to be more relatable outside of OECD countries, and then make a "merge request" so that it is integrated into the intro-to-EA materials. 

I suppose that more content created by people outside of OECD countries would be beneficial too. I'll consider this option D. To be blunt, it is mostly white native-English speakers who are writing blogs and posting YouTube videos about EA and EA-relevant topics. Maybe a small fund could encourage the Nigerian EA community in Lagos to produce content, and thereby provide some parallel and alternative narratives. This would also lessen the perception that "EA is primarily young men from wealthy families that attended top universities in the US & UK." I'm worried about poor messaging and misunderstandings though. It only takes one article mischaracterizing key concepts for a lot of people to be turned off of EA, so there do seem to be significant risks. I'd have to put a lot more thought into this.

Thanks for the good post. I'm reminded of a paper by the philosopher Elizabeth Anderson that you might find interesting. It's about how epistemic injustice (harm or unfairness done to a person in their capacity as a source of knowledge) is not just a transactional phenomenon between individuals, but is instantiated in social structures too. And responses to these injustices may need to be structural. https://doi.org/10.1080/02691728.2011.652211

The particular relevance to EAs might be that while each member may be epistemically virtuous (e.g. not allowing ethnic or racial biases to affect their judgments of others' claims), particular structures might still be objectionable. There's another paper (that I thought was by Anderson, but can't find!) that talks about the epistemic benefits of getting diverse input. 

[Edit: Also, just because I'm currently reading it and it is somewhat relevant, it's worth noting Hans Rosling's broad summary of changes in the global population distribution: currently there are 1 billion people in Europe, 1 billion in the Americas, 1 billion in Africa, and 4 billion in Asia. In 2100, it is predicted that there will still be 1 billion in Europe and 1 billion in the Americas, but 4 billion in Africa and 5 billion in Asia.]

Ah yes that sounds super relevant!

Unfortunately the paper is behind a paywall and I'm not a student. And while it might be fine from an individual-morality standpoint to pay for philosophy papers, I object to the academic journal system that requires it, so I can't in good conscience shell out $45 to read it ;)

Thanks for sharing though!

[And thanks for the handy stats]

No worries, and although I'm a little unsure if it is against forum rules or whatever, this might be helpful: https://sci-hub.mksa.top/10.1080/02691728.2011.652211
