Today, a few Bay Area EAs and I are asking the question, "How can we measure whether the EA movement is winning?"

Intuitively, deciding on a win condition seems important for answering this question. Most social movements appear to have one: a state of the world that looks different from the present state, often implicit in the movement's very name (e.g., abolitionism, animal rights).

What does winning look like for EA? And how do we know if we're winning?

Discuss!


Some possible criteria:

  • Number of GWWC members
  • Number of GWWC pledge signers
  • Number of EA Facebook group members
  • Amount of traffic on this website
  • Amount of money moved by GiveWell
  • Number of 80,000 Hours career advice requests
  • Number of applications
  • Amount of media coverage
  • Amount of positive media coverage
  • Number of EA organizations and projects
  • Size and scale of EA organizations and projects
  • Number of job applications at EA organizations
  • Number of applications for EA funds, contests, and projects
  • Amount of money donated by EAs
  • Level of credibility EA holds in academia

I like this list. We could improve on it by establishing a hierarchy of metrics.

1st Tier: the most quantifiable and objective metrics, which are also most strongly tied to, or correlated with, direct impact.

  • Amount of money moved by GiveWell and/or other effective altruist organizations
  • Amount of money donated by effective altruists

2nd Tier: quantifiable metrics which aren't directly tied to increased impact, but are strongly expected to lead to it. In this tier I include memberships that are expected to lead to more donations and to help overcome constraints on talent and human capital.

  • Number of GWWC members
  • Number of GWWC pledge signers
  • Number of 80,000 Hours career advice requests
  • Number of effective altruism organizations and projects
  • Number of job applications at effective altruism organizations
  • Number of applications for effective altruism funds, contests, and projects
  • Scale and scope of effective altruism organizations and projects

3rd Tier: metrics which are less direct, more subjective, and less quantifiable, and which speak more to awareness than to expected impact.

  • Amount of traffic on this website
  • Amount of media coverage
  • Amount of positive media coverage
  • Level of credibility effective altruism holds in academia

I think it's possible for a metric to jump from one tier to the next in terms of how much confidence we put in it. This can happen under dramatic circumstances: for example, we would treat "media coverage" or "positive media coverage" as much stronger evidence of impact if effective altruism got a cover story in, e.g., TIME magazine.
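To make the tier idea concrete, here is a minimal sketch in Python of how the metrics could be rolled into a single weighted score, with 1st-tier metrics counting most. Every metric name, value, and weight below is a hypothetical placeholder, not real data.

```python
# A minimal sketch of the tier hierarchy: weight each metric by how
# directly it is believed to track impact. All names, values, and
# weights are hypothetical placeholders.

TIER_WEIGHTS = {1: 1.0, 2: 0.5, 3: 0.2}  # assumed confidence per tier

metrics = [
    # (name, tier, value normalized to [0, 1] against some chosen target)
    ("money_moved", 1, 0.40),
    ("gwwc_pledges", 2, 0.25),
    ("media_coverage", 3, 0.60),
]

def movement_score(metrics):
    """Weighted average in which impact-linked (1st-tier) metrics dominate."""
    weighted = sum(TIER_WEIGHTS[tier] * value for _, tier, value in metrics)
    total = sum(TIER_WEIGHTS[tier] for _, tier, _ in metrics)
    return weighted / total

print(f"overall movement score: {movement_score(metrics):.2f}")  # 0.38 here
```

The single number matters less than the weighting decision: a scheme like this forces us to state explicitly how much we trust each metric as a proxy for impact.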

I'm skeptical of explicit metrics like "number of GWWC pledge signers", "money moved", etc. Any metric that gets proposed will be imperfect and may fall prey to Goodhart's law: once a measure becomes a target, it tends to stop being a good measure.

To me, careful analysis and thoughtful discussion are the most important aspects of EA. Good intentions are not enough. (Imagine if an earlier EA movement had focused on "money moved to Africa" as its success metric.)

The default case is for humans to act altruistically in order to look good, not do good. It's very important for us to resist the pull of this attractor for as long as possible.
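As a toy illustration of the Goodhart worry, here is a small deterministic sketch; the recruiting model and every number in it are invented for the example. Optimizing the proxy ("pledges signed") eventually drives the true objective ("pledges actually honored") down.

```python
# Toy Goodhart's-law demo: pushing the proxy metric up can push the
# true objective down. The model and all numbers are invented.

def recruit(pressure):
    """Hypothetical model: heavier recruiting signs more pledgers but
    pulls in less-committed people, so the follow-through rate drops."""
    signed = 10 * pressure                             # proxy metric
    follow_through = max(0.05, 0.9 - 0.11 * pressure)  # assumed decay
    kept = signed * follow_through                     # true objective
    return signed, kept

for pressure in (1, 4, 8):
    signed, kept = recruit(pressure)
    print(f"pressure={pressure}: signed={signed:3d}, honored={kept:5.1f}")
# As pressure rises, "signed" climbs 10 -> 40 -> 80 while "honored"
# rises and then collapses (about 7.9 -> 18.4 -> 4.0).
```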

Winning would mean turning the current negative feedback loop (donors give based on "warm glow", not impact -> charities are disincentivized to gather or provide meaningful impact info -> donors who want impact info can't find it and give based on warm glow) into a positive feedback loop (donors give based on impact -> charities are incentivized to achieve, measure, and report impact -> it becomes easier for donors to conduct better analysis).

More generally, drastically shifting the incentives people face around EA behavior (giving effectively, making impact-based career decisions, keeping robots from killing us, etc.).

A sustainable flourishing world!

I was reading Lifeblood by Alex Perry, which details the story of malaria bed nets. The book initially criticizes a lot of aid organizations because Perry claims that the aim of aid should be "for the day it's no longer needed". E.g., the goal of the Canadian Cancer Society should be to aim for the day when cancer research is unnecessary because we've already figured out how to beat it. What aid organizations actually do instead, he argues, is expand to fill a whole range of other needs, which is suboptimal.

EA is really no exception here. Suppose that in the future we've tackled global poverty, animal welfare, and climate change/AI risk/etc. We would simply move on to the next most important thing. Of course, EA differs from classical aid organizations in that it's closer to a movement or philosophy than a single aid effort. Nevertheless, I still think it might be useful to define "winning" as "eliminating the need for something". This could be something like "to reach a day when we no longer need to support GiveDirectly [because we've already eliminated poverty/destitution, or because we've reached a quality of wealth redistribution such that nobody is living below X dollars a year]".

On that note, for effective altruist organizations, I imagine that "not being needed" means "no longer being the best use of our resources", or "having hit significant diminishing marginal returns to additional work". That said, the condition under which an organization should rationally wind down is different from its success condition.

One obvious point: most organizations/causes have multiple, increasingly large success conditions. There isn't one "success condition" but a progressive set of improvements; we won't "win" in some absolute sense. I don't think Martin Luther King would have said that he "won": he accomplished a lot, but things were complicated at the end and there was still much to be done. Needless to say, though, he did quite well.

A better set of questions may be: "What are some reasonable goals to aim for?" and then "How can we measure how far we are from those specific goals?"

In completely pragmatic terms, I think the best goal for us is not legislation but monetary donations to EA-related causes.

  • Goal 1: $100M/year
  • Goal 3: $1B/year
  • Goal 4: $10B/year
  • etc.

The ultimate goal for all of us may be a positive singularity, though that is separate from effective altruism itself and harder to measure. Also, of course, the dollar amounts above would have to be adjusted for the quality of the receiving EA org relative to the best one.
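A hedged sketch of that quality adjustment: count each dollar in proportion to how effective the receiving organization is relative to the best available option. The organizations, quality factors, and dollar figures below are all invented for illustration.

```python
# Toy quality-adjusted "money moved" total. All orgs, quality factors,
# and dollar figures are hypothetical.

donations = {
    # org: (dollars per year, quality relative to the best org, in [0, 1])
    "org_a": (60_000_000, 1.0),  # taken as the benchmark
    "org_b": (30_000_000, 0.7),
    "org_c": (10_000_000, 0.4),
}

GOAL = 100_000_000  # the $100M/year milestone above

adjusted = sum(dollars * quality for dollars, quality in donations.values())
print(f"quality-adjusted total: ${adjusted:,.0f} ({adjusted / GOAL:.0%} of goal)")
# -> quality-adjusted total: $85,000,000 (85% of goal)
```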

There is, of course, still the question of how good the interventions are and how good the intervention-deciding mechanisms are. However, measuring or estimating those is quite a bit more challenging, and it is a distinct, largely orthogonal challenge from raising money. For instance, growing the movement and convincing people at large would be an "EA popularity goal", measured in money, while producing new research to understand effectiveness would be more of an "EA research goal". Two very different things.

Hitting sharply diminishing returns on QALYs/$

Currently you can buy decades and decades of QALYs for a year's salary or less, and that's just through straightforward, low-variance, uncontroversial purchases. If you cast your net wider (far-future concerns), you could potentially be purchasing trillions of QALYs in expectation. I'll consider EA to have won once those numbers drop to something reasonable.
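To put rough numbers on this claim, a back-of-the-envelope sketch; both input figures are assumptions, not established estimates.

```python
# Back-of-the-envelope check on the claim; both inputs are assumptions.

salary = 50_000        # one year's salary (assumed)
cost_per_qaly = 1_000  # optimistic cost of a marginal QALY today (assumed)

qalys = salary / cost_per_qaly
print(f"{qalys:.0f} QALYs ~= {qalys / 10:.0f} decades of healthy life")
# -> 50 QALYs ~= 5 decades of healthy life

# One possible "reasonable" endpoint: the marginal QALY costs about what
# wealthy health systems already pay, an often-cited ~$50,000 per QALY.
print(f"diminishing-returns factor remaining: {50_000 / cost_per_qaly:.0f}x")
```

On those assumptions, the cheapest marginal QALY would need to get roughly 50x more expensive before EA could declare victory by this definition.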

Clippy wants to point out that this goal could easily be achieved through a deadly virus that wipes out the human race, planetwide nuclear winter, etc. :P

Yep, that's fundamental. Also, we don't want to give the impression that our obligations are limited to opportunities that land in our lap. If we seem to be hitting diminishing returns, it's time to try looking for awesome opportunities in different domains.

I would think that any sufficiently concrete notion of winning is likely to depend sharply on cause area, or at least on particular assumptions that are not agreed upon in the EA community. Most EAs could probably agree that a world where utility (or some fairly similar metric or optimization target) is maximized is a win. But which world realizes this depends on views about the value of nonhuman animals, the weighing of good vs. bad experiences, and other issues where I've seen quite a bit of disagreement in the EA community.
