In EA we focus a lot on legible impact. At a tactical level, it's the thing that often separates EA from other altruistic efforts. Unfortunately I think this focus on impact legibility, when taken to extremes and applied in situations where it doesn't adequately account for value, leads to bad outcomes for EA and the world as a whole.

Legibility is the idea that only what can easily be explained and measured within a model matters. Anything that doesn't fit neatly in the model is therefore illegible.

In the case of impact, legible impact is that which can be measured easily in ways that a model predicts are correlated with outcomes. Examples of legible impact measures for altruistic efforts include counterfactual lives saved, QALYs, DALYs, and money donated; examples of legible impact measures for altruistic individuals include the preceding plus things like academic citations and degrees, jobs at EA organizations, and EA Forum karma.

Some impact is semi-legible, like social status among EAs, claims of research progress, and social media engagement. Semi-legible impact either involves fuzzy measurement procedures or low confidence models of how the measure correlates with real world outcomes.

Illegible impact is, by comparison, invisible: helping a friend who, without your help, might have been too depressed to get a better job and donate more money to effective charities, or filling a seat at an EA Global talk so that the speaker feels marginally more rewarded for the work they are talking about and is marginally incentivized to do more of it. Illegible impact is either hard or impossible to measure, or there's no agreed-upon model suggesting the action is correlated with impact. And the examples I gave are not maximally illegible, because they had to be legible enough for me to explain them to you; the really invisible stuff is like dark matter: we can see signs of its existence (good stuff happens in the world) but we can't say much about what it is (no model of how the good stuff happened).

The alluring trap is thinking that illegible impact is not impact and that legible impact is the only thing that matters. If that doesn't resonate, I recommend checking out the links above on legibility to see when and how focusing on the legible to the exclusion of the illegible can lead to failure.

One place we risk failing to adequately appreciate illegible impact is in work on far future concerns and existential risk. This comes with the territory: it's hard to validate our models of what will happen in the far future, and the feedback cycle is so long that it may be thousands or millions of lifetimes before we get data back that lets us know if an intervention, organization, or person had positive impact, let alone if that impact was effectively generated.

Another place impact risks being illegible is in dealing with non-humans, since there remains great uncertainty in many people's minds about how to value the experiences of animals, plants, and non-living dynamic systems like AI. Yes, people who care about non-humans are often legible to each other, because they share enough assumptions that they can share models and believe measures stated in terms of those models, but outside these groups interventions to help non-humans can seem broadly illegible, to the point of interpreting these interventions, like those addressing wild animal suffering, as silly or incoherent rather than potentially positively impactful.

Beyond these two examples, there's one place where I think the problems of illegible impact are especially neglected and that is easily tractable if we bother to acknowledge it. It's one EAs are already familiar with, though likely not framed in this way. And it's one that I perceive as having a lot of potential energy built up in it, ready to be unleashed to solve the problem, if only we know it's there. That problem is the illegibility of aspects of an individual's impact.

By itself, having some aspects of an individual's impact be illegible is not a problem, especially if they have many legible aspects of impact that provide feedback and indicators of their ability to improve the world. But in cases where most of a person's impact is illegible, it can create a positive feedback loop that destroys the possibility of future positive impact: an EA fails to have as much positive impact as they could because they believe they aren't on track to produce much positive impact, and so they downregulate the effort they put into producing positive impact since that effort appears ineffective. Some key evidence I see for the existence of this self-fulfilling prophecy includes:

  • Nate Soares's Replacing Guilt series, which appears to have been largely motivated by his interactions with people who feel guilty that they aren't doing enough and should be doing more, while their guilt simultaneously works against them being more effective;
  • Ozy's Defeating Scrupulosity series, which deals with shame a bit more generally and the way shame over failing to live up to one's own ideals results in difficulty at living up to those ideals;
  • anecdotally, plenty of EAs living with mental health issues find these issues are exacerbated by feelings of inadequacy, and since mental health issues often reduce productivity, it creates a self-reinforcing "death spiral" away from impact;
  • and the common experience of trying and failing to secure a job in EA (additional context), which can lead to feeling that one isn't good enough or isn't doing as much as one should be.

When I've talked to EAs in the throes of this positive feedback loop away from impact, or reflected on my own limited experience of it in years past, a common pattern emerges: by the sort of methods we apply in EA for measuring the impact of interventions, people are not so much actively getting evidence that they are having no impact or negative impact as they are getting little to no evidence of impact at all, and (rationally) taking this absence of evidence as evidence of absence of impact. This is often made worse the harder they try to make progress on work they believe to be tractable and impactful: they work ever harder and get no clear signals that it's amounting to anything, reinforcing the notion that they aren't capable of positive impact. As you can imagine, this can be very demotivating, so much so that it can even lead to burnout (also). It's hard to know how many dedicated EAs have dropped out because they tried, saw no signs they were making headway, and (reasonably) gave up, but I'm confident it's greater than zero.

Now it's possible that this situation is unfortunate but correct, viz. the people going through this impact death spiral are correctly moving away from EA efforts because they are not having positive impact, and the system is kicking them out so that they can have greater positive impact by not contributing at all. I suspect this is not the case, given that I can think of people who, by luck, good fortune, or the help of friends, managed to break out of an anti-impact positive feedback loop and went on to do legibly and positively impactful things. So given that the impact death spiral is in fact a problem, what might we do about it?

To me the first step is acknowledging that illegible impact is still impact. For example, to me all of the following activities are positively impactful to EA, such that if we didn't have enough of them going on the EA movement would be less effective and less impactful, and if we had more of them going on EA would be more effective and more impactful, yet all of them produce impact of low legibility, especially for the person performing the action:

  • Reading the EA Forum, LessWrong, the Alignment Forum, EA Facebook, EA Reddit, EA Twitter, EA Tumblr, etc.
  • Voting on, liking, and sharing content on and from those places
  • Helping a depressed/anxious/etc. (EA) friend
  • Going to EA meetups, conferences, etc. to be a member in the audience
  • Talking to others about EA
  • Simply being counted as part of the EA community

You'll notice some of these produce legible impact, but importantly not very much to the person producing the impact. For example, being the 14,637th person counted among the ranks of EA doesn't feel very impactful, but building the EA movement and bringing in more people who have more impact, some of whom will produce more legible impact, only happens through the marginally small contributions of lots of people.

Another tractable step toward addressing the problems caused by illegible individual impact is creating places to support illegible impact. I don't know that this is still or ever really was part of the mission of the EA Hotel (now CEEALAR), but one of the things I really appreciated about it from my fortnight stay there was that it provided a space for EA-aligned folks to work on things without the pressure to produce legible results. This to me seems extremely valuable because I believe many types of impact are quantized such that no impact is legible until a lot of things fall into place and you get a "windfall" of impact all at once, and that there is also a large amount of illegible, "dark" impact being made in EA that goes largely unacknowledged but without which EA would not be as successful.

To some extent I think valuing illegible impact is convergent with efforts to strengthen the EA community, though not always in legible ways like starting local groups and bringing in people, but rather via the illegible work that weaves strong communities together. Maybe we can figure out ways to make more aspects of building a strong community legible, but I don't think we should wait for the models and measures before doing the work, because I expect that without it we will fail. Thus we are put in the awkward situation of needing to do and acknowledge illegible impact in order to get more legible impact more effectively.

All of this is complicated by our desire as effective altruists to do the most good. Somewhere down the slippery slope of praising illegible impact is throwing money at ineffective charities and giving money to rich people to buy nicer positional goods. I think we are smart enough and strong enough as a community to figure out how to avoid that without also giving up the many things we risk losing by focusing too much on legible impact. I already see plenty to make me believe we will not Goodhart ourselves by becoming QALY monsters, but I also think we need to better appreciate illegible impact, since in a world where we did this enough I don't think we would see people suffer the impact death spiral.

Comments

This is a really good post! I often have difficulty trying to estimate my own illegible impact or that of other people. Here are some thoughts on the situation in general:

  1. People taking more time to thank others who have helped them would increase the amount of legible impact in the movement. I was startled to hear someone attribute their taking a job to me more than a year after the fact; this led me to update appropriately on the value of a prior project, and other projects of that type.
  2. It would be cool if people developed a habit of asking other people about impact they think they'd had. I'd love to see EA foster a culture where Bob can ask Alice "did our conversation last month have any detectable impact on you?", and Alice can answer truthfully without hurting Bob's feelings. (80,000 Hours and CFAR both seem to do a good job of hunting for evidence of illegible impact, though I'm concerned about the incentive fundraising organizations have to interpret this evidence in a way that overestimates their impact.)
  3. Small actions matter!
    1. I really appreciate people who take the time to vote on the Forum; very few posts get more than 50 votes, and many excellent posts only get a dozen or so. The more people vote, the better our sorting algorithm performs, and the more knowledge we (CEA) have about the types of content people find valuable. We have lots of other ways of trying to understand the Forum, of course, but data is data!
    2. Likewise, I'm really happy whenever I see someone provide useful information about EA to another person on Twitter or Reddit, whether that's "you might find this concept interesting" or "this claim you made about EA doesn't seem right, here's the best source I could find". If EA-affiliated people are reliably kind and helpful in various corners of the internet, this seems likely to contribute both to movement growth and to a stronger reputation for EA among people who prefer kind, helpful communities (these are often very good people to recruit).

People taking more time to thank others who have helped them would increase the amount of legible impact in the movement. I was startled to hear someone attribute their taking a job to me more than a year after the fact; this led me to update appropriately on the value of a prior project, and other projects of that type.
 

Hey Aaron, this comment left an impression on me. I think I am (marginally) more likely to leave this feedback now. 

Thanks for letting me know! Your comment, in turn, makes me update a tiny amount on the value of leaving comments with this kind of advice :-)

Once upon a time, my desire to build useful mastery and a career made me neglect my family, and more precisely my little brothers. Not dramatically, but whenever we were together, I was too caught up in my own issues to act like a good older brother, or even to interact properly with them. At some point, I realized that giving time and attention to my family was also important to me, and thus that I could not simply allocate all my mental time and energy to "useful" things.

This happened before I discovered EA, and is not explicitly about the EA community, but that's what popped into my mind when reading this great post. In a sense, I refused to do illegible work (being a good brother, friend, and son) because I considered it worthless in comparison with my legible aspirations.

Beyond becoming utility monsters, I think what you are pointing out is that optimizing what we can measure, even with caveats, leads to neglecting small things that matter a lot. And I agree that this issue is probably tractable, because the illegible but necessary tasks themselves tend to be small, not too much of a burden. If one wants to allocate their career to it, great. But everyone can contribute in the ways you point out. Just a bit every day. Like calling your mom from time to time.

That is to say, just like you don't have to have an EA job to be an effective altruist, you don't have to dedicate all your life to illegible work to contribute some.

Thank you for writing this. As someone who estimates his own career path has almost entirely illegible impact, it's made me more excited to continue trying to maximise that impact, even though it's unlikely to be visible. I thought it was worth commenting mostly because, even though the majority of the impact you've had by writing this post will be illegible, it might be nice to see some of it.

This is a great post, thanks for writing it. I've thought about this topic a lot, specifically about the value of helping others in the movement - I used the term "soft impact", because it improves coordination, community, and interconnectedness. These actions often are illegible, but don't have to be.

EAs do this a lot anyway, but making the value of such actions explicit, or trying to track it, could be really useful (and I think more of the following would only help!)

Examples:

  • connecting people to each other with the goal of them collaborating, one person getting advice, or just having shared interests.
  • providing detailed feedback on projects, posts etc.
  • sending people relevant resources
  • encouraging someone
  • maintaining connections with people whether that's in person, email, at conferences etc.

To track soft impact, you could send a message to people you know (maybe annually) and ask for feedback. Some people send out feedback forms so that it can be anonymous and people can give honest answers.

I have kept a list of introductions I have made and update it occasionally. I also track projects where I have given feedback. I will need to send out feedback forms at some point and see the actual impact, but even if only 1 in 5 introductions is very useful, I think this is worth the few minutes it takes to make an intro.

I'd be curious to know how much time people put aside to do these kind of things - especially for people who aren't very involved in meta stuff.

I don't know that this is still or ever really was part of the mission of the EA Hotel (now CEEALAR), but one of the things I really appreciated about it from my fortnight stay there was that it provided a space for EA-aligned folks to work on things without the pressure to produce legible results. This to me seems extremely valuable because I believe many types of impact are quantized such that no impact is legible until a lot of things fall into place and you get a "windfall" of impact all at once

Yes, this was a significant consideration in my founding of the project. We also acknowledge it where we have collated outputs. And whilst we have had a good amount of support (see histogram here), I feel that many potential supporters have been holding back, waiting for the windfall (we have struggled with a short runway over the last year).

Cf. Katja Grace's Estimation is the best we have (which was re-published in the first version of the EA Handbook, edited by Ryan Carey).

Since I originally wrote this post I've only become more certain of the central message, which is that we EAs, and rationalist-like people in general, are at extreme risk of Goodharting ourselves. See for example a more recent LW post on that theme.

In this post I use the idea of "legibility" to talk about impact that can be easily measured. I'm now less sure that was the right move, since legibility is a bit of jargon that, while it's taken off in some circles, hasn't caught on more broadly. Although the post deals with this, a better version of this post might avoid talking about legibility altogether and instead speak in more familiar language about measurement, etc. that people already know. There's nothing in here that I think hinges on the idea of legibility, though it's certainly helpful for framing the point, so if there were interest I think I'd be willing to revisit this post and see if I can make a shorter version of it that doesn't require teaching extra jargon on top of all the other necessary jargon.

I think I'd also highlight the Goodharting part more, since that's really what the problem is: more time on Goodharting and why the impact death spiral is a consequence of it, less time circling the topic.
