
I crowdsourced EA criticisms at the recent EAG San Francisco conference by wandering around bothering people.

Why is Crowdsourcing Criticisms a Good Idea?

You can win money by writing up a criticism, so if people had criticisms, wouldn't they just write their own forum posts?

There are many reasons why they might not:

  • Too busy
  • Too lazy
  • Writing is hard
  • I didn't realise that this idea floating in my head was a criticism until given the right prompt
  • Not sure if this is a real issue or I'm just being dumb
  • Didn't know about the competition
  • No one wants to be accountable for the criticism because it might be unpopular or wrong. If Alice tells me a criticism then she's not the one who decided that this random observation was worth publishing. And I'm just repeating what Alice told me. So neither of us is accountable.
  • You can't write a criticism if you don't have hands

Unfortunately, I never managed to find anyone who didn't have hands.

"Methodology"

  • In the last 8 hours of the EAG conference, I realised that I was sitting on a gold mine for data collection. If I could run a survey of conference attendees and prove with hard numbers that Big EA was failing The People in a systematic way, then I'd have a really solid entry for the criticism competition and could win some money.
  • I had a hunch that there's a failure mode something like:
    • By encouraging people to go out in the world and be super ambitious at doing weird things, EA is creating a handful of highly-visible superstars, and buckets of people who get derailed from conventional career paths and silently fail as their life falls apart.
  • So I started asking people if and how EA had failed or helped them personally (quantified in QALYs), and whether they had had any tangible impact as a result of engaging with EA.
  • I got a pseudo-random sample of conference attendees by wandering around and approaching anyone who looked available.
    • A better approach would have been to get a list of attendees ahead of the conference, then request one-on-ones with a random sample (a toy sketch of this kind of sampling appears at the end of this section).
  • After I approached someone, I would ask if they were open to a few minutes of conversation. If they consented I would quickly explain what I was doing and why, then go into the questions.
  • After surveying only 4 people, my original hypothesis was effectively debunked (at least with respect to the EAG population). Most people at the conference were living significantly better lives because of EA and they had achieved tangible positive impact as a result of interacting with EA.
  • Engaging with EA had been worth 2 QALYs, 2 QALYs, 11 QALYs, and 2 QALWs (quality adjusted life weeks) to the people I talked to, in purely selfish terms.
  • Only 1 of the 4 people I talked to couldn't think of any concrete altruism that had been achieved as a result of them interacting with EA. But they had only been engaging with EA for a month or so.
  • I pivoted to totally unstructured interviews where I simply asked conference attendees if they had any criticisms of EA, or if EA had failed them or could have served them better in some way.
  • I managed to talk to an additional 9 people before the conference ended, for a total of 13 people.
  • Some of the interviews were very short. A few people had literally nothing bad to say about EA.
  • Some of the interviews were long. When someone said something surprising I tried to understand it as well as possible.
  • One interviewee joked that they were the wrong person to ask for criticisms because they were too happy with EA. Which gave me the idea to deliberately seek out people who weren't having a good time at the conference. If anyone out there was feeling miserable and excluded then clearly EA had failed them in some way and I should try to figure out what went wrong.
  • If I were a miserable person where would I be? Skulking in a corner probably.
  • When I approached someone in a corner, they said they were having a great time and the corner was the designated meeting spot for a one-on-one.
  • Finding unhappy people is really hard.
    • Firstly, it feels pretty icky looking for unhappy people. There's something predatory about it.
    • Secondly, unhappy people look a lot like people who want to be left alone. If someone's staring at their phone in a corner, is it because they couldn't find anyone to talk to, because they're exhausted from talking to too many people, or because they're doing important work and need to focus?
    • Thirdly, my brain kept saying "Hey, this person looks like they'd be fun to talk to. Let's approach them." NO BRAIN. The whole point is to find outsiders and have difficult and awkward conversations.
  • One person said "When I feel low, I go to the nap room."
    • This gave me the idea of putting up fake "nap room" signs and luring unhappy people into a dark room where I could interrogate them. Fortunately for everyone involved, I did not have time for this.
  • So I never really did figure out a good strategy for finding unhappy outsiders. Almost everyone I talked to was having a blast. The best I could do was chat to a handful of non-young-white-males hanging out among the beanbags.
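
For what it's worth, here is a minimal sketch of the "random sample" approach mentioned above, assuming you could get hold of an attendee list. The names, list, and function below are invented for illustration, not part of any real conference tooling.

```typescript
// Hypothetical attendee list; a real one would come from the conference app.
const attendees: string[] = ["Alice", "Bob", "Carol", "Dave", "Eve"];

// Fisher-Yates shuffle a copy of the list, then take the first k entries.
function sampleAttendees(list: string[], k: number): string[] {
  const copy = [...list];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, k);
}

// e.g. request one-on-ones with three randomly chosen attendees
console.log(sampleAttendees(attendees, 3));
```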

Raw List of Criticisms

Here are almost all the criticisms I heard in no particular order. These are not quotes, they are mostly in my own words and may reflect my biases. If anyone recognises their criticism here and doesn't like the way it is being used/presented, please get in contact. Also note that some of these were not presented as criticisms, but seemed worth including.

  • EAs aren't critical enough of EA ideas. Everyone believes basically the same stuff and this makes for boring conversation.
  • Some of the conferences come off as uncomfortably decadent. For example: there was a fancy dinner cruise on the Danube at a Prague conference.
  • EA is too meta.
  • EA overthinks everything.
  • The level of emphasis on AI alignment is "insane". (This critic wasn't convinced that AI was a big deal).
  • I heard second-hand about someone at the conference who really cared about climate change and was put off by how little attention it was given, especially compared to AI.
  • Someone else I talked to said they'd experienced the same thing, and even felt "judged" for continuing to care about climate change after learning that "it's not neglected".
  • The jargon makes a lot of EA discussion inaccessible to newbies. Problematic examples are "on the margin" and "counterfactual". These could be replaced with "one extra unit" and "where X happens instead of Y".
  • One person who was several years into a traditional career path said they couldn't see a bridge from their current work to something more impactful. Strategies like "quit current job and reskill as AI researcher" seemed very costly and had low chance of success.
  • The quiet space at the conference wasn't big enough.
  • Someone wanted to criticise a tweet which was criticising the food at EAG.
    • Remember the "too meta" criticism?
  • Not enough "deep" conversations at the conference. Everyone is regurgitating AI risk talking points and it's hard to make an emotional connection with an actual human being.
  • EA is too centralised. All the cool AI stuff is locked up in San Fran, so if you're not willing or able to pack up your life and move there, then you're always going to be an outsider.
  • It would be nice to have a "Personal/Social Skills" workshop at the start of the conference. The secret Straussian purpose of this workshop is to force shy people out of their shells with goofy exercises like doing an interpretive dance then describing how it made them feel.
  • All of the 5-10 most important/famous EAs are white men. It would be nice to hear other voices at keynotes.
  • There's too much emphasis on reducing bad and not enough on increasing good.
  • Not enough emphasis on economic growth.
  • Not enough emphasis on epistemology.
  • Too many new orgs are being created. Surely they're not all necessary?
  • There are many "naive EAs" who are arrogant and over-confident. These people are setting themselves up for disappointment and are also annoying to talk to.
  • EAs beat themselves up too much about not having enough impact ("impact distress")
  • EA can be clique-y
  • EA isn't Christian enough (said a Christian)
  • Efforts to support mental health feel token-y
  • Someone said they didn't trust Julia Wise as the contact person for mental health because one time she shared/published a sensitive private correspondence without the author's permission. I have no idea if this is true or just a nasty rumour, but either way something bad has happened. This is a thorny issue and I was very tempted to not publish it, but the whole point of this exercise is to air uncomfortable criticisms so here we are. [Update: this is an acknowledged mistake that has been apologised for. See Linch's comment.]
  • There's a growing disparity between EA elites and non-elites
  • There's an anxiety that comes along with wanting to have an impact
  • Giving loads of money to lots of young, inexperienced people who want to work on AI safety doesn't seem like a great idea

Policy Suggestions

In this section I attempt to do three things:

  • Identify themes in the above criticisms
  • Theorise about why the criticisms are coming up
  • Brainstorm solutions to the problem

1. Accommodate people who want to work on climate change

I only talked to 13 people, and yet I heard two reports of this sentiment: "I care about climate change and am put off by how dismissive EA is of it".

If people are dedicated to fighting climate change, it's preferable to give them EA tools so they can fight climate change more effectively, rather than meet a problem that's deeply important to them with a dismissive attitude and turn them off EA forever.

If someone enters an EA space looking for ideas on fighting climate change, then there should be resources available for them. You can try nudging them towards something you think is higher impact, but if they aren't interested then don't turn them away.

The EA tribe should at worst be neutral to the Climate Change tribe. But if EA comes across as dismissive of climate change, then there's the potential to make enemies through mere miscommunication.

[Note: this section has been significantly reworked after Gavin's input.]

2. Be more human / emotional

This point is mostly drawing from the following comment:

  • It's hard to make an emotional connection with an actual human being

But here are some other descriptors that seem related:

  • boring
  • overthinks
  • inaccessible
  • regurgitating AI risk talking points
  • naive
  • arrogant
  • over-confident
  • annoying to talk to

I think there's a theme here of: "when in EA spaces, you're often talking to ideas and not people".

But on the other hand, I've met plenty of EAs who are:

  • spontaneous
  • humble
  • kind
  • honest
  • emotional
  • engaged in torrid love affairs with humanity
  • alive to the possibilities of the world

I love talking to these people and want them to have a bigger role in the culture.

Some ideas:

  • I liked the "Personal/Social Skills" workshop idea. I would love it if there were people dancing and juggling and playing music at EA conferences.
  • More cultivation of art in EA. The creative writing competition was a good move in this direction. What about an EA painting competition? A song competition? A standup comedy competition?
  • Encourage more discussion of what EAs get up to outside of work hours. What did you learn about the lived experience of global poverty by backpacking through Africa? How do you cultivate empathy by reading novels? What are your top tips for cooking delicious, cheap, vegan meals?
  • Women tend to be better at interacting on the actual-human-being-with-emotions level. Get more microphones into the hands of women!
  • Similarly, people tend to get better at human connection with age (at least within my experience in EA). Raise the mean age of people with microphones!
  • Probably the same goes for other minorities in EA, but I'm less sure.
  • Wild pitch, but... what about deliberately building an EA TikTok community? TikTok is IMO the most human social media platform. It's full of normal people dancing and acting and joking and telling stories. And if you have compelling content you can reach a huge audience very fast. Great way to spread ideas. The polar opposite of the EA Forum, aka "wall-of-text city".

3. Give more attention to EA outsiders

Some extracts from the critiques:

  • Jargon
  • Inaccessible to newbies
  • Couldn't see a bridge from their current work to something more impactful
  • Too centralised
  • If you're not willing or able to pack up your life and move ... you're always going to be an outsider
  • clique-y
  • EA elites

It's like there's an island where all the "EA elites" or "EA insiders" hang out and talk to each other. They accumulate jargon, idiosyncratic beliefs, idiosyncratic values, and inside jokes about potatoes. Over time, the EA island drifts further and further out to sea, so it becomes harder and harder for people to reach from the mainland.

The island is a metaphor for EA spaces like conferences, retreats, forums, and meetups. Spaces designed by, and arguably for, EA insiders. The mainland is a metaphor for mainstream culture, where the newbies and outsiders live.

This dynamic leads to stagnation, cult vibes, and monoculture fragility.

I think the general strategy for combatting this trend is to redistribute attention in EA spaces from insiders at the frontier of doing good to everyone on a journey of doing more good.

This way, when Nancy Newbie hangs out in EA spaces, she doesn't just see an inaccessible island of superstar do-gooders. She sees an entire peninsula which bridges her current position all the way to the frontier. She can see both a first step to improve do-gooding on her current margin, and a distant ideal to aspire to.

Another way of thinking about it: here are two possible messages EA could be sending out into the world:

  • "We've figured out what the most important problems to solve are. You should come here and help us solve them."
  • "We've figured out some tools for doing good better. What kind of good do you want to do and how can we help you do more of it?"

EA tends to send out the first message, but the second message is much more inviting and kind, and I would guess it actually does more good in the long run.

Why does EA send out the first message? Legacies from academia? From industry? Is this just how human projects work by default?

Concrete ideas:

  • Keynotes should be less like "random person interviews Will MacAskill" and more like "Will MacAskill interviews random person". Or better yet, "Will MacAskill chats with a diverse sample of conference attendees about how EA can help them achieve more good in the world". I would guess that all parties would find this a more enlightening exercise.
  • The 80,000 Hours Podcast should have interviews with a random sample of people they've given career advice to, rather than just people who have had successful careers. This would give 80,000 Hours higher-fidelity feedback than they get by doing surveys, it would give listeners a more realistic sense of what to expect from being an ambitious altruist, and it would let everyone learn from failures rather than just successes.
  • More people should do what I'm doing here: going out and talking to "the people". Give the plebs a platform to make requests and voice concerns. "The people" can be conference attendees, forum users, facebook group members, university club members, or even people who have never interacted with EA before but are interested in doing good.
  • Do more user testing. Get random people off the street to read EA articles, submit grant requests, or post something on the EA forum. Take note of what they find confusing or frustrating.
  • Add a couple lines to the EA Forum codebase so that whenever anyone uses jargon like "on the margin" or "counterfactual", a link to a good explanation of the term is automatically inserted (a rough sketch of this idea follows this list).
  • Run surveys to find out what people at different levels of engagement actually value, know, and believe. Make EA spaces more accommodating to these.
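
As a toy illustration of the jargon-linking idea above: the snippet below wraps the first occurrence of each glossary term in a link to an explainer. This is a minimal sketch, not based on the actual EA Forum codebase, and the glossary terms and URLs are placeholders; real post content is HTML, so a production version would need to work on the parsed markup rather than raw text.

```typescript
// Placeholder glossary: jargon term -> explainer URL (both invented).
const glossary: Record<string, string> = {
  "on the margin": "https://example.org/explainers/on-the-margin",
  "counterfactual": "https://example.org/explainers/counterfactual",
};

// Wrap the first occurrence of each glossary term in a link to its explainer.
function linkJargon(postText: string): string {
  let result = postText;
  for (const [term, url] of Object.entries(glossary)) {
    const pattern = new RegExp(`\\b(${term})\\b`, "i");
    result = result.replace(pattern, `<a href="${url}">$1</a>`);
  }
  return result;
}

// Example: linkJargon("Donations matter on the margin.")
// -> 'Donations matter <a href="...">on the margin</a>.'
```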

4. Be more positive

More extracts:

  • too much emphasis on reducing bad and not enough on increasing good.
  • EAs beat themselves up too much about not having enough impact
  • anxiety that comes along with wanting to have an impact

Most EA discussion looks something like this:

Oh no! A Bad Thing is happening or will happen! We need to stop it!

And not like this:

Hey, wouldn't it be amazing if Good Thing happened? How can we cause it?

Why? Is there a structural reason why "preventing bad" is a better altruism strategy than "causing good"?

There might be within certain philosophies (such as prioritarianism and deontology), but to me "preventing bad" and "causing good" look like they're actually the same thing. They are both situations where:

  • there are two possible futures
  • one is better than the other
  • we cause the better future to come about

So the distinction between "preventing bad" and "causing good" is purely psychological.

Preventing bad tends to feel like:

  • obligation
  • recovering lost ground
  • fighting an uphill battle
  • anxiety and depression
  • burnout

Causing good feels like:

  • opportunity
  • getting away with something
  • excitement and joy
  • good long-term mental health

By my reckoning, if we reframe all altruistic activities as causing a positive rather than preventing a negative, then we should be able to both achieve more altruism, and have more fun doing it.

I have a ton more to say about this, but I didn't want to stray too far from the crowdsourced criticisms. More in a future post.

Also more in a previous post

5. More transparency about money

  • conferences come off as uncomfortably decadent
  • Giving loads of money to lots of young, inexperienced people who want to work on AI safety doesn't seem like a great idea

Related previous discussion:

Related anecdote:

When I got back from EAG and my dad picked me up from the airport, I was wearing an EAG-branded jumper. My dad asked a very good question: "Why is a conference about doing the most good giving out free merch?" After all, the money spent on that merch could have gone to malaria bednets or something.

I would guess the party line is something like:

In the long run, investing in people and culture will more than pay for itself in utils. Wearing merch makes EA feel like a real thing that's part of your identity, so you're much more likely to let EA influence major life decisions.

Which is true... but it takes a long time to explain.

What looks to me like the best solution is to invest aggressively in altruism when it makes sense to do so, even in things that feel like luxuries, but be as transparent as you can about it:

  • EAG merch should have "THIS JUMPER COST $15 TO MAKE, AND WE EXPECT IT TO GENERATE $25 IN VALUE FOR THE WORLD" printed on the back
  • There could be a public payscale for EA workers. If an EA organisation follows this payscale then they get a badge on their website.
  • Encourage grant proposals and feedback to be published on the forum. This would also let people learn more about what kinds of projects funders are keen to fund.
  • Publish reports on how much money is spent on conferences, etc. Don't justify each expense, but give people the opportunity to ask "why was this expense made?" and if no good answer can be produced then don't make that expense again.

This is a delicate area, and these may be terrible suggestions. But the general strategy of "more transparency" looks like the best way to both aggressively invest in altruism and avoid optics problems.

6. Better mental health support

  • Efforts to support mental health feel token-y.
  • EAs beat themselves up too much about not having enough impact
  • anxiety that comes along with wanting to have an impact

This is a big, hard problem and I don't have any great suggestions.

I suspect some of the things I've already talked about would help:

  • Focusing more on positives
  • Not making EA look like an unreachable island
  • Making EA more emotional and human

7. More quiet space at conferences

  • The quiet space at the conference wasn't big enough.

This agrees with my own experience. I spent a bit of time in both the quiet working room and the nap room, and saw lots of people who couldn't find a space.

A Cautionary Tale

During the conference I thought I had come up with a clever hack to win some free money. If I just chat to a few people about what EA is doing wrong and transcribe it onto the forum, then I've got some of that uncompromising data-driven criticism that EA loves.

Boy was I wrong.

It turns out it's pretty easy to:

  • go out and absorb a few thousand words in conversation plus all kinds of social nuances and context cues
  • relate what someone is saying to my own experience so that I feel like I understand where they're coming from
  • scribble a few notes about the key points I heard

But that it's really hard to:

  • reconstruct what I was thinking at various stages
  • disentangle what the other person actually said from what I projected
  • figure out what was actually significant
  • communicate some subtle or personal topics through text on a public forum

I was reminded of Karnofsky's wicked problem post.

Some of the issues I encountered felt surprising and important, so I wanted to make it as easy as possible for them to be corrected. This required:

  • understanding the problem well
  • coming up with a clear and compelling explanation of the problem
  • thinking of workable solutions.

Each criticism opened up a world of tangents and associations, and it's been really hard to decide how far to go down each rabbit hole.

This write-up also took some weirdly personal turns. At one point I wrote a caricature of an EA to illustrate all the problems, then realised "OH MY GOD, THIS IS ME EXACTLY" then got spooked and deleted it.

I've also been trying to not do the things that were being criticised, which is really hard.

Do not undertake this kind of project lightly.

More Criticisms Please

If you have anything to add to something discussed here, please leave a comment, message me, or enter something into this anonymous form.

If you have any additional criticisms which you don't expect to write up yourself, please leave a comment, message me, or enter something into this anonymous form.

If you can think of any way that this post is bad or could be made better, please leave a comment, message me, or enter something into this anonymous form.

I will also write one comment below for each of the suggestions I've discussed here. If you particularly agree with any of the suggestions, upvote that comment. The number of upvotes will then give a sense of how important each point is.

Comments

If you agree that EA should:

Give more attention to EA outsiders

Please upvote this comment (see the last paragraph of the post).

If you agree EA should:

Have more quiet spaces at conferences

Please upvote this comment (see the last paragraph of the post).

I really liked the tone of this post, it was funny and charming

Impressed with the infrastructure you built around the post (the anon forms and the votey comments)! Also love the randomisation ideas.

You do well at reporting the views without necessarily endorsing them in the first half - but then the policy suggestions seem to endorse every criticism. (Maybe you do agree with all of them?) But if not there's a PR flavour to it: "we have to spend on climate cos otherwise people will be sad and not like us". Of the four arguments in Policy section 1, none seem to depend on estimating the expected value and comparing it to the existing EA portfolio, as Ben Dixon memorably did.

(I'd have no objection if the section was titled "appropriate respect for climate work" rather than "more emphasis", which implies a zero-sum bid for resources, optimality be damned.)

I ended up significantly reworking the section. Any feedback on the new version?

Thank you and good points.

but then the policy suggestions seem to endorse every criticism. (Maybe you do agree with all of them?)

I guess what I was attempting was to steelman all of the criticisms I heard. Trying to come up with a version I do agree with.

I will change the title to "Be more respectful of climate change work"

Someone said they didn't trust Julia Wise as the contact person for mental health because one time she shared/published a sensitive private correspondence without the author's permission. I have no idea if this is true or just a nasty rumour, but either way something bad has happened. This is a thorny issue and I was very tempted to not publish it, but the whole point of this exercise is to air uncomfortable criticisms so here we are.

This is true and well-documented, see here (under confidentiality mistakes) and here. I do consider it a nontrivially large mistake, but I think we have all made nontrivial mistakes on the job, and I personally do not see this as disqualifying Julia.

I'm troubled that the version of the story you heard didn't mention it was a fuckup she repeatedly apologised for.

Sorry I should've been more clear in my comment. The links should've been pretty obvious though (like it was important enough to be prominently on CEA's list of mistakes).

I meant Hamish

Yeah, hard to know what to do with that. I'll make it clear in the post that it is an acknowledged mistake that has been apologised for.

Great, thank you.

I will update the bullet point with a link to your comment.

If you agree that EA should:

Be more accommodating of people who want to work on climate change

Please upvote this comment (see the last paragraph of the post).

If you agree EA should:

Be more positive

Please upvote this comment (see the last paragraph of the post).

If you agree that EA should:

Be more human / emotional

Please upvote this comment (see the last paragraph of the post).

If you agree EA should:

Have better mental health support

Please upvote this comment (see the last paragraph of the post).

If you agree EA should:

Have more money transparency

Please upvote this comment (see the last paragraph of the post).
