
EDIT: Scott has admitted a mistake, which addresses some of my criticism.



(this comment has overlapping points with titotal's)

I've seen a lot of people strongly praising this article on Twitter and in the comments here but I find some of the arguments weak. Insofar as the goal of the post is to say that EA has done some really good things, I think the post is right. But I don't think it convincingly argues that EA has been net positive for the world.[1]

First: based on surveys, it seems likely that most (not all!) highly-engaged/leader EAs believe GCRs/longtermist causes are the most important, with a plurality thinking AI x-risk, or x-risk more generally, is the most important.[2] For the rest of this comment I will analyze the post from a ~GCR/longtermist-oriented worldview that thinks AI is the most important cause area; again, I don't mean to suggest that everyone holds it, but if something like it is held by the plurality of highly-engaged/leader EAs, it seems highly relevant for the post to be convincing from that perspective.

My overall gripe is exemplified by this paragraph (emphasis mine):

And I think the screwups are comparatively minor. Allying with a crypto billionaire who turned out to be a scammer. Being part of a board who fired a CEO, then backpedaled after he threatened to destroy the company. These are bad, but I’m not sure they cancel out the effect of saving one life, let alone 200,000.

(Somebody’s going to accuse me of downplaying the FTX disaster here. I agree FTX was genuinely bad, and I feel awful for the people who lost money. But I think this proves my point: in a year of nonstop commentary about how effective altruism sucked and never accomplished anything and should be judged entirely on the FTX scandal, nobody ever accused those people of downplaying the 200,000 lives saved. The discourse sure does have its priorities.)

I'm concerned about the bolded part; I'm including the caveat for context. I don't want to imply that saving 200,000 lives isn't a really big deal, but I will discuss this from the perspective of "cold hard math".

  1. 200,000 lives equals roughly a ~.0025% reduction in extinction risk, or a ~.25% reduction in the risk of a GCR killing 80M people, if we place literally zero weight on future people (see the quick check after this list). To the extent we weight future people, the numbers obviously get much lower.
  2. The magnitude of the effect of the board firing Sam, whose sign is currently unclear IMO, seems arguably higher than .0025% extinction risk, and likely higher than 200,000 lives if you weight the expected value of all future people at >~100x that of current people.
  3. The FTX disaster is a bit more ambiguous because some of the effects are more indirect; a quick search for economic costs didn't turn up good numbers, but I think a potentially more important point is that it is likely, to some extent, an indicator of systemic issues in EA that might be quite hard to fix.
  4. The claim that "I’m not sure they cancel out the effect of saving one life" seems silly to me, even if we just look at generally large "value of a life" estimates compared to the economic costs of the FTX scandal.
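
A quick sanity check of the arithmetic in point 1, as a minimal sketch (assuming a current world population of roughly 8 billion; the exact denominator doesn't change the order of magnitude):

```python
# Rough check of the percentages in point 1 (illustrative only; population is an assumption).
lives_saved = 200_000
world_population = 8_000_000_000   # assumed ~8 billion people alive today
gcr_deaths = 80_000_000            # the hypothetical GCR killing 80M people

extinction_equivalent = lives_saved / world_population  # fraction of everyone alive today
gcr_equivalent = lives_saved / gcr_deaths               # fraction of the GCR death toll

print(f"~{extinction_equivalent:.4%} of current lives")  # ~0.0025%
print(f"~{gcr_equivalent:.2%} of an 80M-death GCR")       # ~0.25%
```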

Now I'll discuss the AI section in particular. There is little attempt to compare the effect sizes of the "accomplishments" (with each other, or with potential negatives, beyond a brief allusion to EAs accelerating AGI) or to argue that they are net positive. The effect sizes seem quite hard to rank to me, but I'll focus on a few that seem important but potentially net negative (not claiming that they definitely are!), in order of their listing:

  1. "Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT."
    1. This is, needless to say, controversial in the AI safety community.
  2. Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?
    1. As I said above, the sign of this still seems unclear, and I'm confused about why it's included when Scott later seems to consider it a negative.
  3. Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.
    1. Again, controversial in the AI safety community.
  1. ^

    My take is that EA has more likely than not been positive, but I don't think it's that clear and either way, I don't think this post makes a solid argument for it.

  2. ^

    As of 2019, EA Leaders thought that over 2x (54% vs. 24%) more resources should go to long-term causes than to short-term ones, with AI getting the most (31% of resources), and the most highly-engaged EAs felt somewhat similarly. I'd guess that the AI figure has increased substantially given rapid progress since 2019/2020 (2020 was the year GPT-3 was released!). We have a 2023 survey of only CEA staff, in which 23/30 people believe AI x-risk should be a top priority (though only 13/30 say "biggest issue facing humanity right now", vs. 6 for animal welfare and 7 for GHW). CEA staff could be selected for thinking AI is less important than those directly working on it do, but more important than those at explicitly non-longtermist orgs do.

seems arguably higher than .0025% extinction risk, and likely higher than 200,000 lives if you weight the expected value of all future people at >~100x that of current people.

If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.

I interpreted him to be saying something like "look Ezra Klein et al., even if we start with your assumptions and reasoning style, we still end up with the conclusion that EA is good." 

And it seems fine to me to argue from the basis of someone else's premises, even if you don't think those premises are accurate yourself.

I do think it would have been clearer if he had included a caveat like "if you think that small changes in the chance of existential risk outweigh ~everything else then this post isn't for you, read something else instead" but oh well.

If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.

I mostly agree with this; I wasn't suggesting he include that specific type of language, just that the arguments in the post don't go through from the perspective of most leader/highly-engaged EAs. Scott has discussed similar topics on ACT here, but I agree the target audience was likely different.

I do think part of his target audience was probably EAs who he thinks are too critical of themselves, as I think he's written before, but it's likely a small-ish fraction of his readers.

I do think it would have been clearer if he had included a caveat like "if you think that small changes in the chance of existential risk outweigh ~everything else then this post isn't for you, read something else instead" but oh well.

Agree with that. I also think that if this is the intention, the title should maybe be different: instead of "In continued defense of effective altruism" it could be something like "In defense of effective altruism from X perspective". The current title seems to me to imply that effective altruism has been positive on its own terms.

Furthermore, people who identify as ~longtermists seemed to be sharing it widely on Twitter without any caveat of the type you mentioned.

And it seems fine to me to argue from the basis of someone else's premises, even if you don't think those premises are accurate yourself.

I feel like there's a spectrum of cases here. Let's say I, as a member of movement X in which most people aren't libertarians, write a post "The libertarian case for X", in which I argue that X is good from a libertarian perspective.

  1. Even if those in X usually don't agree with the libertarian premises, the arguments in the post still check out from X's perspective. Perhaps the arguments are reframed to show libertarians that X will lead to positive effects under their belief system as well as under X's. None of the claims in the post contradict what the most influential people advocating for X think.
  2. The case for X is distorted, and statements in the piece are highly optimized for convincing libertarians. Arguments aren't just reframed; new arguments are created that the most influential people advocating for X would disendorse.

I think pieces or informal arguments close to both (1) and (2) are common in the discourse, but I generally feel uncomfortable with ones closer to (2). Scott's piece is somewhere in the middle, perhaps even closer to (1) than (2), but it's too far toward (2) for my taste, given that one of the most important claims in the piece, one that makes his whole argument go through, may be disendorsed by the majority of the most influential people in EA.

Pablo - thanks for sharing the link to this excellent essay.

Apart from the essay being a great intro to EA for people made semi-skeptical by the FTX and OpenAI situations, Scott Alexander's writing style is also worthy of broader emulation among EAs. It's light, crisp, clear, funny, and fact-based -- everything that EA writing for the public should be.

Scott Alexander's writing style is also worthy of broader emulation among EAs

I very much agree. Here’s a post by Scott with nonfiction writing advice.

Here's a long excerpt (happy to take it down if asked, but I think people might be more likely to go read the whole thing if they see part of it): 

The only thing everyone agrees on is that the only two things EAs ever did were “endorse SBF” and “bungle the recent OpenAI corporate coup.”

In other words, there’s never been a better time to become an effective altruist! Get in now, while it’s still unpopular! The times when everyone fawns over us are boring and undignified. It’s only when you’re fighting off the entire world that you feel truly alive.

And I do think the movement is worth fighting for. Here’s a short, very incomplete list of things effective altruism has accomplished in its ~10 years of existence. I’m counting it as an EA accomplishment if EA either provided the funding or did the work, further explanations in the footnotes. I’m also slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart:

Global Health And Development

  • Saved about 200,000 lives total, mostly from malaria1
  • Treated 25 million cases of chronic parasite infection.2
  • Given 5 million people access to clean drinking water.3
  • Supported clinical trials for both the RTS.S malaria vaccine (currently approved!) and the R21/Matrix malaria vaccine (on track for approval)4
  • Supported additional research into vaccines for syphilis, malaria, helminths, and hepatitis C and E.5
  • Supported teams giving development economics advice in Ethiopia, India, Rwanda, and around the world.6

Animal Welfare:

  • Convinced farms to switch 400 million chickens from caged to cage-free.7
  • Things are now slightly better than this in some places! Source: https://www.vox.com/future-perfect/23724740/tyson-chicken-free-range-humanewashing-investigation-animal-cruelty
  • Freed 500,000 pigs from tiny crates where they weren’t able to move around8
  • Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat.

AI:

  • Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT.9
  • …and other major AI safety advances, including RLAIF and the foundations of AI interpretability10.
  • Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.11
  • Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.
  • Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.
  • Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?12
  • [Skipped screenshot]
  • Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.13
  • [Skipped screenshot]
  • Become so influential in AI-related legislation that Politico accuses effective altruists of having “[taken] over Washington” and “largely dominating the UK’s efforts to regulate advanced AI”.
  • Helped (probably, I have no secret knowledge) the Biden administration pass what they called "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”
  • Helped the British government create its Frontier AI Taskforce.
  • Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.

Other:

I think other people are probably thinking of this as par for the course - all of these seem like the sort of thing a big movement should be able to do. But I remember when EA was three philosophers and a few weird Bay Area nerds with a blog. It clawed its way up into the kind of movement that could do these sorts of things by having all the virtues it claims to have: dedication, rationality, and (I think) genuine desire to make the world a better place.

II.

Still not impressed? Recently, in the US alone, effective altruists have:

  • ended all gun violence, including mass shootings and police shootings
  • cured AIDS and melanoma
  • prevented a 9-11 scale terrorist attack

Okay. Fine. EA hasn’t, technically, done any of these things.

But it has saved the same number of lives that doing all those things would have.

About 20,000 Americans die yearly of gun violence, 8,000 of melanoma, 13,000 from AIDS, and 3,000 people in 9/11. So doing all of these things would save 44,000 lives per year. That matches the ~50,000 lives that effective altruist charities save yearly18.

People aren’t acting like EA has ended gun violence and cured AIDS and all those things. Probably this is because those are exciting popular causes in the news, and saving people in developing countries isn’t. Most people care so little about saving lives in developing countries that effective altruists can save 200,000 of them and people will just not notice. “Oh, all your movement ever does is cause corporate boardroom drama, and maybe other things I’m forgetting right now.”

In a world where people thought saving 200,000 lives mattered as much as whether you caused boardroom drama, we wouldn’t need effective altruism. These skewed priorities are the exact problem that effective altruism exists to solve - or the exact inefficiency that effective altruism exists to exploit, if you prefer that framing.
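
For what it's worth, the per-year arithmetic in that last section checks out; a minimal sketch using the figures exactly as quoted:

```python
# Check the yearly US death figures quoted above (numbers as given in the excerpt).
us_deaths_per_year = {
    "gun violence": 20_000,
    "melanoma": 8_000,
    "AIDS": 13_000,
    "9/11-scale attack": 3_000,
}
total = sum(us_deaths_per_year.values())
print(f"{total:,}")  # 44,000 per year, vs. the ~50,000 lives/year attributed to EA charities
```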

Claude.ai summary for those in a hurry:

The article argues in defense of the effective altruism movement, citing its accomplishments in areas like global health, animal welfare, and AI safety, while contending criticisms of it are overblown. It makes the case that effective altruism's commitment to evidence-based altruism that focuses on the most tractable interventions to help others is a positive development worth supporting, despite some mistakes. The article concludes the movement has had significant positive impact that outweighs the negatives.

I'll read the article itself later, so be warned that I don't know how good this summary is.

Update: The summary is correct but significantly less viscerally motivating than the original. I love it!

This might be stating the obvious, but this article is not a balanced accounting of the positive and negative effects of the effective altruism movement. It's purely a list of "the good stuff", with only FTX and the OpenAI mess being mentioned as "the bad stuff".

For example, the article leaves out EA's part in helping get OpenAI off the ground, which many in AI think was a big mistake, and which I believe has already caused a notable amount of real-world harm.

It also leaves out the unhealthy cultish experiences at Leverage, the alleged abuse of power at Nonlinear, and the various miniature cults of personality that led to extremely serious bad outcomes, as well as the recent scandals over sexual harassment and racism.

It's also worth pointing out that in a counterfactual world without EA, a lot of people would still be donating and doing good work. Perhaps a pure Givewellian movement would have formed, focusing on evidence based global health alone, without the extreme utilitarianism and other weird stuff, and would have saved even more lives. 

This is not to say that EA has not been overall good for the world. I think EA has done a lot of good, and we should be proud of our achievements. But EA is not as good as it could be, and fixing that starts with honest and good-faith critique of its flaws. I'll admit you won't find a lot of that on Twitter, though.

It also leaves out the unhealthy cultish experiences at Leverage, the alleged abuse of power at Nonlinear, and the various miniature cults of personality that led to extremely serious bad outcomes, as well as the recent scandals over sexual harassment and racism.

That seems totally justified to me, because these events are many orders of magnitude smaller than the other issues being discussed. They matter for EAs because they are directly about EA, not because they are actually among the world's largest problems or represent a significant direct impact on the world.

I'm concerned that characterizing harms attributable to EA as several orders of magnitude less significant than the good of saving 200K lives and protecting humanity from AI could be used to dismiss an awful lot of bad stuff, including things significantly worse than anything in the quotation from @titotal.

Even accepting the argument and applying it to Scott Alexander's blog post, I don't think an orders-of-magnitude defense or an internal-affairs defense is fully convincing. Two of the items were:

  • Donated a few hundred kidneys.
  • Sparked a renaissance in forecasting, including major roles in creating, funding, and/or staffing Metaculus, Manifold Markets, and the Forecasting Research Institute.

The first bullet involves benefit to a relatively small group of people; by utilitarian reckoning it is several orders of magnitude less than the top-line accomplishments on which the post focuses. Although the number of affected people is unknown, the significant trauma people experience due to sexual harassment, sexual assault, and cult-like environments would not be several orders of magnitude less significant than giving people with kidney disease more, higher-quality years of life.

The second bullet is not worth mentioning for the benefits accrued by the forecasting participants; it is only potentially worth mentioning because of its indirect effects on the world. If that's fair game despite being predominantly inside baseball at this point, then it should also be fair game to note that EA may have given people who created cultish experiences and/or abused power more influence over AI safety than they would counterfactually have had, and may have made the AI and AI safety communities even more male-centric than they would counterfactually have been, by way of sexual harassment and assault making women feel unsafe.

The list of achievements in the post contains items like "played a big part in creating the YIMBY movement" (a bit of an overstatement imo) and "sparked a renaissance in forecasting" (made a bunch of fun online prediction markets). To be clear, I like both of these things!

But if you are going to include relatively minor things like this in the "pros" column, it's disingenuous to leave things like "accidentally created a cult" out of the "cons" column (plus the worse stuff I'm trying to avoid mentioning directly here).

Either we can list all the benefits and all the harms caused by EA, or we can list only the very seriously good stuff and the very seriously bad stuff. If you list all of the benefits and only the super mega bad harms, you aren't making an honest argument. That said, I wouldn't even have had a problem if he'd just stated upfront that he was writing an unbalanced defence.

I think it's a bit messy. Each individual one of these really doesn't have large consequences, but it matters a lot inasmuch as Scott's list of good things about EA is in substantial part a list of "EAs successfully ending up in positions of power", and stuff like Leverage and Nonlinear is evidence about what EAs might do with that power.

... Scott's list of good things about EA is in substantial part a list of "EAs successfully ending up in positions of power"

I think this is not a very good description of Scott's list? Let's go through it, scoring each item YES if it matches your description or NO if not.

Note that some of these NOs touch on political power, but they are about the results of that power, not the mere gaining of it.

Global Health And Development

NO: Saved about 200,000 lives total, mostly from malaria1

NO: Treated 25 million cases of chronic parasite infection.2

NO: Given 5 million people access to clean drinking water.3

NO: Supported clinical trials for both the RTS.S malaria vaccine (currently approved!) and the R21/Matrix malaria vaccine (on track for approval)4

NO: Supported additional research into vaccines for syphilis, malaria, helminths, and hepatitis C and E.5

NO: Supported teams giving development economics advice in Ethiopia, India, Rwanda, and around the world.6

Global Health And Development Score: 0/6

Animal Welfare:

NO: Convinced farms to switch 400 million chickens from caged to cage-free.7

NO: Freed 500,000 pigs from tiny crates where they weren’t able to move around8

NO: Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat.

Animal Welfare Score: 0/3

AI:

NO: Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT.9

NO: …and other major AI safety advances, including RLAIF and the foundations of AI interpretability10.

NO: Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.11

NO: Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.

NO: Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.

YES: Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?12

YES: Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.13

YES: Become so influential in AI-related legislation that Politico accuses effective altruists of having “[taken] over Washington” and “largely dominating the UK’s efforts to regulate advanced AI”.

NO: Helped (probably, I have no secret knowledge) the Biden administration pass what they called "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”

NO: Helped the British government create its Frontier AI Taskforce.

NO: Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.

AI score: 3/11

Other:

NO: Helped organize the SecureDNA consortium, which helps DNA synthesis companies figure out what their customers are requesting and avoid accidentally selling bioweapons to terrorists14.

NO: Provided a significant fraction of all funding for DC groups trying to lower the risk of nuclear war.15

NO: Donated a few hundred kidneys.16

NO: Sparked a renaissance in forecasting, including major roles in creating, funding, and/or staffing Metaculus, Manifold Markets, and the Forecasting Research Institute.

NO: Donated tens of millions of dollars to pandemic preparedness causes years before COVID, and positively influenced some countries’ COVID policies.

NO: Played a big part in creating the YIMBY movement - I’m as surprised by this one as you are, but see footnote for evidence17.

Other: 0/6

So overall I would say that 'successfully ended up in positions of power' is a fair description of only 3/26 (12%) of his statements, which seems quite low to me. Virtually all of them are about the consequences of EAs gaining some measure of power or influence. This seems like much stronger evidence about the consequences of EAs gaining power than how NonLinear treated two interns.

 

(I am mostly thinking about the AI section, and disagree with your categorization there: 

NO: Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT

Yep, agree with a NO here

NO: …and other major AI safety advances, including RLAIF and the foundations of AI interpretability10.

Yep, agree with a NO here

NO: Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.

I think this should be a YES. This is clearly about ending up in an influential position in a field.

NO: Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.

This should also be a YES. This is really quite centrally about EAs ending up with more power and influence.

NO: Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.

This should also be a YES. Working with AI companies is about power and influence (which totally might be used for good, but it's not an intellectual achievement).

YES: Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?12

Agree

YES: Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.13

Agree

YES: Become so influential in AI-related legislation that Politico accuses effective altruists of having “[taken] over Washington” and “largely dominating the UK’s efforts to regulate advanced AI”.

Agree

NO: Helped (probably, I have no secret knowledge) the Biden administration pass what they called "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”

What we have seen here so far is institutes being founded and funding being promised, with some extremely preliminary legislation that might help. Most of this achievement is about ending up with people in positions of power, so it should be a YES.

NO: Helped the British government create its Frontier AI Taskforce.

This seems like a clear YES? The task force is very centrally putting EAs and people concerned about safety in power. No legislation has been passed.

NO: Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.

Agree

Overall, for AI in particular, I count 8/11. I think some of these are ambiguous, or are clearly relevant for more than just being in power, but this list of achievements is quite heavily weighted towards measuring the power that EA and AI Safety have achieved as a social movement, and not their achievements towards making AI actually safer.)

I think basically all of our disagreements here have the same form, so let's just focus on the first one:

NO: Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.

I think this should be a YES. This is clearly about ending up in an influential position in a field.

To my ear, this is not a claim about becoming influential. It is a claim about creating something in the past, and getting a bunch of people to agree. It's similar to saying "Karl Marx founded Communism and got many people to agree" - the claim is not that Marx is in an influential position (he cannot be, since he is dead) but that he created something. There are related claims that are about influence, like if it had instead said "Founded and remain influential in the field", but the actual claim here is fully consistent with EAs birthing the field and then seeing their relevance come to an end. The thing the hundreds of people endorsed wasn't the EA movement itself, or even specific EAs, but the abstract claim that AI is risky.

We might mostly be arguing about semantics. In a similar discussion a few days ago I was making the literal analogy of "if you were worried about EA having bad effects on the world via the same kind of mechanism as the rise of communism, a large fraction of the things under the AI section should go into the 'concern' column, not the 'success' column". Your analogy with Marx illustrates that point. 

I do disagree with your last sentence. The thing people are endorsing is very much both a social movement and some object-level claims. I think it differs between people, but there is a lot of endorsing of AI Safety as a social movement. Social proof is usually the primary thing invoked these days in order to convince people.

Virtually all of them are about the consequences of EAs gaining some measure of power or influence. This seems like much stronger evidence about the consequences of EAs gaining power than how NonLinear treated two interns.

I don't think this reasoning adequately factors in the negative effects of having people of poor character gain power and influence. Those effects can be hard to detect until something blows up. So evidence of EAs badly mistreating interns, running fraudulent crypto schemes, and committing sexual misconduct would be germane to the possibility that some other EAs have similar characterological deficits that will ultimately result in their rise to influence and power being a net bad thing.

In many cases, the community did not detect and/or appropriately react to evidence of bad character before things publicly blew up in a year of scandals, and thus did not prevent the individuals involved from gaining or retaining power and influence. That does not inspire confidence that everyone who currently has power and/or influence is of non-bad character. Furthermore, although the smaller scandals are, well, smaller . . . their existence reduces the likelihood that the failure to detect and/or react to SBF's bad character was a one-off lapse.

In the comment that originated this thread, titotal made a good point about the need for counterfactual analysis. I think this factor is relatively weak for something like AI safety, where the EA contribution is very distinct. But it is a much bigger issue for things like mistreating interns or sexual misconduct, because I am not aware of any serious evidence that EA has these problems at higher than expected* rates. 

* There is some subtlety here about what the correct comparison is - e.g. are we controlling for demographics? polyamory? - but I have never seen any such statistical analysis, with or without such controls.

Hm. Okay, I buy that argument. But we can still ask whether the examples are representative enough to establish a concerning pattern. I don't feel like they are. Leverage and Nonlinear are very peripheral to EA and they mostly (if allegations are true) harmed EAs rather than people outside the movement. CFAR feels more central, but the cultishness there was probably more about failure modes of the Bay Area rationality community than about anything to do with "EA culture."

(I can think of other examples of EA cultishness and fanaticism tendencies, including from personal experiences, but I also feel like things turned out fine as EA professionalized itself, for many of these instances anyway, so they can even be interpreted positively as a reason to be less concerned now.)

I guess you could argue that FTX was such a blatant and outsized negative example that you don't need a lot of other examples to establish the concerning pattern. That's fair. 

But then what is the precise update we should have made from FTX? Let's compare three possible takeaways:
(1) There's nothing concerning, per se, with "EA culture," apart from that EAs were insufficiently vigilant of bad actors. 
(2) EAs were insufficiently vigilant of bad actors and "EA culture" kind of exacerbates the damage that bad actors can do, even though "EA culture" is fine when there isn't a cult-leader-type bad actor in the lead.
(3) Due to "EA culture," EA now contains way too many power-hungry schemers that lack integrity, and it's a permeating problem rather than something you only find in peripheral groups when they have shady leadership.

I'm firmly in (2) but not in (3).

I'm not sure if you're concerned that (3) is the case, or whether you think it's "just" (2) but you think (2) is worrying enough by itself and hard to fix. Whereas I think (2) is among the biggest problems with EA, but I'm overall still optimistic about EA's potential. (I mean, "optimistic" relative to the background of how doomed I think we are for other reasons.) (Though I'm also open to re-branding and reform efforts centered around splitting up into professionalized subcommunities and de-emphasizing the EA umbrella.)

Why I think (2) instead of (3): Mostly comes down to my experience and gut-level impressions from talking to staff at central EA orgs and reading their writings/opinions and so on. People seem genuinely nice, honest, and reasonable, even though they are strongly committed to a cause. FTX was not the sort of update that would overpower my first-order impressions here, which were based on many interactions/lots of EA experience. (FWIW, it would have been a big negative update for me if the recent OpenAI board drama had been instigated by some kind of backdoors plan about merging with Anthropic, but to my knowledge, these were completely unsubstantiated speculations. After learning more about what went down, they look even less plausible now than they looked at the start.)

Leverage and Nonlinear are very peripheral to EA and they mostly (if allegations are true) harmed EAs rather than people outside the movement.

I will again remind people that Leverage at some point had approximately succeeded at a corporate takeover of CEA, placing both the CEO and their second-in-command in the organization. They really were not very peripheral to EA; they were just covert about it.

That's indeed shocking, and now that you mention it, I also remember the Pareto fellowship Leverage takeover attempt. Maybe I'm too relaxed about this, but it feels to me like there's no nearby possible world where this situation would have kept going? Pretty much everyone I talked to in EA always made remarks about how Leverage "is a cult" and the Leverage person became CEA's CEO not because it was the result of a long CEO search process, but because the previous CEO left abruptly and they had few immediate staff-internal options. The CEO (edit: CEA!) board eventually intervened and installed Max Dalton, who was a good leader. Those events happened long ago and in my view they tentatively(?) suggest that the EA community had a good-enough self-correction mechanism so that schemers don't tend to stay in central positions of power for long periods of time. I concede that we can count these as near misses and maybe even as evidence that there are (often successfully fended off) tensions with the EA culture and who it attracts, but I'm not yet on board with seeing these data points as "evidence for problems with EA as-it-is now" rather than "the sort of thing that happens in both EA and outside of EA as soon as you're trying to have more influence."

I think the self-correction mechanism was not very strong. I think that if Tara (who was also strongly supportive of the Leverage faction, which is why she placed Larissa in charge) had stayed, it would have been the long-term equilibrium of the organization. The primary reason the equilibrium collapsed is that Tara left to found Alameda.

Interesting; I didn't remember this about Tara.

Two data points in the other direction:

  • A few months (maybe up to 9 months, but could be as little as 1 month, I don't remember the timing) before Larissa had to leave CEA, a friend and I talked to a competent-seeming CEA staff member who was about to leave the org (or had recently left – I don't remember details) because the org seemed like a mess and had bad leadership. I'm not sure if Leverage was mentioned there – I could imagine that it was, but I don't remember details, and my most salient memory is that I thought of it as "not good leadership for CEA." My friend and I encouraged them to stay and speak up to try to change leadership, but the person had had enough for the time being, or had some other reason to leave (again, I don't remember details). Anyway, this person left CEA at the time without a plan to voice their view that the org was in a bad state. I don't know if they gave an exit interview, deliberately sought out trustees, talked to friends, or said nothing at all – I didn't stay in touch. However, I do remember that my friend and I discussed whether we should at some point get back in touch with this former CEA staffer and encourage them to find out whether more former colleagues were dissatisfied and whether we could cause a wave of change at CEA. We were so far removed, and had so few contacts with anyone who actually worked there, that it would've been a bit silly for us to get involved. And I'm not saying we would've done it – it's easy to talk about stuff like that, but then usually you don't do anything. Still, I feel like this anecdote suggests that there are sometimes more people interested and invested in good community outcomes than one might think, and multiple pathways to beneficial leadership change (it's very possible this former staffer had nothing to do with the eventual chain of causes that led to leadership change, which would mean that multiple groups of people were independently expressing worried sentiments about CEA at that time).
  • At one point somewhere between 2017 and 2018, someone influential in EA encouraged me to share more about specific stuff that happened in the EA orgs I worked at, because they were sometimes talking to other people who were "also interested in the health of EA orgs / health of the EA community." (To avoid confusion, this was not the community health team.) This suggests that people somewhat systematically keep an eye on things, and even if CEA were to get temporarily taken over by a Silicon Valley cultish community, probably someone would try to do something about it eventually. (Even if it's just writing an EA Forum post to create common knowledge that a specific org has been taken over and is no longer similar to what it was when it was founded. We did see posts eventually get written about Leverage, for instance, and the main reasons it didn't happen earlier are probably that many people thought "oh, everyone knows already" and that, like anywhere else, few people actually take the time to do community-useful small bits of work when you can just wait for someone else to do it.)
     

By the way, this discussion (mostly my initial comment and what it's in reaction to; not so much the specifics about CEA history) reminded me of this comment about the difficulty of discussing issues around culture and desired norms. It seems like maybe we'd be better off discussing what each of us thinks would be the best steps forward to improve EA culture, or finding a way to promote some kind of EA-relevant message (EA itself, the importance of AI alignment, etc.) and doing movement building around that, so it isn't at risk of backfiring.

The CEO board

"The CEA board", right?

What probability would you assign to some weakened version of (3) being true? By some weakened version, I roughly mean taking the "way" out of "way too many," and defining "too many" as ~meaningfully above the base rate for people in positions of power/influence.

10%.

Worth noting that it's not the highest of bars.

Agreed on it not being the highest of bars. I felt there was a big gap between your (2) and (3), so was aiming at ~ 2.4 to 2.5: neither peripheral nor widespread, with the understanding that the implied scale is somewhat exponential (because 3 is much worse than 2).

Yeah, I should've phrased (3) in a way that's more likely to pass someone like habryka's Ideological Turing Test.

Basically, I think if EAs were even just a little worse than typical people in positions of power (on the dimension of integrity), that would be awful news! We really want them to be significantly better.

I think EAs are markedly more likely to be fanatical naive consequentialists, which can be one form of "lacking in integrity" and is the main thing* I'd worry about in terms of me maybe being wrong. To combat that, you need to be above average in integrity on other dimensions.

*Ideology-induced fanaticism is my main concern, but I can think of other concerns as well. EA probably also attracts communal narcissists to some degree, or people who like the thought that they are special and can have lots of impact. Also, according to some studies, utilitarianism correlates with psychopathy at least in trolley problem examples. However, EA very much also (and more strongly?) attracts people who are unusually caring and compassionate. It also motivates people who don't care about power to seek it, which is an effect with strong potential for making things better.

What exactly are you referring to when you mention 'miniature cults of personality that led to extremely serious bad outcomes'?

Do you mean actually 'extremely serious bad outcomes', in the scope-sensitive sense, like millions of people dying?
