As an outsider to the field, here are some impressions I have:
The neural substrates of emotions do not appear to be confined to cortical structures. In fact, subcortical neural networks aroused during affective states in humans are also critically important for generating emotional behaviors in animals.
Feedback on third episode: Also really liked it! Felt different from the first two. Less free-wheeling, more clearly useful. (Still much more on the relaxed, informal side than main-feed 80k podcasts.)
Felt very useful to get an inside perspective on what 80k thinks it's doing with career advising. I really appreciated Dwarkesh kicking the tires on the theory of change ("why not focus 100% on the tails?"), as well as the responses.
It wasn't entirely an easy listen. I identify with the common EA tropes of trying to push myself to be more ambitious, even though it doesn't come naturally, and so often ending up feeling bad about how non-agentic I am; of ex ante trying some things to see if I'm in the right tail of the distribution, figuring I'm probably not, and being kind of upset and adrift about it.
I personally appreciate that 80k thinks a lot about doing right by people like me. It was somewhat hard to hear Dwarkesh focus so intently on people at the tails, as if the other 99% of us are a rounding error, but I see the case for it and I'm not sure it's completely wrong. (I'm not supposed to be the primary beneficiary of 80k advising / other EA resources. If I voluntarily sign up to try being an ambitious altruist, and later feel bad about not (yet) succeeding, I'm not sure I get to blame anyone except myself.)
Feedback on first two episodes: I really enjoyed them, and was instantly sold on this series. I felt like I was sitting in on fun people having great conversations. I wasn't really sure what the impact case was for these, but they gave me a feeling I have at the best EA meetups: oh my gosh, these are my people. [1]
(Feedback on third episode in another comment)
I have some reservations about this: the cultural characteristics that set off the "my people" sense don't seem very strongly connected to doing the most good. So while I love finding "my people," it's strange that they make up such a big fraction of EA, both at local meetups and apparently at 80k.
Nick Bostrom's website now lists him as "Principal Researcher, Macrostrategy Research Initiative."
Doesn't seem like they have a website yet.
This seems relevant to any intervention premised on "it's good to reduce the amount of net-negative lives lived."
If factory-farmed chickens have lives that aren't worth living, then one might support an intervention that reduces the number of factory-farmed chickens, even if it doesn't improve the lives of any chickens that do come to exist. (It seems to me this would be the primary effect of boycotts, for instance, although I don't know empirically how true that is.)
I agree that this is irrelevant to interventions that just seek to improve conditions for animals, rather than changing the number of animals that exist. Those seem equally good regardless of where the zero point is.
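To make the arithmetic concrete, here's a minimal sketch (all welfare numbers invented purely for illustration) of why the zero point matters for count-changing interventions but washes out for condition-improving ones:

```python
# Toy numbers, invented purely for illustration.

def total_welfare(n_chickens: int, welfare_per_chicken: float) -> float:
    """Sum of welfare across all chickens, measured relative to the zero point."""
    return n_chickens * welfare_per_chicken

baseline = total_welfare(1000, -2.0)  # lives below the zero point

# Intervention 1: reduce the number of chickens (e.g. a boycott).
# This helps only because welfare_per_chicken is below zero.
fewer = total_welfare(800, -2.0)

# Intervention 2: improve conditions without changing the count.
# The gain is the same wherever the zero point sits, since the
# shift cancels out of the difference.
better = total_welfare(1000, -1.0)

print(fewer - baseline)   # +400: good only if lives are net-negative
print(better - baseline)  # +1000: good regardless of the zero point
```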
I wholeheartedly agree, and think we need to look elsewhere to apply this model.
Donor lotteries unhealthily exhibit winner-take-all dynamics, centralizing rather than distributing power. If the winner makes a bad decision, the impact of that money evaporates -- it's a very risky proposition.
A more robust solution would be to distribute the funds proportionally to everyone who joins, based on the amount they put in. This would democratize funding throughout EA and lead to a much healthier funding ecosystem.
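As a concrete illustration of the two payout rules (donor names and amounts are hypothetical):

```python
import random

# Donor names and amounts are hypothetical.
contributions = {"alice": 100, "bob": 400, "carol": 500}
pot = sum(contributions.values())

# Standard donor lottery: one winner, chosen with probability
# proportional to contribution, allocates the entire pot.
winner = random.choices(list(contributions), weights=list(contributions.values()))[0]
lottery_payout = {d: (pot if d == winner else 0) for d in contributions}

# Proportional rule: each donor allocates pot * (their share of the pot),
# which works out to exactly what they contributed.
proportional_payout = {d: pot * amt / pot for d, amt in contributions.items()}

print(lottery_payout)       # e.g. {'alice': 0, 'bob': 1000, 'carol': 0}
print(proportional_payout)  # {'alice': 100.0, 'bob': 400.0, 'carol': 500.0}
```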
The concrete suggestions here seem pretty wild, but I think the possible tension between computationalism and shrimp welfare is interesting. I don't think it's crazy to conclude "given x% credence on computationalism (plus these moral implications), I should reduce my prioritization of shrimp welfare by nearly x%."
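A toy version of that discount (the numbers are placeholders, not real credences):

```python
# Placeholder numbers, not real credences.
credence = 0.30      # x: credence in computationalism plus the claimed implications
base_priority = 1.0  # prioritization of shrimp welfare absent that consideration

# If the view (with its implications) is true, the intervention does
# roughly nothing; otherwise it has full value.
adjusted_priority = base_priority * (1 - credence)
print(adjusted_priority)  # 0.7 -- reduced by nearly x
```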
That said, the moral implications are still quite wild. To paraphrase Parfit, "research in [ancient Egyptian shrimp-keeping practices] cannot be relevant to our decision whether to [donate to SWP today]." The Moral Law keeping a running tally of previously-done computations and giving you a freebie to do a bit of torture if it's already on the list sounds like a reductio.
A hazy guess is that something like "respecting boundaries" is a missing component here? Maybe there is something wrong with messing around with a water computer that's instantiating a mind, because that mind has a right to control its own physical substrate. Seems hard to fit with utilitarianism though.
Thanks for posting, these look super interesting!
I'm hoping to read (and possibly respond to) more, but I ~randomly started with the final article "Saving the World Starts at Home."
My thoughts on this one are mostly critical: I think it fundamentally misunderstands what EA is about (due to relying too heavily on a single book for its conception of EA), and will not be persuasive to many EAs. But it raises a few interesting critiques of EA prioritization at the end.
I think the "status" and "politics" critiques of EA prioritization are useful and probably under-discussed.
Certain fields (e.g. AI safety research) are often critiqued for being suspiciously interesting / high-status / high-paying, but this makes the case that even donating to GiveWell is a little suspicious in how much status it can buy. (But I think there are likely much more efficient ways to buy status; donating 1% of your income probably buys much more than 1/10 the status you'd get from donating 10%.)
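To illustrate that concavity intuition, here's a toy model where status grows logarithmically with the donation; the log form is purely my assumption:

```python
import math

income = 100_000

# Assumed functional form: status grows with the log of dollars donated.
def status(donation: float) -> float:
    return math.log1p(donation)

ratio = status(0.01 * income) / status(0.10 * income)
print(round(ratio, 2))  # ~0.75: 1% buys far more than 1/10 the status of 10%
```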
I also think it's reasonably likely that there are some conservative-coded causes that EAs undervalue for purely political reasons (but I don't have any concrete examples at hand).
There are a few fundamental issues with the analysis that cause this to fail to connect for me.
(this is a bit scattershot; I tried to narrow it down to a few points to prevent this from being 3x longer)
Is this true? The links only establish that two safety-focused researchers have recently left, in very different circumstances.
It seemed like OpenAI made a big push for more safety-focused researchers with the launch of Superalignment last July; I have no idea what the trajectory looks like more recently.
Do you have other information that shows that the number of safety-focused researchers at OpenAI is decreasing?