Mjreard

Advising Team @ 80,000 Hours
516 karma · Working (6-15 years) · London, UK
bit.ly/mattreardon

Bio

Doing calls, lead gen, application reviewing, and public speaking for the 80k advising team

How others can help me

Apply for a 1-1 call with 80k. Yes, now is a good time to do it – you can book in later, we can have a second call, come on now. 

Follow me on Twitter and listen to my podcast (search for "actually after hours" on YouTube or podcast apps)

Comments (34)

Things downstream of OpenPhil are in the 90th+ percentile of charity pay, yes, but why do people work in the charity sector? Either because they believe in the specific thing (i.e. they are EAs) or because they want the warm glow of working for a charity. Non-EA charities offer more warm glow, but maybe there's a corner of "is a charity" and "pays well for a charity even though people in my circles don't get it" that appeals to some. I claim it's not many, and EA jobs are hard to discover for the even smaller population of people who have preferences like these and are highly competent.

Junior EA roles sometimes pay better than market alternatives in the short run, but I believe high-potential folks will disproportionately weigh lifetime earnings over short-run pay and choose something that builds better career capital.

I'd guess the biggest con is adverse selection. Why is this person accepting a below-market wage at a low-conventional-status organization?

An EA might be the most conventionally talented candidate because they're willing to take the role despite these things.   

Agree with the analysis and quite likely to take the Off the Clock suggestion. Thank you!

I think your current outlook should be the default for people who engage on the forum or agree with the homepage of effectivealtruism.com. I’m glad you got there and that you feel (relatively) comfortable about it. I’m sorry that the process of getting there was so trying. It shouldn't be.

It sounds like the tryingness came from a social expectation to identify as capital-'E', capital-'A' upon finding resonance with the basic ideas, and that identifying that way implied an obligation to support and defend every other EA person and project.

I wish EA weren't a question of identity, but of behavior. Which actions are you choosing to take and why? Your reasons can draw on many perspectives at once, and none needs to dominate. Even the question of "should I do meta EA work/advocacy?" can be taken this way, and I think something like it is what's at stake in the "should I identify as EA?" question.

I personally do meta work and feel free to criticize particular EA-identifying people and organizations. I also feel free to quit this work altogether and be quieter about cause-spanning features of EA. If I quit and got quieter, I'd feel like that was just another specific thing I did, rather than some identity line in the sand I'd crossed. Maybe it would be because my boss sucked; maybe it would be because I felt compelled to focus on tax policy instead of GCRs, but it wouldn't be because of my general feelings of worthiness or acceptance by the EA monolith.

A lot of what I'm saying is couched in the language of personal responsibility for which identity questions you ask yourself, but I want to be clear that the salience of the identity question is also socially created: people ask questions probing fidelity to ideas and their logical limits as a sort of status test, e.g. which utilitarian bullets you will or won't bite. As much as we shouldn't preoccupy ourselves with such questions, we shouldn't preoccupy others with them either.

If you haven't, you should talk to the 80k advising team to get feedback. We obviously aren't the hiring orgs ourselves, but I think we have reasonable priors on how they read certain experiences and proposals. We've also been through a bunch of EA hiring rounds ourselves and spoken to many, many people on both sides of them.

I think you've failed to think on the margin here. I agree that the broad classes of regulation you point to here have *netted out* badly, but this says little about what the most thoughtful and determined actors in these spaces have achieved. 

Classically, Germany's early-2000s investments in solar R&D had enormous positive externalities for the climate, and the people who pushed for those didn't also have to support restricting nuclear power. The option space for them was not "the net-bad energy policy that emerged" vs. "libertarian paradise"; it was "the existing/inevitable bad policies with a bet on solar R&D" vs. "the existing/inevitable bad policies with no bet on solar R&D."

I believe most EAs treat their engagement with AI policy as researching and advocating for narrow policies tailored to mitigate catastrophic risk. In this sense, they're acting as an organized/expert interest group motivated by a good (and, per some polls, even popular) view of the public interest. They are competing with, rather than complementing, the more selfishly motivated interest groups seeking the kind of influence the oil & gas industry did in the climate context. On your model of regulation, this seems like a wise strategy, perhaps the only viable one. Again, the alternative is not no regulation, but regulation that leaves out the best, most prosocial ideas.

To the extent you're trying to warn EAs not to indiscriminately cheer any AI policy proposal on the assumption it will help with x-risk, I agree with you. I don't, however, agree that that's reflective of how they're treating the issue.

Tiny nit: I didn't and don't read much into the 80k comment on liking nice apartments. It struck me as the easiest way to disclose (imply?) that he lived in a nice place without dwelling on it too much. 

Yes, in general it's good to remember that people are far from 1:1 substitutes for each other for a given job title. I think the "1 into 2" reasoning is a decent intuition pump for how wide the option space becomes when you think laterally, though, and that lateral thinking of course shouldn't stop at earning to give.

A minor, not fully endorsed object-level point: I think people who do ~one-on-one service work like (most) doctors and lawyers are much less likely to 10x the median than e.g. software engineers. With rare exceptions, their work just isn't that scalable, and in many cases output is a linear return to effort. I think this might be especially true in public defense, where you sort of wear prosecutors down over a volume of cases.

Looks like the UK hardcover release isn't until 21 May, but it's available on Kindle? Is that right? 

If the lives of pests are net negative,* I think a healthy attitude is to frame your natural threat/disgust reaction to them as useful. The pests you see now are a threat to all the future pests they will create. Preventing the suffering of those future creatures requires that the first ones don't live to create them. Our homes are fertile breeding grounds for enormous suffering. I think creating these potential breeding grounds gives us a responsibility to prevent them from realizing that potential.

I take the central (practical) lesson of this post to be that this responsibility should spark some urgency to act and overcome guilt when we notice the first moth or mouse. We've already done the guilty thing by creating this space and not isolating it. The only options left are more suffering or less.

Thank you for the post!


*I mean this broadly to include both cases where their lives are net negative in the intervention-never scenario and in (more likely) scenarios like these where the ~inevitable human intervention might make them that way. 
