Mjreard

Advising Team @ 80,000 Hours
476 karma · Joined Dec 2017 · Working (6-15 years) · London, UK
twitter.com/Mjreard

Bio

Doing calls, lead gen, application reviewing, and public speaking for the 80k advising team

How others can help me

Apply for a 1-1 call with 80k. Yes, now is a good time to do it – you can book in later, we can have a second call, come on now. 

Follow me on Twitter and listen to my podcast (search for "actually after hours" on YouTube or podcast apps)

Comments (31)

I think your current outlook should be the default for people who engage on the forum or agree with the homepage of effectivealtruism.com. I'm glad you got there and that you feel (relatively) comfortable about it. I'm sorry that the process of getting there was so trying. It shouldn't have to be.

It sounds like the tryingness came from a social expectation that, upon finding resonance with the basic ideas, you identify as capital-E, capital-A Effective Altruist, and that identifying that way obliges you to support and defend every other EA person and project.

I wish EA weren't a question of identity but of behavior: which actions are you choosing to take, and why? Your reasons can draw on many perspectives at once, and none needs to dominate. Even the question of "should I do meta EA work/advocacy?" can be taken this way, and I think something like it is what's really at stake in the "should I identify as EA?" question.

I personally do meta work and feel free to criticize particular EA-identifying people and organizations. I also feel free to quit this work altogether and be quieter about cause-spanning features of EA. If I quit and got quieter, I'd feel like that was just another specific thing I did, rather than some identity line in the sand I'd crossed. Maybe it would be because my boss sucked; maybe it would be because I felt compelled to focus on tax policy instead of GCRs, but it wouldn't be because of my general feelings of worthiness or acceptance by the EA monolith.

A lot of what I'm saying is couched in the language of personal responsibility for which identity questions you ask yourself, but I want to be clear that the salience of the identity question is also socially created: people pose questions about fidelity to ideas and their logical limits as a sort of status test, e.g. which utilitarian bullets you will or won't bite. As much as we shouldn't preoccupy ourselves with such questions, we shouldn't preoccupy others with them either.

If you haven't, you should talk to the 80k advising team to get feedback. We obviously aren't the hiring orgs ourselves, but I think we have reasonable priors on how they read certain experiences and proposals. We've also been through a bunch of EA hiring rounds ourselves and spoken to many, many people on both sides of them.

I think you've failed to think on the margin here. I agree that the broad classes of regulation you point to here have *netted out* badly, but this says little about what the most thoughtful and determined actors in these spaces have achieved. 

Classically, Germany's early-2000s investments in solar R&D had enormous positive externalities on climate, and the people who pushed for those didn't also have to support restricting nuclear power. The option space for them was not "the net-bad energy policy that emerged" vs "libertarian paradise"; it was "the existing/inevitable bad policies with a bet on solar R&D" vs "the existing/inevitable bad policies with no bet on solar R&D."

I believe most EAs treat their engagement with AI policy as researching and advocating for narrow policies tailored to mitigate catastrophic risk. In this sense, they're acting as an organized/expert interest group motivated by a good (and, per some polls, even popular) view of the public interest. They are competing with, rather than complementing, the more selfishly motivated interest groups seeking the kind of influence the oil & gas industry sought in the climate context. On your model of regulation, this seems like a wise strategy, perhaps the only viable one. Again, the alternative is not no regulation, but regulation that leaves out the best, most prosocial ideas.

To the extent you're trying to warn EAs not to indiscriminately cheer any AI policy proposal on the assumption it will help with x-risk, I agree with you. I don't, however, agree that that's reflective of how they're treating the issue.

Tiny nit: I didn't and don't read much into the 80k comment on liking nice apartments. It struck me as the easiest way to disclose (imply?) that he lived in a nice place without dwelling on it too much. 

Yes, in general it's good to remember that people are far from 1:1 substitutes for each other for a given job title. I do think the "1 into 2" reasoning is a decent intuition pump for how wide the option space becomes when you think laterally, though, and that lateral thinking of course shouldn't stop at earning to give.

A minor, not-fully-endorsed object-level point: I think people who do ~one-on-one service work, like (most) doctors and lawyers, are much less likely to 10x the median than, e.g., software engineers. With rare exceptions, their work just isn't that scalable, and in many cases output is a linear return to effort. I think this might be especially true in public defense, where you sort of wear prosecutors down over a volume of cases.

Looks like the UK hardcover release isn't until 21 May, but it's available on Kindle? Is that right? 

If the lives of pests are net negative,* I think a healthy attitude is to frame your natural threat/disgust reaction to them as useful. The pests you see now are a threat to all the future pests they will create. Preventing the suffering of those future creatures depends on the first ones not living to create them. Our homes are fertile breeding grounds for enormous suffering, and I think creating these potential breeding grounds gives us a responsibility to prevent them from realizing that potential.

I take the central (practical) lesson of this post to be that that responsibility should spark some urgency to act, and to overcome guilt, when we notice the first moth or mouse. We've already done the guilty thing by creating this space and not isolating it. The only choice left is between more suffering and less.

Thank you for the post!

 

*I mean this broadly to include both cases where their lives are net negative in the intervention-never scenario and in (more likely) scenarios like these where the ~inevitable human intervention might make them that way. 

Nice punchy writing! I hope this sparks some interesting, good faith discussions with classmates. 

I think a powerful thing to bring up re earning to give is how it can strictly dominate some other options: e.g. a 4th- or 5th-year biglaw associate could very reasonably fund two fully paid public defender positions with something like 25-30% of their salary (rough arithmetic sketched below). A well-paid plastic surgeon could fund lots of critical medical workers in the developing world with less.
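To make the dominance claim concrete, here's a minimal BOTEC. The salary and cost-per-role figures are illustrative assumptions on my part, not hard data:

```python
# Toy version of the "strict dominance" claim above. The salary and
# cost-per-role figures are assumed for illustration, not sourced.

biglaw_salary = 460_000       # rough 4th/5th-year biglaw associate comp (assumed)
donation_rate = 0.275         # midpoint of the 25-30% range in the comment
pd_cost_per_role = 60_000     # fully loaded public defender position (assumed)

donation = biglaw_salary * donation_rate      # $126,500/yr
roles_funded = donation / pd_cost_per_role    # ~2.1 positions

print(f"${donation:,.0f}/yr donated funds ~{roles_funded:.1f} public defender roles")
```

Under those (debatable) numbers, the associate replaces two direct workers and still keeps ~70% of their salary, which is the sense in which it strictly dominates taking one of those jobs themselves.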

One important thing to keep in mind when you have these chats is that there are better options still; they're just harder to carve out and evaluate. One toy model I play with is entrepreneurship. Most people inclined towards working for social good have a modesty/meekness about them: they just want to be line-workers standing shoulder-to-shoulder with people doing the hard work of solving problems. That suggests there's likely a dearth of people in this space looking to build, scale, and, importantly, sell novel solutions.

As you point out, there are a lot of rich people out there. Many/most of them just want to get richer, sure, but lots of them have foundations or would fund exciting/clever projects with exciting leaders, even if there wasn't enormous (or any) profitability in it. The real bottleneck is a dearth of good prosocial ideas – which Harvard students seem well positioned to spin up: you have four years to just think and learn about the world, right? What projects need to exist but don't? Figure that out instead of soldiering away at existing things.

Curious if you've seen or could share BOTECs on the all-in cost per retreat?

Naïvely, people like to benchmark 5% of property value per year as the all-in cost of ownership alone (so ~$750k/yr here? I'm really not sure how this scales to properties like Wytham).

I wonder how that compares to the savings in variable retreat costs. If you had 20 retreats/yr, are you saving (close to?) $37,500 per retreat (assuming $750k is the right number)? Accommodation for 25 people for 4 nights in Oxford could plausibly be ~$20k itself, so it seems like with a given number of retreats or attendees you could get quite close, but the numbers matter here.
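Spelling out the arithmetic I'm gesturing at (the $15M property value is backed out from the ~$750k/yr figure at 5%; it's an assumption, not a known number):

```python
# Sketch of the venue-ownership BOTEC above. Property value is inferred
# from the ~$750k/yr ownership figure at the naive 5% benchmark.

property_value = 15_000_000
annual_ownership_cost = 0.05 * property_value   # ~$750k/yr, the "5% rule"

retreats_per_year = 20
cost_per_retreat = annual_ownership_cost / retreats_per_year   # $37,500

accommodation_saved = 20_000   # ~25 people x 4 nights in Oxford (from the comment)
net_cost_per_retreat = cost_per_retreat - accommodation_saved  # $17,500

print(f"Ownership cost per retreat: ${cost_per_retreat:,.0f}")
print(f"Net of avoided accommodation: ${net_cost_per_retreat:,.0f}")
```

On these toy numbers ownership still runs ~$17.5k/retreat above the avoided accommodation, so the case would turn on more retreats per year, larger groups, or savings beyond accommodation.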

For what it's worth, I think you shouldn't worry about the first two bullets. The way you as an individual, and EA as a community, will have big impact is through specialization. Being an excellent communicator of EA ideas is going to have way bigger and potentially compounding returns than your personal dietary or donation choices (assuming you aren't very wealthy). If stressing about the latter takes away from the former, that's the mistake worth worrying about.

I also shouldn't comment without answering the question:

  • I balk at thorny or under-scoped research/problems that could be very valuable
    • It feels aversive to dig into something without a sense of where I'll end up or whether I'll even get anywhere
    • If there's a way I can bend what I already know/am good at into the shape of the problem, I'll do that instead
    • One way this happens is that I only seek out information/arguments/context that are already legible to me, specifically big-picture/social-science-oriented writing like Holden, Joe Carlsmith, or Carl Shulman, even though understanding whether the technical aspects of AI alignment/evals make sense is a bigger and unduly under-explored crux for understanding what matters
  • I fail to be a team player in a lot of ways. 
    • I have my own sense of what my team/org's priorities should be
    • I expect others around me to intuit and adopt these priorities with minimal or no communication
    • When we don't agree or reach consensus, and there's a route for me to avoid resolving the tension, I take the avoidant route. Things that I don't think are important, but others do, don't happen