Vaipan

290 karma · Joined · Working (0-5 years)

Participation
5

  • Completed the In-Depth EA Virtual Program
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Comments
86

Of course. It is much easier for privileged individuals to relate to the suffering of minds that do not exist yet than to the very real suffering of people and animals today, which forces you to confront your emotions and uneasiness toward those who have so little when you have so much.

The divide in cause-area preference across genders is obvious (not just from this study but also from my own EA group!). Women in general care much more about GHD and animal welfare and dislike fixing technological issues with yet another technology; they want more systemic change. It is hard to deny that some privileged men who benefit from the current status quo do not want to change the existing power dynamics, and prefer to think about future beings who do not yet have a voice in order to feel useful.

Sadly, I have not seen any research combining gender dynamics with longtermist urgency.

I agree. We have to take into account that 80k has strongly pushed for careers in AI safety and encouraged field building specifically for AI safety, and that its job board has become increasingly dominated by AI safety job offers. The trend is not likely to be reversed soon.

However, that does not keep people outside of EA from obtaining jobs in the GHD field (which is not just development economics, as someone once wrote); they are just not accounted for. And if the movement keeps directing opportunities and funding specifically toward AI safety, sure, we'll get fewer and fewer GHD people. So it's still impressive, given this concentration of funding, that so many EAs still consider GHD the most pressing cause area.

It is always appalling to see tech lobbying power shut down all the careful work done by safety people.

Yet the article highlights a very fair point: that safety people have not succeeded at being clear and convincing enough about the existential risks posed by AI. Yes, it's hard; yes, it involves a lot of speculation. But that's exactly where the impact lies: building a consistent and pragmatic discourse about AI risks that is neither uselessly alarmist nor needlessly vague.

The state of the EA community is a good example of that. I often hear that yes, risks are high; but what risks exactly, and how can they be quantified? Impact measurement is awfully vague when it comes to AI safety (and, to a lesser extent, AI governance).

I wish this were better known and more widely read in the EA community. So far I have not seen any credible objections to these three compelling arguments. Perils or no perils, the arguments stand on their own.

Hey Joseph, 

I am in exactly the same boat: a very specialized path and a lack of financial visibility. I also work for an EA org, which means I accepted a pay cut (and the role's funding is time-limited) compared to other jobs that could be safer (consulting, etc.).

But recently, I've been thinking about how donating is a bit like starting a new sport class or any new habit: if you don't start, you'll never start (except under ideal conditions, but those rarely happen!). Accepting a bit of risk to accomplish something you care a lot about makes sense to me, which is why I will start giving soon. There will never be a threshold of financial safety where I'll feel completely secure, so waiting will do me no good.

Also, inflation means that all my careful savings are losing value right now, so I'm realizing that I would be better off spending part of them now rather than waiting and watching their value slowly disappear.

This is only my choice; I just wanted to comment since I am in much the same situation but have recently come to think differently about it. I also just want to empathize with your situation. Sometimes I feel bad when I see that some of my colleagues have been giving for ten years, but again, we clearly were not given the same set of circumstances at birth.

Thanks for saying it, though! It feels validating to hear, instead of having this internal voice hammering that time is being wasted and that I'm letting everyone and everything down. I might do just that!

I've sent your post to someone who has 20+ years of experience in the field of US intelligence and who has recently gotten into EA; you might be contacted by them soon! Happy to read this kind of post anyway: this kind of thinking definitely does not cross everyone's mind when faced with such a situation.

Absolutely, and of course I'll get feedback from these orgs once the draft isn't a draft anymore. Amateurism in EA when it comes to nuclear risks has been denounced more than once, so I will try to steer clear of that!
