I have noticed increasing hostility towards EA and longtermism from many researchers and commentators in what I might describe (there may be better terms out there) as the AI for social justice (AI4SJ) community.

AI4SJ researchers, in my opinion, view the risks of AI through the lens of structural power. Imbalances over who owns emerging tech, who builds it, and whose data it is trained on manifest in issues such as algorithmic bias and reduced privacy and human agency for minority groups. People in the AI4SJ group include Timnit Gebru, Kate Crawford, Karen Hao and Margaret Mitchell.

At times, it seems like AI4SJ should be natural allies of EA. They do great work on alternative LLMs and worry deeply about the social harms that poorly governed AI could cause. Personally, I think Gebru and Mitchell should be given as much support as possible, given that their unfair dismissal from Google's AI Ethics team is a signal of the challenges that come with an unaccountable concentration of technological power in a small number of firms.

Yet Gebru recently tweeted about "longtermism and effective altruism bullshit", and said Silicon Valley EA types are "convincing themselves that the way in which they're exploiting people and causing harm is the best possible thing they can be doing in the world".

Part of this is probably down to the core thesis of Phil Torres's infamous critique of longtermism: that it "ignores structural injustice today and doesn't value the developing world". Algorithmic bias today is not the same as x-risk from unaligned AI in 30 years, but surely there is enough common ground for both communities to work on together?

How do people view the debate between these different groups, and what is the best way of engaging and working together to make progress against risk from AI and emerging tech?

Comments

I don't know if productive disagreement with Gebru is possible, as she seems to have already made up her mind. Maybe engaging would simply draw more attention and more fire? One option is to reply to those posts and aim the comments mostly at the readers.

Here are some of my previous thoughts (written before these SJ-based critiques of EA were published) on the connections between EA, social justice, and AI safety, as someone on the periphery of EA. (I have no official or unofficial role in any EA orgs, have met few EA people in person, etc.) I suspect many EA people are reluctant to speak candidly about SJ for fear of political/PR consequences.

Thank you for the link and comments here! As someone also on EA's periphery, I understand the concern. I wonder whether EA and AI4SJ may end up in a degree of status competition down the line.

I think that policy-makers and the public are now starting to think about the social implications of tech. The public generally worries about this through the lens of automation and surveillance, but from my experience (UK tech policy), policy-makers are also pretty concerned about algorithmic bias.

If that leads policy-makers to the AI4SJ way of viewing things, then the problems you outlined in your above link could potentially manifest.

Having said that, I know that the FTC has brought in people from the AI4SJ wing, such as Meredith Whittaker. No idea what impact that has had, though.

Thanks for linking to this discussion - I found the replies useful!
