The institute is called the Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies and is based in Heidelberg, Germany. It started in 2021 and initially received €9 million in funding from the German government for its first four years.

AFAICT, they study sociological aspects of narratives of apocalypses, existential risks, and the end of the world. 

They have engaged with EA thinking, and I assume they will have an interesting outside perspective on some prevalent worldviews in EA. For example, here is a recorded talk about longtermism (I have only skimmed through it so far), which mentions MIRI, FHI, and What We Owe The Future.

I stumbled upon this today and thought it could interest some people here. Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways. One criticism of EA that has become more common over the last few months is that EA organizations engage too little with other disciplines and institutions that have relevant expertise. So I suggest checking out the Centre's work.

Please comment if you have engaged with them before and know more than I do. 

One of their directors, Thomas Meier, came to our most recent Cambridge Conference on Catastrophic Risk (2022). They've also got some good people on their board, like Elaine Scarry.

I would note that my sense is that they're a bit more focussed on analysing 'apocalyptic imaginaries' from a sociological and critical theory perspective. See, for example, their first journal issue, which is mostly critical analysis of narratives of apocalypse in fiction or conspiracy theories (rather than, e.g., climate modelling of nuclear winter). They strike me as somewhat similar to the Centre for the Critical Study of Apocalyptic and Millenarian Movements. Maybe a crude analogous distinction would be between scientists and philosophers of science?

On the YouTube video, I wasn't super impressed by that talk. It seemed more interested in pathologising research on global risks than engaging on the object level, similar to some of the more lurid recent work from Torres and Gebru. But I'm going to Schwarz's talk this Friday in Cambridge, so hopefully I will be able to dig deeper.

The distinction between scientists and philosophers of science doesn't seem especially apt. Their work is primarily critical, similar to the work of sociologists of science or STS scholars rather than philosophers of science.

Yeah, that's fair. It depends on the particular researcher; they're quite eclectic. Some are even further removed, like the difference between scientists and literary criticism of a novel about scientists (see e.g. this paper on Frederic Jameson).

Awesome, thanks for giving useful context!

"Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways."

I'd be careful not to confuse polished presentation, eloquent speaking, and fundraising ability with good epistemics.

I watched the linked video and honestly thought it was a car crash, epistemically speaking.

The main issue is that I don't think any of her arguments would pass the ideological Turing test. She says "Will MacAskill thinks X...", but if Will MacAskill were in the room, he would obviously respond, "Sorry, no, that's not what I think at all..."

A real low point is when she points at a picture of Nick Bostrom, Stuart Russell, Elon Musk, Jaan Tallinn etc. and suggests their motivation for working on AI is to prove that men are superior to women. 

That's fair. It was unreasonable of me to imply that their work is epistemically sound without engaging with it more. I flagged that I had only stumbled upon it and hadn't engaged with it, but that should have stopped me from implying that their criticism is epistemically sound.

In general, my idea for this post was more "How curious, this sociology-ish institute is studying something related to x-risk and has engaged with EA in some way. Let me just share this and see what other people think.", and not "The criticism of this group is correct and very important; EA needs to engage with it to improve."

Yeah, whatever you want to call the academic and cultural memeplexes that are obsessed with demographic identities, I do think "when literally everyone dies in our threat models, the non-white-males die too" is kinda all you need to say. Based on my simulation of those memeplexes (which I sorta bought into when I first started reading the Sequences, so I have a minor inside view), we can probably expect the retort to be, at best, "are you introspective enough to be sure you're not doing motivated reasoning about those threat models, rather than these other threat models over here?", but my more honest money is on something more condescending than that. And to be clear, there's a difference between constant vigilance about "maybe I rolled a nat 1 on introspection this morning" (good) and falsely implying that someone accusing you of not being introspective enough is your peer when they're not (bad).

I'd like to highlight an outsider (i.e. someone who isn't working on extinction-level threat models) who seems to be doing genuinely useful reframing work for folks with extinction-level threat models: I just read the prologue to David Chapman's new book, and I'm optimistic about it being good. I made a somewhat related claim in a lightning talk at the MA4 unconference just the other day: I claimed that "AI ethics" and "AI alignment" can unite on pessimism, and furthermore that there's a massive umbrella sense of "everyone who red-teams software" that could be a productive way of figuring out who to include and exclude. That is, I think research communities should have fairly basic low-bandwidth channels open between everyone who red-teams software, high-bandwidth channels open between everyone who shares their particular threat models, and only really exclude optimists who think everything's fine.

I have an off-topic thing to add, which I'll put in a reply to this comment.

Threat-model homogeneity is a major ecosystem risk in alignment in particular. There's a broad sense of "Eliezer and Holden are interested in extinction-level events" leading to "it's not cool to be interested in sub-extinction-level events", which pushes some people into unclear reasoning and at worst becomes "guess the password to secure the funding". The whole "if Eliezer is right, the stakes are so high, but I have nagging questions or can't wrap my head around exactly what's going on with the forecasts" dynamic leads to: 1. an impossible-to-be-happy-with prioritization/tradeoff problem between dumping time into hard math/CS and dumping time into forecasting and theories of change (which would be a challenge for alignment researchers regardless); 2. an attack surface with a lot of openings for vultures or password guessers (many of whom aren't really distinguishable from people earnestly doing their best, so I may regret framing them as "attackers"); but most of all 3. people getting nowhere with research goals ostensibly directed at extinction-level threat models, because deep down in their heart of hearts they don't understand those threat models, when they could instead be making actual progress on other threat models that they do understand.

The number of upvotes here seems like a self-parody of EA's criticism fetish. In what universe do critical sociologists, or whoever, steer your map closer to the territory, or keep you focused on actually helping people? I challenge anyone who'd downvote this comment to come up with a way to make my priors about who generates heat and who generates light resolvable by a bet.

I appreciate the pushback, but I also think the upvotes are likely not representative of EAs reflexively thinking every criticism is great no matter how useless or uncharitable it is. I think that, just from skimming this post, it was reasonable for me to have the reaction "Nice, some more people interested in x-risks/dystopias, and studying it from a critical sociological perspective, whatever that means. [Looking into it for 2 minutes without spotting anything really interesting] Thanks for sharing.", and that that's more representative of the upvotes.

E.g. one example of EAs responding in a calibrated way to criticism is, imo, this recent thread about a book that's very critical of EA and that seems to have received appropriate pushback for its flaws: https://forum.effectivealtruism.org/posts/YFGkyDjKvsr9tHzkS/book-post-the-good-it-promises-the-harm-it-does-critical

That said, I am also now and then surprised by the number of upvotes some EA-criticism content here gets despite what I perceive to be relatively low usefulness. Though I'm probably convinced that this collective behavior is close to optimal: criticism is hard and feels socially abrasive and adversarial, so to encourage it we should lean towards upvoting, even just to reward the energy somebody put into the general project of improving EA. EAs as a community should just also realize that a critical post having 400+ upvotes doesn't necessarily reflect agreement or quality. (I'm also a fan of the idea of introducing agreement votes for posts themselves.)
