Kinda pro-pluralist, kinda anti-Bay EA.
I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.
(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)
I think the main thing is their astonishing success. Like, whatever else anyone wants to say to Émile, they are damn hard working and driven. It's just in their case they are driven by fear and pure hatred of EA.
Approximately every major news media piece critical of EA (or covering EA with a critical lens, which have been basically the same thing over the last year and a half) seems to link to or quote Émile at some point as a reputable and credible source on EA.
Sure, those more familiar with EA might be able to see the hyperbole, but it's not, imo, out there to imagine that Émile's immensely negative presentation of EA being picked up by major outlets has contributed to the fall of EA's reputation over the last couple of years.
Like, I wish we could "collectively agree to make Émile irrelevant", but EA can't do that unilaterally given the influence their[1] ideas and arguments have had. Those are going to have to be challenged or confronted sooner or later.
That is, Émile's
To answer your question very directly on the confidence of millions of years in the future, the answer I think is "no", because I don't think we can be reasonably confident and precise about any significant belief about the state of the universe millions of years into the future.[1] I'd note that the article you link isn't very convincing for someone who doesn't share the same premises, though I can see it leading to 'nagging thoughts' as you put it.
Other ways to answer the latter question about human extinction could be:
In practice though, I think if you reach a point where you might consider it to be a moral course of action to make all of humanity extinct, perhaps consider this a modus tollens of the principles that brought you to that conclusion rather than as a logical consequence that you ought to believe and act on. (I see David made a similar comment basically at the same time)
Some exceptions for physics, especially outside of our lightcone, yada yada, but I think this holds for the class of beliefs (I said 'significant beliefs') that are similar to this question.
I don't understand your lack of understanding. My point is that you're acting like a right arse.
When people make claims, we expect there to be some justification proportional to the claims made. You made hostile claims that weren't following on from prior discussion,[1] and in my view nasty and personal insinuations as well, and didn't have anything to back it up.
I don't understand how you wouldn't think that Sean would be hurt by it.[2] So to me, you behaved like an arse, knowing that you'd hurt someone, didn't justify it, got called out, and are now complaining.
So I don't really have much interest in continuing this discussion for now, or much opinion at the moment of your behaviour or your 'integrity'.
Sorry Oli, but what is up with this (and your following) comment?
From what I've read, you[1] seem to value what you call "integrity" almost as a deontological good above all others. And this has gained you many admirers. But to my mind high integrity actors don't make the claims you've made in both of these comments without bringing examples or evidence. Maybe you're reacting to Sean's use of 'garden variety incompetence', which you think is unfair to Bostrom's attempts to toe the fine line between independence and managing university politics, but still, I feel you could have done better here.
To make my case:
Maybe from your perspective you feel like you're just floating questions here and sharing your personal perspective, but given the content of what you've said I think it would have been better if you had either brought more examples or been less hostile.
(I'm going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, ~modulo a reply for clarification's sake. Happy to discuss further with you Rob or anyone via DMs/Forum Dialogue/whatever)
(to Rob & Oli - there is a lot of inferential distance between us and that's ok, the world is wide enough to handle that! I don't mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet... I can't help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don't like the EA Forum? Because they're scared of retaliation? It feels like this would be useful and important information for the community to know.
I'm also not sure what to make of Habryka's response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but it does seem to me to come off as a "social experiment in radical honesty and perfect transparency", which is a vibe I often get from the Lightcone-adjacent world. And like, with all due respect, I'm not really interested in that whole scene. I'm more interested in questions like:
Writing it down, 2.b. strikes me as what I mean by 'naive consequentialism' if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he'd do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn't really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr - I think there's a difference between a discussion about what norms EA 'should' have, or senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, and the 'minimal viable information-sharing' that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming that, and I applaud you/Oli for pushing for it, but sometimes I wish you would both be less vague too. Some of us don't have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments makes me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post FTX. Wounds might have faded, but they haven't healed 😞
Not that people should have guessed the scale of his wrongdoing ex-ante, but was there enough to start to downplay and disassociate?
People, the downvote button is not a disagree button. That's not really what it should be used for.
My guess is there's something ideological or emotional behind these kinds of EA critiques.
Something I've come across while looking into/responding to EA criticism over the last few months is that a lot of EA critics seem to absolutely hate EA[1], like with an absolutely burning zeal. And I'm not really sure why or what to do with it - feels like it's an underexplored question/phenomenon for sure.
Or at least, what they perceive EA/EAs to be
What are you referring to when you say "Naive consequentialism"?[1] Because I'm not sure that it's what others reading might take it to mean?
Like you seem critical of the current plan to sell Wytham Abbey, but I think many critics view the original purchase of it as an act of naive consequentialism that ignored the side effects that it's had, such as reinforcing negative views of EA etc. Can both the purchase and the sale be a case of NC? Are they the same kind of thing?
So I'm not sure the 3 respondents from the MCF and you have the same thing in mind when you talk about naive consequentialism, and I'm not quite sure I do either.
Both here and in this other example, for instance
Again, a fan of you and your approach David, but I think you underestimate just how hostile/toxic Émile has been toward all of EA. I think it's very fair to substitute one for the other, and it's the kind of thing we do all the time in real, social settings. In a way, you seem to be emulating a hardcore 'decoupling' mindset here.
Like, at risk of being inflammatory, an intuition pump from your perspective might be:
I think many EAs view 'engagement with criticism by Torres' in the same way that you'd see 'engagement with criticism by Trump', that the critic is just so toxic/bad-faith that nothing good can come of engagement.