I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers...
This post was partly inspired by, and shares some themes with, this Joe Carlsmith post. My post (unsurprisingly) expresses fewer concepts with less clarity and resonance, but is hopefully of some value regardless.
Content warning: description of animal death.
I live in a ...
My prior is that unless you check extremely frequently, this sounds like a lot of suffering. But I'm not sure about the other options.
Tl;dr: One of the biggest problems facing any kind of collective action today is the fracturing of the information landscape. I propose a collective, issue-agnostic observatory with a mix of algorithmic and human moderation for the purposes of aggregating information, separate from advocacy (i.e. "what is happening", not "what should happen").
There is a crisis of information happening right now. 500 hours of video are uploaded to YouTube every minute. Extremely rapid news cycles, empathy fatigue, the emergence of a theorised and observed polycrisis, and the general breakdown of traditional media institutions in favour of algorithms designed to keep you on-platform for as long as possible mean that we receive more data than ever before, but are consequently more easily overwhelmed than ever before. The pace of research output has increased drastically while the pace of...
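For scale, the 500-hours-per-minute figure quoted above can be sanity-checked with a few lines of back-of-the-envelope arithmetic (a rough sketch, taking that figure at face value):

```python
# ~500 hours of video uploaded to YouTube every minute (figure quoted above).
HOURS_PER_MINUTE = 500

minutes_per_day = 60 * 24                                     # 1,440 minutes
hours_uploaded_per_day = HOURS_PER_MINUTE * minutes_per_day   # 720,000 hours/day

# Expressed as continuous watching time:
days_of_video_per_day = hours_uploaded_per_day / 24           # 30,000 days
years_of_video_per_day = days_of_video_per_day / 365          # ~82 years

print(f"{hours_uploaded_per_day:,} hours uploaded per day "
      f"= roughly {years_of_video_per_day:.0f} years of continuous viewing")
```

In other words, every single day YouTube receives on the order of a human-lifetime's worth of viewing time several dozen times over, which is one concrete way to see why no individual or traditional institution can keep up unaided.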
___________________________________________________
Tl;dr:
___________________________________________________
Effective Altruism (EA) has embraced longtermism as one of its guiding principles. In What We Owe the Future, MacAskill lays out the foundational principles of longtermism, urging us to expand our ethical considerations to include the well-being and prospects of future generations.
Say, hypothetically, you have a coworker you work well with, but get in heated political arguments with as well. This only happens maybe once a quarter, so they rarely even register as hiccups in your working relationship.
Say, now, that during one of these arguments you recognize a cognitive bias in the coworker's argumentation. The likelihood is high that they fall into the same bias in other contexts. (You might object that it's more likely they can cleanly compartmentalize, displaying the bias only when they are heated; so, for the sake of argument, say you can now recall an obvious example from daily work where you have seen them lean on the bias.)
Here's my dilemma: since you now have evidence they are employing a cognitive bias, do you have a moral (or even, team-based or business) obligation to point out the bias to them? If yes: ...
TLDR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the next several months.
AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting...
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
– –
Quality of our reports
I would like to push back a bit on Joey's response here. I agree that our research is quicker, scrappier, and goes into less depth than that of other orgs, but I am not convinced that our reports have more errors or worse reasoning than the reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, Open Phil, Animal Charity Evaluators, Rethink Priorities, Founders Pl...
This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three...
As you write:
The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents
The discontinuity results from humans no longer being the smartest agents in the world, and no longer being in control of our own fate. Past this point, we've crossed an event horizon beyond which the outcome is almost entirely unforeseeable.
If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000.
If, a...
In theory of mind, the question of how to define an "individual" is complicated. If you're not familiar with this area of philosophy, see Wait But Why's introduction.
I think most people in EA circles subscribe to the computational theory of mind, which means that...
If you don't care about where or when duplicate experiences exist, only their number, then not caring about duplicates at all gives you a fanatical wager against the universe having infinitely many moral patients, e.g. by being infinitely large spatially, going on forever in time, or having infinitely many pocket universes.
It would also give you a wager against the many-worlds interpretation of quantum mechanics, because there will be copies of you having identical experiences in branches that are already (at least slightly) physically distinct.
I think you're missing some important ground in between "reflection process" and "PR exercise".
I can't speak for EV or the other people then on the boards, but from my perspective the purpose of the legal investigation was primarily to help facilitate justified trust. Sam had been seen by many as a trusted EA leader, and had previously been on the board of CEA US. It wouldn't have been unreasonable for people in EA (or even within EV) to start worrying that leadership were covering things up. Having an external investigation was, although not a cheap...