Co-founder of Concentric Policies
Talk to me about American governance/political systems/democracy
My journey to EA:
Two considerations:
1. does protecting democracy have to be "painting with leftist colors"?
2. even if it were, does the ROI justify it?
On the first, as noted in this EAGxVirtual lightning talk I gave on the US context, the design of the political system is a big upstream cause of authoritarian voter bases and to a larger extent authoritarian politicians. Many of the reforms to the US political system that would in the short to long term reduce the antidemocratic threat are "bi-populist," as I like to call it.
Left- and right-wing populists are generally for campaign finance reform, preventing politicians from becoming lobbyists, having a districting and voting system that enables third parties, etc. There are some notable, very partisan exceptions, like moving from the Electoral College to a national popular vote and making the Senate less minoritarian.
In the US context, I think disciplined and skilled advocates can keep political system reform a populist/anti-establishment issue and avoid culture war framing. I'm not sure how this generalizes to Europe or Germany. While I don't think it generalizes 100%, I suspect it generalizes at least a little.
On the second, I think the ROI on this issue is uniquely high compared to other political issues. Small-l liberal democracies (e.g. Germany, the United States, Italy) falling into something other than liberal democracy (Hungary is a good example of this) strikes me as patently super bad for the suffering of humans/animals, the long-term future, the EA agenda, and any other issue we might care about. I think this is uniquely true of the United States because it is the world superpower and the leading place for emerging technologies. However, the stakes are high even in places like Germany. How much would be lost by an authoritarian nativist government coming to power and eliminating all German foreign aid alone?
In short, if the great powers stop being liberal democracies, a lot of our agenda becomes moot because we will have bigger problems. Instead of working to make German aid more effective, we will be working to get the government to give any aid at all and to ensure an authoritarian government doesn't manipulate the political process so that it never relinquishes power. Instead of working to get the United States government to approach international AI coordination in a way that doesn't lead to an international arms race in capabilities, we will be fighting to keep a strongman from disrupting the global order that such coordination is underpinned by.
I think the ironic thing is that many EAs would actually say that morality is subjective, much as your friend claimed. However, the fact that morality is subjective doesn't stop us from adopting EA principles.
And what led us to these principles over all the others we could adopt in a universe of subjective morality? We think they are the ones that make the most sense. The drowning child exercise is a powerful example that most people's moral intuitions logically extrapolate to principles at the core of EA. If that is the case, then we are simply asking people to be consistent with the logic of their own morality rather than telling them to accept these principles as objective morality.
I think the drowning child thought experiment in Peter Singer's Famine, Affluence, and Morality is a great and compelling entry point for people of all walks of life to understand why many EAs are driven to do what they do.
People have different definitions of EA within the EA community. However, the definitions that get more buy-in tend to be simpler. I think the lowest-common-denominator definition of EA is that it is both a set of principles and a community centered around the belief that 1) we have a moral obligation to do good in the world and 2) we should be very thoughtful about how we do good so that we end up doing more good.
So when it comes to "programs we'd like to see" including "a comprehensive investigation into FTX<>EA connections / problems," I take it that you disagree with that recommendation. I'd be interested to hear from those who proposed it what they hope to get out of it.
I'm not in an authoritative position to say it would be fruitful. But I had a conversation with someone who knew much more intimately how exposed non-FTX EAs were to SBF/FTX prior to the crash, and they said there are still many people in influential positions in EA who have not been held accountable for having had enough exposure to raise a yellow flag (about SBF/FTX governance practices and behavior that, on first look, would have appeared value-misaligned).
That seems to me like one concrete benefit to the community of having another investigation. I've heard from a few people that the multi-tens-of-millions-dollar penthouse was known to multiple influential EAs, and Lewis's book corroborates that. The penthouse and FTX's sponsorship deals (paying way over market to sponsor an e-sports team, or buying Storybook Brawl) appear to me like the clearest yellow/red flags that should have elicited scrutiny from non-FTX EAs.
As a community member, I'd like to know whether influential non-FTX EAs knew about these flags and, if so, why they didn't scrutinize them further.
I've heard from one person a rationale for why the penthouse made sense, and there could be more merits to why some of these apparent yellow/red flags actually aren't flags at all. Yet this discussion doesn't seem to be happening publicly, and I don't know to what extent it's happening privately, but it strikes me as a discussion that should happen and should be public to the community.
The FTX episode--not whether EAs could have caught the fraud, but whether they should have been scrutinizing FTX/SBF more--is an important test of how well the community/movement currently manages itself, especially the orgs and people with the most power in shaping the movement. That is, there are important lessons to be learned from it, especially since there were known concerns about SBF as early as the Alameda days (another thing whispered about on the Forum that we didn't get more insight into until Lewis's book).
"It's unlikely that a similar disaster will happen soon, so it might not be particularly urgent to set up programs to prevent similar future disasters."
^ This sentiment reflects why I'm worried that some parts of EA haven't fully learned the lessons of the FTX saga (which some think apply only to FTX rather than the broader community, though I've yet to be convinced of that). When triaging, you can always push off something important that isn't urgent, but this is the slippery slope that leads to never doing it before it's too late. Governance and PR disasters are not always going to be foreseeable.
Also, memory fades with time, which can affect the ability to understand what happened.
There are a few things I want to flag on the topic of investigations related to FTX.
Joey has answered this before elsewhere (i.e. why doesn't CE just open programs instead of spinning off charities). The answer is that starting a charity leads to more ownership and thus better results.
I'd also add that putting many programs in one charity raises the stakes of the leadership's judgment and decision-making. More charities, in a way, act like diversification.