This is a special post for quick takes by Ivy Mazzola. Only they can create top-level comments.

Recently I was given a warning*[1] by mods for a comment* I wrote while frustrated. I apologize to those in that thread for possibly hurting feelings, creating extra stress, and adding labor, and I apologize to all for breaking forum norms. I especially regret if I made the forum seem like an aggressive environment. 

I am taking action to not cause a problem again. To that end, (1) I made and started using some Anki flashcards to better instill some frustration-management habits (mindfulness and CBT stuff). That said, I often use emotion to inspire myself to take action (especially when time-limited), so I might still write stuff while emotional. (2) Therefore I am also adding ChatGPT (GPT-4) to my workflow (also instilled via an Anki TAP deck, I hope): when I write something while upset, I will copy-paste the resulting draft into ChatGPT (GPT-4) and ask it to reword the upset-sounding parts. 

With this in mind, I've "redone" the comment here in a Google Doc[2], with edits that ChatGPT helped me write more quickly than I could have on my own. [I also, the next day, tried out GPT-4 to be sure I had the best tool for future use, and in future I will use that.] Doing this might seem dramatic, but for habit formation it is important to go back and fix mistakes/redo an action properly, even when the problem is in the past. And I do so publicly in case others might find use in the ideas to (1) use spaced-repetition software like Anki to instill mindfulness TAPs and (2) use GPT to rephrase text you may have written while emotional. You can request access to the doc if you want to see how [ChatGPT and GPT-4] perform at this task. 

[In closing, I again apologize. I think with these TAPs I am very unlikely to break forum norms again, and appreciate the reminders and chance to do better.]

  1. ^

    I used to have links here, but I thought they might embarrass others and possibly spread some misinformation I was attempting to address. The links are in the doc.

  2. ^

    The doc requires access because my original comment discussed misinformation I don't want to spread.

Eh, I read it. I think you were absolutely right, and if you were rude it was deserved in this case tbh. Seems insanely irresponsible to mention someone being cranky on Facebook in the same breath as abuse, given that the Eye of Sauron is pointed dead at us right now.  

Thanks. I definitely still think someone had to say something--likely a detailed rebuttal at that point. But there were better ways to say some parts. 

And there is a pernicious thing about aggressive language that I realized while rewording: it can feel like you are getting everything out, but now that I am rewording it, I see that I didn't make my actual point in the second half very well. Later I ended up commenting a second time to try to clarify my POV and truly help the person "get it", which was a timesink, added more clutter to that thread, and was even harder to word without coming off aggressive, since I was following up on my prior aggressive statements. Maybe if I had worded my first comment properly from the start, I would not have needed(?) to write a second comment at all. 

Non-emotional language seems way better for making a nuanced point, or any point where it is imperative that it not get warped. Example: I think using frustrated language in that thread increased the risk that readers thought I was anti-transparency, which I'm not. Anyway, ChatGPT is cool for this so far :)

I may benefit from that myself :D I will take a look :) 

Nice! I did have to bounce stuff around with it and ask it to add concepts, but doing that helped me get to the points I actually wanted to make that weren't being conveyed. If your writing is better to start with, you might get it in one shot. Better workflow notes and even session screenshots are in the doc.

This is an add-on to this comment I wrote and sort of to all the SBF-EA-related stuff I've written recently. I write this add-on mostly for personal reasons. 

I've argued that we should have patience when assigning blame to EA leadership and not assume leaders deserve blame or were necessarily incompetent in a way that counterfactual leaders would not have been. But this point is distinct from thinking there was nothing we could do or no signs to pay attention to. I don't want to be seen as arguing there was nothing that EAs in general could do, so here are my actual thoughts on what was, on its own, enough to warrant distancing ourselves from SBF, which it looks like basically all of us, EAs and non-EAs alike, missed. 

FWIW, I do think mistakes were made around SBF. I'm just not willing to pin it on EA leaders specifically (yet), or even EAs/EA itself specifically (to the exclusion of others). Anyone who watched SBF's interviews, including journalists and finance people, could have seen red flags. IMO, the major red flags in retrospect were things anyone who was paying attention (I was only paying a bit, but even I messed up here) could see: 

(1) SBF talking about Ponzi schemes, and some of his testimony regarding crypto regulation (I think?), which apparently made the Ponzi-scheme possibility look more real. 

Personal take and regrets: I saw neither of these myself, but my newly-EA gf thought they were morally troubling before the crash and told me. We had a couple of short conversations about it, which basically led to "Oof, IDK what to say" from me. I thought of looking it all up, or messaging prominent EAs on her word alone. But I did not, mostly due to confusion about what it meant: "Isn't this just the nature of crypto as an asset, something all people buying crypto should know? Or is this unethical? Am I getting into the moral dilemma that EAs just shouldn't do finance to E2G? Is that a bullet I want to bite, because I might have to argue it? And what's my 'ask', what am I hoping will happen as a result of my messaging someone?" 

I didn't think of it as a red flag for an upcoming crash and bankruptcy, and I didn't expect something to come out that could be formally charged as fraud. I guess someone who knows about Ponzi schemes can say whether I was dumb not to think of any of this. But it was a red flag that he didn't care about FTX users, and that he might not be "a good guy" (even by consequentialist standards; the balance gets way more complicated, and you can't be anywhere close to sure enough to take such risks with citizens' money). And regardless of SBF, it was a tip that the public consciousness was about to slant against crypto (even more than the growing disdain for "crypto bros" betrays), and that's a risk of association. 

I still kick myself for not messaging someone. It wouldn't have been that much notice, a couple of months maybe? But it might have helped EA distance itself proactively. Sigh. 

(2) SBF's violation of the Kelly criterion / biting bullets on the St. Petersburg paradox. 

Personal take and expressing shock/light scolding: I never knew how "all-in" he was, but I'd have found that super alarming, and on this I think I'd have tried (more seriously than with #1) to talk to someone about it. Basically all I know about betting is that you "never bet it all; always leave enough to bet another day", but I know it as the golden rule. It still troubles me that EAs and others seemingly thought SBF's responses were philosophically neutral or something, when actually it was a glaring red flag that the company would fail, even without fraud. And also a red flag that he was kind of self-deluding, or trying too hard to be clever by breaking rules. Like: if you want to make more money to do more good, just do the thing that is already known to make the most money in the long run (Kelly); don't pull numbers out of your ass to reinvent a wheel, except inevitably worse than before. This also tied into SBF acting way too morally sure of himself. Personally, I'd never bet Earth's entire future without others' consent on the basis of one moral theory coupled with the multiple-universe theory, in a situation that is called a paradox for good reason (it's not supposed to be an easy decision, which generally means you should defer to group consensus!).
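
To make the "never bet it all" intuition concrete, here is a minimal sketch (my own toy example with made-up win probability and payout numbers, not anything from SBF, FTX, or EA materials) comparing bankroll growth when staking the Kelly fraction versus going all-in on a favorable repeated bet:

```python
import random

# Toy illustration (assumed parameters, purely for intuition):
# repeatedly bet on a coin that wins with probability p_win and pays
# `payout` per unit staked, staking a fixed fraction of the bankroll each round.

def simulate(fraction, p_win=0.6, payout=1.0, rounds=1000, seed=0):
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(rounds):
        stake = bankroll * fraction
        if rng.random() < p_win:
            bankroll += stake * payout   # win: gain payout per unit staked
        else:
            bankroll -= stake            # lose: forfeit the stake
        if bankroll <= 0:
            return 0.0                   # ruined; no betting another day
    return bankroll

p, b = 0.6, 1.0
kelly_fraction = p - (1 - p) / b         # f* = p - q/b, here 0.2

print("Kelly fraction:", kelly_fraction)
print("Final bankroll, betting the Kelly fraction:", simulate(kelly_fraction))
print("Final bankroll, betting everything:", simulate(1.0))
```

The all-in strategy gets wiped out the first time the bet loses, while the Kelly bettor compounds over the long run, which is the whole point of the golden rule above.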

This all said: I think EAs' philosophical naivety here, or brushing it off, is disappointingly normal? As evidenced by the fact that no one else in the world wrote a hit piece about SBF over it (that I'm aware of). Maybe bystander effect too: since that stuff is way more public than the ex-Alameda employee complaints (but CEA investigated those, at least kinda, idk yet), it'd be normal to think "Well, lots of people are seeing this, and if no one else, including FTX investors, sees it as a problem, I guess it must be okay." Idk. I'd like EAs and non-EAs to do better at pinpointing problematic actors in this regard (and we can only control EA, so we should focus on this failure mode a bit), but my complaints are all qualitatively different from what the Time article is talking about.

I expect I'm not the only one who feels as I do re: 1 and 2, including vague and specific guilt, even though I was by all accounts a total outsider. I'm guessing most people just don't talk about it, and if I'm not the only one, that's one reason it feels very weird to me to pin it on EA leaders (as of right now).

That basically everyone missed or ignored these flags does not, I think, bode well for the idea that replacing EA leaders would have meant it was caught, or that replacements will do better. As a silver lining, I expect the odds of catching bad signs like this to go up in future for all potential leaders, because we will have learned this hard lesson and the lesson will be made overt to anyone newly elected. But I still think we want at least one designated person who would have caught it with or without the ex-Alameda reports, regardless of what could have been gleaned from those reports, because I think some sort of fiasco could have been caught either way. Surprisingly, I consider those reports relatively minor flags compared to 1 and 2. The difference is that for those, it's EA leaders who take the blame, whereas for 1 and 2 it's basically everyone who was paying a bit of attention. 

Most humans won't catch troubling dark-triad actors. That's probably okay, because we don't want most people to have low-trust personalities. As things stand, I'd be more in favor of adding a new person to the leadership mix, or hiring a social-risk specialist or something for the CH team, whose overt job is to catch signs of unethical and troubling behavior by EA and EA-adjacent people, and who is structurally greenlighted to navigate possibly-manipulative people as though they are probably acting in bad faith, so as not to be as easily fooled as most leadership, I think, would be in cases like SBF's :] 

I could say a lot more, be more precise, and double-check some stuff in #1 that I still never did, but this is just a shortform. 

[[URGENT]] Seeking people to lead phone-banking coworking for Carrick Flynn's campaign today, tomorrow, or Tuesday in gather.town! There is an EA coworking room in gather.town already.

This is a strong counterfactual opportunity! This event can be promoted on a lot of EA Facebook pages as a casual and fun event (after all, it won't even be affiliated with "EA", just some people who are into this getting together to do it), hopefully leading to many more phone-banker hours in the next couple of days.

Would you or anyone else be willing to lead this? Please share! Hosts will be trained in phone-banking and in how to train their participants in phone-banking. 

DM me and/or CarolineJ if you are keen to help, and we will add you to the Slack with all the phone-banking instructions. It is easy! (I will be traveling a lot, so DMing both of us is a good bet.)

You can read more about Carrick's campaign from an EA perspective here:
The best $5,800 I’ve ever donated (to pandemic prevention)
and
Why Helping the Flynn Campaign is especially useful right now

Read about the EA Gathertown space here:
EA coworking/lounge space on gather.town
 

I'll be able to do phone-banking on Tuesday from 10am to 1pm PT - join then!

And I'm happy to help coordinate outside of this! 

Ivy, I'm free today from 9:30-10am, 10:30-11am, and 11:30am-1pm (all times PT). Unfortunately, I do have a handful of client calls I'll need to take in the in-between times. I did a full afternoon of calls yesterday, so I have some ideas about how to do them well.
