quila

4 karma · www.lesswrong.com/users/quila

Comments (4)

'0.5 standard deviations' seems very significant at the outer edges of the distribution, because of https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule#Table_of_numerical_values.

But people at the outer edges will also be rare in the data this statement comes from, so it may not hold for them.

I think there should be more investigation somewhere[1] of whether generic cognitive improvements hold at the upper end[2], because that seems like the main way they could be high-impact (by augmenting the best EAs/alignment researchers, under the view that impact scales closer to exponentially than linearly with intelligence).

  1. ^

    In the sense of 'by now, civilization should have done this', but I'm not saying anything about whether it would be good to focus on at this point.

    (and maybe it has received that focus, and I just don't know where)

  2. ^

    (eg whether fixing an iron deficiency in someone who is already at +3sd pushes them to +3.5sd; see the sketch below for how much that shifts rarity)
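
A rough sketch of the rarity arithmetic behind footnote 2, assuming the trait is normally distributed (the scipy call is just one convenient way to get upper-tail probabilities, and the z-values are illustrative):

```python
# Sketch: how much rarer a +0.5 sd shift makes someone,
# assuming the trait is normally distributed.
from scipy.stats import norm

for z in (0.0, 0.5, 3.0, 3.5):
    p = norm.sf(z)  # survival function: fraction of the population above z sd
    print(f"+{z:.1f} sd: roughly 1 in {1 / p:,.0f}")

# Approximate output:
#   +0.0 sd: roughly 1 in 2
#   +0.5 sd: roughly 1 in 3
#   +3.0 sd: roughly 1 in 741
#   +3.5 sd: roughly 1 in 4,299
# The same 0.5 sd gain makes a +3 sd person ~6x rarer,
# versus ~1.6x for someone starting at the mean.
```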

hello i'm endlessly scrolling through the ea forum 'practical' tag and arrived at this post :]

To be honest, I settled on this particular product a while ago and don’t remember exactly why I did, but I stand by my choice. The doses make sense, the bioavailability is good, and it checks a few boxes I particularly care about.

i wonder if https://huel.com/products/huel-daily-greens is better, especially if https://www.nature.com/articles/s43016-019-0005-1 is relevant. though i don't know the reasoning behind its ingredient list, and it's possible they just threw a bunch of ingredients together without much thought, like those supplements your post mentioned

I think it definitely does, if we're in a situation where an S-risk is on the horizon with some sufficient (subjectively judged) probability. Also consider https://carado.moe/when-in-doubt-kill-everyone.html (and the author's subsequent updates).

... of course, the whole question is subjective, in the sense of being a moral one.

“You didn’t trust yourself,” Hirou whispered.  “That’s why you had to touch the Sword of Good.”

Answer by quila

i highly suggest reading the sequences on lesswrong.

if you suspect you might be capable of it, you could also start reading about alignment and potentially contributing to it. i know of at least one person who's been doing (imo good) alignment research since they were in high school. many working on AI catastrophic risk (myself included) believe there's not much time left for you to have a career, so you may want to look into those arguments.