Emrik

1769 karma · Joined · Norway

Bio

“In the day I would be reminded of those men and women,
Brave, setting up signals across vast distances,
Considering a nameless way of living, of almost unimagined values.”

How others can help me

I would greatly appreciate anonymous feedback, or just feedback in general. Doesn't have to be anonymous.

Comments
292

(Publishing comment-draft that's been sitting here two years, since I thought it was good (even if super-unfinished…), and I may wish to link to it in future discussions. As always, feel free to not-engage and just be awesome. Also feel free to not be awesome, since awesomeness can only be achieved by choice (thus, awesomeness may be proportional to how free you feel to not be it).)

Yes! This relates to what I call costs of compromise.

Costs of compromise

As you allude to with the exponential decay of the green dots in your last graph, there are exponential costs to compromising what you are optimizing for in order to appeal to a wider variety of interests. On the flip-side, how usefwl you can expect to be to a subgroup scales steeply (plausibly exponentially) with how purely you optimize for that particular subset of people (depending on how independent the optimization criteria are). This strategy is also known as "horizontal segmentation".[1]

The benefits of segmentation ought to be compared against what is plausibly an exponential decay in the number of people who fit a marginally smaller subset of optimization criteria. So it's not obvious in general whether you should on the margin try to aim more purely for a subset, or aim for broader appeal.
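
A minimal numeric sketch of the two opposing exponentials (all numbers hypothetical): each extra criterion shrinks the audience that fits, but multiplies usefwlness to whoever remains, so whether narrowing pays off depends on which rate wins.

```python
# Toy model (hypothetical numbers): each additional optimization criterion keeps
# only a fraction of the audience, but multiplies how usefwl you are to whoever
# remains. Both effects are exponential in the number of criteria.
retention_per_criterion = 0.3   # fraction of people who also fit the next criterion
value_multiplier = 2.0          # usefwlness gain per criterion, for those who fit

for k in range(6):
    audience = retention_per_criterion ** k      # share of people fitting all k criteria
    value_per_person = value_multiplier ** k     # usefwlness to each of them
    print(f"{k} criteria: audience {audience:.4f}, "
          f"value/person {value_per_person:.1f}, total {audience * value_per_person:.3f}")
```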

Specialization vs generalization

This relates to what I think is one of the main mysteries/trade-offs in optimization: specialization vs generalization. It explains why scaling your company can make it more efficient (economies of scale),[2] why the brain is modular,[3] and how Howea palm trees can speciate without the aid of geographic isolation (aka sympatric speciation constrained by genetic swamping) by optimising their gene pools for differentially-acidic patches of soil and evolving separate flowering intervals in order to avoid pollinating each other.[4]

Conjunctive search

When you search for a single thing that fits two or more criteria, that's called "conjunctive search". In the image, try to find an object that's both [colour: green] and [shape: X].

My claim is that this analogizes to how your brain searches for conjunctive ideas: the conjunctive target has to be selected from a vast preconscious distribution of distractors that score high on only one of the criteria.
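
A toy simulation of that claim (my own illustrative framing, not something from the original discussion): if candidates score independently on two criteria, the fraction that clears a high bar on both is roughly the square of the fraction that clears it on one.

```python
import random

random.seed(0)

# Toy model: each candidate idea gets independent scores on two criteria;
# only candidates scoring high on BOTH count as a conjunctive hit.
N = 100_000
threshold = 0.95
hits_one = sum(1 for _ in range(N) if random.random() > threshold)
hits_both = sum(1 for _ in range(N)
                if random.random() > threshold and random.random() > threshold)

print(f"high on one criterion: ~{hits_one / N:.3%} of candidates")
print(f"high on both criteria: ~{hits_both / N:.4%} of candidates")  # roughly the square
```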

10d6 vs 1d60

Preamble 2: When you throw ten 6-sided dice (written as "10d6"), the probability of getting a max roll is much lower than if you were throwing a single 60-sided die ("1d60"). But if we assume that the ten dice are strongly correlated, that has the effect of squishing the normal distribution to look like the uniform distribution, and you're much more likely to roll extreme values.

Moral: Your probability of sampling extreme values from a distribution depends on the number of variables that make it up (i.e. how many factors are convolved over), and on the extent to which they are independent. Thus, costs of compromise are much steeper if you're sampling for outliers (a realm which includes most creative thinking and altruistic projects).
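
A quick sketch of the dice claim, combining the exact calculation with a simulation of the perfectly-correlated case (parameters are only for illustration):

```python
import random

random.seed(0)

# Exact probability of a maximum roll.
p_max_10d6 = (1 / 6) ** 10     # all ten independent dice must show 6
p_max_1d60 = 1 / 60
print(f"P(max | 10d6 independent) = {p_max_10d6:.2e}")
print(f"P(max | 1d60)             = {p_max_1d60:.2e}")

# Perfectly correlated 10d6: one die is effectively rolled and copied ten times,
# so the sum is uniform over {10, 20, ..., 60} and extremes become common again.
N = 100_000
max_hits = sum(1 for _ in range(N) if random.randint(1, 6) == 6)
print(f"P(max | 10d6 perfectly correlated) ≈ {max_hits / N:.3f}")
```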

Spaghetti-sauce fallacies 🍝

If you maximally optimize a single spaghetti sauce for profit, there exists a global optimum for some taste, quantity, and price. You might then declare that this is the best you can do, and indeed this is a common fallacy I will promptly give numerous examples of. [TODO…]

But if you instead allow yourself to optimize several different spaghetti sauces, each one tailored to a specific market, you can make much more profit compared to if you have to conjunctively optimize a single thing.

Thus, a spaghetti-sauce fallacy is when somebody asks "how can we optimize thing T more for criteria C?" when they should be asking "how can we chunk/segment C into n cohesive/dimensionally-reduced segments so we can optimize for {C1, ..., Cn} disjunctively?"
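
A toy illustration of the disjunctive version (the clusters and payoff function are entirely made up): one compromise sauce for the whole market versus one sauce per segment.

```python
# Hypothetical market with three taste clusters; payoff falls off with the distance
# between a sauce's "spec" and a cluster's ideal spec on a single taste axis.
clusters = {"plain": 2.0, "spicy": 8.0, "chunky": 5.0}   # ideal value per cluster

def payoff(sauce, ideal):
    return max(0.0, 1.0 - abs(sauce - ideal) / 5.0)       # 1 at the ideal, 0 when far away

# Conjunctive: one sauce for everyone (the best single compromise sits in the middle).
one_sauce = 5.0
print("one sauce:", sum(payoff(one_sauce, ideal) for ideal in clusters.values()))

# Disjunctive: one sauce per segment, each optimized for its own cluster.
print("segmented:", sum(payoff(ideal, ideal) for ideal in clusters.values()))
```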


People rarely vote based on usefwlness in the first place

As a sidenote: People don't actually vote (/allocate karma) based on what they find usefwl. That's a rare case. Instead, people overwhelmingly vote based on what they (intuitively) expect others will find usefwl. This rapidly turns into a Keynesian Beauty Contest with many implications. Information about people's underlying preferences (or what they personally find usefwl) is lost as information cascades are amplified by recursive predictions. This explains approximately everything wrong about the social world.
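
A hedged toy model of that cascade (the deference weight and noise level are invented for illustration): when voters weigh the visible score far more than their private sense of usefwlness, the final karma mostly reflects whichever way the first few votes happened to fall, not the post's actual usefwlness.

```python
import random

random.seed(1)

# Each voter has a noisy private signal of a post's true usefwlness, but mostly
# votes based on the running score they can see (what they expect others find usefwl).
true_usefwlness = 0.3          # the post is actually mediocre
deference = 0.9                # weight placed on the visible score vs the private signal

score = 0
for voter in range(200):
    private_signal = true_usefwlness + random.gauss(0, 0.3)
    social_signal = 1.0 if score > 0 else -1.0 if score < 0 else 0.0
    judgement = (1 - deference) * private_signal + deference * 0.5 * social_signal
    score += 1 if judgement > 0 else -1

print(f"final score: {score}  (locked in by the earliest votes, not by usefwlness)")
```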

Already in childhood, we learn to praise (and by extension vote) based on what kinds of praise other people will praise us for. This works so well as a general heuristic that it gets internalized and we stop being able to notice it as an underlying motivation for everything we do.

  1. ^

    See e.g. spaghetti sauce.

  2. ^

    Scale allows subunits (e.g. employees) to specialize at subtasks.

  3. ^

    Every time a subunit of the brain has to pull double-duty with respect to what it adapts to, the optimization criteria compete for its adaptation—this is also known as "pleiotropy" in evolutionary biology, and as "polytely" in… well, some people have called it that, and it's a good word.

  4. ^

    This palm-tree example (and the others) is partially optimized/goodharted for seeming impressive, but I leave it in because it also happens to be deliciously interesting and possibly entertaining as an example of the costs of compromise. I want to emphasize how ubiquitous this trade-off is.

Oh, this is excellent! I do a version of this, but I haven't paid enough attention to what I do to give it a name. "Blurting" is perfect.

I try to make sure to always notice my immediate reaction to something, so I can more reliably tell what my more sophisticated reasoning modules transform that reaction into. Almost all the search-process imbalances (e.g. filtered recollections, motivated stopping, etc.) come into play during the sophistication, so it's inherently risky. But refusing to reason past the blurt is equally inadvisable.

This is interesting from a predictive-processing perspective.[1] The first thing I do when I hear someone I respect tell me their opinion, is to compare that statement to my prior mental model of the world. That's the fast check. If it conflicts, I aspire to mentally blurt out that reaction to myself.

It takes longer to generate an alternative mental model (ie. sophistication) that is able to predict the world described by the other person's statement, and there's a lot more room for bias to enter via the mental equivalent of multiple comparisons. Thus, if I'm overly prone to conform, that bias will show itself after I've already blurted out "huh!" and made note of my prior. The blurt helps me avoid the failure mode of conforming and feeling like that's what I believed all along.

Blurting is a faster and more usefwl variation on writing down your predictions in advance.

  1. ^

    Speculation. I'm not very familiar with predictive processing, but the claim seems plausible to me on alternative models as well.

I disagree a little bit with the credibility of some of the examples, and want to double-click others. But regardless, I think this is a very productive train of thought and thank you for writing it up. Interesting!

And btw, if you feel like a topic of investigation "might not fit into the EA genre", and yet you feel like it could be important based on first-principles reasoning, my guess is that that's a very important lead to pursue. Reluctance to step outside the genre, and thinking that the goal is to "do EA-like things", is exactly the kind of dynamic that's likely to lead the whole community to overlook something important.

Some selected comments or posts I've written

  • Taxonomy of cheats, multiplex case analysis, worst-case alignment
  • "You never make decisions, you only ever decide between strategies"
  • My take on deference
  • Dumb
  • Quick reasons for bubbliness
  • Against blind updates
  • The Expert's Paradox, and the Funder's Paradox
  • Isthmus patterns
  • Jabber loop
  • Paradox of Expert Opinion
  • Rampant obvious errors
  • Arbital - Absorbing barrier
  • "Decoy prestige"
  • "prestige gradient"
  • Braindump and recommendations on coordination and institutional decision-making
  • Social epistemology braindump (I no longer endorse most of this, but it has patterns)

Other posts I like

  • The Goddess of Everything Else - Scott Alexander
    • “The Goddess of Cancer created you; once you were hers, but no longer. Throughout the long years I was picking away at her power. Through long generations of suffering I chiseled and chiseled. Now finally nothing is left of the nature with which she imbued you. She never again will hold sway over you or your loved ones. I am the Goddess of Everything Else and my powers are devious and subtle. I won you by pieces and hence you will all be my children. You are no longer driven to multiply conquer and kill by your nature. Go forth and do everything else, till the end of all ages.”
  • A Forum post can be short - Lizka
    • Succinctly demonstrates how often people goodhart on length or other irrelevant criteria like effort moralisation. A culture of appreciating posts for the practical value they add to you specifically would incentivise writers to pay more attention to whether they are optimising for expected usefwlness or just signalling.
  • Changing the world through slack & hobbies - Steven Byrnes
    • Unsurprisingly, there's a theme to what kind of posts I like. Posts that are about de-Goodharting ourselves.
  • Hero Licensing - Eliezer Yudkowsky
    • Stop apologising, just do the thing. People might ridicule you for believing in yourself, but just do the thing.
  • A Sketch of Good Communication - Ben Pace
    • Highlights the danger of deferring if you're trying to be an Explorer in an epistemic community.
  • Holding a Program in One's Head - Paul Graham
    • "A good programmer working intensively on his own code can hold it in his mind the way a mathematician holds a problem he's working on. Mathematicians don't answer questions by working them out on paper the way schoolchildren are taught to. They do more in their heads: they try to understand a problem space well enough that they can walk around it the way you can walk around the memory of the house you grew up in. At its best programming is the same. You hold the whole program in your head, and you can manipulate it at will.

      That's particularly valuable at the start of a project, because initially the most important thing is to be able to change what you're doing. Not just to solve the problem in a different way, but to change the problem you're solving."

I predict with high uncertainty that this post will have been very usefwl to me. Thanks!

Here's a potential missing mood: if you read/skim a post and you don't go "ugh that was a waste of time" or "wow that was worth reading"[1], you are failing to optimise your information diet and you aren't developing intuition for what/how to read.

  1. ^

    This is importantly different from going "wow that was a good/impressive post". If you're just tracking how impressed you are by what you read (or how useful you predict it is for others), you could be wasting your time on stuff you already know and/or agree with. Succinctly, you need to track whether your mind has changed--track the temporal difference.

[weirdness-filter: ur weird if you read m commnt n agree w me lol]

Doing private capabilities research seems not obviously net-bad, for some subcategories of capabilities research. It constrains your expectations about how AGI will unfold, meaning you have a narrower target for your alignment ideas (incl. strategies, politics, etc.) to hit. The basic case: If an alignment researcher doesn't understand how gradient descent works, I think they're going to be less effective at alignment. I expect this to generalise for most advances they could make in their theoretical understanding of how to build intelligences. And there's no fundamental difference between learning the basics and doing novel research, as it all amounts to increased understanding in the end.

That said, it would in most cases be very silly to publish about that increased understanding, and people should be disincentivised from doing so. 

(I'll delete this comment if you've read it and you want it gone. I think the above can be very bad advice to give some poorly aligned selfish researchers, but I want reasonable people to hear it.)

EA: We should never trust ourselves to do act utilitarianism, we must strictly abide by a set of virtuous principles so we don't go astray.

Also EA: It's ok to eat animals as long as you do other world-saving work. The effort and sacrifice it would take to relearn my eating patterns just isn't worth it on consequentialist grounds.


Sorry for the strawmanish meme format. I realise people have complex reasons for needing to navigate their lives the way they do, and I don't advocate aggressively trying to make other people stop eating animals. The point is just that I feel like the seemingly universal disavowal of utilitarian reasoning has been insufficiently vetted for consistency. If we claim that utilitarian reasoning can be blamed for the FTX catastrophe, then we should ask ourselves what else we should apply that lesson to; or we should recognise that FTX isn't a strong counterexample to utilitarianism, and we can still use it to make important decisions.

(I realised after I wrote this that the metaphor between brains and epistemic communities is less fruitfwl than it seems like I think, but it's still a helpfwl frame in order to understand the differences anyway, so I'm posting it here. ^^)


TL;DR: I think people should consider searching for giving opportunities in their networks, because a community that efficiently capitalises on insider information may end up doing more efficient and more varied research. There are, as you would expect, both problems and advantages to this, but it definitely seems good to encourage on the margin.

Some reasons to prefer decentralised funding and insider trading

I think people are too worried about making their donations appear justifiable to others. And what people expect will appear justifiable to others is based on the most visibly widespread evidence they can think of.[1] It just so happens that that is also the basket of information that everyone else bases their opinions on as well. The net effect is that a lot less information gets considered in total.

Even so, there are very good reasons to defer to consensus among people who know more, not act unilaterally, and be epistemically humble. I'm not arguing that we shouldn't take these considerations into account. What I'm trying to say is that even after you've given them adequate consideration, there are separate social reasons that could make it tempting to defer, and we should keep this distinction in mind so we don't handicap ourselves just to fit in.

Consider the community from a bird's eye perspective for a moment. Imagine zooming out, and seeing EA as a single organism. Information goes in, and causal consequences go out. Now, what happens when you make most of the little humanoid neurons mimic their neighbours in proportion to how many neighbours they have doing the same thing?

What you end up with is a Matthew effect not only for ideas, but also for the bits of information that get promoted to public consciousness. Imagine ripples of information flowing in only to be suppressed at the periphery, way before they've had a chance to be adequately processed. Bits of information accumulate trust in proportion to how much trust they already have, and there are no well-coordinated checks that can reliably abort a cascade past a point.

To be clear, this isn't how the brain works. The brain is designed very meticulously to ensure that only the most surprising information gets promoted to universal recognition ("consciousness"). The signals that can already be predicted by established paradigms are suppressed, and novel information gets passed along with priority.[2] While it doesn't work perfectly for all things, consider just the fact that our entire perceptual field gets replaced instantly every time we turn our heads.

And because neurons have been harshly optimised for their collective performance, they show a remarkable level of competitive coordination aimed at making sure there are no informational short-circuits or redundancies.
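
A minimal sketch of that suppression mechanism in predictive-processing terms (purely illustrative, not a model of real neurons): each layer forwards only the prediction error, so well-predicted inputs get passed along as roughly nothing.

```python
# Illustrative predictive-coding-style message passing: a layer forwards a signal
# upward only to the extent that its prediction failed to anticipate it.
def forward_prediction_error(signal, prediction, gain=1.0):
    """Pass along the surprise (prediction error), not the raw signal."""
    return gain * (signal - prediction)

prediction = 5.0                      # what the higher layer expected
for signal in [5.0, 5.1, 4.9, 9.0]:  # only the last input is genuinely surprising
    error = forward_prediction_error(signal, prediction)
    print(f"input {signal}: passed along {error:+.1f}")
```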

Returning to the societal perspective again, what would it look like if the EA community were arranged in a similar fashion?

I think it would be a community optimised for the early detection and transmission of market-moving information--which in a finance context refers to information that would cause any reasonable investor to immediately make a decision upon hearing it. In the case where, for example, someone invests in a company because they're friends with the CEO and received private information, it's called "insider trading" and is illegal in some countries.

But it's not illegal for altruistic giving! Funding decisions based on highly valuable information only you have access to is precisely the thing we'd want to see happening.

If, say, you have a friend who's trying to get time off from work in order to start a project, but no one's willing to fund them because they're a weird-but-brilliant dropout with no credentials, you may have insider information about their trustworthiness.  That kind of information doesn't transmit very readily, so if we insist on centralised funding mechanisms, we're unknowingly losing out on all those insider trading opportunities.

Where the architecture of the brain efficiently promotes the most novel information to consciousness for processing, EA has the problem where unusual information doesn't even pass the first layer.

(I should probably mention that there are obviously biases that come into play when evaluating people you're close to, and that could easily interfere with good judgment. It's a crucial consideration. I'm mainly presenting the case for decentralisation here, since centralisation is the default, so I urge you keep some skepticism in mind.)


There is no way around having to make trade-offs here. One reason to prefer a central team of highly experienced grant-makers to be doing most of the funding is that they're likely to be better at evaluating impact opportunities. But this needn't matter much if they're bottlenecked by bandwidth--both in terms of having less information reach them and in terms of having less time available to analyse what does come through.[3]

On the other hand, if you believe that most of the relevant market-moving information in EA is already being captured by relevant funding bodies, then their ability to separate the wheat from the chaff may be the dominating consideration.

While I think the above considerations make a strong case for encouraging people to look for giving opportunities in their own networks, I think they apply with greater force to adopting a model like impact markets.

They're a sort of compromise between central and decentralised funding. The idea is that everyone has an incentive to fund individuals or projects where they believe they have insider information indicating that the project will show itself to be impactfwl later on. If the projects they opportunistically funded at an early stage do end up producing a lot of impact, a central funding body rewards the maverick funder by "purchasing the impact" second-hand.

Once a system like that is up and running, people can reliably expect the retroactive funders to make it worth their while to search for promising projects. And when people are incentivised to locate and fund projects at their earliest bottlenecks, the community could end up capitalising on a lot more (insider) information than would be possible if everything had to be evaluated centrally.
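
A toy sketch of that incentive structure (all numbers invented): the early funder profits exactly when their insider information was right.

```python
# Retroactive-funding toy model: a maverick funder backs projects early using insider
# information, and a central retro funder later "buys the impact" of whichever
# projects turned out well.
projects = [
    {"name": "weird-but-brilliant dropout's project", "cost": 10, "realised_impact": 50},
    {"name": "project that fizzled",                  "cost": 10, "realised_impact": 0},
]

retro_price_per_impact = 0.5   # what the retro funder pays per unit of realised impact

for p in projects:
    payout = retro_price_per_impact * p["realised_impact"]
    print(f'{p["name"]}: spent {p["cost"]}, retro payout {payout}, net {payout - p["cost"]}')
```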

(There are, of course, more complexities to this, and you can check out the previous discussions on the forum.)

 

  1. ^

    This doesn't necessarily mean that people defer to the most popular beliefs, but rather that even if they do their own thinking, they're still reluctant to use information that other people don't have access to, so it amounts to nearly the same thing.

  2. ^

    This is sometimes called predictive processing. Sensory information comes in and gets passed along through increasingly conceptual layers. Higher-level layers are successively trying to anticipate the information coming in from below, and if they succeed, they just aren't interested in passing it along.

    (Imagine if it were the other way around, and neurons were increasingly shy to pass along information in proportion to how confused or surprised they were. What a brain that would be!)

  3. ^

    As an extreme example of how bad this can get, an Australian study on medicinal research funding estimated the length of average grant proposals to be "between 80 and 120 pages long and panel members are expected to read and rank between 50 and 100 proposals. It is optimistic to expect accurate judgements in this sea of excessive information." -- (Herbert et al., 2013)

    Luckily it's nowhere near as bad for EA research, but consider the Australian case as a clear example of how a funding process can be undeniably and extremely misaligned with the goal of producing good research.

Hm, I think you may be reading the comment from a perspective of "what actions do the symbols refer to, and what would happen if readers did that?" as opposed to "what are the symbols going to cause readers to do?"[1]

The kinds of people who are able to distinguish adequate vs inadequate good judgment shouldn't be encouraged to defer to conventional signals of expertise. But those are also disproportionately the people who, instead of feeling like deferring to Eliezer's comment, will respond "I agree, but..."

  1. ^

    For lack of a better term, and because there should be a term for it: Dan Sperber calls this the "cognitive causal chain", and contrasts it with the confabulated narratives we often have for what we do. I think it summons up the right image.

    When you read something, aspire to always infer what people intend based on the causal chains that led them to write that. Well, no. Not quite. Instead, aspire to always entertain the possibility that the author's consciously intended meaning may be inferred from what the symbols will cause readers to do. Well, I mean something along these lines. The point is that if you do this, you might discover a genuine optimiser in the wild. : )

Ideally, EigenTrust or something similar should be able to help with regranting once it takes off, no? : )
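
For concreteness, a minimal EigenTrust-style sketch (simplified from Kamvar et al.'s algorithm; the trust matrix and parameters here are made up): local trust scores are row-normalised and power-iterated together with a pre-trusted distribution.

```python
import numpy as np

# Rows are the truster, columns the trustee; entries are nonnegative local trust scores.
local = np.array([
    [0.0, 4.0, 1.0],
    [2.0, 0.0, 3.0],
    [1.0, 1.0, 0.0],
])

C = local / local.sum(axis=1, keepdims=True)   # row-normalise local trust
p = np.ones(3) / 3                             # pre-trusted distribution (uniform here)
a = 0.15                                       # weight on pre-trusted peers

t = p.copy()
for _ in range(50):                            # power iteration to the global trust vector
    t = (1 - a) * C.T @ t + a * p

print("global trust scores:", np.round(t, 3))
```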
