
Sometimes I hear discussions like the following:

Amy: I think Cause A is 300x larger in scale than Cause B.

Bernard: I agree, but maybe we should put significant resources towards Cause B anyway, because Cause A might be 100x less tractable. According to the ITN framework, we should put one-third as many resources towards Cause B as towards Cause A.

Causes can easily be 300x larger in scale than other causes.[1] But I think Bernard's claim that Cause B is 100x more tractable is actually a very strong one. I argue that Bernard must be implicitly claiming that Cause B is unusually tractable, that there is a strong departure from logarithmic returns, or that there is no feasible plan of attack for Cause A.

Review of the ITN framework

Recall the importance-tractability-neglectedness (ITN) framework for estimating cost-effectiveness:

  • Importance = utility gained / % of problem solved
  • Tractability = % of problem solved / % increase in resources
  • Neglectedness = % increase in resources / extra $

The product of all three factors gives us utility gained / extra $, the cost-effectiveness of spending more resources on the problem. By replacing $ with another resource like researcher-hours, we get the marginal effectiveness of adding more of that resource.
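As a minimal sketch of how the three ratios chain together (the numbers are made up purely for illustration):

```python
# Made-up numbers, purely to show that the intermediate units cancel:
importance = 1e6      # utility gained per % of problem solved
tractability = 0.05   # % of problem solved per % increase in resources
neglectedness = 1e-7  # % increase in resources per extra dollar

# (utility / %solved) * (%solved / %resources) * (%resources / $) = utility / $
marginal_cost_effectiveness = importance * tractability * neglectedness
print(marginal_cost_effectiveness)  # ~0.005 utility gained per extra dollar
```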

In the 80,000 Hours page on ITN, scale spans 8 orders of magnitude, neglectedness 6 orders of magnitude, and tractability (which 80k calls solvability) only 4. In practice, I think tractability actually only spans around 2-3 orders of magnitude for problems we spend time analyzing, except in specific circumstances.

Problems have similar tractability under logarithmic returns

Tractability is defined as the expected fraction of a given problem that would be solved with a doubling of resources devoted to that problem. The ITN framework suggests something like logarithmic returns: each additional doubling will solve a similar fraction of the problem, in expectation.[2] Let the "baseline" level of tractability be a 10% chance to be solved with one doubling of resources.

For a problem to be 10x less tractable than the baseline, it would have to take 10 more doublings (1000x the resources) to solve an expected 10% of the problem. Most problems that can be solved in theory are at least as tractable as this; I think with 1000x the resources, humanity could have way better than 10% chance of starting a Mars colony[3], solving the Riemann hypothesis, and doing other really difficult things.

For a problem to be 10x more tractable than the baseline, it would be ~100% solved by doubling resources. It's rare that we find an opportunity more tractable than this that also has reasonably good scale and neglectedness.

Therefore, if we assume logarithmic returns, most problems under consideration are within 10x of the tractability baseline, and thus fall within a 100x tractability range.
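Here is a small numeric sketch of the argument above, using the 10% baseline and the 10x factors from the text:

```python
# Baseline from the text: one doubling of resources solves an expected 10% of the problem.
baseline_per_doubling = 0.10

# 10x less tractable: only 1% of the problem per doubling, so an expected 10% of
# the problem takes about 10 doublings, i.e. roughly 2**10 = 1024x the resources.
low_per_doubling = baseline_per_doubling / 10
doublings_for_10_percent = round(0.10 / low_per_doubling)
print(doublings_for_10_percent, 2 ** doublings_for_10_percent)  # 10 doublings, 1024x

# 10x more tractable: an expected ~100% of the problem from a single doubling,
# which is about as tractable as a problem can get.
print(baseline_per_doubling * 10)  # 1.0

# So under roughly logarithmic returns, most problems fall within 10x of the
# baseline in either direction, i.e. a ~100x range of tractability.
```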

When are problems highly intractable?

The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack.

-- Richard Hamming

Some problems are highly intractable. In this case, one of the following is usually true:

  • There is a strong departure from logarithmic returns, making the next doubling in particular unusually bad for impact.
    • Some problems have an inherently linear structure: there are not strong diminishing returns to more resources, and you can basically pour more resources into the problem until you've solved it. Suppose your problem is a huge pile of trash in your backyard; the best way to solve it is to pay people to haul away the trash, and the cost of this is roughly linear in the amount of trash removed. In this case, ITN is not the right framing, and one should use "IA", where:
      • marginal utility is I * A
      • I is importance, as usual
      • A = T * N is absolute tractability, the percent of the problem you solve with each additional dollar. The implicit assumption in the IA framework is that A doesn't depend much on the problem’s neglectedness.[4] (A minimal numeric sketch of this appears after this list.)
    • Some causes have diminishing returns, but the curve is different from logarithmic; the general case is "ITC", where absolute tractability is an arbitrary function of neglectedness/crowdedness.
  • The problem might not be solvable in theory. We don't research teleportation because the true laws of physics might forbid it.
  • There is no plan of attack. Another reason why we don't research teleportation is because even if the true laws of physics allow teleportation, our current understanding of them does not, and so we would have to study physical phenomena more to even know where to begin. Maybe the best thing for the marginal teleportation researcher to do would be to study a field of physics that might lead to a new theory allowing teleportation. But this is an indirect path in a high-dimensional space and is unlikely to work. (This is separate from any neglectedness concern about the large number of existing physicists).
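As referenced in the list above, here is a minimal numeric sketch of the trash-pile case (all numbers made up): with linear returns, utility per extra dollar is just I * A no matter how much has already been spent, and the T and N factors of ITN exactly cancel each other's dependence on spending.

```python
# Made-up numbers for the trash-pile example: returns to spending are linear.
importance = 1000.0     # utility from clearing the whole pile (I)
total_cost = 50_000.0   # assumed total dollars to haul away all of the trash

# Absolute tractability A: fraction of the problem solved per extra dollar.
A = 1.0 / total_cost

# IA framing: marginal utility per dollar is I * A, independent of spending so far.
print(importance * A)  # ~0.02, no matter how much has already been spent

# The same number recovered in ITN terms (taking N = 1/R, up to a constant):
# T grows exactly as N shrinks, so the split carries no extra information for a
# linear problem.
for R in (100.0, 10_000.0, 40_000.0):  # dollars already spent
    T = A * R     # fraction of problem solved per fractional increase in resources
    N = 1.0 / R   # fractional increase in resources per extra dollar
    print(R, importance * T * N)  # always ~0.02
```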
  1. ^

    E.g., Toby Ord gives a rough estimate of 0.1% x-risk from nuclear war in the next century, whereas many AI alignment researchers put the probability of x-risk from transformative AI around 30%. Malaria kills 1000x more people than ALS.

  2. ^

    I think the logarithmic assumption is reasonable for many types of problems. Why this is the case is largely out of scope of this post, but owencb writes about why logarithmic returns are often a good approximation here. Also, the distribution of proof times of mathematical conjectures suggests a roughly constant percentage of conjectures are proved annually; the number of mathematicians has been increasing roughly exponentially, so the returns to more math effort are roughly logarithmic.

  3. ^

    Elon Musk thinks a self-sustaining Mars colony is possible by launching 3 Starships per day, which is <1000x our current launch capacity.

  4. ^

    MacAskill calls absolute tractability "leverage".

Comments

Thanks, really like this point (which I've kind of applied many times but not had such a clean articulation of).

I think it's important to remember that the log returns model is essentially an ignorance prior[1]. If you understand things about the structure of the problem at hand you can certainly depart from it. e.g. when COVID emerged, nobody had spent any time trying to find and distribute a COVID vaccine. But it will be obvious that going from $1 million to $2 million spent won't give you an appreciable chance of solving the problem (since there's no way that could cover distributing things to billions of people), whereas going from $100 billion to $200 billion spent would do. Often early work can be valuable because it can spur future spending (so you might recover something like log returns in expectation), but in this case it was obvious that there would be much more than $2 million future spending whatever the early work did.

  1. ^

    Well ... also some problems which are solved via lots of little contributions actually have ex post log returns, rather than log returns just being an ex ante thing. But that's a significantly smaller class of problems.

Thanks, this is a good post. A half-baked thought about a related but (I think) distinct reason for this phenomenon: I wonder if we tend to (re)define the scale of problems such that they are mostly unsolved at present (but also not so vast that we obviously couldn't make a dent). For instance, it's not natural to think that the problem of 'eradicating global undernourishment' is more than 90% solved, because fewer than 10% of people in the world are undernourished. As long as problems are (re)defined in this way to be smaller in absolute terms, then tractability is going to (appear to) proportionally increase, as a countervailing factor to diminishing returns from extra investment of resources.

A nice feature of ITN is that (re)defining the scale of a problem such that it is always mostly unsolved at present doesn't affect the bottom line of utility per marginal dollar, because (utility / % of problem solved) increases as (% of problem solved / marginal dollar) decreases. To the extent this is a real phenomenon, it could emphasise the importance of not reading too much into direct comparisons between tractability across causes.

Hi Fin,

To the extent this is a real phenomenon, it could emphasise the importance of not reading too much into direct comparisons between tractability across causes.

I agree this is a concern, and motivates me to move towards comparing the cost-effectiveness of specific projects instead of the importance, tractability and neglectedness of (not well defined) causes.

Cluelessness can be another reason something is intractable. For example, effects on wild animals are really complicated, especially when population sizes change, and we know little about animals' welfare and have considerable uncertainty about their moral weights. As such, it's hard to know whether a given intervention is net positive or net negative in expectation. However, little has been spent on understanding the welfare of wild animals or their moral weights, so maybe this deep uncertainty is not unresolvable.

Luke Muehlhauser also said that almost all AI (governance) interventions he looks into are as likely to be net negative as they are to be net positive: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism?commentId=6yFEBSgDiAfGHHKTD

each additional doubling will solve a similar fraction of the problem, in expectation

Aren't you assuming the conclusion here?

I don't think so. I'd say I'm assuming something (tractability doesn't change much with neglectedness) which implies the conclusion (between problems, tractability doesn't vary by more than ~100x). Tell me if there's something obvious I'm missing.

When I formalize "tractability" it turns out to be directly related to neglectedness. If R is the amount of resources invested in a problem currently, u(r) is the difference in world utility from investing 0 vs. r resources into the problem, and u_total is u(r) once the problem is solved, then tractability turns out to be:

Tractability = u'(R) * R / u_total

So I'm not sure I really understand yet why tractability wouldn't change much with neglectedness. I have a preliminary understanding, though, which I'm writing up in another comment.

Ah, I see now that within a problem, tractability shouldn't change as the problem gets less neglected if you assume that u(r) is logarithmic, since then the derivative is like 1/R, making tractability like 1/u_total
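A quick numerical check of that claim, using a toy logarithmic utility curve (the constants are arbitrary):

```python
import math

u_total = 100.0  # importance: utility once the problem is fully solved
c = 5.0          # arbitrary constant in the toy utility curve u(r) = c * ln(r)

def u(r):
    return c * math.log(r)

def tractability(R, eps=1e-6):
    # T = u'(R) * R / u_total, with u'(R) estimated by a finite difference
    du = (u(R * (1 + eps)) - u(R)) / (R * eps)
    return du * R / u_total

for R in (1e3, 1e6, 1e9):
    print(R, tractability(R))  # ~0.05 every time: c / u_total, independent of R
```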

I was in the process of writing a comment trying to debunk this. My counterexample didn't work so now I'm convinced this is a pretty good post. This is a nice way of thinking about ITN quantitatively. 

The counterexample I was trying to make might still be interesting for some people to read as an illustration of this phenomenon. Here it is:

Scale "all humans" trying to solve "all problems" down to "a single high school student" trying to solve "math problems". Then tractability (measured as % of problem solved / % increase in resources) for this person to solve different math problems is as follows:

  • A very large arithmetic question like "find 123456789123456789^2 by hand" requires ~10 hours to solve
  • A median international math olympiad question probably requires ~100 hours of studying to solve 
  • A median research question requires an undergraduate degree (~2000 hours) and then specialized studying (~1000 hours) to solve
  • A really tough research question takes a decade of work (~20,000 hours) to solve
  • A way ahead of its time research question (maybe, think developing ML theory results before there were even computers) I could see taking 100,000+ hours of work 

Here tractability varies by 4 orders of magnitude (10-100,000 hours) if you include all kinds of math problems. If you exclude very easy or very hard things (as Thomas was describing) you end up with 2 orders of magnitude (~1000-100,000 hours). 

We've had several researchers working on technical AI alignment for multiple years, and there is no consensus on a solution, although some might think some systems are less risky than others, and we've made progress on those. Say 20 researchers working 20 hours a week, 50 weeks a year, for 5 years. That's 20 * 20 * 5 * 50 = 100,000 hours of work. I think the number of researchers is much larger now. This also excludes a lot of the background studying, which would be duplicated.

Maybe AI alignment is not "one problem", and it's not exactly rigorously posed yet (it's pre-paradigmatic), but those are also reasons to think it's especially hard. Technical AI alignment has required building a new field of research, not just using existing tools.

(posting this so ideas from our chat can be public)

Ex ante, the tractability range is narrower than 2 orders of magnitude unless you have really strong evidence. Say you're a high school student presented with a problem of unknown difficulty, and you've already spent 100 hours on it without success. What's the probability that you solve it in the next doubling?

  • Obviously less than 100%
  • Probably more than 1%, even if it looks really hard-- you might find some trick that solves it!

And you have to have a pretty strong indication that it's hard (e.g. using concepts you've tried and failed to understand) to even put your probability below 3%.

There can be evidence that it's really hard (<0.1%), maybe for problems like "compute tan(10^123) to 9 decimal places" or "solve this problem that John Conway failed to solve". This means you've updated away from your ignorance prior (which spans many orders of magnitude) and now know the true structure of the problem, or something.
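To put rough numbers on this, here is a toy calculation (the prior range is an arbitrary illustration, not something from the discussion): suppose the student's ignorance prior puts the hours needed to solve the problem log-uniform between 1 and 1,000,000 hours.

```python
import math

# Toy ignorance prior: log10(hours needed to solve) uniform on [0, 6].
hi = 6.0
spent = math.log10(100)           # 100 hours already spent without success
after_doubling = math.log10(200)  # where the next doubling would take you

# P(solved within the next doubling | not solved after 100 hours)
p = (after_doubling - spent) / (hi - spent)
print(p)  # ~0.075: a few percent, comfortably above 1% despite the failed 100 hours
```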

If I've spent 100 hours on a (math?) problem without success as a high school student and can't get hints or learn new material (or have already tried those and failed), then I don't think less than 1% to solving it in the next 100 hours is unreasonable. I'd probably already have exhausted all the tools I know of by then. Of course, this depends on the person.

The time and resources you (or others) spent on a problem without success (or substantial progress) are evidence for its intractability.

I was going to come back to this and write a comment saying why I either agree or disagree and why, but I keep flipping back and forth.

I now think there are some classes of problems for which I could easily get under 1%, and some for which I can't, and this partially depends on whether I can learn new material (if I can, I think I'd need to exhaust every promising-looking paper). The question is which is the better reference class for real problems.

You could argue that not learning new material is the better model, because we can't get external help in real life. But on the other hand, the large action space of real life feels more similar to me to a situation in which we can learn new material-- the intuition that the high school student will just "get stuck" seems less strong with an entire academic subfield working on alignment, say.

I think this argument makes a lot of sense when applied to domains with a certain level of existing resources, but not for fields which are so neglected that there are virtually no resources spent there right now. In other words, I think the logarithmic returns framework breaks down for really high neglectedness, for two reasons:

  1. High neglectedness is a signal of unusually high intractability - e.g. the "no plan of attack" case you describe, but also more subtle barriers. For example, we might have a brilliant intervention to reduce air pollution in China, but foreign funding regulations might be such a bottleneck that we couldn't actually spend any of that.

  2. High neglectedness means there is not a lot of infrastructure to absorb the money needed to solve the problem. For example, I wrote about extreme heat adaptation for the Cause Exploration Prizes, and ultimately assessed that it could not be an area for giving simply because I could not find any organizations that work on extreme heat adaptation, so there was nowhere to spend money. You could argue that over a sufficiently long period, money could be used to create that infrastructure, e.g. incubate organizations, but it's an open question to me how resource-elastic that is, and intuitively it doesn't feel logarithmic.

So in a perverse way, high neglectedness (which is generally desirable) is usually correlated with low tractability, and possibly tractability outside the logarithmic framework.

I'm curious whether people have thoughts on whether this analysis of problem-level tractability also applies to personal fit. I think many of the arguments here naively seem like they should apply to personal fit as well. Yet many people (myself included) make consequential career- and project-selection decisions based on strong intuitions of personal fit.

This article makes a strong argument that it'd be surprising if tractability (but not importance, or to a lesser degree neglectedness) differed by >2 OOMs. In a similar vein, I think it'd also be surprising if personal fit differed by 2 OOMs.

I'd be interested in theoretical arguments or empirical evidence here, in either direction (Note that showing someone has an absolute advantage of >100X over someone else in the same field is relatively little evidence to me, as the important question here is comparative advantage as conferred by personal fit).

I don't know if I understand why tractability doesn't vary much. It seems like it should be able to vary just as much as cost-effectiveness can vary. 

For example, imagine two problems with the same cost-effectiveness, the same importance, but one problem has 1000x fewer resources invested in it. Then the tractability of that problem should be 1000x lower [ETA: so that the cost-effectiveness can still be the same, even given the difference in neglectedness.]

Another example:  suppose an AI safety researcher solved AI alignment after 20 years of research. Then the two problems "solve the sub-problem which will have been solved by tomorrow" and "solve AI alignment" have the same local cost-effectiveness (since they are locally the same actions), the same amount of resources invested into each, but potentially massively different importances. This means the tractabilities must also be massively different.

These two examples lead me to believe that in as much as tractability doesn't vary much, it's because of a combination of two things:

  1. The world isn't dumb enough to massively underinvest in really cost-effective and important problems
  2. The things we tend to think of as problems are "similarly sized" or something like that

I'm still not fully convinced, though, and am confused for instance about what "similarly sized" might actually mean.

 

Problems vary on three axes: u'(R), R, u_total. You're expressing this in the basis u'(R), R, u_total. The ITN framework uses the basis I, T, N = u_total, u'(R) * R / u_total, 1/R. The basis is arbitrary: we could just as easily use some crazy basis like X, Y, Z = u_total^2, R^5, sqrt(u'(R)). But we want to use a basis that's useful in practice, which means variables with intuitive meanings, hence ITN.

But why is tractability roughly constant with neglectedness in practice? Equivalently, why are there logarithmic returns to many problems? I don't think it's related to your (1) or (2) because those are about comparing different problems, whereas the mystery is the relationship between u'(R) * R * 1/ u_total and 1/R for a given problem. One model that suggests log returns is if we have to surpass some unknown resource threshold r* (the "difficulty") to solve the problem, and r* ranges over many orders of magnitude with an approximately log-uniform distribution [1]. Owen C-B has empirical evidence and some theoretical justification for why this might happen in practice. When it does, my post then derives that tractability doesn't vary dramatically between (most) problems.

Note that sometimes we know r* or have a different prior, and then our problem stops being logarithmic, like in the second section of the post. This is exactly when tractability can vary dramatically between problems. In your AI alignment subproblem example, we know that alignment takes 20 years, which means a strong update away from the logarithmic prior.

[1]: log-uniform distributions over the reals don't exist, so I mean something like "doesn't vary by more than ~1.5x every doubling in the fat part of the distribution".
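A minimal simulation sketch of that threshold model (the 10-order-of-magnitude prior range, starting point, and sample count are arbitrary choices): with a roughly log-uniform prior over the difficulty r*, each doubling of resources solves about the same expected fraction of the problem ex ante, i.e. roughly logarithmic returns.

```python
import math
import random

random.seed(0)

# Toy model: a binary problem is solved once resources exceed an unknown
# threshold r*, with log10(r*) uniform over 10 orders of magnitude.
samples = [10 ** random.uniform(0, 10) for _ in range(200_000)]

# Expected fraction of the problem solved by each successive doubling,
# evaluated ex ante (before observing whether earlier doublings solved it).
R = 10.0
for _ in range(8):
    p = sum(R < rstar <= 2 * R for rstar in samples) / len(samples)
    print(f"{R:12.0f} -> {2 * R:12.0f}: {p:.3f}")  # ~0.03 for every doubling
    R *= 2
```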

The entire time I've been thinking about this, I've been thinking of utility curves as logarithmic, so you don't have to sell me on that. I think my original comment here is another way of understanding why tractability perhaps doesn't vary much between problems, not within a problem.

But why is tractability roughly constant with neglectedness in practice? Equivalently, why are there logarithmic returns to many problems?

I don't see why logarithmic utility iff tractability doesn't change with neglectedness.

[This comment is no longer endorsed by its author]

For example, imagine two problems with the same cost-effectiveness, the same importance, but one problem has 1000x fewer resources invested in it. Then the tractability of that problem should be 1000x lower.

In the ITN framework, this will be modeled under "neglectedness" rather than "tractability" 

There was an inference there -- you need tractability to balance with the neglectedness to add up to equal cost-effectiveness

Let the "baseline" level of tractability be a 10% chance to be solved with one doubling of resources.

Do you interpret this as "one doubling of resources increases the probability of solving the problem by 10 percentage points"? (Or, solves an additional 10% of the problem, for a problem that is continuous instead of binary.)

I argue that Bernard must be implicitly claiming that cause B is unusually tractable, that there is a strong departure from logarithmic returns, or that there is no feasible plan of attack for cause A.

Related: Rob Bensinger says MIRI's current take on AI risk is "we don't see a promising angle of attack on the core problem".

Suppose your problem is a huge pile of trash in your backyard; the best way to solve it is to pay people to haul away the trash, and the cost of this is roughly linear in the amount of trash removed.

Isn't this an example with diminishing returns? You can only fit so many trucks/people in your backyard, so you run into a bottleneck.

If problems don't differ dramatically in tractability, does that imply that we should be able to completely solve problems pretty easily?
