BenjaminTereick

I think it’s borderline whether reports of this type are forecasting as commonly understood, but would personally lean no in the specific cases you mention (except maybe the bio anchors report).
 
I really don’t think this intuition is driven by the amount of time or effort that went into them, but rather by the percentage of intellectual labor that went into something like “quantifying uncertainty” (rather than, e.g., establishing empirical facts, reviewing the literature, or analyzing the structure of commonly made arguments).

As for our grantmaking program: I expect we’ll have a more detailed description of what we want to cover later this year, where we might also address points about the boundaries to worldview investigations.

Hi Dan, 

Thanks for writing this! Some (weakly-held) points of skepticism: 

  1. I find it a bit nebulous what you do and don't count as a rationale. Similarly to Eli,* I think on some readings of your post, “forecasting” becomes very broad and just encompasses all of research. Obviously, research is important!
  2. Rationales are costly! Taking that into account, I think there is a role to play for “just the numbers” forecasting, e.g.:
    1. Sometimes you just want to defer to others, especially if an existing track record establishes that the numbers are reliable. For instance, when looking at weather forecasts, or (at least until last year) looking at 538’s numbers for an upcoming election, it would be great if you understood all the details of what goes into the numbers, but the numbers themselves are plenty useful, too. 
    2. Even without a track record, just-the-numbers forecasts give you a baseline of what people believe, which allows you to observe big shifts. I’ve heard many people express things like “I don’t defer to Metaculus on AGI arrival, but it was surely informative to see by how much the community prediction has moved over the last few years”.
    3. Just-the-numbers forecasts let you spot disagreements with other people, which helps you find out where talking about rationales/models is particularly important.
  3. I’m worried that, in the context of getting high-stakes decision-makers to use forecasts, some of the demand for rationales is due to a lack of trust in the forecasts. Replying to this demand with AI-generated rationales might shift the skeptical take from “they’re just making up numbers” to “it’s all based on LLM hallucinations”, which I’m not sure really addresses the underlying problem.

*OTOH, I think Eli is also hinting at a definition of forecasting that is too narrow. I do think that generating models/rationales is part of forecasting as it is commonly understood (including in EA circles), and I certainly don't agree that forecasting by definition means that little effort was put into it!
Maybe the right place to draw the line between forecasting rationales and “just general research” is to ask: “Is the model/rationale for the most part tightly linked to the numerical forecast?” If yes, it's forecasting; if not, it's something else.


As the program is about forecasting, what is your stance on the broader field of foresight & futures studies? Why is forecasting more promising than some other approaches to foresight?

We are open to considering projects in “forecasting-adjacent” areas, and projects that combine forecasting with ideas from related fields are certainly well within the scope of the program.

As for projects that would rely exclusively on other approaches: my worry is that non-probabilistic foresight techniques typically don’t have more to show than probabilistic forecasting does in terms of evidence for their effectiveness, while being more ad hoc from a theoretical perspective.

Just confirming that informing our own decisions was part of the motivation for past grants, and I expect it to play an important role for our forecasting grants in the future.

[The forecasting money] seems to have overwhelmingly gone to community forecasting sites like Manifold and Metaculus. I don't see anything like "paying 3 teams of 3 forecasters to compete against each other on some AI timelines questions".

That’s directionally true, but I think “overwhelmingly” isn’t right.

  1. We did not fund Manifold. 
  2. One of our largest forecasting grants went to FRI, which is not a platform.
  3. While it’s fair to say that Metaculus is mostly a platform, it also runs externally-funded tournaments, and has a pro forecaster service.
  4. There were a few grants to more narrowly defined projects. Most of these are currently not assigned to forecasting as a cause area, but you can find them here (by searching for “forecast” in our grants database); see especially those before August 2021. [Update: we have updated the labels, and these grants are now listed here.]
    I expect that we’ll make more of these types of grants now that forecasting is a designated area with more capacity.

I’m glad to see the debate on decision relevance in the comments! I think that if we end up considering forecasting a successful focus area in 5-10 years, thinking hard about the value-add to decision-making will likely have played a crucial role in this success.
 
As for my own view, I do agree that judgmental/subjective probability forecasting hasn’t been as much of a success story as one might have expected about 10 years ago. I also agree that many of the stories people tell about the impact of forecasting naturally raise questions like “so why isn’t this a huge industry now? Why is this project a non-profit?”. We are likely to ask prospective grantees questions of this kind far more often than grantmakers in other focus areas do.
However, I (unsurprisingly) also disagree with the stronger claim that the lack of a large judgmental forecasting industry is conclusive evidence that forecasting doesn’t provide value and is just an EA hobby horse. While I don’t have the capacity to engage in this debate deeply, a few points of rebuttal:

  • I do think there have been some successes. For instance, the XPT mentioned in this comment certainly affected the personal beliefs of some people in the EA community, and thereby had an influence on resource allocation and career decisions. 
  • Forecasting, as such, is a large industry. I’d assign considerable weight to the idea that making judgmental forecasting a success of the kind that model-driven forecasting approaches have been in areas like finance, marketing, or sports is a harder but solvable task. There might simply be a free-rider problem around investing the resources necessary to figure out the best way to make it work.
  • As a related indirect argument, forecasting has a pretty straightforward a priori case (more accurate information leads to better decision-making), and there are plenty of candidate explanations for why widespread adoption would have been difficult despite forecasting having the potential to be widely useful (e.g., I’m sympathetic to the points made by MaxRa here). Thus, even after updating on the observation that judgmental forecasting hasn’t conquered the world yet, I don’t think we should assign high confidence that it will forever stay a niche industry.
  • As others have pointed out, only a fairly small fraction of Open Phil’s spending has gone into forecasting so far (about 1%), and this is unlikely to change dramatically in the future. The forecasting community doesn’t need to become a multi-billion-dollar industry to justify that level of spending.