In the 80,000 Hours podcast episode with Ezra Klein (https://80000hours.org/podcast/episodes/ezra-klein-journalism-most-important-topics/#biggest-critiques-of-the-effective-altruism-and-rationalist-communities-012040), one of the critiques Ezra gave of the rationalist/EA community is that rationalists often state their epistemic status or confidence levels when making claims, and that this can backfire. He said: "I appreciate that Scott Alexander and others will sometimes put ‘epistemic status, 60%’ on the top of 5,000 words of super aggressive argumentation, but is the effect of that epistemic status to make people like, “Oh, I should be careful with this,” or is it like, “This person is super rational and self-critical, and actually now I believe him totally”? And I’m not picking on Scott here. A lot of people do this. [..] And so people are just pulling like 20%, 30%, 70% probabilities out of thin air. That makes things sound more convincing, but I always think it’s at the danger of making people… Of actually it having the reverse effect that it should. Sometimes the language of probability reads to folks like well, you can really trust this person. And so instead of being skeptical, you’re less skeptical. So those are just pitfalls that I notice and it’s worth watching out for, as I have to do myself."

I agree that merely mentioning your confidence level (e.g. "I feel X% confident about Y") may be misleading and not very informative. But it got me thinking: instead of communicating confidence levels, it might be more fruitful for people to communicate their epistemic status shifts (e.g. "after thinking and reading about Y recently, I changed my confidence from X% to Z%") or confidence interval changes (e.g. "with this new evidence, I narrowed my confidence interval from..."). This gives clearer information about how people update their beliefs, how strong or important they think the new evidence is (i.e. what Bayesian update factor they use), and what their prior beliefs or confidence levels were. It also helps create a culture where changing one's mind is considered a good thing, and where changing one's mind does not always mean making a 180° turn, but also includes smaller shifts in epistemic status.
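One way to make the "Bayesian update factor" concrete: a reported shift from X% to Z% pins down the Bayes factor the speaker implicitly assigned to the new evidence, namely the ratio of posterior odds to prior odds. A minimal sketch (the function name and example numbers are my own illustration, not anything from the post):

```python
def implied_bayes_factor(prior: float, posterior: float) -> float:
    """Bayes factor implied by moving from `prior` to `posterior`:
    the ratio of posterior odds to prior odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

# "After reading about Y recently, I went from 40% to 70% confident in Y"
print(implied_bayes_factor(0.40, 0.70))  # ~3.5: the evidence was judged worth roughly 3.5:1 in favour of Y
```

So a statement like "I moved from 40% to 70%" tells the reader both the prior and roughly how strong the speaker judged the evidence to be, which a bare "70% confident" does not.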

Comments

Another complication here is that a lot of arguments are arguments about the expected value of some variable: the argument that we should take some action is implicitly an argument that the expected utility of taking that action is greater than that of the action we would have taken otherwise.

And it's not clear what a % credence means when it is attached to an estimate of an expected value, since expected values aren't random variables. For example, if I think we ought to work on AI risk over Global Public Health because I think there is a 1% chance of an AI intervention saving trillions of lives, it's not clear what it would mean to put another % confidence on that already probabilistically derived expected utility: I've already incorporated the 99% chance of failure into my case for working on AI risk. Certainly it's good to acknowledge that chance of failure, but it doesn't say anything about my epistemic status in my argument.
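To spell that out with made-up numbers (placeholders for illustration, not the commenter's figures):

```python
# Hypothetical numbers, only to make the point above concrete.
p_success = 0.01           # 1% chance the AI intervention works
lives_if_success = 1e12    # "trillions of lives"
expected_lives = p_success * lives_if_success
print(expected_lives)      # 1e10 lives saved in expectation

# The 99% chance of failure is already folded into `expected_lives`,
# so a further "I'm 80% confident in this" can't be about the failure
# chance; at best it is about uncertainty in the 0.01 estimate itself.
```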

I think reporting % credences serves a purpose more similar to reporting effect sizes than to stating an epistemic status. They're something for you to average together to get a quick and dirty estimate of what the consensus is.

Anyway, re: what to do when the argument is about an expected value: I think the best practice is to point out the known unknowns that you think are the most likely ways your argument could be shown to be false. For example: "I think we should work on AI over Global Public Health, but my case depends on fast takeoff being true, I'm only 60% confident that it is, and I think we can get better information about which takeoff scenario is more likely to happen."
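For instance, the overall credence in the claim could be decomposed over that known unknown; the conditional probabilities below are my own placeholders, not figures from the comment:

```python
# Hypothetical decomposition of "we should prioritise AI over Global Public Health"
# over the known unknown named above (takeoff speed).
p_fast = 0.60                # "only 60% confident" that fast takeoff is true
p_claim_given_fast = 0.85    # assumed: strong case if takeoff is fast
p_claim_given_slow = 0.30    # assumed: much weaker case if takeoff is slow

p_claim = p_fast * p_claim_given_fast + (1 - p_fast) * p_claim_given_slow
print(p_claim)  # 0.63, and better takeoff evidence tells you exactly where this number would move
```

Stating the decomposition, rather than just the headline number, tells readers which future evidence would actually change your mind.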

In the case where the biggest known unknowns are what priors you should have before seeing a piece of evidence, this basically reduces to your strength-of-evidence/epistemic-shift proposal. But I think that when we're talking about our epistemic status, it's generally more useful to concentrate on how our beliefs might be changed in the future, and on how, qualitatively, we think other people might go about changing our minds, than on how our beliefs have changed in the past.


(It seems the correct "Bayesian" way to do the above, if you really wanted to report your beliefs using numbers, would be to take your priors about the information you'll receive at each time $t$ in the future, encode the structure of your uncertainty about what you'll know at each point in time as a filtration $(\mathcal{F}_t)$ of your event space $\Omega$, and then report your uncertainty about the trajectory of your future beliefs about $X$ as the martingale process $E[X \mid \mathcal{F}_t]$.

Needless to say, this is a pretty unwieldy and impractical way to report your epistemic status.)
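As a toy illustration of that martingale framing (my own construction, assuming an unknown coin bias with a uniform prior, rather than anything from the comment), the trajectory of posterior means below is the kind of process $E[X \mid \mathcal{F}_t]$ being described; reporting your uncertainty over all possible such trajectories in advance is what makes the proposal unwieldy:

```python
import random

# Track E[theta | F_t] for an unknown coin bias theta with a Beta(1,1)
# (uniform) prior, updating after each flip. The trajectory of posterior
# means is a martingale: the expected value of tomorrow's belief, given
# everything known today, equals today's belief.
random.seed(0)
true_theta = 0.7
alpha, beta = 1, 1           # Beta(1,1) prior over theta

for t in range(1, 11):
    heads = random.random() < true_theta
    alpha, beta = alpha + heads, beta + (not heads)
    print(f"t={t:2d}  E[theta | F_t] = {alpha / (alpha + beta):.3f}")
```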
