I’ve decided to start making written notes every time I catch myself in a cognitive trap or bias. I tried just noticing them, but then I noticed I kept forgetting them, and that’s no use.
> If you tell me Brexit will be good for the economy, then I automatically think you know too little about economics to be worth listening to.

> If you tell a Leaver that Brexit will be bad for the economy, then they automatically think you know too little about economics to be worth listening to.
Both of these are fixed points for Bayesian inference, a trap from which it is difficult to escape. The prior is 100% that «insert economic forecast here» is correct, and a prior of exactly 1 can never be moved by evidence: it must therefore follow that anything contradicting the forecast is wrong.
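A minimal sketch of why this is a fixed point, using Bayes' rule with made-up likelihood numbers:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: posterior probability of H after observing evidence E."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# A moderate prior moves when the evidence favours not-H...
print(update(0.7, 0.1, 0.9))   # ≈ 0.206, well below 0.7

# ...but a prior of exactly 1 is a fixed point: the (1 - prior) term
# zeroes out the alternative, so no likelihoods can dislodge it.
print(update(1.0, 0.1, 0.9))   # 1.0, regardless of the evidence
```

The same is true at 0: once you assign a hypothesis probability exactly 0 or 1, the update rule returns it unchanged forever.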
That said, it is possible for evidence to change my mind (and hopefully the average Leaver’s). Unfortunately, like seeing the pearly gates, the only evidence I can imagine would be too late to be actionable — for both Leavers and Remainers — because that evidence is in the form “wait for Brexit and see what happens”.
Is there any way to solve this? Is it a problem of inferential distance, masquerading as a Bayesian prior?
Feels like this also fits my previous post about the dynamic range of Bayesian thought.