Politics, Psychology

Brexit as an example of failure to comprehend conditional probabilities

It’s been 2 years 5 months 12 days 12 hours 25 minutes since my first post about Brexit, and I still don’t really know what will happen.

Almost everything is conditional probability: “If there’s a hard (no-deal) Brexit, then the traffic jams from Dover, Harwich/Felixstowe, etc. will be about as long as is physically possible given the number of trucks in the UK.”

Conditional probabilities suck for human-human interactions.

Any pro-Brexit person who reads that will tend, I think, to remember that prediction without the “if” clause; and if there is a deal, they will be much more likely to crow about it as “yet another Remoaner failure”. (Also often missed: “much more likely” doesn’t mean “will”, but I keep seeing people read it that way).

This sort of prediction tends to become a self-defeating prophecy, which propagandists then exploit: the worst-case scenario is so bad that everyone strives to prevent it (in this case by not countenancing a no-deal scenario), and the averted disaster becomes just another reason to distrust the experts who fought against it. In turn, this increases the chances of something as bad as a no-deal scenario next time. I see the same thing with the Millennium Bug, and saw it as a kid with global warming (no, it wasn’t scientists who told you we were heading straight into an ice age, it was newspapers; any scientists actually talking about an actual ice age were talking about the end of an interglacial warm period, and even then they were outnumbered 6-to-1 by those concerned about things getting warmer).

I can only think of three post-Brexit-Great-Britain (yes I do mean GB, not UK, I know too little about NI) predictions that are not conditional on the final form that Brexit takes:

  1. The UK population will still be arguing about it
  2. These arguments will involve at least one large march (≥10,000 people)
  3. At least one building will be set on fire (or an attempt will be made: they’re designed to not be on fire)

I put at least 1-in-3 odds that events 2 and 3 will be performed by both angry Leavers and angry Remainers.


Self-teaching | Selbstlernen

🇬🇧: I’ve been teaching myself German. Duolingo, Clozemaster, etc.

🇩🇪: Ich habe mir selbst Deutsch beigebracht. Duolingo, Clozemaster usw.

🇬🇧: I know my level; I can hold only simple conversations. I make mistakes often.

🇩🇪: Ich kenne mein Niveau; ich kann nur einfache Gespräche führen. Ich mache oft Fehler.

🇬🇧: But my German is now good enough that I can see when Google Translate is telling me nonsense.

🇩🇪: Aber mein Deutsch ist jetzt gut genug, dass ich merke, wenn Google Translate Blödsinn schreibt.

🇬🇧: Not so good that Google Translate doesn’t help, just good enough to spot many mistakes.

🇩🇪: Nicht so gut, dass Google Translate nicht mehr hilft, aber gut genug, um viele Fehler zu erkennen.

AI, Futurology

Pocket brains

  • Total iPhone sales between Q4 2017 and Q4 2018: 217.52 million
  • Performance of Neural Engine, component of Apple A11 SoC used in iPhone 8, 8 Plus, and X: 600 billion operations per second
  • Estimated computational power required to simulate a human brain in real time: 36.8×10^15 operations per second
  • Total compute power of all iPhones sold between Q4 2017 and Q4 2018, assuming 50% were A11’s (I’m not looking for more detailed stats right now): 217.52×10^6 × 0.5 × 600×10^9 = 65,256×10^15 operations per second
  • Number of simultaneous, real-time simulations of complete human brains that can be supported by 2017-18 sales of iPhones: 65,256×10^15 / 36.8×10^15 ≈ 1,773


  • Performance of “Next-generation Neural Engine” in Apple A12 SoC used in iPhone XR, XS, and XS Max: 5 trillion operations per second
  • Assuming next year’s sales are unchanged (and given that all current models use this chip, I therefore shouldn’t discount by 50% the way I did previously), the number of simultaneous, real-time simulations of complete human brains that can be supported by 2018-19 sales of iPhones: 217.52×10^6 × 5×10^12 = 1.0876×10^21 ops/s; 1.0876×10^21 / 36.8×10^15 ≈ 29,554
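These two bullet-list results can be recomputed from the stated inputs in a few lines of Python (the 50% A11 share and the flat-sales assumption are the same guesses as above):

```python
IPHONES_SOLD = 217.52e6   # units, Q4 2017 - Q4 2018
A11_OPS = 600e9           # A11 Neural Engine, operations per second
A12_OPS = 5e12            # A12 "Next-generation Neural Engine", ops/s
BRAIN_OPS = 36.8e15       # estimated ops/s for one real-time human brain

# 2017-18: assume half the units sold carry an A11.
brains_2018 = IPHONES_SOLD * 0.5 * A11_OPS / BRAIN_OPS

# 2018-19: assume identical sales, all units carrying an A12.
brains_2019 = IPHONES_SOLD * A12_OPS / BRAIN_OPS

print(round(brains_2018), round(brains_2019))
```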


  • Speedup required before one iPhone’s Neural Engine is sufficient to simulate a human brain in real time: 36.8×10^15 / 5×10^12 = 7,360
  • When this will happen, assuming Moore’s Law continues (one doubling every 1.5 years): log2(7360)×1.5 = 19.27 years ≈ January 2038
  • Reason to not expect this: the A12 feature size is 7nm and a silicon atom is ~0.234nm across, so features may only shrink by a linear factor of about 30, an areal factor of about 900, before they are atomic. (Oh no, you’ll have to buy a whole eight iPhones to equal your whole brain).
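The extrapolation and the atomic-scale ceiling, as a sketch (assuming, as above, one Moore’s-law doubling every 1.5 years):

```python
import math

BRAIN_OPS = 36.8e15     # estimated ops/s for one real-time human brain
A12_OPS = 5e12          # A12 Neural Engine, ops/s
DOUBLING_YEARS = 1.5    # assumed Moore's-law doubling time

speedup_needed = BRAIN_OPS / A12_OPS                 # 7,360x
years = math.log2(speedup_needed) * DOUBLING_YEARS   # ~19.3 years

# Feature-size ceiling: 7 nm process vs ~0.234 nm silicon atoms.
linear_factor = 7 / 0.234            # ~30x linear shrink left
areal_factor = linear_factor ** 2    # ~900x more transistors per area
iphones_per_brain = speedup_needed / areal_factor  # ~8 phones at the limit

print(f"{speedup_needed:.0f}x in {years:.1f} years; "
      f"{iphones_per_brain:.1f} phones at the atomic limit")
```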


  • Purchase cost of existing hardware to simulate one human brain: <7,360×$749 → <$5,512,640
  • Power requirements of simulating one human brain in real time using existing hardware, assuming the vague estimates of ~5W TDP for an A12 SoC are correct: 7,360×~5W → ~36.8kW
  • Annual electricity bill from simulating one human brain in real time: 36.8kW × 1 year × $0.10/kWh ≈ $32,200
  • Reasons to be cautious about previous number: it ignores the cost of hardware failure, and I don’t know the MTBF of an A12 SoC so I can’t even calculate that
  • Fermi estimate of MTBF of Apple SoC: between myself, my coworkers, my friends, and my family, I have experience of at least 10 devices, and none have failed before being upgraded over a year later, so assume hardware replacement <10%/year → <$551,264/year
  • Assuming hardware replacement currently costs $551,264/year, and that Moore’s law continues, then expected date that the annual replacement cost of hardware required to simulate a human brain in real time becomes equal to median personal US annual income in 2016 ($31,099): log2($551,264/$31,099)×1.5 = 6.22… years = late December, 2024
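The cost arithmetic, with every input taken from the bullets above (the ~5W TDP and the <10%/year replacement rate are guesses, as noted):

```python
import math

CHIPS = 7360            # A12 SoCs per simulated brain
UNIT_PRICE = 749        # USD per iPhone
TDP_W = 5               # rough TDP guess per A12, watts
KWH_PRICE = 0.10        # USD per kWh
HOURS_PER_YEAR = 8766   # 365.25 days

hardware = CHIPS * UNIT_PRICE                        # ~$5.5M up front
power_kw = CHIPS * TDP_W / 1000                      # ~36.8 kW
electricity = power_kw * HOURS_PER_YEAR * KWH_PRICE  # ~$32k/year
replacement = 0.10 * hardware                        # <10%/year guess

# Years until the replacement cost halves its way down to the
# 2016 median personal US income ($31,099), at 1.5 years per halving.
years = math.log2(replacement / 31099) * 1.5
print(hardware, power_kw, round(electricity), replacement, round(years, 2))
```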


The 2-4-6 game is there to teach falsifiability. It’s an important skill, because without it all you get is confirmation bias.
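For readers who haven’t met it: in Wason’s 2-4-6 task you must guess a hidden rule that classifies number triples, starting from the example (2, 4, 6). The hidden rule is simply “strictly ascending”, but people tend to test only triples that fit their own narrower hypothesis (say, “even numbers increasing by 2”) and never try to break it. A minimal sketch (the names are mine, not from the post):

```python
def hidden_rule(triple):
    """The actual rule: any strictly ascending triple passes."""
    a, b, c = triple
    return a < b < c

# Confirmation-style tests: all fit "even, +2 each time", all pass,
# and teach you nothing that separates your hypothesis from the truth.
for guess in [(2, 4, 6), (8, 10, 12), (20, 22, 24)]:
    assert hidden_rule(guess)

# Falsification-style test: deliberately violate your own hypothesis.
# (1, 2, 3) breaks "+2 each time" yet still passes -- so the hypothesis
# "even numbers increasing by 2" must be wrong.
assert hidden_rule((1, 2, 3))
assert not hidden_rule((6, 4, 2))
```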

For the entirety of my life, never mind the Brexit negotiations, I have assumed “the EU is good” — at least, as far as ‘good’ can be attributed to any government. Nothing the EU have done during these negotiations has changed that, as all the things pointed to by Very Angry Leavers are things which I would also do in the EU’s place (that is: it decided what its objectives were, told everyone, and focused its negotiations around them). However, this doesn’t actually prove anything, as it has all the usual problems that come with any other ‘confirmation’.

In order to test the morality of the EU, I have to predict what an Evil Moustache-Twirling Union would do, and see how the EU’s behaviour differs from that. So:

  1. EMTU would seek to punish and harm the Other, meaning that it would not focus on maximising its own strategic interests but rather on causing negative outcomes to the Other. They would disregard any advice from their own economists and business groups if they say the EMTU’s response is not the best (or least-bad) option available, given those strategic interests.
  2. EMTU would not attempt to negotiate with the Other, or if it was forced to negotiate it would have red-lines which the Other cannot ever accept. It would not provide the Other a range of options and ask the Other to choose which it would prefer. It would highball any and all estimates of payments due from the Other, and not consider counterclaims from the Other. It would not be open and blunt about its strategic interests and demands, just in case the Other could figure out a way to meet them. It would move the goalposts and threaten to go back on any deals it had already agreed to.
  3. EMTU would insist that leaving it means leaving every association connected with it, anything where the EMTU had clout. This does not mean ‘trade deals’ (the explanations I’ve read tend to agree such matters are entirely down to the third parties), but rather things like Euratom, EEA membership, fishing rights, etc. I don’t know whether it would include aircraft safety certificates or mutual recognition of pilot licences, as I don’t know how complex those are to agree on, nor whether they generally require things like “both parties agree to follow judgements of the other party’s courts”. I don’t even know how to characterise that last point: one of the problems with the real-life version of this is that while the EU/Remain position is that EU courts are neutral international courts in which sovereign countries resolve their disputes, the UK/Leave position seems to be that the EU is the sovereign entity and those very same courts are domestic courts.
  4. EMTU would continuously denigrate the Other, comparing it to dictatorships that half its members remember fighting to overthrow; they would then follow this up with bombastic militaristic references.

So far, the EMTU as I’ve described appears to be closer to the UK than the EU.

Minds, Philosophy, Psychology

One person’s nit is another’s central pillar

If one person believes something is absolutely incontrovertibly true, then my first (and demonstrably unhelpful) reaction is that even the slightest demonstration of error should demolish the argument.

I know this doesn’t work.

People don’t make Boolean-logical arguments, they go with gut feelings that act much like Bayesian-logical inferences. If someone says something is incontrovertible, the incontrovertibility isn’t their central pillar — when I treated it as one, I totally failed to change their minds.

Steel man your arguments. Go for your opponent’s strongest point, but make sure it’s what your opponent is treating as their strongest point, for if you make the mistake I have made, you will fail.

If your Bayesian prior is 99.9%, you might reasonably (in common use of the words) say the evidence is incontrovertible; someone who hears “incontrovertible” and points out a minor edge case isn’t going to shift your posterior odds by much, are they?

They do? Are we thinking of the same things here? I don’t mean things where absolute truth is possible (i.e. maths, although I’ve had someone argue with me about that in a remarkably foolish way too), I mean about observations about reality which are necessarily flawed. Flawed, and sometimes circular.

Concrete example, although I apologise to any religious people in advance if I accidentally nut-pick. Imagine a Bible-literalist Christian called Chris (who thinks only 144,000 will survive the apocalypse, and no I’m not saying Chris is a Jehovah’s Witness, they’re just an example of 144k beliefs) arguing with Atheist Ann, specifically about “can God make a rock so heavy that God cannot move it?”:

P(A) = 0.999 (Bayesian prior: how certain Chris’s belief in God is)
P(B) = 1.0 (Observation: the argument has been made and Ann has not been struck down)
P(B|A) = 0.99979 (Probability that God has not struck down Ann for blasphemy, given that God exists — In the Bible, God has sometimes struck down non-believers, so let’s say about 21 million deaths of the 100 billion humans that have ever lived to cover the flood, noting that most were not in the 144k)

P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/1.0 = 0.99879021

Almost unchanged.

It gets worse; the phrase “I can’t believe what I’m hearing!” means P(B) is less than 1.0. If P(B) is less than 1.0 but all the rest is the same:

P(B) = 0.9 → P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/0.9 = 1.1097669

Oh no, it went up! Also, probability error, probability can never exceed 1.0! P>1.0 would be a problem if I was discussing real probabilities — if this was a maths test, this would fail (P(B|A) should be reduced correspondingly) — but people demonstrably don’t always update all their internal model at the same time: if we did, cognitive dissonance would be impossible. Depending on the level of the thinking (I suspect direct processing in synapses won’t do this, but that deliberative conscious thought can) we can sometimes fall into traps, so this totally explains another observation: some people can take the mere existence of people who disagree with them as a reason to believe even more strongly.
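The two updates above, in code (same numbers as the worked example; note how little the posterior moves in the first case, and how the left-over P(B) < 1 pushes the second “posterior” past 1):

```python
def bayes(p_a, p_b_given_a, p_b):
    """Naive application of Bayes' theorem: P(A|B) = P(B|A)P(A)/P(B)."""
    return p_b_given_a * p_a / p_b

prior = 0.999         # Chris's confidence that God exists
likelihood = 0.99979  # P(Ann not struck down | God exists)

print(bayes(prior, likelihood, 1.0))  # ~0.9988: almost unchanged
print(bayes(prior, likelihood, 0.9))  # ~1.1098: not a valid probability!
```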

Minds, Politics

Bayesian Brexit

I’ve decided to start making written notes every time I catch myself in a cognitive trap or bias. I tried just noticing them, but then I noticed I kept forgetting them, and that’s no use.

If you tell me Brexit will be good for the economy, then I automatically think you know too little about economics to be worth listening to. If you tell a Leaver that Brexit will be bad for the economy, then they automatically think you know too little about economics to be worth listening to.

Both of these are fixed points for Bayesian inference, a trap from which it is difficult to escape. The prior is 100% that «insert economic forecast here» is correct, and it must therefore follow that anything contradicting the forecast is wrong.
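The fixed point is visible in Bayes’ theorem itself: a prior of exactly 1 (or 0) is unmoved by any evidence whatsoever. A minimal sketch, with made-up likelihoods:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' theorem over a two-hypothesis partition."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A prior of exactly 1.0 is a fixed point: no evidence, however strong,
# moves it. Here the evidence is 99x more likely if H is false.
print(posterior(1.0, 0.01, 0.99))    # stays at 1.0 regardless
print(posterior(0.999, 0.01, 0.99))  # a merely-near-1 prior does move
```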

That said, it is possible for evidence to change my mind (and hopefully the average Leaver’s). Unfortunately, like seeing the pearly gates, the only evidence I can imagine would be too late to be actionable — for both Leavers and Remainers — because that evidence is in the form “wait for Brexit and see what happens”.

Is there any way to solve this? Is it a problem of inferential distance, masquerading as a Bayesian prior?

Feels like this also fits my previous post about the dynamic range of Bayesian thought.