Futurology, AI

Pocket brains

  • Total iPhone sales between Q4 2017 and Q4 2018: 217.52 million
  • Performance of Neural Engine, component of Apple A11 SoC used in iPhone 8, 8 Plus, and X: 600 billion operations per second
  • Estimated computational power required to simulate a human brain in real time: 36.8×10¹⁵ operations per second
  • Total compute power of all iPhones sold between Q4 2017 and Q4 2018, assuming 50% were A11’s (I’m not looking for more detailed stats right now): 217.52×10⁶ × 0.5 × 600×10⁹ = 65,256×10¹⁵ operations per second
  • Number of simultaneous, real-time simulations of complete human brains that can be supported by 2017-18 sales of iPhones: 65,256×10¹⁵/36.8×10¹⁵ ≈ 1,773 (arithmetic sketched below)
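
A quick sanity check of that arithmetic, as a minimal Python sketch (the constants are just the figures from the bullets above, not new data):

    # How many real-time brain simulations does one year of iPhone sales buy?
    IPHONES_SOLD = 217.52e6      # units sold, Q4 2017 to Q4 2018
    A11_FRACTION = 0.5           # rough guess: half of those were A11 devices
    A11_OPS_PER_SEC = 600e9      # A11 Neural Engine throughput
    BRAIN_OPS_PER_SEC = 36.8e15  # estimated cost of simulating one brain

    total_ops = IPHONES_SOLD * A11_FRACTION * A11_OPS_PER_SEC
    print(f"Total compute: {total_ops:.4e} ops/s")          # ~6.5256e+19
    print(f"Brains: {total_ops / BRAIN_OPS_PER_SEC:,.0f}")  # ~1,773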

 

  • Performance of “Next-generation Neural Engine” in Apple A12 SoC used in iPhone XR, XS, and XS Max: 5 trillion operations per second
  • Assuming next year’s sales are unchanged (and given that all current models use this chip, I shouldn’t discount by 50% the way I did previously), number of simultaneous, real-time simulations of complete human brains that can be supported by 2018-19 sales of iPhones: 217.52×10⁶ × 5×10¹² = 1.0876×10²¹ operations per second, so 1.0876×10²¹/36.8×10¹⁵ ≈ 29,554 (see the sketch below)
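
The same sketch for the A12 year, using 5×10¹² ops/s and no 50% discount:

    A12_OPS_PER_SEC = 5e12                   # A12 "next-generation Neural Engine"
    total_ops = 217.52e6 * A12_OPS_PER_SEC   # sales assumed unchanged year-on-year
    print(f"Total compute: {total_ops:.4e} ops/s")  # ~1.0876e+21
    print(f"Brains: {total_ops / 36.8e15:,.0f}")    # ~29,554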

 

  • Speedup required before one iPhone’s Neural Engine is sufficient to simulate a human brain in real time: 36.8×10¹⁵/5×10¹² = 7,360
  • When this will happen, assuming Moore’s Law continues (one doubling every 18 months): log2(7360)×1.5 = 19.268… years → January 2038 (sketched below)
  • Reason to not expect this: the A12 feature size is 7nm and the diameter of a silicon atom is ~0.234nm, so features may only shrink by a linear factor of about 30, an areal factor of about 900, before they are atomic. (Oh no, you’ll have to buy a whole eight iPhones to equal your whole brain.)
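
Both the Moore’s-Law date and the atomic limit are one-liners to check (a sketch, assuming the 18-month doubling time above):

    from math import log2

    speedup = 36.8e15 / 5e12     # 7,360x
    years = log2(speedup) * 1.5  # doublings x 18 months each
    print(f"{speedup:,.0f}x -> {years:.2f} years")  # 7,360x -> 19.27 years

    # Atomic limit: how much linear shrink is left below 7 nm?
    linear = 7.0 / 0.234         # feature size / silicon atom diameter
    print(f"linear {linear:.0f}x, areal {linear ** 2:.0f}x")  # ~30x, ~895x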

 

  • Purchase cost of existing hardware to simulate one human brain: <7,360×$749 → <$5,512,640
  • Power requirements of simulating one human brain in real time using existing hardware, assuming the vague estimates of ~5W TDP for an A12 SoC are correct: 7,360×~5W → ~36.8kW
  • Annual electricity bill from simulating one human brain in real time: 36.8kW × 8,760 hours × $0.1/kWh ≈ $32,200
  • Reasons to be cautious about previous number: it ignores the cost of hardware failure, and I don’t know the MTBF of an A12 SoC so I can’t even calculate that
  • Fermi estimate of MTBF of Apple SoC: between myself, my coworkers, my friends, and my family, I have experience of at least 10 devices, and none have failed before being upgraded over a year later, so assume hardware replacement <10%/year → <$551,264/year
  • Assuming hardware replacement currently costs $551,264/year, and that Moore’s law continues, the expected date at which the annual replacement cost of the hardware required to simulate a human brain in real time falls to the median personal US annual income in 2016 ($31,099): log2($551,264/$31,099)×1.5 = 6.22… years → late December 2024 (sketched below)
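
And the running costs, using the same vague estimates flagged above ($749 per unit, ~5W per SoC, $0.1/kWh, <10%/year replacement):

    from math import log2

    UNITS = 7_360                 # A12 SoCs per simulated brain
    hardware = UNITS * 749        # $5,512,640 purchase cost
    power_kw = UNITS * 5 / 1_000  # 36.8 kW draw
    electricity = power_kw * 8_760 * 0.10  # ~$32,200/year
    replacement = 0.10 * hardware          # <$551,264/year (Fermi guess)

    # Years until replacement cost halves its way down to median income:
    years = log2(replacement / 31_099) * 1.5  # ~6.22 years
    print(hardware, power_kw, round(electricity), replacement, round(years, 2))
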
Politics

Falsifiability

The 2-4-6 game is there to teach falsifiability. It’s an important skill, because without it all you get is confirmation bias.

For the entirety of my life, never mind the Brexit negotiations, I have assumed “the EU is good” — at least, as far as ‘good’ can be attributed to any government. Nothing the EU have done during these negotiations has changed that, as all the things pointed to by Very Angry Leavers are things which I would also do in the EU’s place (that is: it decided what its objectives were, told everyone, and focused its negotiations around them). However, this doesn’t actually prove anything, as it has all the usual problems that come with any other ‘confirmation’.

In order to test the morality of the EU, I have to predict what an Evil Moustache-Twirling Union would do, and see how the EU’s behaviour differs from that. So:

  1. EMTU would seek to punish and harm the Other: it would focus not on maximising its own strategic interests but on causing negative outcomes for the Other. It would disregard any advice from its own economists and business groups saying that its response was not the best (or least-bad) option available, given those strategic interests.
  2. EMTU would not attempt to negotiate with the Other, or if it was forced to negotiate it would have red-lines which the Other cannot ever accept. It would not provide the Other a range of options and ask the Other to choose which it would prefer. It would highball any and all estimates of payments due from the Other, and not consider counterclaims from the Other. It would not be open and blunt about its strategic interests and demands, just in case the Other could figure out a way to meet them. It would move the goalposts and threaten to go back on any deals it had already agreed to.
  3. EMTU would insist that leaving the Other means leaving every association in which EMTU has clout. That does not mean ‘trade deals’, since the explanations I’ve read tend to agree such matters are entirely down to the third parties, but rather things like Euratom, EEA membership, fishing rights, etc. I don’t know whether it would include aircraft safety certificates or mutual recognition of pilot licences, as I don’t know how complex those are to agree on, nor whether they generally require things like “both parties agree to follow judgements of the other party’s courts”. I don’t even know how to characterise that last point, because one of the problems with the real-life version of this is that the EU/Remain position treats EU courts as neutral international courts in which sovereign countries resolve their disputes, while the UK/Leave position seems to treat the EU as the sovereign entity and those very same courts as domestic courts.
  4. EMTU would continuously denigrate the Other, comparing it to dictatorships that half its members remember fighting to overthrow; it would then follow this up with bombastic militaristic references.

So far, the EMTU as I’ve described it appears to be closer to the UK than to the EU.

Minds, Philosophy, Psychology

One person’s nit is another’s central pillar

If one person believes something is absolutely incontrovertibly true, then my first (and demonstrably unhelpful) reaction is that even the slightest demonstration of error should demolish the argument.

I know this doesn’t work.

People don’t make Boolean-logical arguments, they go with gut feelings that act much like Bayesian-logical inferences. If someone says something is incontrovertible, the incontrovertibility isn’t their central pillar — when I treated it as one, I totally failed to change their minds.

Steel man your arguments. Go for your opponent’s strongest point, but make sure it’s what your opponent is treating as their strongest point, for if you make the mistake I have made, you will fail.

If your Bayesian prior is 99.9%, you might reasonably (in common use of the words) say the evidence is incontrovertible; someone who hears “incontrovertible” and points out a minor edge case isn’t going to shift your posterior odds by much, are they?

They do? Are we thinking of the same things here? I don’t mean things where absolute truth is possible (i.e. maths, although I’ve had someone argue with me about that in a remarkably foolish way too), I mean about observations about reality which are necessarily flawed. Flawed, and sometimes circular.

Concrete example, although I apologise to any religious people in advance if I accidentally nut-pick. Imagine a Bible-literalist Christian called Chris (who thinks only 144,000 will survive the apocalypse, and no I’m not saying Chris is a Jehovah’s Witness, they’re just an example of 144k beliefs) arguing with Atheist Ann, specifically about “can God make a rock so heavy that God cannot move it?”:

P(A) = 0.999 (Bayesian prior: how certain Chris’s belief in God is)
P(B) = 1.0 (Observation: the argument has been made and Ann has not been struck down)
P(B|A) = 0.99979 (Probability that God has not struck down Ann for blasphemy, given that God exists. In the Bible, God has sometimes struck down non-believers; call it about 21 million deaths, mostly to cover the flood, out of the 100 billion humans that have ever lived, noting that most of those were not in the 144k: 1 − 21×10⁶/100×10⁹ = 0.99979)

P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/1.0 = 0.99879021

Almost unchanged.

It gets worse; the phrase “I can’t believe what I’m hearing!” means P(B) is less than 1.0. If P(B) is less than 1.0 but all the rest is the same:

P(B) = 0.9 → P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/0.9 = 1.1097669

Oh no, it went up! Also, a probability error: probability can never exceed 1.0! P > 1.0 would be a problem if I were discussing real probabilities; if this were a maths test, it would fail (P(B|A) should be reduced correspondingly). But people demonstrably don’t update all of their internal model at the same time: if we did, cognitive dissonance would be impossible. Depending on the level of the thinking (I suspect direct processing in synapses won’t do this, but that deliberative conscious thought can), we can sometimes fall into such traps, and this totally explains another observation: some people can take the mere existence of people who disagree with them as a reason to believe even more strongly.
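
Here is the same pair of updates as a minimal Python sketch; the P(B) = 0.9 case deliberately reproduces the incoherent “probability” above 1:

    def posterior(p_a, p_b_given_a, p_b):
        """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
        return p_b_given_a * p_a / p_b

    p_a = 0.999            # Chris's prior that God exists
    p_b_given_a = 0.99979  # P(Ann not struck down | God exists)

    print(posterior(p_a, p_b_given_a, p_b=1.0))  # ~0.99879021, almost unchanged
    print(posterior(p_a, p_b_given_a, p_b=0.9))  # ~1.1097669, "probability" > 1!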

Minds, Politics

Bayesian Brexit

I’ve decided to start making written notes every time I catch myself in a cognitive trap or bias. I tried just noticing them, but then I noticed I kept forgetting them, and that’s no use.

If you tell me Brexit will be good for the economy, then I automatically think you know too little about economics to be worth listening to. If you tell a Leaver that Brexit will be bad for the economy, then they automatically think you know too little about economics to be worth listening to.

Both of these are fixed points of Bayesian inference, a trap from which it is difficult to escape. The prior is 100% that «insert economic forecast here» is correct, and it must therefore follow that anything contradicting the forecast is wrong.
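
A prior of exactly 100% really is a fixed point: run any evidence you like through Bayes’ rule and it comes straight back. A minimal sketch, with made-up likelihoods:

    def update(prior, p_e_if_true, p_e_if_false):
        """Posterior after observing evidence E, via Bayes' rule."""
        p_e = p_e_if_true * prior + p_e_if_false * (1 - prior)
        return p_e_if_true * prior / p_e

    # Evidence 99x more likely if the forecast is wrong:
    for prior in (0.95, 0.999, 1.0):
        print(prior, update(prior, p_e_if_true=0.01, p_e_if_false=0.99))
    # 0.95  -> ~0.161  (moved a long way)
    # 0.999 -> ~0.910  (barely moved)
    # 1.0   -> 1.0     (cannot move, ever)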

That said, it is possible for evidence to change my mind (and hopefully the average Leaver’s). Unfortunately, like seeing the pearly gates, the only evidence I can imagine would be too late to be actionable — for both Leavers and Remainers — because that evidence is in the form “wait for Brexit and see what happens”.

Is there any way to solve this? Is it a problem of inferential distance, masquerading as a Bayesian prior?

Feels like this also fits my previous post about the dynamic range of Bayesian thought.

Health, Minds, Psychology

Attention ec — ooh, a squirrel

[Image: list of my current YouTube subscriptions. It’s very long.]

I think the zeitgeist seems to be moving away from filling all our time with things and being hyper-connected, and towards rarer more meaningful connections.

It’s… disturbing and interesting at the same time to realise that the attention-grabbing nature of all the things I enjoy has been designed to fit me, and all of us, perfectly, by the same survival-of-the-fittest logic that drives natural evolution.

That which best grabs the attention, thrives. That which isn’t so powerful, doesn’t.

And when we develop strategies to defend ourselves against certain attention-grabbers, the attention-grabbers which use different approaches that we have not yet defended against take the place of those we have protected ourselves from.

A memetic arms race, between mental hygiene and thought germs.

I’ve done stuff in the last three months, but that stuff hasn’t included “finish editing the next draft of my novel”. I could have, if only I’d made time for it instead of drinking from the (effectively) bottomless well of high-quality YouTube content (see the side image for my active subscriptions; I also have to make a conscious effort not to click on the interesting clips from TV shows that probably shouldn’t even be on YouTube in the first place). Even though I watch most content at 1.5× or 2× speed, I can barely find time for all the new YouTube content I care about, my online language courses, and the other things, like finding a job.

Editing my novel? It’s right there, on my task list… but I barely touch it, even though it’s fulfilling to work on it, and fun to re-read. I don’t know if this is ego depletion or akrasia or addiction, but whatever it is, it’s an undesirable state.

I’m vulnerable to comments sections, too. Of course, I can do something about those: when I notice myself falling into a trap, I can block the relevant domain name in my hosts file. I have a lot of entries in that file these days, and even then I slip up a bit, because I can’t edit my iPhone’s hosts file.
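
For anyone who hasn’t seen the trick: the hosts file is consulted before DNS, so pointing a distracting domain at an unroutable address blocks it. The domains here are illustrative, not the ones actually in my file:

    # /etc/hosts (macOS/Linux) or C:\Windows\System32\drivers\etc\hosts
    0.0.0.0  news.example.com
    0.0.0.0  www.news.example.com
    0.0.0.0  comments.example.org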

Now that I know there’s a problem, I’m working on it… just like everyone else. The irony is, by disconnecting from the hyper-connected always-on parts of the internet, we’re not around to help each other when we slip up.

CGPGrey: Thinking About Attention — Walk with Me — https://www.youtube.com/watch?v=wf2VxeIm1no

CGPGrey: This Video Will Make You Angry — https://www.youtube.com/watch?v=rE3j_RHkqJc

Elsewhere on this blog: Hyperinflation in the attention economy: what succeeds adverts?

Psychology, Software, Technology

Social media compulsion

He flashed up a slide of a shelf filled with sugary baked goods. “Just as we shouldn’t blame the baker for making such delicious treats, we can’t blame tech makers for making their products so good we want to use them,” he said. “Of course that’s what tech companies will do. And frankly: do we want it any other way?”

(The Guardian, “‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia”)

I can, in fact, blame bakers. It’s easy: I do it in the same way I blame cigarette manufacturers. In all three cases (sugar/fat/flavour combinations, nicotine, social rewards) they exploit chemical pathways in our brains to get us to do something not in our best interests. They are supernormal stimuli — and given how recent the research is, I can forgive the early tobacconists and confectioners, but tech doesn’t get the luxury of ignorance-as-an-excuse.

I want my technology to be a tool which helps me get stuff done.

A drill is something I pick up, use to make a hole, then put down and forget about until I want to make another hole.

I don’t want a drill which is cursed so that if I ever put it down, I start to feel bad about not making more holes in things, and end up staying up late at night just to find yet one more thing I can drill into.

If I saw in a shop a drill which I knew would do that, I wouldn’t get it even if it was free, never broke, the (included) battery lasted a lifetime, etc. — the cost to the mind wouldn’t be worth it.

The same is true for the addictive elements of social media: I need to be connected to my friends, but I’d rather spend money than risk addiction.
