Futurology, Minds, Philosophy, Politics, SciFi, Technology, Transhumanism

Sufficient technology

Let’s hypothesise sufficient brain scans. As far as I know, we don’t have better than either very low resolution full-brain imaging (millions of synapses per voxel), or very limited high resolution imaging (thousands of synapses total), at least not for living brains. Let’s just pretend for the sake of argument that we have synapse-resolution full-brain scans of living subjects.

What are the implications?

  • Is a backup of your mind protected by the right to avoid self-incrimination? What about the minds of your pets?
  • Does a backup need to be punished (e.g. prison) if the person it is made from is punished? What if the offence occurred after the backup was made?
  • If the mind state is running rather than offline cold-storage, how many votes do all the copies get? What if they’re allowed to diverge? Which of them is allowed to access the bank accounts or other assets of the original? Is the original entitled to money earned by the copies?
  • If you memorise something and then get backed up, is that copyright infringement?
  • If a mind can run on silicon for less than the cost of food to keep a human healthy, can anyone other than the foremost mind in their respective field ever be employed?
  • If someone is backed up and then the original is killed by someone who knows the backup exists, is that murder, or is it the equivalent of a serious assault that causes a brief spell of amnesia?
Philosophy

Morality, thy discount is hyperbolic

One well known failure mode of Utilitarian ethics is a thing called a “utility monster” — for any value of “benefit” and “suffering”, it’s possible to postulate an entity (Bob) and a course of action (The Plan) where Bob receives so much benefit that everyone else can suffer arbitrarily great pain and yet you “should” still do The Plan.

That this can happen is often used as a reason to not be a Utilitarian. Never mind that there are no known real examples — when something is paraded as a logical universal ethical truth, it’s not allowed to even have theoretical problems, for much the same reason that God doesn’t need to actually “microwave a burrito so hot that even he can’t eat it” for the mere possibility to be a proof that no god is capable of being all-powerful.

I have previously suggested a way to limit this — normalisation — but that obviously isn’t good enough once a group is involved. What I was looking for then was a way to combine multiple entities in a sensible fashion, and I’ve now found that one already exists: hyperbolic discounting.

Hyperbolic discounting is how we all naturally think about the future: the further into the future a reward is, the less important we regard it. For example, if I ask “would you rather have $15 immediately; or $30 after three months; or $60 after one year; or $100 after three years?” most people find those options equally desirable, even though nobody’s expecting the US dollar to lose 85% of its value in just three years.
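A quick way to see the hyperbola in those answers: treat each delayed option as equivalent to $15 now and back out the annual rate an exponential discounter would need. If we actually discounted exponentially, the implied rate would be the same for every delay; instead it shrinks as the delay grows, which is the hyperbolic signature. A minimal sketch (the dollar figures are the ones above, the arithmetic is mine):

```python
import math

# The four options people report as roughly equally desirable:
# $15 now, or these (amount, delay-in-years) alternatives.
now = 15
options = [(30, 0.25), (60, 1.0), (100, 3.0)]

for amount, delay in options:
    # Exponential discounting says "indifferent to $15 now" means
    # 15 = amount * exp(-r * delay), so r = ln(amount / 15) / delay,
    # and r should come out the same for every row.
    r = math.log(amount / now) / delay
    print(f"${amount:>3} after {delay:>4} yr -> implied annual rate {r:.2f}")

# Output: ~2.77, ~1.39, ~0.63. The implied rate falls as the delay grows,
# i.e. no single exponential fits; a hyperbola-like curve (steep discounting
# of the near future, shallow discounting of the far future) fits better.
```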

Most people do a similar thing with large numbers, although logarithmically rather than hyperbolically. There’s a cliché where presenters describe how big some number X is by converting X seconds into years, acting as if it’s surprising that “while a billion seconds is 30 years, a trillion seconds is 30 thousand years”. (Personally, I’m so used to this that it feels weird they even need to say it.)

[Image: Examples of big numbers, in the form of money. Millionaire, Billionaire, Elon Musk, and Bill Gates. Wealth estimates from Wikipedia, early 2020.]

So, what’s the point of this? Well, one of the ways the Utility Monster makes a certain category of nerd look like an arse is the long-term future of humanity. The sort of person who (mea culpa) worries about X-risks (eXtinction risks), and says “don’t worry about climate change, worry about non-aligned AI!” to an audience filled with people who very much worry about climate change induced extinction and think that AI can’t possibly be a real issue when Alexa can’t even figure out which room’s lightbulb it’s supposed to switch on or off.

To put it in concrete terms: If your idea of “long term planning” means you are interested in star-lifting as a way to extend the lifetime of the sun from a few billion years to a few tens of trillions, and you intend to help expand to fill the local supercluster with like-minded people, and your idea of “person” includes a mind running on a computer that’s as power-efficient as a human brain and effectively immortal, then you’re talking about giving 5*10^42 people a multi-trillion-year lifespan.
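I should stress that 5*10^42 depends entirely on what you assume, but as a sanity check, a Fermi estimate using figures I’m picking purely for illustration (a rough star count for the local supercluster, one solar luminosity per star, 20 W per brain-efficient mind) lands in the same ballpark:

```python
# Back-of-envelope sanity check of the "~10^42 minds" scale.
# Every constant here is an assumption for illustration only.
stars_in_local_supercluster = 1e17   # ~10^5 galaxies x ~10^12 stars (very rough)
watts_per_star = 4e26                # roughly one solar luminosity each
watts_per_mind = 20                  # a mind as power-efficient as a human brain

total_power = stars_in_local_supercluster * watts_per_star
minds = total_power / watts_per_mind
print(f"{minds:.1e} simultaneous human-brain-efficiency minds")  # ~2e42
```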

If you’re talking about giving 5*10^42 people a multi-trillion-year lifespan, you will look like a massive arsehole if you say, for example, that “climate change killing 7 billion people in the next century is a small price to pay for making sure we have an aligned AI to help us with the next bit”. Pointing out that a non-aligned AI is likely to irreversibly destroy everything it isn’t explicitly trying to preserve won’t change anyone’s mind, because even if you get past the inferential distance that makes most people hearing this say “off switch!” (both as a solution to the AI and to whichever channel you’re broadcasting on), at scales like this, utilitarianism feels wrong.

So, where does hyperbolic discounting come in?

We naturally discount the future as a “might not happen”. This is good. No matter how certain we think we are on paper, there is always the risk of an unknown-unknown. This risk doesn’t go away with more evidence, either — the classic illustration of the problem of induction is a turkey who observes that every morning they are fed by their farmer and so decides that the farmer has their best interests at heart; the longer this goes on, the more certain the turkey becomes, yet each day brings it closer to being slaughtered for Thanksgiving. The currently-known equivalents in physics would be things like false vacuum decay, brane collisions triggering a new Big Bang, or Boltzmann brains.

Because we don’t know the future, we should discount it. There are not 5*10^42 people. There might never be 5*10^42 people. To the extent that there might be, they might turn out to all be Literally Worse Than Hitler. Sure, there’s a chance of human flourishing on a scale that makes Heaven as described in the Bible seem worthless in comparison — and before you’re tempted to point out that Biblical heaven is supposed to be eternal, which is infinitely longer than any number of trillions of years: that counter-argument presupposes Utilitarianism and therefore doesn’t apply — but the further away those people and that outcome are from you, the less weight you should put on them ever existing and it ever happening.

Instead: Concentrate on the here-and-now. Concentrate on ending poverty. Fight climate change. Save endangered species. Campaign for nuclear disarmament and peaceful dispute resolution. Not because there can’t be 5*10^42 people in our future light cone, but because we can falter and fail at any and every step on the way from here.

AI, Futurology

Pocket brains

  • Total iPhone sales between Q4 2017 and Q4 2018: 217.52 million
  • Performance of Neural Engine, component of Apple A11 SoC used in iPhone 8, 8 Plus, and X: 600 billion operations per second
  • Estimated computational power required to simulate a human brain in real time: 36.8×10^15 operations per second
  • Total compute power of all iPhones sold between Q4 2017 and Q4 2018, assuming 50% were A11s (I’m not looking for more detailed stats right now): 65,256×10^15 operations per second
  • Number of simultaneous, real-time simulations of complete human brains that can be supported by 2017-18 sales of iPhones: ~1,773

 

  • Performance of “Next-generation Neural Engine” in Apple A12 SoC used in iPhone XR, XS, XS Max: 5 trillion operations per second
  • Assuming next year’s sales are unchanged (and given that all current models use this chip, I shouldn’t discount by 50% the way I did previously), number of simultaneous, real-time simulations of complete human brains that can be supported by 2018-19 sales of iPhones: 1.0876×10^21/36.8×10^15 ≈ 29,554 (both years are recomputed in the sketch below)
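For anyone who wants to check my working, here are both years’ estimates as a script, with the assumptions (50% A11 share in the first year, 100% A12 share and flat sales in the second) spelled out:

```python
BRAIN_OPS = 36.8e15  # estimated ops/s to simulate one human brain in real time

# Q4 2017 - Q4 2018: 217.52 million iPhones, assume half have the A11 (600 GOPS each)
a11_total = 217.52e6 * 0.5 * 600e9
print(a11_total / BRAIN_OPS)   # ~1,773 simultaneous real-time brains

# Q4 2018 - Q4 2019: assume identical sales, all with the A12 (5 TOPS each)
a12_total = 217.52e6 * 5e12
print(a12_total / BRAIN_OPS)   # ~29,554 simultaneous real-time brains
```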

 

  • Speedup required before one iPhone’s Neural Engine is sufficient to simulate a human brain in real time: 36.8×10^15/5×10^12 = 7,360
  • When this will happen, assuming Moore’s Law continues: log2(7360)×1.5 = 19.268… years = January, 2038
  • Reason to not expect this: A12 feature size is 7nm, a silicon atom’s diameter is ~0.234nm, so features can only shrink by a linear factor of about 30, or an areal factor of about 900, before they are atomic. (Oh no, you’ll have to buy a whole eight iPhones to equal your whole brain.) Both calculations are sketched below.
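The timeline estimate is just “doubling every 18 months” extrapolated from the A12, with the atomic-size caveat bolted on separately (the late-2018 starting point is my assumption from the A12 launch date):

```python
import math

BRAIN_OPS = 36.8e15
A12_OPS = 5e12

speedup_needed = BRAIN_OPS / A12_OPS              # 7,360x
years = math.log2(speedup_needed) * 1.5           # Moore's law: 2x every 18 months
print(speedup_needed, years)                      # ~7360, ~19.3 years from late 2018

# But feature sizes can only shrink so far:
feature_nm, atom_nm = 7, 0.234
linear = feature_nm / atom_nm                     # ~30x
areal = linear ** 2                               # ~900x -- well short of 7,360x
print(linear, areal, speedup_needed / areal)      # so ~8 phones per brain at the limit
```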

 

  • Purchase cost of existing hardware to simulate one human brain: <7,360×$749 → <$5,512,640
  • Power requirements of simulating one human brain in real time using existing hardware, assuming the vague estimates of ~5W TDP for an A12 SoC are correct: 7,360×~5W → ~36.8kW
  • Annual electricity bill from simulating one human brain in real time: 36.8kW * 1 year * $0.1/kWh = 32,200 US dollars
  • Reasons to be cautious about previous number: it ignores the cost of hardware failure, and I don’t know the MTBF of an A12 SoC so I can’t even calculate that
  • Fermi estimate of MTBF of Apple SoC: between myself, my coworkers, my friends, and my family, I have experience of at least 10 devices, and none have failed before being upgraded over a year later, so assume hardware replacement <10%/year → <$551,264/year
  • Assuming hardware replacement currently costs $551,264/year, and that Moore’s law continues, then expected date that the annual replacement cost of hardware required to simulate a human brain in real time becomes equal to median personal US annual income in 2016 ($31,099): log2($551,264/$31,099)×1.5 = 6.22… years = late December, 2024 (the sketch below reruns these numbers)
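And the cost figures, with every assumption ($749 per handset, ~5W per A12, $0.10/kWh, <10%/year replacement, $31,099 median income) kept visible:

```python
import math

PHONES = 7360
hardware = PHONES * 749                       # < $5,512,640 up front
power_kw = PHONES * 5 / 1000                  # ~36.8 kW
electricity = power_kw * 24 * 365 * 0.10      # ~ $32,000 / year at $0.10/kWh
replacement = hardware * 0.10                 # < $551,264 / year at <10% failure
median_income = 31099
years_until_affordable = math.log2(replacement / median_income) * 1.5
print(hardware, power_kw, round(electricity), replacement,
      round(years_until_affordable, 2))       # ~6.2 years from late 2018, i.e. end of 2024
```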
AI

Would this be a solution to the problem of literal-Genie omniscient AIs?

[stupivalent: neither malevolent nor benevolent, just doing exactly what it was told without awareness that what you said isn’t what you meant]

Imagine an AI that, as per [Robert Miles’ YouTube videos], has a perfect model of reality, that has absolutely no ethical constraints, and that is given the instruction “collect as many stamps as possible”.

Could the bad outcome be prevented if the AI was built to always add the following precondition, regardless of what it was tasked by a human to achieve?

“Your reward function is measured in terms of how well the person who gave you the instruction would have reacted if they had heard, at the moment they gave you the instruction, what you were proposing to do.”
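As pseudocode, the proposal is a wrapper that scores any candidate plan by a model of how the instruction-giver, at the moment of instruction, would have reacted to hearing the plan described, and optimises that instead of the raw task objective. Everything below (the class names, the approval model) is hypothetical; it’s only there to pin down the shape of the idea:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    description: str        # human-readable account of what the AI proposes to do
    expected_stamps: float  # the raw task objective, e.g. stamps collected

class ApprovalModel:
    """Hypothetical model of how the instruction-giver, at the moment they gave
    the instruction, would react to hearing this plan described. Returns a score
    in [0, 1]; actually learning this is the hard part."""
    def score(self, instruction: str, plan: Plan) -> float:
        raise NotImplementedError

def reward(instruction: str, plan: Plan, approval: ApprovalModel) -> float:
    # The precondition: the reward *is* the predicted approval. The raw task
    # objective only matters insofar as the instruction-giver would approve of
    # how it was achieved.
    return approval.score(instruction, plan)
```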

One might argue that Robert Miles’ stamp collector AI is a special case, as it is presupposed to model reality perfectly. I think such an objection is unreasonable: models don’t have to be perfect to cause the problems he described, and models don’t have to be perfect to at least try to predict what someone would have wanted.

How do you train an AI to figure out what people will and won’t approve of? I’d conjecture having the AI construct stories, tell those stories to people, and learn through story-telling what people consider to be “happy endings” and “sad endings”. Well, construct and read, but it’s much harder to teach a machine to read than it is to teach it to write — we’ve done the latter, and the former might be AI-complete.
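As a toy version of that conjecture: machine-written story endings, human labels for “happy” versus “sad”, and an off-the-shelf text classifier standing in for anything sophisticated. The training data here is invented purely for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Machine-written story endings, labelled by human readers (toy examples).
endings = [
    "Everyone in the village was safe, and the harvest was the best in years.",
    "The factory grew until there was no room left for anyone to live.",
    "She finally found her family, and they never went hungry again.",
    "Nothing remained of the city but neatly stacked stamps.",
]
labels = [1, 0, 1, 0]   # 1 = readers called it a happy ending, 0 = sad

approval = make_pipeline(TfidfVectorizer(), LogisticRegression())
approval.fit(endings, labels)

# Later, the planner describes a proposed outcome as a story ending and asks
# how readers would be likely to react to it.
print(approval.predict_proba(
    ["The orchard was saved and the children kept their home."])[:, 1])
```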

Disclaimer: I have an A-level in philosophy, but it’s not a good one. I’m likely to be oblivious to things that proper philosophers consider common knowledge. I’ve also been spending most of the last 18 months writing a novel and only covering recent developments in AI in my spare time.

AI, Futurology, Science, Software, Technology

The Singularity is Dead, Long Live The Singularity

The Singularity is one form of the idea that machines are constantly being improved and will one day make us all unemployable. Phrased that way, it should be no surprise that discussions of the Singularity are often compared with those of the Luddites from 1816.

“It’s different now!” many people say. Are they right to think that those differences are important?

There have been so many articles and blog posts (and books) about the Singularity that I need to be careful to make clear which type of “Singularity” I’m writing about.

I don’t believe in real infinities. Any of them. Something will get in the way before you reach them. I therefore do not believe in any single runaway process that becomes a deity-like A.I. in a finite time.

That doesn’t stop me worrying about “paperclip optimisers” that are just smart enough to cause catastrophic damage (this already definitely happens even with very dumb A.I.); nor does it stop me worrying about the effect of machines with an IQ of only 200 that can outsmart all but the single smartest human, rendering mental labour as redundant as physical labour already is; nor about machines with an IQ of only 85, which would make 15.9% of the world permanently unemployable (some claim that machines can never be artistic, but machines are already doing “creative” jobs in music, literature and painting, and even if they were not, there is a limit to how many such jobs there can be).
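That 15.9% figure is just the share of a normal IQ distribution (mean 100, standard deviation 15) falling below 85, i.e. one standard deviation under the mean:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
print(f"{iq.cdf(85):.1%}")   # 15.9% of people score below IQ 85
```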

So, for “the Singularity”, what I mean is this:

“A date after which the average human cannot keep up with the rate of progress.”

By this definition, I think it’s already happened. How many people have kept track of these things?:

Most of this was unbelievable science fiction when I was born. Between my birth and 2006, only a few of these things became reality. More than half happened or were invented in the 2010s. When Google’s AlphaGo went up against Lee Sedol, he thought he’d win easily, 5-0 or 4-1; instead he lost 1-4.

If you’re too young to have a Facebook account, there’s a good chance you’ll never need to learn any foreign language. Or make any physical object. Or learn to drive… there’s a fairly good chance you won’t be allowed to drive. And once you become an adult, if you come up with an invention or a plot for a novel or a motif for a song, there will be at least four billion other humans racing against you to publish it.

Sure, we don’t have a von Neumann probe nor even a clanking replicator at this stage (we don’t even know how to make one yet, unless you count “copy an existing life form”), but given we’ve got 3D printers working at 10 nanometers already, it’s not all that unreasonable to assume we will in the near future. The fact that life exists proves such machines are possible, after all.

None of this is to say humans cannot or will not adapt to change. We’ve been adapting to changes for a long time, we have a lot of experience of adapting to changes, we will adapt more. But there is a question:

“How fast can you adapt?”

Time, as they say, is money. Does it take you a week to learn a new job? A machine that already knows how to do it has a £500 advantage over you. A month? The machine has a £2,200 advantage. You need to get another degree? It has an £80,000 advantage even if the degree was free. That’s just for the average UK salary with none of the extra things employers have to care about.
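Those figures follow from the average UK salary at the time; I’m assuming roughly £26,500 a year, salary only, with none of the extra costs employers carry:

```python
average_uk_salary = 26_500           # assumed annual figure, salary only

per_week = average_uk_salary / 52    # ~ £510   -> "a week to learn the job"
per_month = average_uk_salary / 12   # ~ £2,200 -> "a month"
degree = average_uk_salary * 3       # ~ £79,500 -> "a three-year degree, even if free"
print(round(per_week), round(per_month), round(degree))
```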

We don’t face problems just from the machines outsmarting us, we face problems if all the people working on automation can between them outpace any significant fraction of the workforce. And there’s a strong business incentive to pay for such automation, because humans are one of the most expensive things businesses have to pay for.

I don’t have enough of a feeling for economics to guess what might happen if too many people are unemployed and therefore unable to afford the goods produced by machine labour. All I can say is that when I was in secondary school, among those of us young enough to have no income, pirating software and music was common. (I was the only one with a Mac, so I had to make do with magazine cover CDs for my software, but I think the observation is still worth something.)
