
Pocket brains

  • Total iPhone sales between Q4 2017 and Q4 2018: 217.52 million
  • Performance of Neural Engine, component of Apple A11 SoC used in iPhone 8, 8 Plus, and X: 600 billion operations per second
  • Estimated computational power required to simulate a human brain in real time: 36.8×10¹⁵ operations per second
  • Total compute power of all iPhones sold between Q4 2017 and Q4 2018, assuming 50% were A11s (I’m not looking for more detailed stats right now): 6.5256×10¹⁹ operations per second
  • Number of simultaneous, real-time simulations of complete human brains that can be supported by 2017-18 sales of iPhones: 6.5256×10¹⁹/36.8×10¹⁵ ≈ 1,773


  • Performance of “Next-generation Neural Engine” in the Apple A12 SoC used in the iPhone XR, XS, and XS Max: 5 trillion operations per second
  • Assuming next year’s sales are unchanged (and given that all current models use this chip, I shouldn’t discount by 50% the way I did previously), the number of simultaneous, real-time simulations of complete human brains that can be supported by 2018-19 sales of iPhones: 1.0876×10²¹/36.8×10¹⁵ ≈ 29,554


  • Speedup required before one iPhone’s Neural Engine is sufficient to simulate a human brain in real time: 36.8×10¹⁵/5×10¹² = 7,360
  • When this will happen, assuming Moore’s Law continues (one doubling every 1.5 years): log₂(7360)×1.5 = 19.268… years = January 2038
  • Reason to not expect this: the A12’s feature size is 7nm and a silicon atom’s diameter is ~0.234nm, so features may only shrink by a linear factor of about 30, an areal factor of about 900, before they are atomic. (Oh no, you’ll have to buy a whole eight iPhones to equal your whole brain.)


  • Purchase cost of existing hardware to simulate one human brain: <7,360×$749 → <$5,512,640
  • Power requirements of simulating one human brain in real time using existing hardware, assuming the vague estimates of ~5W TDP for an A12 SoC are correct: 7,360×~5W → ~36.8kW
  • Annual electricity bill from simulating one human brain in real time: 36.8kW × 1 year × $0.10/kWh ≈ $32,200
  • Reasons to be cautious about previous number: it ignores the cost of hardware failure, and I don’t know the MTBF of an A12 SoC so I can’t even calculate that
  • Fermi estimate of MTBF of Apple SoC: between myself, my coworkers, my friends, and my family, I have experience of at least 10 devices, and none have failed before being upgraded over a year later, so assume hardware replacement <10%/year → <$551,264/year
  • Assuming hardware replacement currently costs $551,264/year, and that Moore’s law continues, the expected date at which the annual replacement cost of the hardware required to simulate a human brain in real time becomes equal to median personal US annual income in 2016 ($31,099): log₂($551,264/$31,099)×1.5 = 6.22… years = late December, 2024
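
Since these bullets are doing a lot of arithmetic, here is the whole chain as a short Python sketch; every input is one of the estimates quoted above, so treat it as a Fermi-estimate calculator rather than anything authoritative:

    from math import log2

    IPHONES_SOLD = 217.52e6  # units sold, Q4 2017 to Q4 2018
    A11_OPS = 600e9          # Neural Engine ops/s, Apple A11
    A12_OPS = 5e12           # Neural Engine ops/s, Apple A12
    BRAIN_OPS = 36.8e15      # estimated ops/s to simulate a human brain
    DOUBLING_YEARS = 1.5     # assumed Moore's-law doubling time

    # Brains' worth of compute sold per year
    print(IPHONES_SOLD * 0.5 * A11_OPS / BRAIN_OPS)  # ~1,773 (50% were A11)
    print(IPHONES_SOLD * A12_OPS / BRAIN_OPS)        # ~29,554 (all models A12)

    # Moore's-law crossover: one phone's Neural Engine equals one brain
    speedup = BRAIN_OPS / A12_OPS                 # 7,360
    print(log2(speedup) * DOUBLING_YEARS)         # ~19.27 years, so ~January 2038

    # Cost of simulating one brain on 2018 hardware
    purchase = speedup * 749                      # < $5,512,640 to buy
    power_kw = speedup * 5 / 1000                 # ~36.8 kW at ~5 W per SoC
    print(power_kw * 24 * 365 * 0.10)             # ~$32,200/year of electricity
    replacement = purchase * 0.10                 # < $551,264/year at <10%/year
    print(log2(replacement / 31_099) * DOUBLING_YEARS)  # ~6.2 years, late 2024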

Old predictions, and how they’ve been standing up

The following was originally posted to a (long-since deleted) Livejournal account on 2012-06-05 02:55:27 BST. I have not edited this at all. Some of these predictions from 6 years ago have stood up pretty well; others have been proven impossible.


Predicting the future is, in retrospect, hilarious. Nonetheless, I want to make a guess as to how the world will look in ten years, even if only to have a concrete record of how inaccurate I am. Unless otherwise specified, these predictions are for 2022:

Starting with the least surprising: by 2024, solar power will be the cheapest form of electricity for everyone closer to the equator than the north of France. Peak solar output will equal today’s total power output from all sources, while the annual average will be 25% of it. Further progress relies on the development of large-scale energy storage, which may or may not happen depending on electric cars.

By 2022, CPU lithography will either reach 4nm, or everyone will decide it’s too expensive to keep on shrinking and stop sooner. There are signs that the manufacturers may not be able to mass produce 4nm chips due to their cost, even though that feature size is technically possible, so I’m going to say minimum feature size will be larger than you might expect from Moore’s law. One might assume that they can still get cheaper, even if not more powerful per unit area, but there isn’t much incentive to reduce production cost if you don’t also reduce power consumption; currently power consumption is improving slightly faster than Moore’s law, but not by much.

LEDs will be the most efficient light source; they will also be a tenth of their current price, making compact fluorescents totally obsolete. People will claim LEDs take ages to reach full brightness, simply because they are still “energy-saving” bulbs.

Bulk storage will probably be spinning magnetic platters, and flash drives will be as obsolete in 2022 as the floppy disk is in 2012. (Memristor-based storage is an underdog on this front, at least on the scale of 10 years.)

Western economies won’t go anywhere fast in the next 4 years, but might go back to normal after that; China’s economy will more than double in size by 2022.

In the next couple of years, people will realise that 3D printers take several hours to produce something the size of a cup, and will start to dismiss them as a fad. Meanwhile, people who already understand the limitations of 3D printers used them for organ culture back in 2011. In 10 to 20 years, “cost an arm and a leg” will fall out of common use in the same way, and for the same reason, that “you can no more XYZ than you can walk on the Moon” fell out of use in 1969: if you lose either an arm or a leg, you will be able to print out a replacement. I doubt there will be full self-replication by 2022, but I wouldn’t bet against it.

No strong general A.I., but the problem is software rather than hardware, so if I’m wrong you won’t notice until it’s too late. (A CPU’s transistors change state a hundred million times faster than your neurons, and the minimum feature size of the best 2012 CPUs is 22nm, compared to the 200nm thickness of the smallest dendrite that Google told me about).

Robot cars will be available in many countries by 2020, rapidly displacing human drivers because they are much safer and therefore cheaper to insure; taxi drivers disappear first, truckers fight harder but still fall. Human drivers may be forbidden from public roads by 2030.

Robot labour will be an even more significant part of the workforce. Foxconn, or their equivalent, will use more robots than there are people in Greater London.

SpaceX and similar companies will lower launch costs by at least a factor of 10; these lower costs, combined with standardised micro-satellites, will allow at least one university, 6th form, or school to launch a probe to the Moon.

Graphene proves useful, but does not become a wonder material. Cookware coated in synthetic diamond is commonplace, and can be bought in Tesco. Carbon nanotube rope is available in significant lengths from specialist retailers, but still very expensive.

In-vitro meat will have been eaten, but probably still be considered experimental by 2020. There will be large protests and well-signed petitions against it, but these will be ignored.

Full-genome sequencing will cost about a hundred quid and take less than 10 hours.

3D television and films will fail and be revived at least once more.

E-book readers will be physically flexible, with similar resolution to print.

Hydrogen will not be developed significantly; biofuels will look promising, but will probably lose out to electric cars because they go so well with solar power (alternative: genetic engineering makes a crop that can be burned in existing power stations, making photovoltaic and solar-thermal plants redundant while also providing fuel for petrol and diesel car engines); fusion will continue to not be funded properly; too many people will remain too scared of fission for it to improve significantly; lots of people will still be arguing about wind turbines, and others will still be selling snake-oil “people-powered” devices.

Machine vision will be connected to every CCTV system that gets sold in 2020, and it will do a better job than any combination of human operators could possibly manage. The now-redundant human operators will argue loudly that “a computer could never look at someone and know how they are feeling, it could never know if someone is drunk and about to start a fight”; someone will put this to the test, and the machine will win.

High-temperature superconductivity currently seems to be developing at random, so I can’t say if we will have any progress or not. I’m going to err on the side of caution, and say no significant improvements by 2022.

Optical-wavelength cloaking fabric will be available by the mid 2020s, but very expensive and probably legally restricted to military and law enforcement.

Most of Kepler’s exoplanet candidates will be confirmed in the next few years; by 2022, we will have found and confirmed an Earth-like planet in the habitable zone of its star (right now, the most Earth-like candidate exoplanet, Gliese 581 g, is unconfirmed, while the most Earth-like confirmed exoplanet, Gliese 581 d, is only slightly more habitable than Mars). We will find out if there is life on that world, but the answer will make no difference to most people’s lives.

OpenStreetMap will have replaced all other maps in almost every situation; Facebook will lose its crown as The social network; the comments section of most websites will still make people lose faith in humanity; English Wikipedia will be “complete” for some valid definition of the word.

Obama will win in 2012, the Republicans will win in 2016; the Conservatives will lose control of the UK regardless of when the next UK general election is held, but the Lib Dems might recover if Clegg departs.

Errors and omissions expected. It’s 3am!


Hyperinflation in the attention economy: what succeeds adverts?

Adverts.

Lots of people block them because they’re really, really annoying. (They’re also a major security risk and they slow down your browsing experience, but I doubt that’s the main reason.)

Because adverts are executable (who thought that was a good idea?), they also get used for cryptocurrency mining. Really inefficient cryptocurrency mining, but still.

Because they cost money, there is a financial incentive to systematically defraud advertisers by showing lots of real, paid-for, adverts to lots of fake users. (See also: adverts are executable. Can one advert download ten more? Even sneakily in the background will do, the user doesn’t need to see them.)

Because of the faked consumption (amongst other reasons), advertisers don’t get good value for money, lowering demand; because of lowered demand, websites get less money than they would under an efficient system; because of something which seems analogous to hyperinflation (but affecting the supply of spaces in which to advertise rather than the supply of money), websites are crowded with adverts; because of the excess of adverts, lots of people block them.

What if there was a better way?

Cut out the middle man, explicitly fund your website with your own cryptocurrency mining? Users see no adverts, don’t have their attention syphoned away.

Challenge: the problem I’m calling hyperinflation of attention (probably inaccurately, but it’s a good metaphor) would still apply to the supply of mining resources. This is already a separate problem with cryptocurrency mining: way too many people are spending way too many resources on something which merely counts and stores value without fundamentally adding any value to the system.

Potential solution: a better cryptocurrency, one which actually does something useful, such as SETI@home or folding@home. If it must be a currency, then perhaps one where each unit of useful work gets exchanged for a token which can be traded or redeemed with the organisation which produced it, in much the same way that banknotes could, for a long time, be taken to a central bank and exchanged for gold. And the token could be redeemed for whatever is economically useful: a user may perform 1e9 operations now in exchange for a token which would give them 2e9 floating-point operations in five years (by which time floating-point operations should be 10 times cheaper); or the user decodes two human genomes now in exchange for a token to decode one of their choice later; or whatever.
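
To make that exchange rate concrete, here is a minimal sketch in Python, assuming compute costs halve every 18 months (a Moore’s-law-style assumption; the function and its numbers are illustrative, not a real protocol):

    def cost_decline(years, halving_years=1.5):
        """How many times cheaper a unit of compute becomes after `years`."""
        return 2 ** (years / halving_years)

    print(cost_decline(5.0))  # ~10.1, i.e. ops are ~10x cheaper in five years

    # The example above: 1e9 ops now buys a token worth 2e9 ops in five years.
    # At ~10x cheaper, those 2e9 ops cost the issuer the equivalent of ~2e8 ops
    # today, so roughly 80% of the user's work is kept as the issuer's margin.
    print(2e9 / cost_decline(5.0) / 1e9)  # ~0.2 of the work returned, in today's terms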

A separate, but solvable, issue is that the only things I can think of which are processing-power-limited right now are research (climate forecasts, particle physics, brain simulation, simulated drug testing, AI), things used directly by the consumer (video game graphics), and colossal wastes of resources (bitcoin, spam); I’ll freely admit this list may just be down to ignorance on my part. So far as I can see, the only one of those which pairs website visitors with actual income is the video games… but even then it would be utter insanity for the paying customers to have their image rendering offloaded onto the non-payers. The clear solution is the same sort of mechanism that currently “solves” advertising: an automated auction between those who want to buy your CPU time and the websites that want to sell access to it.

Downside: this will kill your batteries if you don’t disable JavaScript.


Musk City, Antarctica

One of the criticisms of a Mars colony is that Antarctica is more hospitable in literally every regard (you might argue that the 6-month day and the 6-month night make it less hospitable, to which I would reply that light bulbs exist, and that you’d need light bulbs all year round on Mars anyway to avoid SAD-like symptoms).

I’ve just realised that the BFR, as presented in 2017, will be able to get you anywhere in Antarctica, from any launch site on Earth, in no more than 45 minutes, at the cost of a long-distance economy passenger flight; and that the Mars plan involves making fuel and oxidiser out of atmospheric CO₂ and water ice, so no infrastructure needs to be shipped conventionally before the first landing.


The end of human labour is inevitable, here’s why

OK. So, you might look at state-of-the-art A.I. and say “oh, this uses too much power compared to a human brain” or “this takes too many examples compared to a human brain”.

So far, correct.

But there are 7.6 billion humans: if an A.I. watches all of them all of the time (easy to imagine given around 2 billion of us already have two or three competing A.I. in our pockets all the time, forever listening for an activation keyword), then there is an enormous set of examples with which to train the machine mind.

“But,” you ask, “what about the power consumption?”

Humans cost a bare minimum of $1.25 per day, even if they’re literally slaves and you only pay for food and (minimal) shelter. Solar power can be as cheap as 2.99¢/kWh.

Combined, that means that any A.I. which uses less than 1.742 kilowatts per human-equivalent-part is beating the cheapest possible human ($1.25/day ÷ $0.0299/kWh ≈ 41.8 kWh/day ≈ 1.742 kW). By way of comparison, Google’s first-generation Tensor Processing Unit uses 40 W when busy, about 1/43.5 of that budget, and four of them working together as one can teach themselves to play Go better than the best human in about three days, rather than the 33 years of practice the best human needed. In the domain of Go, that makes the machine roughly 175,000 times as cost-efficient as a minimum-cost human (43.5 × 33 years ÷ 3 days ≈ 175,000).
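
The same sums in Python; the only input not quoted above is the 33 years, which is Lee Sedol’s age at the match and my stand-in for “the time the best human needed to learn Go”:

    human_cost = 1.25      # $/day, subsistence-level labour
    solar_price = 0.0299   # $/kWh
    tpu_watts = 40         # first-generation TPU when busy

    kwh_per_day = human_cost / solar_price   # ~41.8 kWh/day
    watts_budget = kwh_per_day / 24 * 1000   # ~1,742 W break-even power budget
    print(watts_budget)

    power_ratio = watts_budget / tpu_watts   # ~43.5
    time_ratio = 33 * 365.25 / 3             # 33 years of learning vs 3 days
    print(power_ratio * time_ratio)          # ~175,000x cost efficiency at Go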

And don’t forget that it’s reasonable for A.I. to have as many human-equivalent-parts as there are humans performing whichever skill is being fully automated.

Skill. Not sector, not factory, skill.

And when one skill is automated away, when the people who performed that skill go off to retrain on something else, no matter where they are or what they do, there will be an A.I. watching them and learning with them.

Is there a way out?

Sure. All you have to do is make sure you learn a skill nobody else is learning.

Unfortunately, there is a reason why “thinking outside the box” is such a business cliché: humans suck at that style of thinking, even when we know what it is and why it’s important. We’re too social, we copy each other and create by remixing more than by genuinely innovating, even when we think we have something new.

Computers are, ironically, better than humans at thinking outside the box: two of the issues in Concrete Problems in AI Safety are there because machines easily stray outside the boxes we are thinking within when we give them orders. (I suspect that one of the things which forces A.I. to need far more examples to learn things than we humans do is that they have zero preconceived notions, and therefore must be equally open-minded to all possibilities).

Worse, no matter how creative you are, if other humans see you performing a skill that machines have yet to master, those humans will copy you… and then the machines, even today’s machines, will rapidly learn from all the enthusiastic humans gleefully practising their new trick for staying one step ahead of the machines, the new skill they can point to and say “look, humans are special, computers can’t do this” about, right up until the computers do it.


The Singularity is Dead, Long Live The Singularity

The Singularity is one form of the idea that machines are constantly being improved and will one day make us all unemployable. Phrased that way, it should be no surprise that discussions of the Singularity are often compared with those of the Luddites from 1816.

“It’s different now!” many people say. Are they right to think that those differences are important?

There have been so many articles and blog posts (and books) about the Singularity that I need to be careful to make clear which type of “Singularity” I’m writing about.

I don’t believe in real infinities. Any of them. Something will get in the way before you reach them. I therefore do not believe in any single runaway process that becomes a deity-like A.I. in a finite time.

That doesn’t stop me worrying about “paperclip optimisers” that are just smart enough to cause catastrophic damage (this already definitely happens even with very dumb A.I.); nor does it stop me worrying about the effect of machines with an IQ of only 200 that can outsmart all but the single smartest human, rendering mental labour as redundant as physical labour already is; nor even machines with an IQ of 85, which would make 15.9% of the world permanently unemployable (IQ 85 is one standard deviation below the mean of 100, and 15.9% of a normal distribution falls below that). Some do claim that machines can never be artistic, but machines are already doing “creative” jobs in music, literature and painting, and even if they were not, there is a limit to how many such jobs there can be.

So, for “the Singularity”, what I mean is this:

“A date after which the average human cannot keep up with the rate of progress.”

By this definition, I think it’s already happened. How many people have kept track of these things?:

Most of this was unbelievable science fiction when I was born. Between my birth and 2006, only a few of these things became reality. More than half happened or were invented in the 2010s. When Google’s AlphaGo went up against Lee Sedol, he thought he’d beat it easily, 5-0 or 4-1; instead, he lost 1-4.

If you’re too young to have a Facebook account, there’s a good chance you’ll never need to learn any foreign language. Or make any physical object. Or learn to drive… there’s a fairly good chance you won’t be allowed to drive. And once you become an adult, if you come up with an invention or a plot for a novel or a motif for a song, there will be at least four billion other humans racing against you to publish it.

Sure, we don’t have a von Neumann probe nor even a clanking replicator at this stage (we don’t even know how to make one yet, unless you count “copy an existing life form”), but given we’ve got 3D printers working at 10 nanometers already, it’s not all that unreasonable to assume we will in the near future. The fact that life exists proves such machines are possible, after all.

None of this is to say humans cannot or will not adapt to change. We’ve been adapting to changes for a long time, we have a lot of experience of adapting to changes, we will adapt more. But there is a question:

“How fast can you adapt?”

Time, as they say, is money. Does it take you a week to learn a new job? A machine that already knows how to do it has a £500 advantage over you. A month? The machine has a £2,200 advantage. You need to get another degree? It has an £80,000 advantage even if the degree was free. That’s just at the average UK salary (about £26,000 a year), with none of the extra things employers have to care about.
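
As a sketch, here is that head start priced in Python, assuming the average UK salary of roughly £26,000 a year that the figures above imply:

    UK_SALARY = 26_000  # £/year, approximate UK average implied above

    def machine_advantage(retraining_years):
        """Wages forgone while a human retrains; the machine needs no retraining."""
        return UK_SALARY * retraining_years

    print(machine_advantage(1 / 52))  # a week: ~£500
    print(machine_advantage(1 / 12))  # a month: ~£2,167
    print(machine_advantage(3))       # a three-year degree: ~£78,000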

We don’t face problems just from the machines outsmarting us, we face problems if all the people working on automation can between them outpace any significant fraction of the workforce. And there’s a strong business incentive to pay for such automation, because humans are one of the most expensive things businesses have to pay for.

I don’t have enough of a feeling for economics to guess what might happen if too many people are unemployed and therefore unable to afford the goods produced by machine labour; all I can say is that when I was in secondary school, among those of us young enough to have no income, pirating software and music was common. (I was the only one with a Mac, so I had to make do with magazine cover CDs for my software, but I think the observation is still worth something.)


The near future

Five years ago, 3D-printed kidneys were demonstrated live on stage. Teams like that one are working on all the other organs in the human body right now, except for the brain.

Brain-computer interfaces have been around for much longer. Not that you need one if there isn’t a brain in the body, and plenty of research is going on to create silicon brains for robotic bodies: brains that learn from experience how to walk, or see. Naturally, if a robot knows how to walk, it can be given a higher-level order like “walk forward”, or even “walk to the shops” if you add Google Maps.

Put 3D bio-printing and BCI together, and it looks like remote-controlled organic avatars are in our near future.

My first thought was of bodyguards being more willing to (literally!) take a bullet for their employers, because the bodyguards would be using artificial bodies… but then I realised bodyguards would mostly be redundant because many of the people who currently use bodyguards would be able to use remote controlled bodies themselves, and never be in any real danger in the first place.

Then I thought of soldiers… but why go to the trouble of 3D printing a soft, fragile, organic body for soldiers to control when you could, with far less exciting new developments in bio-printing, manufacture a metal and plastic avatar for your soldiers? Or make them in a non-human shape that better suits military needs?

The next most obvious career it could make redundant is prostitution, depending on what it costs. I know essentially nothing about what life is like for prostitutes, only the scare stories that make it into national media; and yet all of those threats to life and safety would cease to exist if prostitutes didn’t have to physically touch their customers, if they could interact via a remotely controlled biological robot.

The flip side (or the same side, depending on your views of prostitution), is that it would make crime easier to commit and harder to prove guilt. You couldn’t tell from the outside if you were looking at a real human, or a printed copy that was remotely controlled. It would mean an end to eye-witnessing/DNA testing/fingerprinting criminals, because such avatars could be of anyone, real or imagined. A simple MRI scan or X-ray wouldn’t be enough, because printing a lump of disconnected brain cells in the shape of a brain would fool such a scan, yet be no more of a person than you would get from sculpting a dozen cattle brains into the same shape.
