Futurology, Minds, Philosophy, Politics, SciFi, Technology, Transhumanism

Sufficient technology

Let’s hypothesise sufficient brain scans. As far as I know, we don’t have better than either very low resolution full-brain imaging (millions of synapses per voxel), or very limited high resolution imaging (thousands of synapses total), at least not for living brains. Let’s just pretend for the sake of argument that we have synapse-resolution full-brain scans of living subjects.

What are the implications?

  • Is a backup of your mind protected by the right to avoid self-incrimination? What about the minds of your pets?
  • Does a backup need to be punished (e.g. prison) if the person it is made from is punished? What if the offence occurred after the backup was made?
  • If the mind state is running rather than offline cold-storage, how many votes do all the copies get? What if they’re allowed to diverge? Which of them is allowed to access the bank accounts or other assets of the original? Is the original entitled to money earned by the copies?
  • If you memorise something and then get backed up, is that copyright infringement?
  • If a mind can run on silicon for less than the cost of food to keep a human healthy, can anyone other than the foremost mind in their respective field ever be employed?
  • If someone is backed up and the original is then killed by someone who knows the person was backed up, is that murder, or is it the equivalent of a serious assault that causes a brief period of amnesia?
AI, Futurology, Opinion, Philosophy

Memetic monocultures

Brief kernel of an idea:

  1. Societies deem certain ideas “dangerous”.
  2. If it is possible to technologically eliminate perceived dangers, we will be tempted to do so, even when we perceive wrongly.
  3. Group-think has led to catastrophic misjudgments.
  4. This represents a potential future “great filter” for the Fermi paradox. It does not apply to previous attempts at eliminating dissenting views, as they were social, not technological, in nature, and limited in geographical scope.
  5. This risk has not yet become practical, but we shouldn’t feel complacent just because brain-computer interfaces are basic and indoctrinal viruses are fictional: universal surveillance is already sufficient and affordable, limited only by the need for sufficiently advanced AI to assist the human overseers (perfect AI not required).
AI, Futurology

Pocket brains

  • Total iPhone sales between Q4 2017 and Q4 2018: 217.52 million
  • Performance of the Neural Engine in the Apple A11 SoC, used in the iPhone 8, 8 Plus, and X: 600 billion operations per second
  • Estimated computational power required to simulate a human brain in real time: 36.8×10¹⁵ operations per second
  • Total compute power of all iPhones sold between Q4 2017 and Q4 2018, assuming 50% were A11s (I’m not looking for more detailed stats right now): 65,256×10¹⁵ operations per second
  • Number of simultaneous, real-time simulations of complete human brains that can be supported by 2017-18 sales of iPhones: 65,256×10¹⁵/36.8×10¹⁵ ≈ 1,773

 

  • Performance of the “next-generation Neural Engine” in the Apple A12 SoC, used in the iPhone XR, XS, and XS Max: 5 trillion operations per second
  • Assuming next year’s sales are unchanged (and given that all current models use this chip, I shouldn’t discount by 50% the way I did previously), number of simultaneous, real-time simulations of complete human brains that can be supported by 2018-19 sales of iPhones: 1.0876×10²¹/36.8×10¹⁵ ≈ 29,554 (both fleet figures are checked in the sketch below)
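
Both fleet estimates can be sanity-checked in a few lines of Python; the inputs are exactly the figures quoted in the bullets above, including the 36.8×10¹⁵ ops/s brain estimate, which is taken on faith:

```python
# Back-of-envelope check of the iPhone-fleet arithmetic above.
BRAIN_OPS = 36.8e15                # ops/s to simulate one brain in real time

# 2017-18: 217.52M iPhones sold, assuming 50% carry an A11 (600e9 ops/s each)
a11_fleet = 217.52e6 * 0.5 * 600e9
print(f"{a11_fleet:.4e} ops/s")               # 6.5256e+19 = 65,256e15
print(f"{a11_fleet / BRAIN_OPS:.0f} brains")  # ~1,773

# 2018-19: same unit sales assumed, all carrying an A12 (5e12 ops/s each)
a12_fleet = 217.52e6 * 5e12
print(f"{a12_fleet:.4e} ops/s")               # 1.0876e+21
print(f"{a12_fleet / BRAIN_OPS:.0f} brains")  # ~29,554
```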

 

  • Speedup required before one iPhone’s Neural Engine is sufficient to simulate a human brain in real time: 36.8×10¹⁵/5×10¹² = 7,360
  • When this will happen, assuming Moore’s law continues (a doubling every 18 months): log₂(7,360)×1.5 = 19.268… years = January 2038
  • Reason to not expect this: the A12’s feature size is 7nm and a silicon atom is ~0.234nm across, so features may only shrink by a linear factor of about 30, an areal factor of about 900, before they are atomic. (Oh no, you’ll have to buy a whole eight iPhones to equal your whole brain; see the sketch below.)
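
The same arithmetic as code; the 18-month doubling period and the ~0.234nm atomic diameter are the assumptions stated above:

```python
from math import log2

BRAIN_OPS = 36.8e15        # ops/s for one real-time brain simulation
A12_OPS = 5e12             # ops/s of the A12 Neural Engine

speedup = BRAIN_OPS / A12_OPS        # 7,360x required
print(log2(speedup) * 1.5)           # ~19.27 years at one doubling per 18 months

linear_shrink = 7 / 0.234            # 7nm features vs ~0.234nm atoms: ~30x left
print(linear_shrink ** 2)            # areal factor: ~900x
print(speedup / linear_shrink ** 2)  # ~8 iPhones per brain at the atomic limit
```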

 

  • Purchase cost of existing hardware to simulate one human brain: <7,360×$749 → <$5,512,640
  • Power requirement of simulating one human brain in real time using existing hardware, assuming the vague estimates of ~5W TDP for an A12 SoC are correct: 7,360×~5W → ~36.8kW
  • Annual electricity bill from simulating one human brain in real time: 36.8kW × 1 year × $0.10/kWh ≈ $32,260
  • Reason to be cautious about the previous number: it ignores the cost of hardware failure, and I don’t know the MTBF of an A12 SoC, so I can’t even calculate that
  • Fermi estimate of the MTBF of an Apple SoC: between myself, my coworkers, my friends, and my family, I have experience of at least 10 devices, none of which failed before being upgraded more than a year later, so assume hardware replacement of <10%/year → <$551,264/year
  • Assuming hardware replacement currently costs $551,264/year, and that Moore’s law continues, the expected date when the annual hardware-replacement cost of simulating a human brain in real time falls to the median personal US annual income in 2016 ($31,099): log₂($551,264/$31,099)×1.5 = 6.22… years = late December 2024 (checked in the sketch below)
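
And the cost model in the same form; the $749 handset price, ~5W TDP, $0.10/kWh electricity, and <10%/year failure rate are the assumptions listed above:

```python
from math import log2

units, price_usd, tdp_w = 7360, 749, 5

capex = units * price_usd               # <$5,512,640 of handsets
power_kw = units * tdp_w / 1000         # ~36.8 kW
print(power_kw * 24 * 365.25 * 0.10)    # ~$32,260/year of electricity
replacement = 0.10 * capex              # <$551,264/year of failed hardware
print(log2(replacement / 31099) * 1.5)  # ~6.22 years until this matches the
                                        # 2016 median US personal income
```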
Futurology

Old predictions, and how they’ve been standing up

The following was originally posted to a (long-since deleted) LiveJournal account on 2012-06-05 02:55:27 BST. I have not edited it at all. Some of these predictions from six years ago have stood up pretty well; others have already been proven impossible.


Predicting the future is, in retrospect, hilarious. Nonetheless, I want to make a guess as to how the world will look in ten years, even if only to have a concrete record of how inaccurate I am. Unless otherwise specified, these predictions are for 2022:

Starting with the least surprising: by 2024, solar power will be the cheapest form of electricity for everyone closer to the equator than the north of France. Peak solar power output will equal current total power output from all sources, while the annual average will be 25% of that. Further progress relies on the development of large-scale energy storage systems, which may or may not happen depending on electric cars.

By 2022, CPU lithography will either reach 4nm, or everyone will decide it’s too expensive to keep on shrinking and stop sooner. There are signs that the manufacturers may not be able to mass produce 4nm chips due to their cost, even though that feature size is technically possible, so I’m going to say minimum feature size will be larger than you might expect from Moore’s law. One might assume that they can still get cheaper, even if not more powerful per unit area, but there isn’t much incentive to reduce production cost if you don’t also reduce power consumption; currently power consumption is improving slightly faster than Moore’s law, but not by much.

LEDs will be the most efficient light source; they will also be a tenth of their current price, making compact fluorescents totally obsolete. People will claim they take ages to get to full brightness, just because they are still energy-saving.

Bulk storage will probably be spinning magnetic platters, and flash drives will be as obsolete in 2022 as the floppy disk is in 2012. (Memristor-based storage is an underdog on this front, at least on the scale of 10 years.)

Western economies won’t go anywhere fast in the next 4 years, but might go back to normal after that; China’s economy will more than double in size by 2022.

In the next couple of years, people will have realised that 3D printers take several hours to produce something the size of a cup, and will have started to dismiss them as a fad. Meanwhile, people who already know the limitations of 3D printers had, by 2011, already used them for organ culture — in 10 to 20 years, “cost an arm and a leg” will fall out of common use in the same way, and for the same reason, that “you can no more XYZ than you can walk on the Moon” fell out of use in 1969: if you lose either an arm or a leg, you will be able to print out a replacement. I doubt there will be full self-replication by 2022, but I wouldn’t bet against it.

No strong general A.I., but the problem is software rather than hardware, so if I’m wrong you won’t notice until it’s too late. (A CPU’s transistors change state a hundred million times faster than your neurons, and the minimum feature size of the best 2012 CPUs is 22nm, compared to the 200nm thickness of the smallest dendrite that Google told me about).

Robot cars will be available in many countries by 2020, rapidly displacing human drivers because they are much safer and therefore cheaper to insure; taxi drivers disappear first, truckers fight harder but still fall. Human drivers may be forbidden from public roads by 2030.

Robot labour will be an even more significant part of the workforce. Foxconn, or their equivalent, will use more robots than there are people in Greater London.

SpaceX and similar companies will lower launch costs by at least a factor of 10; these lower costs, combined with standardised micro-satellites, will allow at least one university, 6th form, or school to launch a probe to the Moon.

Graphene proves useful, but does not become a wonder material. Cookware coated in synthetic diamond is commonplace, and can be bought in Tesco. Carbon nanotube rope is available in significant lengths from specialist retailers, but still very expensive.

In-vitro meat will have been eaten, but probably still be considered experimental by 2020. There will be large protests and well-signed petitions against it, but these will be ignored.

Full-genome sequencing will cost about a hundred quid and take less than 10 hours.

3D television and films will fail and be revived at least once more.

E-book readers will be physically flexible, with similar resolution to print.

Hydrogen will not be developed significantly; biofuels will look promising, but will probably lose out to electric cars because they go so well with solar power (alternative: genetic engineering makes a crop that can be burned in existing power stations, making photovoltaic and solar-thermal plants redundant while also providing fuel for petrol and diesel car engines); fusion will continue to not be funded properly; too many people will remain too scared of fission for it to improve significantly; lots of people will still be arguing about wind turbines, and others will still be selling snake-oil “people-powered” devices.

Machine vision will be connected to every CCTV system that gets sold in 2020, and it will do a better job than any combination of human operators could possibly manage. The now-redundant human operators will argue loudly that “a computer could never look at someone and know how they are feeling, it could never know if someone is drunk and about to start a fight”; someone will put this to the test, and the machine will win.

High-temperature superconductivity currently seems to be developing at random, so I can’t say if we will have any progress or not. I’m going to err on the side of caution, and say no significant improvements by 2022.

Optical-wavelength cloaking fabric will be available by the mid 2020s, but very expensive and probably legally restricted to military and law enforcement.

Most of Kepler’s exoplanet candidates will be confirmed in the next few years; by 2022, we will have found and confirmed an Earth-like planet in the habitable zone of its star (right now, the most Earth-like candidate exoplanet, Gliese 581 g, is unconfirmed, while the most Earth-like confirmed exoplanet, Gliese 581 d, is only slightly more habitable than Mars). We will find out if there is life on that world, but the answer will make no difference to most people’s lives.

OpenStreetMap will have replaced all other maps in almost every situation; Facebook will lose its crown as The social network; the comments sections of most websites will still make people lose faith in humanity; English Wikipedia will be “complete” for some valid definition of the word.

Obama will win 2012, the Republicans will win 2016; the Conservatives will lose control of the UK regardless of when the next UK general election is held, but the Lib Dems might recover if Clegg departs.

Errors and omissions expected. It’s 3am!

Futurology, Software, Technology

Hyperinflation in the attention economy: what succeeds adverts?

Adverts.

Lots of people block them because they’re really, really annoying. (They’re also a major security risk and slow down your browsing experience, but I doubt that’s the main reason people block them.)

Because adverts are executable (who thought that was a good idea?), they also get used for cryptocurrency mining. Really inefficient cryptocurrency mining, but still.

Because they cost money, there is a financial incentive to systematically defraud advertisers by showing lots of real, paid-for, adverts to lots of fake users. (See also: adverts are executable. Can one advert download ten more? Even sneakily in the background will do, the user doesn’t need to see them.)

Because of the faked consumption (amongst other reasons), advertisers don’t get good value for money, lowering demand; because of lowered demand, websites get less money than they would under an efficient system; because of something which seems analogous to hyperinflation (but affecting the supply of spaces in which to advertise rather than the supply of money), websites are crowded with adverts; because of the excess of adverts, lots of people block them.

What if there was a better way?

Cut out the middle man, explicitly fund your website with your own cryptocurrency mining? Users see no adverts, don’t have their attention syphoned away.

Challenge: the problem I’m calling hyperinflation of attention (probably inaccurately, but it’s a good metaphor) would still apply to the supply of cryptocurrency-mining resources. This is already a separate problem with cryptocurrency mining — way too many people are spending way too many resources on something which only counts and stores value, without fundamentally adding value to the system.

Potential solution: a better cryptocurrency, one which actually does something useful — useful work such as SETI@home or folding@home. If it must be a currency, then perhaps one where each unit of useful work is exchanged for a token which can be traded, or redeemed with the organisation which produced it, in much the same way that banknotes could, for a long time, be taken to a central bank and exchanged for gold. And the token could be redeemed for whatever is economically useful: a user may perform 10⁹ operations now in exchange for a token which would give them 2×10⁹ floating-point operations in five years (by which time floating-point operations should be 10 times cheaper); or the user decodes two human genomes now in exchange for a token to decode one of their choice later; or whatever.
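
As a concrete sketch of such a token (the class name, the payout schedule, and the assumption that the holder’s entitlement simply doubles every five years are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class WorkToken:
    ops_contributed: float   # useful operations performed at issue time
    issue_year: int

def redemption_value(token: WorkToken, year: int) -> float:
    """Operations the issuer owes if the token is redeemed in `year`.

    The holder's entitlement doubles every five years; if compute gets
    10x cheaper over the same period, honouring the token costs the
    issuer ever less, which is what makes the scheme plausible.
    """
    elapsed = max(0, year - token.issue_year)
    return token.ops_contributed * 2 ** (elapsed / 5)

token = WorkToken(ops_contributed=1e9, issue_year=2018)
print(redemption_value(token, 2023))   # 2e9 ops, matching the example above
```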

A separate, but solvable, issue is that the only things I can think of which are processing-power-limited right now are research (climate forecasts, particle physics, brain simulation, simulated drug testing, AI), things used directly by the consumer (video game graphics), and colossal wastes of resources (bitcoin, spam) — I’ll freely admit this list may just be down to ignorance on my part. So far as I can see, the only one of those which pairs website visitors with actual income is video games… but even then, it would be utter insanity for the paying customers to have their image rendering offloaded onto the non-payers. The clear solution is the same sort of mechanism that currently “solves” advertising: an automated auction matching those who want to buy your CPU time with the websites that want to sell access to it, as sketched below.
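
A minimal sketch of that auction, assuming the simple second-price rule that ad exchanges have traditionally used (all buyer names and bids are illustrative, priced in dollars per CPU-second):

```python
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Sell a visitor's CPU time to the highest bidder, who pays
    the second-highest bid (a simple second-price auction)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

print(run_auction({
    "protein-folding.example": 3e-6,
    "climate-model.example": 2e-6,
    "render-farm.example": 1e-6,
}))   # ('protein-folding.example', 2e-06): winner pays the runner-up's bid
```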

Downside: this will kill your battery if you don’t disable JavaScript.

Futurology, Technology

Musk City, Antarctica

One of the criticisms of a Mars colony is that Antarctica is more hospitable in literally every regard (you might argue that the 6-month day and the 6-month night make it less hospitable, to which I would reply that light bulbs exist, and that you’d need light bulbs all year round on Mars anyway to avoid SAD-like symptoms).

I’ve just realised that the BFR, as presented in 2017, will be able to get you anywhere in Antarctica, from any launch site on Earth, in no more than 45 minutes, at the cost of a long-distance economy passenger flight; and the Mars plan involves making fuel and oxidiser out of atmospheric CO₂ and water ice, so no infrastructure needs to be shipped conventionally before the first landing.

AI, Futurology

The end of human labour is inevitable, here’s why

OK. So, you might look at state-of-the-art A.I. and say “oh, this uses too much power compared to a human brain” or “this takes too many examples compared to a human brain”.

So far, correct.

But there are 7.6 billion humans: if an A.I. watches all of them all of the time (easy to imagine given around 2 billion of us already have two or three competing A.I. in our pockets all the time, forever listening for an activation keyword), then there is an enormous set of examples with which to train the machine mind.

“But,” you ask, “what about the power consumption?”

Humans cost a bare minimum of $1.25 per day, even if they’re literally slaves and you only pay for food and (minimal) shelter. Solar power can be as cheap as 2.99¢/kWh.

Combined, that means that any A.I. which uses less than 1.742 kilowatts per human-equivalent-part is beating the cheapest possible human. By way of comparison, Google’s first-generation Tensor Processing Unit uses 40 W when busy; in the domain of Go, it’s about 174,969 times as cost-efficient as a minimum-cost human, because four of them working together as one can teach themselves to play Go better than the best human in… three days.
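
The break-even wattage is easy to check, using only the $1.25/day and 2.99¢/kWh figures above (the 174,969× cost-efficiency claim also folds in learning speed, which this sketch makes no attempt to verify):

```python
# Power budget at which an A.I. matches the cheapest possible human.
human_cost_per_day = 1.25   # $/day: bare-minimum (slave-labour) human
solar_usd_per_kwh = 0.0299  # $/kWh: cheapest solar power

kwh_per_day = human_cost_per_day / solar_usd_per_kwh
breakeven_kw = kwh_per_day / 24
print(breakeven_kw)                    # ~1.742 kW per human-equivalent-part

# Four first-generation TPUs at ~40 W busy draw 160 W in total,
# roughly a tenth of that budget, before counting learning speed.
print(breakeven_kw * 1000 / (4 * 40))  # ~10.9x under break-even
```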

And don’t forget that it’s reasonable for A.I. to have as many human-equivalent-parts as there are humans performing whichever skill is being fully automated.

Skill. Not sector, not factory, skill.

And when one skill is automated away, when the people who performed that skill go off to retrain on something else, no matter where they are or what they do, there will be an A.I. watching them and learning with them.

Is there a way out?

Sure. All you have to do is make sure you learn a skill nobody else is learning.

Unfortunately, there is a reason why “thinking outside the box” is such a business cliché: humans suck at that style of thinking, even when we know what it is and why it’s important. We’re too social, we copy each other and create by remixing more than by genuinely innovating, even when we think we have something new.

Computers are, ironically, better than humans at thinking outside the box: two of the issues in Concrete Problems in AI Safety are there because machines easily stray outside the boxes we are thinking within when we give them orders. (I suspect that one of the things which forces A.I. to need far more examples to learn things than we humans do is that they have zero preconceived notions, and therefore must be equally open-minded to all possibilities).

Worse, no matter how creative you are, if other humans see you performing a skill that machines have yet to master, those humans will copy you… and then the machines, even today’s machines, will rapidly learn from all the enthusiastic humans gleefully copying the new trick that keeps them one step ahead of the machines, the new skill they can point to and say “look, humans are special, computers can’t do this”, right up until the computers do it.
