Futurology

Old predictions, and how they’ve been standing up

The following was originally posted to a (long-since deleted) Livejournal account on 2012-06-05 02:55:27 BST. I have not edited this at all. Some of these predictions from six years ago have stood up pretty well; others have been proven impossible.


Predicting the future is, in retrospect, hilarious. Nonetheless, I want to make a guess as to how the world will look in ten years, even if only to have a concrete record of how inaccurate I am. Unless otherwise specified, these predictions are for 2022:

Starting with the least surprising: By 2024, solar power will be the cheapest form of electricity for everyone closer to the equator than the north of France. Peak solar power output will equal current total power output from all sources, while annual average output will be 25% of that. Further progress relies on the development of large-scale energy storage systems, which may or may not happen depending on electric cars.

By 2022, CPU lithography will either reach 4nm, or everyone will decide it’s too expensive to keep on shrinking and stop sooner. There are signs that the manufacturers may not be able to mass produce 4nm chips due to their cost, even though that feature size is technically possible, so I’m going to say minimum feature size will be larger than you might expect from Moore’s law. One might assume that they can still get cheaper, even if not more powerful per unit area, but there isn’t much incentive to reduce production cost if you don’t also reduce power consumption; currently power consumption is improving slightly faster than Moore’s law, but not by much.

LEDs will be the most efficient light source; they will also be a tenth of their current price, making compact fluorescents totally obsolete. People will still claim they take ages to get to full brightness, just because they are energy-saving.

Bulk storage will probably be spinning magnetic platters, and flash drives will be as obsolete in 2022 as the floppy disk is in 2012. (Memristor-based storage is an underdog on this front, at least on the scale of 10 years.)

Western economies won’t go anywhere fast in the next 4 years, but might go back to normal after that; China’s economy will more than double in size by 2022.

In the next couple of years, people will have realised that 3D printers take several hours to produce something the size of a cup, and will have started to dismiss them as a fad. Meanwhile, people who already know the limitations of 3D printers had, by 2011, already used them for organ culture — in 10 to 20 years, “cost an arm and a leg” will fall out of common use in the same way and for the same reason that “you can no more XYZ than you can walk on the Moon” fell out of use in 1969: if you lose either an arm or a leg, you will be able to print out a replacement. I doubt there will be full self-replication by 2022, but I wouldn’t bet against it.

No strong general A.I., but the problem is software rather than hardware, so if I’m wrong you won’t notice until it’s too late. (A CPU’s transistors change state a hundred million times faster than your neurons, and the minimum feature size of the best 2012 CPUs is 22nm, compared to the 200nm thickness of the smallest dendrite that Google told me about).

Robot cars will be available in many countries by 2020, rapidly displacing human drivers because they are much safer and therefore cheaper to insure; taxi drivers disappear first, truckers fight harder but still fall. Human drivers may be forbidden from public roads by 2030.

Robot labour will be an even more significant part of the workforce. Foxconn, or their equivalent, will use more robots than there are people in Greater London.

SpaceX and similar companies lower launch costs by at least a factor of 10; these lower costs, combined with standardised micro-satellites, allow at least one university, sixth form, or school to launch a probe to the Moon.

Graphene proves useful, but does not become a wonder material. Cookware coated in synthetic diamond is commonplace, and can be bought in Tesco. Carbon nanotube rope is available in significant lengths from specialist retailers, but still very expensive.

In-vitro meat will have been eaten, but probably still be considered experimental by 2020. There will be large protests and well-signed petitions against it, but these will be ignored.

Full-genome sequencing will cost about a hundred quid and take less than 10 hours.

3D television and films will fail and be revived at least once more.

E-book readers will be physically flexible, with similar resolution to print.

Hydrogen will not be developed significantly; biofuels will look promising, but will probably lose out to electric cars because they go so well with solar power (alternative: genetic engineering makes a crop that can be burned in existing power stations, making photovoltaic and solar-thermal plants redundant while also providing fuel for petrol and diesel car engines); fusion will continue to not be funded properly; too many people will remain too scared of fission for it to improve significantly; lots of people will still be arguing about wind turbines, and others will still be selling snake-oil “people-powered” devices.

Machine vision will be connected to every CCTV system that gets sold in 2020, and it will do a better job than any combination of human operators could possibly manage. The now-redundant human operators will argue loudly that “a computer could never look at someone and know how they are feeling, it could never know if someone is drunk and about to start a fight”; someone will put this to the test, and the machine will win.

High-temperature superconductivity currently seems to be developing at random, so I can’t say if we will have any progress or not. I’m going to err on the side of caution, and say no significant improvements by 2022.

Optical-wavelength cloaking fabric will be available by the mid 2020s, but very expensive and probably legally restricted to military and law enforcement.

Most of Kepler’s exoplanet candidates will be confirmed in the next few years; by 2022, we will have found and confirmed an Earth-like planet in the habitable zone of its star (right now, the most Earth-like candidate exoplanet, Gliese 581 g, is unconfirmed, while the most Earth-like confirmed exoplanet, Gliese 581 d, is only slightly more habitable than Mars). We will find out if there is life on that world, but the answer will make no difference to most people’s lives.

OpenStreetMap will have replaced all other maps in almost every situation; Facebook will lose its crown as The social network; the comments section of most websites will still make people lose faith in humanity; English Wikipedia will be “complete” for some valid definition of the word.

Obama will win 2012, the Republicans will win 2016; The Conservatives will lose control of the UK regardless of when the next UK general election is held, but the Lib Dems might recover if Clegg departs.

Errors and omissions expected. It’s 3am!

Futurology, Science, AI, Philosophy, Psychology

How would you know whether an A.I. was a person or not?

I did an A-level in Philosophy. (For non-UK people, A-levels are a two-year course taken after high school and before university.)

I did it for fun rather than good grades — I had enough good grades to get into university, and when the other A-levels required my focus, I was fine putting zero further effort into the Philosophy course. (Something which was very clear when my final results came in).

What I didn’t expect at the time was that the rapid development of artificial intelligence in my lifetime would make it absolutely vital that humanity develops a concrete and testable understanding of what counts as a mind, as consciousness, as self-awareness, and as the capability to suffer. Yes, we already have that problem in the form of animal suffering and whether meat can ever be ethical, but that problem exists only for our consciences: the animals can’t take over the world and treat us the way we treat them. An artificial mind would be almost totally pointless if it were as limited as an animal, and the general aim is quite a lot higher than that.

Some fear that we will replace ourselves with machines which may be very effective at what they do, but don’t have anything “that it’s like to be”. One of my fears is that we’ll make machines that do “have something that it’s like to be”, but who suffer greatly because humanity fails to recognise their personhood. (A paperclip optimiser doesn’t need to hate us to kill us, but I’m more interested in the sort of mind that can feel what we can feel).

I don’t have a good description of what I mean by any of the normal words. Personhood, consciousness, self awareness, suffering… they all seem to skirt around the core idea, but to the extent that they’re correct, they’re not clearly testable; and to the extent that they’re testable, they’re not clearly correct. A little like the maths-vs.-physics dichotomy.

Consciousness? Versus what, subconscious decision making? Isn’t this distinction merely system 1 vs. system 2 thinking? Even then, the word doesn’t tell us what it means to have it objectively, only subjectively. In some ways, some forms of A.I. look like system 1 — fast, but error-prone, based on heuristics; while other forms look like system 2 — slow, careful, deliberatively weighing all the options.

Self-awareness? What do we even mean by that? It’s absolutely trivial to make an A.I. aware of its own internal states; it’s even necessary for anything more than a perceptron. Do we mean a mirror test? (Or a non-visual equivalent for non-visual entities, including both blind people and smell-focused animals such as dogs.) That at least can be tested.

Capability to suffer? What does that even mean in an objective sense? Is suffering equal to negative reinforcement? If you have only positive reinforcement, is the absence of reward itself a form of suffering?

Introspection? As I understand it, the human psychology of this is that we don’t really introspect, we use system 2 thinking to confabulate justifications for what system 1 thinking made us feel.

Qualia? Sure, but what is one of these as an objective, measurable, detectable state within a neural network, be it artificial or natural?

Empathy or mirror neurons? I can’t decide how I feel about this one. At first glance, if one mind can feel the same as another mind, that seems like it should capture the general, ill-defined concept I’m after… but then I realised I don’t see why that would follow, and had the temporarily disturbing mental image of an A.I. which can perfectly mimic the behaviour corresponding to the emotional state of someone it’s observing, without actually feeling anything itself.

And then the disturbance went away as I realised this is obviously trivially possible, because even a video recording fits that definition… or, hey, a mirror. A video recording somehow feels like it’s fine, it isn’t “smart” enough to be imitating, merely accurately reproducing. (Now I think about it, is there an equivalent issue with the mirror test?)

So, no, mirror neurons are not enough to be… to have the qualia of being consciously aware, or whatever you want to call it.

I’m still not closer to having answers, but sometimes it’s good to write down the questions.

Futurology, Technology

Hyperloop’s secondary purposes

I can’t believe it took me this long (and until watching this video by Isaac Arthur) to realise that Hyperloop is a tech demo for a Launch loop.

I (along with many others) had realised the stated reason for the related-but-separate The Boring Company was silly. My first thought was that it was a way to get a lot of people underground for a lot of the time, which would reduce the fatalities from a nuclear war. Other people had the much better observation that experience with tunnelling is absolutely vital for any space colony. (It may be notable that BFR is the same diameter as the SpaceX/TBC test tunnel, or it may just be coincidence.)

A similar argument applies to Hyperloop as to TBC: Hyperloop is a better normal-circumstances transport system than cars and roads when colonising a new planet.

Futurology, Software, Technology

Hyperinflation in the attention economy: what succeeds adverts?

Adverts.

Lots of people block them because they’re really really annoying. (Also a major security risk that slows down your browsing experience, but I doubt that’s the main reason.)

Because adverts are executable (who thought that was a good idea?), they also get used for cryptocurrency mining. Really inefficient cryptocurrency mining, but still.

Because they cost money, there is a financial incentive to systematically defraud advertisers by showing lots of real, paid-for, adverts to lots of fake users. (See also: adverts are executable. Can one advert download ten more? Even sneakily in the background will do, the user doesn’t need to see them.)

Because of the faked consumption (amongst other reasons), advertisers don’t get good value for money, lowering demand; because of lowered demand, websites get less money than they would under an efficient system; because of something which seems analogous to hyperinflation (but affecting the supply of spaces in which to advertise rather than the supply of money), websites are crowded with adverts; because of the excess of adverts, lots of people block them.

What if there was a better way?

Cut out the middleman, explicitly fund your website with your own cryptocurrency mining? Users see no adverts, don’t have their attention syphoned away.

Challenge: the problem I’m calling hyperinflation of attention (probably inaccurately, but it’s a good metaphor) would still apply to the supply of cryptocurrency-mining resources. This is already a separate problem with cryptocurrency mining — way too many people are spending way too many resources on something which only counts and stores value without fundamentally adding value to the system.

Potential solution: a better cryptocurrency, one which actually does something useful. Useful work such as SETI@home or folding@home — if it must be a currency, then perhaps one where each unit of useful work gets exchanged for a token which can be traded or redeemed with the organisation which produced it, in much the same way that banknotes could, for a long time, be taken to a central bank and exchanged for gold. And the token could be redeemed for whatever is economically useful — a user may perform 1e9 operations now in exchange for a token which would give them 2e9 floating point operations in five years (by which time floating point operations should be 10 times cheaper); or the user decodes two human genomes now in exchange for a token to decode one of their choice later; or whatever.
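
As a minimal sketch of the issuer’s side of that exchange (the function is my illustration; the numbers are the ones above):

    # Sketch of the redeemable work-token idea, in units of
    # "today's cost of one operation".

    def issuer_margin(ops_received_now, ops_promised_later, cheapening_factor):
        """Value the issuer keeps after honouring the token.

        ops_received_now: useful work the user performs today
        ops_promised_later: work the token redeems for in the future
        cheapening_factor: how much cheaper an operation is by redemption time
        """
        cost_to_redeem = ops_promised_later / cheapening_factor
        return ops_received_now - cost_to_redeem

    # 1e9 operations now, redeemable for 2e9 operations in five years,
    # when operations are assumed to be 10 times cheaper:
    print(issuer_margin(1e9, 2e9, 10))  # 8e8: the issuer keeps 80% of the value

Even after promising the user double their work back, the issuer keeps 80% of the value received, which is what would fund the website.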

A separate, but solvable, issue is that the only things I can think of which are processing-power-limited right now are research (climate forecasts, particle physics, brain simulation, simulated drug testing, AI), things used directly by the consumer (video game graphics), or colossal wastes of resources (bitcoin, spam). I’ll freely admit this list may just be down to ignorance on my part, but so far as I can see, the only one of those which pairs website visitors with actual income would be the video games… and even then it would be utter insanity for the paying customers to have their image rendering offloaded onto the non-payers. The clear solution is the same sort of mechanism that currently “solves” advertising: an automated auction between those who want to buy your CPU time and websites that want to sell access to it.
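
A minimal sketch of such an auction (the broker names and prices are invented for illustration; the second-price rule is the one real-time ad auctions have traditionally used, because it encourages honest bidding):

    # Hypothetical: compute buyers bid, per visitor-CPU-second, for a
    # website's visitors; the highest bidder wins but pays the
    # second-highest bid (a Vickrey auction).

    def second_price_auction(bids):
        """bids: {buyer_name: price_per_cpu_second}. Returns (winner, price)."""
        if len(bids) < 2:
            return None  # need a second bid to set the clearing price
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]
        return winner, price

    bids = {"protein-folding broker": 0.008,
            "climate-model broker": 0.005,
            "render farm": 0.003}
    print(second_price_auction(bids))  # ('protein-folding broker', 0.005)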

Downside: this will kill your battery if you don’t disable JavaScript.

Futurology, Technology

Musk City, Antarctica

One of the criticisms of a Mars colony is that Antarctica is more hospitable in literally every regard (you might argue that the 6-month day and the 6-month night make it less hospitable, to which I would reply that light bulbs exist and you’d need light bulbs all year round on Mars to avoid SAD-like symptoms).

I’ve just realised that the 2017 BFR design will be able to get you anywhere in Antarctica, from any launch site on Earth, in no more than 45 minutes, at the cost of a long-distance economy passenger flight. The Mars plan also involves making fuel and oxidiser out of atmospheric CO₂ and water ice, so no infrastructure needs to be shipped conventionally before the first landing.

AI, Futurology

The end of human labour is inevitable, here’s why

OK. So, you might look at state-of-the-art A.I. and say “oh, this uses too much power compared to a human brain” or “this takes too many examples compared to a human brain”.

So far, correct.

But there are 7.6 billion humans: if an A.I. watches all of them all of the time (easy to imagine given around 2 billion of us already have two or three competing A.I. in our pockets all the time, forever listening for an activation keyword), then there is an enormous set of examples with which to train the machine mind.

“But,” you ask, “what about the power consumption?”

Humans cost a bare minimum of $1.25 per day, even if they’re literally slaves and you only pay for food and (minimal) shelter. Solar power can be as cheap as 2.99¢/kWh.

Combined, that means that any A.I. which uses less than 1.742 kilowatts per human-equivalent-part is beating the cheapest possible human. By way of comparison, Google’s first-generation Tensor Processing Unit uses 40 W when busy; in the domain of Go, it’s about 174,969 times as cost-efficient as a minimum-cost human, because four of them working together as one can teach themselves to play Go better than the best human in… three days.
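
As a quick check of that arithmetic (a sketch using only the figures already quoted: $1.25/day, 2.99¢/kWh, and 40 W per TPU):

    # Break-even power budget for an A.I. competing with the cheapest human.
    human_cost_per_day = 1.25          # USD/day, the bare-minimum human
    solar_price_per_kwh = 0.0299       # USD/kWh
    kwh_per_day = human_cost_per_day / solar_price_per_kwh  # ~41.8 kWh/day
    breakeven_kw = kwh_per_day / 24    # ~1.742 kW, as stated above

    tpu_rig_watts = 4 * 40             # four first-generation TPUs at 40 W each
    print(breakeven_kw * 1000 / tpu_rig_watts)  # ~10.9x headroom on power alone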

And don’t forget that it’s reasonable for A.I. to have as many human-equivalent-parts as there are humans performing whichever skill is being fully automated.

Skill. Not sector, not factory, skill.

And when one skill is automated away, when the people who performed that skill go off to retrain on something else, no matter where they are or what they do, there will be an A.I. watching them and learning with them.

Is there a way out?

Sure. All you have to do is make sure you learn a skill nobody else is learning.

Unfortunately, there is a reason why “thinking outside the box” is such a business cliché: humans suck at that style of thinking, even when we know what it is and why it’s important. We’re too social, we copy each other and create by remixing more than by genuinely innovating, even when we think we have something new.

Computers are, ironically, better than humans at thinking outside the box: two of the issues in Concrete Problems in AI Safety are there because machines easily stray outside the boxes we are thinking within when we give them orders. (I suspect that one of the things which forces A.I. to need far more examples to learn things than we humans do is that they have zero preconceived notions, and therefore must be equally open-minded to all possibilities).

Worse, no matter how creative you are, if other humans see you performing a skill that machines have yet to master, those humans will copy you… and then the machines, even today’s machines, will rapidly learn from all the enthusiastic humans so gleeful about their new trick for staying one step ahead: the new skill they can point to and say “look, humans are special, computers can’t do this”, right up until the computers do it.

AI, Futurology, Science, Software, Technology

The Singularity is Dead, Long Live The Singularity

The Singularity is one form of the idea that machines are constantly being improved and will one day make us all unemployable. Phrased that way, it should be no surprise that discussions of the Singularity are often compared with those of the Luddites from 1816.

“It’s different now!” many people say. Are they right to think that those differences are important?

There have been so many articles and blog posts (and books) about the Singularity that I need to be careful to make clear which type of “Singularity” I’m writing about.

I don’t believe in real infinities. Any of them. Something will get in the way before you reach them. I therefore do not believe in any single runaway process that becomes a deity-like A.I. in a finite time.

That doesn’t stop me worrying about “paperclip optimisers” that are just smart enough to cause catastrophic damage (this already definitely happens, even with very dumb A.I.); nor does it stop me worrying about machines with an IQ of only 200 that can outsmart all but the single smartest human, rendering mental labour as redundant as physical labour already is; nor about machines with an IQ of just 85, which would make 15.9% of the world permanently unemployable (some do claim that machines can never be artistic, but machines are already doing “creative” jobs in music, literature and painting, and even if they were not, there is a limit to how many such jobs there can be).
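
For what it’s worth, the 15.9% figure is just the fraction of a normal distribution more than one standard deviation below the mean (IQ is normed to a mean of 100 and a standard deviation of 15, so 85 is exactly one SD down):

    # Fraction of a normal population below IQ 85 (mean 100, SD 15).
    from math import erf, sqrt

    def normal_cdf(z):
        return 0.5 * (1 + erf(z / sqrt(2)))

    print(normal_cdf((85 - 100) / 15))  # ~0.1587, i.e. the 15.9% above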

So, for “the Singularity”, what I mean is this:

“A date after which the average human cannot keep up with the rate of progress.”

By this definition, I think it’s already happened. How many people have kept track of these things?:

Most of this was unbelievable science fiction when I was born. Between my birth and 2006, only a few of these things became reality. More than half are things that happened or were invented in the 2010s. When Google’s AlphaGo went up against Lee Sedol, he thought he’d easily beat it, 5-0 or 4-1; instead he lost 1-4.

If you’re too young to have a Facebook account, there’s a good chance you’ll never need to learn any foreign language. Or make any physical object. Or learn to drive… there’s a fairly good chance you won’t be allowed to drive. And once you become an adult, if you come up with an invention or a plot for a novel or a motif for a song, there will be at least four billion other humans racing against you to publish it.

Sure, we don’t have a von Neumann probe nor even a clanking replicator at this stage (we don’t even know how to make one yet, unless you count “copy an existing life form”), but given we’ve got 3D printers working at 10 nanometers already, it’s not all that unreasonable to assume we will in the near future. The fact that life exists proves such machines are possible, after all.

None of this is to say humans cannot or will not adapt to change. We’ve been adapting to changes for a long time, we have a lot of experience of adapting to changes, we will adapt more. But there is a question:

“How fast can you adapt?”

Time, as they say, is money. Does it take you a week to learn a new job? A machine that already knows how to do it has a £500 advantage over you. A month? The machine has a £2,200 advantage. You need to get another degree? It has an £80,000 advantage even if the degree was free. That’s just for the average UK salary with none of the extra things employers have to care about.
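
Those figures fall out of the average UK salary; taking it as roughly £27,000 a year (my assumption here, close enough for this back-of-envelope):

    # Rough reconstruction of the advantage figures above.
    salary = 27_000        # assumed mean UK annual salary, GBP
    print(salary / 52)     # ~£519/week   -> the "£500" advantage
    print(salary / 12)     # ~£2,250/month -> the "£2,200" advantage
    print(salary * 3)      # ~£81,000     -> the "£80,000" for a 3-year degree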

We don’t face problems just from the machines outsmarting us, we face problems if all the people working on automation can between them outpace any significant fraction of the workforce. And there’s a strong business incentive to pay for such automation, because humans are one of the most expensive things businesses have to pay for.

I don’t have enough of a feeling for economics to guess what might happen if too many people are unemployed and therefore unable to afford the goods produced by machine labour. All I can say is that when I was in secondary school, among all of us young enough to be without income, pirating software and music was common. (I was the only one with a Mac, so I had to make do with magazine cover CDs for my software, but I think the observation is still worth something.)
