Futurology

Old predictions, and how they’ve been standing up

The following was originally posted to a (long-since deleted) Livejournal account on 2012-06-05 02:55:27 BST. I have not edited this at all. Some of these predictions from 6 years ago have stood up pretty well; others have been proven impossible.


Predicting the future is, in retrospect, hilarious. Nonetheless, I want to make a guess as to how the world will look in ten years, even if only to have a concrete record of how inaccurate I am. Unless otherwise specified, these predictions are for 2022:

Starting with the least surprising: By 2024, solar power will be the cheapest form of electricity for everyone closer to the equator than the north of France. Peak solar power output will equal current total power output from all sources, while annual average solar output will be 25% of that. Further progress relies on developments of large-scale energy storage systems, which may or may not happen depending on electric cars.

By 2022, CPU lithography will either reach 4nm, or everyone will decide it’s too expensive to keep on shrinking and stop sooner. There are signs that the manufacturers may not be able to mass produce 4nm chips due to their cost, even though that feature size is technically possible, so I’m going to say minimum feature size will be larger than you might expect from Moore’s law. One might assume that they can still get cheaper, even if not more powerful per unit area, but there isn’t much incentive to reduce production cost if you don’t also reduce power consumption; currently power consumption is improving slightly faster than Moore’s law, but not by much.

LEDs will be the most efficient light source; they will also be a tenth of their current price, making compact fluorescents totally obsolete. People will claim they take ages to get to full brightness, just because they are still energy-saving.

Bulk storage will probably be spinning magnetic platters, and flash drives will be as obsolete in 2022 as the floppy disk is in 2012. (Memristor based storage is an underdog on this front, at least on the scale of 10 years.)

Western economies won’t go anywhere fast in the next 4 years, but might go back to normal after that; China’s economy will more than double in size by 2022.

In the next couple of years, people will have realised that 3D printers take several hours to produce something the size of a cup and started to dismiss them as a fad. Meanwhile, people who already know the limitations of 3D printers have already, in 2011, used them for organ culture — in 10 to 20 years, “cost an arm and a leg” will fall out of common use in the same way and for the same reason that “you can no more XYZ than you can walk on the Moon” fell out of use in 1969 — if you lose either an arm or a leg, you will be able to print out a replacement. I doubt there will be full self-replication by 2022, but I wouldn’t bet against it.

No strong general A.I., but the problem is software rather than hardware, so if I’m wrong you won’t notice until it’s too late. (A CPU’s transistors change state a hundred million times faster than your neurons, and the minimum feature size of the best 2012 CPUs is 22nm, compared to the 200nm thickness of the smallest dendrite that Google told me about).

Robot cars will be available in many countries by 2020, rapidly displacing human drivers because they are much safer and therefore cheaper to insure; taxi drivers disappear first, truckers fight harder but still fall. Human drivers may be forbidden from public roads by 2030.

Robot labour will be an even more significant part of the workforce. Foxconn, or their equivalent, will use more robots than there are people in Greater London.

SpaceX and similar companies lower launch costs by at least a factor of 10; these launch costs, combined with standardised micro-satellites, allow at least one university, 6th form, or school to launch a probe to the moon.

Graphene proves useful, but does not become a wonder material. Cookware coated in synthetic diamond is commonplace, and can be bought in Tesco. Carbon nanotube rope is available in significant lengths from specialist retailers, but still very expensive.

In-vitro meat will have been eaten, but probably still be considered experimental by 2020. There will be large protests and well-signed petitions against it, but these will be ignored.

Full-genome sequencing will cost about a hundred quid and take less than 10 hours.

3D television and films will fail and be revived at least once more.

E-book readers will be physically flexible, with similar resolution to print.

Hydrogen will not be developed significantly; biofuels will look promising, but will probably lose out to electric cars because they go so well with solar power (alternative: genetic engineering makes a crop that can be burned in existing power stations, making photovoltaic and solar-thermal plants redundant while also providing fuel for petrol and diesel car engines); fusion will continue to not be funded properly; too many people will remain too scared of fission for it to improve significantly; lots of people will still be arguing about wind turbines, and others will still be selling snake-oil “people-powered” devices.

Machine vision will be connected to every CCTV system that gets sold in 2020, and it will do a better job than any combination of human operators could possibly manage. The now-redundant human operators will argue loudly that “a computer could never look at someone and know how they are feeling, it could never know if someone is drunk and about to start a fight”; someone will put this to the test, and the machine will win.

High-temperature superconductivity currently seems to be developing at random, so I can’t say if we will have any progress or not. I’m going to err on the side of caution, and say no significant improvements by 2022.

Optical-wavelength cloaking fabric will be available by the mid 2020s, but very expensive and probably legally restricted to military and law enforcement.

Most of Kepler’s exoplanet candidates will be confirmed in the next few years; by 2022, we will have found and confirmed an Earth-like planet in the habitable zone of its star (right now, the most Earth-like candidate exoplanet, Gliese 581 g, is unconfirmed, while the most Earth-like confirmed exoplanet, Gliese 581 d, is only slightly more habitable than Mars). We will find out if there is life on that world, but the answer will make no difference to most people’s lives.

OpenStreetMap will have replaced all other maps in almost every situation; Facebook will lose its crown as The social network; the comments section of most websites will still make people lose faith in humanity; English Wikipedia will be “complete” for some valid definition of the word.

Obama will win 2012, the Republicans will win 2016; The Conservatives will lose control of the UK regardless of when the next UK general election is held, but the Lib Dems might recover if Clegg departs.

Errors and omissions expected. It’s 3am!

Psychology

Unlearnable

How many things are there, which one cannot learn? No matter how much effort is spent trying?

I’m aware that things like conscious control of intestinal peristalsis would fit this question (probably… I mean, who would’ve tried?) but I’m not interested in purely autonomous stuff.

Assuming the stereotypes are correct, I mean stuff like adults not being able to fully cross the Chinese-English language barrier in either direction if they didn’t learn both languages as children (if you read out The Lion-Eating Poet in the Stone Den I can tell that the Shis are different to each other, but I can’t tell if the difference I hear actually conveys a difference-of-meaning or if it is just natural variation of the sort I produce if I say “Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo”, and I’m told this difficulty persists even with practice; in reverse, the ‘R’/’L’ error is a common stereotype of Chinese people speaking English). Is there something like that for visual recognition? Some people cannot recognise faces, is there something like that but for all humans? Something where no human can recognise which of two things they are looking at, even if we know that a difference exists?

Languages in general seem to be extremely difficult for most adults: personally, I’ve never been able to get my mind around all the tenses of irregular verbs in French… but is that genuinely unlearnable or something I could overcome with perseverance? I found German quite straightforward, so there may be something else going on.

Are there any other possibilities? Some people struggle with maths: is it genuinely unlearnable by the majority, or just difficult and lacking motive? Probability in particular comes to mind, because people can have the Monty Hall problem explained and not get it.

One concept I’ve only just encountered, but which suddenly makes sense of a lot of behaviour I’ve seen in politics, is called Morton’s demon by analogy with Maxwell’s demon. A filter at the level of perception which allows people to ignore and reject without consideration facts which ought to change their minds. It feels — and I recognise with amusement the oddity of using system 1 thinking at this point — like a more powerful thing than Cherry picking, Cognitive dissonance, Confirmation bias, etc., and it certainly represents — with regard to system 2 thinking — the sort of “unlearnable” I have in mind.

AI, Philosophy

Unfortunate personhood tests for A.I.

What if the only way to tell if a particular A.I. design is or is not a person, is to subject it to all the types of experience — both good and harrowing — that we know impact the behaviour of the only example of personhood we all agree on, and seeing if it changes in the same way we change?

Is it moral to create a digital hell for a thousand, if that’s the only way to prevent carbon chauvinism/anti-silicon discrimination for a billion?

Futurology, Science, AI, Philosophy, Psychology

How would you know whether an A.I. was a person or not?

I did an A-level in Philosophy. (For non-UK people, A-levels are a 2-year course that happens after high school and before university).

I did it for fun rather than good grades — I had enough good grades to get into university, and when the other A-levels required my focus, I was fine putting zero further effort into the Philosophy course. (Something which was very clear when my final results came in).

What I didn’t expect at the time was that the rapid development of artificial intelligence in my lifetime would make it absolutely vital that humanity develops a concrete and testable understanding of what counts as a mind, as consciousness, as self-awareness, and as capability to suffer. Yes, we already have that as a problem in the form of animal suffering and whether meat can ever be ethical, but the problem which already exists, exists only for our consciences — the animals can’t take over the world and treat us the way we treat them, but an artificial mind would be almost totally pointless if it was as limited as an animal, and the general aim is quite a lot higher than that.

Some fear that we will replace ourselves with machines which may be very effective at what they do, but don’t have anything “that it’s like to be”. One of my fears is that we’ll make machines that do “have something that it’s like to be”, but who suffer greatly because humanity fails to recognise their personhood. (A paperclip optimiser doesn’t need to hate us to kill us, but I’m more interested in the sort of mind that can feel what we can feel).

I don’t have a good description of what I mean by any of the normal words. Personhood, consciousness, self awareness, suffering… they all seem to skirt around the core idea, but to the extent that they’re correct, they’re not clearly testable; and to the extent that they’re testable, they’re not clearly correct. A little like the maths-vs.-physics dichotomy.

Consciousness? Versus what, subconscious decision making? Isn’t this distinction merely system 1 vs. system 2 thinking? Even then, the word doesn’t tell us what it means to have it objectively, only subjectively. In some ways, some forms of A.I. look like system 1 — fast, but error-prone, based on heuristics; while other forms of A.I. look like system 2 — slow, careful, deliberatively weighing all the options.

Self-awareness? What do we even mean by that? It’s absolutely trivial to make an A.I. aware of its own internal states, even necessary for anything more than a perceptron. Do we mean a mirror test? (Or a non-visual equivalent for non-visual entities, including both blind people and also smell-focused animals such as dogs.) That at least can be tested.

Capability to suffer? What does that even mean in an objective sense? Is suffering equal to negative reinforcement? If you have only positive reinforcement, is the absence of reward itself a form of suffering?

Introspection? As I understand it, the human psychology of this is that we don’t really introspect, we use system 2 thinking to confabulate justifications for what system 1 thinking made us feel.

Qualia? Sure, but what is one of these as an objective, measurable, detectable state within a neural network, be it artificial or natural?

Empathy or mirror neurons? I can’t decide how I feel about this one. At first glance, if one mind can feel the same as another mind, that seems like it should have the general ill-defined concept I’m after… but then I realised, I don’t see why that would follow and had the temporarily disturbing mental concept of an A.I. which can perfectly mimic the behaviour corresponding to the emotional state of someone they’re observing, without actually feeling anything itself.

And then the disturbance went away as I realised this is obviously trivially possible, because even a video recording fits that definition… or, hey, a mirror. A video recording somehow feels like it’s fine, it isn’t “smart” enough to be imitating, merely accurately reproducing. (Now I think about it, is there an equivalent issue with the mirror test?)

So, no, mirror neurons are not enough to be… to have the qualia of being consciously aware, or whatever you want to call it.

I’m still not closer to having answers, but sometimes it’s good to write down the questions.

Futurology, Technology

Hyperloop’s secondary purposes

I can’t believe it took me this long (and until watching this video by Isaac Arthur) to realise that Hyperloop is a tech demo for a Launch loop.

I (along with many others) had realised the stated reason for the related–but–separate The Boring Company was silly. My first thought was that it was a way to get a lot of people underground for a lot of the time, which would reduce the fatalities from a nuclear war. Other people had the much better observation that experience with tunnelling is absolutely vital for any space colony. (It may be notable that BFR is the same diameter as the SpaceX/TBC test tunnel, or it may just be coincidence).

A similar argument applies to Hyperloop as to TBC: Hyperloop is a better normal-circumstances transport system than cars and roads when colonising a new planet.

AI, Software

Speed of machine intelligence

Every so often, someone tries to boast of human intelligence with the story of Shakuntala Devi — the stories vary, but they generally claim she beat the fastest supercomputer in the world in a feat of arithmetic, finding that the 23rd root of

916,748,676,920,039,158,098,660,927,585,380,162,483,106,680,144,308,622,407,126,516,427,934,657,040,867,096,593,279,205,767,480,806,790,022,783,016,354,924,852,380,335,745,316,935,111,903,596,577,547,340,075,681,688,305,620,821,016,129,132,845,564,805,780,158,806,771

was 546,372,891, and taking just 50 seconds to do so compared to the “over a minute” for her computer competitor.
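Before worrying about speed, the claimed answer itself is easy to check with exact integer arithmetic. The following one-line check is my own addition, and it should print True, assuming the 201-digit number above has been transcribed correctly:

# Sanity check: is the 201-digit number really 546,372,891 to the 23rd power?
# (Should print True, assuming the number has been copied correctly.)
q = 916748676920039158098660927585380162483106680144308622407126516427934657040867096593279205767480806790022783016354924852380335745316935111903596577547340075681688305620821016129132845564805780158806771
print(546372891 ** 23 == q)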

Ignoring small details such as the “supercomputer” being named as a UNIVAC 1101, which was wildly obsolete by the time of this event, this story dates to 1977 — and Moore’s Law over the 41 years since then has made computers mind-defyingly powerful (if it was as simple as doubling in power every 18 months, they would be 2^(41/1.5) ≈ 169,103,740 times faster, but Wikipedia shows even greater improvements on even shorter timescales going from the Cray X-MP in 1984 to standard consumer CPUs and GPUs in 2017, a factor of 1,472,333,333 improvement at fixed cost in only 33 years).
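That doubling figure is easy to reproduce; here is a minimal check, using the 41 years and the 18-month doubling period quoted above:

# One doubling every 1.5 years, over the 41 years from 1977
print(2 ** (41 / 1.5))  # ~169,103,740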

So, how fast are computers now? Well, here’s a small script to find out:

#!python

from datetime import datetime

before = datetime.now()

# The 201-digit number from the Shakuntala Devi story
q = 916748676920039158098660927585380162483106680144308622407126516427934657040867096593279205767480806790022783016354924852380335745316935111903596577547340075681688305620821016129132845564805780158806771

# Repeat the 23rd-root calculation 3,450,000 times, purely so that the
# elapsed time is long enough to measure accurately
for x in range(0, int(3.45e6)):
    a = q ** (1. / 23)

after = datetime.now()

# Total time for all 3,450,000 calculations
print after - before

It calculates the 23rd root of that number. It times itself as it does the calculation three million four hundred and fifty thousand times, repeating the calculation just to slow it down enough to make the time reading accurate.

Let’s see how long it takes…

MacBook-Air:python kitsune$ python 201-digit-23rd-root.py 
0:00:01.140248
MacBook-Air:python kitsune$

1.14 seconds — to do the calculation 3,450,000 times.

My MacBook Air is an old model from mid-2013, and I’m already beating, by a factor of more than 150 million, someone who was (despite the oddities of the famous story) in the Guinness Book of Records for her mathematical abilities.
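That factor comes straight from the timing above; a quick check, dividing the measured 1.14 seconds by the 3,450,000 repetitions and comparing against the 50 seconds from the story:

# Time per individual 23rd-root calculation, from the measurement above
per_root = 1.140248 / 3.45e6   # ~3.3e-7 seconds each
# Speed-up relative to the 50 seconds quoted for the human record
print(50 / per_root)           # ~1.5e8, i.e. more than 150 million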

It gets worse, though. The next thing people often say is, paraphrased, “oh, but it’s cheating to program the numbers into the computer when the human had to read it”. Obviously the way to respond to that is to have the computer read for itself:

from sklearn import svm
from sklearn import datasets
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm

# Find out how fast it learns
from datetime import datetime
# When did we start learning?
before = datetime.now()

clf = svm.SVC(gamma=0.001, C=100.)
digits = datasets.load_digits()
size = len(digits.data)/10
clf.fit(digits.data[:-size], digits.target[:-size])

# When did we stop learning?
after = datetime.now()
# Show user how long it took to learn
print "Time spent learning:", after-before

# When did we start reading?
before = datetime.now()
maxRepeats = 100
for repeats in range(0, maxRepeats):
	for x in range(0, size):
		data = digits.data[-x]
		prediction = clf.predict(digits.data[-x])

# When did we stop reading?
after = datetime.now()
print "Number of digits being read:", size*maxRepeats
print "Time spent reading:", after-before

# Show mistakes:
for x in range(0, size):
	data = digits.data[-x]
	target = digits.target[-x]
	prediction = clf.predict(digits.data[-x])
	if (target!=prediction):
		print "Target: "+str(target)+" prediction: "+str(prediction)
		grid = data.reshape(8, 8)
		plt.imshow(grid, cmap = cm.Greys_r)
		plt.show()

This learns to read using a standard dataset of hand-written digits, then reads all the digits in that set a hundred times over, then shows you what mistakes it’s made.

MacBook-Air:AI stuff kitsune$ python digits.py 
Time spent learning: 0:00:00.225301
Number of digits being read: 17900
Time spent reading: 0:00:02.700562
Target: 3 prediction: [5]
Target: 3 prediction: [5]
Target: 3 prediction: [8]
Target: 3 prediction: [8]
Target: 9 prediction: [5]
Target: 9 prediction: [8]
MacBook-Air:AI stuff kitsune$ 

0.225 seconds to learn to read, from scratch; then it reads just over 6,600 digits per second. That learning time is comparable with both the speed of a human blink (0.1-0.4 seconds) and also with many of the claims* I’ve seen about human visual processing time, from retina to recognising text.

The A.I. is not reading perfectly, but looking at the mistakes it does make, several of them are forgivable even for a human. They are hand-written digits, and some of them look, even to me, more like the number the A.I. saw than the number that was supposed to be there — indeed, the human error rate for similar examples is 2.5%, while this particular A.I. has an error rate of 3.35%.
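Both of those rates fall straight out of the numbers printed above: 17,900 digits in roughly 2.7 seconds of reading, and 6 mistakes among the 179 distinct test digits.

# Reading rate: digits read divided by time spent reading
print(17900 / 2.700562)   # ~6,628 digits per second
# Error rate: 6 mistakes out of the 179 distinct test digits
print(6 / 179.0)          # ~0.0335, i.e. about 3.35%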

* I refuse to assert those claims are entirely correct, because I don’t have any formal qualification in that area, but I do have experience of people saying rubbish about my area of expertise — hence this blog post. I don’t intend to make the same mistake.
