AI, Software

Speed of machine intelligence

Every so often, someone tries to boast of human intelligence with the story of Shakuntala Devi — the stories vary, but they generally claim she beat the fastest supercomputer in the world in a feat of arithmetic, finding that the 23rd root of

916748676920039158098660927585380162483106680144308622407126516427934657040867096593279205767480806790022783016354924852380335745316935111903596577547340075681688305620821016129132845564805780158806771

was 546,372,891, taking just 50 seconds to do so compared to “over a minute” for her computer competitor.

Ignoring small details such as the “supercomputer” being named as a UNIVAC 1101, which was wildly obsolete by the time of this event, this story dates to 1977 — and Moore’s Law over the 41 years since then has made computers mind-defyingly powerful (if it were as simple as doubling in power every 18 months, they would now be 2^(41/1.5) ≈ 169,103,740 times faster; but Wikipedia shows even greater improvements on even shorter timescales: going from the Cray X-MP in 1984 to standard consumer CPUs and GPUs in 2017 is a factor of 1,472,333,333 improvement at fixed cost in only 33 years).
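If you want to check that arithmetic yourself, it’s a line each — the 18-month doubling time and the 1984-to-2017 endpoints are the assumptions stated above:

import math

# doubling in power every 18 months (1.5 years) for 41 years:
print 2 ** (41 / 1.5)                  # ≈ 169,103,740

# the Cray X-MP to 2017 consumer hardware factor implies an
# effective doubling time of:
print 33 / math.log(1472333333, 2)     # ≈ 1.08 years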

So, how fast are computers now? Well, here’s a small script to find out:


from datetime import datetime

before =

q = 916748676920039158098660927585380162483106680144308622407126516427934657040867096593279205767480806790022783016354924852380335745316935111903596577547340075681688305620821016129132845564805780158806771

# repeat the calculation 3,450,000 times, purely to slow it down
# enough for the timing to be meaningful
for x in range(0, int(3.45e6)):
	a = q**(1./23)

after =

print after-before

It calculates the 23rd root of that number, and times itself as it does so. The calculation is repeated 3,450,000 times purely to slow the script down enough to make the time reading accurate.

Let’s see how long it takes…

MacBook-Air:python kitsune$ python
0:00:01.140000
MacBook-Air:python kitsune$

1.14 seconds — to do the calculation 3,450,000 times.

My MacBook Air is an old model from mid-2013, and it is already beating, by a factor of more than 150 million, someone who was (despite the oddities of the famous story) in the Guinness Book of Records for her mathematical abilities.
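To show where that factor comes from, using the timing above and the 50 seconds from the story:

# seconds per 23rd-root calculation on this laptop
per_run = 1.14 / 3.45e6    # ≈ 3.3e-7 seconds
# Devi's 50 seconds divided by the laptop's time per calculation
print 50 / per_run         # ≈ 151,000,000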

It gets worse, though. The next thing people often say is, paraphrased, “oh, but it’s cheating to program the numbers into the computer when the human had to read it”. Obviously the way to respond to that is to have the computer read for itself:

from sklearn import svm
from sklearn import datasets
import numpy as np
import matplotlib.pyplot as plt
import as cm

# Find out how fast it learns
from datetime import datetime
# When did we start learning?
before =

# Support vector classifier, trained on all but the last tenth
# of the standard scikit-learn hand-written digit set
clf = svm.SVC(gamma=0.001, C=100.)
digits = datasets.load_digits()
size = len( / 10   # 179 digits held back for reading tests[:-size],[:-size])

# When did we stop learning?
after =
# Show user how long it took to learn
print "Time spent learning:", after-before

# When did we start reading?
before =
maxRepeats = 100
# read each of the held-back digits, a hundred times over
for repeats in range(0, maxRepeats):
	for x in range(0, size):
		data =[-x]
		prediction = clf.predict([data])

# When did we stop reading?
after =
print "Number of digits being read:", size*maxRepeats
print "Time spent reading:", after-before

# Show mistakes:
for x in range(0, size):
	data =[-x]
	target =[-x]
	prediction = clf.predict([data])
	if target != prediction:
		print "Target: "+str(target)+" prediction: "+str(prediction)
		# each digit is stored as a flat 64-vector; reshape to display
		grid = data.reshape(8, 8)
		plt.imshow(grid, cmap=cm.Greys_r)

This learns to read using a standard dataset of hand-written digits, then reads all the digits in that set a hundred times over, then shows you what mistakes it’s made.

MacBook-Air:AI stuff kitsune$ python 
Time spent learning: 0:00:00.225301
Number of digits being read: 17900
Time spent reading: 0:00:02.700562
Target: 3 prediction: [5]
Target: 3 prediction: [5]
Target: 3 prediction: [8]
Target: 3 prediction: [8]
Target: 9 prediction: [5]
Target: 9 prediction: [8]
MacBook-Air:AI stuff kitsune$ 

0.225 seconds to learn to read, from scratch; then it reads about 6,628 digits per second (17,900 digits in 2.7 seconds). This is comparable with the speed of a human blink (0.1–0.4 seconds) and with many of the claims* I’ve seen about human visual processing time, from retina to recognising text.

The A.I. is not reading perfectly, but several of the mistakes it does make are forgivable even for a human. These are hand-written digits, and some of them look, even to me, more like the number the A.I. saw than the number that was supposed to be there — indeed, the human error rate for similar examples is 2.5%, while this particular A.I. has an error rate of 3.35% (six mistakes across the 179 distinct test digits).
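For the curious, here’s the arithmetic behind those two figures, using the numbers from the transcript above:

# reading rate: digits read divided by time spent reading
print 17900 / 2.700562    # ≈ 6,628 digits per second

# error rate: six mistakes across the 179 distinct test digits
print 100. * 6 / 179      # ≈ 3.35%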

* I refuse to assert those claims are entirely correct, because I don’t have any formal qualification in that area, but I do have experience of people saying rubbish about my area of expertise — hence this blog post. I don’t intend to make the same mistake.

Fiction, Humour

Mote of smartdust

Matthew beheld not the mote of smartdust in his own eye, for it was hiding itself from his view with advanced magickal trickery.

His brother Luke beheld the mote, yet within his brother’s eye was a beam of laser light that blinded him just as surely.

Luke went to remove the mote of dust in Matthew’s eye, but judged not correctly, and became confused.

Mark looked upon the brothers, and decided it was good.

Science, SciFi, Technology

Kessler-resistant real-life force-fields?

Idle thought at this stage.

The Kessler syndrome (also called the Kessler effect, collisional cascading or ablation cascade), proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low earth orbit (LEO) is high enough that collisions between objects could cause a cascade where each collision generates space debris that increases the likelihood of further collisions.

Kessler syndrome, Wikipedia

If all objects in Earth orbit were required to have an electrical charge (all negative, let’s say), how strong would that charge have to be to prevent collisions?

Also, how long would they remain charged, given the ionosphere, solar wind, Van Allen belts, etc?

Also, how do you apply charge to space junk already present? Rely on it picking up charge when it collides with new objects? Or is it possible to use an electron gun to charge them from a distance? And if so, what’s the trade-off between beam voltage, distance, and maximum charge (presumably shape dependent)?

And if you can apply charge remotely, is this even the best way to deal with them, rather than collecting them all in a large net and de-orbiting them?
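As a first stab at the first question, here’s a back-of-envelope sketch. Every number in it is an assumption of mine, not a known figure: two 1 kg objects roughly 10 cm across, closing head-on at a typical LEO encounter speed of 10 km/s, with a collision counted as “prevented” if the electrostatic repulsion can cancel the relative kinetic energy before the objects touch:

# how much like charge keeps two pieces of space junk apart?
k = 8.99e9     # Coulomb constant, N·m²/C²
m = 1.0        # mass of each object, kg (assumed)
v = 10e3       # relative speed, m/s (typical for LEO crossings)
d = 0.1        # separation at which they'd touch, m (assumed size)

mu = m / 2.0                  # reduced mass of the pair
ke = 0.5 * mu * v ** 2        # relative kinetic energy, joules
q = (ke * d / k) ** 0.5       # equal charge needed on each object
print q                       # ≈ 0.017 coulombs
print k * q / d               # surface potential ≈ 1.5e9 volts

Gigavolt-scale potentials for a head-on hit suggest that, at best, charging could only deflect the gentler near-misses.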

Science, Technology

You won’t believe how fast transistors are

A transistor in a CPU is smaller and faster than a synapse in one of your brain’s neurons by about the same ratio that a wolf is smaller and faster than a hill.

CPU: 11nm transistors, 30GHz transition rate (transistors flip significantly faster than overall clock speed)

Neurons: 1µm synapses, 200Hz pulse rate

Wolves: 1.6m long, average range 25 km/day

Hills: 145m tall (widely variable, of course), continental drift 2 cm/year

1µm/11nm ≅ 145m/1.6m (both ≈ 90)
200Hz/30GHz ≅ (continental drift 2 cm/year) / (average range 25 km/day)
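The unit conversions, if you want to check them (figures exactly as listed above):

# size ratios: synapse/transistor and hill/wolf
print 1e-6 / 11e-9     # ≈ 91
print 145 / 1.6        # ≈ 91

# speed ratios: neuron/transistor and drift/wolf, per year
print 200 / 30e9              # ≈ 6.7e-9
print 0.02 / (25e3 * 365)     # ≈ 2.2e-9, same order of magnitude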

Futurology, Software, Technology

Hyperinflation in the attention economy: what succeeds adverts?

Lots of people block adverts because they’re really, really annoying. (Also a major security risk that slows down your browsing experience, but I doubt that’s the main reason.)

Because adverts are executable (who thought that was a good idea?), they also get used for cryptocurrency mining. Really inefficient cryptocurrency mining, but still.

Because they cost money, there is a financial incentive to systematically defraud advertisers by showing lots of real, paid-for adverts to lots of fake users. (See also: adverts are executable. Can one advert download ten more? Even sneakily in the background will do; the user doesn’t need to see them.)

Because of the faked consumption (amongst other reasons), advertisers don’t get good value for money, lowering demand; because of lowered demand, websites get less money than they would under an efficient system; because of something which seems analogous to hyperinflation (but affecting the supply of spaces in which to advertise rather than the supply of money), websites are crowded with adverts; because of the excess of adverts, lots of people block them.

What if there was a better way?

Cut out the middle man, explicitly fund your website with your own cryptocurrency mining? Users see no adverts, don’t have their attention syphoned away.

Challenge: the problem I’m calling hyperinflation of attention (probably inaccurately, but it’s a good metaphor) would still apply with cryptocurrency mining resource supply. This is already a separate problem with cryptocurrency mining — way too many people are spending way too many resources on something which is only counting and storing value but without fundamentally adding value to the system.

Potential solution: a better cryptocurrency, one which actually does something useful. Useful work such as SETI@home or folding@home — if it must be a currency, then perhaps one where each unit of useful work gets exchanged for a token which can be traded or redeemed with the organisation which produced it, in much the same way that banknotes could, for a long time, be taken to a central bank and exchanged for gold. And the token could be redeemed for whatever is economically useful — a user may perform 1e9 operations now in exchange for a token which would give them 2e9 floating point operations in five years (by which time floating point operations should be 10 times cheaper); or the user decodes two human genomes now in exchange for a token to decode one of their choice later; or whatever.

A separate, but solvable, issue is that the only things I can think of which are processing-power-limited right now are research (climate forecasts, particle physics, brain simulation, simulated drug testing, AI), or used directly by the consumer (video game graphics), or are a colossal waste of resources (bitcoin, spam) — I’ll freely admit this list may be just down to ignorance on my part — so far as I can see, the only one of those which pairs website visitors with actual income would be the video games… but even then it would be utter insanity for the paid customers to have their image rendering offloaded onto the non-payers. The clear solution to this is the same sort of mechanism that currently “solves” advertising: automated auction by those who want to buy your CPU time and websites that want to sell access to your CPU time.

Downside: this will kill your batteries if you don’t disable JavaScript.

Futurology, Technology

Musk City, Antarctica

One of the criticisms of a Mars colony is that Antarctica is more hospitable in literally every regard (you might argue that the 6-month day and the 6-month night makes it less hospitable, to which I would reply that light bulbs exist and you’d need light bulbs all year round on Mars to avoid SAD-like symptoms).

I’ve just realised that the BFR (as presented in 2017) will be able to get you anywhere in Antarctica, from any launch site on Earth, in no more than 45 minutes, at the cost of a long-distance economy passenger flight, and that the Mars plan involves making fuel and oxidiser out of atmospheric CO₂ and water ice, so no infrastructure needs to be shipped conventionally before the first landing.

AI, Futurology

The end of human labour is inevitable, here’s why

OK. So, you might look at state-of-the-art A.I. and say “oh, this uses too much power compared to a human brain” or “this takes too many examples compared to a human brain”.

So far, correct.

But there are 7.6 billion humans: if an A.I. watches all of them all of the time (easy to imagine given around 2 billion of us already have two or three competing A.I. in our pockets all the time, forever listening for an activation keyword), then there is an enormous set of examples with which to train the machine mind.

“But,” you ask, “what about the power consumption?”

Humans cost a bare minimum of $1.25 per day, even if they’re literally slaves and you only pay for food and (minimal) shelter. Solar power can be as cheap as 2.99¢/kWh.

Combined, that means that any A.I. which uses less than 1.742 kilowatts per human-equivalent-part is beating the cheapest possible human. By way of comparison, Google’s first-generation tensor processing unit uses 40 W when busy; in the domain of Go, it’s about 174,969 times as cost-efficient as a minimum-cost human, because four of them working together as one can teach itself to play Go better than the best human in… three days.
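That 1.742 kW figure is just the two prices above, converted into a power budget:

# break-even power for an A.I. vs. the cheapest possible human
human_cost = 1.25     # dollars per day
solar_cost = 0.0299   # dollars per kWh
kwh_per_day = human_cost / solar_cost    # ≈ 41.8 kWh per day
print kwh_per_day / 24.                  # ≈ 1.742 kilowatts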

And don’t forget that it’s reasonable for A.I. to have as many human-equivalent-parts as there are humans performing whichever skill is being fully automated.

Skill. Not sector, not factory, skill.

And when one skill is automated away, when the people who performed that skill go off to retrain on something else, no matter where they are or what they do, there will be an A.I. watching them and learning with them.

Is there a way out?

Sure. All you have to do is make sure you learn a skill nobody else is learning.

Unfortunately, there is a reason why “thinking outside the box” is such a business cliché: humans suck at that style of thinking, even when we know what it is and why it’s important. We’re too social, we copy each other and create by remixing more than by genuinely innovating, even when we think we have something new.

Computers are, ironically, better than humans at thinking outside the box: two of the issues in Concrete Problems in AI Safety are there because machines easily stray outside the boxes we are thinking within when we give them orders. (I suspect that one of the things which forces A.I. to need far more examples to learn things than we humans do is that they have zero preconceived notions, and therefore must be equally open-minded to all possibilities).

Worse, no matter how creative you are, if other humans see you performing a skill that machines have yet to master, those humans will copy you… and then the machines, even today’s machines, will rapidly learn from all the enthusiastic humans who are so gleeful about their new trick for staying one step ahead of the machines, the new skill they can point to and say “look, humans are special, computers can’t do this”, right up until the computers do it.