Software

I’m updating my six-year-old Runestone code. Objective-C has changed, Cocos2d has effectively been replaced with SpriteKit, and my understanding of the language has improved massively. Net result? It’s embarrassing.

Once this thing is running as it should, I may rewrite it from scratch, just to see how bad a project has to be for a rewrite to be worth it.

AI, Software, Technology

Automated detection of propaganda and cultural bias

The ability of word2vec to detect relationships between words (for example that “man” is to “king” as “woman” is to “queen”) can already be used to detect biases. Indeed, the biases are so easy to find, so blatant, that they are embarrassing.
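To make that concrete, here is a minimal sketch of the kind of probe I mean, using the gensim library and one of its pretrained word2vec models (the model name and word choices are mine; any similar embedding will do):

import gensim.downloader

# Load pretrained word2vec vectors (a large download on first use).
model = gensim.downloader.load("word2vec-google-news-300")

# The classic analogy: man is to king as woman is to...?
print(model.most_similar(positive=["woman", "king"], negative=["man"], topn=1))

# The same vector arithmetic surfaces cultural bias just as easily,
# e.g. what the corpus associates with gendered occupation words:
print(model.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))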

Can this automated detection of cultural bias be used to detect deliberate bias, such as propaganda? It depends in part on how large the sample set is, and in part on how little data the model needs to become effective.

I suspect that such a tool would work only for long-form propaganda, and for detecting people who start to believe and repeat that propaganda: individual tweets — or even newspaper articles — are likely to be far too short for these tools, but the combined output of all their tweets (or a year of some journalist’s articles) might be sufficient.

If it is at all possible, it would of course be very useful. For a few hours, until the propagandists started using the same tool the way we now all use spell checkers — they’re professionals, after all, who will use the best tools money can buy.

That’s the problem with A.I., as well as the promise: it’s a tool for thinking faster, and it’s a tool which is very evenly distributed throughout society, not just in the hands of those we approve of.

Of course… are we right about who we approve of, or is our hatred of Them just the result of propaganda we’ve fallen for ourselves?

(Note: I’ve seen people, call them Bobs, saying “X is propaganda”, but I’ve never been able to convince any of the Bobs that they are just as likely to fall for propaganda as the people they are convinced have fallen for it. If you have any suggestions, please comment.)

Software

Just because you can, doesn’t mean you should

Python lets programmers redefine initializers at runtime. Don’t do that.

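The screenshot is lost, so here is a minimal reconstruction of the kind of thing it showed (the class and names are my own invention):

class Greeter:
    def __init__(self):
        self.greeting = "hello"

def hostile_init(self):
    self.greeting = "EVERYTHING IS ON FIRE"

# Perfectly legal Python: swap the initializer out at runtime.
Greeter.__init__ = hostile_init
print(Greeter().greeting)  # EVERYTHING IS ON FIRE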

The C preprocessor lets programmers redefine “true”. Don’t do that.

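That screenshot is gone too; a minimal reconstruction of the trick (my own wording of it):

#include <stdio.h>

/* Legal, so long as <stdbool.h> isn't included; still a firing offence. */
#define true 0

int main(void) {
    if (true) {
        puts("never printed");
    } else {
        puts("\"true\" is now false");
    }
    return 0;
}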

Swift lets programmers use very foolish variable names. This may be the lesser sin, but still, don’t do that.

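The lost screenshot showed something along these lines (my reconstruction; the names are deliberately terrible):

let 🐶 = "dog"
let 🐱 = "cat"
let `let` = "a constant named let"  // keywords work too, if you backtick them
print(🐶, 🐱, `let`)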

Given Python has a reputation for relatively defect-free code, it’s remarkable how few guards it has to enforce good code: no type safety, no access modifiers, nothing but enforced indentation.

Software

Executable images

Twenty years ago, back in 1997, there was an urban legend that some images contained viruses.

“Absurd!” thought the teenage me, “Images are just data, they can’t do anything!”

Well, I was wrong. In 2004, researchers found a bug in a Microsoft JPEG library which allowed a well-crafted .jpg to totally compromise a computer — file system, arbitrary code, administrator access, everything. Obviously that particular bug has now been fixed, but it did make me realise quite how badly broken things can be.

Despite all the tools that help us software engineers solve problems, reduce our bug counts, secure our software, and develop things faster, we still don’t seem to have made that caution part of our mindset. We have professional bodies, but membership of them is not required for a job. Automated tests exist, but knowledge of them is limited and their use more limited still (I wish I could say I had professional experience of them going back to university, but no such luck). We have nothing like a medical licence, or even the Hippocratic oath; nothing stops people practising code without a licence the way they are stopped from practising law without one.

And now, we’re making a world where A.I. replaces humans. Automation is fine, nothing wrong with it… but if you assume the computer is either always right, or that its errors are purely random, you will be blind to the problems this causes.

Hackers.

I can’t say that I have “a hacker mentality”, mainly because the phrase means completely different things to different people, so I will say this instead: I see loopholes everywhere, systems that can be exploited by malevolent or selfish people, not just stumbled into accidentally by those who can’t follow instructions.

How many people, I wonder, travel on fake rail tickets or bus passes that came out of their home printers? How many, when faced with a self-service checkout, will tell the terminal that their expensive fancy foreign cheese is actually loose onions?

This sort of thing is dealt with at the moment by humans — it was a human, for instance, who realised it was odd that one particular gentleman kept buying onions when the store had run out some time ago — but the more humans fall out of the loop, the easier it is to exploit machines.

This brings me to QR codes. QR codes are somewhat stupid, in that they are just some text encoded in a way that a computer can read easily, plus some error correction so the code survives a bit of dirt or a bad reflection. This is stupid partly because it hasn’t taken long to make A.I. that can read text from photos just fine (making the codes redundant), but mainly because humans can’t read the codes (making them dangerous).

Dangerous? Well, just as with URL-shorteners, you may find yourself looking at a shock site rather than the promised content… but that’s not really the big problem.

If you can, try to scan this QR code. No goats, lemons, or tubs, I assure you (and if you don’t know what those three words have in common, you may want to retain your innocence), but please do scan this code:

[Image: the executable QR code; its contents are spelled out below]

What does it do for you? I’m curious.

If you don’t have a QR code scanner, I’ll tell you what it says:

data:text/html,<script>alert("Your QR code scanner is hackable")</script>

That is literally what it says, because a QR code is just text that’s easy for a computer to read. This is a data URI, which contains some JavaScript, which opens an alert message. If you want, copy it into the address bar of your browser, just as if it were a website — press return or “go” or whatever works on your system.

It’s an executable image. Nothing nefarious, just proof of concept.
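If you want to reproduce the experiment, something like this is all it takes (a sketch using the Python qrcode package; the file name is mine):

import qrcode  # pip install qrcode[pil]

# A QR code is just text plus error correction, so an "executable
# image" is a one-liner:
payload = 'data:text/html,<script>alert("Your QR code scanner is hackable")</script>'
qrcode.make(payload).save("executable-qr.png")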

What does that mean for the world? Well, what do people do with QR codes? Not people, actually; people don’t use them… what do businesses do with QR codes? Mine is a harmless example, but what happens if UK Limited Company Number 10542519 makes a QR code from its name… and that code shows up in the vision system of a computer which, owing to our profession’s move-fast-and-break-things attitude, naïvely trusts input without anyone having considered that this could be a bad thing?

Some social networks know (and complain) if I try to use a profile photo that doesn’t have a face in it. If that’s a general-purpose computer vision system, it may well also recognise QR codes (because QR codes are easy to recognise, and because “more features!” is a business plan). If your business can’t cope with a Bobby Drop Tables username, it won’t be in business for very long — but the same may happen with Bobby Drop Tables faces, if you’re not careful.

Governments are all over the place when it comes to security, just like the private sector. What happens if a wanted criminal wears a face mask that is the QR code version of Bobby Drop Tables?

Robert'); DROP TABLE criminals;--

And suddenly, no more criminal record? Well, not in that jurisdiction anyway.
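For completeness, the defence against Bobby is old and simple: never splice input into a query string; pass it as a parameter, so the database treats it as data rather than SQL. A sketch with Python’s sqlite3 (the table and column names are mine):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE criminals (name TEXT)")

name = "Robert'); DROP TABLE criminals;--"

# Vulnerable: building the SQL by string concatenation.
#   conn.executescript("INSERT INTO criminals VALUES ('" + name + "')")

# Safe: a parameterised query; the quotes and semicolons stay inert.
conn.execute("INSERT INTO criminals VALUES (?)", (name,))
print(conn.execute("SELECT name FROM criminals").fetchone())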

AI, Futurology, Science, Software, Technology

The Singularity is Dead, Long Live The Singularity

The Singularity is one form of the idea that machines are constantly being improved and will one day make us all unemployable. Phrased that way, it should be no surprise that discussions of the Singularity are often compared with those of the Luddites from 1816.

“It’s different now!” many people say. Are they right to think that those differences are important?

There have been so many articles and blog posts (and books) about the Singularity that I need to be careful to make clear which type of “Singularity” I’m writing about.

I don’t believe in real infinities. Any of them. Something will get in the way before you reach them. I therefore do not believe in any single runaway process that becomes a deity-like A.I. in a finite time.

That doesn’t stop me worrying about “paperclip optimisers” that are just smart enough to cause catastrophic damage (this already definitely happens, even with very dumb A.I.); nor does it stop me worrying about machines with an IQ of only 200, able to outsmart all but the single smartest human and render mental labour as redundant as physical labour already is; nor about machines with an IQ of just 85, which would make the 15.9% of the world below that level permanently unemployable (some do claim that machines can never be artistic, but machines are already doing “creative” jobs in music, literature and painting, and even if they were not, there is a limit to how many such jobs there can be).
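That 15.9% falls straight out of the definition of IQ as a normal distribution with mean 100 and standard deviation 15; a quick check in Python:

from statistics import NormalDist

# Proportion of a normal(100, 15) population below IQ 85, i.e. one
# standard deviation below the mean:
print(f"{NormalDist(mu=100, sigma=15).cdf(85):.1%}")  # 15.9%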

So, for “the Singularity”, what I mean is this:

“A date after which the average human cannot keep up with the rate of progress.”

By this definition, I think it’s already happened. How many people have kept track of these things?:

Most of this was unbelievable science fiction when I was born. Between my birth and 2006, only a few of these things became reality. More than half happened or were invented in the 2010s. When Google’s AlphaGo went up against Lee Sedol, he thought he’d win easily, 5-0 or 4-1; instead he lost 1-4.

If you’re too young to have a Facebook account, there’s a good chance you’ll never need to learn any foreign language. Or make any physical object. Or learn to drive… there’s a fairly good chance you won’t be allowed to drive. And once you become an adult, if you come up with an invention or a plot for a novel or a motif for a song, there will be at least four billion other humans racing against you to publish it.

Sure, we don’t have a von Neumann probe, nor even a clanking replicator, at this stage (we don’t even know how to make one yet, unless you count “copy an existing life form”), but given we already have 3D printers working at 10 nanometres, it’s not all that unreasonable to assume we will in the near future. The fact that life exists proves such machines are possible, after all.

None of this is to say humans cannot or will not adapt to change. We’ve been adapting to change for a long time; we have a lot of experience of it, and we will adapt more. But there is a question:

“How fast can you adapt?”

Time, as they say, is money. Does it take you a week to learn a new job? A machine that already knows how to do it has a £500 advantage over you. A month? The machine has a £2,200 advantage. You need another degree? It has an £80,000 advantage even if the degree is free. That’s just at the average UK salary, before any of the extra things employers have to pay for.
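For the arithmetic behind those figures (assuming an average UK salary of about £26,400 a year, roughly right at the time of writing, and a three-year degree):

annual_salary = 26_400  # assumption: approximate average UK salary

print(round(annual_salary / 52))  # ~508   -> the "£500" week
print(round(annual_salary / 12))  # 2200   -> the "£2,200" month
print(annual_salary * 3)          # 79200  -> the "£80,000" degree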

We don’t face problems just from the machines outsmarting us; we face problems if all the people working on automation can, between them, outpace any significant fraction of the workforce. And there’s a strong business incentive to pay for such automation, because humans are one of the most expensive things businesses have to pay for.

I don’t have enough of a feel for economics to guess what might happen if too many people are unemployed and therefore unable to afford the goods produced by machine labour. All I can say is that when I was in secondary school, when all of us were young enough to have no income, pirating software and music was common. (I was the only one with a Mac, so I had to make do with magazine cover CDs for my software, but I think the observation is still worth something.)
