Futurology, Minds, Philosophy, Politics, SciFi, Technology, Transhumanism

Sufficient technology

Let’s hypothesise sufficient brain scans. As far as I know, the best we currently have is either very low resolution full-brain imaging (millions of synapses per voxel) or very limited high resolution imaging (thousands of synapses in total), at least for living brains. Let’s pretend, for the sake of argument, that we have synapse-resolution full-brain scans of living subjects.
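
For a sense of scale, here’s a rough back-of-envelope sketch in Python; the synapse count and brain volume are order-of-magnitude estimates I’m assuming, not measured values:

    # Roughly how many synapses fall inside one imaging voxel?
    SYNAPSES_TOTAL = 1.5e14    # assumed synapses in an adult human brain
    BRAIN_VOLUME_MM3 = 1.2e6   # assumed brain volume in cubic millimetres

    density = SYNAPSES_TOTAL / BRAIN_VOLUME_MM3  # synapses per mm^3

    for side_mm in (1.0, 0.5, 0.1):
        synapses_per_voxel = density * side_mm ** 3
        print(f"{side_mm} mm voxel: ~{synapses_per_voxel:.0e} synapses")
    # 1.0 mm voxel: ~1e+08 synapses
    # 0.5 mm voxel: ~2e+07 synapses
    # 0.1 mm voxel: ~1e+05 synapses

Even a sub-millimetre voxel averages over millions of synapses, which is exactly the gap the thought experiment asks us to wave away.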

What are the implications?

  • Is a backup of your mind protected by the right against self-incrimination? What about the minds of your pets?
  • Does a backup need to be punished (e.g. prison) if the person it is made from is punished? What if the offence occurred after the backup was made?
  • If the mind state is running rather than sitting in offline cold storage, how many votes do all the copies get? What if they’re allowed to diverge? Which of them is allowed to access the bank accounts or other assets of the original? Is the original entitled to money earned by the copies?
  • If you memorise something and then get backed up, is that copyright infringement?
  • If a mind can run on silicon for less than the cost of food to keep a human healthy, can anyone other than the foremost mind in their respective field ever be employed?
  • If someone is backed up and the original is then killed by someone who knows the backup exists, is that murder, or the equivalent of a serious assault that causes a short period of amnesia?
Philosophy

Dot product morality

The first time I felt confused about morality was as a child. I was about six, and saw a D&D-style role-playing magazine; on the cover, two groups were preparing to fight, one dressed as barbarians, the other as soldiers or something¹. I asked my brother “Which are the goodies and which are the baddies?”, and I couldn’t understand him when he told me neither of them were.

Boolean.

When I was 14 or so, in the middle of a Catholic secondary school, I discovered neopaganism; instead of the Bible and the Ten Commandments, I started following the Wiccan Rede (if it doesn’t hurt anyone, do what you like). Initially I still suffered from the hubris of black-and-white thinking, even though I’d watched others fall into that trap and thought poorly of them for it; eventually, though, exposure to alternative religious and spiritual ideas made me recognise that morality comes in shades of grey.

Float.

Because of the nature of the UK education system, between school and university I spent two years doing A-levels, and one of the subjects I studied was philosophy. Between repeated failures to prove god exists, we covered ethics, specifically emotivism, AKA the hurrah/boo theory, which claims there are no objective morals and that claims about them are merely emotional attitudes. The standard response at this point is to insist that “murder is wrong” is objective, at which point someone demonstrates that plenty of people disagree with you about what counts as murder (abortion, execution, deaths in war, death by dangerous driving, meat, that sort of thing). I don’t think I understood it at that age, any more than I understood my brother saying “neither” when I was six; it’s hard to be sure after so much time.

Then I encountered complicated people. People who could be incredibly moral on one axis, and monsters on another. I can’t remember the exact example that first showed me this, but I have plenty to choose from now: on a national scale, the British Empire did a great deal to end slavery, yet acted appallingly towards many of the people under its rule; on an individual scale, you can find scandals for Gandhi and Churchill, not just obvious modern examples of formerly-liked celebrities like Kevin Spacey and Rolf Harris. In all cases, saying someone is “evil” or “not evil”, or even “0.5 on the 0-1 evil axis”, is misleading: you can trust Churchill 100% to run 1940 UK while simultaneously refusing to trust him (0% trust) to care about anyone who wasn’t a white Protestant, though obviously your percentages might differ.

I’ve been interested in artificial intelligence and artificial neural networks for longer than I’ve been able to follow the maths. When you, as a natural neural network, try to measure something, you do so with a high-dimensional vector space of inputs (well, many such spaces, stacked in layers, with the outputs of one layer being the inputs of the next), and that includes morality.

When you ask how moral someone else is, how moral some behaviour is, what you’re doing is essentially a dot-product of your moral code with their moral code. You may or may not filter that down into a single “good/bad” boolean afterwards — that’s easy for a neural network, and makes no difference.
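
As a toy sketch of that idea (the axes, weights, and threshold here are invented purely for illustration, not any real psychometric model):

    # A "moral code" as a vector over named axes; judging someone is
    # essentially a dot product of your vector with theirs.
    axes = ("honesty", "loyalty", "compassion", "fairness")
    mine = (0.9, 0.3, 0.8, 0.7)     # how much I weight each axis
    theirs = (0.8, 0.9, -0.5, 0.6)  # how I score them on each axis

    similarity = sum(m * t for m, t in zip(mine, theirs))  # dot product

    # Optionally collapse the float down to a boolean afterwards:
    is_good = similarity > 0.5  # arbitrary threshold

    print(f"similarity = {similarity:.2f}, good? {is_good}")
    # similarity = 1.01, good? True

The per-axis values carry strictly more information than the collapsed boolean; the Churchill example above is exactly what gets lost in that collapse.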

¹ I can’t remember exactly, but it doesn’t matter.

Philosophy, Psychology

A life’s work

There are about 2.5 billion seconds in an 80-year lifetime and (as of December 2018) 7.7 billion humans on the planet.

If you fight evil one-on-one, if you refuse to pick your battles, if only 1% of humans are sociopaths, you’ve got 21 waking seconds per opponent — and you’ll be fighting your whole life, from infancy to your dying breath and from when you wake to when you sleep, with no holiday, no weekends, no retirement.
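
A quick check of that figure (assuming eight hours of sleep a night):

    # 21 waking seconds per opponent, one-on-one:
    SECONDS_PER_LIFETIME = 2.5e9   # about 80 years
    HUMANS = 7.7e9                 # world population, December 2018
    WAKING_FRACTION = 16 / 24      # assuming 8 hours of sleep a night

    opponents = 0.01 * HUMANS      # 1% of humanity
    waking_seconds = SECONDS_PER_LIFETIME * WAKING_FRACTION
    print(waking_seconds / opponents)  # ~21.6 seconds each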

Conversely, if you are a product designer and five million people use your stuff once per day, every second you save them saves a waking lifetime of waiting per year. If you can relieve a hundred thousand people of just five minutes of anxiety each day (say, about social media notifications), you’re saving six and a half waking lifetimes of anxiety every year.
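
The same arithmetic, run in the happier direction (same eight-hours-of-sleep assumption as above):

    # Waking lifetimes saved per year:
    WAKING_LIFETIME = 2.5e9 * (16 / 24)  # waking seconds in one life

    # One second saved, five million times a day, for a year:
    print(5e6 * 1 * 365 / WAKING_LIFETIME)       # ~1.1 lifetimes

    # Five minutes of anxiety relieved for 100,000 people, daily:
    print(1e5 * 5 * 60 * 365 / WAKING_LIFETIME)  # ~6.6 lifetimes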

When people complained about the cost of the Apollo programs, someone said Americans spent more on haircuts in the same time. How many Apollo programs of joy are wasted tapping on small red dots or waiting for them?
