Futurology, Minds, Philosophy, Politics, SciFi, Technology, Transhumanism

Sufficient technology

Let’s hypothesise sufficient brain scans. As far as I know, we don’t have better than either very low-resolution full-brain imaging (millions of synapses per voxel) or very limited high-resolution imaging (thousands of synapses in total), at least not for living brains. Let’s pretend, for the sake of argument, that we have synapse-resolution full-brain scans of living subjects.

What are the implications?

  • Is a backup of your mind protected by the right to avoid self-incrimination? What about the minds of your pets?
  • Does a backup need to be punished (e.g. prison) if the person it is made from is punished? What if the offence occurred after the backup was made?
  • If the mind state is running rather than offline cold-storage, how many votes do all the copies get? What if they’re allowed to diverge? Which of them is allowed to access the bank accounts or other assets of the original? Is the original entitled to money earned by the copies?
  • If you memorise something and then get backed up, is that copyright infringement?
  • If a mind can run on silicon for less than the cost of food to keep a human healthy, can anyone other than the foremost mind in their respective field ever be employed?
  • If someone is backed up and then the original is killed by someone who knows the backup exists, is that murder, or the equivalent of a serious assault that causes a short period of amnesia?
Standard
Minds, Psychology, Science

Hypothesise first, test later

Brought to you by my noticing that when I watch Kristen Bell playing an awful person in The Good Place, I feel as stressed as when I have to consciously translate to or from German in real-life situations, rather than just in language apps.

Idea:

  • System 2 thinking (effortful) is stressful to the mind in the same way that mild exercise is stressful to the body.
  • Having to think in system 2 continuously is sometimes possible, but for how long is not a universal constant.
  • Social interactions are smoothed by being able to imagine what other people are thinking.
  • If two minds think in different ways, one or both has to use system 2 thinking to forecast the other. Autistic and neurotypical minds are one of many possible examples in the human realm. Cats and dogs are a common non-human example (“Play! Bark!” “Arg, big scary predator! Hiss!”)
  • Stress makes personal touches and eye contact unpleasant.

Implications:

Autistic people will:

  • Be much less stressed when they are only around other autistic people.
  • Look each other in the eye.
  • Be comfortable hugging each other.
  • Be less likely than allistic people to watch soap operas.
  • Find that jokes based on mind state don’t transfer, but puns do. (The opposite of the German–English joke barrier, where puns don’t translate but The Ministry Of Silly Walks does.)
  • Find that reality shows make no sense, and are somewhere between comedic and confusing, in the same way as memes based on “type XYZ and let autocomplete finish the sentence!”

Questions:

  • How do interests, e.g. sports, music, painting, fit into this?
  • Does “gender” work like this? Memes such as “Men are from Mars, women are from Venus”, or men finding women confusing and unpredictable, come to mind. Also, men’s clubs are a thing, as are women’s. The existence of transgender people would also pattern-match here. That said, it might just be a cultural barrier, because any group can have a culture, and culture is not merely a synonym for political geography.
AI, Futurology, Philosophy, Psychology, Science

How would you know whether an A.I. was a person or not?

I did an A-level in Philosophy. (For non-UK people: A-levels are a two-year course taken after high school and before university.)

I did it for fun rather than for good grades — I already had enough good grades to get into university, and when the other A-levels required my focus, I was fine putting zero further effort into the Philosophy course. (Something which was very clear when my final results came in.)

What I didn’t expect at the time was that the rapid development of artificial intelligence in my lifetime would make it absolutely vital for humanity to develop a concrete and testable understanding of what counts as a mind, as consciousness, as self-awareness, and as the capability to suffer. Yes, we already have that problem in the form of animal suffering and the question of whether meat can ever be ethical, but that existing problem exists only for our consciences: the animals can’t take over the world and treat us the way we treat them. An artificial mind, on the other hand, would be almost totally pointless if it were as limited as an animal, and the general aim is quite a lot higher than that.

Some fear that we will replace ourselves with machines which may be very effective at what they do, but don’t have anything “that it’s like to be”. One of my fears is that we’ll make machines that do “have something that it’s like to be”, but who suffer greatly because humanity fails to recognise their personhood. (A paperclip optimiser doesn’t need to hate us to kill us, but I’m more interested in the sort of mind that can feel what we can feel).

I don’t have a good description of what I mean by any of the normal words. Personhood, consciousness, self awareness, suffering… they all seem to skirt around the core idea, but to the extent that they’re correct, they’re not clearly testable; and to the extent that they’re testable, they’re not clearly correct. A little like the maths-vs.-physics dichotomy.

Consciousness? Versus what, subconscious decision making? Isn’t this distinction merely system 1 vs. system 2 thinking? Even then, the word doesn’t tell us what it means to have it objectively, only subjectively. In some ways, some forms of A.I. look like system 1 — fast, but error-prone, based on heuristics — while other forms look like system 2 — slow, careful, deliberatively weighing all the options.
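
The system 1 / system 2 contrast can be sketched as a toy decision procedure. This is my own illustration with made-up items and scores, not a claim about how any real A.I. works:

```python
# Toy contrast: a fast, error-prone heuristic ("system 1") vs. a slow,
# exhaustive deliberation ("system 2") over the same decision.
items = [("apple", 3), ("banana", 7), ("cherry", 5), ("dates", 7.5)]

def system1_pick(items):
    # heuristic: take the first option that looks "good enough"
    return next(name for name, value in items if value >= 5)

def system2_pick(items):
    # deliberation: weigh every option before committing
    return max(items, key=lambda pair: pair[1])[0]

print(system1_pick(items))  # "banana": fast, but not the best option
print(system2_pick(items))  # "dates": slower, exhaustive, optimal
```

The heuristic is cheaper (it stops at the first acceptable answer) but can miss the best option, which is exactly the trade-off the fast/slow framing describes.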

Self-awareness? What do we even mean by that? It’s absolutely trivial to make an A.I. aware of its own internal states — indeed, necessary for anything more complex than a perceptron. Do we mean a mirror test? (Or a non-visual equivalent for non-visual entities, including both blind people and smell-focused animals such as dogs.) That, at least, can be tested.
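
To make the “trivial” point concrete, here is a toy sketch of my own (made-up weights, not any specific system) of the weakest possible sense of awareness of internal state — a network that can report its own activations:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    def __init__(self):
        self.w_hidden = [0.5, -0.3]   # made-up illustrative weights
        self.w_out = [1.0, 0.8]
        self.hidden = [0.0, 0.0]      # internal state

    def forward(self, x):
        self.hidden = [sigmoid(w * x) for w in self.w_hidden]
        return sum(h * w for h, w in zip(self.hidden, self.w_out))

    def introspect(self):
        # the net can report its own internal state, but surely
        # this is not what we mean by self-awareness
        return {"hidden_activations": list(self.hidden)}

net = TinyNet()
net.forward(1.0)
print(net.introspect())
```

Access to one’s own internal state is clearly too weak a criterion, which is why the mirror test (a behavioural test) is more attractive.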

Capability to suffer? What does that even mean in an objective sense? Is suffering equal to negative reinforcement? If you have only positive reinforcement, is the absence of reward itself a form of suffering?
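
One way to make the absence-of-reward question concrete (a toy framing of my own, not a standard definition): for a learner that only compares outcomes, shifting the whole reward scale changes nothing, so withholding reward can play exactly the role of punishment.

```python
# Toy reward schemes (illustrative only): one punishes "bad" outcomes,
# the other merely withholds reward. For an agent that ranks outcomes,
# the two schemes are identical up to a constant shift.
def reward_with_punishment(outcome):
    return {"good": +1.0, "neutral": 0.0, "bad": -1.0}[outcome]

def reward_positive_only(outcome):
    return {"good": +2.0, "neutral": +1.0, "bad": 0.0}[outcome]

outcomes = ["good", "neutral", "bad"]

rank_a = sorted(outcomes, key=reward_with_punishment, reverse=True)
rank_b = sorted(outcomes, key=reward_positive_only, reverse=True)
print(rank_a == rank_b)  # both schemes induce the same preference ordering
```

If suffering is defined purely by the reward signal, it seems to vanish under a relabelling that changes none of the agent’s behaviour — which suggests the definition is missing something.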

Introspection? As I understand it, the human psychology of this is that we don’t really introspect, we use system 2 thinking to confabulate justifications for what system 1 thinking made us feel.

Qualia? Sure, but what is one of these as an objective, measurable, detectable state within a neural network, be it artificial or natural?

Empathy or mirror neurons? I can’t decide how I feel about this one. At first glance, if one mind can feel the same as another mind, that seems like it should capture the general, ill-defined concept I’m after… but then I realised I don’t see why that would follow, and had the temporarily disturbing mental image of an A.I. which can perfectly mimic the behaviour corresponding to the emotional state of someone it’s observing, without actually feeling anything itself.

And then the disturbance went away as I realised this is obviously trivially possible, because even a video recording fits that definition… or, hey, a mirror. A video recording somehow feels like it’s fine, it isn’t “smart” enough to be imitating, merely accurately reproducing. (Now I think about it, is there an equivalent issue with the mirror test?)

So, no, mirror neurons are not enough to be… to have the qualia of being consciously aware, or whatever you want to call it.

I’m still not closer to having answers, but sometimes it’s good to write down the questions.
