Futurology, Minds, Philosophy, Politics, SciFi, Technology, Transhumanism

Sufficient technology

Let’s hypothesise sufficient brain scans. As far as I know, the best we currently have is either very low-resolution full-brain imaging (millions of synapses per voxel) or very limited high-resolution imaging (thousands of synapses in total), at least for living brains. Let’s pretend, for the sake of argument, that we have synapse-resolution full-brain scans of living subjects.

What are the implications?

  • Is a backup of your mind protected by the right to avoid self-incrimination? What about the minds of your pets?
  • Does a backup need to be punished (e.g. prison) if the person it is made from is punished? What if the offence occurred after the backup was made?
  • If the mind state is running rather than sitting in offline cold storage, how many votes do all the copies get? What if they’re allowed to diverge? Which of them is allowed to access the bank accounts or other assets of the original? Is the original entitled to money earned by the copies?
  • If you memorise something and then get backed up, is that copyright infringement?
  • If a mind can run on silicon for less than the cost of food to keep a human healthy, can anyone other than the foremost mind in their respective field ever be employed?
  • If someone is backed up and the original is then killed by someone who knows the backup exists, is that murder, or is it the equivalent of a serious assault that causes a short period of amnesia?
Philosophy

Normalised, n-dimensional, utility monster

From https://en.wikipedia.org/wiki/Utility_monster:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose … the theory seems to require that we all be sacrificed in the monster’s maw, in order to increase total utility.

How would the problem be affected if all sentient beings have their utility functions normalised into the same range, say -1 to +1, before comparisons are made?
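
A minimal sketch of what I mean by normalisation, assuming each being’s raw utility is bounded and the bounds are knowable (both big assumptions in themselves):

```python
def normalise(raw_utility, lowest_possible, highest_possible):
    """Affinely rescale one being's raw utility into the shared range [-1, +1].

    Assumes that being's utility is bounded and that we know the bounds;
    real preferences may not cooperate with either assumption.
    """
    span = highest_possible - lowest_possible
    return 2 * (raw_utility - lowest_possible) / span - 1

# A being whose raw utility runs from -50 to +200 and currently sits at +125:
print(normalise(125, -50, 200))  # 0.4
```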

Example 1: 51% of a group (this is not a Brexit metaphor) gain the maximum possible normalised utility, +1, from something that causes the other 49% the maximum possible normalised anti-utility, -1. Is that ethical? Really? My mind keeps saying “in that case, look for another solution”, so I have to force myself to remember that this is a thought experiment in which the only options are do or do not… I think it has to be ethical if there really is no alternative.
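
Written out, assuming the aggregate is a straight population-weighted sum: 0.51 × (+1) + 0.49 × (-1) = +0.02, which is positive, if only barely.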

Example 2: Some event causes 1% to experience +1 normalised utility while the other 99% each experience -0.01 normalised utility (totalling -0.99 across that group). This is the reverse of the plot of Doctor Who: The Beast Below. Again, my mind wants an alternative, but I think the reasoning is valid: “shut up and multiply” is correct here.
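
Again assuming a straight sum: per 100 people, 1 × (+1) + 99 × (-0.01) = +1 - 0.99 = +0.01, so the total is once more just barely positive.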


Even if that worked, it’s not sufficient.

If you consider utility to be a space in which each sentient being is their own axis, how do you maximise the vector representing total utility? If I understand correctly, there is no well-defined > or < operator for vectors of even two dimensions: (+1, 0) and (0, +1) are simply incomparable until you collapse each to a single number. Unless you apply some function that collapses all the utilities together, you cannot have Utilitarianism for more than one single sentient being within a set of interacting sentient beings. That collapsing function, even if it’s just “sum” or “average”, is your “ethics”; Utilitarianism itself is no more than “how to not be stupid”.
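
As a toy illustration (the numbers and the particular collapsing rules are illustrative assumptions, nothing more), the same pair of normalised utility vectors can be ranked in opposite ways depending on which collapsing function you choose:

```python
# Toy illustration: the same pair of normalised utility vectors is ranked in
# opposite ways depending on which "collapsing" function turns each vector
# into a single number.

world_a = [0.9, -0.2]  # one being does very well, the other suffers slightly
world_b = [0.3, 0.3]   # both beings do moderately well

collapsing_functions = {
    "sum": sum,
    "average": lambda utilities: sum(utilities) / len(utilities),
    "worst-off (maximin)": min,
}

for name, collapse in collapsing_functions.items():
    preferred = "A" if collapse(world_a) > collapse(world_b) else "B"
    print(f"{name}: A={collapse(world_a):+.2f}, B={collapse(world_b):+.2f} "
          f"-> prefers world {preferred}")
```

Sum and average agree here (the populations are the same size), but a worst-off rule flips the answer; whichever rule you pick is the part doing the ethical work.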
