AI, Philosophy

Nietzsche, Facebook, and A.I.

“If you stare into The Facebook, The Facebook stares back at you.”

I think this paraphrase fits the reality of digital surveillance rather better than it fits the idea Nietzsche was trying to convey in the original.

Facebook and Google look at you with an unblinking eye; they look at everyone they can reach, even those without accounts; two billion people on Facebook, their every keystroke recorded, even the ones they delete; every message analysed, even those never sent; every photo processed, even those kept private; and on Google Maps, every step taken, every turn missed, every place you stop becomes an update to the map.

We’re lucky that A.I. isn’t as smart as a human, because if it were, such incomprehensible breadth and depth of experience would make Sherlock look like an illiterate child raised by wild animals. Even without hypothesising new technologies that a machine intelligence may or may not invent, even with just a machine that does exactly what it’s told by its owner… this dataset alone ought to worry anyone who fears the thumb of a totalitarian micro-managing their life.

Philosophy

Normalised, n-dimensional, utility monster

From https://en.wikipedia.org/wiki/Utility_monster:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose … the theory seems to require that we all be sacrificed in the monster’s maw, in order to increase total utility.

How would the problem be affected if all sentient beings had their utility functions normalised into the same range, say -1 to +1, before comparisons were made?

Example 1: 51% of a group (this is not a Brexit metaphor) gain the maximum possible normalised utility, +1, from something that causes the other 49% the maximum possible normalised anti-utility, -1. Is that ethical? Really? My mind keeps saying “in that case look for another solution”, so I have to force myself to remember that this is a thought experiment where the only options are do or do not… I think it has to be ethical if there really is no alternative.

Example 2: Some event causes 1% to experience +1 normalised utility while the other 99% experience -0.01 normalised utility each (totalling -0.99 per hundred people, against +1 for the single beneficiary, so the sum comes out at +0.01). This is the reverse of the plot of Doctor Who: The Beast Below. Again, my mind wants an alternative, but I think the conclusion is valid: “shut up and multiply” is correct here.
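To make the arithmetic explicit, here is a minimal sketch (Python; the population of 100 and plain summation as the aggregation rule are my assumptions, not part of the examples as stated):

```python
# Minimal sketch: aggregate normalised utility as a plain sum,
# assuming a hypothetical population of 100.

def total_utility(groups):
    """Sum per-person normalised utilities, given (count, utility_per_person) pairs."""
    return sum(count * utility for count, utility in groups)

# Example 1: 51 people at +1, 49 people at -1  ->  net +2
print(total_utility([(51, +1.0), (49, -1.0)]))

# Example 2: 1 person at +1, 99 people at -0.01 each  ->  net +0.01
print(total_utility([(1, +1.0), (99, -0.01)]))
```

On a straight sum, both events come out net positive, which is exactly why the “shut up and multiply” answer endorses them.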


Even if that worked, it’s not sufficient.

If you consider utility to be a space in which each sentient being is their own axis, how do you maximise the vector representing total utility? If I understand correctly, there is no natural > or < operator even for two dimensions: comparing vectors componentwise only gives a partial order, so many pairs of outcomes are simply incomparable. Unless you perform some function that collapses all the utilities together, you cannot have Utilitarianism for more than one single sentient being within a set of interacting sentient beings. That function, even if it’s just “sum” or “average”, is your “ethics”: Utilitarianism itself is no more than “how to not be stupid”.
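As a quick illustration (the numbers and the choice of aggregation rules below are mine, purely to show that the collapsing function is where the ethics lives):

```python
# Sketch: with two sentient beings as axes, componentwise comparison
# gives only a partial (Pareto) order, so some outcomes are incomparable
# until you pick a collapsing function. The rules compared here (sum vs.
# minimum) are illustrative assumptions, not a canonical list.

a = (0.9, -0.2)   # great for being 1, mildly bad for being 2
b = (0.1,  0.1)   # mildly good for both

def dominates(x, y):
    """Pareto dominance: at least as good on every axis, strictly better on one."""
    return all(xi >= yi for xi, yi in zip(x, y)) and any(xi > yi for xi, yi in zip(x, y))

print(dominates(a, b), dominates(b, a))  # False False -> incomparable as raw vectors

print(sum(a) > sum(b))   # True:  a "sum" ethics prefers a
print(min(a) > min(b))   # False: a "maximin" ethics prefers b
```

Which outcome counts as “better” flips depending on the collapsing function, which is the sense in which that function, not the multiplication, is doing the ethical work.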
