Health, Personal

Ohne Kaffee (without coffee)

As anyone who has spent any significant time with me will know, I have a genuinely problematic relationship with coffee. If I don’t watch myself I can drink nothing but coffee all day, sometimes even double strength — 4 cups is one thing, 2 litres of double strength is too much.

Because of this, every so often I try to cut it out entirely. This time, I’m trying to keep a diary.

Day 1, Saturday

A bit tired, nothing special.

Day 2, Sunday

Really tired, bad headache, took ibuprofen.

Day 3, Monday

A bit tired, occasionally tempted to grab a coffee from the machine at work.

Day 4, Tuesday

Working from home along with everyone else because of Coronavirus. A bit tired, very unfocused. Not sure if the loss of focus is from the switch to home office or from the withdrawal. Definitely feeling like a coffee would help.

Leg ache developed in the evening, feeling like I’d run 5 km.

Day 5, Wednesday

Arm and leg muscles ache.

Day 6, Thursday

No issues.

Futurology, Minds, Philosophy, Politics, SciFi, Technology, Transhumanism

Sufficient technology

Let’s hypothesise sufficient brain scans. As far as I know, we don’t have better than either very low resolution full-brain imaging (millions of synapses per voxel), or very limited high resolution imaging (thousands of synapses total), at least not for living brains. Let’s just pretend for the sake of argument that we have synapse-resolution full-brain scans of living subjects.

What are the implications?

  • Is a backup of your mind protected by the right to avoid self-incrimination? What about the minds of your pets?
  • Does a backup need to be punished (e.g. prison) if the person it is made from is punished? What if the offence occurred after the backup was made?
  • If the mind state is running rather than offline cold-storage, how many votes do all the copies get? What if they’re allowed to diverge? Which of them is allowed to access the bank accounts or other assets of the original? Is the original entitled to money earned by the copies?
  • If you memorise something and then get backed up, is that copyright infringement?
  • If a mind can run on silicon for less than the cost of food to keep a human healthy, can anyone other than the foremost mind in their respective field ever be employed?
  • If someone is backed up and then the original is killed by someone who knows about the backup, is that murder, or is it the equivalent of a serious assault that causes a short period of amnesia?
Minds, Psychology, Science

Hypothesise first, test later

Brought to you by me noticing that when I watch Kristen Bell playing an awful person in The Good Place, I feel as stressed as when I have to consciously translate to or from German in real-life situations rather than just in language apps.

Idea:

  • System 2 thinking (effortful) is stressful to the mind in the same way that mild exercise is stressful to the body.
  • Having to think in system 2 continuously is sometimes possible, but how long it can be sustained is not a universal constant.
  • Social interactions are smoothed by being able to imagine what other people are thinking.
  • If two minds think in different ways, one or both has to use system 2 thinking to forecast the other. Autistic and neurotypical minds are one of many possible examples in the human realm. Cats and dogs are a common non-human example (“Play! Bark!” “Arg, big scary predator! Hiss!”)
  • Stress makes physical touch and eye contact unpleasant.

Implications:

Autistic people will:

  • Be much less stressed when they are only around other autistic people.
  • Look each other in the eye.
  • Be comfortable hugging each other.
  • Be less likely than allistic people to watch soap operas.
  • Find that jokes based on mind state don't transfer, but puns do. (The opposite of the German-English joke barrier, as puns don't translate but The Ministry Of Silly Walks does).
  • Find that reality shows make no sense, and are somewhere between comedic and confusing in the same way as memes based on “type XYZ and let autocomplete finish the sentence!”

Questions:

  • How do interests, e.g. sports, music, or painting, fit into this?
  • Does “gender” work like this? Memes such as “Men are from Mars, women are from Venus” or men finding women confusing and unpredictable come to mind. Men’s clubs are a thing, as are women’s, and the existence of transgender people would pattern-match here too. That said, it might just be a cultural barrier, because any group can have a culture, and culture is not merely a synonym for political geography.
Opinion, Politics

The worst form of government except for all the others

(Does this sound fair? I’m not formally qualified in politics).

Democracy

The only way to get a government which wants to do the sort of things the public are OK with.

Technocracy

The only way to get a government of people who know what they’re talking about.

Industrialism

The only way to provide a government with the capability to do things.

(Not “capitalism” in general, industry in particular).

Diplomacy

The only way to provide a government with awareness that other nations can have their own desires and goals which differ from its own.

Journalism/Police

The only way to fight corruption.

Philosophy

Morality, thy discount is hyperbolic

One well known failure mode of Utilitarian ethics is a thing called a “utility monster” — for any value of “benefit” and “suffering”, it’s possible to postulate an entity (Bob) and a course of action (The Plan) where Bob receives so much benefit that everyone else can suffer arbitrarily great pain and yet you “should” still do The Plan.

That this can happen is often used as a reason to not be a Utilitarian. Never mind that there are no known real examples — when something is paraded as a logical universal ethical truth, it’s not allowed to even have theoretical problems, for much the same reason that God doesn’t need to actually “microwave a burrito so hot that even he can’t eat it” for the mere possibility to be a proof that no god is capable of being all-powerful.

I have previously suggested a way to limit this — normalisation — but that obviously wasn’t good enough for handling a group. What I was looking for then was a way to combine multiple entities in a sensible fashion, and now I’ve found that one already existed: hyperbolic discounting.

Hyperbolic discounting is how we all naturally think about the future: the further into the future a reward is, the less important we regard it. For example, if I ask “would you rather have $15 immediately; or $30 after three months; or $60 after one year; or $100 after three years?” most people find those options equally desirable, even though nobody’s expecting the US dollar to lose 85% of its value in just three years.
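As a rough sanity check (my own sketch, using nothing beyond the figures quoted above), you can back out the exponential discount rate each of those offers implies. If people discounted exponentially, the rate would be constant; instead it falls sharply with delay, which is the hyperbolic signature:

    # Implied exponential discount rates from the indifference amounts above
    # ($15 now vs $30 in 3 months, $60 in 1 year, $100 in 3 years).
    # A constant rate would mean exponential discounting; a falling rate is
    # the signature of hyperbolic discounting.
    import math

    now = 15.0                                        # dollars, taken immediately
    offers = [(30.0, 0.25), (60.0, 1.0), (100.0, 3.0)]  # (dollars, years of delay)

    for amount, years in offers:
        # solve now = amount * exp(-r * years) for r
        r = math.log(amount / now) / years
        print(f"${amount:.0f} after {years:g} years -> implied rate {r:.0%} per year")
    # prints roughly 277%, 139% and 63% per year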

Most people do a similar thing with large numbers, although logarithmic rather than hyperbolic. There’s a cliché with presenters describing how big some number X is by saying X seconds is a certain number of years and acting like it’s surprising that “while a billion seconds is 30 years, a trillion is 30 thousand years”. (Personally I am so used to this truth that it feels weird that they even need to say it).
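For anyone who wants to check that claim, the arithmetic is only a couple of lines:

    # One billion and one trillion seconds, expressed in years.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million seconds
    print(1e9 / SECONDS_PER_YEAR)           # ~31.7 years
    print(1e12 / SECONDS_PER_YEAR)          # ~31,700 years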

Examples of big numbers, in the form of money. Millionaire, Billionaire, Elon Musk, and Bill Gates. Wealth estimates from Wikipedia, early 2020.

So, what’s the point of this? Well, one of the ways the Utility Monster makes a certain category of nerd look like an arse is the long-term future of humanity. The sort of person who (mea culpa) worries about X-risks (eXtinction risks), and says “don’t worry about climate change, worry about non-aligned AI!” to an audience filled with people who very much worry about climate change induced extinction and think that AI can’t possibly be a real issue when Alexa can’t even figure out which room’s lightbulb it’s supposed to switch on or off.

To put it in concrete terms: If your idea of “long term planning” means you are interested in star-lifting as a way to extend the lifetime of the sun from a few billion years to a few tens of trillions, and you intend to help expand to fill the local supercluster with like-minded people, and your idea of “person” includes a mind running on a computer that’s as power-efficient as a human brain and effectively immortal, then you’re talking about giving 5*10^42 people a multi-trillion-year lifespan.

If you’re talking about giving 5*10^42 people a multi-trillion-year lifespan, you will look like a massive arsehole if you say, for example, that “climate change killing 7 billion people in the next century is a small price to pay for making sure we have an aligned AI to help us with the next bit”. The observation that a non-aligned AI is likely to irreversibly destroy everything it’s not explicitly trying to preserve isn’t going to change anyone’s mind: even if you get past the inferential distance that makes most people hearing about this say “off switch!” (both as a solution to the AI and to whichever channel you’re broadcasting on), at scales like this utilitarianism simply feels wrong.

So, where does hyperbolic discounting come in?

We naturally discount the future as a “might not happen”. This is good. No matter how certain we think we are on paper, there is always the risk of an unknown unknown. This risk doesn’t go away with more evidence, either — the classic illustration of the problem of induction is a turkey who observes that every morning the farmer feeds them, and so decides that the farmer has their best interests at heart; the longer this goes on, the more certain they become, yet each day brings them closer to being slaughtered for Thanksgiving. The currently known equivalents in physics would be things like false vacuum decay, brane collisions triggering a new big bang, or Boltzmann brains.

Because we don’t know the future, we should discount it. There are not 5*10^42 people. There might never be 5*10^42 people. To the extent that there might be, they might turn out to all be Literally Worse Than Hitler. Sure, there’s a chance of human flourishing on a scale that makes Heaven as described in the Bible seem worthless in comparison — and before you’re tempted to point out that Biblical heaven is supposed to be eternal, which is infinitely longer than any number of trillions of years: that counter-argument presumes Utilitarianism and therefore doesn’t apply — but the further away those people and that outcome are from you, the less weight you should put on them ever coming to exist and on it ever happening.
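A toy way to see why the discount matters (the numbers here are hypothetical, purely for illustration): if each step on the way to that far future has some independent chance of going wrong, the probability of the whole chain succeeding shrinks geometrically with the number of steps.

    # Hypothetical illustration: even with a 99% chance of getting through
    # each century intact, the odds of surviving long spans of centuries
    # fall off geometrically (p_total = p ** n_steps).
    per_century_survival = 0.99     # made-up figure, for illustration only
    for centuries in (10, 100, 1000):
        print(centuries, per_century_survival ** centuries)
    # ~0.90 after 10 centuries, ~0.37 after 100, ~0.00004 after 1000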

Instead: Concentrate on the here-and-now. Concentrate on ending poverty. Fight climate change. Save endangered species. Campaign for nuclear disarmament and peaceful dispute resolution. Not because there can’t be 5*10^42 people in our future light cone, but because we can falter and fail at any and every step on the way from here.

AI, Futurology, Opinion, Philosophy

Memetic monocultures

Brief kernel of an idea:

  1. Societies deem certain ideas “dangerous”.
  2. If it is possible to technologically eliminate perceived dangers, we may be tempted to do so, even when our perception is wrong.
  3. Group-think has led to catastrophic misjudgments.
  4. This represents a potential future “great filter” for the Fermi paradox. It does not apply to previous attempts at eliminating dissenting views, as they were social, not technological, in nature, and limited in geographical scope.
  5. This risk has not yet become practical, but we shouldn’t feel complacent just because brain-computer interfaces are basic and indoctrinal viruses are fictional: universal surveillance is sufficient and affordable, limited only by the need for sufficiently advanced AI to assist human overseers (perfect AI not required).
Uncategorized

Newcomb’s Assured Destruction

This evening I noticed a similarity between Newcomb’s Paradox and mutually assured destruction (MAD). It feels like the same problem, just with a sign change.

Newcomb’s Paradox

The player has two boxes, A and B. The player can either take only box B, or take both A and B.

  • Box A is clear, and always contains a visible $1,000
  • Box B is opaque, and it contains:
    • Nothing, if it was predicted the player would take both boxes
    • A million dollars, if it was predicted that the player would take only box B

The player does not know what was predicted.

Game theory says that, no matter what was predicted, you’re better off taking both boxes.

If you trust the prediction will accurately reflect your decision no matter what you decide, it’s better to only take one box.

In order to win the maximum reward, you must appear to be a one-boxer while actually being a two-boxer.
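To make the tension concrete, here is a small sketch (my own numbers; the predictor’s accuracy p is an assumption, not part of the original thought experiment) comparing the expected payoff of each strategy:

    # Expected payoff in Newcomb's problem, assuming a predictor that is
    # correct with probability p (hypothetical parameter).
    def expected_payoff(take_both: bool, p: float) -> float:
        # Box B holds $1,000,000 only if the player was predicted to one-box.
        if take_both:
            return 1_000 + (1 - p) * 1_000_000   # B is full only if mispredicted
        return p * 1_000_000                     # B is full if predicted correctly

    for p in (0.5, 0.9, 0.99):
        print(p, expected_payoff(True, p), expected_payoff(False, p))
    # At p = 0.5 two-boxing wins slightly; as the predictor becomes
    # reliable, one-boxing wins by a huge margin.

The dominance argument above corresponds to treating the contents of box B as already fixed; the one-box argument corresponds to trusting that p is high.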

Mutually assured destruction

Two players each have annihilation weapons.

If either player uses their weapons, the other player may respond in kind before they are annihilated. Annihilating your opponent in retaliation does not prevent your own destruction.

  • If your opponent predicts you will not retaliate, they will launch an attack, and win
  • If your opponent predicts you will retaliate, they will not launch an attack, and you survive

Game theory says you must retaliate. If your opponent attacks anyway, nukes fall, everybody dies.

In order to minimise fatalities, you must be no-retaliate, while appearing to be pro-retaliate.
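The sign change can be sketched the same way (again my own toy model: the opponent reads your apparent disposition, correctly with probability p, and attacks only if they predict you will not retaliate):

    # Expected number of destroyed nations in the MAD game, under the
    # hypothetical model described above.
    def expected_destroyed(appear_retaliator: bool, actually_retaliate: bool,
                           p: float) -> float:
        prob_attack = (1 - p) if appear_retaliator else p
        destroyed_if_attacked = 2 if actually_retaliate else 1
        return prob_attack * destroyed_if_attacked

    for appear in (True, False):
        for actual in (True, False):
            print(appear, actual, expected_destroyed(appear, actual, 0.9))
    # The minimum is appear-retaliator but actually-don't: the mirror
    # image of "appear one-boxer, actually two-box" above.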
