Health, Minds, Personal

Truthiness & COVID denial by the dying

Enough people believe enough odd things that I was not surprised when I learned of COVID deniers; not just because the same happened a century ago with influenza, but also because of my own former (teenage, now embarrassing) sincere belief in the occult.

Indeed, even when it comes to people denying the existence of COVID with their dying breath (and despite claims that these reports are, if not incorrect, then exaggerated), I find the scenario very plausible, thanks to the unfortunate course of my father’s bowel cancer.

Bowel cancer, as you might guess, can require a colectomy and the subsequent use of a colostomy bag. As one function of the colon is to absorb water, bypassing it means you must drink more to compensate. My father did not drink more water, and therefore suffered kidney failure just as I arrived for that year’s family Christmas, so I got to listen to his nurse explaining to him everything I have just written, as the reason he now had an emergency hydration drip going into one arm and an emergency kidney-rescue drug going into the other. Despite this, my father absolutely denied there was anything wrong with how much water he was drinking.

He died two months later.

Futurology, Minds, Philosophy, Politics, SciFi, Technology, Transhumanism

Sufficient technology

Let’s hypothesise sufficient brain scans. As far as I know, we don’t have better than either very low-resolution full-brain imaging (millions of synapses per voxel) or very limited high-resolution imaging (thousands of synapses in total), at least not for living brains. Let’s just pretend, for the sake of argument, that we have synapse-resolution full-brain scans of living subjects.

What are the implications?

  • Is a backup of your mind protected by the right to avoid self-incrimination? What about the minds of your pets?
  • Does a backup need to be punished (e.g. prison) if the person it is made from is punished? What if the offence occurred after the backup was made?
  • If the mind state is running rather than offline cold-storage, how many votes do all the copies get? What if they’re allowed to diverge? Which of them is allowed to access the bank accounts or other assets of the original? Is the original entitled to money earned by the copies?
  • If you memorise something and then get backed up, is that copyright infringement?
  • If a mind can run on silicon for less than the cost of food to keep a human healthy, can anyone other than the foremost mind in their respective field ever be employed?
  • If someone is backed up then the original is killed by someone who knows the person was backed up, is that murder, or is it the equivalent of a serious assault that causes a small duration of amnesia?
Minds, Psychology, Science

Hypothesise first, test later

Brought to you by me noticing that when I watch Kristen Bell playing an awful person in The Good Place, I feel as stressed as when I have to consciously translate to or from German in real-life situations, rather than just in language apps.

Idea:

  • System 2 thinking (effortful) is stressful to the mind in the same way that mild exercise is stressful to the body.
  • Having to think in System 2 continuously is sometimes possible, but how long anyone can keep it up is not a universal constant.
  • Social interactions are smoothed by being able to imagine what other people are thinking.
  • If two minds think in different ways, one or both has to use system 2 thinking to forecast the other. Autistic and neurotypical minds are one of many possible examples in the human realm. Cats and dogs are a common non-human example (“Play! Bark!” “Arg, big scary predator! Hiss!”)
  • Stress makes personal touches and eye contact unpleasant.

Implications:

Autistic people will:

  • Be much less stressed when they are only around other autistic people.
  • Look each other in the eye.
  • Be comfortable hugging each other.
  • Be less likely than allistic people to watch soap operas.
  • Find that jokes based on mind state don’t transfer, but puns do. (The opposite of the German-English joke barrier, as puns don’t translate but The Ministry Of Silly Walks does.)
  • Find that reality shows make no sense, and are somewhere between comedic and confusing in the same way as memes based on “type XYZ and let autocomplete finish the sentence!”

Questions:

  • How do interests, e.g. sports, music, painting, fit into this?
  • Does “gender” work like this? Memes such as “Men are from Mars, women are from Venus” or men finding women confusing and unpredictable come to mind. Also, men’s clubs are a thing, as are women’s. Also the existence of transgender people would pattern-match here. That said, it might just be a cultural barrier, because any group can have a culture and culture is not merely a synonym for political geography.
Minds, Philosophy, Psychology

One person’s nit is another’s central pillar

If one person believes something is absolutely incontrovertibly true, then my first (and demonstrably unhelpful) reaction is that even the slightest demonstration of error should demolish the argument.

I know this doesn’t work.

People don’t make Boolean-logical arguments; they go with gut feelings that act much like Bayesian-logical inferences. If someone says something is incontrovertible, the incontrovertibility isn’t their central pillar — when I treated it as one, I totally failed to change their minds.

Steel-man the arguments you attack: go for your opponent’s strongest point, but make sure it’s the point your opponent treats as their strongest, for if you make the mistake I have made, you will fail.

If your Bayesian prior is 99.9%, you might reasonably (in common use of the words) say the evidence is incontrovertible; someone who hears “incontrovertible” and points out a minor edge case isn’t going to shift your posterior odds by much, are they?

They do? Are we thinking of the same things here? I don’t mean claims where absolute truth is possible (e.g. maths, although I’ve had someone argue with me about that in a remarkably foolish way too), I mean observations about reality, which are necessarily flawed. Flawed, and sometimes circular.

Concrete example, although I apologise to any religious people in advance if I accidentally nut-pick. Imagine a Bible-literalist Christian called Chris (who thinks only 144,000 will survive the apocalypse, and no I’m not saying Chris is a Jehovah’s Witness, they’re just an example of 144k beliefs) arguing with Atheist Ann, specifically about “can God make a rock so heavy that God cannot move it?”:

P(A) = 0.999 (Bayesian prior: how certain Chris’s belief in God is)
P(B) = 1.0 (Observation: the argument has been made and Ann has not been struck down)
P(B|A) = 0.99979 (Probability that God has not struck down Ann for blasphemy, given that God exists — in the Bible, God has sometimes struck down non-believers, so let’s say about 21 million deaths out of the 100 billion humans who have ever lived, to cover the flood, noting that most of those were not in the 144k)

P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/1.0 = 0.99879021

Almost unchanged.

It gets worse; the phrase “I can’t believe what I’m hearing!” means P(B) is less than 1.0. If P(B) is less than 1.0 but all the rest is the same:

P(B) = 0.9 → P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/0.9 = 1.1097669

Oh no, it went up! Also, probability error: a probability can never exceed 1.0! P>1.0 would be a problem if I were discussing real probabilities (if this were a maths test it would fail, because P(B|A) should be reduced correspondingly), but people demonstrably don’t update all of their internal model at the same time: if they did, cognitive dissonance would be impossible. Depending on the level of the thinking (I suspect direct processing in synapses won’t do this, but that deliberative conscious thought can), we can sometimes fall into such traps, which neatly explains another observation: some people take the mere existence of people who disagree with them as a reason to believe even more strongly.
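
For anyone who wants to poke at the arithmetic, here is a minimal sketch in Python of the two calculations above; the numbers are the same made-up values from the example, not real data.

    p_a = 0.999              # Chris's prior that God exists
    p_b_given_a = 0.99979    # P(Ann has not been struck down | God exists)

    # Case 1: the observation is taken at face value, so P(B) = 1.0
    p_b = 1.0
    print(p_b_given_a * p_a / p_b)   # ~0.99879: almost unchanged

    # Case 2: "I can't believe what I'm hearing!" gives P(B) = 0.9,
    # while forgetting to reduce P(B|A) to match
    p_b = 0.9
    print(p_b_given_a * p_a / p_b)   # ~1.1098: a "probability" above 1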

Minds, Politics

Bayesian Brexit

I’ve decided to start making written notes every time I catch myself in a cognitive trap or bias. I tried just noticing them, but then I noticed I kept forgetting them, and that’s no use.

If you tell me Brexit will be good for the economy, then I automatically think you know too little about economics to be worth listening to. If you tell a Leaver that Brexit will be bad for the economy, then they automatically think you know too little about economics to be worth listening to.

Both of these are fixed points for Bayesian inference, a trap from which it is difficult to escape. The prior is 100% that «insert economic forecast here» is correct, and it must therefore follow that anything contradicting the forecast is wrong.
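
To spell that out in the same notation as above, with A standing for “this economic forecast is correct”, B for whatever evidence turns up, and ¬A meaning “not A”:

P(A) = 1.0, so P(B) = P(B|A)×P(A) + P(B|¬A)×P(¬A) = P(B|A)
P(A|B) = P(B|A)×P(A)/P(B) = P(B|A)×1.0/P(B|A) = 1.0

Whatever B turns out to be, the posterior is still exactly 1.0; no observation can move it.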

That said, it is possible for evidence to change my mind (and hopefully the average Leaver’s). Unfortunately, like seeing the pearly gates, the only evidence I can imagine would be too late to be actionable — for both Leavers and Remainers — because that evidence is in the form “wait for Brexit and see what happens”.

Is there any way to solve this? Is it a problem of inferential distance, masquerading as a Bayesian prior?

Feels like this also fits my previous post about the dynamic range of Bayesian thought.

Health, Minds, Psychology

Attention ec — ooh, a squirrel

[Side image: a list of my current YouTube subscriptions. It’s very long.]

The zeitgeist seems to be moving away from filling all our time with things and being hyper-connected, and towards rarer, more meaningful connections.

It’s… disturbing and interesting at the same time to realise that the attention-grabbing nature of all the things I enjoy has been designed to fit me, and all of us, perfectly, by the same survival-of-the-fittest logic that drives natural evolution.

That which best grabs the attention, thrives. That which isn’t so powerful, doesn’t.

And when we develop strategies to defend ourselves against certain attention-grabbers, their place is taken by other attention-grabbers using approaches we have not yet defended against.

A memetic arms race, between mental hygiene and thought germs.

I’ve done stuff in the last three months, but that stuff hasn’t included “finish editing the next draft of my novel”. I could have, if only I’d made time for that instead of drinking from the (effectively) bottomless well of high-quality YouTube content (see the side image for my active subscriptions; I also have to make a conscious effort not to click on the interesting clips from TV shows that probably shouldn’t even be on YouTube in the first place). Even though I watch most content sped up to 1.5 or 2 times normal speed, I can barely find time for all the new YouTube content I care about, do my online language courses, and make time for other things like finding a job.

Editing my novel? It’s right there, on my task list… but I barely touch it, even though it’s fulfilling to work on it, and fun to re-read. I don’t know if this is ego depletion or akrasia or addiction, but whatever it is, it’s an undesirable state.

I’m vulnerable to comments sections, too. Of course, I can do something about those — when I notice myself falling into a trap, I can block the relevant domain name in my hosts file. I have a lot of stuff in that file these days, and even then I slip up a bit, because I can’t edit my iPhone’s hosts file.

Now that I know there’s a problem, I’m working on it… just like everyone else. The irony is, by disconnecting from the hyper-connected always-on parts of the internet, we’re not around to help each other when we slip up.

CGPGrey: Thinking About Attention — Walk with Me — https://www.youtube.com/watch?v=wf2VxeIm1no

CGPGrey: This Video Will Make You Angry — https://www.youtube.com/watch?v=rE3j_RHkqJc

Elsewhere on this blog: Hyperinflation in the attention economy: what succeeds adverts?

AI, Minds, Philosophy, Politics

A.I. safety with Democracy?

Common path of discussion:

Alice: A.I. can already be dangerous, even though it’s currently narrow intelligence only. How do we make it safe before it’s general intelligence?

Bob: Democracy!

Alice: That’s a sentence fragment, not an answer. What do you mean?

Bob: Vote for what you want the A.I. to do 🙂

Alice: But people ask for what they think they want instead of what they really want — this leads to misaligned incentives/paperclip optimisers, or pathological focus on universal instrumental goals like money or power.

Bob: Then let’s give the A.I. to everyone, so we’re all equal and anyone who tells their A.I. to do something daft can be countered by everyone else.

Alice: But that assumes the machines operate at the same speed we do. If we assume that an A.G.I. can be made by duplicating a human brain’s connectome in silicon — mapping synapses to transistors — then even with no more Moore’s Law, an A.G.I. would outpace our thoughts by the same margin that a pack of wolves outpaces continental drift (and in the volume of a few dozen grains of sand).

Because we’re much too slow to respond to threats ourselves, any helpful A.G.I. working to stop a harmful A.G.I. would have to know what to do before we told it; yet if we knew how to make them work like that, then we wouldn’t need to, as all A.G.I. would stop themselves from doing anything harmful in the first place.

Bob: Balance of powers, just like governments — no single A.G.I. can get too big, because all the other A.G.I. want the same limited resource.

Alice: Keep reading that educational webcomic. Even in the human case (and we can’t trust our intuition about the nature of an arbitrary A.G.I.), separation of powers only works if you can guarantee that those who seek power don’t collude. Since humans do collude, an A.G.I. (even one which seeks power only as an instrumental goal for some other cause) can be expected to collude with other similar A.G.I. (“A.G.I.s”? How do you pluralise an initialism?)


There’s probably something that should follow this, but I don’t know what, as real conversations usually go stale well before my final Alice response (and even that might have been too harsh and conversation-stopping; I’d like to dig deeper and find out what happens next).

I still think we ultimately want “do what I meant, not what I said”, but at best that’s really hard to specify, and at worst I’m starting to worry that some (too many?) people may be unable to cope with the possibility that some of the things they want are incoherent or self-contradictory.

Whatever the solution, I suspect that politics and economics both have a lot of lessons available to help the development of safe A.I. — both limited A.I. that currently exists and also potential future tech such as human-level general A.I. (perhaps even super-intelligence, but don’t count on that).

Minds

Dynamic range of Bayesian thought

We naturally use something close to Bayesian logic when we learn and intuit, and Bayesian logic doesn’t update when the prior is 0 or 1. Some people can’t shift their opinions no matter what evidence they’re shown, which is compatible with them holding priors of exactly 0 or 1.

It would be implausible for humans to store neural weights with ℝeal numbers. How many bits (base-2) do we use to store our implicit priors? My gut feeling says it’s a shockingly small number, perhaps 4.
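
As a toy illustration (a minimal sketch in Python, assuming 4-bit fixed-point storage; the numbers are invented), a merely-strong belief rounds straight up to certainty at that precision, and once it is stored as 1.0 no amount of contrary evidence can bring it back down:

    def quantise(p, bits=4):
        # Round a probability to the nearest value representable in `bits` bits.
        levels = 2 ** bits - 1
        return round(p * levels) / levels

    def update(prior, p_e_if_true, p_e_if_false, bits=4):
        # One Bayes update, then re-quantise the posterior for storage.
        p_e = p_e_if_true * prior + p_e_if_false * (1 - prior)
        return quantise(p_e_if_true * prior / p_e, bits)

    belief = quantise(0.98)      # with 4 bits, 0.98 rounds straight up to 1.0
    for _ in range(10):
        # strong contrary evidence: 10% likely if the belief is true,
        # 90% likely if it is false
        belief = update(belief, 0.1, 0.9)
    print(belief)                # still 1.0: the prior is trapped

Stored at full precision, the same evidence would drag 0.98 down below 0.85 after a single update; the trap is entirely an artefact of the limited dynamic range.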

How little evidence do we need to become trapped in certainty? Is that threshold even constant (or close to constant) across all humans?
