Futurology, Politics, AI, Minds

A new meaning to the words “Thought crime”: a crime for which the only evidence is a scan of your brain.


Thought crime in 2084

Aside
Minds, Philosophy, Psychology

One person’s nit is another’s central pillar

If one person believes something is absolutely incontrovertibly true, then my first (and demonstrably unhelpful) reaction is that even the slightest demonstration of error should demolish the argument.

I know this doesn’t work.

People don’t make Boolean-logical arguments; they go with gut feelings that act much like Bayesian-logical inferences. If someone says something is incontrovertible, the incontrovertibility isn’t their central pillar — when I treated it as one, I totally failed to change their minds.

Steel-man your opponent’s arguments. Go for their strongest point, but make sure it is the point they are treating as their strongest; if you make the mistake I have made, you will fail.

If your Bayesian prior is 99.9%, you might reasonably (in common use of the words) say the evidence is incontrovertible; someone who hears “incontrovertible” and points out a minor edge case isn’t going to shift your posterior odds by much, are they?

They do? Are we thinking of the same things here? I don’t mean claims where absolute truth is possible (e.g. maths, although I’ve had someone argue with me about that in a remarkably foolish way too); I mean observations about reality, which are necessarily flawed. Flawed, and sometimes circular.

Concrete example, although I apologise in advance to any religious people if I accidentally nut-pick. Imagine a Bible-literalist Christian called Chris (who thinks only 144,000 will survive the apocalypse; no, I’m not saying Chris is a Jehovah’s Witness, they’re just a well-known example of 144k beliefs) arguing with Atheist Ann, specifically about “can God make a rock so heavy that God cannot move it?”:

P(A) = 0.999 (Bayesian prior: how certain Chris’s belief in God is)
P(B) = 1.0 (Observation: the argument has been made and Ann has not been struck down)
P(B|A) = 0.99979 (Probability that God has not struck down Ann for blasphemy, given that God exists. In the Bible, God has sometimes struck down non-believers, so let’s say about 21 million such deaths, mostly the flood, out of the roughly 100 billion humans who have ever lived, noting that most of those 100 billion were not in the 144k; 21 million ÷ 100 billion = 0.00021, so P(B|A) = 1 − 0.00021 = 0.99979)

P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/1.0 = 0.99879021

Almost unchanged.
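For anyone who wants to poke at the numbers, here is a minimal sketch of that update in Python; the figures are the ones above, but the helper function and variable names are mine, purely for illustration.

```python
# Minimal sketch of the update above; the numbers come from the post,
# the helper itself is only illustrative.
def bayes_update(p_a, p_b_given_a, p_b):
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_a = 0.999                     # Chris's prior that God exists
p_b_given_a = 1 - 21e6 / 100e9  # = 0.99979: Ann not struck down, given that God exists
p_b = 1.0                       # the argument has been made and Ann is fine

print(bayes_update(p_a, p_b_given_a, p_b))  # ≈ 0.99879021, almost unchanged
```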

It gets worse; the phrase “I can’t believe what I’m hearing!” means P(B) is less than 1.0. If P(B) is less than 1.0 but all the rest is the same:

P(B) = 0.9 → P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/0.9 = 1.1097669

Oh no, it went up! Also, that’s a probability error: a probability can never exceed 1.0. P > 1.0 would be a problem if I were discussing real probabilities — if this were a maths test, this would fail (P(B|A) should be reduced correspondingly) — but people demonstrably don’t update their whole internal model at the same time: if we did, cognitive dissonance would be impossible.

Depending on the level of the thinking (I suspect direct processing in synapses won’t do this, but that deliberative conscious thought can), we can sometimes fall into this trap, which neatly explains another observation: some people can take the mere existence of people who disagree with them as a reason to believe even more strongly.
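To make that inconsistency concrete, here is a sketch of both the naive update and a fully self-consistent one; the P(B|¬A) = 0.9 figure is a made-up stand-in of mine, not something from the example above.

```python
p_a, p_b_given_a = 0.999, 0.99979

# The naive, internally inconsistent update: only P(B) is lowered.
print(p_b_given_a * p_a / 0.9)  # ≈ 1.1098, not a valid probability

# A consistent update derives P(B) from the law of total probability, which
# forces us to state P(B|not A) explicitly; 0.9 here is an arbitrary choice.
p_b_given_not_a = 0.9
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
print(p_b_given_a * p_a / p_b)  # ≈ 0.9991, safely below 1 again
```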

Standard
Minds, Politics

Bayesian Brexit

I’ve decided to start making written notes every time I catch myself in a cognitive trap or bias. I tried just noticing them, but then I noticed I kept forgetting them, and that’s no use.

If you tell me Brexit will be good for the economy, then I automatically think you know too little about economics to be worth listening to. If you tell a Leaver that Brexit will be bad for the economy, then they automatically think you know too little about economics to be worth listening to.

Both of these are fixed points of Bayesian inference, a trap from which it is difficult to escape. The prior is 100% that «insert economic forecast here» is correct, and it must therefore follow that anything contradicting the forecast is wrong.
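A toy illustration of that trap (all numbers invented): with a prior of exactly 1, Bayes’ theorem hands the prior straight back no matter how strongly the evidence points the other way, whereas even a 99.9% prior at least budges.

```python
# Toy illustration of the fixed point; every number here is made up.
def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """P(A|B), with P(B) taken from the law of total probability."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Evidence 99 times more likely if the forecast is wrong than if it is right:
print(posterior(1.0, 0.01, 0.99))    # 1.0: a 100% prior never moves
print(posterior(0.999, 0.01, 0.99))  # ≈ 0.91: a 99.9% prior shifts, if only slightly
```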

That said, it is possible for evidence to change my mind (and hopefully the average Leaver’s). Unfortunately, like seeing the pearly gates, the only evidence I can imagine would be too late to be actionable — for both Leavers and Remainers — because that evidence is in the form “wait for Brexit and see what happens”.

Is there any way to solve this? Is it a problem of inferential distance, masquerading as a Bayesian prior?

Feels like this also fits my previous post about the dynamic range of Bayesian thought.

Standard
Health, Minds, Psychology

Attention ec — ooh, a squirrel

List of my current YouTube subscriptions. It's very long.

I think the zeitgeist is moving away from filling all our time with things and being hyper-connected, and towards rarer, more meaningful connections.

It’s… disturbing and interesting at the same time to realise that the attention-grabbing nature of all the things I enjoy has been designed to fit me, and all of us, perfectly by the same survival-of-the-fittest logic that drives natural evolution.

That which best grabs the attention, thrives. That which isn’t so powerful, doesn’t.

And when we develop strategies to defend ourselves against certain attention-grabbers, their place is taken by attention-grabbers using approaches we have not yet learned to resist.

A memetic arms race, between mental hygiene and thought germs.

I’ve done stuff in the last three months, but that stuff hasn’t included “finish editing next draft of my novel”. I could have, if only I’d made time for that instead of drinking from the (effectively) bottomless well of high-quality YouTube content (see the side-image for my active subscriptions; I also have to make a conscious effort not to click on the interesting clips from TV shows that probably shouldn’t even be on YouTube in the first place). Even though I watch most content at 1.5× or 2× speed, I can barely find time for all the new YouTube content I care about, my online language courses, and the other things like finding a job.

Editing my novel? It’s right there, on my task list… but I barely touch it, even though it’s fulfilling to work on it, and fun to re-read. I don’t know if this is ego depletion or akrasia or addiction, but whatever it is, it’s an undesirable state.

I’m vulnerable to comments sections, too. Of course, I can do something about those — when I notice myself falling into a trap, I can block the relevant domain name in my hosts file. I have a lot of entries in that file these days, and even then I slip up a bit, because I can’t edit my iPhone’s hosts file.

Now that I know there’s a problem, I’m working on it… just like everyone else. The irony is, by disconnecting from the hyper-connected always-on parts of the internet, we’re not around to help each other when we slip up.

CGPGrey: Thinking About Attention — Walk with Me — https://www.youtube.com/watch?v=wf2VxeIm1no

CGPGrey: This Video Will Make You Angry — https://www.youtube.com/watch?v=rE3j_RHkqJc

Elsewhere on this blog: Hyperinflation in the attention economy: what succeeds adverts?

Standard
AI, Minds, Philosophy, Politics

A.I. safety with Democracy?

Common path of discussion:

Alice: A.I. can already be dangerous, even though it’s currently narrow intelligence only. How do we make it safe before it’s general intelligence?

Bob: Democracy!

Alice: That’s a sentence fragment, not an answer. What do you mean?

Bob: Vote for what you want the A.I. to do 🙂

Alice: But people ask for what they think they want instead of what they really want — this leads to misaligned incentives/paperclip optimisers, or pathological focus on universal instrumental goals like money or power.

Bob: Then let’s give the A.I. to everyone, so we’re all equal and anyone who tells their A.I. to do something daft can be countered by everyone else.

Alice: But that assumes the machines operate at the same speed we do. If we assume that an A.G.I. can be made by duplicating a human brain’s connectome in silicon — mapping synapses to transistors — then even with no more Moore’s Law, an A.G.I. would out-pace our thoughts by the same margin that a pack of wolves outpaces continental drift (and the volume of a few dozen grains of sand).

Because we’re much too slow to respond to threats ourselves, any helpful A.G.I. working to stop a harmful A.G.I. would have to know what to do before we told it; yet if we knew how to make them work like that, then we wouldn’t need to, as all A.G.I. would stop themselves from doing anything harmful in the first place.

Bob: Balance of powers, just like governments — no single A.G.I. can get too big, because all the other A.G.I. want the same limited resource.

Alice: Keep reading that educational webcomic. Even in the human case (and we can’t trust our intuition about the nature of an arbitrary A.G.I.), separation of powers only works if you can guarantee that those who seek power don’t collude. As humans collude, an A.G.I. (even one which seeks power only as an instrumental goal for some other cause) can be expected to collude with other similar A.G.I. (“A.G.I.s”? How do you pluralise an initialism?)


There’s probably something that should follow this, but I don’t know what, as real conversations usually go stale well before my final Alice response (and even that might have been too harsh and conversation-stopping; I’d like to dig deeper and find out what happens next).

I still think we ultimately want “do what I meant, not what I said”, but at best that’s really hard to specify, and at worst I’m starting to worry that some (too many?) people may be unable to cope with the possibility that some of the things they want are incoherent or self-contradictory.

Whatever the solution, I suspect that politics and economics both have a lot of lessons available to help the development of safe A.I. — both limited A.I. that currently exists and also potential future tech such as human-level general A.I. (perhaps even super-intelligence, but don’t count on that).

Standard
Minds

Dynamic range of Bayesian thought

We naturally use something close to Bayesian logic when we learn and intuit. Bayesian logic doesn’t update when the prior is 0 or 1, and some people can’t shift their opinions no matter what evidence they have. That is at least compatible with them holding priors of 0 or 1.

It would be implausible for humans to store neural weights with ℝeal numbers. How many bits (base-2) do we use to store our implicit priors? My gut feeling says it’s a shockingly small number, perhaps 4.

How little evidence do we need to become trapped in certainty? Is it even constant (or close to) for all humans?
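Here is a toy simulation of that idea, assuming (entirely speculatively) that a prior is stored as a 4-bit fraction, updated by modestly persuasive evidence with a 3:1 likelihood ratio, and re-rounded after every update; both figures are mine, chosen only to show how quickly a coarsely stored prior can saturate.

```python
# Toy model: a prior stored with 'bits' of precision, updated by Bayes' rule
# in odds form and rounded back onto the grid each time. Parameters are illustrative.
def updates_until_certain(prior=0.5, likelihood_ratio=3.0, bits=4, max_steps=100):
    levels = 2 ** bits - 1                       # a 4-bit prior has only 16 possible values
    p = round(prior * levels) / levels
    for step in range(1, max_steps + 1):
        odds = (p / (1 - p)) * likelihood_ratio  # Bayes' rule in odds form
        p = round(odds / (1 + odds) * levels) / levels
        if p in (0.0, 1.0):                      # absorbing states: no further evidence can move them
            return step
    return None                                  # never saturated within max_steps

print(updates_until_certain())  # 3: three mildly persuasive observations and the prior is stuck at 1
```

With these made-up numbers, three consistent nudges in the same direction are enough to hit an absorbing state; a full-precision prior would creep towards 1 but never actually reach it.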

Standard