Politics, Psychology

Brexit as an example of failure to comprehend conditional probabilities

It’s been 2 years 5 months 12 days 12 hours 25 minutes since my first post about Brexit, and I still don’t really know what will happen.

Almost everything is conditional probability: “If there’s a hard (no-deal) Brexit, then the traffic jams from Dover, Harwich/Felixstowe, etc. will be about as long as is physically possible given the number of trucks in the UK.”

Conditional probabilities suck for human-human interactions.

Any pro-Brexit person who reads that will tend, I think, to remember that prediction without the “if” clause; and if there is a deal, they will be much more likely to crow about it as “yet another Remoaner failure”. (Also often missed: “much more likely” doesn’t mean “will”, but I keep seeing people read it that way).
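To make concrete how much weight the “if” clause carries, here’s the law of total probability with entirely invented numbers (a sketch of the arithmetic, not a forecast):

```python
# All numbers invented purely to illustrate dropping the "if" clause.
p_no_deal = 0.3             # assumed chance of a no-deal Brexit
p_jams_given_no_deal = 0.9  # the conditional prediction itself
p_jams_given_deal = 0.1     # assumed chance of huge jams anyway

# Unconditionally, jams are far less likely than the headline 90%:
p_jams = (p_jams_given_no_deal * p_no_deal
          + p_jams_given_deal * (1 - p_no_deal))
print(p_jams)  # 0.34 -- remembering "90% chance of jams" misreads the claim
```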

This sort of prediction tends to be used by propagandists as a modern-day Cassandra: the worst-case scenario is so bad that everyone strives to prevent it — in this case by not countenancing a no-deal scenario — and so, when it doesn’t come to pass, it becomes just another reason to distrust the experts who fought against it. In turn, this increases the chances of something as bad as a no-deal scenario next time. I see the same thing with the Millennium Bug, and saw it as a kid with global warming (no, it wasn’t scientists who told you we were heading straight into an ice age, it was newspapers; any scientists talking about an ice age meant the end of an interglacial warm period, and even then they were outnumbered 6-to-1 by those concerned about things getting warmer).

I can think of only three predictions about post-Brexit Great Britain (yes, I do mean GB, not UK: I know too little about NI) that are not conditional on the final form Brexit takes:

  1. The UK population will still be arguing about it
  2. These arguments will involve at least one large march (≥10,000 people)
  3. At least one building will be set on fire (or an attempt will be made: they’re designed to not be on fire)

I put at least 1-in-3 odds that events 2 and 3 will be performed by both angry Leavers and angry Remainers.

Minds, Philosophy, Psychology

One person’s nit is another’s central pillar

If one person believes something is absolutely incontrovertibly true, then my first (and demonstrably unhelpful) reaction is that even the slightest demonstration of error should demolish the argument.

I know this doesn’t work.

People don’t make Boolean-logical arguments; they go with gut feelings that act much like Bayesian-logical inferences. If someone says something is incontrovertible, the incontrovertibility isn’t their central pillar — when I treated it as one, I totally failed to change their minds.

Steel-man your arguments. Go for your opponent’s strongest point, but make sure it’s the point your opponent treats as their strongest, for if you make the mistake I have made, you will fail.

If your Bayesian prior is 99.9%, you might reasonably (in common use of the words) say the evidence is incontrovertible; someone who hears “incontrovertible” and points out a minor edge case isn’t going to shift your posterior odds by much, are they?

They do? Are we thinking of the same things here? I don’t mean domains where absolute truth is possible (i.e. maths, although I’ve had someone argue with me about that in a remarkably foolish way too); I mean observations about reality, which are necessarily flawed. Flawed, and sometimes circular.

Concrete example, although I apologise to any religious people in advance if I accidentally nut-pick. Imagine a Bible-literalist Christian called Chris (who thinks only 144,000 will survive the apocalypse; and no, I’m not saying Chris is a Jehovah’s Witness, they’re just an example of 144k beliefs) arguing with Atheist Ann, specifically about “can God make a rock so heavy that God cannot move it?”:

P(A) = 0.999 (Bayesian prior: Chris’s degree of belief that God exists)
P(B) = 1.0 (Observation: the argument has been made and Ann has not been struck down)
P(B|A) = 0.99979 (Probability that God has not struck Ann down for blasphemy, given that God exists — in the Bible, God sometimes strikes down non-believers, so let’s say about 21 million deaths out of the 100 billion humans who have ever lived, to cover the Flood, noting that most of those were not in the 144k)

P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/1.0 = 0.99879021

Almost unchanged.

It gets worse; the phrase “I can’t believe what I’m hearing!” means P(B) is less than 1.0. If P(B) is less than 1.0 but all the rest is the same:

P(B) = 0.9 → P(A|B) = P(B|A)P(A)/P(B) = 0.99979×0.999/0.9 = 1.1097669

Oh no, it went up! Also, probability error: a probability can never exceed 1.0! P > 1.0 would be a problem if I were discussing real probabilities — if this were a maths test it would fail (P(B|A) should be reduced correspondingly) — but people demonstrably don’t update their whole internal model at the same time: if we did, cognitive dissonance would be impossible. Depending on the level of the thinking (I suspect direct processing in synapses won’t do this, but that deliberative conscious thought can) we can sometimes fall into this trap, which would explain another observation: some people can take the mere existence of people who disagree with them as a reason to believe even more strongly.
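For anyone who wants to poke at the arithmetic, here it is as a few lines of Python (the same illustrative numbers as above, nothing more):

```python
def posterior(p_a, p_b_given_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_a = 0.999            # Chris's prior that God exists
p_b_given_a = 0.99979  # P(Ann goes un-smited | God exists)

print(posterior(p_a, p_b_given_a, p_b=1.0))  # 0.99879021 -- almost unchanged
print(posterior(p_a, p_b_given_a, p_b=0.9))  # 1.1097669  -- not a valid probability

# A coherent reasoner who lowers P(B) would lower P(B|A) to match;
# skipping that update is exactly the inconsistency described above.
```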

Health, Minds, Psychology

Attention ec — ooh, a squirrel

[Image: a list of my current YouTube subscriptions. It’s very long.]

The zeitgeist seems to be moving away from filling all our time with things and being hyper-connected, and towards rarer, more meaningful connections.

It’s… disturbing and interesting at the same time to realise that the attention-grabbing nature of all the things I enjoy has been designed to fit me, and all of us, perfectly, by the same survival-of-the-fittest logic that drives natural evolution.

That which best grabs the attention, thrives. That which isn’t so powerful, doesn’t.

And when we develop strategies to defend ourselves against certain attention-grabbers, others using approaches we have not yet defended against take their place.

A memetic arms race, between mental hygiene and thought germs.

I’ve done stuff in the last three months, but that stuff hasn’t included “finish editing the next draft of my novel”. I could have, if only I’d made time for it instead of drinking from the (effectively) bottomless well of high-quality YouTube content (see the side-image for my active subscriptions; I also have to make a conscious effort not to click on the interesting clips from TV shows that probably shouldn’t be on YouTube in the first place). Even though I watch most content at 1.5× or 2× speed, I can barely find time for all the new YouTube content I care about, do my online language courses, and make time for other things, like finding a job.

Editing my novel? It’s right there, on my task list… but I barely touch it, even though it’s fulfilling to work on it, and fun to re-read. I don’t know if this is ego depletion or akrasia or addiction, but whatever it is, it’s an undesirable state.

I’m vulnerable to comments sections, too. Of course, I can do something about those: when I notice myself falling into a trap, I can block the relevant domain name in my hosts file. I have a lot of entries in that file these days, and even then I slip up a bit, because I can’t edit my iPhone’s hosts file.
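For anyone unfamiliar with the trick, the relevant part of a hosts file looks something like this (domain names invented for the example):

```
# /etc/hosts entries: route distracting sites to an unreachable address,
# so the browser simply fails to connect.
0.0.0.0  distracting-news.example
0.0.0.0  comments.example
0.0.0.0  www.timesink.example
```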

Now that I know there’s a problem, I’m working on it… just like everyone else. The irony is, by disconnecting from the hyper-connected always-on parts of the internet, we’re not around to help each other when we slip up.

CGPGrey: Thinking About Attention — Walk with Me — https://www.youtube.com/watch?v=wf2VxeIm1no

CGPGrey: This Video Will Make You Angry — https://www.youtube.com/watch?v=rE3j_RHkqJc

Elsewhere on this blog: Hyperinflation in the attention economy: what succeeds adverts?

Psychology, Software, Technology

Social media compulsion

He flashed up a slide of a shelf filled with sugary baked goods. “Just as we shouldn’t blame the baker for making such delicious treats, we can’t blame tech makers for making their products so good we want to use them,” he said. “Of course that’s what tech companies will do. And frankly: do we want it any other way?”

— The Guardian, ‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia

I can, in fact, blame bakers. It’s easy: I do it in the same way I blame cigarette manufacturers. In all three cases (sugar/fat/flavour combinations, nicotine, social rewards) they exploit chemical pathways in our brains to get us to do something not in our best interests. They are supernormal stimuli — and given how recent the research is, I can forgive the early tobacconists and confectioners, but tech doesn’t get the luxury of ignorance-as-an-excuse.

I want my technology to be a tool which helps me get stuff done.

A drill is something I pick up, use to make a hole, then put down and forget about until I want to make another hole.

I don’t want a drill which is cursed so that if I ever put it down, I start to feel bad about not making more holes in things, and end up staying up late at night just to find yet one more thing I can drill into.

If I saw in a shop a drill which I knew would do that, I wouldn’t get it even if it was free, never broke, the (included) battery lasted a lifetime, etc. — the cost to the mind wouldn’t be worth it.

The same is true for the addictive elements of social media: I need to be connected to my friends, but I’d rather spend money than risk addiction.

Psychology

Unlearnable

How many things are there which one cannot learn, no matter how much effort is spent trying?

I’m aware that things like conscious control of intestinal peristalsis would probably fit this question (I mean, who would’ve tried?), but I’m not interested in purely autonomic stuff.

Assuming the stereotypes are correct, I mean stuff like adults being unable to fully cross the Chinese-English language barrier in either direction if they didn’t learn both languages as children. (If you read out The Lion-Eating Poet in the Stone Den, I can tell that the Shis are different from each other, but I can’t tell whether the differences I hear convey differences of meaning or are just natural variation, of the sort I produce if I say “Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo”; I’m told this difficulty persists even with practice. In reverse, the ‘R’/’L’ error is a common stereotype of Chinese people speaking English.) Is there something like that for visual recognition? Some people cannot recognise faces; is there an equivalent affecting all humans: something where no human can recognise which of two things they are looking at, even when we know a difference exists?

Languages in general seem to be extremely difficult for most adults: personally, I’ve never been able to get my mind around all the tenses of irregular verbs in French… but is that genuinely unlearnable, or something I could overcome with perseverance? I found German quite straightforward, so there may be something else going on.

Are there any other possibilities? Some people struggle with maths: is it genuinely unlearnable by the majority, or just difficult and lacking motivation? Probability in particular comes to mind, because people can have the Monty Hall problem explained to them and still not get it.
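Explanations so often fail here that simulation may do better; the sketch below (Python; trial count arbitrary) just plays the game many times and counts wins:

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Play the Monty Hall game `trials` times; return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first choice
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:  # move to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stick:  {monty_hall(switch=False):.3f}")  # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667
```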

One concept I’ve only just encountered, but which suddenly makes sense of a lot of behaviour I’ve seen in politics, is called Morton’s demon, by analogy with Maxwell’s demon: a filter at the level of perception which lets people ignore, and reject without consideration, facts which ought to change their minds. It feels — and I recognise with amusement the oddity of using system 1 thinking at this point — like something more powerful than cherry picking, cognitive dissonance, confirmation bias, etc., and with regard to system 2 thinking it certainly represents the sort of “unlearnable” I have in mind.

AI, Futurology, Philosophy, Psychology, Science

How would you know whether an A.I. was a person or not?

I did an A-level in Philosophy. (For non-UK people: A-levels are a two-year course taken after high school and before university.)

I did it for fun rather than good grades — I had enough good grades to get into university, and when the other A-levels required my focus, I was fine putting zero further effort into the Philosophy course. (Something which was very clear when my final results came in).

What I didn’t expect at the time was that the rapid development of artificial intelligence in my lifetime would make it absolutely vital that humanity develops a concrete and testable understanding of what counts as a mind, as consciousness, as self-awareness, and as capability to suffer. Yes, we already have that problem in the form of animal suffering and whether meat can ever be ethical, but that problem exists only for our consciences: the animals can’t take over the world and treat us the way we treat them. An artificial mind, on the other hand, would be almost totally pointless if it were as limited as an animal, and the general aim is quite a lot higher than that.

Some fear that we will replace ourselves with machines which may be very effective at what they do, but don’t have anything “that it’s like to be”. One of my fears is that we’ll make machines that do “have something that it’s like to be”, but who suffer greatly because humanity fails to recognise their personhood. (A paperclip optimiser doesn’t need to hate us to kill us, but I’m more interested in the sort of mind that can feel what we can feel).

I don’t have a good description of what I mean by any of the normal words. Personhood, consciousness, self awareness, suffering… they all seem to skirt around the core idea, but to the extent that they’re correct, they’re not clearly testable; and to the extent that they’re testable, they’re not clearly correct. A little like the maths-vs.-physics dichotomy.

Consciousness? Versus what, subconscious decision-making? Isn’t this distinction merely system 1 vs. system 2 thinking? Even then, the word only tells us what it means to have it subjectively, not objectively. In some ways, some forms of A.I. look like system 1 (fast but error-prone, based on heuristics), while other forms look like system 2 (slow and careful, deliberatively weighing all the options).

Self-awareness? What do we even mean by that? It’s absolutely trivial to make an A.I. aware of its own internal states (indeed, necessary for anything more than a perceptron). Do we mean a mirror test? (Or a non-visual equivalent for non-visual entities, including both blind people and smell-focused animals such as dogs.) That, at least, can be tested.
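To show how trivial that minimal kind of self-awareness is, here is a toy recurrent step (Python with NumPy; all sizes and weights arbitrary) in which the network’s own state is literally one of its inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # input weights (toy dimensions)
U = rng.normal(size=(4, 4))  # recurrent weights
h = np.zeros(4)              # the internal state

for x in rng.normal(size=(5, 3)):  # five arbitrary input steps
    h = np.tanh(W @ x + U @ h)     # the state feeds back into itself
# At every step the computation "sees" its own previous internal state,
# satisfying the letter, if hardly the spirit, of self-awareness.
```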

Capability to suffer? What does that even mean in an objective sense? Is suffering equal to negative reinforcement? If you have only positive reinforcement, is the absence of reward itself a form of suffering?

Introspection? As I understand it, the human psychology of this is that we don’t really introspect; we use system 2 thinking to confabulate justifications for what system 1 thinking made us feel.

Qualia? Sure, but what is one of these as an objective, measurable, detectable state within a neural network, be it artificial or natural?

Empathy or mirror neurons? I can’t decide how I feel about this one. At first glance, if one mind can feel the same as another mind, that seems like it should capture the general ill-defined concept I’m after… but then I realised I don’t see why that would follow, and had the temporarily disturbing mental image of an A.I. which can perfectly mimic the behaviour corresponding to the emotional state of someone it is observing, without actually feeling anything itself.

And then the disturbance went away as I realised this is obviously trivially possible, because even a video recording fits that definition… or, hey, a mirror. A video recording somehow feels like it’s fine, it isn’t “smart” enough to be imitating, merely accurately reproducing. (Now I think about it, is there an equivalent issue with the mirror test?)

So, no, mirror neurons are not enough to be… to have the qualia of being consciously aware, or whatever you want to call it.

I’m still not closer to having answers, but sometimes it’s good to write down the questions.

Health, Psychology

Alzheimer’s

It’s as fascinating as it is sad to watch a relative fall, piece by piece, to Alzheimer’s. I had always thought it was just anterograde and progressive retrograde amnesia of episodic memory, but it’s worse. It’s affecting:

  • Her skills (e.g. how to get dressed, or how much you need to chew in order to swallow).
  • Her semantic knowledge (e.g. [it is dark outside] ⇒ [it is night], or what a bath is for).
  • Her working memory (seemingly reduced to about 4 items: she can draw triangles and squares, but not higher polygons unless you walk her through it; if you draw ◯◯▢◯▢▢ and ask her to count the circles, she says “one (pointing at the second circle), two (pointing at the third circle), that’s a square (pointing at the third square), three (pointing at the second circle again), four (pointing at the third circle again), that’s a pentagon (pointing at the pentagon I walked her through drawing)”; and if she is looking at a group of five cars, she’ll call it “lots of cars” rather than instantly seeing that it’s five).
  • The general concept of things existing on the left side, as looked at. (I always thought this was an urban legend or a misunderstanding of hemianopsia, but she will look at a plate half-covered in food and declare it finished, and rotating the plate 180° will enable her to eat more; if I ask her to draw a picture of me, she’ll stop at the nose and miss my right side (her left); if we get her to draw a clock she’ll usually miss all the numbers, but if prompted to add them will put them only on the side that runs clockwise from 12 to 6.)
  • Connected-ness of objects, such as drawing the handle of a mug connected directly to the rim.
  • Object permanence — if she can’t see a thing, sometimes she forgets that it exists. Fortunately not all the time, but she has asserted non-existence as distinct from “I’ve lost $thing”.
  • Vocabulary. I’m sure everyone can think of a fine example of word soup (I have examples, both of things I’ve said and of frustratingly bad communications from a client), but with her it happens with high and increasing frequency — last night’s example was “this apple juice is much better than the apple juice”.

I know vision doesn’t work the way we subjectively feel it works. I hypothesise that it is roughly:

  1. Eyes →
  2. Object and feature detection, similar to current machine vision →
  3. Something that maps detected objects and features into a model of reality →
  4. “Awareness” is of that model

It fits with the way she’s losing her mind. Bit by bit, her vision seems to be diminishing from a world full of objects to TV static with a few objects floating freely in the noise.
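If I wanted to make that hypothesis concrete, a toy version of stages 2–4 might look like this sketch (Python; the `dropout` knob is pure invention, standing in for damage at stage 3):

```python
import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    position: tuple  # (x, y) in the visual field

def build_world_model(detections, dropout=0.0):
    """Stage 3: keep each detection with probability 1 - dropout.
    Whatever survives is all that stage 4 ('awareness') ever sees."""
    return [d for d in detections if random.random() > dropout]

scene = [Detection("phone box", (10, 20)),
         Detection("flag", (40, 5)),
         Detection("'telephone' sign", (12, 8))]

print(build_world_model(scene, dropout=0.0))  # intact: the whole scene
print(build_world_model(scene, dropout=0.6))  # damaged: objects lost into the static
```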

[Image: an artistic impression of her vision. The picture is mostly hidden by noise, but a red British-style telephone box is visible, along with a shadow and a flag; the ‘telephone’ sign floats freely, away from the box.]

How might she see the world?
