Futurology, Science, AI, Philosophy, Psychology

How would you know whether an A.I. was a person or not?

I did an A-level in Philosophy. (For non-UK people, A-levels are a two-year course taken after high school and before university).

I did it for fun rather than good grades — I had enough good grades to get into university, and when the other A-levels required my focus, I was fine putting zero further effort into the Philosophy course. (Something which was very clear when my final results came in).

What I didn’t expect at the time was that the rapid development of artificial intelligence in my lifetime would make it absolutely vital that humanity develops a concrete and testable understanding of what counts as a mind, as consciousness, as self-awareness, and as the capability to suffer. Yes, we already have this problem in the form of animal suffering and the question of whether meat can ever be ethical, but that existing problem troubles only our consciences: the animals can’t take over the world and treat us the way we treat them. An artificial mind, on the other hand, would be almost totally pointless if it were as limited as an animal, and the general aim is quite a lot higher than that.

Some fear that we will replace ourselves with machines which may be very effective at what they do, but don’t have anything “that it’s like to be”. One of my fears is that we’ll make machines that do “have something that it’s like to be”, but who suffer greatly because humanity fails to recognise their personhood. (A paperclip optimiser doesn’t need to hate us to kill us, but I’m more interested in the sort of mind that can feel what we can feel).

I don’t have a good description of what I mean by any of the normal words. Personhood, consciousness, self awareness, suffering… they all seem to skirt around the core idea, but to the extent that they’re correct, they’re not clearly testable; and to the extent that they’re testable, they’re not clearly correct. A little like the maths-vs.-physics dichotomy.

Consciousness? Versus what, subconscious decision making? Isn’t this distinction merely system 1 vs. system 2 thinking? Even then, the word doesn’t tell us what it means to have it objectively, only subjectively. In some ways, some forms of A.I. look like system 1: fast but error-prone, based on heuristics; while other forms look like system 2: slow, careful, deliberatively weighing all the options.

Self-awareness? What do we even mean by that? It’s absolutely trivial to make an A.I. aware of its own internal states, and even necessary for anything more than a perceptron. Do we mean a mirror test? (Or a non-visual equivalent for non-visual entities, from blind people to smell-focused animals such as dogs.) That at least can be tested.

Capability to suffer? What does that even mean in an objective sense? Is suffering equal to negative reinforcement? If you have only positive reinforcement, is the absence of reward itself a form of suffering?

Introspection? As I understand it, the human psychology of this is that we don’t really introspect; we use system 2 thinking to confabulate justifications for what system 1 thinking made us feel.

Qualia? Sure, but what is one of these as an objective, measurable, detectable state within a neural network, be it artificial or natural?

Empathy or mirror neurons? I can’t decide how I feel about this one. At first glance, if one mind can feel the same as another mind, that seems like it should capture the general ill-defined concept I’m after… but then I realised I don’t see why that would follow, and I was left with the temporarily disturbing mental image of an A.I. which can perfectly mimic the behaviour corresponding to the emotional state of someone it’s observing, without actually feeling anything itself.

And then the disturbance went away as I realised this is obviously, trivially possible, because even a video recording fits that definition… or, hey, a mirror. A video recording somehow feels like it’s fine: it isn’t “smart” enough to be imitating, merely accurately reproducing. (Now that I think about it, is there an equivalent issue with the mirror test?)

So, no, mirror neurons are not enough to be… to have the qualia of being consciously aware, or whatever you want to call it.

I’m still not closer to having answers, but sometimes it’s good to write down the questions.

Science, SciFi, Technology

Kessler-resistant real-life force-fields?

Idle thought at this stage.

The Kessler syndrome (also called the Kessler effect, collisional cascading or ablation cascade), proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low earth orbit (LEO) is high enough that collisions between objects could cause a cascade where each collision generates space debris that increases the likelihood of further collisions.

Kessler syndrome, Wikipedia

If all objects in Earth orbit were required to have an electrical charge (all negative, let’s say), how strong would that charge have to be to prevent collisions?

Also, how long would they remain charged, given the ionosphere, solar wind, Van Allen belts, etc?

Also, how do you apply charge to space junk already present? Rely on it picking up charge when it collides with new objects? Or is it possible to use an electron gun to charge them from a distance? And if so, what’s the trade-off between beam voltage, distance, and maximum charge (presumably shape dependent)?

And if you can apply charge remotely, is this even the best way to deal with them, rather than collecting them all in a large net and de-orbiting them?
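As a very rough sanity check on the first of those questions, here is a minimal back-of-the-envelope sketch. All the numbers in it (two one-tonne objects, a 10 km/s closing speed, a 10 m allowed miss distance, a 1 m sphere for the potential estimate) are illustrative assumptions of mine, not anything derived from orbital mechanics: it simply asks what charge makes the Coulomb potential energy at the miss distance equal the kinetic energy of the encounter.

```python
# Back-of-envelope: charge needed for Coulomb repulsion to prevent a collision.
# Criterion: electrostatic potential energy at the allowed miss distance must
# match the kinetic energy of the encounter in the centre-of-mass frame.
# Every input below is an illustrative assumption, not a value from the post.

K = 8.988e9          # Coulomb constant, N·m²/C²

m1 = m2 = 1000.0     # kg, two ~1-tonne objects (assumed)
v_rel = 10_000.0     # m/s, head-on closing speed in LEO (assumed)
r_miss = 10.0        # m, closest approach we are willing to allow (assumed)

mu = m1 * m2 / (m1 + m2)          # reduced mass of the two-body encounter
e_kin = 0.5 * mu * v_rel**2       # energy the repulsion has to soak up

# U(r) = K·q²/r with equal charges q on both objects  =>  q = sqrt(E·r/K)
q = (e_kin * r_miss / K) ** 0.5

# Surface potential if that charge sat on a 1 m radius conducting sphere
v_surface = K * q / 1.0

print(f"kinetic energy to cancel:  {e_kin:.3e} J")
print(f"charge needed per object:  {q:.2f} C")
print(f"potential of a 1 m sphere: {v_surface:.2e} V")
```

If those assumptions are anywhere near right, the answer comes out at around 5 coulombs per object and a surface potential of tens of gigavolts, which suggests a blanket charge is more plausible as a way of nudging near-misses than of stopping head-on collisions.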

Science

I am not a quantum physicist

I am not a quantum physicist. I do not write this prediction thinking that it is true or a novel deduction on the nature of reality. I write this prediction in order to test my own understanding of quantum physics.

Given all particles are fields:

  1. Fermions are those fields where probability is in the range 0-1 (or possibly -1 to +1, depending on antimatter).
  2. Bosons are those fields where probability can take on any positive or zero value (possibly also any negative value, depending on antimatter).

This “explains” why two fermions cannot occupy the same quantum state, yet bosons can. Inverted quote marks, because this might turn out to not have any explanatory power.

I’m fine with that, just as I’m fine with being wrong. I am not a quantum physicist. I don’t expect to be right. It would be nicer to find I’m wrong rather than not even wrong, but even that’s OK — that’s why I’m writing this down before I see if someone else has already written about this.

Science, Technology

You won’t believe how fast transistors are

A transistor in a CPU is smaller and faster than a synapse in one of your brain’s neurons by about the same ratio that a wolf is smaller and faster than a hill.

Smaller.

And.

Faster.

CPU: 11nm transistors, 30GHz transition rate (transistors flip significantly faster than overall clock speed)

Neurons: 1µm synapses, 200Hz pulse rate

Wolves: 1.6m long, average range 25 km/day

Hills: 145m tall (widely variable, of course), continental drift 2 cm/year

1µm/11nm ≅ 145m/1.6m
200Hz/30GHz ≅ (Continental drift 2 cm/year) / (Average range 25 km/day)
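Spelling the arithmetic out, a quick sketch using the same figures as above:

```python
# Check the size and speed ratios quoted above.

transistor = 11e-9      # m
synapse    = 1e-6       # m
wolf       = 1.6        # m
hill       = 145.0      # m

cpu_rate    = 30e9      # Hz, transistor transition rate
neuron_rate = 200.0     # Hz, pulse rate

wolf_speed = 25_000 / 86_400            # 25 km/day in m/s
hill_speed = 0.02 / (365.25 * 86_400)   # 2 cm/year in m/s

print(f"synapse / transistor size: {synapse / transistor:.0f}x")    # ~91x
print(f"hill / wolf size:          {hill / wolf:.0f}x")             # ~91x
print(f"transistor / neuron rate:  {cpu_rate / neuron_rate:.1e}x")  # ~1.5e8x
print(f"wolf / hill speed:         {wolf_speed / hill_speed:.1e}x") # ~4.6e8x
```

The size ratios match almost exactly; the speed ratios agree to within a factor of three, which is close enough for the ≅ signs above.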

Science

If I’m right about this, it’s luck, not skill.

#37 in the ongoing series of “questions that I can’t even articulate correctly without a PhD, and if I Google it I’ll just get a bunch of amateurs who’ve mistaken themselves for Einstein”:

What if the apparent factor of 10¹²⁰ difference between the theoretical energy density of zero-point energy and the observed value of the cosmological constant is because that energy has gone into curving the 6-or-7 extra dimensions of string theory so tightly those extra dimensions can’t be observed?

Testable (hah!) consequence: the Calabi–Yau manifolds of string theory would be larger (less tightly curved) inside Casimir cavities.

Science, Technology

Railgun notes #2

[Following the previous railgun notes, which have been updated with corrections]

Force:
F = B·I·l
B = 1 tesla

I: Current = Voltage / Resistance
l: Length of armature in meters

F = 1 tesla · V/R · l
F = m · a
∴ a = (1 tesla · V/R · l) / m

Using liquid mercury, let cavity be 1cm square, consider section 1cm long:
∴ l = 0.01 m
Resistivity: 961 nΩ·m
∴ Resistance R = ((961 nΩ·m)*0.01m)/(0.01m^2) = 9.6×10^-7 Ω
Volume: 1 millilitre
∴ Mass m = ~13.56 gram = 1.356e-2 kg
∴ a = (1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg)

Let target velocity = Escape velocity = 11200 m/s = 1.12e4 m/s:
Railgun length s = 1/2 · a · t^2
And v = a · t
∴ t = v / a
∴ s = 1/2 · a · (v / a)^2
∴ s = 1/2 · a · v^2 / a^2
∴ s = 1/2 · v^2 / a
∴ s = 1/2 · ((1.12e4 m/s)^2) / ((1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg))

@250V: s = 0.3266 m (matches previous result)

@1V: s = 81.65 m
I = V/R = 1V / 9.6×10^-7 Ω = 1.042e6 A
P = I · V = 1V · 1.042e6 A = 1.042e6 W

Duration between rails:
t = v / a
∴ t = (1.12e4 m/s) / a
∴ t = (1.12e4 m/s) / ( (1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg) )

(Different formula than before, but produces same values)
@1V: t = 0.01458 seconds

Electrical energy usage: E = P · t
@1V: E = 1.042e6 W · 0.01458 seconds = 1.519e4 joules

Kinetic energy: E = 1/2 · m · v^2 = 8.505e5 joules

Kinetic energy out shouldn’t exceed electrical energy used, so something has gone wrong.
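For anyone who wants to poke at where the bookkeeping breaks, here is a small sketch of mine that reproduces the 1 V numbers above. Two hedged suspicions are flagged in the comments rather than corrected in the notes themselves: the resistance step appears to use a 0.01 m² cross-section where a 1 cm square bore gives (0.01 m)² = 1e-4 m², and the electrical-energy figure treats the armature as a fixed resistor, ignoring the motional back-EMF (v·B·l ≈ 112 V at escape velocity, far above the 1 V drive), which looks to me like the likeliest source of the imbalance.

```python
# Reproduce the railgun numbers above at V = 1 volt, using the post's values.

B   = 1.0        # tesla
V   = 1.0        # volts
l   = 0.01       # m, armature length between rails
rho = 961e-9     # ohm·m, resistivity of mercury
m   = 1.356e-2   # kg, mass of 1 cm³ of mercury
v_t = 1.12e4     # m/s, target (escape) velocity

# Resistance as used in the post: R = rho · 0.01 m / 0.01 m².
# A 1 cm square bore would give a cross-section of (0.01 m)² = 1e-4 m²,
# making R a hundred times larger; kept as-is here to match the post.
R = rho * 0.01 / 0.01            # ≈ 9.6e-7 ohm

I = V / R                        # ≈ 1.04e6 A
F = B * I * l                    # newtons
a = F / m                        # ≈ 7.7e5 m/s²

s = 0.5 * v_t**2 / a             # rail length, ≈ 81.7 m
t = v_t / a                      # time between rails, ≈ 0.0146 s

P     = I * V                    # ≈ 1.04e6 W, armature treated as a fixed resistor
E_in  = P * t                    # ≈ 1.5e4 J, electrical energy
E_out = 0.5 * m * v_t**2         # ≈ 8.5e5 J, kinetic energy

# The fixed-resistor power ignores the motional back-EMF, v·B·l, which reaches
# ~112 V at the target velocity; a 1 V supply could not keep driving current
# (and therefore force) anywhere near that speed.
back_emf = v_t * B * l

print(f"a = {a:.3e} m/s², s = {s:.1f} m, t = {t:.4f} s")
print(f"electrical energy in: {E_in:.3e} J, kinetic energy out: {E_out:.3e} J")
print(f"back-EMF at target velocity: {back_emf:.0f} V")
```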

AI, Science

Why do people look by touching?

Every so often, I see someone get irritated that “can I see that?” tends to mean “may I hold that while I look at it?” Given how common this is, and how natural it seems to me to want to hold something while I examine it, I wonder if there is an underlying reason behind it.

Seeing some of the pictures in a recent blog post by Google’s research team, I wonder if that reason may be related to how “quickly” we learn to recognise new objects. Quickly in quotes, because we make “one observation” while a typical machine-learning system may need thousands of examples to learn from. What if we also need a lot of examples, but don’t realise we need them because we get them as one continuous sequence?

Human vision isn’t as straightforward as a video played back on a computer, but it’s not totally unreasonable to say we see things “more than once” when we hold them in our hands. Crucially, holding them also gives us extra information while we look: the object’s distance, and therefore its size, comes from proprioception (which tells us where our hand is), not just from binocular vision; we can rotate it and see it from multiple angles, or rotate ourselves and see how different angles of light change its appearance; we can bring it closer to our eyes to catch fine detail we might have missed from further away; and we can rub the surface to see whether markings on it are permanent or temporary.

So, the hypothesis (conjecture?) is this: humans need to hold things to look at them properly, simply to gather enough information to learn what something looks like in general rather than from a single point of view. Likewise, machine-learning systems seem worse than they really are, for lack of any capacity to create realistic alternative perspectives of the things they’ve been tasked with classifying.

I’m not sure how I’d test both parts of this idea. A combination of robot arm, camera, and machine-learning system that manipulates an object it’s been asked to learn to recognise is the easy part; for the human half, one would need to show people a collection of novel objects, half of which they can hold and half of which they can only observe in a way that actively prevents them from seeing multiple perspectives, and then test their relative abilities to recognise the objects in each category.
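The machine side of that test has a cheap software stand-in, if we accept synthetic perspective changes as a crude substitute for physically turning an object over. Here is a minimal sketch; every function name and parameter in it is my own illustrative choice rather than anything from the post or from Google’s work. It just expands one photograph into a batch of correlated “views” by rotating, rescaling, and relighting it, which is roughly what data augmentation already does for image classifiers.

```python
# A crude software stand-in for "turning an object over in your hand":
# expand one photograph into many correlated training views via synthetic
# rotations, zooms, and lighting changes. All parameters are illustrative.

import numpy as np
from scipy.ndimage import rotate, zoom

rng = np.random.default_rng(0)

def synthetic_views(image, n_views=32):
    """Return n_views perturbed copies of a single (H, W) greyscale image."""
    views = []
    for _ in range(n_views):
        v = rotate(image, angle=rng.uniform(-30, 30), reshape=False, mode="nearest")
        v = zoom(v, rng.uniform(0.8, 1.2), mode="nearest")   # nearer / further away
        v = np.clip(v * rng.uniform(0.7, 1.3), 0.0, 1.0)     # brighter / dimmer light
        views.append(v)
    return views

# One 64x64 "photograph" of a toy object becomes 32 training examples.
photo = np.zeros((64, 64))
photo[20:44, 20:44] = 1.0
views = synthetic_views(photo)
print(len(views), views[0].shape)
```

If the hypothesis is right, the interesting comparison is then how much of the gap between machine and human closes when the classifier is given these extra views rather than only the single photograph.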
