Science

Homeopathic solutions to the Fermi paradox

Homeopathy, for those who have never learned the details, claims that the potency of a treatment can be increased by repeatedly diluting it. There are many scales — the C-scale is “how many times has this been diluted by a factor of 100”, the X-scale “…by a factor of 10”. I’d say “clearly nonsense”, but I fell for it when I was a teenager.
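
As a sense of the scale involved, here is the dilution arithmetic in a few lines of Python; the starting quantity (one mole) and the 30C potency are my own illustrative round numbers, not taken from any particular preparation:

# How many molecules of active ingredient survive a 30C preparation?
molecules = 6.022e23        # start with one whole mole of active ingredient
for step in range(30):      # 30C means diluted 1:100, thirty times over
    molecules /= 100
print(molecules)            # about 6e-37, i.e. effectively zero molecules left

The expected count drops below a single molecule at roughly the twelfth 1:100 dilution; every dilution after that is dividing nothing by a hundred.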

Fermi paradox: there are so many stars in the observable universe — tens of sextillions (short scale) — that even fairly pessimistic assumptions imply we should be surrounded by noisy aliens… so why can’t we see any?

One of the most common resolutions to the Fermi paradox is that there are one or more “great filters” which make it vanishingly unlikely that any of those stars have produced intergalactic expansionist civilisations. There are good reasons to expect direct intergalactic expansion rather than starting with ‘mere’ interstellar expansion, and (rather more surprisingly) good reasons to think we’re within spitting distance of the technology required, but that only makes the non-technological problems all the more severe. There are a lot of unknowns here (obviously we’ve only got ourselves as an example), so the space between “where we are now” and “owning the universe” is filled entirely with underpants gnomes, and that’s where homeopathy fits in, in two separate ways.

First, as a categorical example. Homeopathy represents an archaic way of thinking, yet it’s very popular: it’s simple, it’s friendly, it’s a viral meme. There are many memes like this, some of them quite destructive, and while it’s nice to think nature is in balance — especially when we’re thinking of something we’re really proud of, such as our own minds — the truth is that nature (including humans) often goes off the deep end and only sometimes recovers. It’s very easy for me to believe that an anti-rational meme such as homeopathy can either destroy a civilisation entirely, or prevent it developing into a proper space-faring civilisation.

Second, as an analogy. Dilution. It’s not the first dilution of a homeopathic preparation which removes all atoms of active ingredient from the result, but the repeated dilution. If there are, say, twenty things which each have an independent 50% chance of holding a civilisation back or wiping it out before it can set up a colony — AI; bioweapons; cyber-warfare; global climate change (doesn’t matter if artificial warming or natural ice age); cascade agricultural collapse; mineral resource exhaustion; grey goo; global thermonuclear war; cosmic threats collectively, from noisy stars whose CMEs make electricity impractical to asteroids and gamma ray bursts; anti-intellectualism movements, whether deliberate or not; feedback between cheap genetic engineering and genetically-defined super-stimulus making all the citizens a biologically vulnerable monoculture… — then twenty items each with a 50% chance add up to million-to-one odds (million-ish, but if you care about the difference you’re taking the wrong lesson from this).

Yes, one-million-to-one is almost irrelevant compared to ten sextillion. Odds of (100e9)^2-to-one would require 73 such events, not 20, but this combines with the previous Fermi estimates rather than replacing them: 20 such events reduce the overall problem by a factor of a million, no matter what your previous estimate was, and both 20 events and 50% chances are just round numbers, not real ones. Unfortunately, we don’t know how many small filters we might face: as the Great Recession was starting, someone said that no two recessions are the same, because we learn from all our mistakes and so each mistake has to be a new one. Sadly it’s even worse than that, as humanity as a whole does repeat even its economic mistakes; so even setting aside the previously-successful dice we keep re-rolling because we think “we’re too big to fail”, humans don’t know all the ways we can fail to survive.
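
A quick check of those round numbers (the 50% per filter, the twenty filters and the roughly 10^22 stars are the same illustrative values as above, nothing new):

import math

survival = 0.5 ** 20          # chance of passing all twenty coin-flip filters
print(survival)               # about 9.5e-7, the million-ish-to-one odds

stars = 1e22                  # "ten sextillion", i.e. (100e9)^2
print(math.log(stars, 2))     # about 73: coin-flip filters needed to offset that star count on their own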

The Great Filter doesn’t have to be something that civilisations encounter exactly once, in much the same way that a sentence encounters a full stop — it can be death by a thousand paper-cuts.

If we do finally reach the stars, we may find the universe is much more interesting than it currently seems. Instead of Vulcans and warp drive, we might find hippy space-elves communing with their trees via mind-warping drugs… and if we don’t, instead of wiping ourselves out, we might become the hippy space-elves that some sentient octopus discovers while going boldly where no sentient octopus has gone before.

AI, Futurology, Philosophy, Psychology, Science

How would you know whether an A.I. was a person or not?

I did an A-level in Philosophy. (For non-UK people, A-levels are two-year courses that happen after high school and before university).

I did it for fun rather than good grades — I had enough good grades to get into university, and when the other A-levels required my focus, I was fine putting zero further effort into the Philosophy course. (Something which was very clear when my final results came in).

What I didn’t expect at the time was that the rapid development of artificial intelligence in my lifetime would make it absolutely vital that humanity develops a concrete and testable understanding of what counts as a mind, as consciousness, as self-awareness, and as capability to suffer. Yes, we already have that problem in the form of animal suffering and whether meat can ever be ethical, but that problem exists only for our consciences — the animals can’t take over the world and treat us the way we treat them. An artificial mind, on the other hand, would be almost totally pointless if it was as limited as an animal, and the general aim is quite a lot higher than that.

Some fear that we will replace ourselves with machines which may be very effective at what they do, but don’t have anything “that it’s like to be”. One of my fears is that we’ll make machines that do “have something that it’s like to be”, but who suffer greatly because humanity fails to recognise their personhood. (A paperclip optimiser doesn’t need to hate us to kill us, but I’m more interested in the sort of mind that can feel what we can feel).

I don’t have a good description of what I mean by any of the normal words. Personhood, consciousness, self awareness, suffering… they all seem to skirt around the core idea, but to the extent that they’re correct, they’re not clearly testable; and to the extent that they’re testable, they’re not clearly correct. A little like the maths-vs.-physics dichotomy.

Consciousness? Versus what, subconscious decision making? Isn’t this distinction merely system 1 vs. system 2 thinking? Even then, the word doesn’t tell us what it means to have it objectively, only subjectively. In some ways, some forms of A.I. look like system 1 — fast, but error-prone, based on heuristics; while other forms of A.I. look like system 2 — slow, careful, deliberatively weighing all the options.

Self-awareness? What do we even mean by that? It’s absolutely trivial to make an A.I. aware of its own internal states, and even necessary for anything more than a perceptron. Do we mean a mirror test? (Or a non-visual equivalent for non-visual entities, including both blind people and smell-focused animals such as dogs). That at least can be tested.

Capability to suffer? What does that even mean in an objective sense? Is suffering equal to negative reinforcement? If you have only positive reinforcement, is the absence of reward itself a form of suffering?

Introspection? As I understand it, the human psychology of this is that we don’t really introspect: we use system 2 thinking to confabulate justifications for what system 1 thinking made us feel.

Qualia? Sure, but what is one of these as an objective, measurable, detectable state within a neural network, be it artificial or natural?

Empathy or mirror neurons? I can’t decide how I feel about this one. At first glance, if one mind can feel the same as another mind, that seems like it should capture the general ill-defined concept I’m after… but then I realised I don’t see why that would follow, and had the temporarily disturbing mental image of an A.I. which can perfectly mimic the behaviour corresponding to the emotional state of someone it’s observing, without actually feeling anything itself.

And then the disturbance went away as I realised this is obviously, trivially possible, because even a video recording fits that definition… or, hey, a mirror. A video recording somehow feels like it’s fine: it isn’t “smart” enough to be imitating, merely accurately reproducing. (Now I think about it, is there an equivalent issue with the mirror test?)

So, no, mirror neurons are not enough to be… to have the qualia of being consciously aware, or whatever you want to call it.

I’m still not closer to having answers, but sometimes it’s good to write down the questions.

Science, SciFi, Technology

Kessler-resistant real-life force-fields?

Idle thought at this stage.

The Kessler syndrome (also called the Kessler effect, collisional cascading or ablation cascade), proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low earth orbit (LEO) is high enough that collisions between objects could cause a cascade where each collision generates space debris that increases the likelihood of further collisions.

Kessler syndrome, Wikipedia

If all objects in Earth orbit were required to have an electrical charge (all negative, let’s say), how strong would that charge have to be to prevent collisions?

Also, how long would they remain charged, given the ionosphere, solar wind, Van Allen belts, etc?

Also, how do you apply charge to space junk already present? Rely on it picking up charge when it collides with new objects? Or is it possible to use an electron gun to charge them from a distance? And if so, what’s the trade-off between beam voltage, distance, and maximum charge (presumably shape dependent)?

And if you can apply charge remotely, is this even the best way to deal with them, rather than collecting them all in a large net and de-orbiting them?
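
For the first question, here is a rough order-of-magnitude sketch; the masses, relative speed and miss distance are my own assumed round numbers, and it only treats the worst case of a dead-centre approach, where the Coulomb potential energy at closest approach has to match the kinetic energy in the centre-of-mass frame:

k = 8.988e9        # Coulomb constant, N·m²/C²
m = 1.0            # assumed mass of each object, kg
v_rel = 10_000.0   # assumed relative speed of two LEO objects, m/s
d = 0.1            # assumed separation that still counts as a miss, m

mu = m / 2                    # reduced mass of two equal masses, kg
ke = 0.5 * mu * v_rel**2      # kinetic energy in the centre-of-mass frame, J
q = (ke * d / k) ** 0.5       # like charge needed on each object, C
potential = k * q / d         # surface potential of a roughly 10 cm sphere carrying q, V

print(q, potential)           # about 0.017 C and 1.5e9 V

Gigavolt-scale surface potentials on every piece of junk, before even asking how quickly the ionosphere and solar wind would bleed that charge off, which is part of why this stays an idle thought.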

Science

I am not a quantum physicist

I am not a quantum physicist. I do not write this prediction thinking that it is true or a novel deduction on the nature of reality. I write this prediction in order to test my own understanding of quantum physics.

Given all particles are fields:

  1. Fermions are those fields where probability is in the range 0-1 (or possibly -1 to +1, depending on antimatter).
  2. Bosons are those fields where probability can take on any positive or zero value (possibly also any negative value, depending on antimatter).

This “explains” why two fermions cannot occupy the same quantum state, yet bosons can. Inverted quote marks, because this might turn out to not have any explanatory power.

I’m fine with that, just as I’m fine with being wrong. I am not a quantum physicist. I don’t expect to be right. It would be nicer to find I’m wrong rather than not even wrong, but even that’s OK — that’s why I’m writing this down before I see if someone else has already written about this.

Science, Technology

You won’t believe how fast transistors are

A transistor in a CPU is smaller and faster than a synapse in one of your brain’s neurons by about the same ratio that a wolf is smaller and faster than a hill.

Smaller.

And.

Faster.

CPU: 11nm transistors, 30GHz transition rate (transistors flip significantly faster than overall clock speed)

Neurons: 1µm synapses, 200Hz pulse rate

Wolves: 1.6m long, average range 25 km/day

Hills: 145m tall (widely variable, of course), continental drift 2 cm/year

11nm/1µm ≅ 1.6m/145m
200Hz/30GHz ≅ (Continental drift 2 cm/year) / (Average range 25 km/day)
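
The same comparison as a quick sanity check, using only the figures above and unit conversions:

transistor, synapse = 11e-9, 1e-6         # metres
wolf, hill = 1.6, 145.0                   # metres
print(transistor / synapse, wolf / hill)  # both about 0.011

synapse_rate, transistor_rate = 200.0, 30e9    # Hz
hill_speed = 0.02 / (365.25 * 24 * 3600)       # 2 cm/year in m/s
wolf_speed = 25_000.0 / (24 * 3600)            # 25 km/day in m/s
print(synapse_rate / transistor_rate, hill_speed / wolf_speed)
# about 6.7e-9 vs 2.2e-9: the same ballpark, within a factor of a few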

Science

If I’m right about this, it’s luck, not skill.

Number 37 in the ongoing series of “questions that I can’t even articulate correctly without a PhD, and if I Google them I’ll just get a bunch of amateurs who’ve mistaken themselves for Einstein”:

What if the apparent factor of 10¹²⁰ difference between the theoretical energy density of zero-point energy and the observed value of the cosmological constant is because that energy has gone into curving the 6-or-7 extra dimensions of string theory so tightly those extra dimensions can’t be observed?

Testable (hah!) consequence: the Calabi–Yau manifolds of string theory would be larger (less tightly curved) inside Casimir cavities.

Science, Technology

Railgun notes #2

[Following previous railgun notes, which has been updated with corrections]

Force:
F = B·I·l
B = 1 tesla

I: Current = Voltage / Resistance
l: Length of armature in meters

F = 1 tesla · V/R · l
F = m · a
∴ a = (1 tesla · V/R · l) / m

Using liquid mercury, let cavity be 1cm square, consider section 1cm long:
∴ l = 0.01 m
Resistivity: 961 nΩ·m
∴ Resistance R = ((961 nΩ·m)*0.01m)/(0.01m^2) = 9.6×10^-7 Ω
Volume: 1 millilitre
∴ Mass m = ~13.56 gram = 1.356e-2 kg
∴ a = (1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg)

Let target velocity = Escape velocity = 11200 m/s = 1.12e4 m/s:
Railgun length s = 1/2 · a · t^2
And v = a · t
∴ t = v / a
∴ s = 1/2 · a · (v / a)^2
∴ s = 1/2 · a · v^2 / a^2
∴ s = 1/2 · v^2 / a
∴ s = 1/2 · ((1.12e4 m/s)^2) / ((1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg))

@250V: s = 0.3266 m (matches previous result)

@1V: s = 81.65 m
I = V/R = 1V / 9.6×10^-7 Ω = 1.042e6 A
P = I · V = 1V · 1.042e6 A = 1.042e6 W

Duration between rails:
t = v / a
∴ t = (1.12e4 m/s) / a
∴ t = (1.12e4 m/s) / ( (1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg) )

(Different formula than before, but produces same values)
@1V: t = 0.01458 seconds

Electrical energy usage: E = P · t
@1V: E = 1.042e6 W · 0.01458 seconds = 1.519e4 joules

Kinetic energy: E = 1/2 · m · v^2 = 8.505e5 joules

Kinetic energy out shouldn’t exceed electrical energy used, so something has gone wrong.
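
My guess at the “something wrong”, sketched against the post’s own numbers (the diagnosis at the end is mine, not a worked correction): the electrical energy above only counts resistive dissipation at a constant current, but a moving armature generates a back-EMF of B·l·v which the supply also has to push against, and that term is exactly where the kinetic energy would have to come from.

B = 1.0            # tesla
l = 0.01           # armature length between the rails, m
R = 9.6e-7         # ohms, as used above
# (For a 1 cm × 1 cm mercury column the cross-section would be (0.01 m)**2
#  = 1e-4 m², giving R of roughly 9.6e-5 ohms, but the value above is kept
#  here so the results reproduce the post's figures.)
m = 1.356e-2       # kg of mercury in the 1 cm section
V = 1.0            # volts
v_target = 1.12e4  # escape velocity, m/s

I = V / R                          # 1.042e6 A, as above
a = B * I * l / m                  # acceleration if that current persisted
t = v_target / a                   # 0.01458 s, as above
E_electrical = V * I * t           # 1.519e4 J, as above
E_kinetic = 0.5 * m * v_target**2  # 8.505e5 J, as above
back_emf = B * l * v_target        # 112 V at escape velocity

print(E_electrical, E_kinetic, back_emf)
# To keep I = V/R flowing at speed v, the supply actually needs
# V_supply = I*R + B*l*v; the B*l*v part does the mechanical work, and a
# fixed 1 V drive could never push the armature past v = V/(B*l) = 100 m/s.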
