Science, SciFi, Technology

Kessler-resistant real-life force-fields?

Idle thought at this stage.

The Kessler syndrome (also called the Kessler effect, collisional cascading or ablation cascade), proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low earth orbit (LEO) is high enough that collisions between objects could cause a cascade where each collision generates space debris that increases the likelihood of further collisions.

Kessler syndrome, Wikipedia

If all objects in Earth orbit were required to have an electrical charge (all negative, let’s say), how strong would that charge have to be to prevent collisions?

Also, how long would they remain charged, given the ionosphere, solar wind, Van Allen belts, etc?

Also, how do you apply charge to space junk already present? Rely on it picking up charge when it collides with new objects? Or is it possible to use an electron gun to charge them from a distance? And if so, what’s the trade-off between beam voltage, distance, and maximum charge (presumably shape dependent)?

And if you can apply charge remotely, is this even the best way to deal with them, rather than collecting them all in a large net and de-orbiting them?
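
As a very rough first pass at the first question, here is a back-of-envelope sketch in Python. All the specific numbers (1 kg debris pieces, a 10 km/s closing speed, a 1 m miss distance) are assumptions picked purely for illustration, and it treats the encounter as a head-on Coulomb deflection while ignoring shielding by the ionospheric plasma entirely:

    # Order-of-magnitude estimate: how much like charge would two pieces of debris
    # need so that mutual Coulomb repulsion stops them closing to within ~1 m?
    # All inputs below are assumptions for illustration, not measured values.
    import math

    EPS0 = 8.854e-12        # vacuum permittivity, F/m
    m1 = m2 = 1.0           # assumed debris masses, kg
    v_rel = 10_000.0        # assumed closing speed in LEO, m/s
    d_min = 1.0             # assumed minimum allowed separation, m

    mu = m1 * m2 / (m1 + m2)            # reduced mass, kg
    kinetic = 0.5 * mu * v_rel ** 2     # kinetic energy in the relative motion, J

    # Require the Coulomb potential energy at d_min to match that kinetic energy:
    #   q^2 / (4 * pi * eps0 * d_min) = kinetic, with equal charge q on both objects.
    q = math.sqrt(4 * math.pi * EPS0 * d_min * kinetic)
    print(f"charge needed on each object: {q:.3f} C")   # roughly 0.05 C

    # For scale: the surface potential of a 1 m radius sphere carrying that charge.
    print(f"potential of a 1 m sphere: {q / (4 * math.pi * EPS0 * 1.0):.2e} V")

That last number comes out in the hundreds of megavolts, which is a first hint at why the follow-up questions about keeping the charge in place matter so much.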


I am not a quantum physicist

I am not a quantum physicist. I do not write this prediction thinking that it is true or a novel deduction on the nature of reality. I write this prediction in order to test my own understanding of quantum physics.

Given all particles are fields:

  1. Fermions are those fields where probability is in the range 0-1 (or possibly -1 to +1, depending on antimatter).
  2. Bosons are those fields where probability can take on any positive or zero value (possibly also any negative value, depending on antimatter).

This “explains” why two fermions cannot occupy the same quantum state, yet bosons can. Inverted quote marks, because this might turn out to not have any explanatory power.

I’m fine with that, just as I’m fine with being wrong. I am not a quantum physicist. I don’t expect to be right. It would be nicer to find I’m wrong rather than not even wrong, but even that’s OK — that’s why I’m writing this down before I see if someone else has already written about this.

Science, Technology

You won’t believe how fast transistors are

A transistor in a CPU is smaller and faster than a synapse in one of your brain’s neurons by about the same ratio that a wolf is smaller and faster than a hill.




CPU: 11nm transistors, 30GHz transition rate (transistors flip significantly faster than overall clock speed)

Neurons: 1µm synapses, 200Hz pulse rate

Wolves: 1.6m long, average range 25 km/day

Hills: 145m tall (widely variable, of course), continental drift 2 cm/year

1µm/11nm ≅ 145m/1.6m
200Hz/30GHz ≅ (Continental drift 2 cm/year) / (Average range 25 km/day)
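
A quick check of those two ratios, as a sketch in Python using the numbers quoted above:

    # Size ratio and speed ratio, using the figures listed above.
    transistor, synapse = 11e-9, 1e-6            # m
    wolf, hill = 1.6, 145.0                      # m
    clock, pulse = 30e9, 200.0                   # Hz
    drift = 0.02 / (365.25 * 24 * 3600)          # 2 cm/year in m/s
    daily_range = 25_000.0 / (24 * 3600)         # 25 km/day in m/s

    print(synapse / transistor, hill / wolf)     # ~91 and ~91
    print(clock / pulse, daily_range / drift)    # ~1.5e8 and ~4.6e8, the same ballpark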


If I’m right about this, it’s luck, not skill.

#37 in the ongoing series of “questions that I can’t even articulate correctly without a PhD, and if I Google it I’ll just get a bunch of amateurs who’ve mistaken themselves for Einstein”:

What if the apparent factor of 10¹²⁰ difference between the theoretical energy density of zero-point energy and the observed value of the cosmological constant is because that energy has gone into curving the 6-or-7 extra dimensions of string theory so tightly those extra dimensions can’t be observed?

Testable (hah!) consequence: the Calabi–Yau manifolds of string theory would be larger (less tightly curved) inside Casimir cavities.

Science, Technology

Railgun notes #2

[Following the previous railgun notes, which have been updated with corrections]

F = B·I·l
B = 1 tesla

I: Current = Voltage / Resistance
l: Length of armature in meters

F = 1 tesla · V/R · l
F = m · a
∴ a = (1 tesla · V/R · l) / m

Using liquid mercury, let cavity be 1cm square, consider section 1cm long:
∴ l = 0.01 m
Resistivity: 961 nΩ·m
∴ Resistance R = ((961 nΩ·m)*0.01m)/(0.01m^2) = 9.6×10^-7 Ω
Volume: 1 millilitre
∴ Mass m = ~13.56 gram = 1.356e-2 kg
∴ a = (1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg)
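
As a sketch, the same arithmetic in Python, keeping the post’s resistance value as given (noting in passing that a 1 cm × 1 cm cross-section would strictly be (0.01 m)² = 1 × 10⁻⁴ m², which would give R ≈ 9.6 × 10⁻⁵ Ω instead):

    # Acceleration of the mercury armature as a function of drive voltage,
    # using the figures above.
    B = 1.0          # field, tesla
    l = 0.01         # armature length between the rails, m
    R = 9.6e-7       # armature resistance as used above, ohm
                     # (a (0.01 m)^2 = 1e-4 m^2 cross-section would give ~9.6e-5 ohm)
    m = 1.356e-2     # mass of 1 ml of mercury, kg

    def acceleration(volts):
        current = volts / R          # A
        force = B * current * l      # N
        return force / m             # m/s^2

    print(f"{acceleration(1.0):.3e} m/s^2 at 1 V")      # ~7.7e5 m/s^2
    print(f"{acceleration(250.0):.3e} m/s^2 at 250 V")  # ~1.9e8 m/s^2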

Let target velocity = Escape velocity = 11200 m/s = 1.12e4 m/s:
Railgun length s = 1/2 · a · t^2
And v = a · t
∴ t = v / a
∴ s = 1/2 · a · (v / a)^2
∴ s = 1/2 · a · v^2 / a^2
∴ s = 1/2 · v^2 / a
∴ s = 1/2 · ((1.12e4 m/s)^2) / ((1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg))

@250V: s = 0.3266 m (matches previous result)

@1V: s = 81.65 m
I = V/R = 1V / 9.6×10^-7 Ω = 1.042e6 A
P = I · V = 1V · 1.042e6 A = 1.042e6 W
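
Continuing the sketch with the same values, for the rail length, current, and power at each voltage:

    # Rail length needed to reach escape velocity, plus current draw and power.
    B, l, R, m = 1.0, 0.01, 9.6e-7, 1.356e-2    # as above
    v_escape = 1.12e4                            # m/s

    def run(volts):
        current = volts / R                      # A
        accel = B * current * l / m              # m/s^2
        length = 0.5 * v_escape ** 2 / accel     # s = v^2 / (2a), m
        power = current * volts                  # W
        return length, current, power

    print(run(250.0))   # (~0.327 m, ~2.6e8 A, ~6.5e10 W)
    print(run(1.0))     # (~81.7 m, ~1.04e6 A, ~1.04e6 W)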

Duration between rails:
t = v / a
∴ t = (1.12e4 m/s) / a
∴ t = (1.12e4 m/s) / ( (1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg) )

(Different formula than before, but produces same values)
@1V: t = 0.01458 seconds

Electrical energy usage: E = P · t
@1V: E = 1.042e6 W · 0.01458 seconds = 1.519e4 joules

Kinetic energy: E = 1/2 · m · v^2 = 8.505e5 joules

Kinetic energy out shouldn’t exceed electrical energy used, so something has gone wrong.
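
Here is the energy comparison as a sketch, again with the same values. One plausible culprit for the mismatch, flagged in the comments, is that the constant-current model ignores the back-EMF of a conductor moving through the field:

    # Electrical energy in versus kinetic energy out, in the constant-current model.
    B, l, R, m = 1.0, 0.01, 9.6e-7, 1.356e-2    # as above
    v_escape = 1.12e4                            # m/s
    volts = 1.0

    current = volts / R                          # A
    accel = B * current * l / m                  # m/s^2
    t = v_escape / accel                         # time on the rails, s
    print(f"electrical energy: {current * volts * t:.3e} J")     # ~1.5e4 J
    print(f"kinetic energy:    {0.5 * m * v_escape ** 2:.3e} J") # ~8.5e5 J

    # The model assumes the full V/R current flows for the whole run, but a
    # conductor of length l moving at speed v across a field B generates a
    # back-EMF of B*l*v:
    print(f"back-EMF at escape velocity: {B * l * v_escape:.0f} V")  # 112 V >> 1 V
    # With that included, the current would be (V - B*l*v)/R, which falls to zero
    # at v = V/(B*l) = 100 m/s, so the 1 V case never gets near 11.2 km/s.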

AI, Science

Why do people look by touching?

Every so often, I see someone get irritated that “can I see that?” tends to mean “may I hold that while I look at it?” Given how common this is, and how natural it seems to me to want to hold something while I examine it, I wonder if there is an underlying reason behind it.

Seeing some of the pictures in a recent blog post by Google’s research team, I wonder if that reason may be related to how “quickly” we learn to recognise new objects. Quickly in quotes, because we make “one observation” while a typical machine-learning system may need thousands of examples to learn from. What if we also need a lot of examples, but we don’t realise that we need them because we’re seeing them in a continuous sequence?

Human vision isn’t as straightforward as a video played back on a computer, but it’s not totally unreasonable to say we see things “more than once” when we hold them in our hands. Crucially, holding them while we look gives us additional information: the object’s distance, and therefore its size, comes from proprioception (which tells us where our hand is), not just from binocular vision; we can rotate it and see it from multiple angles, or rotate ourselves and see how different angles of light change its appearance; we can bring it closer to our eyes to see fine detail that we might have missed from a greater distance; and we can rub the surface to see whether markings on it are permanent or temporary.

So, the hypothesis (conjecture?) is this: humans need to hold things to look at them properly, just to gather enough information to learn what they look like in general rather than from a single point of view. Likewise, machine learning systems seem worse than they are because they lack the capacity to create realistic alternative perspectives of the things they’ve been tasked with classifying.

Not sure how I’d test both parts of this idea. A combination of robot arm, camera, and machine learning system that manipulates an object it’s been asked to learn to recognise is the easy part; but when testing the reverse in humans, one would need to show them a collection of novel objects, half of which they can hold and the other half of which they can only observe in a way that actively prevents them from seeing multiple perspectives, and then test their relative abilities to recognise the objects in each category.
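
The machine half of that test is easy to caricature. The sketch below is a toy in Python with numpy, and everything in it is invented for illustration: each “object” is a random ellipsoidal point cloud, an “observation” is a coarse 2D silhouette of it from a random viewing angle, and recognition is nearest-neighbour matching against remembered views. The only point is to compare remembering one view per object against remembering many, where the many-view condition should generally come out ahead.

    import numpy as np

    rng = np.random.default_rng(0)
    N_OBJECTS, N_POINTS, GRID = 20, 80, 10

    def random_rotation():
        # A random 3D rotation matrix (orthogonal, determinant +1) via QR.
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(q) < 0:
            q[:, 0] = -q[:, 0]
        return q

    def make_object():
        # A random ellipsoidal point cloud; axis lengths differ between objects.
        shape = np.diag(rng.uniform(0.3, 2.0, size=3)) @ random_rotation()
        return rng.normal(size=(N_POINTS, 3)) @ shape

    def observe(obj):
        # One "look": rotate to a random viewpoint, project to 2D, and keep a
        # coarse binary silhouette on a GRID x GRID occupancy grid.
        pts = obj @ random_rotation().T
        h, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=GRID,
                                 range=[[-5, 5], [-5, 5]])
        v = (h > 0).astype(float).ravel()
        return v / (np.linalg.norm(v) + 1e-9)

    objects = [make_object() for _ in range(N_OBJECTS)]

    def memorise(views_per_object):
        # Remember some number of views of each object, tagged with its label.
        bank, labels = [], []
        for label, obj in enumerate(objects):
            for _ in range(views_per_object):
                bank.append(observe(obj))
                labels.append(label)
        return np.array(bank), np.array(labels)

    def accuracy(bank, labels, trials=500):
        # Identify fresh random views by their nearest remembered view.
        hits = 0
        for _ in range(trials):
            true = rng.integers(N_OBJECTS)
            x = observe(objects[true])
            hits += int(labels[np.argmax(bank @ x)] == true)
        return hits / trials

    one_view = memorise(1)      # only allowed a single glance at each object
    many_views = memorise(50)   # allowed to turn each object over in your hands
    print("1 remembered view per object: ", accuracy(*one_view))
    print("50 remembered views per object:", accuracy(*many_views))

Nearest-neighbour matching is deliberately dumb here: the point isn’t the classifier, it’s that a bank of remembered viewpoints stands in for having turned the object over in your hands.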



Human vision is both amazingly good, and surprisingly weird.

Good, because try taking a photo of the Moon and comparing it with what you can see with your eyes.

Weird, because of all the different ways to fool it. The faces you see in clouds. The black-and-blue/gold-and-white dress (I see gold and white, which means I’m wrong). The way your eyes keep darting all over the place without you even noticing.

What happens if you force your eyes to stay put?

I have limited ability to pause my eyes’ saccades; I have no idea how this compares to other people, so I assume anyone can do what I have tried.

On a recent sunny day, in some nearby woodland, I focused on the smallest thing I could see, a small dot in the grass near where I sat. I shut one eye, then tried to keep my open eye as still as possible.

It was difficult, and I had to make several attempts, but soon all the things which were moving stood out against all the things which were stationary. That much, I expected. What I did not expect was for my perception of what was near the point I was focused on to change.

[Slideshow: mock-up images of the effect]

I didn’t take photos at the time, but this mock-up shows a similar environment and an approximation of the effect. One small region near the point I was looking at tiled itself around much of my central vision. I don’t think it was any particular shape: what I have in this mock-up is a square, but what I saw was {shape of nearby thing} tiled, even though that doesn’t really work in Euclidean space. If I let my vision focus on a different point infinitesimally near the first, my central vision became tiled with a different thing with its own shape. The transition from normal vision to this weird tiling felt like it took about a second.

How much of this is consistent with other people’s eyes (and minds), I don’t know. It might be that all human visual systems (and brains) do this, or it might be that the way I learned to see is different to the way you learned to see. Or perhaps we learned the same way, and my thinking about computer vision has literally changed the way I see.

Vision is strange. And I’m not at all sure where the boundary is between seeing and thinking; the whole concept is far less clear than it seemed when I was a kid.