Science, Technology

Railgun notes #2

[Following the previous railgun notes, which have been updated with corrections]

Force:
F = B·I·l
B = 1 tesla

I: Current = Voltage / Resistance
l: Length of armature in meters

F = 1 tesla · V/R · l
F = m · a
∴ a = (1 tesla · V/R · l) / m

Using liquid mercury, let the cavity be 1 cm square, and consider a section 1 cm long:
∴ l = 0.01 m
Cross-sectional area: 0.01 m × 0.01 m = 1×10^-4 m^2
Resistivity: 961 nΩ·m
∴ Resistance R = ((961 nΩ·m)·(0.01 m))/(1×10^-4 m^2) = 9.6×10^-5 Ω
Volume: 1 millilitre
∴ Mass m = ~13.56 gram = 1.356e-2 kg
∴ a = (1 tesla · V/(9.6×10^-5 Ω) · (0.01 m)) / (1.356e-2 kg)

Let target velocity = escape velocity = 11200 m/s = 1.12e4 m/s:
Railgun length s = 1/2 · a · t^2
And v = a · t
∴ t = v / a
∴ s = 1/2 · a · (v / a)^2
∴ s = 1/2 · a · v^2 / a^2
∴ s = 1/2 · v^2 / a
∴ s = 1/2 · ((1.12e4 m/s)^2) / ((1 tesla · V/(9.6×10^-5 Ω) · (0.01 m)) / (1.356e-2 kg))

@250V: s = 32.66 m

@1V: s = 8165 m
I = V/R = 1 V / 9.6×10^-5 Ω = 1.042e4 A
P = I · V = 1 V · 1.042e4 A = 1.042e4 W

Duration between rails:
t = v / a
∴ t = (1.12e4 m/s) / a
∴ t = (1.12e4 m/s) / ( (1 tesla · V/(9.6×10^-5 Ω) · (0.01 m)) / (1.356e-2 kg) )

@1V: t = 1.458 seconds

Electrical energy usage: E = P · t
@1V: E = 1.042e4 W · 1.458 seconds = 1.519e4 joules

Kinetic energy: E = 1/2 · m · v^2 = 1/2 · (1.356e-2 kg) · (1.12e4 m/s)^2 = 8.505e5 joules

Kinetic energy out shouldn't exceed electrical energy used, so something has gone wrong: the constant-current model ignores the back-EMF of the moving armature, B·l·v, which reaches 112 V at escape velocity. With only 1 V driving the rails, the current (and with it the force) would collapse long before the projectile got anywhere near 11200 m/s.
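As a sanity check, here is the same arithmetic as a short Python sketch (same assumptions as the notes: 1 T field, a 1 cm square mercury section so the cross-section is 1×10^-4 m², and a naive constant-current model; variable names are mine):

```python
# Sanity check of the mercury figures (naive constant-current model).
B = 1.0           # magnetic field, tesla
l = 0.01          # armature length, metres
rho = 961e-9      # resistivity of mercury, ohm-metres
A = l * l         # 1 cm square cavity -> 1e-4 m^2 cross-section
R = rho * l / A   # resistance of the 1 cm section, ohms
m = 13.56e-3      # mass of 1 ml of mercury, kg
v = 11200.0       # target (escape) velocity, m/s
V = 1.0           # drive voltage, volts

I = V / R               # current, amperes
a = B * I * l / m       # acceleration from F = B*I*l, m/s^2
s = 0.5 * v**2 / a      # rail length needed, metres
t = v / a               # time spent between the rails, seconds
E_in = I * V * t        # electrical energy used, joules
E_out = 0.5 * m * v**2  # kinetic energy delivered, joules
back_emf = B * l * v    # motional EMF at escape velocity, volts

print(R, s, t, E_in, E_out, back_emf)
```

The electrical energy comes out around 1.5×10^4 J against a kinetic energy of 8.5×10^5 J, which is the inconsistency noted above; the last line shows the 112 V of back-EMF that the constant-current model is ignoring.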

Standard
AI, Science

Why do people look by touching?

Every so often, I see someone get irritated that “can I see that?” tends to mean “may I hold that while I look at it?” Given how common this is, and how natural it seems to me to want to hold something while I examine it, I wonder if there is an underlying reason behind it.

Seeing some of the pictures in a recent blog post by Google's research team, I wonder if that reason may be related to how "quickly" we learn to recognise new objects. Quickly in quotes, because we seem to make "one observation" while a typical machine-learning system may need thousands of examples to learn from. What if we also need a lot of examples, but don't realise we need them because we see them as one continuous sequence?

Human vision isn't as straightforward as a video played back on a computer, but it's not totally unreasonable to say we see things "more than once" when we hold them in our hands. And, crucially, holding them while we look gives us extra information: the object's distance, and therefore size, comes from proprioception (which tells us where our hand is), not just from binocular vision; we can rotate it and see it from multiple angles, or rotate ourselves and see how different angles of light change its appearance; we can bring it closer to our eyes to see fine detail we might have missed from a greater distance; we can rub the surface to see whether markings on it are permanent or temporary.

So, the hypothesis (conjecture?) is this: humans need to hold things to look at them properly, simply to gather enough information to learn what an object looks like in general rather than from just one point of view. Likewise, machine-learning systems may seem worse than they really are because they have no way to generate realistic alternative perspectives of the things they've been asked to classify.

I'm not sure how I'd test both parts of this idea. A combination of robot arm, camera, and machine-learning system that manipulates an object it has been asked to learn to recognise is the easy part. Testing the reverse in humans is harder: one would need to show people a collection of novel objects, half of which they can hold and half of which they can only observe in a way that actively prevents them from seeing multiple perspectives, and then test their relative abilities to recognise the objects in each category.

Standard
Science

Vision

Human vision is both amazingly good, and surprisingly weird.

Good, because try taking a photo of the Moon and comparing it with what you can see with your eyes.

Weird, because of all the different ways to fool it. The faces you see in clouds. The black-and-blue/gold-and-white dress (I see gold and white, which means I’m wrong). The way your eyes keep darting all over the place without you even noticing.

What happens if you force your eyes to stay put?

I have limited ability to pause my eyes' saccades; I have no idea how this compares to other people, so I assume anyone can do what I tried.

On a recent sunny day, in some nearby woodland, I focused on the smallest thing I could see, a small dot in the grass near where I sat. I shut one eye, then tried to keep my open eye as still as possible.

It was difficult, and I had to make several attempts, but soon all the things which were moving stood out against all the things which were stationary. That much, I expected. What I did not expect was for my perception of what was near the point I was focused on to change.

[Slideshow: mock-up of the tiling effect]

I didn't take photos at the time, but this mock-up shows a similar environment and an approximation of the effect. One small region near the point I was looking at tiled itself around much of my central vision. I don't think it was any particular shape: what I have in this mock-up is a square; what I saw was {shape of nearby thing} tiled, even though that doesn't really work in Euclidean space. If I let my vision focus on a different point infinitesimally near the first, my central vision became tiled with a different thing with its own shape. The transition from normal vision to weird tiling felt like it took about a second.

How much of this is consistent with other people’s eyes (and minds), I don’t know. It might be that all human vision (and brains) do this, or it might be that the way I learned to see is different to the way you learned to see. Or perhaps we learned the same way, and me thinking about computer vision has literally changed the way I see.

Vision is strange. And I’m not at all sure where the boundary is between seeing and thinking; the whole concept is far less clear than it seemed when I was a kid.

Standard

Space rockets are Big.

Those quaint little pictures that show them next to Nelson’s Column or the Eiffel Tower don’t do them justice, partly because… well. I didn’t even realise how big the Eiffel Tower is until I visited it a few years ago.

So, here's what the first stage of the SpaceX Falcon 9 rocket looks like next to Nelson's Column. Take a close look at the bottom: both photos have people in them.

[Image: Falcon 9 first stage beside Nelson's Column]

The image of Nelson’s Column [linked here] is licensed as Creative Commons Share Alike, which requires that “If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.”

Fortunately, the image from SpaceX [link] was licensed as CC0 1.0 Universal (CC0 1.0) Public Domain Dedication, which doesn’t interfere with my ability to release this as CC-SA-3.

Science, Technology

Falcon 9 to scale

Image
Science, Technology

Railgun notes

Force on the projectile of a railgun:
F = B·I·l
B: Magnetic field
I: Current
l: Length of armature

Current = Voltage / Resistance

Resistivity of seawater:
ρ = 2.00×10^−1 Ω·m (resistivity = resistance per unit length × cross-sectional area, so its units are Ω·m^2/m = Ω·m)

Let the cavity be 1 cm square, and consider a section 1 cm long:

Volume: 1 millilitre
mass (m): ~1 gram = 1e-3 kg
Cross-section: 1e-4 m^2
Armature length (l): 1e-2 m
Resistance: ((2.00×10^−1 Ω·m)·(1e-2 m))/(1e-4 m^2) = 20 Ω
∴ current (I) = Voltage (V) / 20 Ω

Rare earth magnets can manage 1 tesla without much difficulty. Assume that here.

F = 1 T · (V/20 Ω) · (1e-2 m)

Target velocity: 11.2 km/s = escape velocity = 11200 m/s
v = at = 11200 m/s
∴ a = (11200 m/s) / t
s = 1/2 · a · t^2
∴ s = 1/2 · ( (11200 m/s) / t ) · t^2
= 1/2 · (11200 m/s) · t
or: t = s / (1/2 · (11200 m/s))
F = ma = (1e-3 kg) · a
∴ a = F / (1e-3 kg)
∴ t = (11200 m/s) / (F / (1e-3 kg))
= (11200 m/s) · (1e-3 kg) / F
∴ s = 1/2 · (11200 m/s) · (11200 m/s) · (1e-3 kg) / F
∴ s = 1/2 · (11200 m/s) · (11200 m/s) · (1e-3 kg) / ( 1 T · (V/20 Ω) · (1e-2 m) )

Say V = 250 volts:
∴ s = 1/2 · (11200 m/s) · (11200 m/s) · (1e-3 kg) / ( 1 T · (250V/20 Ω) · (1e-2 m) ) = 5.018×10^5 m

Say V = 25,000 volts:
∴ s = 1/2 · (11200 m/s) · (11200 m/s) · (1e-3 kg) / ( 1 T · (25000V/20 Ω) · (1e-2 m) ) = 5018 m

Liquid mercury instead of seawater:
Resistivity: 961 nΩ·m = 9.61e-7 Ω·m
Resistance: ((9.61e-7 Ω·m)·(1e-2 m))/(1e-4 m^2) = 9.6e-5 Ω
Density: 13.56 times water, so m = 13.56e-3 kg
F = 1 T · (V/9.6e-5 Ω) · (1e-2 m)
s = 1/2 · (11200 m/s) · (11200 m/s) · (13.56e-3 kg) / ( 1 T · (V/9.6e-5 Ω) · (1e-2 m) )
@250 volts: s = 32.66 meters
@25kV: s = 0.3266 meters

Power (DC): P = IV where I = V/R,
R = 9.6e-5 Ω
@250 volts: I = 250 V / 9.6e-5 Ω = 2.604e6 amperes
∴ P = 651 megawatts
@25kV: I = 25000 V / 9.6e-5 Ω = 2.604e8 amperes
∴ P = 6.51 terawatts

Duration between rails:
From t = s / (1/2 · (11200 m/s))
@250 volts:
t = 32.66 meters / (1/2 · (11200 m/s)) = 5.832×10^-3 seconds
@25kV:
t = 0.3266 meters / (1/2 · (11200 m/s)) = 5.832×10^-5 seconds

Electrical energy usage:
E = P · t
@250 volts:
E = 651 megawatts · 5.832×10^-3 seconds = 3.797×10^6 joules
@25kV:
E = 6.51 terawatts · 5.832×10^-5 seconds = 3.797×10^8 joules
(For reference, 1 litre of aviation turbine fuel is around 3.5e7 joules)
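These back-of-the-envelope numbers are easy to check mechanically. A minimal Python sketch (same assumptions as the notes: 1 T field, a 1 cm square cavity so the cross-section is 1×10^-4 m², constant current; the `railgun` helper is just mine for illustration):

```python
# Rail length, power, and energy for a 1 cm conducting section
# accelerated to escape velocity (constant-current model, B = 1 T).

def railgun(resistivity, mass, voltage, B=1.0, l=0.01, v=11200.0):
    area = l * l                 # 1 cm square cavity -> 1e-4 m^2
    R = resistivity * l / area   # section resistance, ohms
    I = voltage / R              # current, amperes
    a = B * I * l / mass         # acceleration from F = B*I*l, m/s^2
    s = 0.5 * v**2 / a           # rail length needed, metres
    t = v / a                    # time between the rails, seconds
    P = I * voltage              # electrical power, watts
    return s, P, P * t           # length, power, energy

# Seawater: rho = 0.2 ohm-metres, 1 ml is ~1 g
s_sea, P_sea, E_sea = railgun(0.2, 1e-3, 250.0)
# Mercury: rho = 961 nano-ohm-metres, 1 ml is ~13.56 g
s_hg, P_hg, E_hg = railgun(961e-9, 13.56e-3, 250.0)
print(s_sea, s_hg, P_hg, E_hg)
```

At 250 V this gives roughly 500 km of rail for seawater, and for mercury about 33 m of rail at about 0.65 GW while the projectile is between the rails.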

Standard
AI, Futurology, Science, Software, Technology

The Singularity is Dead, Long Live The Singularity

The Singularity is one form of the idea that machines are constantly being improved and will one day make us all unemployable. Phrased that way, it should be no surprise that discussions of the Singularity are often compared with those of the Luddites from 1816.

“It’s different now!” many people say. Are they right to think that those differences are important?

There have been so many articles and blog posts (and books) about the Singularity that I need to be careful to make clear which type of “Singularity” I’m writing about.

I don’t believe in real infinities. Any of them. Something will get in the way before you reach them. I therefore do not believe in any single runaway process that becomes a deity-like A.I. in a finite time.

That doesn't stop me worrying about "paperclip optimisers" that are just smart enough to cause catastrophic damage (this already happens, even with very dumb A.I.); nor about machines with an IQ of only 200 that can outsmart all but the single smartest human, rendering mental labour as redundant as physical labour already is; nor about machines with an IQ of just 85, which would be enough to make the 15.9% of the world below that level permanently unemployable. (Some claim that machines can never be artistic, but machines are already doing "creative" jobs in music, literature and painting; and even if they were not, there is a limit to how many such jobs there can be.)

So, for “the Singularity”, what I mean is this:

“A date after which the average human cannot keep up with the rate of progress.”

By this definition, I think it’s already happened. How many people have kept track of these things?:

Most of this was unbelievable science fiction when I was born. Between my birth and 2006, only a few of these things became reality; more than half happened or were invented in the 2010s. When Google's AlphaGo went up against Lee Sedol, he expected to win easily, 5-0 or 4-1; instead, he lost 1-4.

If you’re too young to have a Facebook account, there’s a good chance you’ll never need to learn any foreign language. Or make any physical object. Or learn to drive… there’s a fairly good chance you won’t be allowed to drive. And once you become an adult, if you come up with an invention or a plot for a novel or a motif for a song, there will be at least four billion other humans racing against you to publish it.

Sure, we don’t have a von Neumann probe nor even a clanking replicator at this stage (we don’t even know how to make one yet, unless you count “copy an existing life form”), but given we’ve got 3D printers working at 10 nanometers already, it’s not all that unreasonable to assume we will in the near future. The fact that life exists proves such machines are possible, after all.

None of this is to say humans cannot or will not adapt to change. We've been adapting to change for a long time; we have a lot of experience of it; we will adapt more. But there is a question:

“How fast can you adapt?”

Time, as they say, is money. Does it take you a week to learn a new job? A machine that already knows how to do it has a £500 advantage over you. A month? The machine has a £2,200 advantage. You need to get another degree? It has an £80,000 advantage even if the degree was free. That’s just for the average UK salary with none of the extra things employers have to care about.
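Those figures work out to roughly a week, a month, and three years of pay. A minimal sketch (assuming an average UK salary of about £26,400 a year, which is the figure these numbers imply; the function name is mine):

```python
# Earnings a machine "banks" while a human retrains, at an assumed
# average UK salary (the figure the post's numbers imply).
AVERAGE_SALARY = 26_400  # GBP per year (assumption)

def machine_advantage(retraining_years):
    """Foregone earnings while a human retrains and a machine already works."""
    return AVERAGE_SALARY * retraining_years

print(round(machine_advantage(1 / 52)))  # one week of retraining, ~£500
print(round(machine_advantage(1 / 12)))  # one month, ~£2,200
print(round(machine_advantage(3)))       # a three-year degree, ~£80,000
```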

We don’t face problems just from the machines outsmarting us, we face problems if all the people working on automation can between them outpace any significant fraction of the workforce. And there’s a strong business incentive to pay for such automation, because humans are one of the most expensive things businesses have to pay for.

I don't have enough of a feeling for economics to guess what might happen if too many people are unemployed and therefore unable to afford the goods produced by machine labour. All I can say is that when I was in secondary school, when we were all young enough to have no income, pirating software and music was common. (I was the only one with a Mac, so I had to make do with magazine cover CDs for my software, but I think the observation is still worth something.)

Standard