Science

If I’m right about this, it’s luck, not skill.

Number 37 in the ongoing series of “questions that I can’t even articulate correctly without a PhD, and if I Google it I’ll just get a bunch of amateurs who’ve mistaken themselves for Einstein”:

What if the apparent factor of 10¹²⁰ difference between the theoretical energy density of zero-point energy and the observed value of the cosmological constant is because that energy has gone into curving the 6-or-7 extra dimensions of string theory so tightly those extra dimensions can’t be observed?

Testable (hah!) consequence: the Calabi–Yau manifolds of string theory would be larger (less tightly curved) inside Casimir cavities.

Standard
Futurology, Technology

Musk City, Antarctica

One of the criticisms of a Mars colony is that Antarctica is more hospitable in literally every regard (you might argue that the 6-month day and the 6-month night make it less hospitable, to which I would reply that light bulbs exist, and that you’d need light bulbs all year round on Mars anyway to avoid SAD-like symptoms).

I’ve just realised that the 2017 BFR will be able to get you anywhere in Antarctica, from any launch site on Earth, in no more than 45 minutes, at the cost of a long-distance economy passenger flight, and that the Mars plan involves making fuel and oxidiser out of atmospheric CO₂ and water ice, so no infrastructure needs to be shipped conventionally before the first landing.

Standard
Futurology, AI

The end of human labour is inevitable, here’s why

OK. So, you might look at state-of-the-art A.I. and say “oh, this uses too much power compared to a human brain” or “this takes too many examples compared to a human brain”.

So far, correct.

But there are 7.6 billion humans: if an A.I. watches all of them all of the time (easy to imagine given around 2 billion of us already have two or three competing A.I. in our pockets all the time, forever listening for an activation keyword), then there is an enormous set of examples with which to train the machine mind.

“But,” you ask, “what about the power consumption?”

Humans cost a bare minimum of $1.25 per day, even if they’re literally slaves and you only pay for food and (minimal) shelter. Solar power can be as cheap as 2.99¢/kWh.

Combined, that means that any A.I. which uses less than 1.742 kilowatts per human-equivalent-part is beating the cheapest possible human. By way of comparison, Google’s first-generation Tensor Processing Unit uses 40 W when busy, and in the domain of Go it’s about 174,969 times as cost-efficient as a minimum-cost human, because four of them working together as one can teach themselves to play Go better than the best human in… three days.
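
For anyone who wants to re-run that arithmetic, here’s a minimal sketch in Python using only the figures quoted above (the 174,969× figure also depends on an assumption about how long a human takes to reach top-level Go, so it isn’t reproduced here):

```python
# Break-even power for an A.I. competing with the cheapest possible human,
# using the figures quoted above.
human_cost_per_day = 1.25     # USD/day, the bare-minimum human
solar_price_per_kwh = 0.0299  # USD/kWh, cheapest solar quoted above

kwh_per_day = human_cost_per_day / solar_price_per_kwh  # ~41.8 kWh/day
break_even_kw = kwh_per_day / 24                        # ~1.742 kW continuous

tpu_power_w = 40  # first-generation TPU, busy
print(f"Break-even power: {break_even_kw:.3f} kW per human-equivalent-part")
print(f"A 40 W TPU sits a factor of {1000 * break_even_kw / tpu_power_w:.1f} below that budget")
```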

And don’t forget that it’s reasonable for A.I. to have as many human-equivalent-parts as there are humans performing whichever skill is being fully automated.

Skill. Not sector, not factory, skill.

And when one skill is automated away, and the people who performed that skill go off to retrain on something else, then no matter where they are or what they do, there will be an A.I. watching them and learning with them.

Is there a way out?

Sure. All you have to do is make sure you learn a skill nobody else is learning.

Unfortunately, there is a reason why “thinking outside the box” is such a business cliché: humans suck at that style of thinking, even when we know what it is and why it’s important. We’re too social: we copy each other, and we create by remixing more than by genuinely innovating, even when we think we have something new.

Computers are, ironically, better than humans at thinking outside the box: two of the issues in Concrete Problems in AI Safety are there because machines easily stray outside the boxes we are thinking within when we give them orders. (I suspect that one of the things which forces A.I. to need far more examples to learn things than we humans do is that they have zero preconceived notions, and therefore must be equally open-minded to all possibilities.)

Worse: no matter how creative you are, if other humans see you performing a skill that machines have yet to master, those humans will copy you… and then the machines, even today’s machines, will rapidly learn from all those enthusiastic humans, so gleeful about the new trick that keeps them one step ahead of the machines, the new skill they can point to and say “look, humans are special, computers can’t do this”, right up until the computers do it.

Standard
Software

I’m updating my six-year-old Runestone code. Objective-C has changed, Cocos2d has effectively been replaced with SpriteKit, and my understanding of the language has improved massively. Net result? It’s embarrassing.

Once this thing is running as it should, I may rewrite from scratch just to see how bad a project has to be for rewrites to be worth it.

Aside
AI

Would this be a solution to the problem of literal-Genie omniscient AIs?

[stupivalent: neither malevolent nor benevolent, just doing exactly what it was told without awareness that what you said isn’t what you meant]

Imagine an AI that, as per [Robert Miles’ YouTube videos], has a perfect model of reality, that has absolutely no ethical constraints, and that is given the instruction “collect as many stamps as possible”.

Could the bad outcome be prevented if the AI was built to always add the following precondition, regardless of what it was tasked by a human to achieve?

“Your reward function is measured in terms of how well the person who gave you the instruction would have reacted if they had heard, at the moment they gave you the instruction, what you were proposing to do.”
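
To make the idea concrete, here’s a toy sketch in Python of what such a precondition might look like. Every name in it (ApprovalGatedReward, predict_user_reaction, the plan dictionaries) is hypothetical, and the hard part, actually predicting the instruction-giver’s reaction, is just stubbed out:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ApprovalGatedReward:
    """Toy sketch: gate the task's reward by a prediction of how the person who
    gave the instruction would have reacted, at the moment they gave it, had
    they heard the proposed plan."""
    task_reward: Callable[[Any], float]            # e.g. number of stamps a plan collects
    predict_user_reaction: Callable[[Any], float]  # 0.0 (horrified) .. 1.0 (delighted)

    def __call__(self, proposed_plan: Any) -> float:
        approval = self.predict_user_reaction(proposed_plan)
        # The task reward only counts to the extent the user would have approved.
        return approval * self.task_reward(proposed_plan)

# Usage sketch with stubbed-out models:
reward = ApprovalGatedReward(
    task_reward=lambda plan: plan["stamps"],
    predict_user_reaction=lambda plan: 0.0 if plan["converts_world_to_stamp_factories"] else 0.9,
)
print(reward({"stamps": 10**15, "converts_world_to_stamp_factories": True}))   # 0.0
print(reward({"stamps": 500, "converts_world_to_stamp_factories": False}))     # 450.0
```

The open problem, of course, is the predict_user_reaction model itself, which is what the story-telling suggestion below is trying to get at.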

One might argue that Robert Miles’ stamp collector AI is a special case, as it is presupposed to model reality perfectly. I think such an objection is unreasonable: models don’t have to be perfect to cause the problems he described, and models don’t have to be perfect to at least try to predict what someone would have wanted.

How do you train an AI to figure out what people will and won’t approve of? I’d conjecture having the AI construct stories, tell those stories to people, and learn through story-telling what people consider to be “happy endings” and “sad endings”. Well, construct and read, but it’s much harder to teach a machine to read than it is to teach it to write: we’ve done the latter, while the former might be AI-complete.

Disclaimer: I have an A-level in philosophy, but it’s not a good one. I’m likely to be oblivious to things that proper philosophers consider common knowledge. I’ve also been spending most of the last 18 months writing a novel and only covering recent developments in AI in my spare time.

Standard
Science, Technology

Railgun notes #2

[Following previous railgun notes, which have been updated with corrections]

Force:
F = B·I·l
B = 1 tesla

I: Current = Voltage / Resistance
l: Length of armature in meters

F = 1 tesla · V/R · l
F = m · a
∴ a = (1 tesla · V/R · l) / m

Using liquid mercury, let the cavity be 1 cm square, and consider a section 1 cm long:
∴ l = 0.01 m
Resistivity: 961 nΩ·m
∴ Resistance R = (961 nΩ·m · 0.01 m) / (0.01 m^2) = 9.6×10^-7 Ω
Volume: 1 millilitre
∴ Mass m = ~13.56 gram = 1.356e-2 kg
∴ a = (1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg)

Let target velocity = Escape velocity = 11200 m/s = 1.12e4 m/s:
Railgun length s = 1/2 · a · t^2
And v = a · t
∴ t = v / a
∴ s = 1/2 · a · (v / a)^2
∴ s = 1/2 · a · v^2 / a^2
∴ s = 1/2 · v^2 / a
∴ s = 1/2 · ((1.12e4 m/s)^2) / ((1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg))

@250V: s = 0.3266 m (matches previous result)

@1V: s = 81.65 m
I = V/R = 1V / 9.6×10^-7 Ω = 1.042e6 A
P = I · V = 1V · 1.042e6 A = 1.042e6 W

Duration between rails:
t = v / a
∴ t = (1.12e4 m/s) / a
∴ t = (1.12e4 m/s) / ( (1 tesla · V/(9.6×10^-7 Ω) · (0.01 m)) / (1.356e-2 kg) )

(Different formula than before, but produces same values)
@1V: t = 0.01458 seconds

Electrical energy usage: E = P · t
@1V: E = 1.042e6 W · 0.01458 seconds = 1.519e4 joules

Kinetic energy: E = 1/2 · m · v^2 = 8.505e5 joules

Kinetic energy out shouldn’t exceed electrical energy used, so something has gone wrong.
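
To make it easier to poke at, here’s the whole chain as a short Python script. It reproduces the figures above exactly as given; the comments flag two things worth checking: a 1 cm square cavity has a cross-section of 10^-4 m^2 rather than 0.01 m^2, and the model ignores the back-EMF B·l·v (roughly 112 V at escape velocity, far more than the 1 V drive), so the electrical energy needed to keep the current flowing is underestimated, which would explain the imbalance.

```python
# Re-running the railgun numbers above, exactly as given in the post.
B = 1.0            # tesla
l = 0.01           # m, armature length between the rails
R = 9.6e-7         # ohm, as calculated above
                   # (a 1 cm x 1 cm cross-section is 1e-4 m^2, not 0.01 m^2,
                   #  which would give R ~ 9.6e-5 ohm; kept as-is to match the post)
m = 1.356e-2       # kg, 1 ml of mercury
v_target = 1.12e4  # m/s, escape velocity

def run(V):
    I = V / R                        # current, A
    a = B * I * l / m                # acceleration, m/s^2
    s = 0.5 * v_target ** 2 / a      # rail length needed, m
    t = v_target / a                 # time spent between the rails, s
    P = I * V                        # electrical power, W
    E_elec = P * t                   # electrical energy in, J
    E_kin = 0.5 * m * v_target ** 2  # kinetic energy out, J
    print(f"@{V} V: I = {I:.4g} A, s = {s:.4g} m, t = {t:.4g} s,")
    print(f"        P = {P:.4g} W, E_elec = {E_elec:.4g} J, E_kin = {E_kin:.4g} J")

run(250)
run(1)

# E_kin comes out larger than E_elec because the script (like the post) holds
# I = V/R constant while ignoring the motional back-EMF, B*l*v, which reaches
# about 112 V at 11.2 km/s. Driving the same current against that back-EMF
# would take roughly (V + B*l*v)*I of electrical power, so the true electrical
# energy input is far larger than V*I*t, which would account for the imbalance.
```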

Standard
AI, Science

Why do people look by touching?

Every so often, I see someone get irritated that “can I see that?” tends to mean “may I hold that while I look at it?” Given how common this is, and how natural it seems to me to want to hold something while I examine it, I wonder if there is an underlying reason behind it.

Seeing some of the pictures in a recent blog post by Google’s research team, I wonder if that reason may be related to how “quickly” we learn to recognise new objects (quickly in quotes, because we make “one observation” while a typical machine-learning system may need thousands of examples to learn from). What if we also need a lot of examples, but don’t realise we need them because we see them as a continuous sequence?

Human vision isn’t as straightforward as a video played back on a computer, but it’s not totally unreasonable to say we see things “more than once” when we hold them in our hands. Crucially, holding an object while we look at it gives us additional information: its distance, and therefore its size, comes from proprioception (which tells us where our hand is), not just from binocular vision; we can rotate it and see it from multiple angles, or rotate ourselves and see how different angles of light change its appearance; we can bring it closer to our eyes to see fine detail we might have missed at a greater distance; we can rub the surface to see whether markings on it are permanent or temporary.

So, the hypothesis (conjecture?) is this: humans need to hold things to look at them properly, just to gather enough information to learn what they look like in general rather than from a single point of view. Likewise, machine learning systems may seem worse than they really are simply because they lack the capacity to create realistic alternative perspectives of the things they’ve been tasked with classifying.

Not sure how I’d test both parts of this idea. A combination of robot arm, camera, and machine-learning system that manipulates an object it’s been asked to learn to recognise is the easy part. Testing the reverse in humans is harder: one would need to show people a collection of novel objects, half of which they can hold and half of which they can only observe in a way that actively prevents them from seeing multiple perspectives, and then test their relative ability to recognise the objects in each category.

Standard