Philosophy

Mathematical Universe v. Boltzmann Brains

I’m a fan of the Mathematical Universe idea. Or rather, I was. I think I came up with the idea independently of (and before) Max Tegmark, based on an old LiveJournal blog post of mine dated “2007-01-12” (from context, I think that’s YYYY-MM-DD, not YYYY-DD-MM).

Here’s what I wrote then, including typos and poor rhetorical choices:

Ouch, my mind hurts. I've been thinking about The Nature of Reality again. This time, what I have is the idea that from the point of view of current science, the universe can be described as a giant equation: each particle obeys the laws of physics, which are just mathematical formula. Add to this that an mathematical system can exist before anyone defines it (9*10 was still equal to 90 before anybody could count that high), and you get reality existing because its underlying definitions do not contradict each-other.

This would mean that there are a lot of very simple, for lack of a better word, "universes" along the lines of the one containing only Bob and Sarah, where Sarah is three times the age of Bob now, and will be twice his age in 5 years' time. But it would also mean that there are an infinite number of universes which are, from the point of view of an external observer looking at the behaviour of those within them, completely indistinguishable from this one; this would be caused by, amongst other things, the gravitational constant being represented by an irrational number, and the difference between the different universes' gravitational constants varies by all possible fractions (in the everyday sense) of one divided by Graham's number.

Our universe contains representations of many more simple ones (I've described a simple one just now, and you get hundreds of others "universes" of this type in the mathematics books you had at school); you cannot, as an outside observer, interfere with such universes, because all you end up with is another universe. The original still exists, and the example Sarah is still 15. In this sense of existence, the Stargate universe is real because it follows fundamental rules which do not contradict themselves. These rules are of course not the rules the characters within it talk about, but the rules of the Canadian TV industry. There may be another universe where the rules the characters talk about do apply, but I'm not enough of a Stargate nerd to know if they are consistent in that way.

The point of this last little diversion, is that there could be (and almost certainly is) a universe much more complex than this one, which contains us as a component. The question, which I am grossly unqualified to contemplate but tried anyway (hence my mind hurting), is what is the most complex equation possible? (Apart from "God" in certain senses of that word). All I feel certain of at the moment, is that it would "simultaneously" (if you can use that word for something outside of time but containing it) contain every possible afterlife for every possible subset of people.

Tomorrow I will be in Cambridge.

Since writing that, I found out about Boltzmann brains. Boltzmann brains are a problem because, if they exist at all, then it is (probably) overwhelmingly likely that you are one; and if you are one, then it’s overwhelmingly likely that you’re wrong about everything leading up to the belief that they exist, so any belief in them has to be irrational even if it’s also correct.

Boltzmann brains appear spontaneously in systems which are in thermal equilibrium for long enough (“long enough” being around 10^10^50 years for one to appear from quantum fluctuations). But if you have all possible universes, then you have a universe (indeed, an infinite number of universes) where Boltzmann brains are the most common form of brain. Therefore, all the problems that apply to Boltzmann brains must also apply to the Mathematical Universe.

Minds

Dynamic range of Bayesian thought

We naturally use something close to Bayesian logic when we learn and intuit. Bayesian logic doesn’t update when the prior is 0 or 1. Some people can’t shift their opinions, no matter what evidence they have. This is compatible with them having priors of 0 or 1.

It would be implausible for humans to store neural weights with ℝeal numbers. How many bits (base-2) do we use to store our implicit priors? My gut feeling says it’s a shockingly small number, perhaps 4.

How little evidence do we need to become trapped in certainty? Is it even constant (or close to) for all humans?
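The trap can be sketched in code. This is a toy model, not a claim about neurology: assume the implicit prior is stored as one of sixteen evenly spaced probability levels (4 bits), re-quantised after every Bayesian update.

```python
# Toy model: Bayesian updating with a prior stored in only 4 bits.
# Illustrative assumption, not neuroscience: the prior is one of
# 2**bits evenly spaced levels, re-quantised after each update.

def quantize(p, bits=4):
    levels = 2 ** bits - 1
    return round(p * levels) / levels

def update(p, likelihood_ratio, bits=4):
    """One Bayes update in odds form, then re-quantise."""
    if p == 0.0 or p == 1.0:
        return p  # priors of exactly 0 or 1 never move
    odds = (p / (1.0 - p)) * likelihood_ratio
    return quantize(odds / (1.0 + odds), bits)

p = 0.5
for _ in range(10):
    p = update(p, 3.0)   # repeated 3:1 evidence in favour
print(p)                 # 1.0 -- rounded all the way up to certainty

for _ in range(100):
    p = update(p, 0.01)  # overwhelming evidence against
print(p)                 # still 1.0: trapped, exactly as with a prior of 1
```

With only 16 levels, a handful of moderately favourable observations rounds the prior up to exactly 1, after which no amount of contrary evidence can dislodge it.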

AI, Software

Speed of machine intelligence

Every so often, someone tries to boast of human intelligence with the story of Shakuntala Devi — the stories vary, but they generally claim she beat the fastest supercomputer in the world in a feat of arithmetic, finding that the 23rd root of

916,748,676,920,039,158,098,660,927,585,380,162,483,106,680,144,308,622,407,126,516,427,934,657,040,867,096,593,279,205,767,480,806,790,022,783,016,354,924,852,380,335,745,316,935,111,903,596,577,547,340,075,681,688,305,620,821,016,129,132,845,564,805,780,158,806,771

was 546,372,891, and taking just 50 seconds to do so compared to the “over a minute” for her computer competitor.

Ignoring small details, such as the “supercomputer” being named as a UNIVAC 1101 (which was wildly obsolete by the time of this event), this story dates to 1977 — and 41 years of Moore’s Law have made computers mind-defyingly powerful since then. If it were as simple as doubling in power every 18 months, they would now be 2^(41/1.5) ≈ 169,103,740 times faster; but Wikipedia shows even greater improvements on even shorter timescales: going from the Cray X-MP in 1984 to standard consumer CPUs and GPUs in 2017 is a factor of 1,472,333,333 improvement at fixed cost, in only 33 years.

So, how fast are computers now? Well, here’s a small script to find out:

#!python

from datetime import datetime

before = datetime.now()

q = 916748676920039158098660927585380162483106680144308622407126516427934657040867096593279205767480806790022783016354924852380335745316935111903596577547340075681688305620821016129132845564805780158806771

# Repeat the calculation to slow it down enough for an accurate reading
for x in range(0, int(3.45e6)):
    a = q ** (1. / 23)

after = datetime.now()

print(after - before)

It calculates the 23rd root of that number, timing itself as it repeats the calculation 3,450,000 times; the repetition is just to slow it down enough to make the time reading accurate.

Let’s see how long it takes…

MacBook-Air:python kitsune$ python 201-digit-23rd-root.py 
0:00:01.140248
MacBook-Air:python kitsune$

1.14 seconds — to do the calculation 3,450,000 times.

My MacBook Air is an old model from mid-2013, and I’m already beating, by a factor of more than 150 million, someone who was (despite the oddities of the famous story) in the Guinness Book of Records for her mathematical abilities.
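As an aside, the script above only approximates the root in floating point; Python’s arbitrary-precision integers can confirm the exact answer from the story:

```python
# Exact check of the story's claim, using Python's big integers.
q = 916748676920039158098660927585380162483106680144308622407126516427934657040867096593279205767480806790022783016354924852380335745316935111903596577547340075681688305620821016129132845564805780158806771

r = round(q ** (1.0 / 23))  # floating-point estimate of the 23rd root
print(r)                    # 546372891, the answer from the story
assert r ** 23 == q         # and it is exactly the 23rd root
```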

It gets worse, though. The next thing people often say is, paraphrased, “oh, but it’s cheating to program the numbers into the computer when the human had to read it”. Obviously the way to respond to that is to have the computer read for itself:

from sklearn import svm
from sklearn import datasets
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm

# Find out how fast it learns
from datetime import datetime
# When did we start learning?
before = datetime.now()

clf = svm.SVC(gamma=0.001, C=100.)
digits = datasets.load_digits()
size = len(digits.data) // 10  # hold out the last tenth for reading
clf.fit(digits.data[:-size], digits.target[:-size])

# When did we stop learning?
after = datetime.now()
# Show user how long it took to learn
print("Time spent learning:", after - before)

# When did we start reading?
before = datetime.now()
maxRepeats = 100
for repeats in range(0, maxRepeats):
	for x in range(0, size):
		data = digits.data[-x]
		prediction = clf.predict(digits.data[-x].reshape(1, -1))

# When did we stop reading?
after = datetime.now()
print("Number of digits being read:", size * maxRepeats)
print("Time spent reading:", after - before)

# Show mistakes:
for x in range(0, size):
	data = digits.data[-x]
	target = digits.target[-x]
	prediction = clf.predict(digits.data[-x].reshape(1, -1))
	if target != prediction:
		print("Target: " + str(target) + " prediction: " + str(prediction))
		grid = data.reshape(8, 8)
		plt.imshow(grid, cmap=cm.Greys_r)
		plt.show()
This learns to read using a standard dataset of hand-written digits, then reads all the digits in that set a hundred times over, then shows you what mistakes it’s made.

MacBook-Air:AI stuff kitsune$ python digits.py 
Time spent learning: 0:00:00.225301
Number of digits being read: 17900
Time spent reading: 0:00:02.700562
Target: 3 prediction: [5]
Target: 3 prediction: [5]
Target: 3 prediction: [8]
Target: 3 prediction: [8]
Target: 9 prediction: [5]
Target: 9 prediction: [8]
MacBook-Air:AI stuff kitsune$ 

0.225 seconds to learn to read, from scratch; then it reads just over 6,600 digits per second. This is comparable with both the speed of a human blink (0.1–0.4 seconds) and also with many of the claims* I’ve seen about human visual processing time, from retina to recognising text.

The A.I. is not reading perfectly, but looking at the mistakes it does make, several of them are forgivable even for a human. They are hand-written digits, and some of them look, even to me, more like the number the A.I. saw than the number that was supposed to be there — indeed, the human error rate for similar examples is 2.5%, while this particular A.I. has an error rate of 3.35%.

* I refuse to assert those claims are entirely correct, because I don’t have any formal qualification in that area, but I do have experience of people saying rubbish about my area of expertise — hence this blog post. I don’t intend to make the same mistake.

Fiction, Humour

Mote of smartdust

Matthew beheld not the mote of smartdust in his own eye, for it was hiding itself from his view with advanced magickal trickery.

His brother Luke beheld the mote, yet within his brother’s eye was a beam of laser light that blinded him just as surely.

Luke went to remove the mote of dust in Matthew’s eye, but judged not correctly, and became confused.

Mark looked upon the brothers, and decided it was good.

Science, SciFi, Technology

Kessler-resistant real-life force-fields?

Idle thought at this stage.

The Kessler syndrome (also called the Kessler effect, collisional cascading or ablation cascade), proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low earth orbit (LEO) is high enough that collisions between objects could cause a cascade where each collision generates space debris that increases the likelihood of further collisions.

Kessler syndrome, Wikipedia

If all objects in Earth orbit were required to have an electrical charge (all negative, let’s say), how strong would that charge have to be to prevent collisions?

Also, how long would they remain charged, given the ionosphere, solar wind, Van Allen belts, etc?

Also, how do you apply charge to space junk already present? Rely on it picking up charge when it collides with new objects? Or is it possible to use an electron gun to charge them from a distance? And if so, what’s the trade-off between beam voltage, distance, and maximum charge (presumably shape dependent)?

And if you can apply charge remotely, is this even the best way to deal with them, rather than collecting them all in a large net and de-orbiting them?
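For the first question, here is a rough back-of-envelope sketch; every number in it is my own illustrative assumption, not anything authoritative. Take two 1 kg objects closing at a typical LEO relative speed of 10 km/s, and ask what like charge would make their mutual Coulomb repulsion equal their kinetic energy before they get within a metre of each other:

```python
# Back-of-envelope: charge needed so Coulomb repulsion stops two objects
# before a 1 m closest approach. All numbers are illustrative assumptions.
import math

k = 8.9875e9          # Coulomb constant, N*m^2/C^2
m1 = m2 = 1.0         # object masses, kg (assumed)
v_rel = 10e3          # closing speed, m/s (typical for LEO collisions)
d_min = 1.0           # required miss distance, m (assumed)

mu = m1 * m2 / (m1 + m2)          # reduced mass of the two-body problem
e_kinetic = 0.5 * mu * v_rel**2   # energy the repulsion must absorb, J

# With equal charges q on both objects, we need k*q^2/d_min >= e_kinetic:
q = math.sqrt(e_kinetic * d_min / k)
print("charge needed: %.3f C" % q)

# Sanity check on how extreme that is: surface potential of a 10 cm sphere
r_sphere = 0.1
v_surface = k * q / r_sphere
print("surface potential: %.2e V" % v_surface)
```

About 0.05 coulombs, which sounds modest until you look at the surface potential: gigavolts on a 10 cm sphere, far beyond what real materials can hold onto, even before the ionosphere and solar wind start bleeding the charge away. Hence “idle thought”.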

Health, Psychology

Alzheimer’s

It’s as fascinating as it is sad to watch a relative fall, piece by piece, to Alzheimer’s. I had always thought it was just anterograde- and progressive retrograde amnesia of episodic memory, but it’s worse. It’s affecting:

  • Her skills (e.g. how to get dressed, or how much you need to chew in order to swallow).
  • Her semantic knowledge (e.g. [it is dark outside] ⇒ [it is night], or what a bath is for).
  • Her working memory (seems to be reduced to about 4 items: she can draw triangles and squares, but not higher polygons unless you walk her through it; and if you draw ◯◯▢◯▢▢ then ask her to count the circles, she says “one (pointing at the second circle), two (pointing at the third circle), that’s a square (pointing at the third square), three (pointing at the second circle again), four (pointing at the third circle again), that’s a pentagon (pointing at the pentagon I walked her through drawing)”; and if she is looking at a group of five cars, she’ll call it “lots of cars” rather than instantly seeing that it’s five).
  • The general concept of things existing on her left, as she looks at them. (I always thought this was an urban legend or a misunderstanding of hemianopsia, but she will look at a plate half-covered in food and declare it finished, and rotating that plate 180° will enable her to eat more; if I ask her to draw a picture of me, she’ll stop at the nose and miss my right side (her left); if we get her to draw a clock she’ll usually miss all the numbers, but if prompted to add them will only put them on the side that should be clockwise from 12 to 6).
  • Connected-ness of objects, such as drawing the handle of a mug connected directly to the rim.
  • Object permanence — if she can’t see a thing, sometimes she forgets the thing exists. Fortunately not all the time, but she has asserted non-existence separately to “I’ve lost $thing”.
  • Vocabulary. I’m sure everyone can think of a fine example of word soup (I have examples, both of things I’ve said and of frustratingly bad communications from a client), but this is happening with high and increasing frequency — last night’s example was “this apple juice is much better than the apple juice”.

I know vision doesn’t work the way we subjectively feel it works. I hypothesise that it is roughly:

  1. Eyes →
  2. Object and feature detection, similar to current machine vision →
  3. Something that maps detected objects and features into a model of reality →
  4. “Awareness” is of that model

It fits with the way she’s losing her mind. Bit by bit, it seems her vision is diminishing from a world full of objects to TV static with a few objects floating freely in the noise.

[Image: an artistic impression of her vision. The picture is mostly hidden by noise, but a red British-style telephone box is visible, along with a shadow and a flag; the “telephone” sign floats freely, away from the box.]

How might she see the world?

Science

I am not a quantum physicist

I am not a quantum physicist. I do not write this prediction thinking that it is true or a novel deduction on the nature of reality. I write this prediction in order to test my own understanding of quantum physics.

Given all particles are fields:

  1. Fermions are those fields where probability is in the range 0-1 (or possibly -1 to +1, depending on antimatter).
  2. Bosons are those fields where probability can take on any positive or zero value (possibly also any negative value, depending on antimatter).

This “explains” why two fermions cannot occupy the same quantum state, yet bosons can. Inverted commas, because this might turn out not to have any explanatory power.

I’m fine with that, just as I’m fine with being wrong. I am not a quantum physicist. I don’t expect to be right. It would be nicer to find I’m wrong rather than not even wrong, but even that’s OK — that’s why I’m writing this down before I see if someone else has already written about this.
