Category: Philosophy
-
LaMDA, Turing Test, sentience
A chatbot from Google called LaMDA made the headlines last weekend. It seems it convinced Blake Lemoine (someone at Google) that it was sentient. While, like the majority of real AI researchers[0], I do not actually think it is sentient, the transcripts make it plain why it caused this belief. When Alan Turing originally described…
-
Arguments, hill climbing, the wisdom of the crowds
You ever had an argument which seems to go nowhere, where both sides act like their position is self-evident and obvious, that the other person “is clearly being deliberately obtuse”? I hope that’s common, and not just one of my personal oddities. Ahem. In the current world of machine learning (yes these two things are…
-
To believe falsely
“If there were a verb meaning ‘to believe falsely’, it would not have any significant first person, present indicative.” —Ludwig Wittgenstein Would “confusion” not be such a first person, present indicative? Or am I confused about the meaning of the Wittgenstein quote? (Update a few days later: Turns out I forgot the phrase “cognitive dissonance”,…
-
Sufficient technology
Let’s hypothesise sufficient brain scans. As far as I know, we don’t have better than either very low resolution full-brain imaging (millions of synapses per voxel), or very limited high resolution imaging (thousands of synapses total), at least not for living brains. Let’s just pretend for the sake of argument that we have synapse-resolution full-brain…
-
Morality, thy discount is hyperbolic
One well known failure mode of Utilitarian ethics is a thing called a “utility monster” — for any value of “benefit” and “suffering”, it’s possible to postulate an entity (Bob) and a course of action (The Plan) where Bob receives so much benefit that everyone else can suffer arbitrarily great pain and yet you “should”…
-
Memetic monocultures
Brief kernel of an idea: Societies deem certain ideas “dangerous”. If it is possible to technologically eliminate perceived dangers, we can be tempted to do so, even when we perceive wrongly. Group-think has led to catastrophic misjudgments. This represents a potential future “great filter” for the Fermi paradox. It does not apply to previous attempts at…
-
Dot product morality
The first time I felt confused about morality was as a child. I was about six, and saw a D&D-style role-playing magazine. On the cover, there were two groups preparing to fight, one dressed as barbarians, the other as soldiers or something. I asked my brother “Which are the goodies and which are the baddies?”,…
-
A life’s work
There are 2.5 billion seconds in a lifetime and (as of December 2018) 7.7 billion humans on the planet. If you fight evil one-on-one, if you refuse to pick your battles, if only 1% of humans are sociopaths, you’ve got 21 waking seconds per opponent — and you’ll be fighting your whole life, from infancy…
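The arithmetic in that excerpt can be sanity-checked with a few lines of Python. The waking fraction is my assumption (roughly 16 waking hours per day); the other figures are taken from the excerpt itself:

```python
# Rough check of the "21 waking seconds per opponent" claim.
LIFETIME_SECONDS = 2.5e9      # ~79 years, as stated in the excerpt
POPULATION = 7.7e9            # December 2018 figure, as stated
SOCIOPATH_FRACTION = 0.01     # "only 1% of humans", as stated
WAKING_FRACTION = 2 / 3       # assumption: ~16 waking hours per day

opponents = POPULATION * SOCIOPATH_FRACTION          # 77 million
waking_seconds = LIFETIME_SECONDS * WAKING_FRACTION  # ~1.67 billion
per_opponent = waking_seconds / opponents

print(f"{per_opponent:.1f} waking seconds per opponent")
```

With those assumptions the result comes out between 21 and 22 seconds, consistent with the figure quoted.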
-
One person’s nit is another’s central pillar
If one person believes something is absolutely incontrovertibly true, then my first (and demonstrably unhelpful) reaction is that even the slightest demonstration of error should demolish the argument. I know this doesn’t work. People don’t make Boolean-logical arguments, they go with gut feelings that act much like Bayesian-logical inferences. If someone says something is incontrovertible,…
-
Mathematical Universe v. Boltzmann Brains
I’m a fan of the Mathematical Universe idea. Or rather, I was. I think I came up with the idea independently of (and before) Max Tegmark, based on one of my old LiveJournal blog posts dated “2007-01-12” (from context, I think that’s YYYY-MM-DD, not YYYY-DD-MM). Here’s what I wrote then, including typos and poor rhetorical choices:…