One day, I might learn enough physics that my questions don’t sound like nonsense to physics graduates. Today is not that day — my working assumption is I sound like a freshman at best, and a homeopath at worst, and will remain so until I put numerical simulations of standard results in general relativity, quantum mechanics, and the Navier-Stokes equations onto my GitHub page.
The baryon asymmetry problem is that matter and antimatter are always created and destroyed in equal quantities, yet the universe clearly has more of one than the other.
If you can make or destroy one without the other, in isolation, then you also get to violate charge conservation, which would mean that quantum field theory is wrong because something something Noether’s theorem. (Of course quantum field theory might be wrong; it’s known that general relativity and quantum physics can’t both be true because if they were both true the universe would’ve collapsed instantly at the very beginning).
The only way you can conserve charge but take antiparticles out of the system is if the process requires an equal number of antiprotons and positrons.
Both of these options — either violate charge conservation or take out multiple particles at once — have interesting consequences which can probably be tested, although not by me, given my degree is in a totally unrelated field.
If charge conservation is violated, then the universe should have a net electric charge. This charge should change over time, as there are still natural processes creating positron-electron pairs but not (at least to the same degree) proton-antiproton pairs. I don’t understand what this would do to the Einstein field equations (only that it would do something; given the effect on black holes I have to ask if it could be dark energy?), but I’m fairly sure lots of free electrons in the interstellar or intergalactic medium should be noticeable.
On the other hand, suppose antiprotons do combine with positrons into some composite (possibly but not necessarily an antineutron, given how conjectural this already is), and that composite is either stable or able to decay into something other than an antiproton and a positron. The obvious question this raises is: could this be dark matter?
The obvious counter-point to the question “what if antineutrons are stable” is “surely someone would have noticed”, which is a fair point that I cannot answer — I genuinely do not know if anyone would have noticed yet, given how hard it is to make antimatter, how hard it is to trap antimatter, how hard it is to trap even normal neutrons, and the free-neutron half-life.
I can say other people have thought about neutron-antineutron oscillations, which might well solve the baryon asymmetry problem all by itself without any consequences for dark energy/dark matter: https://arxiv.org/abs/0902.0834
(Another thing I definitely don’t know, and which my physics MOOC won’t teach me, is how to separate legit ArXiv papers from the bogus ones; that reflects badly on me, not on the authors of that paper).
Brought to you by me noticing that when I watch Kristen Bell playing an awful person in The Good Place, I feel as stressed as when I have to consciously translate to or from German in real-life situations and not just language apps.
- System 2 thinking (effortful) is stressful to the mind in the same way that mild exercise is stressful to the body.
- Having to think in system 2 continuously is sometimes possible, but how long it can be sustained is not a universal constant.
- Social interactions are smoothed by being able to imagine what other people are thinking.
- If two minds think in different ways, one or both has to use system 2 thinking to forecast the other. Autistic and neurotypical minds are one of many possible examples in the human realm. Cats and dogs are a common non-human example (“Play! Bark!” “Arg, big scary predator! Hiss!”)
- Stress makes personal touches and eye contact unpleasant.
Autistic people will:
- Be much less stressed when they are only around other autistic people.
- Look each other in the eye.
- Be comfortable hugging each other.
- Be less likely than allistic people to watch soap operas.
- Jokes based on mind state will not transfer, but puns will. (The opposite of the German-English joke barrier, as puns don’t translate but The Ministry Of Silly Walks does).
- Reality shows will make no sense, and be somewhere between comedic and confusing in the same way as memes based on “type XYZ and let autocomplete finish the sentence!”
- How do interests, e.g. sports, music, painting, fit into this?
- Does “gender” work like this? Memes such as “Men are from Mars, women are from Venus” or men finding women confusing and unpredictable come to mind. Also, men’s clubs are a thing, as are women’s. Also the existence of transgender people would pattern-match here. That said, it might just be a cultural barrier, because any group can have a culture and culture is not merely a synonym for political geography.
I’ve only seen this concept in reference to gravitational fields. I suspect an equivalent may exist for electric fields, which may be useful for developing an improved electrically confined fusion reactor (AKA the sort that school students make every so often as science fair projects, which currently have so many flaws that almost nobody expects them to ever become useful power sources).
Why would it be useful? Let’s begin with the current problem: electrostatic fusion reactors have two grids, one a cathode and the other an anode, to create the electric fields which accelerate the ions enough for nuclear fusion to happen. Unfortunately, fusion is very unlikely compared to the ions simply bouncing off each other, which means even a very spacious grid — 99% empty — isn’t empty enough, and most of the power gets wasted by the few ions which hit the grid each time they fly past.
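A toy calculation shows why 99% transparency isn’t enough (the 1% interception per pass and the pass counts are round numbers of mine, not measured values): ions have to recirculate through the grid many times before fusion becomes likely, and even a small per-pass loss compounds.

```python
# Toy model: fraction of ions still circulating after N passes through
# a grid that intercepts 1% of them per pass. Illustrative numbers only.
intercept_per_pass = 0.01

def surviving_fraction(passes: int) -> float:
    return (1 - intercept_per_pass) ** passes

for n in (10, 100, 1000):
    print(n, surviving_fraction(n))  # ≈ 0.90, 0.37, 4e-5
```

By a thousand passes, essentially every ion has hit the grid, which is why the grid losses dominate long before fusion gets a chance.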
Some designs try to get around the grid problem. For example, Robert Bussard (yes, that one) designed the Polywell reactor, which uses a virtual cathode: a cloud of magnetically confined electrons. Another possibility I’ve never had the time (or, probably, the resources) to simulate is finding out whether the so-called “star mode” of a Farnsworth Fusor, where the ions primarily flow through the gaps in the grids, might be caused by a magnetic field generated by the current flowing between the grids — if it is, you could enhance that field relatively easily, and boost the efficiency. This probably still won’t make it a net power producer (anything I can think of will have been thought of a hundred times already by the professionals), but it might still be interesting for other things.
This brings me to the idea of a Lagrange point as a virtual cathode, where the virtual cathode is the dynamic balance of the electric charges as they move.
It might not be possible at all (gravity is always attractive, unlike electric charge, and the equivalence between a few point-like masses and a few point-like charges may break down once you have a whole plasma); and even if it is possible, it might require a prohibitive power consumption (accelerating a charge produces electromagnetic radiation, slowing the charge down in the process).
Of course, the equivalence of moving electric fields and magnetic fields makes me wonder, again, whether a hybrid electric- and magnetic-confinement fusion reactor could do better than either alone.
Disclaimer: I’m a software engineer, not a doctor of physics. If a proper scientist disagrees with me, trust them.
Homeopathy, for those who have never learned the details, claims that the potency of a treatment can be increased by repeatedly diluting it. There are many scales: the C-scale is “how many times has this been diluted by a factor of 100”, the X-scale “…by a factor of 10”. I’d say “clearly nonsense”, but I fell for it when I was a teenager.
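The dilution arithmetic is easy to check. Even starting from a full mole of active ingredient (an extremely generous assumption of mine), the expected number of molecules left drops below one at 12C:

```python
AVOGADRO = 6.022e23  # molecules per mole

# Expected molecules of active ingredient remaining after each 1:100
# dilution step, assuming we start with one whole mole of it.
molecules = AVOGADRO
for c in range(1, 31):
    molecules /= 100
    if molecules < 1:
        print(f"{c}C: fewer than one molecule expected")
        break
```

So the popular 30C preparations are, statistically, pure solvent eighteen dilutions over.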
Fermi paradox: there are so many stars in the observable universe — tens of sextillions (short scale) — that even fairly pessimistic assumptions imply we should be surrounded by noisy aliens… so why can’t we see any?
One of the most common resolutions to the Fermi paradox is that there are one or more “great filters” which make it vanishingly unlikely that any of those stars has produced an intergalactic expansionist civilisation. There are good reasons to expect direct intergalactic expansion rather than starting with ‘mere’ interstellar expansion, and (rather more surprisingly) good reasons to think we’re within spitting distance of the technology required, but that only makes the non-technological problems all the more severe. There are a lot of unknowns here (we’ve only got ourselves as an example), so the space between “where we are now” and “owning the universe” is filled entirely with underpants gnomes, and that’s where homeopathy fits in, in two separate ways.
First, as a categorical example. Homeopathy represents an archaic way of thinking, yet it’s very popular. It’s simple, it’s friendly, it is a viral meme. There are many of these, some of them quite destructive, and while it’s nice to think nature is in balance — especially when we’re thinking of something we’re really proud of, such as our own minds — the truth is that nature (including humans) often goes off the deep end and only sometimes recovers. It’s very easy for me to believe that an anti-rational meme such as homeopathy could either destroy a civilisation entirely, or prevent it from developing into a proper space-faring civilisation.
Second, as an analogy. Dilution. It’s not the first dilution of a homeopathic preparation which removes all atoms of active ingredient from the result, but the repeated dilution. If there are, say, twenty things which have an independent 50% chance of holding back or wiping out a civilisation before it can set up a colony — AI; bioweapons; cyber-warfare; global climate change (doesn’t matter if artificial warming or natural ice age); cascade agricultural collapse; mineral resource exhaustion; grey goo; global thermonuclear war; cosmic threats collectively from noisy stars whose CMEs make electricity impractical to asteroids and gamma ray bursts; anti-intellectualism movements, whether deliberate or not; feedback between cheap genetic engineering and genetically-defined super-stimulus making all the citizens a biologically vulnerable monoculture … — twenty items each with a 50% chance adds up to million-to-one odds (million-ish, but if you care about the difference you’re taking the wrong lesson from this).
Yes, one-million-to-one is almost irrelevant compared to ten sextillion. Odds of (100e9)^2-to-one would require 73 such events, not 20, but this combines with the previous Fermi estimates rather than replacing them: 20 such events reduce the overall problem by a factor of a million, no matter what your previous estimate was, and both 20 events and 50% chances are just round numbers, not real ones. Unfortunately, we don’t know how many small filters we might face: as the Great Recession was starting, someone said that no two recessions are the same because we learn from all our mistakes, and so each mistake has to be a new one. Sadly it’s worse even than that, as humanity as a whole does repeat even its economic mistakes; so even if we weren’t re-rolling some of our previously-successful dice because we keep thinking “we’re too big to fail”, humans don’t know all the ways we can fail to survive.
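The round-number arithmetic above checks out (the coin-flip framing is my restatement of the same figures):

```python
import math

# Chance of surviving 20 independent filters, each with 50% survival odds
p_survive = 0.5 ** 20
print(p_survive)  # ≈ 9.5e-7, i.e. "million-ish-to-one"

# Number of such coin-flips needed to offset (100e9)**2 ≈ 1e22 stars
filters_needed = math.log2((100e9) ** 2)
print(filters_needed)  # ≈ 73.1
```

Twenty flips get you a factor of a million; it takes about 73 to swallow ten sextillion on their own.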
The Great Filter doesn’t have to be something that civilisations encounter exactly once and in much the same way a sentence encounters a full stop — it can be the death of a thousand paper-cuts.
If we do finally reach the stars, we may find the universe is much more interesting than it currently seems. Instead of Vulcans and warp drive, we might find hippy space-elves communing with their trees via mind-warping drugs… and if we don’t, instead of wiping ourselves out, we might become the hippy space-elves that some sentient octopus discovers while going boldly where no sentient octopus has gone before.
I did an A-level in Philosophy. (For non-UK people, A-levels are a 2-year course that happens after high school and before university).
I did it for fun rather than good grades — I had enough good grades to get into university, and when the other A-levels required my focus, I was fine putting zero further effort into the Philosophy course. (Something which was very clear when my final results came in).
What I didn’t expect at the time was that the rapid development of artificial intelligence in my lifetime would make it absolutely vital that humanity develops a concrete and testable understanding of what counts as a mind, as consciousness, as self-awareness, and as capability to suffer. Yes, we already face that problem in the form of animal suffering and whether meat can ever be ethical, but the problem which already exists, exists only for our consciences: the animals can’t take over the world and treat us the way we treat them. An artificial mind would be almost totally pointless if it were as limited as an animal, and the general aim is quite a lot higher than that.
Some fear that we will replace ourselves with machines which may be very effective at what they do, but don’t have anything “that it’s like to be”. One of my fears is that we’ll make machines that do “have something that it’s like to be”, but who suffer greatly because humanity fails to recognise their personhood. (A paperclip optimiser doesn’t need to hate us to kill us, but I’m more interested in the sort of mind that can feel what we can feel).
I don’t have a good description of what I mean by any of the normal words. Personhood, consciousness, self awareness, suffering… they all seem to skirt around the core idea, but to the extent that they’re correct, they’re not clearly testable; and to the extent that they’re testable, they’re not clearly correct. A little like the maths-vs.-physics dichotomy.
Consciousness? Versus what, subconscious decision making? Isn’t this distinction merely system 1 vs. system 2 thinking? Even then, the word doesn’t tell us what it means to have it objectively, only subjectively. In some ways, some forms of A.I. look like system 1 (fast, but error-prone, based on heuristics), while other forms look like system 2 (slow, careful, deliberatively weighing all the options).
Self-awareness? What do we even mean by that? It’s absolutely trivial to make an A.I. aware of its own internal states, and even necessary for anything more than a perceptron. Do we mean a mirror test? (Or a non-visual equivalent for non-visual entities, including both blind people and smell-focused animals such as dogs). That at least can be tested.
Capability to suffer? What does that even mean in an objective sense? Is suffering equal to negative reinforcement? If you have only positive reinforcement, is the absence of reward itself a form of suffering?
Introspection? As I understand it, the human psychology of this is that we don’t really introspect, we use system 2 thinking to confabulate justifications for what system 1 thinking made us feel.
Qualia? Sure, but what is one of these as an objective, measurable, detectable state within a neural network, be it artificial or natural?
Empathy or mirror neurons? I can’t decide how I feel about this one. At first glance, if one mind can feel the same as another mind, that seems like it should capture the general ill-defined concept I’m after… but then I realised I don’t see why that would follow, and had the temporarily disturbing mental image of an A.I. which can perfectly mimic the behaviour corresponding to the emotional state of someone it’s observing, without actually feeling anything itself.
And then the disturbance went away as I realised this is obviously trivially possible, because even a video recording fits that definition… or, hey, a mirror. A video recording somehow feels like it’s fine, it isn’t “smart” enough to be imitating, merely accurately reproducing. (Now I think about it, is there an equivalent issue with the mirror test?)
So, no, mirror neurons are not enough to be… to have the qualia of being consciously aware, or whatever you want to call it.
I’m still not closer to having answers, but sometimes it’s good to write down the questions.
Idle thought at this stage.
The Kessler syndrome (also called the Kessler effect, collisional cascading or ablation cascade), proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low earth orbit (LEO) is high enough that collisions between objects could cause a cascade where each collision generates space debris that increases the likelihood of further collisions.
If all objects in Earth orbit were required to have an electrical charge (all negative, let’s say), how strong would that charge have to be to prevent collisions?
Also, how long would they remain charged, given the ionosphere, solar wind, Van Allen belts, etc?
Also, how do you apply charge to space junk already present? Rely on it picking up charge when it collides with new objects? Or is it possible to use an electron gun to charge them from a distance? And if so, what’s the trade-off between beam voltage, distance, and maximum charge (presumably shape dependent)?
And if you can apply charge remotely, is this even the best way to deal with them, rather than collecting them all in a large net and de-orbiting them?
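A back-of-envelope sketch of the first question, where every number is an assumption of mine (1 kg objects, 10 km/s relative speed, typical for LEO crossings, and a required miss distance of 1 m): equate the Coulomb potential energy at closest approach with the kinetic energy in the centre-of-mass frame.

```python
import math

K = 8.9875e9   # Coulomb constant, N*m^2/C^2
m = 1.0        # assumed mass of each object, kg
v_rel = 1e4    # assumed relative speed, m/s
r_min = 1.0    # assumed required miss distance, m

mu = m * m / (m + m)           # reduced mass of the two-body problem
ke = 0.5 * mu * v_rel ** 2     # kinetic energy in the centre-of-mass frame
q = math.sqrt(ke * r_min / K)  # equal charge needed on each object

# Surface potential this implies on a 0.5 m-radius conducting sphere
R = 0.5
v_surface = K * q / R
print(f"charge ≈ {q:.3f} C, surface potential ≈ {v_surface:.2e} V")
```

A twentieth of a coulomb implies hundreds of megavolts on a small object, which suggests charging can’t stop head-on collisions; at best it might nudge near-misses, which is perhaps why nets and de-orbiting get more attention.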
I am not a quantum physicist. I do not write this prediction thinking that it is true or a novel deduction on the nature of reality. I write this prediction in order to test my own understanding of quantum physics.
Given all particles are fields:
- Fermions are those fields where probability is in the range 0-1 (or possibly -1 to +1, depending on antimatter).
- Bosons are those fields where probability can take on any positive or zero value (possibly also any negative value, depending on antimatter).
This “explains” why two fermions cannot occupy the same quantum state, yet bosons can. Inverted commas because this might turn out not to have any explanatory power.
I’m fine with that, just as I’m fine with being wrong. I am not a quantum physicist. I don’t expect to be right. It would be nicer to find I’m wrong rather than not even wrong, but even that’s OK — that’s why I’m writing this down before I see if someone else has already written about this.
A transistor in a CPU is smaller and faster than a synapse in one of your brain’s neurons by about the same ratio that a wolf is smaller and faster than a hill.
CPU: 11nm transistors, 30GHz transition rate (transistors flip significantly faster than overall clock speed)
Neurons: 1µm synapses, 200Hz pulse rate
Wolves: 1.6m long, average range 25 km/day
Hills: 145m tall (widely variable, of course), continental drift 2 cm/year
1µm/11nm ≅ 145m/1.6m
200Hz/30GHz ≅ (Continental drift 2 cm/year) / (Average range 25 km/day)
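Checking the analogy numerically (the unit conversions are mine): both size ratios come out near 91, and both speed ratios land in the same order of magnitude, within a factor of about three.

```python
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

size_ratio_chips = 1e-6 / 11e-9  # synapse / transistor ≈ 91
size_ratio_land = 145 / 1.6      # hill / wolf ≈ 91

speed_ratio_chips = 30e9 / 200              # transistor rate / synapse rate
wolf_speed = 25_000 / SECONDS_PER_DAY       # m/s, from 25 km/day
hill_speed = 0.02 / SECONDS_PER_YEAR        # m/s, from 2 cm/year
speed_ratio_land = wolf_speed / hill_speed  # wolf / hill

print(size_ratio_chips, size_ratio_land)    # both ≈ 91
print(speed_ratio_chips, speed_ratio_land)  # ≈ 1.5e8 and ≈ 4.6e8
```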
#37 in the ongoing series of “questions that I can’t even articulate correctly without a PhD, and if I Google it I’ll just get a bunch of amateurs who’ve mistaken themselves for Einstein”:
What if the apparent factor of 10¹²⁰ difference between the theoretical energy density of zero-point energy and the observed value of the cosmological constant is because that energy has gone into curving the 6-or-7 extra dimensions of string theory so tightly those extra dimensions can’t be observed?
Testable (hah!) consequence: the Calabi–Yau manifolds of string theory would be larger (less tightly curved) inside Casimir cavities.