book reviews
physics books reviewed by T. Nelson
Beyond the Standard Model of Elementary Particle Physics
by Yorikiyo Nagashima
Wiley-VCH Verlag GmbH, 2014, 628 pages
reviewed by T. Nelson
This book is a grab-bag of theories that try to extend the Standard Model (SM) of particle physics. Since the problems those theories address haven't been solved, it's essentially a compendium of theories that don't yet work.
The SM describes the particles we're all familiar with: electrons, quarks, neutrinos, photons, etc., and how they interact. It's accurate to some ungodly number of decimal places. It is also very complicated. But it doesn't tell us what we want to know. Why, for instance, are protons and antiprotons made up of three quarks with fractional charges (±1/3 and ±2/3) that magically always add up to exactly ±1, while the electron charge is exactly −1? Why are there muons and tau particles, which are just like electrons, only big? Why are there three kinds of neutrinos that ‘oscillate’ or change into each other over time, and why are the masses of the particles what they are?
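Spelled out for the proton (uud) and antiproton, the bookkeeping looks like this (just arithmetic, not anything from the book):

$$ Q_p = +\tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \qquad Q_{\bar{p}} = -\tfrac{2}{3} - \tfrac{2}{3} + \tfrac{1}{3} = -1 . $$

Nothing in the SM itself explains why the quark and lepton charges come in exactly these ratios; that's one of the things grand unified theories were invented to explain.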
Nagashima, who wrote the two-volume textbook Elementary Particle Physics, discusses possible answers here. This book has long chapters on the Higgs boson, neutrinos, grand unified theories, supersymmetry, axions, dark matter, and cosmology. The chapters alternate between theory and phenomenology, giving the reader an occasional breather from the math. Even so, this book is utterly exhausting to read.
Neutrinos cause stars to burn for 10 billion years, so they're essential for life. Neutrinos feel only the weak force and gravity. They're the most abundant matter particle in the universe—a billion times more abundant than each of the others. In a type II supernova, the high density makes the star opaque to neutrinos, so they get thermalized. Thermal neutrinos are stored behind the shock wave and revive the outgoing shock wave, so they're important in stellar evolution.
One theory that explains the neutrino's mass is the “seesaw” mechanism. This theory invokes Majorana particles, which are spin-1/2 particles that are their own antiparticles. In fact, the neutrino may actually be a Majorana particle. All the neutrinos observed so far are left-handed. If a right-handed neutrino exists, it doesn't interact through the weak or EM force (it is ‘sterile’), so it's undetectable.
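The ‘seesaw’ is easiest to see in its simplest (type I) form, where the left-handed neutrino and a heavy right-handed Majorana partner share a mass matrix (standard textbook material, my summary rather than a quote from Nagashima):

$$ M = \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix}, \qquad m_\nu \approx \frac{m_D^2}{M_R} \quad \text{for } M_R \gg m_D . $$

Here m_D is an ordinary Dirac mass and M_R the heavy Majorana mass. Put m_D near the electroweak scale (~100 GeV) and M_R around 10^14 GeV and you get m_ν ~ 0.1 eV, about the right size. Pushing M_R up pushes m_ν down, hence the name.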
Nagashima explains that, due to relativity, if neutrinos travel slightly slower than c, then they must have mass. If so, right-handed ones must exist: an observer could in principle overtake a massive neutrino, and in that frame its spin would point along its direction of motion, making it right-handed. To have mass you also need a finite Higgs coupling strength. The discovery of neutrino oscillation proves that they have mass.
Nagashima says the Earth refracts neutrinos, changing the oscillation wavelength. This means that there should be a day-night difference in solar neutrinos. The whole theory seems a bit fragile and some observatories don't see a day-night effect.
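For reference, the two-flavor vacuum oscillation probability (my gloss, using the standard formula) is

$$ P(\nu_e \to \nu_\mu) = \sin^2 2\theta \,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) \approx \sin^2 2\theta \,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right), $$

where θ is the mixing angle, Δm² the mass-squared splitting, L the distance traveled, and E the neutrino energy. ‘Refraction’ means that the electrons in the Earth shift the effective θ and Δm² for neutrinos passing through it, so solar neutrinos detected at night, after traversing the planet, should look slightly different from those detected during the day.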
The upper limits on the masses of the three flavors of neutrino are:
Electron neutrino (νe) < 2 eV
Muon neutrino (νμ) < 190 keV
Tau neutrino (ντ) < 18.2 MeV
Cosmological arguments constrain the sum over all flavors to < 0.14 eV.
Nagashima then takes us through the dismal history of grand unified theories. There's SU(5), a beautiful theory with just one little flaw: it is wrong. It predicts proton decay at a rate that experiments have since ruled out. Likewise with SO(10) and E6: all great theories that didn't succeed.
That doesn't mean they're not important. Physicists need to know them, of course; but it also convinces us normal people that a more complicated theory like supersymmetric string theory really is the only way forward. And for that, there's only one textbook at the moment that covers the most recent stuff: Sergio Cecotti's action-packed 828-page Introduction to String Theory.
There are other reasons we need a better theory. The SM has 18 unconstrained parameters. Electroweak theory, which explains radioactive beta-decay, has something called the triangle anomaly, where certain triangle loops in the Feynman diagrams would screw up the solution so that current is no longer conserved. In the SM this disaster is avoided only because the quark and lepton charges conspire to make the anomalous contributions cancel. This is a bad thing to have, and it's one reason Weinberg in his QFT book totally gave up on Feynman diagrams, saying they don't work for problems that involve non-Abelian gauge theory or for anything complicated. So halfway through Volume 1 he switched over to path integrals, which overcome the problems at the cost of gigantic formulas.
Both the SM and the GUTs suffer from ‘hierarchy’ problems, which means they predict particles over such a huge range of masses that precise fine-tuning—to an accuracy of one part in 10^26—is necessary to get the formulas to match reality. There's also the “little fine-tuning problem” in the SM, which concerns the Higgs boson. The mass term of the Higgs isn't protected by gauge symmetry, so it picks up enormous radiative corrections. To get the theory to predict the correct mass, you have to subtract at least three very large numbers to end up with 125 GeV.
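To see where a number like one part in 10^26 comes from, here's the usual back-of-the-envelope argument (my sketch, not the book's): the biggest radiative correction to the Higgs mass-squared comes from the top-quark loop and grows with the square of the cutoff scale Λ where new physics enters,

$$ \delta m_H^2 \simeq -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2 . $$

With the top Yukawa coupling y_t ≈ 1 and Λ at the GUT scale of roughly 10^16 GeV, this is of order 10^30 GeV², while the measured value is (125 GeV)² ≈ 1.6 × 10^4 GeV². The bare mass term and the correction have to cancel to roughly one part in 10^26, which is where that number comes from.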
Here's my take on it. This sort of thing encourages people to think the universe has been “fine tuned” so we could exist. It's not so. Fine-tuning is not proof of a deity, but an artifact of an incomplete theory. A Supreme Being would never create a universe with such a big mathematical instability. For one thing, it would make the universe unstable, in which case he'd have to create it all over again.
Theorists have suggested ways around this, for instance by saying the Higgs is not really real (which the author suggests on page 17), or by adding more dimensions, or with SUSY (see below), which the author really likes because it also predicts the Weinberg angle and does away with proton decay . . . mostly. . . .
If you think the SM has too many parameters, wait until you get to supersymmetry, or SUSY. By now there are several SUSY theories: MSSM, which has 120 unconstrained parameters (plus the 18 in the SM); pMSSM (phenomenological MSSM), which has 19; and mSUGRA (supergravity, aka constrained MSSM, but with gravity), which has only 4. The ‘super’ in supersymmetry refers to the symmetry that pairs every known particle with a (presumably gigantic) superpartner.
Not every particle in SUSY is gigantic, though. Nagashima suggests that the LSP, or lightest superparticle, could be dark matter. This particle could be the goldstino, neutralino, gravitino, sneutrino, axion, stau, or even a Kaluza-Klein (KK) particle, depending on which theory you prefer. (The ‘lightest’ particle, in terms of mass, is very important because it can't decay into anything smaller, so it's stable.)
The goal of SUSY is to unify fermions and bosons. Its operator is a fermionic spinor Qαi, which interconverts them. You would have Qαi|B〉 → |F〉 and Q†αi |F〉 → |B〉, where F is a fermion and B is a boson.
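The defining relation (the standard SUSY algebra, not anything peculiar to this book) is that two supersymmetry transformations compose into a spacetime translation:

$$ \{ Q_\alpha, \bar{Q}_{\dot\beta} \} = 2\,\sigma^\mu_{\alpha\dot\beta}\, P_\mu , $$

where P_μ is the momentum operator. That's why SUSY counts as an extension of spacetime symmetry rather than just another internal symmetry. And since Q commutes with the Hamiltonian, unbroken SUSY would force every particle and its superpartner to have exactly the same mass; the absence of a selectron at 0.511 MeV is why the symmetry, if it exists, must be broken.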
In SUSY, particle masses aren't constant but vary depending on the energy, unlike in the SM, where the Yukawa coupling constants between particles and the Higgs determine the mass.
That means an enormous number of particles. Take the Higgs boson. You need two Higgs doublets to make the triangle anomaly vanish. Two Higgs doublets contain eight components. Three are absorbed to give mass to W± and Z. The other five are particles: h0, H0, A, and H± (note that ‘±’ counts as two particles). There are also five higgsinos (h̃0, H̃0, Ã, H̃±), plus neutralinos χ0i, photinos γ̃, and charginos; the zino Z̃ is a mixture of the bino B̃ and the neutral wino W̃0. That's just the tip of the iceberg. Don't even ask about gluinos and neutralinos, which are Majorana fermions. And we still don't get any right-handed neutrinos in the MSSM. If the goal was to simplify particle physics, SUSY failed.
Another problem is that despite extensive searches, no supersymmetric partner has ever been detected. The particles are presumed to be extremely massive, so they would have been abundant only shortly after the Big Bang. Physicists believe that in those days each particle had the same mass as its superpartner, until the universe cooled below a certain temperature and the symmetry broke.
There are at least three ways to get symmetry breaking. All of them make use of some “hidden sector,” which remains mostly unexplained, and a “visible sector,” which is us. The existence of a hidden sector is purely hypothetical and remains a weakness of the theories.
Minimal supergravity (mSUGRA), where symmetry breaking is transmitted to the MSSM (the visible sector) by gravity.
GMSB (gauge-mediated symmetry breaking), where the breaking originates in a hidden-sector field that doesn't interact with any known particles, only with a messenger particle M; the messenger then communicates the breaking to the visible sector through the ordinary gauge interactions.
AMSB (anomaly-mediated symmetry breaking), where some “quantum anomaly” gives mass to the gauginos. What is this anomaly? One common way to realize it involves an extra dimension. The simplest AMSB models predict tachyonic sleptons (particles with negative mass-squared), the same kind of pathology that plagued early versions of string theory; newer versions avoid them.
Nagashima seems to prefer the Extra Dimension theory, which he describes in great detail.
In the Extra Dimension theory, a fifth spatial dimension is compactified down to a microscopic circle. Or, in some variants, our world is a ‘brane’ (a higher-dimensional sheet), and there's a second brane a fixed distance away along the fifth dimension; the separation weakens gravity down to its observed strength. In some versions the gravitons are concentrated on one brane and we're on the other, as in the Randall-Sundrum (RS) model.
The RS model also posits a warped five-dimensional space called AdS5 (anti-de Sitter space, which also shows up in string theory). The warping is produced by a cosmological constant in the fifth dimension, and it exponentially suppresses the strength of gravity on our brane, which eliminates the hierarchy problem. And yes, it's not perfect: it's known that our four-dimensional world isn't anti-de Sitter.
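The whole construction hangs on one warp factor. In the standard RS1 form (quoted from general knowledge, not from Nagashima's text) the metric is

$$ ds^2 = e^{-2k|y|}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + dy^2 , $$

where y is the fifth coordinate and k the curvature scale set by the bulk cosmological constant. Mass scales on the brane at y = πr_c get multiplied by e^{-kπr_c}, so with kr_c of about 12 the Planck scale (~10^19 GeV) is redshifted down to roughly a TeV. That exponential is the entire trick for the hierarchy problem.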
Note that SUSY doesn't do away with the hierarchy problem altogether. Nagashima says symmetry breaking happens at two scales: the GUT scale (which is around 10^11 GeV) and the EW scale, which is around 250 GeV. To me it looks unsolvable: make the biggest particle big and you get a ‘fine-tuning’ problem. Make it small and you get proton decay.
Interspersed among the wreckage of these grandly awful unified theories, we get a good discussion of the Higgs mechanism, cosmology, dark matter, and dark energy. Some of these chapters are what the author calls ‘phenomenological’, which means there are fewer formulas and more actual observations. So, some chapters are easier to read than others. It's often necessary to re-read sections for them to make sense.
The writing is mostly understandable, but in my opinion the author tries to cover too many subjects. One weakness is that the graphs are rendered in many different shades of gray and their labels are placed in strange places, making some of them uninterpretable. In the back some of the figures are reproduced in color. The index is good, and there's a table of acronyms. If you read this one, some background in quantum field theory would help a lot. And writing down the names of all the variables as they appear would be essential.
feb 17, 2025. updated feb 18, 2025
Introduction to Topological Quantum Matter & Quantum Computation
by Tudor D. Stanescu
CRC Press, 2025, 432 pages
reviewed by T. Nelson
Where to start with this strange book? CRC Press lists 2025 as the publication date. But on page 228 the author says the second edition was written in 2016, so the manuscript evidently sat around on his desk for nine years before getting published. Well, if you're reading it for the background maybe it doesn't matter.
The narration follows the standard style, but takes it to an extreme: the author fills the first 231 pages with theoretical formulas without telling us how they relate to anything real. Then finally in Chapter 8, “Majorana Zero Modes in Solid-State Heterostructures” he tells us, and I was able to take off the masking tape that was holding my eyelids open and learn some interesting stuff.
And it really is an interesting topic, not least because Microsoft recently claimed to have invented a topological Majorana qubit. So far, they haven't managed to put Clippy on it, so there are still some doubts floating around about how powerful it may be.
The general idea is that a topological system overcomes quantum decoherence, which is a big problem with qubits. Normally you'd use many physical qubits per logical qubit, plus error correction, to overcome the problem. But in a topological computer, perturbing an anyon affects the particle's worldline but doesn't affect the topology of the braid. This means they're a lot more stable, or they would be if somebody could figure out how to make them.
‘Topological’ means procedures like braiding, which means starting with state |0〉 and exchanging four anyons pairwise in a certain order. This, the author says, is equivalent to a unitary operation changing |0〉 to |1〉. If you have n anyons, braiding them is equivalent to doing a 2^n operation. It's all very vague what any of that means experimentally and how you can tell what state your qubit is in, probably because it was all theoretical when the book was written.
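To make the braiding claim a little less abstract, here's a toy numerical check using the standard braid-matrix representation for four Ising anyons, which span a single qubit. This is my own sketch in Python; nothing like it appears in the book, and real devices are another matter entirely.

```python
# Braid-group representation for four Ising anyons acting on their
# two-dimensional fusion space (one topological qubit).
import numpy as np

phase = np.exp(-1j * np.pi / 8)
sigma1 = phase * np.diag([1, 1j])                    # exchange anyons 1 and 2
sigma2 = np.conj(phase) / np.sqrt(2) * np.array([[1, -1j],
                                                 [-1j, 1]])   # exchange anyons 2 and 3
sigma3 = sigma1                                      # exchange anyons 3 and 4

ket0 = np.array([1, 0], dtype=complex)               # start in |0>

# Exchanging anyons 2 and 3 twice acts as a Pauli-X gate up to a global
# phase: the braid itself is the logic operation.
ket_after = sigma2 @ sigma2 @ ket0
print(np.round(np.abs(ket_after), 6))                # -> [0. 1.], i.e. |1>

# The exchange matrices don't commute, which is what "non-Abelian" means:
# the result depends on the order in which the anyons are braided.
print(np.allclose(sigma1 @ sigma2, sigma2 @ sigma1))  # -> False
```

The catch is that braiding Ising anyons only ever generates Clifford gates, so even a working device of this kind wouldn't be a universal quantum computer by braiding alone.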
An anyon is a particle with properties between a fermion and a boson. It exists only in two-dimensional space. On page 30 the author says that the non-Abelian geometric phase is gauge-covariant, so it is not observable. But the eigenvalues of the matrix U, which describes the geometric phase, or its trace Tr[U], which is a Wilson loop, are gauge-invariant, so they are measurable. That might sound like gibberish, but it's physicist talk that means it's theoretically possible to measure something. Observability is an important concept in quantum mechanics.
That's very promising, but don't ask me what a Wilson loop is. He never explains it. Good sources for background are “Kitaev Quantum Spin Liquids” by Yuji Matsuda and “Fault-tolerant quantum computation by anyons” by Alexei Kitaev, a big shot in the field. You may also need to look up Ising model, which plays a big role in the theory, if you've never run across it.
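For what it's worth, the definition is short (this is textbook gauge theory, not something out of Stanescu): transport around a closed loop C, multiplying up the gauge field as you go,

$$ U(C) = \mathcal{P}\exp\!\Big( i \oint_C A_\mu \, dx^\mu \Big) . $$

A gauge transformation only conjugates it, U(C) → g U(C) g⁻¹, so the trace (the Wilson loop) and the eigenvalues are unchanged, which is the sense in which they're measurable while the matrix itself is not.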
Two hundred pages later, the author finally gets around to telling us what an anyon is, what quasiparticles are, and what a Majorana fermion quasiparticle is. And we finally learn how all these things might be measured. The brief chapters on quantum computing are basic; they're based on Nielsen and Chuang's excellent 2000 book.
Another advantage of topological quasiparticles is that a pair of Majorana zero mode quasiparticles, or MZMs, can have an arbitrarily large separation, so the quantum information is stored in a non-localized manner. This means the qubit is protected from local perturbations. Braiding MZMs makes the system fault-tolerant. However, he writes, making them isn't so easy:
Realization, detection, and manipulation of MZMs in 'intrinsic' topological superconductors [is] a daunting experimental task. [p.233]
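The non-locality is easy to state (standard notation, my summary): the two Majorana operators, one sitting at each end of the wire, combine into a single ordinary fermion mode,

$$ c = \tfrac{1}{2}(\gamma_1 + i\gamma_2), \qquad \gamma_i = \gamma_i^\dagger, \quad \gamma_i^2 = 1, \qquad c^\dagger c = \tfrac{1}{2}\bigl(1 + i\gamma_1\gamma_2\bigr), $$

and the qubit is whether that mode is occupied or empty. Reading it (or scrambling it) requires the product γ₁γ₂, so no noise source acting at just one end of the wire can flip or dephase it.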
Most other authors say a quasiparticle consists of a combination of an electron and a hole. Since we're talking about anyons, which are strictly two-dimensional things, this tells us that these qubits are at an interface between some semiconductor material and something else. Cooper pairs, thought to be the basis for superconductivity, are also quasiparticles.
The early chapters mainly serve to indoctrinate you into the jargon, which is quite extensive. There's also a chapter on axions, which he says are low-energy quasiparticles in condensed matter topological quantum systems.
Axions are hypothetical particles that have been proposed to account for dark matter. They're totally different from anyons and there's little reason to think they're quasiparticles. It's possible the author got axions confused with anyons. In the same chapter he talks about “axion gravitoelectromagnetism in topological semiconductors.” ‘Gravitoelectromagnetism’ is just a name for the well-known formal analogy between the equations of linearized general relativity and Maxwell's equations. How he gets axions from that is a mystery.
According to the author, a typical Majorana device is a nanowire made of a semiconductor with strong spin-orbit coupling, like InAs or InSb, proximity-coupled to a NbTiN superconductor (SC). A magnetic field is then applied. When the Zeeman potential Γ created by the magnetic field is bigger than some critical value Γc, a pair of MZMs is formed, one at each end of the nanowire. An ordinary metal lead is attached to the nanowire to allow charge tunneling measurements. In some designs a laser is used instead of a magnetic field.
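In the standard nanowire model (the Lutchyn-Oreg setup, which is what this description sounds like) the critical value has a simple closed form:

$$ \Gamma_c = \sqrt{\Delta^2 + \mu^2}, $$

where Δ is the superconducting gap induced in the wire by the proximity effect and μ is the chemical potential. Below Γc the wire is an ordinary superconductor; above it, the wire is in the topological phase and the MZMs appear at its ends.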
It sounds like falling off a log, but there are many things that can go wrong. There are ways of detecting whether you have an MZM, such as Andreev reflection, where an electron hitting the end of the wire is reflected back as a hole, giving a characteristic zero-bias peak in the conductance. Or there's the fractional Josephson effect, where single electrons, rather than Cooper pairs, tunnel between two superconductors. Or you could inject an electron into one end of the nanowire, which would flip its fermion parity, and then measure it when it gets chucked out the other end. Or you could even use scanning tunneling spectroscopy. But, the author says, none of these methods is of any help in manipulating the braid (which is what you'd need to do to create a qubit). So that is where the field is stuck.
Well, maybe I'm biased. I hated crystallography class in school, and I had a professor for inorganic chemistry who had such a thick accent I couldn't understand a word he said. Unfortunately, there was also no textbook in that class. So what I learned is that solid-state physics was not for me.
mar 11, 2025. updated mar 12, 2025. minor edits mar 21, 2025
Update, Mar 13, 2025: The Register, a UK computer news website, has an article claiming that scientists are skeptical that Microsoft could have created a Majorana qubit.