book reviews
books on philosophy of physics

Background Independence in Classical and Quantum Gravity
by James Read
Oxford University Press, 2023, 150 pages
reviewed by T. Nelson
What is background independence, and why do we care? A theory is background-independent if it doesn't rely on something more fundamental, such as an absolute object, some field that permeates space, or space itself. A background-independent theory, in other words, is a play without a stage. We need one to know whether we're really showing that spacetime is ‘emergent’ or just imposing a structure on it.
For example, GR (general relativity) is thought to be background-independent because it says that the gravitational field is a deformation of spacetime itself, even though it says nothing about what spacetime really is.
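To make ‘fixed’ versus ‘dynamical’ concrete, here is a minimal sketch in my own notation, not Read's: in a special-relativistic field theory the Minkowski metric appears in the action but is stipulated in advance, while in GR the metric is a variable the equations solve for.

```latex
% A scalar field on a fixed background: the Minkowski metric
% \eta^{ab} is given in advance and never varied.
S_{\mathrm{fixed}}[\phi] = \tfrac{1}{2}\int d^4x\,\eta^{ab}\,\partial_a\phi\,\partial_b\phi

% General relativity: the metric g_{ab} is itself dynamical,
% determined by extremizing the action.
S_{\mathrm{GR}}[g] = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,R[g_{ab}]
```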
Does any theory that posits the existence of two or more objects, as many theories of quantum gravity do, implicitly assume a background spacetime on which they can exist? How can we go from a complicated math formula to something we can move around in? These are not really physics questions but philosophical ones.
In this book James Read, a philosopher of physics, says that our intuitive guesses aren't good enough, so he makes the discussion more rigorous by comparing formal definitions of background independence, such as:
No absolute objects (Anderson, Def. 5)
No formulation in terms of fixed fields (Pooley, Def. 6)
No fixed fields at all (Def. 9)
All its physical degrees of freedom correspond to geometric degrees of freedom (Belot, Def. 12)
Its solution space is determined by a generally covariant action and has non-trivial corner changes (Freidel and Teh, Def. 15)
No physical meaning attached to the location of a system (Gryb, Def. 17)
Has a covariant path integral (Def. 18).
The result is that the theories are background-independent by some definitions and not by others. For example, GR is background-independent according to Definitions 6, 9, 10, 12, and 15 but not Definition 5. So is string theory in the formulation whose target-space metric g_ab is dynamical (ST-TS-g_ab), while ST-TS-η_ab, the slightly different formulation built on the fixed Minkowski metric η_ab, is background-independent only by Definition 12. Newtonian Gravitational Theory is not background-independent by any of them. The reader needs some familiarity with the theories and their math to follow Read's reasoning fully.
In other words, the hope that making the definitions more rigorous would shed light on the question goes unfulfilled; the exercise demonstrates only that the answer depends on which definition we choose. In the conclusion, Read finally says his real goal was “to bring clarity and order to our assessments of background independence.”
In that, I guess, he partly succeeds. But maybe what he should have done was ask whether we have a solid grip on how to use mathematics, with its metric tensors and infinite matrices, to explain why space and time have the properties we all experience but cannot describe. The answer is, of course, no. But it is a conceptual problem, which is something philosophers can help us with, and maybe sometime they'll give it another try.
may 09, 2024
The World in the Wave Function: A Metaphysics for Quantum Physics
by Alyssa Ney
Oxford University Press, 2021, 269 pages
reviewed by T. Nelson
The so-called “measurement problem,” in which particles appear not to have a determinate location or state until they're measured, holds endless fascination for philosophers. Alyssa Ney's nicely balanced article on the idea that space is composed of quantum entanglement (“From Quantum Entanglement to Spatiotemporal Distance” in Philosophy Beyond Spacetime) prompted me to read this one on the measurement problem.
This book is written for the educated layman: like Peter J. Lewis's Quantum Ontology, to which Ney compares it, it requires no degree in hard science to understand. It discusses the idea that the wave function is too complicated to fit into normal 3D space, so it lives, literally, in a 3N-dimensional space, where N is the number of particles in the universe. Philosophers call this wave function realism (see Vera Matarese's critical discussion of this idea in Quantum Mechanics and Fundamentality, reviewed below). Ney writes:
If it is the wave function of the universe that is fundamental, then since the number of total particles in the universe is . . . 10⁸⁰, the space this universal wave function Ψ inhabits, the space the wave function realist will argue is the fundamental physical space of our universe, is 3×10⁸⁰-dimensional. [p.41]
3×10⁸⁰ is an awful lot of dimensions, and it's not clear, at least to me, what it adds or how it can be justified theoretically: if real space had this many dimensions, it would take literally forever to get directions on Google Maps. And 3N is totally arbitrary: why must each particle have three dimensions to itself? Why not 4N, or even infinitely many?
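For scale, here is the arithmetic as a throwaway Python sketch (the 10⁸⁰ figure is from Ney's quote above; the code is mine):

```python
# Dimension of the fundamental space posited by wave function realism:
# three coordinates per particle, N particles in the universe.
N = 10**80            # rough standard estimate of the particle count
dim = 3 * N           # the 3N-dimensional space from Ney's quote
print(f"{dim:.4e}")   # 3.0000e+80
```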
She compares this to David Lewis's modal realism, which says there is a continuous infinity of possible worlds. Both make Everettian many-worlds theory, which says every quantum event creates a new universe, seem puny by comparison.
I give it three stars because the author gives fair treatment to the objections to the idea. But those of us who dislike string theory for the ad hoc compactification of six or seven of its nine or ten spatial dimensions may find 3×10⁸⁰ a bit extravagant, even for a philosopher.
jul 25, 2024
Quantum Mechanics and Fundamentality
by Valia Allori, ed.
Springer, 2022, 415 pages
reviewed by T. Nelson
The Copenhagen school of quantum mechanics (QM) promoted the idea that the quantum world, and by extension our world, is indeterminate and unknowable. The question in this book is: is that idea really true? These contributors, mostly philosophers, have doubts.
In the first section (Realism) they discuss the so-called measurement problem. In QM, the bridge from a wavefunction to the particle-like event that occurs when something is measured was never completely specified. This has caused considerable speculation: maybe, for instance, wave functions don't change into particles until someone comes along and observes them. So maybe a deity went around and observed the universe into existence, and the Moon didn't really exist until someone happened to look at it. Out of this, we got Schrödinger's infamous cat box. And as for “quantum consciousness,” well, let's just agree we got drunk and it never happened, okay?
The standard answer now is that the wavefunction ‘decoheres’ when a measurement occurs. This means any contact with the outside world counts as a measurement, and that seems to be supported by the behavior of qubits in quantum computers. There are still several alternative theories out there, including the many-worlds interpretation, the de Broglie-Bohm (dBB) theory, the CSL (continuous spontaneous localization) model, the DP (Diósi-Penrose) model, which relates spontaneous collapse to gravity, and GRW (the Ghirardi-Rimini-Weber theory), which says a wavefunction collapse is amplified so that one collapsing wave collapses many other waves around it.
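For readers who want to see what decoherence amounts to operationally, here is a toy sketch of my own, not from the book: a qubit in a superposition has off-diagonal ‘coherence’ terms in its density matrix, and coupling to an environment damps them, leaving what behaves like a classical 50/50 mixture.

```python
import numpy as np

# Toy decoherence model (illustrative only; the exponential damping
# rate is made up). A qubit starts in the superposition |+>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(plus, plus)        # [[0.5, 0.5], [0.5, 0.5]]

def decohere(rho, t, tau=1.0):
    """Damp off-diagonal (coherence) terms by exp(-t/tau)."""
    out = rho.copy()
    out[0, 1] *= np.exp(-t / tau)
    out[1, 0] *= np.exp(-t / tau)
    return out

for t in [0.0, 1.0, 5.0]:
    print(t, np.round(decohere(rho0, t), 3))
# By t = 5 the state is effectively diag(0.5, 0.5): measurement-like
# statistics with no explicit collapse postulate.
```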
These theories all invoke decoherence in one way or another, but some authors, including Davide Romano, say that decoherence by itself is not an ontological explanation at all, and an ontological explanation is what we need.
As you might guess, there are many of those already. There's ‘thin’ ontology, which means fundamental objects have no properties other than the ones needed to identify their nature; and ‘thick’ ontology, which means that particles have properties like charge that account for their different behavior. The editor, Valia Allori, is the most radical ‘Thinner’ of the bunch and says that particles appear to belong to different families only because they're governed by different laws. Particles have almost no properties—no spin, no charge, not even mass. Thin ontology is like throwing out the baby with the bathwater, along with the bathtub, the faucet, and the plumbing system, having the water shut off, and blowing up your house.
Other authors say that, despite the belief of many physicists, there are in fact hidden variables. Most are scientific realists and are skeptical about how much indeterminacy there really is. One calls for a Platonic interpretation of QM. A few of the articles have an inadequate understanding of QM and make mistakes, but in general this book is an idea free-for-all.
The most interesting ideas concern Everettian quantum mechanics (EQM), also known as the many-worlds theory, which says each quantum event splits the world into two separate worlds. We're not talking a few hundred worlds here: if the double-slit experiment is a quantum phenomenon, then shooting 1000 electrons through the slits would produce 2¹⁰⁰⁰, or about 1.0715 × 10³⁰¹, different worlds, in each of which every electron went through a specific slit.
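The branch-counting arithmetic is easy to check (a throwaway Python verification of my own, not from the book):

```python
# Each of 1000 electrons doubles the number of Everett branches.
branches = 2**1000
print(f"{branches:.4e}")    # 1.0715e+301
print(len(str(branches)))   # 302 decimal digits
```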
Claudio Calosi and Jessica Wilson offer even more pointed objections to EQM, including the claim that EQM predicts you can never get an interference pattern. They quote Alastair Wilson, an EQM supporter who says its purpose is to replace indeterminacy with multiplicity, as conceding that “it's indeterminate exactly how much indeterminacy there is.” Wilson has the next chapter but apparently didn't get a chance to read the article about him, so instead of defending himself he describes his quantum modal realism, on which to be a metaphysically possible world is to be an Everett world.
Another problem with EQM, according to Jean Bricmont, is that it doesn't account for cases where the probabilities of the outcomes are not 50/50. No matter what, even if the probabilities were 95% and 5%, you'd still get just two worlds, one for each choice. That doesn't sound fair. And he's not convinced Schrödinger's cat is ever really dead.
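Bricmont's complaint can be made concrete with a toy simulation of my own, not his: naive branch counting gives each outcome equal standing, while the Born rule weights outcomes by squared amplitude.

```python
import random

# Toy contrast between Born-rule statistics and naive branch counting
# for a quantum event with outcome probabilities 95% and 5%.
p_up = 0.95                      # Born-rule probability, |amplitude|^2

trials = 100_000
freq = sum(random.random() < p_up for _ in range(trials)) / trials

print(f"Born-rule frequency of 'up': {freq:.3f}")   # ~0.950
print("Branch counting: one 'up' world, one 'down' world -> naive 50/50")
```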
Most of the 26 short articles are worth reading. There's nothing on wave-particle duality. Famous physicists Carlo Rovelli and Gerard 't Hooft each have an article, which adds a bit of much-needed scientific weight to the book. There is no index.
jul 11, 2024