The quandary at the heart of quantum mechanics
Just because a theory works, doesn’t mean we understand it, writes Dan Falk.
We’re five minutes into the “Foundations of Quantum Mechanics” conference at Perimeter Institute, and one of the organizers, Lucien Hardy, is talking about Isaac Newton. I consider this a stroke of luck, as I have some understanding of Newtonian physics and just might be able to follow along. Inevitably, the discussion among these 20 or so deep thinkers will turn to the intricacies of quantum mechanics, and when that happens I will be on shakier ground. But for now, Newton.
In 1687, Newton published his Principia, laying out his laws of motion and his theory of universal gravitation. Hardy, a physicist at Perimeter, notes that the landmark book was initially met with skepticism. The law of gravitation, in particular, puzzled readers. In Newton’s theory, two bodies attract each other even if they’re far apart. Whatever “gravity” is, it seemed to operate instantaneously over empty space. Scientists and philosophers – in Newton’s time, there was hardly any distinction between them – figured that for object A to have an effect on object B, they either had to be touching or there had to be some sort of physical stuff in between A and B to mediate the force.
And yet, Newton’s theory worked. In the decades following its publication, the ideas that he set down in the Principia were spectacularly confirmed, again and again. (A classic example involves the orbit of what we now call Halley’s Comet. Astronomer Edmond Halley used Newton’s laws to predict that a bright comet seen in 1682 would return in 1759. It did, though Halley didn’t live to see his prediction borne out.)
What do we do when a theory is successful and yet runs counter to our intuitions? In the case of Newton’s law of gravity, we had to wait a couple of hundred years. The tension was eventually resolved when Albert Einstein developed his general theory of relativity, in which gravity is seen to act not instantaneously but rather in a finite time, as distortions in the fabric of spacetime ripple across space. Newton wasn’t wrong – it’s just that Einstein’s theory is more complete, and presents a more satisfying picture of how things work. (Indeed, when gravitational fields are weak, Einstein’s equations become Newton’s equations.)
Hardy argues that the analogy could be useful as we try to make sense of quantum mechanics. Quantum mechanics is an extraordinarily successful theory; it’s given us everything from lasers to semiconductors, and some of its predictions have been confirmed to 11 decimal places (or perhaps 14, depending on how one counts them). We have managed to create astoundingly advanced technologies by following the very precise rules offered by quantum mechanics.
And yet, when followed through to its logical conclusions, quantum theory paints a profoundly weird picture of the universe: a universe where probability reigns, where we can barely say what things are actually made of. Some physicists feel we just need to get used to that weirdness; others hope that beneath the layers of weirdness is something more intuitively understandable, something closer to the sort of physics that ruled from Newton’s time to Einstein’s.
As I heard repeatedly during the week-long conference, it is far from clear what quantum mechanics is actually telling us about the world.
Hardy elaborated when I sat down with him between sessions. He drew a contrast between quantum mechanics and general relativity. The latter, he says, “has a clear ontology” – in other words, we know what its equations are describing, in physical terms. “Whereas in quantum theory, that isn’t the case,” he says. “Or at least, there is no agreement on what the ontology is.”
It’s perfectly clear what the mathematical framework – the “formalism” – of quantum mechanics is, says Jonathan Barrett, a physicist and information theorist at Oxford. “We don’t tend to disagree very much on the actual predictions and the actual experiments that we can do.” But what does the math refer to? Does quantum mechanics encompass the information that we have about something? Or is it about the “real stuff that’s actually there”? We just don’t know, Barrett says.
At the heart of quantum mechanics lies a peculiar entity known as the wave function. Physicists can use the wave function to work out, say, the probability that a certain particle will be found within a certain region of space. The equations of quantum mechanics dictate how the wave function evolves. So far, so good. But why does the theory deal only in probabilities? Here, opinions diverge.
Depending on which “interpretation” of the theory you subscribe to, you might conclude that quantum mechanics is a theory of knowledge rather than of things; a theory requiring multiple universes; or something else altogether. Along the way, we get the famous “paradoxes” of quantum mechanics, like quantum superposition (in which a system is in two states at once, like Schrödinger’s famous cat) and quantum entanglement (in which the properties of two particles can be correlated even though they’re far apart and no signal has passed between them – what Einstein described as “spooky action at a distance”).
Paradox may not be quite the right word for these phenomena; they may simply be features of the universe that we need to get used to. But they are certainly puzzling.
A good first step would be to understand what, exactly, the wave function is. “As a mathematical object, we’re clear what it is – it’s a vector in a mathematical Hilbert space,” says Barrett. In other words, it’s an abstraction, a mathematical entity that can be used to calculate probabilities. If you want to work out the chances of finding an electron between one point and another, the wave function is your friend. “But the question is, what does it refer to? What’s it describing? What does it correspond to?”
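Barrett’s description can be made concrete with a toy calculation. The sketch below (my own illustration, not from the conference) represents the wave function of a single spin as a two-component vector and applies the Born rule – the standard recipe that turns amplitudes into probabilities by squaring their magnitudes:

```python
import numpy as np

# Toy wave function of a spin-1/2 particle: a vector in a
# two-dimensional Hilbert space, with one amplitude per outcome.
# Here: an equal superposition of "spin up" and "spin down".
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # amplitudes for |up>, |down>

# The Born rule: probability of each outcome = |amplitude|^2.
probabilities = np.abs(psi) ** 2

print(probabilities)  # [0.5 0.5] -- a 50/50 chance of up or down
```

The mathematics is uncontroversial; the interpretive dispute is over what, if anything, the vector `psi` corresponds to in the world.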
Physicists have generally taken one of two approaches to handling that question, Barrett says. There are those “who take the wave function to be a part of the reality – some kind of real, physical wave that evolves – and those that take it to be a description, in some sense, of someone’s information.”
In the 1920s, the Danish physicist Niels Bohr, together with his young German colleague Werner Heisenberg, worked out what we now call the Copenhagen Interpretation of quantum mechanics: the idea, roughly, that the theory only describes what we are able to say about a physical system, not the system itself. A wave function describes a quantum system as existing in a superposition of states; when a measurement is made, the wave function “collapses” and a single value is observed. For example, when we measure an electron’s spin, we might find it to be spin up or spin down. Before the measurement is made, however, we cannot be definitive; we can only say that it’s in a superposition of the two states.
We can use the theory to work out the probability that this or that outcome will be measured, but not to predict outcomes of individual instances of experiments. Of course, probabilities aren’t unique to quantum theory. Doctors, for example, tell their patients the probability that they’ll have a heart attack or stroke within a certain time, but can’t make more precise predictions. In quantum mechanics, though, it seems to be probabilities all the way down; they seem to be inherent in the theory itself.
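What “probabilities all the way down” means operationally can be shown with a short simulation (again, my own hypothetical sketch): identically prepared systems, measured in exactly the same way, still yield different outcomes, and only the long-run frequencies – fixed by the wave function – are predictable:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# An unequal superposition: |psi> = sqrt(0.8)|up> + sqrt(0.2)|down>.
amplitudes = np.array([np.sqrt(0.8), np.sqrt(0.2)])
born_probs = np.abs(amplitudes) ** 2  # [0.8, 0.2]

# "Measure" 10,000 identically prepared systems. Each run yields
# one definite outcome; no run is individually predictable.
outcomes = rng.choice(["up", "down"], size=10_000, p=born_probs)

print((outcomes == "up").mean())  # typically close to 0.8
```

The frequencies converge on the Born-rule values, but the theory offers nothing that picks out the result of any single trial – which is exactly the feature the interpretations disagree about.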
For decades, Copenhagen was the most commonly accepted interpretation of the theory. But critics sometimes derisively call it the “shut up and calculate” interpretation (a phrase usually attributed to physicist David Mermin). At the Perimeter conference, I sensed little love for Copenhagen; over lunch, someone described it as “inconsistent and incoherent.” Steven Weinberg, the Nobel laureate physicist, once wrote that if one adopts Copenhagen, one “rejects quantum mechanics altogether as a description of reality.”
The interpretation also draws a distinction between the observer and the thing being observed, without ever saying where this boundary is or why it exists. This leads some to speculate that human beings, or perhaps conscious minds, are an essential part of the theory, since it is we (or our minds) that do the observing. This, Weinberg argues, represents a move away from the kind of science we’ve embraced since Darwin – a way of understanding humans as a part of nature, not as something separate and mysterious.
But if Copenhagen isn’t the way to go, what is? Quantum mechanics has, by now, a notoriously large number of competing interpretations. Of these, perhaps the most provocative is the Many Worlds Interpretation (MWI), first put forward by Hugh Everett back in the 1950s. According to MWI, when a quantum event happens, all of the possible outcomes happen, each in a separate universe. At the Perimeter conference, the strongest proponent of the Many Worlds view was Israeli physicist Lev Vaidman.
Vaidman believes that, with MWI, the paradoxes of quantum mechanics disappear. Consider the probabilistic nature of the theory: in MWI, this can be seen as an illusion of perspective, Vaidman says. To any individual observer, only one outcome of a quantum measurement is seen, and the equations of quantum mechanics can be used to work out the probability of seeing one particular result or another. But from the perspective of the multiverse as a whole, there are no probabilities; everything simply happens, somewhere.
But not everyone is ready to climb onto the MWI bandwagon. Barrett, for example, says that MWI doesn’t quite do away with probabilities as tidily as Vaidman believes. Another objection is more philosophical, involving the notion of the “self.” If MWI is to be taken seriously, we have to accept the idea that each of us exists many times over across this multitude of universes.
At one point during the conference banquet, Hardy stood up and said, “We’ve got to change the course of physics. Nothing less will do!” Quite a few people banged on their tables to indicate agreement. He went on, “The reason we have to come up with a theory of quantum gravity is to prove that Lev is wrong,” adding wryly, “and my biggest fear is that we’ll end up doing the opposite.” Vaidman smiled.
The conference came to an end, but the quest to make sense of quantum theory continues. Everyone, it seems, has a different intuition about how the journey should proceed and where it might end.
After discussing Newton, Hardy moved on to Johannes Kepler. The German mathematician came up with empirical laws that describe how planets move: the three laws of planetary motion. Kepler’s laws were accurate – you could use them to predict planetary movements with precision – but they seemed ad hoc. Where did they come from? This was only clarified when Newton came up with his laws of motion and gravity; then everything fell into place.
Perhaps we are in a similar situation, waiting for a Newton or an Einstein to come up with a more complete picture of the universe. Meanwhile, we reap enormous benefits from what quantum science has already given us. In that regard, Mermin’s suggestion that we get on with the calculations, and leave the philosophical questions for later, makes sense: it’s good to have computers today, even if knowing the ultimate nature of reality has to wait until tomorrow. Or decades from now.
But Hardy, for one, is confident. “Things are only difficult until you figure out how to do them,” he says. “Then they become easy.”
Dan Falk (@danfalk) is a science journalist based in Toronto. His books include The Science of Shakespeare and In Search of Time. He was a visiting writer at Perimeter in summer 2018.