Tag: quantum physics

  • What does it mean to interpret quantum physics?

    The United Nations has designated 2025 the International Year of Quantum Science and Technology. Many physics magazines and journals have taken the opportunity to publish more articles on quantum physics than they usually do, and that has meant quantum physics research has often been on my mind. Nirmalya Kajuri, an occasional collaborator, an assistant professor at IIT Mandi, and an excellent science communicator, recently asked other physics teachers on X.com how much time they spend teaching the interpretations of quantum physics. His question and the articles I’ve been reading inspired me to write the following post. I hope it’s useful in particular to people like me, who are interested in physics but didn’t formally train to study it.


    Quantum physics is often described as the most successful theory in science. It explains how atoms bond, how light interacts with matter, how semiconductors and lasers work, and even how the sun produces energy. With its equations, scientists can predict experimental results with astonishing precision — up to 10 decimal places in the case of the electron’s magnetic moment.

    In spite of this extraordinary success, quantum physics is unusual compared to other scientific theories because it doesn’t tell us a single, clear story about what reality is like. The mathematics yields predictions that have never been contradicted within their tested domain, yet it leaves open the question of what the world is actually doing behind those numbers. This is what physicists mean when they speak of the ‘interpretations’ of quantum mechanics.

    In classical physics, the situation is more straightforward. Newton’s laws describe how forces act on bodies, leading them to move along definite paths. Maxwell’s theory of electromagnetism describes electric and magnetic fields filling space and interacting with charges. Einstein’s relativity shows space and time are flexible and curve under the influence of matter and energy. These theories predict outcomes and provide a coherent picture of the world: objects have locations, fields have values, and spacetime has shape. In quantum mechanics, the mathematics works perfectly — but the corresponding picture of reality is still unclear.

    The central concept in quantum theory is the wavefunction. This is a mathematical object that contains all the information about a system, such as an electron moving through space. The wavefunction evolves smoothly in time according to the Schrödinger equation. If you know the wavefunction at one moment, you can calculate it at any later moment using the equation. But when a measurement is made, the rules of the theory change. Instead of continuing smoothly, the wavefunction is used to calculate probabilities for different possible outcomes, and then one of those outcomes occurs.

    For instance, if an electron has a 50% chance of being detected on the left and a 50% chance of being detected on the right, the experiment will yield either left or right, never both at once. The mathematics says that before the measurement, the electron exists in a superposition of left and right, but after the measurement only one is found. This peculiar structure, where the wavefunction evolves deterministically between measurements but then seems to collapse into a definite outcome when observed, has no counterpart in classical physics.
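
    To make the two rules concrete, here is a minimal numerical sketch (my own, not drawn from any of the articles above): the wavefunction is a vector of complex amplitudes, the Born rule turns those amplitudes into probabilities, and each simulated measurement returns exactly one outcome, never both.

    ```python
    # A minimal sketch (not from the post's sources): a two-outcome system in an
    # equal superposition of 'left' and 'right', measured via the Born rule.
    import numpy as np

    rng = np.random.default_rng(42)

    # The wavefunction as a vector of complex amplitudes
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

    # Born rule: probabilities are the squared magnitudes of the amplitudes
    probabilities = np.abs(psi) ** 2   # -> [0.5, 0.5]

    # Each simulated measurement yields exactly one outcome, never both at once
    print(rng.choice(['left', 'right'], size=10, p=probabilities))
    ```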

    The puzzles arise because it’s not clear what the wavefunction really represents. Is it a real physical wave that somehow ‘collapses’? Is it merely a tool for calculating probabilities, with no independent existence? Is it information in the mind of an observer rather than a feature of the external world? The mathematics doesn’t say.

    The measurement problem asks why the wavefunction collapses at all and what exactly counts as a measurement. Superposition raises the question of whether a system can truly be in several states at once or whether the mathematics is only a convenient shorthand. Entanglement, where two particles remain linked in ways that seem to defy distance, forces us to wonder whether reality itself is nonlocal in some deep sense. Each of these problems points to the fact that while the predictive rules of quantum theory are clear, their meaning is not.

    Over the past century, physicists and philosophers have proposed many interpretations of quantum mechanics. The most traditional is often called the Copenhagen interpretation, famously probed by the Schrödinger’s cat thought experiment (which Schrödinger in fact devised as a criticism of it). In many Copenhagen-style readings, the wavefunction is not real but a computational tool, a device for organising expectations, while measurement is taken as a primitive, irreducible step. The many-worlds interpretation offers a different view: it denies the wavefunction ever collapses. Instead, all possible outcomes occur, each in its own branch of reality. When you measure the electron, there is one version of you that sees it on the left and another version that sees it on the right.

    In Bohmian mechanics, particles always have definite positions guided by a pilot wave that’s represented by the wavefunction. In this view, the randomness of measurement outcomes arises because we can’t know the precise initial positions of the particles. There are also objective collapse theories that take the wavefunction as real but argue that it undergoes genuine, physical collapse triggered randomly or by specific conditions. Finally, an informational approach called QBism says the wavefunction isn’t about the world at all but about an observer’s expectations for experiences upon acting on the world.

    Most interpretations reproduce the same experimental predictions (objective-collapse models are the exception, predicting small, testable deviations) but tell different stories about what the world is really like.

    It’s natural to ask why interpretations are needed at all if they don’t change the predictions. Indeed, many physicists work happily without worrying about them. To build a transistor, calculate the energy of a molecule or design a quantum computer, the rules of standard quantum mechanics suffice. Yet interpretations matter for several reasons, especially because they shape our philosophical understanding of what kind of universe we live in.

    They also influence scientific creativity because some interpretations suggest directions for new experiments. For example, objective collapse theories predict small deviations from the usual quantum rules that can, at least in principle, be tested. Interpretations also matter in education. Students taught only the Copenhagen interpretation may come away thinking quantum physics is inherently mysterious and that reality only crystallises when it’s observed. Students introduced to many-worlds alone may instead think of the universe as an endlessly branching tree. The choice of interpretation moulds the intuition of future physicists. At the frontiers of physics, in efforts to unify quantum theory with gravity or to describe the universe as a whole, questions about what the wavefunction really is become unavoidable.

    In research fields that apply quantum mechanics to practical problems, many physicists don’t think about interpretation at all. A condensed-matter physicist studying superconductors uses the standard formalism without worrying about whether electrons are splitting into multiple worlds. But at the edges of theory, interpretation plays a major role. In quantum cosmology, where there are no external observers to perform measurements, one needs to decide what the wavefunction of the universe means. How we interpret entanglement, i.e. as a real physical relation versus as a representational device, colours how technologists imagine the future of quantum computing. In quantum gravity, the question of whether spacetime itself can exist in superposition renders interpretation crucial.

    Interpretations also matter in teaching. Instructors make choices, sometimes unconsciously, about how to present the theory. One professor may stick to the Copenhagen view and tell students that measurement collapses the wavefunction and that that’s the end of the story. Another may prefer many-worlds and suggest that collapse never occurs, only branching universes. A third may highlight information-based views, stressing that quantum mechanics is really about knowledge and prediction rather than about what exists independently. These different approaches shape the way students understand quantum mechanics, both as a tool and as a worldview. For some, quantum physics will always appear mysterious and paradoxical. For others, it will seem strange but logical once its hidden assumptions are made clear.

    Interpretations also play a role in experiment design. Objective collapse theories, for example, predict that superpositions of large objects should spontaneously collapse. Experimental physicists are now testing whether quantum superpositions survive for increasingly massive molecules or for diminutive mechanical devices, precisely to check whether collapse really happens. Interpretations have also motivated tests of Bell’s inequalities, which show that no local theory with ‘hidden variables’ can reproduce the correlations predicted by quantum mechanics. The scientists who conducted these experiments confirmed entanglement is a genuine feature of the world, not a residue of the mathematical tools we use to study it — and won the Nobel Prize in physics in 2022. Today, entanglement is exploited in technologies such as quantum cryptography. Without the interpretative debates that forced physicists to take these puzzles seriously, such developments might never have been pursued.

    The fact that some physicists care deeply about interpretation while others don’t reflects different goals. Those who work on applied problems or who need to build devices don’t have to care much. The maths provides the answers they need. Those who are concerned with the foundations of physics, with the philosophy of science or with the unification of physical theories care very much, because interpretation guides their thinking about what’s possible and what’s not. Many physicists switch back and forth, ignoring interpretation when calculating in the lab but discussing many-worlds or informational views over chai.

    Quantum mechanics is unique among physical theories in this way. Few chemists or engineers spend time worrying about the ‘interpretation’ of Newtonian mechanics or thermodynamics because these theories present straightforward pictures of the world. Quantum mechanics instead gives flawless predictions but an under-determined picture. The search for interpretation is the search for a coherent story that links the extraordinary success of the mathematics to a clear vision of what the world is like.

    To interpret quantum physics is therefore to move beyond the bare equations and ask what they mean. Unlike classical theories, quantum mechanics doesn’t supply a single picture of reality along with its predictions. It leaves us with probabilities, superpositions, and entanglement, and it remains ambiguous about what these things really are. Some physicists insist interpretation is unnecessary; to others it’s essential. Some interpretations depict reality as a branching multiverse, others as a set of hidden particles, yet others as information alone. None has won final acceptance, but all try to close the gap between predictive success and conceptual clarity.

    In daily practice, many physicists calculate without worrying, but in teaching, in probing the limits of the theory, and in searching for new physics, interpretations matter. They shape not only what we understand about the quantum world but also how we imagine the universe we live in.

  • Quantum clock breaks entropy barrier

    In physics, the second law of thermodynamics says that a closed system tends to become more disordered over time. This disorder is captured in an entity called entropy. Many devices, especially clocks, are affected by this law because they need to tick regularly to measure time. But every tick creates a bit of disorder, i.e. increases the entropy, and physicists have believed for a long time now that this places a fundamental limit on how precise a clock can be. The more precise you want your clock, the more entropy (and thus more energy) you’ll have to expend.

    A study published in Nature Physics on June 2 challenges this wisdom. In it, researchers from Austria, Malta, and Sweden asked if the second law of thermodynamics really sets a limit on a clock’s precision and came away, surprisingly, with the design of a new kind of quantum clock that’s more precise than scientists once believed possible for the amount of energy it spends to achieve that precision.

    The researchers designed this clock using a spin chain. Imagine a ring made of several quantum sites, like minuscule cups. Each cup can hold an excitation — say, a marble that can hop from cup to cup. This excitation moves around the ring and every time it completes a full circle, the clock ticks once. A spin chain is, broadly speaking, a series of connected quantum systems (the sites) arranged in a ring, and the excitation is a subatomic particle or packet of energy that moves from site to site.

    In most clocks, every tick is accompanied by the dissipation of some energy and a small increase in entropy. But in the model in the new study, only the last link in the circle, where the last quantum system was linked to the first one, dissipated energy. Everywhere else, the excitation moved without losing energy, like a wave gliding smoothly around the ring. The movement of the excitation in this lossless way through most of the ring is called coherent transport.

    The researchers used computer simulations to help them adjust the hopping rates — or how easily the excitation moved between sites — and thus to make the clock as precise as possible. They found that the best setup involved dividing the ring into three regions: (i) in the preparation ramp, the excitation was shaped into a wave packet; (ii) in the bulk propagation phase, the wave packet moved steadily through the ring; and (iii) in the boundary matching phase, the wave packet was reset for the next tick.
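
    For anyone who’d like to tinker, here is a toy sketch of coherent transport on such a ring. It is an illustration under assumptions of my own (a uniform hopping rate, no dissipative last link, none of the paper’s engineered three-region couplings), not the authors’ model:

    ```python
    # A toy sketch of coherent transport on a ring (illustrative only)
    import numpy as np
    from scipy.linalg import expm

    N = 12    # number of sites ('cups') on the ring
    J = 1.0   # nearest-neighbour hopping rate (an assumption)

    # Tight-binding Hamiltonian: the excitation can only hop to adjacent sites
    H = np.zeros((N, N))
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = -J

    # Start with the excitation on site 0 and evolve it coherently:
    # |psi(t)> = exp(-iHt)|psi(0)>, the Schrodinger equation's solution
    psi0 = np.zeros(N, dtype=complex)
    psi0[0] = 1.0
    psi_t = expm(-1j * H * 3.0) @ psi0

    # The excitation spreads like a wave over the sites; the probabilities
    # sum to 1 because nothing is dissipated anywhere on the ring
    print(np.round(np.abs(psi_t) ** 2, 3))
    ```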

    The team measured the clock’s precision as the number of ticks it completed before it was one tick ahead or behind a perfect clock. Likewise, team members defined the entropy per tick to be the amount of energy dissipated per tick. Finally, the team compared this quantum clock to classical clocks and other quantum models, which typically show a linear relationship between precision and entropy: e.g. if the precision doubled, the entropy doubled as well.

    The researchers, however, found that the precision of their quantum clock grew exponentially with entropy. In other words, if the amount of entropy per tick increased only slightly, the precision increased by a big leap. It was proof that, at least in principle, it’s possible to build a clock that’s arbitrarily precise while keeping the system’s entropy down, all without falling afoul of the second law.

    That is, contrary to what many physicists thought, the second law of thermodynamics doesn’t strictly limit a clock’s precision, at least not for quantum clocks like this one. The clock’s design allowed it to sidestep the usual trade-off between precision and entropy.

    During coherent transport, the process is governed only by the system’s Hamiltonian, i.e. the rules for how energy moves in a closed quantum system. In this regime, the excitation acts like a wave that spreads smoothly and reversibly, without losing any energy or creating any disorder. Imagine a ball rolling on a perfectly smooth, frictionless track: it keeps moving without slowing down or heating up the track. Such a thing is impossible in classical mechanics, as the ball example suggests, but it’s possible in quantum systems. The trade-off of course is that the latter are very small and very fragile, and thus harder to manipulate.

    In the present study, the researchers have proved that it’s possible to build a quantum clock that takes advantage of coherent transport to tick while dissipating very little energy. Their model, the spin chain, uses a Hamiltonian that only allows the excitation to coherently hop to its nearest neighbour. The researchers engineered the couplings between the sites in the preparation ramp part of the ring to shape the excitation into a travelling wave packet that moves predominantly in the forward direction.

    This tendency to move in only one direction is further bolstered at the last link, where the last site is coupled to the first. Here, the researchers installed a thermal gradient — a small temperature difference that encouraged the wave to restart its journey rather than be reflected and move backwards through the ring. When the excitation crossed this thermodynamic bias, the clock ticked once and also dissipated some energy.

    Three points here. First, remember that this is a quantum system. The researchers are dealing with energy (almost) at its barest, manipulating it directly without having to bother with an accoutrement of matter covering it. In the classical regime, such accoutrements are unavoidable. For example, if you have a series of cups and you want to make an excitation hop through them, you do so with a marble. But while the marble contains the (potential) energy that you want to move through the cups, it also has mass, and it dissipates energy whenever it hops into a cup: e.g. it might bounce when it lands and it will release sound when it strikes the cup’s material. So while the marble metaphor earlier might have helped you visualise the quantum clock, remember that the metaphor has limitations.

    Second, for the quantum clock to work as a clock, it needs to break time-reversal symmetry (a concept I recently discussed in the context of quasicrystals). Say you remove the thermodynamic bias at the last link of the ring and replace it with a regular link. In this case the excitation will move randomly — i.e. at each step it will randomly pick the cup to move to, forward or backward, and keep going. If you reversed time, the excitation’s path would still be random, just evolving in reverse.

    However, the final thermodynamically biased link causes the excitation to acquire a preference for moving in one direction. The system thus breaks time-reversal symmetry because even if you reverse the flow of time, the system will encourage the excitation to move in one direction and one direction only. This in turn is essential for the quantum system to function like a clock. That is, the excitation needs to traverse a fixed number of cups in the spin chain and then start from the first cup. Only between these two stages will the system count off a ‘tick’. Breaking time-reversal symmetry thus turns the device into a clock.

    Third, the thermodynamic bias ensures that the jump from the last site to the first is more likely than the reverse, and the entropy is the cost the system pays in order to ensure the jump. Equally, the greater the thermodynamic bias, the more likely the excitation is to move in one direction through the spin chain as well as to make the jump in the right direction at the final step. Thus, the greater the thermodynamic bias, the more precise the clock will be.
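
    A crude classical caricature of this logic (my construction, not the paper’s): a random walker on a ring hops symmetrically everywhere except across one link, where a forward bias applies. Counting completed forward laps shows why the bias is what makes the device tick:

    ```python
    # A caricature of the biased link: a random walker on a ring, with
    # symmetric hops everywhere except across one 'thermodynamically
    # biased' link from site N-1 back to site 0.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 10  # sites on the ring

    def count_ticks(steps, p_bias):
        """Net forward laps completed; p_bias is the forward probability
        across the biased link (0.5 means no bias at all)."""
        pos = 0  # unwrapped position; net laps = pos // N
        for _ in range(steps):
            p_fwd = p_bias if pos % N == N - 1 else 0.5
            pos += 1 if rng.random() < p_fwd else -1
        return pos // N

    # Without the bias the walker drifts nowhere on average; with it,
    # the device ticks steadily in one direction
    print(count_ticks(100_000, 0.5), count_ticks(100_000, 0.9))
    ```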

    The new study excelled by creating a sufficiently precise clock while minimising the entropy cost.

    According to the researchers, its design could help build better quantum clocks, which are important for quantum computers, quantum communication, and ultra-precise measurements of the kind atomic clocks are built for. The clock’s ticks could also be used to emit single photons at regular intervals — a technology increasingly in demand for its use in quantum networks of the sort China, the US, and India are trying to build.

    But more fundamentally, the clock’s design — which confines energy dissipation to a single link and uses coherent transport everywhere else — and its consequent ability to evade the precision-entropy trade-off challenge a longstanding belief that the second law of thermodynamics strictly limits precision.

    Featured image credit: Meier, F., Minoguchi, Y., Sundelin, S. et al. Nat. Phys. (2025).

  • Notes on covering QM

    1. I learnt last year that quantum systems are essentially linear because the mathematics physicists have found to describe quantum-mechanical phenomena contains only linear terms. Effects add to each other like 1 + 1 = 2; nothing gets out of control in exponential fashion, at least not usually. I learnt this by accident, from an article published in 1998, when I was trying to learn more about the connection between the Riemann zeta function and ‘quantum chaos’. This is to say that physicists take for granted several concepts – many of which might even be too ‘basic’ for them to have to clarify to a science reporter – that the reporter may only accidentally discover.
    2. “Classical systems are, roughly speaking, defined by well-bounded theories and equations, most of which were invented to describe them. But the description of quantum systems often invokes concepts and mathematical tools that can be found strewn around many other fields of physics.” This impression was unexpectedly disorienting when it first struck. After many years, I realised that the problem lies in my (our?) schooling: I learnt concepts in classical physics in a way that closely tied them to other things I was learning at the same time. Could that be why complicated forms of Euclidean geometry come up at the same time as optics, and vector algebra at the same time as calculus? But it also strikes me that quantum systems lend themselves more readily to being described by more than one theory because of the significant diversity of effects on offer.
    3. The edge of physics is a more wonderful place than the middle because there’s a lot of creativity at work at the edge. This statement is plainly true for classical physics but only vaguely so for quantum physics. One reason is the diversity of effects: a system that is intractable in statistical mechanics might suddenly offer glimpses of order and predictability when viewed through the lens of quantum field theory. More than a few problems require ‘goat solutions’ – a personal term for an assumption thrown in to make a problem amenable to solving, in such a way that the solution doesn’t retain any effects of the assumption (reason for the choice of words here). In some instances, physicists’ assumptions have brought the Iron Man films to mind: the assumptions are in the realm of the fantastic, but are still bound by a discipline that prevents runaway imagination.
    4. Researchers who use the tools of mathematical physics seem to take mathematical notation for granted. Statements of the following form may seem simple but actually pack a lot of information: “Consider a function f(x, y) of the form Σ x_i^p where p is equal to dy/dt in some domain…” (an obviously made-up example). I’m all the more spooked when I encounter symbols whose names themselves are beyond me, like ζ or Π, or when logarithms make an appearance. We need to acknowledge the importance of being habituated to these terms. To a physicist who has spent many years dealing with that operation, a summation might mean a straightforward accumulation of certain effects, but in my mind it always invokes a series of complex sums. I don’t know what else to visualise.
    5. Only a small minority of physicists in India can talk in interesting ways about their work. They use interesting turns of phrase, metaphors borrowed from a book or a play, and sometimes contemplate what their and/or others’ work is telling them about the universe and our place in it. I don’t know why this is rare.

  • Why scientists should read more

    The amount of communicative effort to describe the fact of a ball being thrown is vanishingly low. It’s as simple as saying, “X threw the ball.” It takes a bit more effort to describe how an internal combustion engine works – especially if you’re writing for readers who have no idea how thermodynamics works. However, if you spend enough time, you can still completely describe it without compromising on any details.

    Things start to get more difficult when you try to explain, for example, how webpages are loaded in your browser: because the technology is more complicated and you often need to talk about electric signals and logical computations – entities that you can’t directly see. You really start to max out when you try to describe everything that goes into launching a probe from Earth and landing it on a comet because, among other reasons, it brings together advanced ideas in a large number of fields.

    At this point, you feel ambitious and you turn your attention to quantum technologies – only to realise you’ve crossed a threshold into a completely different realm of communication, a realm in which you need to pick between telling the whole story and risk being (wildly) misunderstood OR swallowing some details and making sure you’re entirely understood.

    Last year, a friend and I spent dozens of hours writing a 1,800-word article explaining the Aharonov-Bohm quantum interference effect. We struggled so much because understanding this effect – in which electrons are affected by electromagnetic potentials even in regions where the fields vanish – required us to understand the wavefunction, a purely mathematical object that describes real-world phenomena, like the behaviour of some subatomic particles, and mathematical-physical processes like non-Abelian transformations. Thankfully my friend was a physicist, a string theorist for good measure; but while this meant that I could understand what was going on, we spent a considerable amount of time negotiating the right combination of metaphors to communicate what we wanted to communicate.

    However, I’m even more grateful in hindsight that my friend was a physicist who understood the need to not exhaustively include details. This need manifests in two important ways. The first is the simpler, grammatical way, in which we construct increasingly involved meanings using a combination of subjects, objects, referrers, referents, verbs, adverbs, prepositions, gerunds, etc. The second way is more specific to science communication: in which the communicator actively selects a level of preexisting knowledge on the reader’s part – say, high-school education at an English-medium institution – and simplifies the slightly more complicated stuff while using approximations, metaphors and allusions to reach for the mind-boggling.

    Think of it like building an F1 racecar. It’s kinda difficult if you already have the engine, some components to transfer kinetic energy through the car and a can of petrol. It’s just ridiculous if you need to start with mining iron ore, extracting oil and preparing a business case to conduct televisable racing sports. In the second case, you’re better off describing what you’re trying to do to the caveman next to you using science fiction, maybe poetry. The problem is that to really help an undergraduate student of mechanical engineering make sense of, say, the Casimir effect, I’d rather say:

    According to quantum mechanics, a vacuum isn’t completely empty; rather, it’s filled with quantum fluctuations. For example, if you take two uncharged plates and bring them together in a vacuum, only quantum fluctuations with wavelengths shorter than the distance between the plates can squeeze between them. Outside the plates, however, fluctuations of all wavelengths can fit. The energy outside will be greater than inside, resulting in a net force that pushes the plates together.

    ‘Quantum Atmospheres’ May Reveal Secrets of Matter, Quanta, September 2018

    I wouldn’t say the following even though it’s much less wrong:

    The Casimir effect can be understood by the idea that the presence of conducting metals and dielectrics alters the vacuum expectation value of the energy of the second-quantised electromagnetic field. Since the value of this energy depends on the shapes and positions of the conductors and dielectrics, the Casimir effect manifests itself as a force between such objects.

    Casimir effect, Wikipedia

    Put differently, the purpose of communication is to be understood – not learnt. And as I’m learning these days, while helping virologists compose articles on the novel coronavirus and convincing physicists that comparing the Higgs field to molasses isn’t wrong, this difference isn’t common knowledge at all. More importantly, I’m starting to think that my physicist-friend who really got this difference did so because he reads a lot. He’s a veritable devourer of texts. So he knows it’s okay – and crucially why it’s okay – to skip some details.

    I’m half-enraged when really smart scientists just don’t get this, and accuse editors (like me) of trying instead to misrepresent their work. (A group that’s slightly less frustrating consists of authors who list their arguments in one paragraph after another, without any thought for the article’s structure and – more broadly – recognising the importance of telling a story. Even if you’re reviewing a book or critiquing a play, it’s important to tell a story about the thing you’re writing about, and not simply enumerate your points.)

    To them – which is all of them because those who think they know the difference but really don’t aren’t going to acknowledge the need to bridge the difference, and those who really know the difference are going to continue reading anyway – I say: I acknowledge that imploring people to communicate science more without reading more is fallacious, so read more, especially novels and creative non-fiction, and stories that don’t just tell stories but show you how we make and remember meaning, how we memorialise human agency, how memory works (or doesn’t), and where knowledge ends and wisdom begins.

    There’s a similar problem I’ve faced when working with people for whom English isn’t the first language. Recently, a person used to reading and composing articles in the passive voice was livid after I’d changed numerous sentences in the article they’d submitted to the active voice. They really didn’t know why writing, and reading, in the active voice is better because they hadn’t ever had to use English for anything other than writing and reading scientific papers, where the passive voice is par for the course.

    I had a bigger falling out with another author because I hadn’t been able to perfectly understand the point they were trying to make, in sentences of broken English, and used what I could infer to patch them up – except I was told I’d got most of them wrong. And they couldn’t implement my suggestions either because they couldn’t understand my broken Hindi.

    These are people that I can’t ask to read more. The Wire and The Wire Science publish in English but, despite my (admittedly inflated) view of how good these publications are, I’ve no reason to expect anyone to learn a new language because they wish to communicate their ideas to a large audience. That’s a bigger beast of a problem, with tentacles snaking through colonialism, linguistic chauvinism, regional identities, even ideologies (like mine – to make no attempts to act on instructions, requests, etc. issued in Hindi even if I understand the statement). But at the same time there’s often too much lost in translation – so much so that (speaking from my experience in the last five years) 50% of all submissions written by authors for whom English isn’t the first language don’t go on to get published, even if it was possible for either party to glimpse during the editing process that they had a fascinating idea on their hands.

    And to me, this is quite disappointing because one of my goals is to publish a more diverse group of writers, especially from parts of the country underrepresented thus far in the national media landscape. Then again, I acknowledge that this status quo axiomatically charges us to ensure there are independent media outlets with science sections and publishing in as many languages as we need. A monumental task as things currently stand, yes, but nonetheless, we remain charged.

  • A universe out of sight

    Two things before we begin:

    1. The first subsection of this post assumes that humankind has colonised some distant extrasolar planet(s) within the observable universe, and that humanity won’t be wiped out in 5 billion years.
    2. Both subsections assume a pessimistic outlook, and neither of the projections they dwell on may ever come to be while humanity still exists. Nonetheless, it’s still fun to consider them and their science, and, most importantly, their potential to fuel fiction.

    Cosmology

    Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field. Caption and credit: hubble_esa/Flickr, CC BY 2.0

    Note: An edited version of this post has been published on The Wire.

    A new study whose results were reported this morning made for a disconcerting read: it seems the universe is expanding 5-9% faster than we figured it was.

    That the universe is expanding at all is disappointing, that it is growing in volume like a balloon and continuously birthing more emptiness within itself. Because of the suddenly larger distances between things, each passing day leaves us lonelier than we were yesterday. The universe’s expansion is accelerating, too, and that doesn’t simply mean objects getting farther away. It means some photons from those objects never reaching our telescopes despite travelling at lightspeed, doomed to yearn forever like Tantalus in Tartarus. At some point in the future, a part of the universe will become completely invisible to our telescopes, remaining that way no matter how hard we try.

    And the darkness will only grow, until a day out of an Asimov story confronts us: a powerful telescope bearing witness to the last light of a star before it is stolen from us for all time. Even if such a day is far, far into the future – the effect of the universe’s expansion is perceptible only on intergalactic scales, as the Hubble constant indicates, and simply negligible within the Solar System – the day exists.

    This is why we are uniquely positioned: to be able to see as much as we are able to see. At the same time, it is pointless to wonder how much more we are able to see than our successors because it calls into question what we have ever been able to see. Say the whole universe occupies a volume of X, that the part of it that remains accessible to us contains a volume Y, and what we are able to see today is Z. Then: Z < Y < X. We can dream of some future technological innovation that will engender a rapid expansion of what we are able to see, but with Y being what it is, we will likely forever play catch-up (unless we find tachyons, navigable wormholes, or the universe beginning to decelerate someday).

    Is the universe’s expansion speeding up or slowing down? There is a number that captures this, called the deceleration parameter:

    q = – (1 + Ḣ/H²),

    where H is the Hubble constant and Ḣ is its first derivative with respect to time. The Hubble constant is the speed at which an object one megaparsec from us is moving away. So, if q is positive, the universe’s expansion is slowing down. If q is zero, then 1/H is the time since the Big Bang. And if q is negative – as scientists have found to be the case – then the universe’s expansion is accelerating.

    The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterised by values of density parameters (Ω_M for matter and Ω_Λ for dark energy). Caption and credit: Wikimedia Commons

    We measure the expansion of the universe from our position: on its surface (because, no, we’re not inside the universe). We look at light coming from distant objects, like supernovae; we work out how much that light is ‘red-shifted’; and we compare that to previous measurements. Here’s a rough guide.

    What kind of objects do we use to measure these distances? Cosmologists prefer type Ia supernovae. In a type Ia supernova, a white dwarf (the core of a dead star, held up by the pressure of its electrons) is slowly sucking in matter from an object orbiting it until it becomes hot enough to trigger a fusion reaction. In the next few seconds, the reaction expels 10⁴⁴ joules of energy, visible as a bright fleck in the gaze of a suitable telescope. Such explosions have a unique attribute: the mass of the white dwarf that goes boom is uniform, which means type Ia supernovae across the universe are almost equally bright. This is why cosmologists refer to them as ‘standard candles’. Based on how faint these candles are, you can tell how far away they are burning.

    After a type Ia supernova occurs, photons set off from its surface toward a telescope on Earth. However, because the universe is continuously expanding, the distance between us and the supernova is continuously increasing. The effective interpretation is that the explosion appears to be moving away from us, becoming fainter. How much it has moved away is derived from the redshift. The wave nature of radiation allows us to think of light as having a frequency and a wavelength. When an object that is moving away from us emits light toward us, the waves of light appear to become stretched, i.e. the wavelength seems to become distended. If the light is in the visible part of the spectrum when starting out, then by the time it reaches Earth, the increase in its wavelength will make it seem redder. And so the name.

    The redshift, z – technically known as the cosmological redshift – can be calculated as:

    z = (λ_observed – λ_emitted)/λ_emitted

    In English: the redshift is the factor by which the observed wavelength is changed from the emitted wavelength. If z = 1, then the observed wavelength is twice as much as the emitted wavelength. If z = 5, then the observed wavelength is six times as much as the emitted wavelength. The farthest galaxy we know (MACS0647-JD) is estimated to be at a distance wherefrom z = 10.7 (corresponding to 13.3 billion lightyears).

    Anyway, z is used to calculate the cosmological scale-factor, a(t). This is the formula:

    a(t) = 1/(1 + z)

    a(t) is then used to calculate the distance between two objects:

    d(t) = a(t) d_0,

    where d(t) is the distance between the two objects at time t and d_0 is the distance between them at some reference time t_0. Since the scale factor would be constant throughout the universe, d(t) and d_0 can be stand-ins for the ‘size’ of the universe itself.

    So, let’s say a type Ia supernova lit up at a redshift of 0.6. This gives a(t) = 1/1.6 = 0.625 = 5/8. So: d(t) = 5/8 × d_0. In English, this means that the universe was 5/8th its current size when the supernova went off. Using z = 10.7, we infer that the universe was roughly one-twelfth its current size when light started its journey from MACS0647-JD to reach us.
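
    The arithmetic, for anyone who’d like to check it:

    ```python
    # The scale-factor arithmetic from the last two paragraphs
    def scale_factor(z):
        """a(t) = 1/(1 + z) at the time the light was emitted."""
        return 1 / (1 + z)

    print(scale_factor(0.6))   # 0.625, i.e. the universe was 5/8 its current size
    print(scale_factor(10.7))  # ~0.0855, roughly one-twelfth, as for MACS0647-JD
    ```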

    As it happens, residual radiation from the primordial universe is still around today – as the cosmic microwave background radiation. It originated 378,000 years after the Big Bang, following a period called the recombination epoch, 13.8 billion years ago. Its redshift is 1,089. Phew.

    The relation between redshift (z) and distance (in billions of light years). d_H is the comoving distance between you and the object you’re observing. Where it flattens out is the distance out to the edge of the observable universe. Credit: Redshiftimprove/Wikimedia Commons, CC BY-SA 3.0

    A curious redshift is z = 1.4, corresponding to a distance of about 4,200 megaparsec (~0.13 trillion trillion km). Objects that are already this far from us are moving away faster than the speed of light. However, this isn’t faster-than-light travel because it doesn’t involve travelling. It’s just a case of the distance between us and the object increasing at such a rate that, if that distance was once covered by light in time t_0, light will now need t > t_0 to cover it*. The corresponding a(t) = 0.42. I wonder at times if this is what Douglas Adams was referring to (… and at other times I don’t because the exact z at which this happens is 1.69, which means a(t) = 0.37. But it’s something to think about).

    Ultimately, we will never be able to detect any electromagnetic radiation from before the recombination epoch 13.8 billion years ago; then again, the universe has since expanded, leaving the supposed edge of the observable universe 46.5 billion lightyears away in any direction. In the same vein, we can imagine there will be a distance (closing in) at which objects are moving away from us so fast that the photons from their surface never reach us. These objects will define the outermost edges of the potentially observable universe, nature’s paltry alms to our insatiable hunger.

    Now, a gentle reminder that the universe is expanding a wee bit faster than we thought it was. This means that our theoretical predictions, founded on Einstein’s theories of relativity, have been wrong for some reason; perhaps we haven’t properly accounted for the effects of dark matter? This also means that, in an Asimovian tale, there could be a twist in the plot.

    *When making such a measurement, Earthlings assume that Earth as seen from the object is at rest and that it’s the object that is moving. In other words: we measure the relative velocity. A third observer will notice both Earth and the object to be moving away, and her measurement of the velocity between us will be different.


    Particle physics

    Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN
    Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN

    If the news that our universe is expanding 5-9% faster than we thought portends a stellar barrenness in the future, then another foretells a fecundity of opportunities: in the opening days of its 2016 run, the Large Hadron Collider produced more data in a single day than it did in the entirety of its first run (which led to the discovery of the Higgs boson).

    Now, so much about the cosmos was easy to visualise, abiding as it all did with Einstein’s conceptualisation of physics: as inherently classical, and never violating the principles of locality and causality. However, Einstein’s physics explains only one of the two infinities that modern physics has been able to comprehend – the other being the world of subatomic particles. And the kind of physics that reigns over the particles isn’t classical in any sense, and sometimes takes liberties with locality and causality as well. At the same time, it isn’t arbitrary either. How then do we reconcile these two sides of quantum physics?

    Through the rules of statistics. Take the example of the Higgs boson: it is not created every time two protons smash together, no matter how energetic the protons are. It is created at a fixed rate – once every ~X collisions. Even better: we say that whenever a Higgs boson forms, it decays to a group of specific particles one-Yth of the time. The value of Y is related to a number called the coupling constant. The lower Y is, the higher the coupling constant is, and more often will the Higgs boson decay into that group of particles. When estimating a coupling constant, theoretical physicists assess the various ways in which the decays can happen (e.g., Higgs boson → two photons).

    A similar interpretation is that the coupling constant determines how strongly a particle and a force acting on that particle will interact. Between the electron and the electromagnetic force is the fine-structure constant,

    α = e²/2ε₀hc;

    and between quarks and the strong nuclear force is the constant defining the strength of the asymptotic freedom:

    α_s(k²) = [β₀ ln(k²/Λ²)]⁻¹
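
    A quick numerical check of both constants (the α_s inputs, five quark flavours and a Λ of about 0.2 GeV, are assumptions I’ve picked for illustration):

    ```python
    # Sanity-checking the fine-structure constant from standard SI values,
    # then evaluating the one-loop strong coupling with assumed inputs
    import math

    e, eps0 = 1.602176634e-19, 8.8541878128e-12   # charge (C), permittivity (F/m)
    h, c = 6.62607015e-34, 2.99792458e8           # Planck (J s), lightspeed (m/s)

    alpha = e**2 / (2 * eps0 * h * c)
    print(alpha, 1 / alpha)                        # ~0.007297 and ~137.036

    beta0 = (33 - 2 * 5) / (12 * math.pi)          # one-loop beta_0 for n_f = 5
    alpha_s = 1 / (beta0 * math.log(91.2**2 / 0.2**2))
    print(alpha_s)                                 # ~0.13 at the Z mass: the right ballpark
    ```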

    So, if the LHC’s experiments require P (number of) Higgs bosons to make their measurements, and its detectors are tuned to detect that group of particles, then at least P × Y × X collisions (in the notation above) ought to have happened. The LHC might be a bad example because it’s a machine on the Energy Frontier: it is tasked with attaining higher and higher energies so that, at the moment the protons collide, heavier and much shorter-lived particles can show themselves. A better example would be a machine on the Intensity Frontier: its aim would be to produce orders of magnitude more collisions to spot extremely rare processes. Then again, it’s not as straightforward as just being prolific.

    It’s like rolling an unbiased die. The chance that you’ll roll a four is 1/6 (i.e. the coupling constant) – but it could happen that if you roll the die six times, you never get a four. The 1/6 is only a long-run frequency, not a guarantee: you could roll the die 60 times and still never get a four (though the odds of that happening are much lower). So you decide to take it to the next level: you build a die-rolling machine that rolls the die a thousand times. You would surely have gotten some fours – but say they didn’t turn up exactly one-sixth of the time. So you take it up a notch: you make the machine roll the die a million times. The frequency of fours should by now start converging toward 1/6. This is how a particle accelerator-collider aims to work, and succeeds.
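
    The die-rolling machine is easy to simulate, and the simulation shows the observed frequency of fours drifting toward 1/6 as the rolls pile up:

    ```python
    # The die-rolling machine, simulated: the observed frequency of fours
    # converges on 1/6 as the number of rolls grows
    import numpy as np

    rng = np.random.default_rng(7)
    for n_rolls in (6, 60, 1_000, 1_000_000):
        rolls = rng.integers(1, 7, size=n_rolls)
        print(n_rolls, (rolls == 4).mean())   # drifts toward 1/6 ~ 0.1667
    ```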

    And this is why the LHC producing as much data as it already has this year is exciting news. That much data means a lot more opportunities for ‘new physics’ – phenomena beyond what our theories can currently explain – to manifest itself. Analysing all this data completely will take many years (physicists continue to publish papers based on results gleaned from data generated in the first run), and all of it will be useful in some way even if very little of it ends up contributing to new ideas.

    The steady (logarithmic) rise in luminosity – the number of collision events detected – at the CMS detector on the LHC. Credit: CMS/CERN

    Occasionally, an oddball will show up – like a pentaquark, a state of five quarks bound together. As particles in their own right, they might not be as exciting as the Higgs boson, but in the larger schemes of things, they have a role to call their own. For example, the existence of a pentaquark teaches physicists about what sorts of configurations of the strong nuclear force, which holds the quarks together, are really possible, and what sorts are not. However, let’s say the LHC data throws up nothing. What then?

    Tumult is what. In the first run, the LHC used to smash two beams of billions of protons, each beam accelerated to 4 TeV and separated into 2,000+ bunches, head on at the rate of two opposing bunches every 50 nanoseconds. In the second run, after upgrades through early 2015, the LHC smashes bunches accelerated to 6.5 TeV once every 25 nanoseconds. In the process, the number of collisions per sq. cm per second increased tenfold, to 1 × 10³⁴. These heightened numbers mean new physics has fewer places to hide; we are on the verge of desperation to tease it out, to plumb the weakest coupling constants, because existing theories have not been able to answer all of our questions about fundamental physics (why things are the way they are, etc.). And even the barest hint of something new, something we haven’t seen before, will:

    • Tell us that we haven’t seen all that there is to see**, that there is yet more, and
    • Validate this or that speculative theory over a host of others, and point us down a new path to tread

    Axiomatically, these are the desiderata at stake should the LHC find nothing, all the more so now that it has yielded a massive dataset. Of course, not all will be lost: larger, more powerful, more innovative colliders will be built – even as a disappointment will linger. Let’s imagine for a moment that all of them continue to find nothing, and that persistent day comes to be when the cosmos falls out of our reach, too. Wouldn’t that be maddening?

    **I’m not sure of what an expanding universe’s effects on gravitational waves will be, but I presume it will be the same as its effect on electromagnetic radiation. Both are energy transmissions travelling on the universe’s surface at the speed of light, right? Do correct me if I’m wrong.

  • Thinking quantum

    In quantum physics, every quantity is conceived as a vector. But that’s where its relation with classical physics ends, which makes teaching a pain.
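
    One concrete way in, sketched under my own choices of state and observable:

    ```python
    # A sketch: a quantum state as a vector, an observable as a matrix, and
    # a prediction as an expectation value (hbar = 1)
    import numpy as np

    psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)  # a spin-1/2 state
    S_z = 0.5 * np.array([[1, 0], [0, -1]])              # spin along z

    print(np.vdot(psi, S_z @ psi).real)  # 0.0: equal odds of spin up and down
    ```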

    Teaching classical mechanics is easy because we engage with it every day in many ways. Enough successful visualization tools exist to do that.

    Just wondering why quantum mechanics has to be so hard. All I need is to find a smart way to make visualizing it easier.

    Analogizing quantum physics with classical physics creates more problems than it solves. More than anything, the practice creates a need to nip cognitive inconsistencies in the bud.

    If quantum mechanics is the way the world works at its most fundamental levels, why is it taught in continuation of classical physics?

    Is it or isn’t it easier to teach the mathematics and experiments relating to quantum mechanics first, and then present the classical scenario as an idealized, macroscopic state?

    After all, isn’t that the real physics of the times? We completely understand classical mechanics; we need more people who can “think quantum” today.