Category: Science

  • What does a quantum Bayes’s rule look like?

    Bayes’s rule is one of the most fundamental principles in probability and statistics. It allows us to update our beliefs in the face of new evidence. In its simplest form, the rule tells us how to revise the probability of a hypothesis once new data becomes available.

    A standard way to teach it involves drawing coloured balls from a pouch: you start with some expectation (e.g. “there’s a 20% chance I’ll draw a blue ball”), then you update your belief depending on what you observe (“I’ve drawn a red ball, so the updated chance of drawing a blue ball is now 10%”). While this example seems simple, the rule carries considerable weight: physicists and mathematicians have described it as the most consistent way to handle uncertainty in science, and it’s a central part of logic, decision theory, and indeed nearly every field of applied science.
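    For reference, here is the rule in symbols, with H the hypothesis and E the evidence (the worked numbers below are made up purely for arithmetic’s sake and aren’t from the ball example):

    $$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

    For instance, if a hypothesis starts with a prior probability of 0.2 and the observed evidence is twice as likely when the hypothesis is true as it is overall (so P(E|H)/P(E) = 2), the updated probability becomes 0.4.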

    There are two well-known ways of arriving at Bayes’s rule. One is the axiomatic route, which treats probability as a set of logical rules and shows that Bayesian updating is the only way to preserve consistency. The other is variational, which demands that updates should stay as close as possible to prior beliefs while remaining consistent with new data. This latter view is known as the principle of minimum change. It captures the intuition that learning should be conservative: we shouldn’t alter our beliefs more than is necessary. This principle explains why Bayesian methods have become so effective in practical statistical inference: because they balance a respect for new data with loyalty to old information.

    A natural question arises here: can Bayes’s rule be extended into the quantum world?

    Quantum theory can be thought of as a noncommutative extension of probability theory. While there are good reasons to expect there should be a quantum analogue of Bayes’s rule, the field has for a long time struggled to identify a unique and universally accepted version. Instead, there are several competing proposals. One of them stands out: the Petz transpose map. This is a mathematical transformation that appears in many areas of quantum information theory, particularly in quantum error correction and statistical sufficiency. Some scholars have even argued that it’s the “correct” quantum Bayes’s rule. Still, the situation remains unsettled.

    In probability, the joint distribution is like a big table that lists the chances of every possible pair of events happening together. If you roll a die and flip a coin, the joint distribution specifies the probability of getting “heads and a 3”, “tails and a 5”, and so on. In this big table, you can also zoom out and just look at one part. For example, if you only care about the die, you can add up over all coin results to get the probability of each die face. Or if you only care about the coin, you can add up over all die results to get the probability of heads or tails. These zoomed-out views are called marginals.
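    Here’s a minimal sketch of that zooming out, assuming a fair coin and a fair die purely for illustration:

    ```python
    import numpy as np

    # Joint distribution over (coin, die) outcomes for a fair coin and a fair die:
    # rows = coin results (heads, tails), columns = die faces (1..6).
    joint = np.full((2, 6), 1 / 12)  # every (coin, die) pair is equally likely

    # Marginal over the die: sum out the coin (add up the rows).
    p_die = joint.sum(axis=0)   # -> [1/6, 1/6, 1/6, 1/6, 1/6, 1/6]

    # Marginal over the coin: sum out the die (add up the columns).
    p_coin = joint.sum(axis=1)  # -> [1/2, 1/2]

    print(p_die, p_coin)
    ```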

    The classical Bayes’s rule doesn’t just update the zoomed-out views but the whole table — i.e. the entire joint distribution — so the connection between the two events also remains consistent with the new evidence.

    In the quantum version, the joint distribution isn’t a table of numbers but a mathematical object that records how the input and output of a quantum process are related. The point of the new study is that if you want a true quantum Bayes’s rule, you need to update that whole object, not just one part of it.

    A new study by Ge Bai, Francesco Buscemi, and Valerio Scarani in Physical Review Letters has taken just this step. In particular, they’ve presented a quantum version of the principle of minimum change by showing that when the measure of change is chosen to be quantum fidelity — a widely used measure of similarity between states — this optimisation leads to a unique solution. Equally remarkably, this solution coincides with the Petz transpose map in many important cases. As a result, the researchers have built a strong bridge between classical Bayesian updating, the minimum change principle, and a central tool of quantum information.

    The motivation for this new work isn’t only philosophical. If we’re to generalise Bayes’s rule to include quantum mechanics as well, we need to do so in a way that respects the structural constraints of quantum theory without breaking away from its classical roots.

    The researchers began by recalling how the minimum change principle works in classical probability. Instead of updating only a single marginal distribution, the principle works at the level of the joint input-output distribution. Updating then becomes an optimisation problem, i.e. finding the updated distribution that’s consistent with the new evidence but minimally different from the prior joint distribution.

    In ordinary probability, we talk about stochastic processes. These are rules that tell us how an input is turned into an output, with certain probabilities. For example, if you put a coin into a vending machine, there might be a 90% chance you get a chips packet and a 10% chance you get nothing. This rule describes a stochastic process. This process can also be described with a joint distribution.

    In quantum physics, however, it’s tricky. The inputs and outputs aren’t just numbers or events but quantum states, which are described by wavefunctions or density matrices. This makes the maths much more complex. The quantum counterparts of stochastic processes are transformations called completely positive trace-preserving (CPTP) maps.

    A CPTP map is the most general kind of physical evolution allowed: it takes a quantum state and transforms it into another quantum state. And in the course of doing so, it needs to follow two rules: it shouldn’t yield any negative probabilities and it should ensure the total probability adds up to 1. That is, your chance of getting a chips packet shouldn’t be –90% nor should it be 90% plus a 20% chance of getting nothing.
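    One standard way to write a CPTP map is as a set of so-called Kraus operators. As a hedged illustration of the two rules — using the textbook depolarising channel, a generic example and not a channel from the study — the operators must combine to the identity (trace preservation) and the map’s output must remain a valid quantum state (no negative probabilities):

    ```python
    import numpy as np

    # Pauli matrices
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    p = 0.3  # depolarising strength (illustrative value)
    kraus = [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

    # Rule 2: trace preservation, i.e. the Kraus operators satisfy sum_i K_i^dag K_i = identity
    assert np.allclose(sum(K.conj().T @ K for K in kraus), I)

    # Rule 1: applying the channel never produces negative probabilities
    rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # an example input state
    rho_out = sum(K @ rho @ K.conj().T for K in kraus)
    print(np.trace(rho_out).real)       # -> 1.0 (total probability preserved)
    print(np.linalg.eigvalsh(rho_out))  # -> all eigenvalues non-negative
    ```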

    These complications mean that, while the joint distribution in classical Bayesian updating is a simple table, the one in quantum theory is more sophisticated. It uses two mathematical tools in particular. One is purification, a way to embed a mixed quantum state into a larger ‘pure’ state so that mathematicians can keep track of correlations. The other is Choi operators, a standard way of representing a CPTP map as a big matrix that encodes all possible input-output behaviour at once.

    Together, these tools play the role of the joint distribution in the quantum setting: they record the whole picture of how inputs and outputs are related.

    Now, how do you compare two processes, i.e. the actual forward process (input → output) and the guessed reverse process (output → input)?

    In quantum mechanics, one of the best measures of similarity is fidelity. It’s a number between 0 and 1: 0 means two processes are completely different and 1 means they’re exactly the same.
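    For completeness, the standard (Uhlmann) fidelity between two quantum states ρ and σ is usually written as below; conventions differ on whether the square is taken, and the paper’s exact figure of merit for processes isn’t reproduced here:

    $$F(\rho, \sigma) = \left(\mathrm{Tr}\,\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\right)^{2}$$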

    In this context, the researchers’ problem statement was this: given a forward process, what reverse process is closest to it?

    To solve this, they searched over all possible reverse processes that obeyed the two rules, then picked the one that maximised the fidelity, i.e. the CPTP map most similar to the forward process. This is the quantum version of applying the principle of minimum change.

    In the course of this process, the researchers found that in natural conditions, the Petz transpose map emerges as the quantum Bayes’s rule.

    In quantum mechanics, two objects (like matrices) commute if the order in which you apply them doesn’t matter. That is, A then B produces the same outcome as B then A. In physical terms, if two quantum states commute, they behave more like classical probabilities.
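    A quick check with the Pauli matrices — a generic example, not one drawn from the paper — shows what commuting and non-commuting look like in code:

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    print(np.array_equal(X @ Z, Z @ X))  # False: X and Z don't commute
    print(np.array_equal(X @ X, X @ X))  # True: any matrix commutes with itself
    ```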

    The researchers found that when the CPTP map that takes an input and produces an output, called the forward channel, commutes with the new state, the updating process is nothing but the Petz transpose map.

    This is an important result for many reasons. Perhaps foremost is that it explains why the Petz map has shown up consistently across different parts of quantum information theory. It appears it isn’t just a useful tool but the natural consequence of the principle of minimum change applied in the quantum setting.

    The study also highlighted instances where the Petz transpose map isn’t optimal, specifically when the commutativity condition fails. In these situations, the optimal updating process depends more intricately on the new evidence. This subtlety departs clearly from classical Bayesian logic because in the quantum case, the structure of non-commutativity forces updates to depend non-linearly on the evidence (i.e. the scope of updating can be disproportionate to changes in evidence).

    Finally, the researchers have shown how their framework can recover special cases of practical importance. If some new evidence perfectly agrees with prior expectations, the forward and reverse processes become identical, mirroring the classical situation where Bayes’s rule simply reaffirms existing beliefs. Similarly, in contexts like quantum error correction, the Petz transpose map’s appearance is explained by its status as the optimal minimal-change reverse process.

    But the broader significance of this work lies in the way it unifies different strands of quantum information theory under a single conceptual roof. By proving that the Petz transpose map can be derived from the principle of minimum change, the study has provided a principled justification for its widespread use, rather than one restricted to particular contexts. This fact has immediate consequences for quantum computing, where physicists are looking for ways to reverse the effects of noise on fragile quantum states. The Petz transpose map has long been known to do a good job of recovering information from these states after they’ve been affected by noise. Now that physicists know the map embodies the smallest update required to stay consistent with the observed outcomes, they may be able to design new recovery schemes that exploit the structure of minimal change more directly.

    The study may also open doors to extending Bayesian networks into the quantum regime. In classical probability, a Bayesian network provides a structured way to represent cause-effect relationships. By adapting the minimum change framework, scientists may be able to develop ‘quantum Bayesian networks’ where updates to one’s expectations of a particular outcome respect the peculiar constraints of CPTP maps. This could have applications in quantum machine learning and in the study of quantum causal models.

    There are also some open questions. For instance, the researchers have noted that if measures of divergence other than fidelity are used, e.g. the Hilbert-Schmidt distance or quantum relative entropy, the resulting quantum Bayes’s rules may be different. This in turn indicates that there could be multiple valid updating rules, each suited to different contexts. Future research will need to map out these possibilities and determine which ones are most useful for particular applications.

    In all, the study provides both a conceptual advance and a technical tool. Conceptually, it shows how the spirit of Bayesian updating can carry over into the quantum world; technically, it provides a rigorous derivation of when and why the Petz transpose map is the optimal quantum Bayes’s rule. Taken together, the study’s findings strengthen the bridge between classical and quantum reasoning and offer a deeper understanding of how information is updated in a world where uncertainty is baked into reality rather than being due to an observer’s ignorance.

  • Using 10,000 atoms and 1 to probe the Bohr-Einstein debate

    The double-slit experiment has often been described as the most beautiful demonstration in physics. In one striking image, it shows the strange dual character of matter and light. When particles such as electrons or photons are sent through two narrow slits, the resulting pattern on a screen behind them is not the simple outline of the slits, but a series of alternating bright and dark bands. This pattern looks exactly like the ripples produced by waves on the surface of water when two stones are thrown in together. But when detectors are placed to see which slit each particle passes through, the pattern changes: the wave-like interference disappears and the particles line up as if they had travelled like microscopic bullets.

    This puzzling switch between wave and particle behaviour became the stage for one of the deepest disputes of the 20th century. The two central figures were Albert Einstein and Niels Bohr, each with a different vision of what the double-slit experiment really meant. Their disagreement was not about the results themselves but about how these results should be interpreted, and what they revealed about the nature of reality.

    Einstein believed strongly that the purpose of physics was to describe an external reality that exists independently of us. For him, the universe must have clear properties whether or not anyone is looking. In a double-slit experiment, this meant an electron or photon must in fact have taken a definite path, through one slit or the other, before striking the screen. The interference pattern might suggest some deeper process that we don’t yet understand but, to Einstein, it couldn’t mean that the particle lacked a path altogether.

    Based on this idea, Einstein argued that quantum mechanics (as formulated in the 1920s) couldn’t be the full story. The strange idea that a particle had no definite position until measured, or that its path depended on the presence of a detector, was unacceptable to him. He felt that there must be hidden details that explained the apparently random outcomes. These details would restore determinism and make physics once again a science that described what happens, not just what is observed.

    Bohr, however, argued that Einstein’s demand for definite paths misunderstood what quantum mechanics was telling us. Bohr’s central idea was called complementarity. According to this principle, particles like electrons or photons can show both wave-like and particle-like behaviour, but never both at the same time. Which behaviour appears depends entirely on how an experiment is arranged.

    In the double-slit experiment, if the apparatus is set up to measure which slit the particle passes through, the outcome will display particle-like behaviour and the interference pattern will vanish. If the apparatus is set up without path detectors, the outcome will display wave-like interference. For Bohr, the two descriptions are not contradictions but complementary views of the same reality, each valid only within its experimental context.

    Specifically, Bohr insisted that physics doesn’t reveal a world of objects with definite properties existing independently of measurement. Instead, physics provides a framework for predicting the outcomes of experiments. The act of measurement is inseparable from the phenomenon itself. Asking what “really happened” to the particle when no one was watching was, for Bohr, a meaningless question.

    Thus, while Einstein demanded hidden details to restore certainty, Bohr argued that uncertainty was built into nature itself. The double-slit experiment, for Bohr, showed that the universe at its smallest scales does not conform to classical ideas of definite paths and objective reality.

    The disagreement between Einstein and Bohr was not simply about technical details but a clash of philosophies. Einstein’s view was rooted in the classical tradition: the world exists in a definite state and science should describe that state. Quantum mechanics, he thought, was useful but incomplete, like a map missing a part of the territory.

    Bohr’s view was more radical. He believed that the limits revealed by the double-slit experiment were not shortcomings of the theory but truths about the universe. For him, the experiment demonstrated that the old categories of waves and particles, causes and paths, couldn’t be applied without qualification. Science had to adapt its concepts to match what experiments revealed, even if that meant abandoning the idea of an observer-independent reality.

    Though the two men never reached agreement, their debate has continued to inspire generations of physicists and philosophers. The double-slit experiment remains the clearest demonstration of the puzzle they argued over. Do particles truly have no definite properties until measured, as Bohr claimed? Or are we simply missing hidden elements that would complete the picture, as Einstein insisted?

    A new study in Physical Review Letters has taken the double-slit spirit into the realm of single atoms and scattered photons. And rather than ask whether an electron goes through one slit or another, it has asked whether scattered light carries “which-way” information about an atom. By focusing on the coherence or incoherence of scattered light, the researchers — from the Massachusetts Institute of Technology — have effectively reopened the old debate in a modern setting.

    The researchers held ultracold atoms in an optical lattice, a regular grid of light that traps atoms in well-defined positions, like pieces on a chessboard. The atoms were carefully prepared so that each lattice site contained exactly one atom in its lowest energy state. The lattice could then be suddenly switched off, letting the atoms expand as localised wavepackets (i.e. wave-like packets of energy). A short pulse of laser light was directed at these atoms. Photons from the pulse scattered off the atoms and were collected by a detector.

    By checking whether the scattered light was coherent (with a steady, predictable phase) or incoherent (with a random phase), the scientists could tell if the photons carried hints of the motion of the atom that scattered them.

    The main finding was that even a single atom scattered light that was only partly coherent. In other words, the scattered light wasn’t completely wave-like: one part of it showed a clear phase pattern, another part looked random. The randomness came from the fact that the scattering process linked, or entangled, the photon with the atom’s movement. This was because each time a photon scattered off an atom, the atom recoiled just a little, and that recoil left behind a faint clue about which atom had scattered the photon. This in turn meant that if the scientists looked closely enough, they could in principle work out where the photon came from.

    To study this effect, the team compared three cases. First, they observed atoms still held tightly in the optical lattice. In this case, scattering could create sidebands — frequency shifts in the scattered light — that reflected changes in the atom’s motion. These sidebands represented incoherent scattering. Second, they looked at atoms immediately after switching off the lattice, before the expanding wavepackets had spread out. Third, they examined atoms after a longer expansion in free space, when the wavepackets had grown even wider.

    In all three cases, the ratio of coherent to incoherent light could be described by a simple mathematical term called the Debye-Waller factor. This factor depends only on the spatial spread of the wavepacket. As the atoms expanded in space, the Debye-Waller factor decreased, meaning more and more of the scattered light became incoherent. Eventually, after long enough expansion, essentially all the scattered light was incoherent.
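    In its common textbook form — for a one-dimensional Gaussian wavepacket of spatial spread ⟨x²⟩ and a photon momentum transfer q, and up to conventions that may differ from the paper’s — the coherent fraction falls off as:

    $$f_{\mathrm{DW}} = e^{-q^{2}\langle x^{2}\rangle}$$

    A wider wavepacket thus means a smaller coherent fraction, which is the trend described above.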

    Experiments with two different atomic species supported this picture. With lithium-7 atoms, which are very light, the wavepackets expanded quickly, so the transition from partial coherence to full incoherence was rapid. With the much heavier dysprosium-162 atoms, the expansion was slower, allowing the researchers to track the change in more detail. In both cases, the results agreed with theoretical predictions.

    An especially striking observation was that the presence or absence of the trap made no difference to the basic coherence properties. The same mix of coherent and incoherent scattering appeared whether the atoms were confined in the lattice or expanding in free space. This showed that sidebands and trapping states were not the fundamental source of incoherence. Instead, what mattered was the partial entanglement between the light and the atoms.

    The team also compared long and short laser pulses. Long pulses could in principle resolve the sidebands while short pulses could not. Yet the fraction of coherent versus incoherent scattering was the same in both cases. This further reinforced the conclusion that coherence was lost not because of frequency shifts but because of entanglement itself.

    In 2024, another group in China also realised the recoiling-slit thought experiment in practice. Researchers from the University of Science and Technology of China trapped a single rubidium atom in an optical tweezer and cooled it to its quantum ground state, thus making the atom act like a movable slit whose recoil could be directly entangled with scattered photons.

    By tightening or loosening the trap, the scientists could pin the atom more firmly in place. When it was held tightly, the atom’s recoil left almost no mark on the photons, which went on to form a clear interference pattern (like the ripples in water). When the atom was loosely held, however, its recoil was easier to notice and the interference pattern faded. This gave the researchers a controllable way to show how a recoiling slit could erase the wave pattern — which is also the issue at the heart of the Bohr-Einstein debate.

    Importantly, the researchers also distinguished true quantum effects from classical noise, such as heating of the atom during repeated scattering. Their data showed that the fading of the interference pattern wasn’t an artifact of an imperfect apparatus but a direct result of the atom-photon entanglement itself. In this way, they were able to demonstrate the transition from quantum uncertainty to classical disturbance within a single, controllable system. And even at this scale, the Bohr-Einstein debate couldn’t be settled.

    The results pointed to a physical mechanism for how information becomes embedded in light scattered from atoms. In the conventional double-slit experiment, the question was whether a photon’s path could ever be known without destroying the interference pattern. In the new, modern version, the question was whether a scattered photon carried any ‘imprint’ of the atom’s motion. The MIT team’s measurements showed that it did.

    The Debye-Waller factor — the measure of how much of the scattered light is still coherent — played an important role in this analysis. When atoms are confined tightly in a lattice, their spatial spread is small and the factor is relatively large, meaning a smaller fraction of the light is incoherent and thus reveals which-way information. But as the atoms are released and their wavepackets spread, the factor drops and with it the coherent fraction of scattered light. Eventually, after free expansion for long enough, essentially all of the scattered light becomes incoherent.

    Further, while the lighter lithium atoms expanded so quickly that the coherence decayed almost at once, the heavier dysprosium atoms expanded more slowly, allowing the researchers to track them in detail. Yet both atomic species followed a common rule: the Debye-Waller factor depended solely on how much the atom became delocalised as a wave, and not on the technical details of the traps or the sidebands. The conclusion here was that the light lost its coherence because the atom’s recoil became entangled with the scattered photon.

    This finding adds substance to the Bohr-Einstein debate. In one sense, Einstein’s intuition has been vindicated: every scattering event leaves behind faint traces of which atom interacted with the light. This recoil information is physically real and, at least in principle, accessible. But Bohr’s point also emerges clearly: that no amount of experimental cleverness can undo the trade-off set by quantum mechanics. The ratio of coherent to incoherent light is dictated not by human knowledge or ignorance but by implicit uncertainties in the spread of the atomic wavepacket itself.

    Together with the MIT results, the second experiment showed that both Einstein’s and Bohr’s insights remain relevant: every scattering leaves behind a real, measurable recoil — yet the amount of interference lost is dictated by the unavoidable quantum uncertainties of the system. When a photon scatters off an atom, the atom must recoil a little bit to conserve momentum. That recoil in principle carries which-way information because it marks the atom as the source of the scattered photon. But whether that information is accessible depends on how sharply the atom’s momentum (and position) can be defined.

    According to the Heisenberg uncertainty principle, the atom can’t simultaneously have both a precisely known position and momentum. In these experiments, the key measure was how delocalised the atom’s wavepacket was in space. If the atom was tightly trapped, its position uncertainty would be small, so its momentum uncertainty would be large. The recoil from a photon is then ‘blurred’ by that momentum spread, meaning the photon doesn’t clearly encode which-way information. Ultimately, interference is preserved.
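    The trade-off here is the familiar uncertainty relation,

    $$\Delta x \, \Delta p \ \geq\ \frac{\hbar}{2}$$

    a small position spread Δx forces a large momentum spread Δp, inside which the single-photon recoil (of order ħk for light of wavenumber k) is lost.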

    By recasting the debate in the language of scattered photons and expanding wavepackets, the MIT experiment has thus moved the double-slit spirit into new terrain. It shows that quantum mechanics doesn’t simply suggest fuzziness in the abstract but enforces it in how matter and light are allowed to share information. The loss of coherence isn’t a flaw in the experimental technique or a sign of missing details, as Einstein might’ve claimed, but the very mechanism by which the microscopic world keeps both Einstein’s and Bohr’s insights in tension. The double-slit experiment, even in a highly sophisticated avatar, continues to reinforce the notion that the universe resists any single-sided description.

    (The researchers leading the two studies are Wolfgang Ketterle and Pan Jianwei, respectively a Nobel laureate and a rockstar in the field of quantum information likely to win a Nobel Prize soon.)

    Featured image created with ChatGPT.

  • Curiosity as a public good

    India has won 22 Ig Nobel prizes to date. These awards, given annually at Harvard University by the magazine Annals of Improbable Research, honour studies that “first make people laugh, and then make them think” — a description that can suggest the prizes are little more than jokes whereas the research they reward is genuine.

    Many of the Indian wins are in the sciences and they highlight an oft-unacknowledged truth: even if the country hasn’t produced a Nobel laureate in science since C.V. Raman in 1930, Indian labs continue to generate knowledge of consequence by pursuing questions that appear odd at first sight. In 2004, for example, IIT Kanpur researchers won an Ig Nobel prize for studying why people spill coffee when they walk. They analysed oscillations and resonance in liquid-filled containers, thus expanding the principles of fluid dynamics into daily life.

    Eleven years later, another team won a prize for measuring the friction coefficients of banana skins, showing why people who step on them are likely to fall. In 2019, doctors in Chennai were feted for documenting how cockroaches can survive inside human skulls, a subject of study drawn from real instances where medical workers had to respond to such challenges in emergency rooms. In 2022, biologists examined how scorpion stings are treated in rural India and compared traditional remedies against science-based pharmacology. More recently, researchers were honoured for describing the role of nasal hair in filtering air and pathogens.

    The wins thus demonstrate core scientific virtues as well as reflect the particular conditions in which research often happens in India. Most of the work also wasn’t supported by lavish grants nor was it published in élite journals with high citation counts. Instead, the work emerged from scientists choosing to follow curiosity rather than institutional incentives. In this sense, the Ig Nobel prizes are less a distraction and more an index of how ‘serious’ science might actually begin.

    Of course it’s also important to acknowledge that India’s research landscape is crowded with work of indifferent quality. A large share of papers are produced to satisfy promotion requirements, with little attention to design or originality, and many find their way into predatory journals where peer review is nonexistent or a joke. Such publications seldom advance knowledge, whether in curiosity-driven or application-oriented paradigms, and they dilute the credibility of the system as a whole.

    Then again, whimsy isn’t foreign to the Nobel Prizes themselves, which are generally quite sombre. For example, in 2016, the chemistry prize was awarded to researchers who designed molecular rotors and elevators constructed from just a handful of atoms. The achievement was profound but it also carried the air of play. The prize-giving committee compared the laureates’ work to the invention of the electric motor in the 1830s, noting that whether or not practical applications come later, the first step remains the act of imagining, not unlike a child’s. If the Nobel Committee can reward such imaginative departures, India’s Ig Nobel prize wins should be seen as more evidence that playful research is a legitimate part of the scientific enterprise.

    The larger question is whether curiosity-driven research has a place in national science policy. Some experts have argued that in a country like India, with pressing social and economic needs and allegedly insufficient funding to support research, scientists must focus on topics that’re immediately useful: better crops, cheaper drugs, new energy sources, etc. But this is too narrow a view. Science doesn’t have to be useful in the short term to be valuable. The history of discovery is filled with examples that seemed obscure at the time but later transformed technology and society, including X-rays, lasers, and the structure of DNA. Equally importantly, the finitude of resources to which science administrators and lawmakers have often appealed is likely a red herring set up to make excuses for diverting funds away from scientific research.

    Measuring why banana skins are slippery didn’t solve a crisis but it advanced scientists’ understanding of biomechanics. Analysing why coffee spills while walking generated models in fluid mechanics that researchers could apply to a range of fluid systems. Together with documenting cockroaches inside skulls and studying scorpion sting therapies, none of this research was wasteful or should be seen that way; more importantly, the freedom to pursue such questions is vital. If nothing else, winning a Nobel Prize can’t be engineered by restricting scientists to specific questions. The prizes often go to scientists who are well connected, work in well-funded laboratories, and who publish in highly visible journals — yet bias and visibility explain only part of the pattern. Doing good science depends on an openness to ideas that its exponents can’t be expected to plan in advance.

    This is the broader reason the Ig Nobel prizes matter: they’re reminders that curiosity remains alive among Indian scientists, even in a system that often discourages it. They also reveal what we stand to lose when research freedom is curtailed. The point isn’t that every odd question will lead to a breakthrough but that no one can predict in advance which questions will. We don’t know what we don’t know and the only way to find out is to explore.

    India’s 22 Ig Nobel wins in this sense are indicators of a culture of inquiry that deserves more institutional support. If the country wants to achieve scientific recognition of the highest order — the Indian government has in fact been aspiring to “science superpower” status — it must learn to value curiosity as a public good. What may appear whimsical today could prove indispensable tomorrow.

  • Dispelling Maxwell’s demon

    Maxwell’s demon is one of the most famous thought experiments in the history of physics, a puzzle first posed in the 1860s that continues to shape scientific debates to this day. I’ve struggled to make sense of it for years. Last week I had some time and decided to hunker down and figure it out, and I think I succeeded. The following post describes the fruits of my efforts.

    At first sight, the Maxwell’s demon paradox seems odd because it presents a supernatural creature tampering with molecules of gas. But if you pare down the imagery and focus on the technological backdrop of the time of James Clerk Maxwell, who proposed it, a profoundly insightful probe of the second law of thermodynamics comes into view.

    The thought experiment asks a simple question: if you had a way to measure and control molecules with perfect precision and at no cost, would you be able to make heat flow backwards, as if in an engine?

    Picture a box of air divided into two halves by a partition. In the partition is a very small trapdoor. It has a hinge so it can swing open and shut. Now imagine a microscopic valve operator that can detect the speed of each gas molecule as it approaches the trapdoor, decide whether to open or close the door, and actuate the door accordingly.

    The operator follows two simple rules: let fast molecules through from left to right and let slow molecules through from right to left. The temperature of a system is nothing but the average kinetic energy of its constituent particles. As the operator operates, over time the right side will heat up and the left side will cool down — thus producing a temperature gradient for free. Where there’s a temperature gradient, it’s possible to run a heat engine. (The internal combustion engine in fossil-fuel vehicles is a common example.)
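    A toy simulation — purely a sketch of the sorting rule, not a physical model of gas dynamics — makes the effect easy to see: after the demon sorts, the average kinetic energy on the right exceeds that on the left.

    ```python
    import random

    random.seed(1)

    # Each molecule: (side, speed). Both halves start statistically identical.
    molecules = [(random.choice("LR"), random.random()) for _ in range(10_000)]
    THRESHOLD = 0.5  # the demon's cut-off between "slow" and "fast"

    def demon(side, speed):
        """Let fast molecules through from left to right, slow ones from right to left."""
        if side == "L" and speed > THRESHOLD:
            return "R"
        if side == "R" and speed <= THRESHOLD:
            return "L"
        return side

    sorted_molecules = [(demon(s, v), v) for s, v in molecules]

    def mean_kinetic_energy(mols, side):
        speeds = [v for s, v in mols if s == side]
        return sum(v * v for v in speeds) / len(speeds)  # KE per molecule, in units where m/2 = 1

    for label, mols in [("before", molecules), ("after", sorted_molecules)]:
        print(label,
              "left:", round(mean_kinetic_energy(mols, "L"), 3),
              "right:", round(mean_kinetic_energy(mols, "R"), 3))
    ```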

    A schematic diagram of the Maxwell’s demon thought experiment. Htkym (CC BY-SA)

    But the possibility that this operator can detect and sort the molecules, thus creating the temperature gradient without consuming some energy of its own, seems to break the second law of thermodynamics. The second law states that the entropy of a closed system increases over time — whereas the operator ensures that the entropy will decrease, violating the law. This was the Maxwell’s demon thought experiment, with the demon as a whimsical stand-in for the operator.

    The paradox was made compelling by the silent assumption that the act of sorting the molecules could have no cost — i.e. that the imagined operator didn’t add energy to the system (the air in the box) but simply allowed molecules that are already in motion to pass one way and not the other. In this sense the operator acted like a valve or a one-way gate. Devices of this kind — including check valves, ratchets, and centrifugal governors — were already familiar in the 19th century. And scientists assumed that if they were scaled down to the molecular level, they’d be able to work without friction and thus separate hot and cold particles without drawing more energy to overcome that friction.

    This detail is in fact the fulcrum of the paradox, and the thing that’d kept me all these years from actually understanding what the issue was. Maxwell et al. assumed that it was possible that an entity like this gate could exist: one that, without spending energy to do work (and thus increase entropy), could passively, effortlessly sort the molecules. Overall, the paradox stated that if such a sorting exercise really had no cost, the second law of thermodynamics would be violated.

    The second law had been established only a few decades before Maxwell thought up this paradox. If entropy is taken to be a measure of disorder, the second law states that if a system is left to itself, heat will not spontaneously flow from cold to hot and whatever useful energy it holds will inevitably degrade into the random motion of its constituent particles. The second law is the reason why perpetual motion machines are impossible, why the engines in our cars and bikes can’t be 100% efficient, and why time flows in one specific direction (from past to future).

    Yet Maxwell’s imagined operator seemed to be able to make heat flow backwards, sifting molecules so that order increases spontaneously. For many decades, this possibility challenged what physicists thought they knew about physics. While some brushed it off as a curiosity, others contended that the demon itself must expend some energy to operate the door and that this expense would restore the balance. However, Maxwell had been careful when he conceived the thought experiment: he specified that the trapdoor was small and moved without friction, so operating it could in principle cost a negligible amount of energy. The real puzzle lay elsewhere.

    In 1929, the Hungarian physicist Leó Szilard sharpened the problem by boiling it down to a single-particle machine. This so-called Szilard engine imagined one gas molecule in a box with a partition that could be inserted or removed. By observing on which side the molecule lay and then allowing it to push a piston, the operator could apparently extract work from a single particle at uniform temperature. Szilard showed that the key step was not the movement of the piston but the acquisition of information: knowing where the particle was. That is, Szilard reframed the paradox to be not about the molecules being sorted but about an observer making a measurement.

    (Aside: Szilard was played by Máté Haumann in the 2023 film Oppenheimer.)

    A (low-res) visualisation of a Szilard engine. Its simplest form has only one atom (i.e. N = 1) pushing against a piston. Credit: P. Fraundorf (CC BY-SA)

    The next clue to cracking the puzzle came in the mid-20th century from the growing field of information theory. In 1961, the German-American physicist Rolf Landauer proposed a principle that connected information and entropy directly. Landauer’s principle states that while it’s possible in principle to acquire information in a reversible way — i.e. in a way that can be undone — erasing information from a device with memory has a non-zero thermodynamic cost that can’t be avoided. That is, the act of resetting a memory register of one bit to a standard state generates a small amount of entropy (proportional to Boltzmann’s constant multiplied by the logarithm of two).
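    In symbols, erasing one bit of memory generates entropy of at least k ln 2 and, at temperature T, dissipates heat of at least kT ln 2 — about 2.9 × 10⁻²¹ joules per bit at room temperature (300 K):

    $$\Delta S \geq k_B \ln 2, \qquad Q \geq k_B T \ln 2$$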

    The American information theorist Charles H. Bennett later built on Landauer’s principle and argued that Maxwell’s demon could gather information and act on it — but in order to continue indefinitely, it’d have to erase or overwrite its memory. And this act of resetting would generate exactly the entropy needed to compensate for the apparent decrease, ultimately preserving the second law of thermodynamics.

    Taken together, Maxwell’s demon was defeated not by the mechanics of the trapdoor but by the thermodynamic cost of processing information. Specifically, the decrease in entropy as a result of the molecules being sorted by their speed is compensated for by the increase in entropy due to the operator’s rewriting or erasure of information about the molecules’ speed. Thus a paradox that’d begun as a challenge to thermodynamics ended up enriching it — by showing information could be physical. It also revealed to scientists that entropy isn’t only a measure of disorder in matter and energy but is also linked to uncertainty and information.

    Over time, Maxwell’s demon also became a fount of insight across multiple branches of physics. In classical thermodynamics, for example, entropy came to represent a measure of the probabilities of the system existing in different combinations of microscopic states — that is, the likelihood that a given set of molecules could be arranged in one way instead of another. In statistical mechanics, Maxwell’s demon gave scientists a concrete way to think about fluctuations. In any small system, random fluctuations can briefly reduce entropy in a small region. While the demon seemed to exploit these fluctuations, the laws of probability were found to ensure that on average, entropy would increase. So the demon became a metaphor for how selection based on microscopic knowledge could alter outcomes but also why such selection can’t be performed without paying a cost.
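    The statistical reading of entropy here is usually summarised by Boltzmann’s formula, where W counts the number of microscopic arrangements compatible with a given macroscopic state:

    $$S = k_B \ln W$$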

    For information theorists and computer scientists, the demon was an early symbol of the deep ties between computation and thermodynamics. Landauer’s principle showed that erasing information imposes a minimum entropy cost — an insight that matters for how computer hardware should be designed. The principle also influenced debates about reversible computing, where the goal is to design logic gates that don’t ever erase information and thus approach zero energy dissipation. In other words, Maxwell’s demon foreshadowed modern questions about how energy-efficient computing could really be.

    Even beyond physics, the demon has seeped into philosophy, biology, and social thought as a symbol of control and knowledge. In biology, the resemblance between the demon and enzymes that sort molecules has inspired metaphors about how life maintains order. In economics and social theory, the demon has been used to discuss the limits of surveillance and control. The lesson has been the same in every instance: that information is never free and that the act of using it imposes inescapable energy costs.

    I’m particularly taken by the philosophy that animates the paradox. Maxwell’s demon was introduced as a way to dramatise the tension between the microscopic reversibility of physical laws and the macroscopic irreversibility encoded in the second law of thermodynamics. I found that a few questions in particular — whether the entropy increase due to the use of information is a matter of an observer’s ignorance (i.e. because the observer doesn’t know which particular microstate the system occupies at any given moment), whether information has physical significance, and whether the laws of nature really guarantee the irreversibility we observe — have become touchstones in the philosophy of physics.

    In the mid-20th century, the Szilard engine became the focus of these debates because it refocused the second law from molecular dynamics to the cost of acquiring information. Later figures such as the French physicist Léon Brillouin and the Hungarian-Canadian physicist Dennis Gabor claimed that it’s impossible to measure something without spending energy. Critics, however, countered that these arguments presumed specific measurement technologies, which would in turn smuggle in their own limitations — rather than establish a fundamental principle. That is to say, the debate among philosophers became whether Maxwell’s demon was prevented from breaking the second law by deep and hitherto hidden principles or by engineering challenges.

    This gridlock was broken when physicists observed that even a demon-free machine must leave some physical trace of its interactions with the molecule. That is, any device that sorts particles will end up in different physical states depending on the outcome, and to complete a thermodynamic cycle those states must be reset. Here, the entropy is not due to the informational content but due to the logical structure of memory. Landauer solidified this with his principle that logically irreversible operations such as erasure carry a minimum thermodynamic cost. Bennett extended this by showing that measurements can be made reversibly but erasure cannot. The philosophical meaning of both these arguments is that entropy increase isn’t just about ignorance but also about parts of information processing being irreversible.

    Credit: Cdd20

    In the quantum domain, the philosophical puzzles became more intense. When an object is measured in quantum mechanics, it isn’t just about an observer updating the information they have about the object — the act of measuring also seems to alter the object’s quantum states. For example, in the Schrödinger’s cat thought experiment, opening the box to check on the cat also forces it into one of two states: dead or alive. Quantum physicists have recreated Maxwell’s demon in new ways in order to check whether the second law continues to hold. And over the course of many experiments, they’ve concluded that indeed it does.

    The second law didn’t break even when Maxwell’s demon could exploit phenomena that aren’t available in the classical domain, including quantum entanglement, superposition, and tunnelling. This was because, among other reasons, quantum mechanics has some restrictive rules of its own. For one, some physicists have tried to design “quantum demons” that use quantum entanglement between particles to sort them without expending energy. But these experiments have found that as soon as the demon tries to reset its memory and start again, it must erase the record of what happened before. This step destroys the advantage and the entropy cost returns. The overall result is that even a “quantum demon” gains nothing in the long run.

    For another, the no-cloning theorem states that you can’t make a perfect copy of an unknown quantum state. If the demon could freely copy every quantum particle it measured, it could retain flawless records while still resetting its memory, thus avoiding the usual entropy cost. The theorem blocks this strategy by forbidding perfect duplication, ensuring that information can’t be ‘multiplied’ without limit. Similarly, the principle of unitarity implies that a system will always evolve in a way that preserves overall probabilities. As a result, quantum phenomena can’t selectively amplify certain outcomes while discarding others. For the demon, this means it can’t secretly shrink the range of possible states the system can occupy into a smaller, lower-entropy set, because unitarity guarantees that the full spread of possibilities is preserved across time.

    All these rules together prevent the demon from multiplying or rearranging quantum states in a way that would allow it to beat the second law.

    Then again, these ‘blocks’ that prevent Maxwell’s demon from breaking the second law of thermodynamics in the quantum realm raise a puzzle of their own: is the second law of thermodynamics guaranteed no matter how we interpret quantum mechanics? ‘Interpreting quantum mechanics’ means to interpret what the rules of quantum mechanics say about reality, a topic I covered at length in a recent post. Some interpretations say that when we measure a quantum system, its wavefunction “collapses” to a definite outcome. Others say collapse never happens and that measurement is just entangled with the environment, a process called decoherence. The Maxwell’s demon thought experiment thus forces the question: is the second law of thermodynamics safe in a particular interpretation of quantum mechanics or in all interpretations?

    Credit: Amy Young/Unsplash

    Landauer’s idea, that erasing information always carries a cost, also applies to quantum information. Even if Maxwell’s demon used qubits instead of bits, it wouldn’t be able to escape the fact that to reuse its memory, it must erase the record, which will generate heat. But then the question becomes more subtle in quantum systems because qubits can be entangled with each other, and their delicate coherence — the special quantum link between quantum states — can be lost when information is processed. This means scientists need to carefully separate two different ideas of entropy: one based on what we as observers don’t know (our ignorance) and another based on what the quantum system itself has physically lost (by losing coherence).

    The lesson is that the second law of thermodynamics doesn’t just guard the flow of energy. In the quantum realm it also governs the flow of information. Entropy increases not only because we lose track of details but also because the very act of erasing and resetting information, whether classical or quantum, forces a cost that no demon can avoid.

    Then again, some philosophers and physicists have resisted the move to information altogether, arguing that ordinary statistical mechanics suffices to resolve the paradox. They’ve argued that any device designed to exploit fluctuations will be subject to its own fluctuations, and thus in aggregate no violation will have occurred. In this view, the second law is self-sufficient and doesn’t need the language of information, memory or knowledge to justify itself. This line of thought is attractive to those wary of anthropomorphising physics even if it also risks trivialising the demon. After all, the demon was designed to expose the gap between microscopic reversibility and macroscopic irreversibility, and simply declaring that “the averages work out” seems to bypass the conceptual tension.

    Thus, the philosophical significance of Maxwell’s demon is that it forces us to clarify the nature of entropy and the second law. Is entropy tied to our knowledge/ignorance of microstates, or is it ontic, tied to the irreversibility of information processing and computation? If Landauer is right, handling information and conserving energy are ‘equally’ fundamental physical concepts. If the statistical purists are right, on the other hand, then information adds nothing to the physics and the demon was never a serious challenge. Quantum theory can further stir both pots by suggesting that entropy is closely linked to the act of measurement, to quantum entanglement, and to how quantum systems ‘collapse’ to classical ones through decoherence. The demon debate therefore tests whether information is a physically primitive entity or a knowledge-based tool. Either way, however, Maxwell’s demon endures as a parable.

    Ultimately, what makes Maxwell’s demon a gift that keeps giving is that it works on several levels. On the surface it’s a riddle about sorting molecules between two chambers. Dig a little deeper and it becomes a probe into the meaning of entropy. If you dig even further, it seems to be a bridge between matter and information. As the Schrödinger’s cat thought experiment dramatised the oddness of quantum superposition, Maxwell’s demon dramatised the subtleties of thermodynamics by invoking a fantastical entity. And while Schrödinger’s cat forces us to ask what it means for a macroscopic system to be in two states at once, Maxwell’s demon forces us to ask what it means to know something about a system and whether that knowledge can be used without consequence.

  • CSIR touts dubious ‘Ayurveda’ product for diabetes

    At 6 am on September 13, the CSIR handle on X.com published the following post about an “anti-diabetic medicine” called either “Daiba 250” or “Diabe 250”, developed at the CSIR-Indian Institute of Integrative Medicine (IIIM):

    Its “key features”, according to the CSIR, are that it created more than 250 jobs and that Prime Minister Narendra Modi “mentioned the startup” to which it has been licensed in his podcast ‘Mann ki Baat’. What of the clinical credentials of Diabe-250, however?

    Diabe-250 is being marketed on India-based online pharmacies like Tata 1mg as an “Ayurvedic” over-the-counter tablet “for diabetes support/healthy sugar levels”. The listing also claims Diabe-250 is backed by a US patent granted to an Innoveda Biological Solutions Pvt. Ltd. Contrary to the CSIR post calling Diabe-250 “medicine”, some listings also carry the disclaimer that it’s “a dietary nutritional supplement, not for medicinal use”.

    (“Ayurveda” is within double-quotes throughout this post because, like most similar products in the market that are also licensed by the Ministry of AYUSH, there’s no evidence that they’re actually Ayurvedic. They may be, they may not be — and until there’s credible proof, the Ayurvedic identity is just another claim.)

    Second, while e-commerce and brand pages use the spellings “Diabe 250” or “Diabe-250” (with or without the hyphen), the CSIR’s social media posts refer to it as “Daiba 250”. The latter also describe it as an anti-diabetic developed/produced with the CSIR-IIIM in the context of incubation and licensing. These communications don’t constitute clinical evidence but they might be the clearest public basis to link the “Daiba” or “Diabe” spellings with the CSIR.

    Multiple product pages also credit Innoveda Biological Solutions Pvt. Ltd. as a marketer and manufacturer. Corporate registry aggregators corroborate the firm’s existence (its CIN is U24239DL2008PTC178821). Similarly, the claim that Diabe-250 is backed by a US patent can be traced most directly to US8163312B2 for “Herbal formulation for prevention and treatment of diabetes and associated complications”. Its inventor is listed as a G. Geetha Krishnan and Innoveda Biological Solutions (P) Ltd. is listed as the current assignee.

    The patent text describes combinations of Indian herbs for diabetes and some complications. Of course no patent is proof of efficacy for any specific branded product or dose.

    The ingredients in Diabe-250 vary by retailer and there’s no consistent, quantitative per-tablet composition on public pages. This said, multiple listings name the following ingredients:

    • “Vidanga” (Embelia ribes)
    • “Gorakh buti” (Aerva lanata)
    • “Raj patha” (Cyclea peltata)
    • “Vairi” or “salacia” (often Salacia oblonga), and
    • “Lajalu” (Biophytum sensitivum)

    The brand page also asserts a “unique combination of 16 herbs” and describes additional “Ayurveda” staples such as berberine source, turmeric, and jamun. However, there doesn’t appear to be a full label image or a quantitative breakdown of the composition of Diabe-250.

    Retail and brand pages also claim Diabe-250 “helps maintain healthy sugar levels”, “improves lipid profile/reduces cholesterol”, and “reduces diabetic complications”, sometimes also including non-glycaemic effects such as “better sleep” and “regular bowel movement”. Several pages also include the caveat that it’s a “dietary nutritional supplement” and that it’s “not for medicinal use”. However, none of these sources cites a peer-reviewed clinical trial of Diabe-250 itself.

    In fact, there appear to be no peer-reviewed, product-specific clinical trials of Diabe-250 or Daiba-250 in humans; there are also no clinical trial registry records specific to this brand. If such a trial exists and its results were published in a peer-reviewed journal, it hasn’t been cited on the sellers’ or brand pages or in accessible databases.


    Some ingredient classes in Diabe-250 are interesting even if they don’t validate Diabe-250 as a finished product. For instance, Salacia spp. — especially S. reticulata, S. oblonga, and S. chinensis — are known α-glucosidase inhibitors. In vitro studies and chemistry reviews have also reported that Salacia spp. can be potent inhibitors of maltase, sucrase, and isomaltase.

    In one triple-blind, randomised crossover trial in 2023, biscuits fortified with S. reticulata extract reduced HbA1c levels by around 0.25% (2.7 mmol/mol) over three months versus the placebo, with an acceptable safety profile. In post-prandial studies involving healthy volunteers and people with type 2 diabetes, several randomised crossover designs reported lower post-meal glucose and insulin areas under the curve when Salacia extract was co-ingested along with carbohydrate.

    Similarly, berberine-based nutraceuticals (such as those including Berberis aristata) have shown glycaemic improvements in the clinical literature (at large, not specific to Diabe-250) in people with type 2 diabetes. However, these effects were often reported in combination with other compounds, and researchers indicated they depended strongly on formulation and dose.

    Finally, a 2022 systematic review of “Ayurvedic” medicines in people with type 2 diabetes reported heterogeneous evidence, including some promising signals, but also emphasised methodological limitations and the need for randomised controlled trials of higher quality.

    Right now, there’s no scientific proof in the public domain that Diabe-250 works as advertised, especially not in the form of product-specific clinical trials that define its composition, dosage, and endpoints.


    In India, Ayurvedic drugs come under the Drugs and Cosmetics Rules, 1945. Labelling provisions under Rule 161 require details such as the manufacturer’s address, the batch number, and the manufacturing and expiry dates, while practice guides also note that the product licence number should appear on the label of “Ayurvedic” drugs. However, several retail pages for Diabe-250 display it as a “dietary nutritional supplement” and add that it’s “not for medicinal use”, implying that it’s being marketed with supplement-style claims rather than as an Ayurvedic “medicine” in the narrow regulatory sense — which runs against the claim in the CSIR post on X.com. Public pages also don’t display an AYUSH licence number for Diabe-250. I haven’t checked a physical pack.

    A well-known study in JAMA in 2008, of “Ayurvedic” products purchased over the internet, found that around 20% of them contained lead, mercury or arsenic, and public-health advisories and case reports that have appeared since have echoed these concerns. This isn’t a claim about Diabe-250 specifically but a category-level risk for “Ayurvedic” products bought online, one compounded by the unclear composition of Diabe-250. The inconsistent naming also opens the door to counterfeit products, which are more likely to be contaminated.

    Materials published by the Indian and state governments, including the Ministry of AYUSH, have framed “Ayurveda” as complementary to allopathic medicine. In keeping with this framing, if a person with diabetes chooses to try “Ayurvedic” support, the standard advice is to not discontinue prescribed therapy and to monitor their glucose, especially if they’re using α-glucosidase-like agents that alter the post-prandial response.

    In sum, Diabe-250 is a multi-herb “Ayurvedic” tablet marketed by Innoveda for glycaemic support and often promoted with a related US patent owned by the company. However, patents are not clinical trials and patent offices don’t clinically evaluate the drugs described in patent applications. Evidence of efficacy can only come from clinical trials, especially when a drug is being touted as “science-led”, as the CSIR has done vis-à-vis Diabe-250. But there are no published clinical trials of the product. And while there’s some evidence that some of its constituents, particularly Salacia, can reduce post-prandial glucose and effect small changes in HbA1c levels over a few months, there’s no product-specific proof.

  • What does it mean to interpret quantum physics?

    The United Nations has designated 2025 the International Year of Quantum Science and Technology. Many physics magazines and journals have taken the opportunity to publish more articles on quantum physics than they usually do, and that has meant quantum physics research has often been on my mind. Nirmalya Kajuri, an occasional collaborator, an assistant professor at IIT Mandi, and an excellent science communicator, recently asked other physics teachers on X.com how much time they spend teaching the interpretations of quantum physics. His question and the articles I’ve been reading inspired me to write the following post. I hope it’s useful in particular to people like me, who are interested in physics but didn’t formally train to study it.


    Quantum physics is often described as the most successful theory in science. It explains how atoms bond, how light interacts with matter, how semiconductors and lasers work, and even how the sun produces energy. With its equations, scientists can predict experimental results with astonishing precision — up to 10 decimal places in the case of the electron’s magnetic moment.

    In spite of this extraordinary success, quantum physics is unusual compared to other scientific theories because it doesn’t tell us a single, clear story about what reality is like. The mathematics yields predictions that have never been contradicted within their tested domain, yet it leaves open the question of what the world is actually doing behind those numbers. This is what physicists mean when they speak of the ‘interpretations’ of quantum mechanics.

    In classical physics, the situation is more straightforward. Newton’s laws describe how forces act on bodies, leading them to move along definite paths. Maxwell’s theory of electromagnetism describes electric and magnetic fields filling space and interacting with charges. Einstein’s relativity shows space and time are flexible and curve under the influence of matter and energy. These theories predict outcomes and provide a coherent picture of the world: objects have locations, fields have values, and spacetime has shape. In quantum mechanics, the mathematics works perfectly — but the corresponding picture of reality is still unclear.

    The central concept in quantum theory is the wavefunction. This is a mathematical object that contains all the information about a system, such as an electron moving through space. The wavefunction evolves smoothly in time according to the Schrödinger equation. If you know the wavefunction at one moment, you can calculate it at any later moment using the equation. But when a measurement is made, the rules of the theory change. Instead of continuing smoothly, the wavefunction is used to calculate probabilities for different possible outcomes, and then one of those outcomes occurs.

    For instance, if an electron has a 50% chance of being detected on the left and a 50% chance of being detected on the right, the experiment will yield either left or right, never both at once. The mathematics says that before the measurement, the electron exists in a superposition of left and right, but after the measurement only one is found. This peculiar structure, where the wavefunction evolves deterministically between measurements but then seems to collapse into a definite outcome when observed, has no counterpart in classical physics.
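
    In symbols (a minimal sketch, not the full formalism), the pre-measurement state in this example is an equal superposition and the probabilities come from the Born rule:

    $$ |\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|\mathrm{left}\rangle + |\mathrm{right}\rangle\big), \qquad P(\mathrm{left}) = |\langle \mathrm{left}|\psi\rangle|^2 = \tfrac{1}{2} = P(\mathrm{right}) $$

    Between measurements, the state evolves smoothly under the Schrödinger equation, $i\hbar\,\partial_t|\psi\rangle = \hat{H}|\psi\rangle$; the Born rule enters only when a measurement is made.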

    The puzzles arise because it’s not clear what the wavefunction really represents. Is it a real physical wave that somehow ‘collapses’? Is it merely a tool for calculating probabilities, with no independent existence? Is it information in the mind of an observer rather than a feature of the external world? The mathematics doesn’t say.

    The measurement problem asks why the wavefunction collapses at all and what exactly counts as a measurement. Superposition raises the question of whether a system can truly be in several states at once or whether the mathematics is only a convenient shorthand. Entanglement, where two particles remain linked in ways that seem to defy distance, forces us to wonder whether reality itself is nonlocal in some deep sense. Each of these problems points to the fact that while the predictive rules of quantum theory are clear, their meaning is not.

    Over the past century, physicists and philosophers have proposed many interpretations of quantum mechanics. The most traditional is often called the Copenhagen interpretation, commonly discussed alongside the Schrödinger’s cat thought experiment. In this view, the wavefunction is not real but only a computational tool. In many Copenhagen-style readings, the wavefunction is a device for organising expectations while measurement is taken as a primitive, irreducible step. The many-worlds interpretation offers a different view that denies the wavefunction ever collapses. Instead, all possible outcomes occur, each in its own branch of reality. When you measure the electron, there is one version of you that sees it on the left and another version that sees it on the right.

    In Bohmian mechanics, particles always have definite positions guided by a pilot wave that’s represented by the wavefunction. In this view, the randomness of measurement outcomes arises because we can’t know the precise initial positions of the particles. There are also objective collapse theories that take the wavefunction as real but argue that it undergoes genuine, physical collapse triggered randomly or by specific conditions. Finally, an informational approach called QBism says the wavefunction isn’t about the world at all but about an observer’s expectations for experiences upon acting on the world.

    Most interpretations reproduce the same experimental predictions (objective-collapse models predict small, testable deviations) but tell different stories about what the world is really like.

    It’s natural to ask why interpretations are needed at all if they don’t change the predictions. Indeed, many physicists work happily without worrying about them. To build a transistor, calculate the energy of a molecule or design a quantum computer, the rules of standard quantum mechanics suffice. Yet interpretations matter for several reasons, especially because they shape our philosophical understanding of what kind of universe we live in.

    They also influence scientific creativity because some interpretations suggest directions for new experiments. For example, objective collapse theories predict small deviations from the usual quantum rules that can, at least in principle, be tested. Interpretations also matter in education. Students taught only the Copenhagen interpretation may come away thinking quantum physics is inherently mysterious and that reality only crystallises when it’s observed. Students introduced to many-worlds alone may instead think of the universe as an endlessly branching tree. The choice of interpretation moulds the intuition of future physicists. At the frontiers of physics, in efforts to unify quantum theory with gravity or to describe the universe as a whole, questions about what the wavefunction really is become unavoidable.

    In research fields that apply quantum mechanics to practical problems, many physicists don’t think about interpretation at all. A condensed-matter physicist studying superconductors uses the standard formalism without worrying about whether electrons are splitting into multiple worlds. But at the edges of theory, interpretation plays a major role. In quantum cosmology, where there are no external observers to perform measurements, one needs to decide what the wavefunction of the universe means. How we interpret entanglement, i.e. as a real physical relation versus as a representational device, colours how technologists imagine the future of quantum computing. In quantum gravity, the question of whether spacetime itself can exist in superposition renders interpretation crucial.

    Interpretations also matter in teaching. Instructors make choices, sometimes unconsciously, about how to present the theory. One professor may stick to the Copenhagen view and tell students that measurement collapses the wavefunction and that that’s the end of the story. Another may prefer many-worlds and suggest that collapse never occurs, only branching universes. A third may highlight information-based views, stressing that quantum mechanics is really about knowledge and prediction rather than about what exists independently. These different approaches shape the way students come to understand quantum mechanics as a tool as well as a worldview. For some, quantum physics will always appear mysterious and paradoxical. For others, it will seem strange but logical once its hidden assumptions are made clear.

    Interpretations also play a role in experiment design. Objective collapse theories, for example, predict that superpositions of large objects should spontaneously collapse. Experimental physicists are now testing whether quantum superpositions survive for increasingly massive molecules or for diminutive mechanical devices, precisely to check whether collapse really happens. Interpretations have also motivated tests of Bell’s inequalities, which show that no local theory with “hidden variables” can reproduce the correlations predicted by quantum mechanics. The scientists who conducted these experiments confirmed entanglement is a genuine feature of the world, not a residue of the mathematical tools we use to study it — and won the Nobel Prize for physics in 2022. Today, entanglement is exploited in technologies such as quantum cryptography. Without the interpretative debates that forced physicists to take these puzzles seriously, such developments may never have been pursued.
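
    For the curious, the most commonly tested version is the CHSH form of Bell’s inequality (textbook notation, not tied to any particular experiment). With two measurement settings per side, a and a′ for one observer and b and b′ for the other, and E denoting the measured correlations, any local hidden-variable theory must obey

    $$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 $$

    whereas quantum mechanics with entangled particles can push |S| up to 2√2, and experiments observe exactly such violations.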

    The fact that some physicists care deeply about interpretation while others don’t reflects different goals. Those who work on applied problems or who need to build devices don’t have to care much. The maths provides the answers they need. Those who are concerned with the foundations of physics, with the philosophy of science or with the unification of physical theories care very much, because interpretation guides their thinking about what’s possible and what’s not. Many physicists switch back and forth, ignoring interpretation when calculating in the lab but discussing many-worlds or informational views over chai.

    Quantum mechanics is unique among physical theories in this way. Few chemists or engineers spend time worrying about the ‘interpretation’ of Newtonian mechanics or thermodynamics because these theories present straightforward pictures of the world. Quantum mechanics instead gives flawless predictions but an under-determined picture. The search for interpretation is the search for a coherent story that links the extraordinary success of the mathematics to a clear vision of what the world is like.

    To interpret quantum physics is therefore to move beyond the bare equations and ask what they mean. Unlike classical theories, quantum mechanics doesn’t supply a single picture of reality along with its predictions. It leaves us with probabilities, superpositions, and entanglement, and it remains ambiguous about what these things really are. Some physicists insist interpretation is unnecessary; to others it’s essential. Some interpretations depict reality as a branching multiverse, others as a set of hidden particles, yet others as information alone. None has won final acceptance, but all try to close the gap between predictive success and conceptual clarity.

    In daily practice, many physicists calculate without worrying, but in teaching, in probing the limits of the theory, and in searching for new physics, interpretations matter. They shape not only what we understand about the quantum world but also how we imagine the universe we live in.

  • The Hyperion dispute and chaos in space

    I believe my blog’s subscribers did not receive email notifications of some recent posts. If you’re interested, I’ve listed the links to the last eight posts at the bottom of this edition.

    When reading around for my piece yesterday on the wavefunctions of quantum mechanics, I stumbled across an old and fascinating debate about Saturn’s moon Hyperion.

    The question of how the smooth, classical world around us emerges from the rules of quantum mechanics has haunted physicists for a century. Most of the time the divide seems easy: quantum laws govern atoms and electrons while planets, chairs, and cats are governed by the laws of Newton and Einstein. Yet there are cases where this distinction is not so easy to draw. One of the most surprising examples comes not from a laboratory experiment but from the cosmos.

    In the 1990s, Hyperion became the focus of a deep debate about the nature of classicality, one that quickly snowballed into the so-called Hyperion dispute. It showed how different interpretations of quantum theory could lead to apparently contradictory claims, and how those claims can be settled by making their underlying assumptions clear.

    Hyperion is not one of Saturn’s best-known moons but it is among the most unusual. Unlike round bodies such as Titan or Enceladus, Hyperion has an irregular shape, resembling a potato more than a sphere. Its surface is pocked by craters and its interior appears porous, almost like a sponge. But the feature that caught physicists’ attention was its rotation. Hyperion does not spin in a steady, predictable way. Instead, it tumbles chaotically. Its orientation changes in an irregular fashion as it orbits Saturn, influenced by the gravitational pulls of Saturn and Titan, which is a moon larger than Mercury.

    In physics, chaos does not mean complete disorder. It means a system is sensitive to its initial conditions. For instance, imagine two weather models that start with almost the same initial data: one says the temperature in your locality at 9:00 am is 20.000º C, the other says it’s 20.001º C. That seems like a meaningless difference. But because the atmosphere is chaotic, this difference can grow rapidly. After a few days, the two models may predict very different outcomes: one may show a sunny afternoon and the other, thunderstorms.

    This sensitivity to initial conditions is often called the butterfly effect — it’s the idea that the flap of a butterfly’s wings in Brazil might, through a chain of amplifications, eventually influence the formation of a tornado in Canada.

    Hyperion behaves in a similar way. A minuscule difference in its initial spin angle or speed grows exponentially with time, making its future orientation unpredictable beyond a few months. In classical mechanics this is chaos; in quantum mechanics, those tiny initial uncertainties are built in by the uncertainty principle, and chaos amplifies them dramatically. As a result, predicting its orientation more than a few months ahead is impossible, even with precise initial data.
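
    If you’d like to see this runaway sensitivity for yourself, here’s a minimal Python sketch. It uses the logistic map as a stand-in, which has nothing to do with Hyperion’s actual equations of motion, but it shows how a difference of about one part in a million blows up to order one within a few dozen steps:

    # A toy illustration of sensitive dependence on initial conditions.
    # The logistic map at r = 4 is fully chaotic; Hyperion's dynamics are far more
    # complicated, but the exponential growth of tiny differences is the same idea.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x, y = 0.200000, 0.200001   # two starting points differing by a millionth
    for step in range(1, 41):
        x, y = logistic(x), logistic(y)
        if step % 10 == 0:
            print(f"step {step:2d}: separation = {abs(x - y):.3e}")

    # The separation grows roughly as exp(lambda * n) until it saturates at the
    # size of the attractor; for this map the Lyapunov exponent is ln 2 per step,
    # so the millionth-of-a-unit difference reaches order 1 in about 20 steps.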

    To astronomers, this was a striking case of classical chaos. But to a quantum theorist, it raised a deeper question: how does quantum mechanics describe such a macroscopic, chaotic system?

    Why Hyperion interested quantum physicists is rooted in that core feature of quantum theory: the wavefunction. A quantum particle is described by a wavefunction, which encodes the probabilities of finding it in different places or states. A key property of wavefunctions is that they spread over time. A sharply localised particle will gradually smear out, with a nonzero probability of it being found over an expanding region of space.

    For microscopic particles such as electrons, this spreading occurs very rapidly. For macroscopic objects, like a chair, an orange or you, the spread is usually negligible. The large mass of everyday objects makes the quantum uncertainty in their motion astronomically small. This is why you don’t have to be worried about your chai mug being in two places at once.
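
    A rough way to see why (a back-of-the-envelope sketch with illustrative numbers of my own choosing): a free quantum wavepacket of mass m that starts out localised to a width σ₀ takes a time of order

    $$ t_{\mathrm{spread}} \sim \frac{2 m \sigma_0^2}{\hbar} $$

    for its width to grow appreciably (by a factor of about √2). For an electron confined to an atom-sized region of about 10⁻¹⁰ m, that works out to roughly 10⁻¹⁶ seconds; for a 1 kg object localised to a micrometre, it’s roughly 10²² seconds, far longer than the age of the universe.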

    Hyperion is a macroscopic moon, so you might think it falls clearly on the classical side. But this is where chaos changes the picture. In a chaotic system, small uncertainties get amplified exponentially fast. A variable called the Lyapunov exponent measures this sensitivity. If Hyperion begins with an orientation with a minuscule uncertainty, chaos will magnify that uncertainty at an exponential rate. In quantum terms, this means the wavefunction describing Hyperion’s orientation will not spread slowly, as for most macroscopic bodies, but at full tilt.

    In 1998, the Polish-American theoretical physicist Wojciech Zurek calculated that within about 20 years, the quantum state of Hyperion should evolve into a superposition of macroscopically distinct orientations. In other words, if you took quantum mechanics seriously, Hyperion would be “pointing this way and that way at once”, just like Schrödinger’s famous cat that is alive and dead at once.
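
    The logic behind an estimate like Zurek’s can be sketched in one line (the symbols here are generic and not lifted from his paper). An initial uncertainty δ₀, which the uncertainty principle prevents from being zero, grows in a chaotic system roughly as δ₀·e^(λt), where λ is the Lyapunov exponent, so the time for it to reach a macroscopic scale Δ is only logarithmic in how small δ₀ is:

    $$ t_* \approx \frac{1}{\lambda}\,\ln\frac{\Delta}{\delta_0} $$

    Because a logarithm grows so slowly, even the absurdly small quantum uncertainties of a moon-sized body translate into a breakdown time of decades rather than many lifetimes of the universe.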

    This startling conclusion raised the question: why do we not observe such superpositions in the real Solar System?

    Zurek’s answer to this question was decoherence. Say you’re blowing a soap bubble in a dark room. If no light touches it, the bubble is just there, invisible to you. Now shine a torchlight on it. Photons from the bulb will scatter off the bubble and enter your eyes, letting you see its position and colour. But here’s the catch: every photon that bounces off the bubble also carries away a little bit of information about it. In quantum terms, the bubble’s wavefunction becomes entangled with all those photons.

    If the bubble were treated purely quantum mechanically, you could imagine a strange state where it was simultaneously in many places in the room — a giant superposition. But once trillions of photons have scattered off it, each carrying “which path?” information, the superposition is effectively destroyed. What remains is an apparent mixture of “bubble here” or “bubble there”, and to any observer the bubble looks like a localised classical object. This is decoherence in action: the environment (the sea of photons here) acts like a constant measuring device, preventing large objects from showing quantum weirdness.

    For Hyperion, decoherence would be rapid. Interactions with sunlight, Saturn’s magnetospheric particles, and cosmic dust would constantly ‘measure’ Hyperion’s orientation. Any coherent superposition of orientations would be suppressed almost instantly, long before it could ever be observed. Thus, although pure quantum theory predicts Hyperion’s wavefunction would spread into cat-like superpositions, decoherence explains why we only ever see Hyperion in a definite orientation.

    Thus Zurek argued that decoherence is essential to understand how the classical world emerges from its quantum substrate. To him, Hyperion provided an astronomical example of how chaotic dynamics could, in principle, generate macroscopic superpositions, and how decoherence ensures these superpositions remain invisible to us.

    Not everyone agreed with Zurek’s conclusion, however. In 2005, physicists Nathan Wiebe and Leslie Ballentine revisited the problem. They wanted to know: if we treat Hyperion using the rules of quantum mechanics, do we really need the idea of decoherence to explain why it looks classical? Or would Hyperion look classical even without bringing the environment into the picture?

    To answer this, they did something quite concrete. Instead of trying to describe every possible property of Hyperion, they focused on one specific and measurable feature: the part of its spin that pointed along a fixed axis, perpendicular to Hyperion’s orbit. This quantity — essentially the up-and-down component of Hyperion’s tumbling spin — was a natural choice because it can be defined both in classical mechanics and in quantum mechanics. By looking at the same feature in both worlds, they could make a direct comparison.

    Wiebe and Ballentine then built a detailed model of Hyperion’s chaotic motion and ran numerical simulations. They asked: if we look at this component of Hyperion’s spin, how does the distribution of outcomes predicted by classical physics compare with the distribution predicted by quantum mechanics?

    The result was striking. The two sets of predictions matched extremely well. Even though Hyperion’s quantum state was spreading in complicated ways, the actual probabilities for this chosen feature of its spin lined up with the classical expectations. In other words, for this observable, Hyperion looked just as classical in the quantum description as it did in the classical one.

    From this, Wiebe and Ballentine drew a bold conclusion: that Hyperion doesn’t require decoherence to appear classical. The agreement between quantum and classical predictions was already enough. They went further and suggested that this might be true more broadly: perhaps decoherence is not essential to explain why macroscopic bodies, the large objects we see around us, behave classically.

    This conclusion went directly against the prevailing view of quantum physics as a whole. By the early 2000s, many physicists believed that decoherence was the central mechanism that bridged the quantum and classical worlds. Zurek and others had spent years showing how environmental interactions suppress the quantum superpositions that would otherwise appear in macroscopic systems. To suggest that decoherence was not essential was to challenge the very foundation of that programme.

    The debate quickly gained attention. On one side stood Wiebe and Ballentine, arguing that simple agreement between quantum and classical predictions for certain observables was enough to resolve the issue. On the other stood Zurek and the decoherence community, insisting that the real puzzle was more fundamental: why we never observe interference between large-scale quantum states.

    By this time, the Hyperion dispute wasn’t just about a chaotic moon. It was about how we could define ‘classical behaviour’ in the first place. For Wiebe and Ballentine, classical meant “quantum predictions match classical ones”. For Zurek et al., classical meant “no detectable superpositions of macroscopically distinct states”. The difference in definitions made the two sides seem to clash.

    But then, in 2008, physicist Maximilian Schlosshauer carefully analysed the issue and showed that the two sides were not actually talking about the same problem. The apparent clash arose because Zurek and Wiebe-Ballentine had started from essentially different assumptions.

    Specifically, Wiebe and Ballentine had adopted the ensemble interpretation of quantum mechanics. In everyday terms, the ensemble interpretation says, “Don’t take the quantum wavefunction too literally.” That is, it does not describe the “real state” of a single object. Instead, it’s a tool to calculate the probabilities of what we will see if we repeat an experiment many times on many identical systems. It’s like rolling dice. If I say the probability of rolling a 6 is 1/6, that probability does not describe the dice themselves as being in a strange mixture of outcomes. It simply summarises what will happen if I roll a large collection of dice.

    Applied to quantum mechanics, the ensemble interpretation works the same way. If an electron is described by a wavefunction that seems to say it is “spread out” over many positions, the ensemble interpretation insists this does not mean the electron is literally smeared across space. Rather, the wavefunction encodes the probabilities for where the electron would be found if we prepared many electrons in the same way and measured them. The apparent superposition is not a weird physical reality, just a statistical recipe.

    Wiebe and Ballentine carried this outlook over to Hyperion. When Zurek described Hyperion’s chaotic motion as evolving into a superposition of many distinct orientations, he meant this as a literal statement: without decoherence, the moon’s quantum state really would be in a giant blend of “pointing this way” and “pointing that way”. From his perspective, there was a crisis because no one ever observes moons or chai mugs in such states. Decoherence, he argued, was the missing mechanism that explained why these superpositions never show up.

    But under the ensemble interpretation, the situation looks entirely different. For Wiebe and Ballentine, Hyperion’s wavefunction was never a literal “moon in superposition”. It was always just a probability tool, telling us the likelihood of finding Hyperion with one orientation or another if we made a measurement. Their job, then, was simply to check: do these quantum probabilities match the probabilities that classical physics would give us? If they do, then Hyperion behaves classically by definition. There is no puzzle to be solved and no role for decoherence to play.

    This explains why Wiebe and Ballentine concentrated on comparing the probability distributions for a single observable, namely the component of Hyperion’s spin along a chosen axis. If the quantum and classical results lined up — as their calculations showed — then from the ensemble point of view Hyperion’s classicality was secured. The apparent superpositions that worried Zurek were never taken as physically real in the first place.

    Zurek, on the other hand, was addressing the measurement problem. In standard quantum mechanics, superpositions are physically real. Without decoherence, there is always some observable that could reveal the coherence between different macroscopic orientations. The puzzle is why we never see such observables registering superpositions. Decoherence provided the answer: the environment prevents us from ever detecting those delicate quantum correlations.

    In other words, Zurek and Wiebe-Ballentine were tackling different notions of classicality. For Wiebe and Ballentine, classicality meant the match between quantum and classical statistical distributions for certain observables. For Zurek, classicality meant the suppression of interference between macroscopically distinct states.

    Once Schlosshauer spotted this difference, the apparent dispute went away. His resolution showed that the clash was less over data than over perspectives. If you adopt the ensemble interpretation, then decoherence indeed seems unnecessary, because you never take the superposition as a real physical state in the first place. If you are interested in solving the measurement problem, then decoherence is crucial, because it explains why macroscopic superpositions never manifest.

    The overarching takeaway is that, from the quantum point of view, there is no single definition of what constitutes “classical behaviour”. The Hyperion dispute forced physicists to articulate what they meant by classicality and to recognise the assumptions embedded in different interpretations. Depending on your personal stance, you may emphasise the agreement of statistical distributions or you may emphasise the absence of observable superpositions. Both approaches can be internally consistent — but they also answer different questions.

    For school students reading this story, the Hyperion dispute may seem obscure. Why should we care about whether a distant moon’s tumbling motion demands decoherence or not? The reason is that the moon provides a vivid example of a deep issue: how do we reconcile the strange predictions of quantum theory with the ordinary world we see?

    In the laboratory, decoherence is an everyday reality. Quantum computers, for example, must be carefully shielded from their environments to prevent decoherence from destroying fragile quantum information. In cosmology, decoherence plays a role in explaining how quantum fluctuations in the early universe influenced the structure of galaxies. Hyperion showed that even an astronomical body can, in principle, highlight the same foundational issues.


    Last eight posts:

    1. The guiding light of KD45

    2. What on earth is a wavefunction?

    3. The PixxelSpace constellation conundrum

    4. The Zomato ad and India’s hustle since 1947

    5. A new kind of quantum engine with ultracold atoms

    6. Trade rift today, cryogenic tech yesterday

    7. What keeps the red queen running?

    8. A limit of ‘show, don’t tell’

  • Quasiparticles do the twist

    Physics often involves hidden surprises in how matter behaves at the smallest scales. One fundamental property is angular momentum, which describes how things spin or rotate, from planets all the way down to particles. Angular momentum is involved in many important effects, such as magnetism, and in quantum states that could one day be used in quantum computers.

    When atoms vibrate inside a crystal, the vibrational energy comes in multiples of a discrete value, i.e. in fixed packets of energy. Physicists liken these packets to particles of vibrational energy, which they call phonons.
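
    In the textbook picture (a sketch, not specific to any particular material), each vibrational mode of frequency ω behaves like a quantum harmonic oscillator, so its energy can only take the values

    $$ E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega, \qquad n = 0, 1, 2, \dots $$

    and adding one more quantum of energy ħω to a mode is what physicists describe as creating one phonon.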

    More particularly, a phonon is a kind of emergent particle called a quasiparticle. In 2017, Vijay B. Shenoy, an associate professor at the Centre for Condensed Matter Theory at the Indian Institute of Science, Bengaluru, explained the concept to me in a way I’ve always liked to return to:

    The idea of a ‘quasiparticle’ is a very subtle one. At the risk of being technical, let me try this: An excitation is called a particle if, for a given momentum of the excitation, there is a well-defined energy. Quite remarkably, this definition of a particle embodies what we conventionally think of as a particle: small hard things that move about.

    Now, to an example. Consider a system made of atoms at a very low density. It will be in a gaseous state. Due to their kinetic energy, the atoms will be freely moving about. Such a system has particle-like excitations. These particle-like excitations correspond to the behaviour of individual atoms.

    Now consider the system at a higher density. The atoms will be strongly interacting with each other and, therefore, make up a solid. You will never “see” these atoms as low-energy excitations. There will now be new types of excitations that are made of the collective motion of atoms and which will be particle-like (since there is a well-defined energy for a given momentum). These particle-like excitations are called phonons. Note that the phonon excitation is very different from the atom that makes up the solid. For example, phonons carry sound within a solid – but when the sound propagates, you don’t have atoms being carried from place to place!

    A ‘quasiparticle’ excitation is one that is very nearly a particle-like excitation: for the given momentum, it is a small spread of energy about some average value. The manifestation is such that, for practical purposes, if you watch this excitation over longer durations, it will behave like a particle in an experiment…

    Recently, physicists predicted that phonons can themselves carry angular momentum the way physical particles like electrons do. They were predicted to do so in materials called chiral crystals, where the atoms are arranged in a spiral structure. However, in spite of the exciting prediction, nobody had directly observed this phonon angular momentum, in part because measuring something so small and subtle isn’t easy. A new study in Nature Physics finally appears to have filled this gap, reporting the first direct evidence of the effect using a well-known chiral crystal.

    Researchers from Germany and the US designed an experiment with tellurium, an element whose crystals grow in spiral shapes that wind either to the left or to the right. Since phonons are vibrations inside a crystal, the angular momentum they carry as they travel in curved paths through the crystal can’t be recorded directly. Instead, the researchers surmised that if the angular momenta of all the phonons in a chiral crystal added up, they might twist the whole crystal ever so slightly, like a wind-up toy.

    So in their experiment, they heated a crystal in an uneven way in order to throw the left‑ and right‑handed phonons off balance, leaving behind a net phonon angular momentum that the whole crystal would have to offset by twisting in the opposite direction.
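
    The bookkeeping behind this idea is just the conservation of angular momentum, written here in generic symbols rather than the paper’s own notation: if the phonons build up a net angular momentum J_ph, the rigid lattice must pick up the opposite amount, and the rate of change appears as a measurable torque on the crystal.

    $$ J_{\mathrm{ph}} + J_{\mathrm{lattice}} = \mathrm{const.} \quad\Rightarrow\quad \tau_{\mathrm{lattice}} = \frac{dJ_{\mathrm{lattice}}}{dt} = -\,\frac{dJ_{\mathrm{ph}}}{dt} $$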

    To test this, the team started by growing small, pure tellurium crystals in the lab, making sure some were single crystals — i.e. with all atoms lining up the same way — and others were polycrystals, made of many small crystal grains in random orientations. The team expected that only the single chiral crystals would show the new effect whereas the polycrystals wouldn’t.

    Team members then attached the crystals to minuscule cantilevers. If a crystal twisted even a small amount, the cantilever would bend, and an electrical circuit would detect and amplify the signal. Finally, they created a temperature difference between the two ends of each crystal by shining a small, focused laser on it. This thermal gradient was expected to allow a net angular momentum to build up, if it was there.

    The team ran its tests on both types of crystals, changing the direction of the temperature gradient and running the experiment at different temperatures. In the process the team also ruled out the effects of other forces acting on the crystals, such as expansion due to heating.

    When the laser was switched on, the single-crystal tellurium samples showed a clear torque on the cantilevers while the polycrystalline samples didn’t. The torque flipped direction if the temperature gradient was reversed — a smoking gun that it was related to the handedness of the vibrations — and disappeared altogether when the laser was turned off.

    The team measured the torque to be an extremely slight 10⁻¹¹ N·m, which matched theoretical predictions.

    At higher temperatures, even the pure crystals stopped displaying a torque, in keeping with the expectation that the effect should only appear below the Debye temperature, which corresponds to the highest-frequency vibrations the crystal lattice can host.
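
    For reference, the Debye temperature is conventionally defined from the crystal’s maximum (Debye) vibrational frequency ω_D:

    $$ \Theta_D = \frac{\hbar\,\omega_D}{k_B} $$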

    Beyond the recent theoretical predictions, the research team’s motivation also traced back to an experiment that Albert Einstein and the Dutch physicist Wander Johannes de Haas conducted in 1915. It showed that flipping a magnetic field made a tiny iron rod twist. Einstein and de Haas explained that this happened because the rod’s electrons had to conserve angular momentum, thus confirming that these particles carry this property, an important moment in the history of physics. The researchers behind the new study similarly called what they observed the phonon Einstein-de Haas effect.

    Shenoy, however, was more measured in his assessment of the new study:

    It is, in general, not unusual to have quasiparticles possessing properties of physical particles. Condensed matter physics is replete with examples, such as phonons (discussed here), magnons, density excitations in low dimensions, etc.

    What is not usual is the discussion of angular momentum in the context of phonons. As the authors emphasise, this is possible due to the noncentrosymmetric nature of tellurium. The system does not have centrosymmetry (or inversion symmetry): that is, roughly, if you flip [the crystal] ‘inside out’ it looks like an ‘inside out’ image rather than itself. An instructive illustration is a mirror image: the mirror image of a circle is a circle (mirror-symmetric), but the mirror image of a right hand is not a right hand. Centrosymmetry is a three-dimensional version of mirror reflection. Broadly speaking, the whole report is not super surprising, but it is interesting that the scientists can measure this.

    Many of these physics papers reporting very specialised results make it a point to mention potential future applications of the underlying science. Admittedly, the pursuit of these applications, as and when they come to pass, and the commercial opportunities they create may help to fund the research. However, such speculation in papers also reinforces the idea that studies at the cutting edge are indebted (especially financially) to the future. I don’t agree with that position although I understand its grounding.

    For example, this is what the researchers behind the new study wrote in their paper (emphasis added; AM stands for ‘angular momentum’):

    … our measurements firmly establish the existence of phonon-AM in chiral crystals. Phonon-AM is the theoretical basis of chiral and topological phonons that may interact with topological fermions to create unique topological quantum states. Phonons can also transfer AM to other fundamental particles and elementary excitations allowing for novel quantum transduction mechanisms, thermal manipulation of spin, and detection of hidden quantum fields. This discovery provides a solid foundation for emergent chiral quantum states and opens a new avenue for phonon-AM enabled quantum information science and microelectronic applications.

    And this is what Shenoy had to say about that:

    I am not sure that [the finding] will have an immediate technological impact, particularly since this is a very subtle effect that requires very expensive single crystals; my guess is that this will be useful in some very specialised sensor application of some sort in the future. The authors also mention some microelectronics stuff, not sure about that. At this stage, this is firmly in the basic sciences column!

  • Found: clue to crack the antimatter mystery

    Imagine you’ve put together a torchlight. You know exactly how each part of the device works. You know exactly how they’re all connected together. Yet when you put in fresh batteries and turn it on, the light flickers. You take the torchlight apart and check each component piece by piece. It’s all good. The batteries are fully charged as well. Then you put it back together and turn it on — and the light still flickers.

    This torchlight is the Standard Model of particle physics. It’s the main theory of its field: it ties together the various properties of all the subatomic particles scientists have found thus far. It organises them into groups, describes how the groups interact with each other, and makes predictions about particles that have been tested to extraordinary precision. And yet, the Standard Model can’t explain what dark matter is, why the Higgs boson is so light or how neutrinos have mass.

    Physicists are thus looking for ‘new physics’: a hitherto unseen part of the torchlight’s internal apparatus that causes its light to flicker, i.e. some new particle or force that completes the Standard Model, closing the gaps that the current crop of particles and forces hasn’t been able to close.

    This search for new physics received a boost yesterday when physicists working with one of the detectors of the Large Hadron Collider reported that they had observed CP violation in baryons. This phenomenon is required to explain why the universe has more matter than antimatter today even though it’s assumed to have been born with equal quantities of both. Baryons are particles made up of three quarks, like protons and neutrons.

    CP symmetry is the idea that the laws of physics should be the same if you swap all particles with their antiparticles and flip left and right, like looking in a mirror. Thus CP violation in baryons means that if you swapped a baryon with the corresponding anti-baryon and flipped left and right, the laws of physics wouldn’t be the same, i.e. the laws treat matter and antimatter differently.
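
    In practice, such a violation is usually quantified as an asymmetry between the rate at which a baryon B decays to some final state f and the rate at which its antiparticle decays to the mirrored final state (generic notation, not the collaboration’s own):

    $$ A_{CP} = \frac{\Gamma(B \to f) - \Gamma(\bar{B} \to \bar{f})}{\Gamma(B \to f) + \Gamma(\bar{B} \to \bar{f})} $$

    A value of A_CP that’s statistically different from zero is the signature that the laws of physics treat matter and antimatter differently.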

    I wrote about this finding and its implications — including its place in the Sakharov conditions and what the results mean for the Standard Model — for The Hindu. Do read it.

    I’ve found it’s not the kind of article you read because it has something to say about saving money or living longer. But by reminding you that there’s a natural universe out there worth exploring and discovering, and that it contains no sign or imprint of the false justifications humans have advanced for their crimes, perhaps it can help you live better. As I’ve said before, if you’re not interested in particle physics, that’s fine. But remember that you can be.

    Featured image: A view of the LHCb detector at the LHC as seen through a fisheye lens. Credit: CERN.

  • Watch the celebrations, on mute

    Right now, Shubhanshu Shukla is on his way back to Earth from the International Space Station. Am I proud he’s been the first Indian up there? I don’t know. It’s not clear.

    The whole thing seemed to be stage-managed. Shukla didn’t say anything surprising, nothing that popped. In fact he said exactly what we expected him to say. Nothing more, nothing less.

    Fuck controversy. It’s possible to be interesting in new ways all the time without edging into the objectionable. It’s not hard to beat predictability — but there it was for two weeks straight. I wonder if Shukla was fed all his lines. It could’ve been a monumental thing but it feels… droll.

    “India’s short on cash.” “India’s short on skills.” “India’s short on liberties.” We’ve heard these refrains as we’ve covered science and space journalism. But it’s been clear for some time now that “India’s short on cash” is a myth.

    We’ve written and spoken over and over that Gaganyaan needs better accountability and more proactive communication from ISRO’s Human Space Flight Centre. But it’s also true that it needs even more money than the Rs 20,000 crore it’s already been allocated.

    One thing I’ve learnt about the Narendra Modi government is that if it puts its mind to it, if it believes it can extract political mileage from a particular commitment, it will find a way to go all in. So when it doesn’t, the fact that it doesn’t sticks out. It’s a signal that The Thing isn’t a priority.

    Looking at the Indian space programme through the same lens can be revealing. Shukla’s whole trip and back was carefully choreographed. There’s been no sense of adventure. Grit is nowhere to be seen.

    But between Prime Minister Modi announcing his name in the list of four astronaut-candidates for Gaganyaan’s first crewed flight (currently set for 2027) and today, I know marginally more about Shukla, much less about the other three, and nothing really personal to boot. Just banal stuff.

    This isn’t some military campaign we’re talking about, is it? Just checking.

    Chethan Kumar at ToI and Jatan Mehta have done everyone a favour: one by reporting extensively on Shukla’s and ISRO’s activities and the other by collecting even the most deeply buried scraps of information from across the internet in one place. The point, however, is that it shouldn’t have come to this. Their work is laborious, made possible by the fact that it’s by far their primary responsibility.

    It needed to be much easier than this to find out more about India’s first homegrown astronauts. ISRO itself has been mum, so much so that every new ISRO story is turning out to be an investigative story. The details of Shukla’s exploits needed to be interesting, too. They haven’t been.

    So now, Shukla’s returning from the International Space Station. It’s really not clear what one’s expected to be excited about…

    Featured image credit: Ray Hennessy/Unsplash.