Month: October 2025

  • Using disorder to reveal hidden objects

    When light, sound or any kind of wave travels through a complex medium like fog, murky water, or biological tissue, it scatters in many directions. Each particle or irregularity in the medium changes the path of the waves, scrambling them and blurring the resulting image. This is why doctors struggle to image deep inside tissue using ultrasound, why optical microscopes can’t see through thick samples, and why radar and sonar sometimes miss objects hidden behind clutter.

    Scientists have long looked for ways to focus waves through such disordered environments — and while many have tried to compensate for scattering, their success has been limited when the medium becomes very opaque.

    A team led by Alexandre Aubry at ESPCI Paris and collaborators from Vienna and Aix-en-Provence wanted to turn this problem around. Instead of correcting or undoing the scattering, they wondered if something in the wave patterns remains stable even in the middle of all that complexity. That is, could they identify and locate a target based on the part of the signal that still carries its unique ‘fingerprint’?

    Their new study, published in Nature Physics, introduces a mathematical tool called the fingerprint operator that allows exactly this. This operator can detect, locate, and even characterise an object hidden inside a strongly scattering medium by comparing the reflected waves to a reference pattern recorded in simpler conditions. The method can work for sound, light, radar, and other kinds of waves.

    At the heart of the technique is the reflection matrix, a large dataset recording how each source in an array of sources sends a wave into the medium and how every receiver picks up the returning echoes. Each element of this matrix contains information about how waves bounce off of different points, so together they capture the complete response of the system.

    To find a target within this sea of signals, the researchers introduced the fingerprint operator, written as Γ = R × R₀†, where R is the measured reflection matrix from the complex medium and R₀ is a reference matrix measured for the same target in clear, homogeneous conditions. The dagger (†) denotes the conjugate transpose, which makes the comparison sensitive to how well the two patterns match. By calculating how strongly the two matrices correlate, the team obtained a likelihood index, which indicates how likely it is that a target with certain properties — e.g. position, size or shape — is present at a given spot.
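
    For the curious, here's a minimal numerical sketch of the idea (my own illustration, not the authors' code): treat the measured and reference reflection matrices as complex arrays, form the fingerprint operator, and use its trace as a normalised matching score. The function and variable names here are hypothetical.

    ```python
    import numpy as np

    def likelihood_index(R, R0):
        """Toy fingerprint score: normalised correlation between a measured
        reflection matrix R and a reference matrix R0 recorded in clear conditions.
        Values near 1 suggest the reference target's fingerprint survives in R."""
        gamma = R @ R0.conj().T          # the fingerprint operator, Γ = R × R0†
        return np.abs(np.trace(gamma)) / (np.linalg.norm(R) * np.linalg.norm(R0))

    # Hypothetical usage: scan a bank of reference matrices, one per candidate
    # target position, and map out where the score peaks.
    # scores = [likelihood_index(R_measured, R0) for R0 in reference_bank]
    ```

    The actual study constructs the likelihood index more carefully, but the core operation is this kind of matrix correlation between the measured and reference data.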

    Effectively, the team has developed a way to image hidden objects using scattered waves.

    The researchers tested this concept with ultrasound. They used arrays containing up to 1,024 transducers (devices that convert energy from one form to another) to send and receive sound waves. First, they embedded small metal spheres inside a suspension of glass beads mixed with water, making for a strongly scattering environment.

    In the granular suspension, conventional ultrasound couldn’t see the buried metal spheres at all. The multiple scattering caused an exponential loss of contrast with depth, making the target signals roughly 100x weaker than the background noise. Yet when the fingerprint operator was applied, the two spheres appeared sharply on the reconstructed likelihood map, each represented by a bright peak at its correct location. The contrast improvement reached factors of several hundred, strong enough to rule out false positive signals with a probability of error smaller than 1 in a hundred million.

    This success came from the fingerprint operator’s ability to filter out diffuse, randomly scattered waves and isolate those faint waves that behave as if the medium were transparent. In simple terms, the operator is a mathematical tool that can use the complexity of the target’s own echo to cancel the complexity of the medium.

    The same approach worked inside a foam that mimicked human tissue. A normal ultrasound image was dominated by speckle (random bright and dark spots caused by small scattering events), rendering a small pre-inserted marker nearly invisible. But when the fingerprint operator was applied to the data, the marker was revealed clearly and precisely.

    To its credit, the fingerprint operator doesn’t require scientists to fully know the medium, only the ability to record a reflection matrix and a reference response. It can then use these resources to find patterns that survive scattering and extract meaningful information.

    For medicine, this could improve ultrasound detection of small implants, needles, and markers that currently get lost in tissue noise. It could also help map the internal fibre structure of muscles or hearts, providing new diagnostic insights into diseases like cardiomyopathy and fibrosis. In materials science, it could reveal the orientation of grains in metals or composites. In military settings, it could locate targets hidden behind foliage or turbulent water.

    The approach is also computationally efficient: according to the researchers’ paper, generating the likelihood map takes about the same time as developing a standard ultrasound image and can be adapted for moving targets by incorporating motion parameters into the fingerprint.

    Finally, the idea animating the study challenges a long-standing view: that multiple scattering is purely a nuisance, incapable of being useful. The study overturns this view by extracting information from the multiply scattered signals, using the fingerprint operator to account for how a target’s own echoes evolve through scattering, and leveraging those distortions to detect it more confidently.

    Featured image credit: Rafael Peier/Unsplash.

  • Remembering ‘The Melancholy of Resistance’

    Congratulations, László Krasznahorkai, on winning the Nobel Prize in Literature.

    I still remember reading his The Melancholy of Resistance (1989). It was a mostly unnerving, somewhat frightening experience because I read it at a time of great uncertainty in my own life. In The Melancholy, chaos lurks in the banal mechanisms of civic life: when a train is delayed, when a town is disordered, when an old woman’s journey is warped by fear. Using a slow implosion of meaning, Krasznahorkai confronts the reader not with the spectacle of a dystopia, which is apparent enough, but with a particular character of it that’s otherwise often too fleeting to scrutinise: the slow death of coherence that makes catastrophe feel not so much inevitable as almost natural.

    While the town in The Melancholy remains unnamed, there’s a sense that we all know where it is. It’s a landscape in which every gesture of governance has been reduced to a ritual and where only the forms of order persist even as their substance rots. The circus that arrives at the town’s edge, trailing a colossal whale and the promise of revelation, becomes a parable of how societies yearn for meaning precisely when they lose the capacity to create it. The town’s population is numbed by routine and discovers, unfortunately too late, that it has already surrendered its collective will. If you look closely, you’ll see just this pattern recur throughout history, often on the cusp of great violence and generational trauma.

    To me at least, The Melancholy was disturbing not because of its setting in late-socialist Hungary but because of Krasznahorkai’s method, which portrayed entropy as an ordinary condition of modern life. In fact the town’s decay seems to mirror our own saturation with information and disinformation today, our bureaucratised indifference, and our surrender to slow violence rather than sudden terror. Today’s dystopias are not built by tyrants (even if they abound) but maintained by exhaustion — by making virtues of the same systems that no longer work yet continue to grind on. Krasznahorkai’s long, spiralling sentences mimic this endurance by trapping readers within the drawling syntax of futility.

    In this world, resistance teeters on the brink of melancholy because it no longer imagines victory, not even new kinds of it: it simply refuses to forget what dignity once meant. The novel’s citizens shuffle through darkness as a travelling monstrosity settles down in their midst, resembling the contemporary crowds scrolling through crisis after crisis, aware that something monstrous is underway but too enmeshed in, and too ground down by, old habits to act. The Melancholy endures in effect because Krasznahorkai turns dystopia inside out. He doesn’t ask what happens when civilisation collapses but how it can manage to lumber on even long after its meaning has departed.

    And its terror, of course, lies in recognising that the end of the world, whenever it comes, will look and feel exactly like the world we already know.

  • What does a quantum Bayes’s rule look like?

    Bayes’s rule is one of the most fundamental principles in probability and statistics. It allows us to update our beliefs in the face of new evidence. In its simplest form, the rule tells us how to revise the probability of a hypothesis once new data becomes available.

    A standard way to teach it involves drawing coloured balls from a pouch: you start with some expectation (e.g. “there’s a 20% chance I’ll draw a blue ball”), then you update your belief in light of what you observe (“I’ve drawn a red ball, which makes it likelier the pouch is mostly red, so my revised chance of drawing a blue ball is 10%”). While this example seems simple, the rule carries considerable weight: physicists and mathematicians have described it as the most consistent way to handle uncertainty in science, and it’s a central part of logic, decision theory, and indeed nearly every field of applied science.
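
    To make the update concrete, here's a small worked example in Python (with made-up numbers of my own): we're unsure whether the pouch is mostly red or mostly blue, and each draw shifts our belief between the two hypotheses.

    ```python
    # Bayes's rule for a toy pouch: two hypotheses about what it contains.
    # posterior(hypothesis) is proportional to likelihood(observation | hypothesis) * prior(hypothesis)

    priors = {"mostly_red": 0.5, "mostly_blue": 0.5}     # initial beliefs
    p_blue = {"mostly_red": 0.1, "mostly_blue": 0.8}     # P(draw a blue ball | hypothesis)

    def update(beliefs, drew_blue):
        """Return the posterior beliefs after observing one draw."""
        unnormalised = {}
        for h, prior in beliefs.items():
            likelihood = p_blue[h] if drew_blue else 1 - p_blue[h]
            unnormalised[h] = likelihood * prior
        total = sum(unnormalised.values())               # the 'evidence' term, for normalisation
        return {h: v / total for h, v in unnormalised.items()}

    beliefs = update(priors, drew_blue=False)            # we drew a red ball
    print(beliefs)   # belief shifts towards "mostly_red", so a blue draw now looks less likely
    ```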

    There are two well-known ways of arriving at Bayes’s rule. One is the axiomatic route, which treats probability as a set of logical rules and shows that Bayesian updating is the only way to preserve consistency. The other is variational, which demands that updates should stay as close as possible to prior beliefs while remaining consistent with new data. This latter view is known as the principle of minimum change. It captures the intuition that learning should be conservative: we shouldn’t alter our beliefs more than is necessary. This principle explains why Bayesian methods have become so effective in practical statistical inference: because they balance a respect for new data with loyalty to old information.

    A natural question arises here: can Bayes’s rule be extended into the quantum world?

    Quantum theory can be thought of as a noncommutative extension of probability theory. While there are good reasons to expect there should be a quantum analogue of Bayes’s rule, the field has for a long time struggled to identify a unique and universally accepted version. Instead, there are several competing proposals. One of them stands out: the Petz transpose map. This is a mathematical transformation that appears in many areas of quantum information theory, particularly in quantum error correction and statistical sufficiency. Some scholars have even argued that it’s the “correct” quantum Bayes’s rule. Still, the situation remains unsettled.

    In probability, the joint distribution is like a big table that lists the chances of every possible pair of events happening together. If you roll a die and flip a coin, the joint distribution specifies the probability of getting “heads and a 3”, “tails and a 5”, and so on. In this big table, you can also zoom out and just look at one part. For example, if you only care about the die, you can add up over all coin results to get the probability of each die face. Or if you only care about the coin, you can add up over all die results to get the probability of heads or tails. These zoomed-out views are called marginals.
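
    For instance, the die-and-coin table and its two zoomed-out views can be written out explicitly (a trivial sketch, just to pin down the terms):

    ```python
    import numpy as np

    # Joint distribution for a fair coin (rows) and a fair die (columns):
    # each of the 2 x 6 outcome pairs is equally likely.
    joint = np.full((2, 6), 1 / 12)

    p_coin = joint.sum(axis=1)   # marginal over the die: P(heads), P(tails)
    p_die = joint.sum(axis=0)    # marginal over the coin: P(1), ..., P(6)

    print(p_coin)   # [0.5 0.5]
    print(p_die)    # six entries of 1/6 each
    ```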

    The classical Bayes’s rule doesn’t just update the zoomed-out views but the whole table — i.e. the entire joint distribution — so the connection between the two events also remains consistent with the new evidence.

    In the quantum version, the joint distribution isn’t a table of numbers but a mathematical object that records how the input and output of a quantum process are related. The point of the new study is that if you want a true quantum Bayes’s rule, you need to update that whole object, not just one part of it.

    A new study by Ge Bai, Francesco Buscemi, and Valerio Scarani in Physical Review Letters has taken just this step. In particular, they’ve presented a quantum version of the principle of minimum change by showing that when the measure of change is chosen to be quantum fidelity — a widely used measure of similarity between states — this optimisation leads to a unique solution. Equally remarkably, this solution coincided with the Petz transpose map in many important cases. As a result, the researchers have built a strong bridge between classical Bayesian updating, the minimum change principle, and a central tool of quantum information.

    The motivation for this new work isn’t only philosophical. If we’re to generalise Bayes’s rule to include quantum mechanics as well, we need to do so in a way that respects the structural constraints of quantum theory without breaking away from its classical roots.

    The researchers began by recalling how the minimum change principle works in classical probability. Instead of updating only a single marginal distribution, the principle works at the level of the joint input-output distribution. Updating then becomes an optimisation problem, i.e. finding the new joint distribution that’s consistent with the new evidence but minimally different from the one held before.
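
    One standard way to write this down (a sketch in my own notation, not the paper's): if p(h, d) is the prior joint distribution over hypotheses h and data d, and we observe d = d*, the updated joint q is the distribution closest to p, in the sense of relative entropy, among all those whose data marginal sits entirely on d*:

    ```latex
    q^{*} \;=\; \arg\min_{q \,:\, q(d) = \delta_{d, d^{*}}} \; D_{\mathrm{KL}}\!\left( q(h, d) \,\middle\|\, p(h, d) \right),
    \qquad \text{with solution} \qquad
    q^{*}(h, d^{*}) \;=\; p(h \mid d^{*}).
    ```

    The minimiser is exactly the Bayesian posterior, which is what makes 'minimum change' a legitimate route to Bayes's rule. (The classical principle is usually phrased with a divergence like relative entropy; the researchers' quantum version uses fidelity instead, as noted above.)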

    In ordinary probability, we talk about stochastic processes. These are rules that tell us how an input is turned into an output, with certain probabilities. For example if you put a coin into a vending machine, there might be a 90% chance you get a chips packet and a 10% chance you get nothing. This rule describes a stochastic process. This process can also be described with a joint distribution.

    In quantum physics, however, it’s tricky. The inputs and outputs aren’t just numbers or events but quantum states, which are described by wavefunctions or density matrices. This makes the maths much more complex. The quantum counterparts of stochastic processes are called completely positive trace-preserving (CPTP) maps.

    A CPTP map is the most general kind of physical evolution allowed: it takes a quantum state and transforms it into another quantum state. And in the course of doing so, it needs to follow two rules: it shouldn’t yield any negative probabilities and it should ensure the total probability adds up to 1. That is, your chance of getting a chips packet shouldn’t be –90% nor should it be 90% plus a 20% chance of getting nothing.

    These complications mean that, while the joint distribution in classical Bayesian updating is a simple table, the one in quantum theory is more sophisticated. It uses two mathematical tools in particular. One is purification, a way to embed a mixed quantum state into a larger ‘pure’ state so that mathematicians can keep track of correlations. The other is Choi operators, a standard way of representing a CPTP map as a big matrix that encodes all possible input-output behaviour at once.

    Together, these tools play the role of the joint distribution in the quantum setting: they record the whole picture of how inputs and outputs are related.
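
    As a rough sketch of what these objects look like in practice (assuming a generic qubit channel written with Kraus operators; none of this is code from the paper), here is how a Choi operator is built and how the two CPTP rules can be checked numerically:

    ```python
    import numpy as np

    # A qubit amplitude-damping channel, specified by two Kraus operators.
    p = 0.3
    K = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
         np.array([[0, np.sqrt(p)], [0, 0]])]

    # Rule 1 (trace preservation): sum_i K_i† K_i must equal the identity,
    # so that output probabilities always add up to 1.
    assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(2))

    # Choi operator: send one half of a maximally entangled pair through the channel.
    # The resulting 4x4 matrix encodes the channel's entire input-output behaviour.
    omega = np.eye(2).flatten()                   # unnormalised |00> + |11>
    bell = np.outer(omega, omega.conj()) / 2      # density matrix of the entangled pair
    choi = sum(np.kron(np.eye(2), k) @ bell @ np.kron(np.eye(2), k).conj().T for k in K)

    # Rule 2 (complete positivity): the Choi matrix must have no negative eigenvalues.
    assert np.all(np.linalg.eigvalsh(choi) >= -1e-12)
    ```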

    Now, how do you compare two processes, i.e. the actual forward process (input → output) and the guessed reverse process (output → input)?

    In quantum mechanics, one of the best measures of similarity is fidelity. It’s a number between 0 and 1. 0 means two processes are completely different and 1 means they’re exactly the same.
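
    For density matrices, this (Uhlmann) fidelity can be computed in a few lines; a generic sketch, not tied to the paper's specific construction:

    ```python
    import numpy as np
    from scipy.linalg import sqrtm

    def fidelity(rho, sigma):
        """Uhlmann fidelity between two density matrices: 1 if they're identical,
        0 if they're perfectly distinguishable."""
        s = sqrtm(rho)
        return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

    rho = np.array([[0.9, 0], [0, 0.1]], dtype=complex)          # a mixed state
    plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)     # the pure state |+><+|
    print(fidelity(rho, rho))    # ~1.0
    print(fidelity(rho, plus))   # ~0.5, since |+> overlaps rho with probability 1/2
    ```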

    In this context, the researchers’ problem statement was this: given a forward process, what reverse process is closest to it?

    To solve this, they looked over all possible reverse processes that obeyed the two rules, then they picked the one that maximised the fidelity, i.e. the CPTP map most similar to the forward process. This is the quantum version of applying the principle of minimum change.

    In the course of this process, the researchers found that in natural conditions, the Petz transpose map emerges as the quantum Bayes’s rule.
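
    For readers who want to see the object itself: given a forward channel N and a prior state γ, the Petz transpose map sends a state X to γ^(1/2) N†( N(γ)^(−1/2) X N(γ)^(−1/2) ) γ^(1/2), where N† is the adjoint of the channel (the same map run in the Heisenberg picture). Below is a minimal NumPy sketch of this textbook formula (my own illustration, not the paper's code), with a toy check on a unitary channel, for which the Petz map simply undoes the rotation:

    ```python
    import numpy as np
    from scipy.linalg import sqrtm, inv

    def apply_channel(K, rho):
        """Apply a channel given by Kraus operators K to the state rho."""
        return sum(k @ rho @ k.conj().T for k in K)

    def apply_adjoint(K, X):
        """Apply the adjoint (Heisenberg-picture) map of the same channel."""
        return sum(k.conj().T @ X @ k for k in K)

    def petz_map(K, gamma, X):
        """Petz transpose map of the channel {K} with respect to the prior gamma."""
        w = inv(sqrtm(apply_channel(K, gamma)))    # N(gamma)^(-1/2)
        s = sqrtm(gamma)                           # gamma^(1/2)
        return s @ apply_adjoint(K, w @ X @ w) @ s

    # Toy check: for a unitary (noiseless) channel, feeding the channel's output
    # back through the Petz map recovers the prior state exactly.
    theta = 0.4
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]], dtype=complex)
    gamma = np.diag([0.7, 0.3]).astype(complex)    # prior state
    out = apply_channel([U], gamma)                # forward evolution
    print(np.round(petz_map([U], gamma, out).real, 6))   # ~ gamma again
    ```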

    In quantum mechanics, two objects (like matrices) commute if the order in which you apply them doesn’t matter. That is, A then B produces the same outcome as B then A. In physical terms, if two quantum states commute, they behave more like classical probabilities.

    The researchers found that when the CPTP map that takes an input and produces an output, called the forward channel, commutes with the new state, the updating process is nothing but the Petz transpose map.

    This is an important result for many reasons. Perhaps foremost is that it explains why the Petz map has shown up consistently across different parts of quantum information theory. It appears it isn’t just a useful tool but the natural consequence of the principle of minimum change applied in the quantum setting.

    The study also highlighted instances where the Petz transpose map isn’t optimal, specifically when the commutativity condition fails. In these situations, the optimal updating process depends more intricately on the new evidence. This subtlety departs clearly from classical Bayesian logic because in the quantum case, the structure of non-commutativity forces updates to depend non-linearly on the evidence (i.e. the scope of updating can be disproportionate to changes in evidence).

    Finally, the researchers have shown how their framework can recover special cases of practical importance. If some new evidence perfectly agrees with prior expectations, the forward and reverse processes become identical, mirroring the classical situation where Bayes’s rule simply reaffirms existing beliefs. Similarly, in contexts like quantum error correction, the Petz transpose map’s appearance is explained by its status as the optimal minimal-change reverse process.

    But the broader significance of this work lies in the way it unifies different strands of quantum information theory under a single conceptual roof. By proving that the Petz transpose map can be derived from the principle of minimum change, the study has provided a principled justification for its widespread use, rather than one restricted to particular contexts. This fact has immediate consequences for quantum computing, where physicists are looking for ways to reverse the effects of noise on fragile quantum states. The Petz transpose map has long been known to do a good job of recovering information from these states after they’ve been affected by noise. Now that physicists know the map embodies the smallest update required to stay consistent with the observed outcomes, they may be able to design new recovery schemes that exploit the structure of minimal change more directly.

    The study may also open doors to extending Bayesian networks into the quantum regime. In classical probability, a Bayesian network provides a structured way to represent cause-effect relationships. By adapting the minimum change framework, scientists may be able to develop ‘quantum Bayesian networks’ where the way one updates their expectations of a particular outcome respects the peculiar constraints of CPTP maps. This could have applications in quantum machine learning and in the study of quantum causal models.

    There are also some open questions. For instance, the researchers have noted that if measures of divergence other than fidelity are used, e.g. the Hilbert-Schmidt distance or quantum relative entropy, the resulting quantum Bayes’s rules may be different. This in turn indicates that there could be multiple valid updating rules, each suited to different contexts. Future research will need to map out these possibilities and determine which ones are most useful for particular applications.

    In all, the study provides both a conceptual advance and a technical tool. Conceptually, it shows how the spirit of Bayesian updating can carry over into the quantum world; technically, it provides a rigorous derivation of when and why the Petz transpose map is the optimal quantum Bayes’s rule. Taken together, the study’s findings strengthen the bridge between classical and quantum reasoning and offer a deeper understanding of how information is updated in a world where uncertainty is baked into reality rather than being due to an observer’s ignorance.

  • Using 10,000 atoms and 1 to probe the Bohr-Einstein debate

    The double-slit experiment has often been described as the most beautiful demonstration in physics. In one striking image, it shows the strange dual character of matter and light. When particles such as electrons or photons are sent through two narrow slits, the resulting pattern on a screen behind them is not the simple outline of the slits, but a series of alternating bright and dark bands. This pattern looks exactly like the ripples produced by waves on the surface of water when two stones are thrown in together. But when detectors are placed to see which slit each particle passes through, the pattern changes: the wave-like interference disappears and the particles line up as if they had travelled like microscopic bullets.

    This puzzling switch between wave and particle behaviour became the stage for one of the deepest disputes of the 20th century. The two central figures were Albert Einstein and Niels Bohr, each with a different vision of what the double-slit experiment really meant. Their disagreement was not about the results themselves but about how these results should be interpreted, and what they revealed about the nature of reality.

    Einstein believed strongly that the purpose of physics was to describe an external reality that exists independently of us. For him, the universe must have clear properties whether or not anyone is looking. In a double-slit experiment, this meant an electron or photon must in fact have taken a definite path, through one slit or the other, before striking the screen. The interference pattern might suggest some deeper process that we don’t yet understand but, to Einstein, it couldn’t mean that the particle lacked a path altogether.

    Based on this idea, Einstein argued that quantum mechanics (as formulated in the 1920s) couldn’t be the full story. The strange idea that a particle had no definite position until measured, or that its path depended on the presence of a detector, was unacceptable to him. He felt that there must be hidden details that explained the apparently random outcomes. These details would restore determinism and make physics once again a science that described what happens, not just what is observed.

    Bohr, however, argued that Einstein’s demand for definite paths misunderstood what quantum mechanics was telling us. Bohr’s central idea was called complementarity. According to this principle, particles like electrons or photons can show both wave-like and particle-like behaviour, but never both at the same time. Which behaviour appears depends entirely on how an experiment is arranged.

    In the double-slit experiment, if the apparatus is set up to measure which slit the particle passes through, the outcome will display particle-like behaviour and the interference pattern will vanish. If the apparatus is set up without path detectors, the outcome will display wave-like interference. For Bohr, the two descriptions are not contradictions but complementary views of the same reality, each valid only within its experimental context.

    Specifically, Bohr insisted that physics doesn’t reveal a world of objects with definite properties existing independently of measurement. Instead, physics provides a framework for predicting the outcomes of experiments. The act of measurement is inseparable from the phenomenon itself. Asking what “really happened” to the particle when no one was watching was, for Bohr, a meaningless question.

    Thus, while Einstein demanded hidden details to restore certainty, Bohr argued that uncertainty was built into nature itself. The double-slit experiment, for Bohr, showed that the universe at its smallest scales does not conform to classical ideas of definite paths and objective reality.

    The disagreement between Einstein and Bohr was not simply about technical details but a clash of philosophies. Einstein’s view was rooted in the classical tradition: the world exists in a definite state and science should describe that state. Quantum mechanics, he thought, was useful but incomplete, like a map missing a part of the territory.

    Bohr’s view was more radical. He believed that the limits revealed by the double-slit experiment were not shortcomings of the theory but truths about the universe. For him, the experiment demonstrated that the old categories of waves and particles, causes and paths, couldn’t be applied without qualification. Science had to adapt its concepts to match what experiments revealed, even if that meant abandoning the idea of an observer-independent reality.

    Though the two men never reached agreement, their debate has continued to inspire generations of physicists and philosophers. The double-slit experiment remains the clearest demonstration of the puzzle they argued over. Do particles truly have no definite properties until measured, as Bohr claimed? Or are we simply missing hidden elements that would complete the picture, as Einstein insisted?

    A new study in Physical Review Letters has taken the double-slit spirit into the realm of single atoms and scattered photons. And rather than ask whether an electron goes through one slit or another, it has asked whether scattered light carries “which-way” information about an atom. By focusing on the coherence or incoherence of scattered light, the researchers — from the Massachusetts Institute of Technology — have effectively reopened the old debate in a modern setting.

    The researchers trapped ultracold atoms in an optical lattice, a regular grid of light that holds atoms at well-defined positions, like pieces on a chessboard. By carefully preparing these atoms in a particular state, they ensured each lattice site contained exactly one atom in its lowest energy state. The lattice could then be suddenly switched off, letting the atoms expand as localised wavepackets, i.e. small, slowly spreading packets of matter waves. A short pulse of laser light was directed at these atoms; its photons scattered off the atoms and were collected by a detector.

    By checking whether the scattered light was coherent (with a steady, predictable phase) or incoherent (with a random phase), the scientists could tell if the photons carried hints of the motion of the atom that scattered them.

    The main finding was that even a single atom scattered light that was only partly coherent. In other words, the scattered light wasn’t completely wave-like: one part of it showed a clear phase pattern, another part looked random. The randomness came from the fact that the scattering process linked, or entangled, the photon with the atom’s movement. This was because each time a photon was scattered, the atom recoiled just a little, and that recoil left behind a faint clue about which atom had scattered the photon. This in turn meant that, in principle, the scientists could look closely enough to work out which atom the photon came from.

    To study this effect, the team compared three cases. First, they observed atoms still held tightly in the optical lattice. In this case, scattering could create sidebands — frequency shifts in the scattered light — that reflected changes in the atom’s motion. These sidebands represented incoherent scattering. Second, they looked at atoms immediately after switching off the lattice, before the expanding wavepackets had spread out. Third, they examined atoms after a longer expansion in free space, when the wavepackets had grown even wider.

    In all three cases, the ratio of coherent to incoherent light could be described by a simple mathematical term called the Debye-Waller factor. This factor depends only on the spatial spread of the wavepacket. As the atoms expanded in space, the Debye-Waller factor decreased, meaning more and more of the scattered light became incoherent. Eventually, after long enough expansion, essentially all the scattered light was incoherent.
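
    As a rough numerical illustration of that dependence (using the textbook Gaussian form of the Debye-Waller factor, exp(−k²⟨x²⟩), with a laser wavelength and wavepacket sizes chosen by me for illustration rather than taken from the paper):

    ```python
    import numpy as np

    wavelength = 671e-9              # roughly a lithium resonance line, in metres
    k = 2 * np.pi / wavelength       # sets the scale of the photon's momentum kick

    # Spatial spread of the atomic wavepacket, from well localised to delocalised.
    spreads = np.array([10e-9, 50e-9, 100e-9, 500e-9])    # metres

    # Debye-Waller factor: the fraction of scattered light that stays coherent.
    coherent_fraction = np.exp(-(k * spreads) ** 2)

    for dx, f in zip(spreads, coherent_fraction):
        print(f"spread = {dx * 1e9:5.0f} nm  ->  coherent fraction = {f:.3g}")
    ```

    The numbers fall from nearly 1 for a tightly confined atom to essentially 0 once the wavepacket has spread over about a wavelength, which is the qualitative behaviour the experiments observed.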

    Experiments with two different atomic species supported this picture. With lithium-7 atoms, which are very light, the wavepackets expanded quickly, so the transition from partial coherence to full incoherence was rapid. With the much heavier dysprosium-162 atoms, the expansion was slower, allowing the researchers to track the change in more detail. In both cases, the results agreed with theoretical predictions.

    An especially striking observation was that the presence or absence of the trap made no difference to the basic coherence properties. The same mix of coherent and incoherent scattering appeared whether the atoms were confined in the lattice or expanding in free space. This showed that sidebands and trapping states were not the fundamental source of incoherence. Instead, what mattered was the partial entanglement between the light and the atoms.

    The team also compared long and short laser pulses. Long pulses could in principle resolve the sidebands while short pulses could not. Yet the fraction of coherent versus incoherent scattering was the same in both cases. This further reinforced the conclusion that coherence was lost not because of frequency shifts but because of entanglement itself.

    In 2024, another group in China realised the recoiling-slit thought experiment in practice. In this thought experiment, originally posed by Einstein, the slit itself recoils when a particle passes through it and could thus betray the particle’s path. Researchers from the University of Science and Technology of China trapped a single rubidium atom in an optical tweezer and cooled it to its quantum ground state, making the atom act like a movable slit whose recoil could be directly entangled with scattered photons.

    By tightening or loosening the trap, the scientists could pin the atom more firmly in place. When it was held tightly, the atom’s recoil left almost no mark on the photons, which went on to form a clear interference pattern (like the ripples in water). When the atom was loosely held, however, its recoil was easier to notice and the interference pattern faded. This gave the researchers a controllable way to show how a recoiling slit could erase the wave pattern — which is also the issue at the heart of the Bohr-Einstein debate.

    Importantly, the researchers also distinguished true quantum effects from classical noise, such as heating of the atom during repeated scattering. Their data showed that the sharpness of the interference pattern wasn’t an artifact of an imperfect apparatus but a direct result of the atom-photon entanglement itself. In this way, they were able to demonstrate the transition from quantum uncertainty to classical disturbance within a single, controllable system. And even at this scale, the Bohr-Einstein debate couldn’t be settled.

    The results pointed to a physical mechanism for how information becomes embedded in light scattered from atoms. In the conventional double-slit experiment, the question was whether a photon’s path could ever be known without destroying the interference pattern. In the new, modern version, the question was whether a scattered photon carried any ‘imprint’ of the atom’s motion. The MIT team’s measurements showed that it did.

    The Debye-Waller factor — the measure of how much of the scattered light is still coherent — played an important role in this analysis. When atoms are confined tightly in a lattice, their spatial spread is small and the factor is relatively large, meaning a smaller fraction of the light is incoherent and thus reveals which-way information. But as the atoms are released and their wavepackets spread, the factor drops and with it the coherent fraction of scattered light. Eventually, after free expansion for long enough, essentially all of the scattered light becomes incoherent.

    Further, while the lighter lithium atoms expanded so quickly that the coherence decayed almost at once, the heavier dysprosium atoms expanded more slowly, allowing the researchers to track them in detail. Yet both atomic species followed a common rule: the Debye-Waller factor depended solely on how much the atom became delocalised as a wave, and not on the technical details of the traps or the sidebands. The conclusion here was that the light lost its coherence because the atom’s recoil became entangled with the scattered photon.

    This finding adds substance to the Bohr-Einstein debate. In one sense, Einstein’s intuition has been vindicated: every scattering event leaves behind faint traces of which atom interacted with the light. This recoil information is physically real and, at least in principle, accessible. But Bohr’s point also emerges clearly: that no amount of experimental cleverness can undo the trade-off set by quantum mechanics. The ratio of coherent to incoherent light is dictated not by human knowledge or ignorance but by the intrinsic quantum uncertainty in the spread of the atomic wavepacket itself.

    Together with the MIT results, the second experiment showed that both Einstein’s and Bohr’s insights remain relevant: every scattering leaves behind a real, measurable recoil — yet the amount of interference lost is dictated by the unavoidable quantum uncertainties of the system. When a photon scatters off an atom, the atom must recoil a little bit to conserve momentum. That recoil in principle carries which-way information because it marks the atom as the source of the scattered photon. But whether that information is accessible depends on how sharply the atom’s momentum (and position) can be defined.

    According to the Heisenberg uncertainty principle, the atom can’t simultaneously have both a precisely known position and momentum. In these experiments, the key measure was how delocalised the atom’s wavepacket was in space. If the atom was tightly trapped, its position uncertainty would be small, so its momentum uncertainty would be large. The recoil from a photon is then ‘blurred’ by that momentum spread, meaning the photon doesn’t clearly encode which-way information. Ultimately, interference is preserved.

    By recasting the debate in the language of scattered photons and expanding wavepackets, the MIT experiment has thus moved the double-slit spirit into new terrain. It shows that quantum mechanics doesn’t simply suggest fuzziness in the abstract but enforces it in how matter and light are allowed to share information. The loss of coherence isn’t a flaw in the experimental technique or a sign of missing details, as Einstein might’ve claimed, but the very mechanism by which the microscopic world keeps both Einstein’s and Bohr’s insights in tension. The double-slit experiment, even in a highly sophisticated avatar, continues to reinforce the notion that the universe resists any single-sided description.

    (The researchers leading the two studies are Wolfgang Ketterle and Pan Jianwei, respectively a Nobel laureate and a rockstar in the field of quantum information likely to win a Nobel Prize soon.)

    Featured image created with ChatGPT.