Month: August 2025

  • A transistor for heat

    Quantum technologies, and the advanced, next-generation electronic devices they promise, have been maturing at an increasingly rapid pace. Research groups and governments around the world are devoting more attention and resources to this domain.

    India, for example, approved its National Quantum Mission in 2023 with a decade-long outlay of Rs 6,000 crore. One of the Mission’s goals, in the words of IISER Pune physics professor Umakant Rapol, is “to engineer and utilise the delicate quantum features of photons and subatomic particles to build advanced sensors” for applications in “healthcare, security, and environmental monitoring”.

    On the science front, as these technologies become better understood, scientists have been paying increasing attention to managing and controlling heat in them. These technologies often rely on quantum physical phenomena that appear only at extremely low temperatures and are so fragile that even a small amount of stray heat can destabilise them. In these settings, scientists have found that traditional methods of handling heat — mainly by controlling the vibrations of atoms in the devices’ materials — become ineffective.

    Instead, scientists have identified a promising alternative: energy transfer through photons, the particles of light. And in this paradigm, instead of simply moving heat from one place to another, scientists have been trying to control and amplify it, much like how transistors and amplifiers handle electrical signals in everyday electronics.

    Playing with fire

    Central to this effort is the concept of a thermal transistor. This device resembles an electrical transistor but works with heat instead of electrical current. Electrical transistors amplify or switch currents, allowing the complex logic and computation required to power modern computers. Creating similar thermal devices would represent a major advance, especially for technologies that require very precise temperature control. This is particularly true in the sub-kelvin temperature range where many quantum processors and sensors operate.

    This circuit diagram depicts an NPN bipolar transistor. When a small voltage is applied between the base and emitter, electrons are injected from the emitter into the base, most of which then sweep across into the collector. The end result is a large current flowing through the collector, controlled by the much smaller current flowing through the base. Credit: Michael9422 (CC BY-SA)

    Energy transport at such cryogenic temperatures differs significantly from normal conditions. Below roughly 1 kelvin, atomic vibrations no longer carry most of the heat. Instead, electromagnetic fluctuations — ripples of energy carried by photons — dominate the conduction of heat. Scientists channel these photons through specially designed, lossless wires made of superconducting materials. They keep these wires below their superconducting critical temperatures, allowing only photons to transfer energy between the reservoirs. This arrangement enables careful and precise control of heat flow.

    One crucial phenomenon that allows scientists to manipulate heat in this way is negative differential thermal conductance (NDTC). NDTC defies common intuition. Normally, decreasing the temperature difference between two bodies reduces the amount of heat they exchange. This is why a glass of water at 50º C in a room at 25º C will cool faster than a glass of water at 30º C. In NDTC, however, reducing the temperature difference between two connected reservoirs can actually increase the heat flow between them.

    NDTC arises from a detailed relationship between temperature and the properties of the material that makes up the reservoirs. When physicists harness NDTC, they can amplify heat signals in a manner similar to how negative differential resistance enables amplification in certain electronic circuits.

    A ‘circuit’ for heat

    In a new study, researchers from Italy have designed and theoretically modelled a new kind of ‘thermal transistor’ that they have said can actively control and amplify how heat flows at extremely low temperatures for quantum technology applications. Their findings were published recently in the journal Physical Review Applied.

    To explore NDTC, the researchers modelled reservoirs made of a disordered semiconductor material that exhibits a transport mechanism called variable range hopping (VRH). An example is neutron-transmutation-doped germanium. In VRH materials, the electrical resistance at low temperatures depends very strongly, roughly exponentially, on temperature.

    This attribute makes it possible to tune their impedance, a property that controls how easily energy flows through the material, simply by adjusting temperature. That is, how well two reservoirs made of VRH materials exchange heat can be controlled by tuning their impedance, which in turn can be controlled by tuning their temperature.

    In the new study, the researchers reported that impedance matching played a key role. When the reservoirs’ impedances matched perfectly (when their temperatures became equal), the efficiency with which they transferred photonic heat reached a peak. As the materials’ temperatures diverged, heat flow dropped. In fact, the researchers wrote that there was a temperature range, especially as the colder reservoir’s temperature rose to approach that of the warmer one, within which the heat flow increased even as the temperature difference shrank. This effect forms the core of NDTC.
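
    To make the idea concrete, here is a minimal numerical sketch of how a strongly temperature-dependent resistance can produce NDTC. It uses the textbook single-channel expression for photon-mediated heat flow between two resistive reservoirs with a frequency-independent impedance-matching factor, plus a generic VRH-style resistance law; the parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of negative differential thermal conductance (NDTC).
# Assumptions (illustrative, not the paper's full model):
#  - net photonic heat flow P = r * (pi * kB^2 / 12 hbar) * (T_hot^2 - T_cold^2),
#    the standard single-channel result with a frequency-independent
#    matching factor r = 4*R_hot*R_cold / (R_hot + R_cold)^2
#  - a generic variable-range-hopping resistance law R(T) = R0 * exp((T0/T)**0.5)

kB = 1.380649e-23       # J/K
hbar = 1.054571817e-34  # J*s

def resistance(T, R0=1.0, T0=10.0):
    """VRH-style resistance: rises steeply as T falls (illustrative parameters)."""
    return R0 * np.exp((T0 / T) ** 0.5)

def photon_heat_flow(T_hot, T_cold):
    """Net photon-mediated heat flow from hot to cold reservoir (watts)."""
    R_hot, R_cold = resistance(T_hot), resistance(T_cold)
    r = 4 * R_hot * R_cold / (R_hot + R_cold) ** 2   # impedance-matching factor
    return r * np.pi * kB**2 / (12 * hbar) * (T_hot**2 - T_cold**2)

T_hot = 0.200                              # 200 mK source
T_cold = np.linspace(0.050, 0.199, 300)    # sweep the cold reservoir upwards
P = photon_heat_flow(T_hot, T_cold)

# NDTC region: the heat flow *increases* as the cold side warms towards the hot
# side, because improving impedance matching outweighs the shrinking temperature
# difference, until very close to equality the flow finally drops towards zero.
peak = T_cold[np.argmax(P)]
print(f"Heat flow peaks at T_cold ~ {peak*1e3:.0f} mK, not at the largest temperature difference.")
```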

    The research team, associated with the NEST initiative at the Istituto Nanoscienze-CNR and the Scuola Normale Superiore, both in Pisa, Italy, has proposed a device it calls the photonic heat amplifier. The design uses two VRH reservoirs connected by superconducting, lossless wires. One reservoir was kept at a higher temperature and served as the source of heat energy. The other reservoir, called the central island, received heat by exchanging photons with the warmer reservoir.

    The proposed device features a central island at temperature T1 that transfers heat currents to various terminals. The tunnel contacts to the drain and gate are positioned at heavily doped regions of the yellow central island, highlighted by a grey etched pattern. Each arrow indicates the positive direction of the heat flux. The substrate is maintained at temperature Tb, the gate at Tg, and the drain at Td. Credit: arXiv:2502.04250v3

    The central island was also connected to two additional metallic reservoirs named the “gate” and the “drain”. These served the same purpose as the control and output terminals in an electrical transistor. The drain stayed cold, allowing the amplified heat signal to exit the system at this point. By adjusting the gate temperature, the team could modulate and even amplify the flow of heat between the source and the drain (see the image above).

    To understand and predict the amplifier’s behaviour, the researchers developed mathematical models for all forms of heat transfer within the device. These included photonic currents between VRH reservoirs, electron tunnelling through the gate and drain contacts, and energy lost as vibrations through the device’s substrate.

    (Tunnelling is a quantum mechanical phenomenon in which an electron has a small chance of passing through a thin energy barrier even though, classically, it doesn’t have enough energy to get over it.)
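
    In schematic form, the steady state of the central island amounts to an energy balance among these channels. The expression below is a generic sketch of such a balance (with heat flowing into the island counted as positive), not the paper’s exact model: P_γ is the photonic heat current from the source, P_g and P_d the currents through the gate and drain tunnel contacts, and P_ph the leakage into the substrate’s phonons.

```latex
P_{\gamma}(T_{\text{source}}, T_1) + P_{g}(T_g, T_1) - P_{d}(T_1, T_d) - P_{\text{ph}}(T_1, T_b) = 0
```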

    Raring to go

    By carefully selecting the device parameters — including the characteristic temperature of the VRH material, the source temperature, resistances at the gate and drain contacts, the volume of the central island, and geometric factors — the researchers said they could tailor the device for different amplification purposes.

    They reported two main operating modes. The first was called the ‘current modulation amplifier’. In this configuration, the device amplified small variations in the thermal input at the gate: small oscillations in the gate heat current produced much larger oscillations, up to 15 times greater, in the photon current between the source and the central island and in the drain current, according to the paper. This amplification remained efficient down to 20 millikelvin, matching the ultracold conditions required by quantum technologies. The output range of heat currents was also broad, showing the device’s suitability for amplifying heat signals.

    The second mode was called the ‘temperature modulation amplifier’. Here, slight changes of only a few millikelvin in the gate temperature, the team wrote, caused the output temperature of the central island to swing by as much as 3.3 times the change in the input. The device could also handle input temperature ranges of over 100 millikelvin. This performance reportedly matched or surpassed other temperature amplifiers already reported in the scientific literature. The researchers also noted that this mode could be used to pre-amplify signals in the bolometric detectors used in astronomy telescopes.

    An important attribute for practical use is the relaxation time, i.e. how quickly the device returns to its original state after one operation, ready for the next run. In both configurations the amplifier showed relaxation times between microseconds and milliseconds. According to the researchers, this speed resulted from the device’s low thermal mass and efficient heat channels. Such a fast response could make it suitable for detecting and amplifying thermal signals in real time.

    The researchers wrote that the amplifier also maintained good linearity and low distortion across various inputs. In other words, the output heat signal changed proportionally to the input heat signal and the device didn’t add unwanted changes, noise or artifacts to the input signal. Its noise-equivalent power values were also found to rival the best available solid-state thermometers, indicating low noise levels.

    Approaching the limits

    For all these promising results, realising the device involves some significant practical challenges. For instance, NDTC depends heavily on precise impedance matching. Real materials inevitably have imperfections, including those due to imperfect fabrication and environmental fluctuations. Such deviations could lower the device’s heat transfer efficiency and narrow the operational range of NDTC.

    The system also banked on lossless superconducting wires being kept well below their critical temperatures. Achieving and maintaining these ultralow temperatures requires sophisticated and expensive refrigeration infrastructure, which adds to the experimental complexity.

    Fabrication also demands very precise doping and finely tuned resistances for the gate and drain terminals. Scaling production to create many devices or arrays poses major technical difficulties. Integrating numerous photonic heat amplifiers into larger thermal circuits risks unwanted thermal crosstalk and signal degradation, a risk compounded by the extremely small heat currents involved.

    Furthermore, while the fully photonic design offers benefits such as electrical isolation and long-distance thermal connections, it also runs up against fundamental physical limits: the quantum of thermal conductance caps the maximum heat flow through each photonic channel. This limitation could restrict how much power the device is able to handle in some applications.
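
    The limit in question is the universal quantum of thermal conductance, a standard result that bounds how much heat any single channel, photonic channels included, can carry at temperature T; for two well-matched resistive reservoirs the corresponding maximum photon-mediated power takes the second form below.

```latex
G_Q = \frac{\pi k_B^2 T}{6\hbar},
\qquad
P_{\max} = \frac{\pi k_B^2}{12\hbar}\left(T_{\text{hot}}^2 - T_{\text{cold}}^2\right)
```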

    Then again, many of these challenges are typical of cutting-edge research in quantum devices, and highlight the need for detailed experimental work to realise and integrate photonic heat amplifiers into operational quantum systems.

    If they are successfully realised for practical applications, photonic heat amplifiers could transform how scientists manage heat in quantum computing and in nanotechnologies that operate near absolute zero. They could pave the way for on-chip heat control, for computers that autonomously stabilise their own temperature, and for thermal logic operations. Redirecting or harvesting waste heat could also improve efficiency and significantly reduce noise — a critical barrier in ultra-sensitive quantum devices like quantum computers.

    Featured image credit: Lucas K./Unsplash.

  • The Hyperion dispute and chaos in space

    I believe my blog’s subscribers did not receive email notifications of some recent posts. If you’re interested, I’ve listed the links to the last eight posts at the bottom of this edition.

    When reading around for my piece yesterday on the wavefunctions of quantum mechanics, I stumbled across an old and fascinating debate about Saturn’s moon Hyperion.

    The question of how the smooth, classical world around us emerges from the rules of quantum mechanics has haunted physicists for a century. Most of the time the divide seems easy: quantum laws govern atoms and electrons while planets, chairs, and cats are governed by the laws of Newton and Einstein. Yet there are cases where this distinction is not so easy to draw. One of the most surprising examples comes not from a laboratory experiment but from the cosmos.

    In the 1990s, Hyperion became the focus of a deep debate about the nature of classicality, one that quickly snowballed into the so-called Hyperion dispute. It showed how different interpretations of quantum theory could lead to apparently contradictory claims, and how those claims can be settled by making their underlying assumptions clear.

    Hyperion is not one of Saturn’s best-known moons but it is among the most unusual. Unlike round bodies such as Titan or Enceladus, Hyperion has an irregular shape, resembling a potato more than a sphere. Its surface is pocked by craters and its interior appears porous, almost like a sponge. But the feature that caught physicists’ attention was its rotation. Hyperion does not spin in a steady, predictable way. Instead, it tumbles chaotically. Its orientation changes in an irregular fashion as it orbits Saturn, influenced by the gravitational pulls of Saturn and of Titan, a moon larger than the planet Mercury.

    In physics, chaos does not mean complete disorder. It means a system is sensitive to its initial conditions. For instance, imagine two weather models that start with almost the same initial data: one says the temperature in your locality at 9:00 am is 20.000º C, the other says it’s 20.001º C. That seems like a meaningless difference. But because the atmosphere is chaotic, this difference can grow rapidly. After a few days, the two models may predict very different outcomes: one may show a sunny afternoon and the other, thunderstorms.

    This sensitivity to initial conditions is often called the butterfly effect — it’s the idea that the flap of a butterfly’s wings in Brazil might, through a chain of amplifications, eventually influence the formation of a tornado in Canada.

    Hyperion behaves in a similar way. A minuscule difference in its initial spin angle or speed grows exponentially with time, making its future orientation unpredictable beyond a few months. In classical mechanics this is chaos; in quantum mechanics, those tiny initial uncertainties are built in by the uncertainty principle, and chaos amplifies them dramatically. As a result, predicting its orientation more than a few months ahead is impossible, even with precise initial data.

    To astronomers, this was a striking case of classical chaos. But to a quantum theorist, it raised a deeper question: how does quantum mechanics describe such a macroscopic, chaotic system?

    The reason Hyperion interested quantum physicists is rooted in a core feature of quantum theory: the wavefunction. A quantum particle is described by a wavefunction, which encodes the probabilities of finding it in different places or states. A key property of wavefunctions is that they spread over time. A sharply localised particle will gradually smear out, with a nonzero probability of it being found over an expanding region of space.

    For microscopic particles such as electrons, this spreading occurs very rapidly. For macroscopic objects, like a chair, an orange or you, the spread is usually negligible. The large mass of everyday objects makes the quantum uncertainty in their motion astronomically small. This is why you don’t have to be worried about your chai mug being in two places at once.

    Hyperion is a macroscopic moon, so you might think it falls clearly on the classical side. But this is where chaos changes the picture. In a chaotic system, small uncertainties get amplified exponentially fast. A variable called the Lyapunov exponent measures this sensitivity. If Hyperion begins with an orientation with a minuscule uncertainty, chaos will magnify that uncertainty at an exponential rate. In quantum terms, this means the wavefunction describing Hyperion’s orientation will not spread slowly, as for most macroscopic bodies, but at full tilt.
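
    To make the exponential amplification of tiny uncertainties concrete, here is a minimal sketch using the logistic map, a textbook chaotic system (it is not a model of Hyperion). Two trajectories that start a billionth apart separate at a rate set by the map’s Lyapunov exponent and soon disagree completely.

```python
import math

# Minimal illustration of chaotic sensitivity (not a model of Hyperion):
# the logistic map x -> r*x*(1-x) at r = 4.0 is fully chaotic, with a
# Lyapunov exponent of ln(2) per step, so nearby trajectories separate
# roughly by a factor of 2 each iteration until the separation saturates.

r = 4.0
x, y = 0.400000000, 0.400000001   # initial conditions differing by 1e-9

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")

# After roughly 30 steps the 1e-9 difference has grown to order 1:
# the two 'forecasts' no longer agree at all.
```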

    In 1998, the Polish-American theoretical physicist Wojciech Zurek calculated that within about 20 years, the quantum state of Hyperion should evolve into a superposition of macroscopically distinct orientations. In other words, if you took quantum mechanics seriously, Hyperion would be “pointing this way and that way at once”, just like Schrödinger’s famous cat that is alive and dead at once.

    This startling conclusion raised the question: why do we not observe such superpositions in the real Solar System?

    Zurek’s answer to this question was decoherence. Say you’re blowing a soap bubble in a dark room. If no light touches it, the bubble is just there, invisible to you. Now shine a torch on it. Photons from the torch will scatter off the bubble and enter your eyes, letting you see its position and colour. But here’s the catch: every photon that bounces off the bubble also carries away a little bit of information about it. In quantum terms, the bubble’s wavefunction becomes entangled with all those photons.

    If the bubble were treated purely quantum mechanically, you could imagine a strange state where it was simultaneously in many places in the room — a giant superposition. But once trillions of photons have scattered off it, each carrying “which path?” information, the superposition is effectively destroyed. What remains is an apparent mixture of “bubble here” or “bubble there”, and to any observer the bubble looks like a localised classical object. This is decoherence in action: the environment (the sea of photons here) acts like a constant measuring device, preventing large objects from showing quantum weirdness.

    For Hyperion, decoherence would be rapid. Interactions with sunlight, Saturn’s magnetospheric particles, and cosmic dust would constantly ‘measure’ Hyperion’s orientation. Any coherent superposition of orientations would be suppressed almost instantly, long before it could ever be observed. Thus, although pure quantum theory predicts Hyperion’s wavefunction would spread into cat-like superpositions, decoherence explains why we only ever see Hyperion in a definite orientation.

    Thus Zurek argued that decoherence is essential to understand how the classical world emerges from its quantum substrate. To him, Hyperion provided an astronomical example of how chaotic dynamics could, in principle, generate macroscopic superpositions, and how decoherence ensures these superpositions remain invisible to us.

    Not everyone agreed with Zurek’s conclusion, however. In 2005, physicists Nathan Wiebe and Leslie Ballentine revisited the problem. They wanted to know: if we treat Hyperion using the rules of quantum mechanics, do we really need the idea of decoherence to explain why it looks classical? Or would Hyperion look classical even without bringing the environment into the picture?

    To answer this, they did something quite concrete. Instead of trying to describe every possible property of Hyperion, they focused on one specific and measurable feature: the part of its spin that pointed along a fixed axis, perpendicular to Hyperion’s orbit. This quantity — essentially the up-and-down component of Hyperion’s tumbling spin — was a natural choice because it can be defined both in classical mechanics and in quantum mechanics. By looking at the same feature in both worlds, they could make a direct comparison.

    Wiebe and Ballentine then built a detailed model of Hyperion’s chaotic motion and ran numerical simulations. They asked: if we look at this component of Hyperion’s spin, how does the distribution of outcomes predicted by classical physics compare with the distribution predicted by quantum mechanics?

    The result was striking. The two sets of predictions matched extremely well. Even though Hyperion’s quantum state was spreading in complicated ways, the actual probabilities for this chosen feature of its spin lined up with the classical expectations. In other words, for this observable, Hyperion looked just as classical in the quantum description as it did in the classical one.

    From this, Wiebe and Ballentine drew a bold conclusion: that Hyperion doesn’t require decoherence to appear classical. The agreement between quantum and classical predictions was already enough. They went further and suggested that this might be true more broadly: perhaps decoherence is not essential to explain why macroscopic bodies, the large objects we see around us, behave classically.

    This conclusion went directly against the prevailing view in quantum physics. By the early 2000s, many physicists believed that decoherence was the central mechanism bridging the quantum and classical worlds. Zurek and others had spent years showing how environmental interactions suppress the quantum superpositions that would otherwise appear in macroscopic systems. To suggest that decoherence was not essential was to challenge the very foundation of that programme.

    The debate quickly gained attention. On one side stood Wiebe and Ballentine, arguing that simple agreement between quantum and classical predictions for certain observables was enough to resolve the issue. On the other stood Zurek and the decoherence community, insisting that the real puzzle was more fundamental: why we never observe interference between large-scale quantum states.

    At this point, the Hyperion dispute wasn’t just about a chaotic moon. It was about how to define ‘classical behaviour’ in the first place. For Wiebe and Ballentine, classical meant “quantum predictions match classical ones”. For Zurek and others, classical meant “no detectable superpositions of macroscopically distinct states”. The difference in definitions made the two sides seem to clash.

    But then, in 2008, physicist Maximilian Schlosshauer carefully analysed the issue and showed that the two sides were not actually talking about the same problem. The apparent clash arose because Zurek and Wiebe-Ballentine had started from essentially different assumptions.

    Specifically, Wiebe and Ballentine had adopted the ensemble interpretation of quantum mechanics. In everyday terms, the ensemble interpretation says, “Don’t take the quantum wavefunction too literally.” That is, it does not describe the “real state” of a single object. Instead, it’s a tool to calculate the probabilities of what we will see if we repeat an experiment many times on many identical systems. It’s like rolling dice. If I say the probability of rolling a 6 is 1/6, that probability does not describe the dice themselves as being in a strange mixture of outcomes. It simply summarises what will happen if I roll a large collection of dice.

    Applied to quantum mechanics, the ensemble interpretation works the same way. If an electron is described by a wavefunction that seems to say it is “spread out” over many positions, the ensemble interpretation insists this does not mean the electron is literally smeared across space. Rather, the wavefunction encodes the probabilities for where the electron would be found if we prepared many electrons in the same way and measured them. The apparent superposition is not a weird physical reality, just a statistical recipe.

    Wiebe and Ballentine carried this outlook over to Hyperion. When Zurek described Hyperion’s chaotic motion as evolving into a superposition of many distinct orientations, he meant this as a literal statement: without decoherence, the moon’s quantum state really would be in a giant blend of “pointing this way” and “pointing that way”. From his perspective, there was a crisis because no one ever observes moons or chai mugs in such states. Decoherence, he argued, was the missing mechanism that explained why these superpositions never show up.

    But under the ensemble interpretation, the situation looks entirely different. For Wiebe and Ballentine, Hyperion’s wavefunction was never a literal “moon in superposition”. It was always just a probability tool, telling us the likelihood of finding Hyperion with one orientation or another if we made a measurement. Their job, then, was simply to check: do these quantum probabilities match the probabilities that classical physics would give us? If they do, then Hyperion behaves classically by definition. There is no puzzle to be solved and no role for decoherence to play.

    This explains why Wiebe and Ballentine concentrated on comparing the probability distributions for a single observable, namely the component of Hyperion’s spin along a chosen axis. If the quantum and classical results lined up — as their calculations showed — then from the ensemble point of view Hyperion’s classicality was secured. The apparent superpositions that worried Zurek were never taken as physically real in the first place.

    Zurek, on the other hand, was addressing the measurement problem. In standard quantum mechanics, superpositions are physically real. Without decoherence, there is always some observable that could reveal the coherence between different macroscopic orientations. The puzzle is why we never see such observables registering superpositions. Decoherence provided the answer: the environment prevents us from ever detecting those delicate quantum correlations.

    In other words, Zurek and Wiebe-Ballentine were tackling different notions of classicality. For Wiebe and Ballentine, classicality meant the match between quantum and classical statistical distributions for certain observables. For Zurek, classicality meant the suppression of interference between macroscopically distinct states.

    Once Schlosshauer spotted this difference, the apparent dispute went away. His resolution showed that the clash was less over data than over perspectives. If you adopt the ensemble interpretation, then decoherence indeed seems unnecessary, because you never take the superposition as a real physical state in the first place. If you are interested in solving the measurement problem, then decoherence is crucial, because it explains why macroscopic superpositions never manifest.

    The overarching takeaway is that, from the quantum point of view, there is no single definition of what constitutes “classical behaviour”. The Hyperion dispute forced physicists to articulate what they meant by classicality and to recognise the assumptions embedded in different interpretations. Depending on your personal stance, you may emphasise the agreement of statistical distributions or you may emphasise the absence of observable superpositions. Both approaches can be internally consistent — but they also answer different questions.

    For the school students reading this story, the Hyperion dispute may seem obscure. Why should we care whether a distant moon’s tumbling motion demands decoherence or not? The reason is that the moon provides a vivid example of a deep issue: how do we reconcile the strange predictions of quantum theory with the ordinary world we see?

    In the laboratory, decoherence is an everyday reality. Quantum computers, for example, must be carefully shielded from their environments to prevent decoherence from destroying fragile quantum information. In cosmology, decoherence plays a role in explaining how quantum fluctuations in the early universe influenced the structure of galaxies. Hyperion showed that even an astronomical body can, in principle, highlight the same foundational issues.


    Last eight posts:

    1. The guiding light of KD45

    2. What on earth is a wavefunction?

    3. The PixxelSpace constellation conundrum

    4. The Zomato ad and India’s hustle since 1947

    5. A new kind of quantum engine with ultracold atoms

    6. Trade rift today, cryogenic tech yesterday

    7. What keeps the red queen running?

    8. A limit of ‘show, don’t tell’

  • Towards KD45

    On the subject of belief, I’m instinctively drawn to logical systems that demand consistency, closure, and introspection. And the KD45 system among them exerts a special pull. It consists of the following axioms:

    • K (closure): If you believe an implication and you believe the antecedent, then you believe the consequent. E.g. if you believe “if X then Y” and you believe X, then you also believe Y.
    • D (consistency): If you believe X, you don’t also believe not-X (i.e. X’s negation).
    • 4 (positive introspection): If you believe X, then you also believe that you believe X, i.e. you’re aware of your own beliefs.
    • 5 (negative introspection): If you don’t believe X, then you believe that you don’t believe X, i.e. you know what you don’t believe.
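
    In the standard notation of modal logic, with Bp read as “it is believed that p”, the four axiom schemas above are usually written as:

```latex
\begin{aligned}
\textbf{K:}\quad & B(p \rightarrow q) \rightarrow (Bp \rightarrow Bq) \\
\textbf{D:}\quad & Bp \rightarrow \lnot B \lnot p \\
\textbf{4:}\quad & Bp \rightarrow BBp \\
\textbf{5:}\quad & \lnot Bp \rightarrow B \lnot Bp
\end{aligned}
```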

    Thus, KD45 pictures a believer who never embraces contradictions, who always sees the consequences of what they believe, and who is perfectly aware of their own commitments. It’s the portrait of a mind that’s transparent to itself, free from error in structure, and entirely coherent. There’s something admirable in this picture. In moments of near-perfect clarity, it seems to me to describe the kind of believer I’d like to be.

    Yet the attraction itself throws up a paradox. KD45 is appealing precisely because it abstracts away from the conditions in which real human beings actually think. In other words, its consistency is pristine because it’s idealised. It eliminates the compromises, distractions, and biases that animate everyday life. To aspire to KD45 is therefore to aspire to something perpetually unattainable: a mind that’s rational at every step, free of contradiction, and immune to the fog of human psychology.

    My attraction to KD45 is tempered by an equal admiration for Bayesian belief systems. The Bayesian approach allows for degrees of confidence and recognises that belief is often graded rather than binary. To me, this reflects the world as we encounter it — a realm of incomplete evidence, partial understanding, and evolving perspectives.

    I admire Bayesianism because it doesn’t demand that we ignore uncertainty. It compels us to face it directly. Where KD45 insists on consistency, Bayesian thinking insists on responsiveness. I update beliefs not because they were previously incoherent but because new evidence has altered the balance of probabilities. This system thus embodies humility, my admission that no matter how strongly I believe today, tomorrow may bring evidence that forces me to change my mind.
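
    The updating rule at the heart of this approach is Bayes’ theorem: the degree of belief in a hypothesis H after seeing evidence E is the prior belief reweighted by how well H predicts that evidence.

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```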

    The world, however, isn’t simply uncertain: it’s often contradictory. People hold opposing views, traditions preserve inconsistencies, and institutions are riddled with tensions. This is why I’m also drawn to paraconsistent logics, which allow contradictions to exist without collapsing. If I stick to classical logic, I’ll have to accept everything if I also accept a contradiction. One inconsistency causes the entire system to explode. Paraconsistent theories reject that explosion and instead allow me to live with contradictions without being consumed by them.
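
    The ‘explosion’ referred to here is the classical principle ex contradictione quodlibet: from a contradiction, any conclusion whatsoever follows. Paraconsistent logics are, roughly, those that reject this inference.

```latex
p,\ \lnot p \ \vdash\ q \qquad \text{(valid classically; rejected in paraconsistent logics)}
```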

    This isn’t an endorsement of confusion for its own sake but a recognition that practical thought must often proceed even when the data is messy. I can accept, provisionally, both “this practice is harmful” and “this practice is necessary”, and work through the tension without pretending I can neatly resolve the contradiction in advance. To deny myself this capacity is not to be rational — it’s to risk paralysis.

    Finally, if Bayesianism teaches humility and paraconsistency teaches tolerance, the AGM theory of belief revision teaches discipline. Its core idea is that beliefs must be revised when confronted by new evidence, and that there are rational ways of choosing what to retract, what to retain, and what to alter. AGM speaks to me because it bridges the gap between the ideal and the real. It allows me to acknowledge that belief systems can be disrupted by facts while also maintaining that I can manage disruptions in a principled way.

    That is to say, I don’t aspire to avoid the shock of revision but to absorb it intelligently.

    Taken together, my position isn’t a choice of one system over another. It’s an attempt to weave their virtues together while recognising their limits. KD45 represents the ideal that belief should be consistent, closed under reasoning, and introspectively clear. Bayesianism represents the reality that belief is probabilistic and always open to revision. Paraconsistent logic represents the need to live with contradictions without succumbing to incoherence. AGM represents the discipline of revising beliefs rationally when evidence compels change.

    A final point about aspiration itself. To aspire to KD45 isn’t to believe I will ever achieve it. In fact, I acknowledge I’m unlikely to desire complete consistency at every turn. There are cases where contradictions are useful, where I’ll need to tolerate ambiguity, and where the cost of absolute closure is too high. If I deny this, I’ll only end up misrepresenting myself.

    However, I’m not going to be complacent either. I believe it’s important to aspire even if what I’m trying to achieve is going to be perpetually out of reach. By holding KD45 as a guiding ideal, I hope to give shape to my desire for rationality even as I expect to deviate from it. The value lies in the direction, not the destination.

    Therefore, I state plainly (he said pompously):

    • I admire the clarity of KD45 and treat it as the horizon of rational belief
    • I embrace the flexibility of Bayesianism as the method of navigating uncertainty
    • I acknowledge the need for paraconsistency as the condition of living in a world of contradictions
    • I uphold the discipline of AGM belief revision as the art of managing disruption
    • I aspire to coherence but accept that my path will involve noise, contradiction, and compromise

    In the end, the point isn’t to model myself after one system but to recognise the world demands several. KD45 will always represent the perfection of rational belief but I doubt I’ll ever get there in practice — not because I think I can’t but because I know I will choose not to in many matters. To be rational is not to be pure. It is to balance ideals with realities, to aspire without illusion, and to reason without denying the contradictions of life.

  • What on earth is a wavefunction?

    If you drop a pebble into a pond, ripples spread outward in gentle circles. We all know this sight, and it feels natural to call them waves. Now imagine being told that everything — from an electron to an atom to a speck of dust — can also behave like a wave, even though they are made of matter and not water or air. That is the bold claim of quantum mechanics. The waves in this case are not ripples in a material substance. Instead, they are mathematical entities known as wavefunctions.

    At first, this sounds like nothing more than fancy maths. But the wavefunction is central to how the quantum world works. It carries the information that tells us where a particle might be found, what momentum it might have, and how it might interact. In place of neat certainties, the quantum world offers a blur of possibilities. The wavefunction is the map of that blur. The peculiar thing is, experiments show that this ‘blur’ behaves as though it is real. Electrons fired through two slits make interference patterns as though each one went through both slits at once. Molecules too large to see under a microscope can act the same way, spreading out in space like waves until they are detected.

    So what exactly is a wavefunction, and how should we think about it? That question has haunted physicists since the early 20th century and it remains unsettled to this day.

    In classical life, you can say with confidence, “The cricket ball is here, moving at this speed.” If you can’t measure it, that’s your problem, not nature’s. In quantum mechanics, it is not so simple. Until a measurement is made, a particle does not have a definite position in the classical sense. Instead, the wavefunction stretches out and describes a range of possibilities. If the wavefunction is sharply peaked, the particle is most likely near a particular spot. If it is wide, the particle is spread out. Squaring the wavefunction’s magnitude gives the probability distribution you would see in many repeated experiments.
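
    This is the Born rule. In one dimension, for a particle described by a wavefunction ψ(x), the probability of finding it near position x is given by the squared magnitude, with the total probability normalised to one:

```latex
P(x)\,\mathrm{d}x = |\psi(x)|^2\,\mathrm{d}x,
\qquad
\int_{-\infty}^{\infty} |\psi(x)|^2\,\mathrm{d}x = 1
```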

    If this sounds abstract, remember that the predictions are tangible. Interference patterns, tunnelling, superpositions, entanglement — all of these quantum phenomena flow from the properties of the wavefunction. It is the script that the universe seems to follow at its smallest scales.

    To make sense of this, many physicists use analogies. Some compare the wavefunction to a musical chord. A chord is not just one note but several at once. When you play it, the sound is rich and full. Similarly, a particle’s wavefunction contains many possible positions (or momenta) simultaneously. Only when you press down with measurement do you “pick out” a single note from the chord.

    Others have compared it to a weather forecast. Meteorologists don’t say, “It will rain here at exactly 3:07 pm.” They say, “There’s a 60% chance of showers in this region.” The wavefunction is like nature’s own forecast, except it is more fundamental: it is not our ignorance that makes it probabilistic, but the way the universe itself behaves.

    Mathematically, the wavefunction is found by solving the Schrödinger equation, which is a central law of quantum physics. This equation describes how the wavefunction changes in time. It is to quantum mechanics what Newton’s second law (F = ma) is to classical mechanics. But unlike Newton’s law, which predicts a single trajectory, the Schrödinger equation predicts the evolving shape of probabilities. For example, it can show how a sharply localised wavefunction naturally spreads over time, just like a drop of ink disperses in water. The difference is that the spreading is not caused by random mixing but by the fundamental rules of the quantum world.
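
    For a single particle of mass m moving in one dimension in a potential V(x), the equation reads as below. For a free particle (V = 0) that starts as a Gaussian wave packet of width σ0, solving it gives the steadily growing width in the second expression, which is the ‘ink-drop’ spreading described above.

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
= -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi,
\qquad
\sigma(t) = \sigma_0\sqrt{1 + \left(\frac{\hbar t}{2m\sigma_0^2}\right)^2}
```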

    But does that mean the wavefunction is real, like a water wave you can touch, or is it just a clever mathematical fiction?

    There are two broad camps. One camp, sometimes called the instrumentalists, argues the wavefunction is only a tool for making predictions. In this view, nothing actually waves in space. The particle is simply somewhere, and the wavefunction is our best way to calculate the odds of finding it. When we measure, we discover the position, and the wavefunction ‘collapses’ because our information has been updated, not because the world itself has changed.

    The other camp, the realists, argues that the wavefunction is as real as any energy field. If the mathematics says a particle is spread out across two slits, then until you measure it, the particle really is spread out, occupying both paths in a superposed state. Measurement then forces the possibilities into a single outcome, but before that moment, the wavefunction’s broad reach isn’t just bookkeeping: it’s physical.

    This isn’t an idle philosophical spat. It has consequences for how we interpret famous paradoxes like Schrödinger’s cat — supposedly “alive and dead at once until observed” — and for how we understand the limits of quantum mechanics itself. If the wavefunction is real, then perhaps macroscopic objects like cats, tables or even ourselves can exist in superpositions in the right conditions. If it is not real, then quantum mechanics is only a calculating device, and the world remains classical at larger scales.

    The ability of a wavefunction to remain spread out is tied to what physicists call coherence. A coherent state is one where the different parts of the wavefunction stay in step with each other, like musicians in an orchestra keeping perfect time. If even a few instruments go off-beat, the harmony collapses into noise. In the same way, when coherence is lost, the wavefunction’s delicate correlations vanish.

    Physicists measure this ‘togetherness’ with a parameter called the coherence length. You can think of it as the distance over which the wavefunction’s rhythm remains intact. A laser pointer offers a good everyday example: its light is coherent, so the waves line up across long distances, allowing a sharp red dot to appear even all the way across a lecture hall. By contrast, the light from a torch is incoherent: the waves quickly fall out of step, producing only a fuzzy glow. In the quantum world, a longer coherence length means the particle’s wavefunction can stay spread out and in tune across a larger stretch of space, making the object more thoroughly delocalised.

    However, coherence is fragile. The world outside — the air, the light, the random hustle of molecules — constantly disturbs the system. Each poke causes the system to ‘leak’ information, collapsing the wavefunction’s delicate superposition. This process is called decoherence, and it explains why we don’t see cats or chairs spread out in superpositions in daily life. The environment ‘measures’ them constantly, destroying their quantum fuzziness.

    One frontier of modern physics is to see how far coherence can be pushed before decoherence wins. For electrons and atoms, the answer is “very far”. Physicists have found their wavefunctions can stretch across micrometres or more. They have also demonstrated coherence with molecules with thousands of atoms, but keeping them coherent has been much more difficult. For larger solid objects, it’s harder still.

    Physicists often talk about expanding a wavefunction. What they mean is deliberately increasing the spatial extent of the quantum state, making the fuzziness spread wider, while still keeping it coherent. Imagine a violin string: if it vibrates softly, the motion is narrow; if it vibrates with larger amplitude, it spreads. In quantum mechanics, expansion is more subtle but the analogy holds: you want the wavefunction to cover more ground not through noise or randomness but through genuine quantum uncertainty.

    Another way to picture it is as a drop of ink released into clear water. At first, the drop is tight and dark. Over time, it spreads outward, thinning and covering more space. Expanding a quantum wavefunction is like speeding up this spreading process, but with a twist: the cloud must remain coherent. The ink can’t become blotchy or disturbed by outside currents. Instead, it must preserve its smooth, wave-like character, where all parts of the spread remain correlated.

    How can this be done? One way is to relax the trap that’s being used to hold the particle in place. In physics, the trap is described by a potential, which is just a way of talking about how strong the forces are that pull the particle back towards the centre. Imagine a ball sitting in a bowl. The shape of the bowl represents the potential. A deep, steep bowl means strong restoring forces, which prevent the ball from moving around. A shallow bowl means the forces are weaker. That is, if you suddenly make the bowl shallower, the ball is less tightly confined and can explore more space. In the quantum picture, reducing the stiffness of the potential is like flattening the bowl, which allows the wavefunction to swell outward. If you later return the bowl to its steep form, you can catch the now-broader state and measure its properties.
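
    A toy calculation makes the shallower-bowl picture quantitative. Assume the particle starts in the ground state of a harmonic trap of frequency ω0 and the trap is suddenly softened to a lower frequency ω1; in this idealised, lossless case the position spread then ‘breathes’, growing by up to a factor of ω0/ω1. This is only a sketch of the principle, with made-up parameter values; the actual experiment used a timed two-pulse protocol and had to contend with decoherence.

```python
import numpy as np

# Toy model: sudden softening of a harmonic trap (idealised, lossless).
# Assumptions: the particle starts in the ground state of a trap with angular
# frequency w0; the trap is abruptly switched to w1 < w0. For a Gaussian state
# the position variance then evolves as
#   <x^2>(t) = x0^2 * (cos^2(w1*t) + (w0/w1)^2 * sin^2(w1*t)),
# where x0 = sqrt(hbar / (2*m*w0)) is the initial zero-point spread.
# Parameter values below are illustrative, not the experiment's.

hbar = 1.054571817e-34             # J*s
m = 1.1e-18                        # kg, roughly a 100-nm silica sphere (assumed)
w0 = 2 * np.pi * 100e3             # initial trap frequency, 100 kHz (assumed)
w1 = w0 / 3                        # softened trap, three times lower (assumed)

x0 = np.sqrt(hbar / (2 * m * w0))  # initial (zero-point) position spread

t = np.linspace(0, np.pi / w1, 500)
spread = x0 * np.sqrt(np.cos(w1 * t)**2 + (w0 / w1)**2 * np.sin(w1 * t)**2)

print(f"initial spread : {x0 * 1e12:.1f} pm")
print(f"maximum spread : {spread.max() * 1e12:.1f} pm (a factor of {spread.max() / x0:.1f})")
# The maximum expansion factor is w0/w1, reached a quarter-period after the quench.
```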

    The challenge is to do this fast and cleanly, before decoherence destroys the quantum character. And you must measure in ways that reveal quantum behaviour rather than just classical blur.

    This brings us to an experiment reported on August 19 in Physical Review Letters, conducted by researchers at ETH Zürich and their collaborators. It seems the researchers have achieved something unprecedented: they prepared a small silica sphere, only about 100 nm across, in a nearly pure quantum state and then expanded its wavefunction beyond the natural zero-point limit. This means they coherently stretched the particle’s quantum fuzziness farther than the smallest quantum wiggle that nature usually allows, while still keeping the state coherent.

    To appreciate why this matters, let’s consider the numbers. The zero-point motion of their nanoparticle — the smallest possible movement even at absolute zero — is about 17 picometres (one picometre is a trillionth of a metre). Before expansion, the coherence length was about 21 pm. After the expansion protocol, it reached roughly 73 pm, more than tripling the initial reach and surpassing the ground-state value. For something as massive as a nanoparticle, this is a big step.

    The team began by levitating a silica nanoparticle in an optical tweezer, created by a tightly focused laser beam. The particle floated in an ultra-high vacuum at a temperature of just 7 K (-266º C). These conditions reduced outside disturbances to almost nothing.

    Next, they cooled the particle’s motion close to its ground state using feedback control. By monitoring its position and applying gentle electrical forces through the surrounding electrodes, they damped its jostling until only a fraction of a quantum of motion remained. At this point, the particle was quiet enough for quantum effects to dominate.

    The core step was the two-pulse expansion protocol. First, the researchers switched off the cooling and briefly lowered the trap’s stiffness by reducing the laser power. This allowed the wavefunction to spread. Then, after a carefully timed delay, they applied a second softening pulse. This sequence cancelled out unwanted drifts caused by stray forces while letting the wavefunction expand even further.

    Finally, they restored the trap to full strength and measured the particle’s motion by studying how it scattered light. Repeating this process hundreds of times gave them a statistical view of the expanded state.

    The results showed that the nanoparticle’s wavefunction expanded far beyond its zero-point motion while still remaining coherent. The coherence length grew more than threefold, reaching 73 ± 34 pm. Per the team, this wasn’t just noisy spread but genuine quantum delocalisation.

    More strikingly, the momentum of the nanoparticle had become ‘squeezed’ below its zero-point value. In other words, while uncertainty over the particle’s position increased, that over its momentum decreased, in keeping with Heisenberg’s uncertainty principle. This kind of squeezed state is useful because it’s especially sensitive to feeble external forces.
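
    The trade-off being exploited here is the Heisenberg uncertainty relation: the spreads in position and momentum can be traded against each other so long as their product stays above the quantum floor.

```latex
\Delta x\,\Delta p \ \geq\ \frac{\hbar}{2}
```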

    The data matched theoretical models that considered photon recoil to be the main source of decoherence. Each scattered photon gave the nanoparticle a small kick, and this set a fundamental limit. The experiment confirmed that photon recoil was indeed the bottleneck, not hidden technical noise. The researchers have suggested using dark traps in future — trapping methods that use less light, such as radio-frequency fields — to reduce this recoil. With such tools, the coherence lengths can potentially be expanded to scales comparable to the particle’s size. Imagine a nanoparticle existing in a state that spans its own diameter. That would be a true macroscopic quantum object.

    This new study pushes quantum mechanics into a new regime. Until now, large solid objects like nanoparticles could be cooled and controlled, but their coherence lengths stayed pinned near the zero-point level. Here, the researchers deliberately increased the coherence length beyond that limit, and in doing so showed that quantum fuzziness can be engineered, not just preserved.

    The implications are broad. On the practical side, delocalised nanoparticles could become extremely sensitive force sensors, able to detect faint electric or gravitational forces. On the fundamental side, the ability to hold large objects in coherent, expanded states is a step towards probing whether gravity itself has quantum features. Several theoretical proposals suggest that if two massive objects in superposition can become entangled through their mutual gravity, it would prove gravity must be quantum. To reach that stage, experiments must first learn to create and control delocalised states like this one.

    The possibilities for sensing in particular are exciting. Imagine a nanoparticle prepared in a squeezed, delocalised state being used to detect the tug of an unseen mass nearby or to measure an electric field too weak for ordinary instruments. Some physicists have speculated that such systems could help search for exotic particles such as certain dark matter candidates, which might nudge the nanoparticle ever so slightly. The extreme sensitivity arises because a delocalised quantum object is like a feather balanced on a pin: the tiniest push shifts it in measurable ways.

    There are also parallels with past breakthroughs. The Laser Interferometer Gravitational-wave Observatories, which detect gravitational waves, rely on manipulating quantum noise in light to reach unprecedented sensitivity. The ETH Zürich experiment has extended the same philosophy into the mechanical world of nanoparticles. Both cases show that pushing deeper into quantum control could yield technologies that were once unimaginable.

    But beyond the technologies also lies a more interesting philosophical edge. The experiment strengthens the case that the wavefunction behaves like something real. If it were only an abstract formula, could we stretch it, squeeze it, and measure the changes in line with theory? The fact that researchers can engineer the wavefunction of a many-atom object and watch it respond like a physical entity tilts the balance towards reality. At the least, it shows that the wavefunction is not just a mathematical ghost. It’s a structure that researchers can shape with lasers and measure with detectors.

    There are also of course the broader human questions. If nature at its core is described not by certainties but by probabilities, then philosophers must rethink determinism, the idea that everything is fixed in advance. Our everyday world looks predictable only because decoherence hides the fuzziness. But under carefully controlled conditions, that fuzziness comes back into view. Experiments like this remind us that the universe is stranger, and more flexible, than classical common sense would suggest.

    The experiment also reminds us that the line between the quantum and classical worlds is not a brick wall but a veil — thin, fragile, and possibly removable in the right conditions. And each time we lift it a little further, we don’t just see strange behaviour: we also glimpse sensors more sensitive than ever, tests of gravity’s quantum nature, and perhaps someday, direct encounters with macroscopic superpositions that will force us to rewrite what we mean by reality.

  • On the PixxelSpace constellation

    The announcement that a consortium led by PixxelSpace India will design, build, and operate a constellation of 12 earth-observation satellites marks a sharp shift in how India approaches large space projects. The Indian National Space Promotion and Authorisation Centre (IN-SPACe) awarded the project after a competitive process.

    What made headlines was that the winning bid asked for no money from the government. Instead, the group — which includes Piersight Space, SatSure Analytics India, and Dhruva Space — has committed to invest more than Rs 1,200 crore of its own resources over the next four to five years. The constellation will carry a mix of advanced sensors, from multispectral and hyperspectral imagers to synthetic aperture radar, and it will be owned and operated entirely by the private side of the partnership.

    PixxelSpace has said the zero-rupee bid is a conscious decision to support the vision of building an advanced earth-observation system for India and the world. The companies have also expressed belief they will recover their investment over time by selling high-value geospatial data and services in India and abroad. IN-SPACe’s chairman has called this a major endorsement of the future of India’s space economy.

    Of course the benefits for India are clear. Once operational, the constellation should reduce the country’s reliance on foreign sources of satellite imagery. That will matter in areas like disaster management, agriculture planning, and national security, where delays or restrictions on outside data can have serious consequences. Having multiple companies in the consortium brings together strengths in hardware, analytics, and services, which could create a more complete space industry ecosystem. The phased rollout will also mean technology upgrades can be built in as the system grows, without heavy public spending.

    Still, the arrangement raises difficult questions. In practice, this is less a public–private partnership than a joint venture. I assume the state will provide its seal of approval, policy support, and access to launch and ground facilities. If it does extend policy support, it will have to explain why that support is vouchsafed for this collaboration but not for the industry as a whole. I have also heard that IN-SPACe will ‘collate’ demand within the government for the constellation’s products and help meet it.

    Without assuming a fiscal stake, however, the government is left with less leverage to set terms or enforce priorities, especially if the consortium’s commercial goals don’t always align with national needs. It’s worth asking why the government issued an official request-for-proposal if it didn’t intend to assume a stake, and whether the Rs-350-crore soft loan IN-SPACe originally offered for the project will still be available, be repurposed, or be quietly withdrawn.

    I think the arrangement will also test public oversight. IN-SPACe will need stronger technical capacity, legal authority, procedural clarity, and better public communication to monitor compliance without frustrating innovation. Regulations on remote sensing and data-sharing will probably have to be updated to cover a fully commercial system that sells services worldwide. Provisions that guarantee government priority access in emergencies and that protect sensitive imagery will have to be written clearly into law and contracts. Infrastructure access, from integration facilities to launch slots, must be managed transparently to avoid bottlenecks or perceived bias.

    The government’s minimal financial involvement saves public money but it also reduces long-term control. If India repeats this model, it should put in place new laws and safeguards that define how sovereignty, security, and public interest are to be protected when critical space assets are run by private companies. Without such steps, the promise of cost-free expansion could instead lead to new dependencies that are even harder to manage in future.

    Featured image credit: Carl Wang/Unsplash.

  • The Zomato ad and India’s hustle since 1947

    In contemporary India, corporate branding has often aligned itself with nationalist sentiment, adopting imagery such as the tricolour, Sanskrit slogans or references to ancient achievements to evoke cultural pride. Marketing narratives frequently frame consumption as a patriotic act, linking the choice of a product with the nation’s progress or “self-reliance”. This fusion of commercial messaging and nationalist symbolism serves both to capitalise on the prevailing political mood and to present companies as partners in the nationalist project. An advertisement in The Times of India on August 15, which describes the work of nation-building as a “hustle”, is a good example.

    I remember that in my second year of engineering college my class had a small-minded and vindictive professor. He repeatedly picked on one particular classmate to the extent that, as resentment between the two escalated, the professor’s actions in one arguably innocuous matter resulted in the student being suspended for a semester. The student eventually didn’t have the number of credits he needed to graduate and had to spend six more months redoing many of the same classes. Today, he is a successful researcher in Europe, having gone on to acquire a graduate degree followed by a PhD from some of the best research institutes in the world.

    When we were chatting a few years ago about our batch’s decadal reunion that was coming up, we thought it would be a good idea to attend and, there, rub my friend’s success in this professor’s face. We really wanted to do it because we wanted him to know how petty he had been. But as we discussed how we’d orchestrate this moment, it dawned on us that we’d also be signalling that our achievements don’t amount to more than those necessary to snub him, as if to say they have no greater meaning or purpose. We eventually dropped the idea. At the reunion itself, my friend simply ignored the professor.

    India may appear today to have progressed well past Winston Churchill’s belief, expressed in the early 1930s, that Indians were unfit to govern themselves, but to advertise as Zomato has is to imply that it remains on our minds and animates the purpose of what we’re trying to do. It is a juvenile and frankly resentful attitude that also hints at a more deep-seated lack of contentment. The advertisement’s achievement of choice is the Chandrayaan 3 mission, its Vikram lander lit dramatically by sunlight and earthlight and photographed by the Pragyan rover. The landing was a significant achievement, but to claim that it above all else describes contemporary India is also to dismiss the evident truth that a functional space organisation and a democracy in distress can coexist within the same borders. One neither carries nor excuses the other.

    In fact, it’s possible to argue that ISRO’s success is at least partly a product of the unusual circumstances of its creation and its privileged place in the administrative structure. Founded by a scientist who worked directly with Jawaharlal Nehru — bypassing the bureaucratic hurdles faced by most others — ISRO was placed under the purview of the prime minister, ensuring it received the political attention, resources, and exemptions that are not typically available to other ministries or public enterprises. In this view, ISRO’s achievements are insulated from the broader fortunes of the country and can’t be taken as a reliable proxy for India’s overall ‘success’.

    The question here is: to whose words do we pay attention? Obviously not those of Churchill: his prediction is nearly a century old. In fact, as Ramachandra Guha sets out in the prologue of India After Gandhi (which I’m currently rereading), they seem in their particular context to be untempered and provocative.

    In the 1940s, with Indian independence manifestly round the corner, Churchill grumbled that he had not become the King’s first minister in order to preside over the liquidation of the British Empire. A decade previously he had tried to rebuild a fading political career on the plank of opposing self-government for Indians. After Gandhi’s ‘salt satyagraha’ of 1930 in protest against taxes on salt, the British government began speaking with Indian nationalists about the possibility of granting the colony dominion status. This was vaguely defined, with no timetable set for its realization. Even so, Churchill called the idea ‘not only fantastic in itself but criminally mischievous in its effects’. Since Indians were not fit for self-government, it was necessary to marshal ‘the sober and resolute forces of the British Empire’ to stall any such possibility.

    In 1930 and 1931 Churchill delivered numerous speeches designed to work up, in most unsober form, the constituency opposed to independence for India. Speaking to an audience at the City of London in December 1930, he claimed that if the British left the subcontinent, then an ‘army of white janissaries, officered if necessary from Germany, will be hired to secure the armed ascendancy of the Hindu’.

    This said, Guha continues later in the prologue:

    The forces that divide India are many. … But there are also forces that have kept India together, that have helped transcend or contain the cleavages of class and culture, that — so far, at least — have nullified those many predictions that India would not stay united and not stay democratic. These moderating influences are far less visible. … they have included individuals as well as institutions.

    Indeed, reading through the history of independent India, from the 1940s and ’50s filled with hope and ambition, through the turmoil of the ’60s and ’70s, the Emergency, the economic downturn that followed, and liberalisation, to the rise of Hindu nationalism, it has been clear that the work of the “forces that have kept India together” is unceasing. Earlier, the Constitution’s framework, with its guarantees of rights and democratic representation, provided a common political anchor. Regular elections, a free press, and an independent judiciary reinforced faith in the system even as the linguistic reorganisation of states reduced separatist tensions. National institutions such as the armed forces, civil services, and railways fostered a sense of shared identity across disparate regions.

    Equally, integrative political movements and leaders — including the All India Kisan Sabha, trade union federations like INTUC and AITUC, the Janata Party coalition of 1977, Akali leaders in Punjab in the post-1984 period, the Mazdoor Kisan Shakti Sangathan, and so on, as well as Lal Bahadur Shastri, Govind Ballabh Pant, C. Rajagopalachari, Vinoba Bhave, Jayaprakash Narayan, C.N. Annadurai, Atal Bihari Vajpayee, and so on — operated despite sharp disagreements largely within constitutional boundaries, sustaining the legitimacy of the Union. Today, however, most of these “forces” are directed at a more cynical cause of disunity: a nationalist ideology that has repeatedly defended itself with deceit, evasion, obfuscation, opportunism, pietism, pretence, subterfuge, vindictiveness, and violence.

    In this light, to claim we have “just put in the work, year after year”, as if to suggest India has only been growing from strength to strength, rather than lurching from one crisis to the next and of late becoming a little more balkanised as a result, is plainly disingenuous — and yet entirely in keeping with the alignment of corporate branding with nationalist sentiment, which is designed to create a climate in which criticism of corporate conduct is framed as unpatriotic. When companies wrap themselves in the symbols of the nation and position their products or services as contributions to India’s progress, questioning their practices risks being cast as undermining that progress. This can blunt scrutiny of resource over-extraction, environmental degradation, and exploitative labour practices by accusing dissenters of obstructing development.

    Aggressively promoting consumption and consumerism (“fuel your hustle”), which drives profits but also deepens social inequalities in the process, is recast as participating in the patriotic project of economic growth. When corporate campaigns subtly or explicitly endorse certain political agendas, their association with national pride can normalise those positions and marginalise alternative views. In this way, the fusion of commerce and nationalism builds market share while fostering a superficial sense of national harmony, even as it sidelines debates on inequality, exclusion, and the varied experiences of different communities within the nation.

  • A new kind of quantum engine with ultracold atoms

    In conventional ‘macroscopic’ engines like the ones that guzzle fossil fuels to power cars and motorcycles, the fuels are set ablaze to release heat, which is converted to mechanical energy and transferred to the vehicle’s moving parts. In order to perform these functions over and over in a continuous manner, the engine cycles through four repeating steps. There are different kinds of cycles depending on the engine’s design and needs. A common example is the Otto cycle, where the engine’s four steps are: 

    1. Adiabatic compression: The piston compresses the air-fuel mixture, increasing its pressure and temperature without exchanging heat with the surroundings

    2. Constant volume heat addition: At the piston’s top position, a spark plug ignites the fuel-air mixture, rapidly increasing pressure and temperature while the volume remains constant

    3. Adiabatic expansion: The high-pressure gas pushes the piston down, doing work on the piston, which powers the engine

    4. Constant volume heat rejection: At the bottom of the piston stroke, heat is expelled from the gas at constant volume as the engine prepares to clear the exhaust gases

    So the engine goes 1-2-3-4-1-2-3-4 and so on. This is useful. If you plot the pressure and volume of the fuel-air mixture in the engine on two axes of a graph, you’ll see that at the end of the ‘constant volume heat rejection’ step (no. 4), the mixture is in the same state as it is at the start of the adiabatic compression step (no. 1). The work that the engine does on the vehicle is equal to the difference between the work done during the expansion and compression steps. Engines are designed to meet this cyclical requirement while maximising the amount of work they do for a given fuel and vehicle design.
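
    To make that bookkeeping concrete, here is a minimal sketch of the ideal ‘air-standard’ Otto cycle in Python. The compression ratio, heat input, and gas properties below are illustrative numbers I have assumed, not values for any real engine; the point is only to trace the four steps and check that the net work is the heat added minus the heat rejected, which for this idealised cycle reduces to the textbook efficiency 1 − r^(1−γ).

        # A minimal sketch of the ideal (air-standard) Otto cycle, with assumed,
        # purely illustrative numbers. It follows the four steps listed above and
        # checks that the net work per cycle is q_in - q_out.

        gamma = 1.4     # ratio of specific heats for air (approximate)
        r = 10.0        # compression ratio, V1/V2
        cv = 718.0      # specific heat at constant volume for air, J/(kg*K)
        T1 = 300.0      # temperature at the start of compression, K
        q_in = 1.8e6    # heat added per kg of mixture during step 2, J/kg

        # Step 1: adiabatic compression raises the temperature by a factor r**(gamma - 1)
        T2 = T1 * r**(gamma - 1)

        # Step 2: constant-volume heat addition (the spark and the burn)
        T3 = T2 + q_in / cv

        # Step 3: adiabatic expansion back to the original volume
        T4 = T3 / r**(gamma - 1)

        # Step 4: constant-volume heat rejection returns the gas to T1, closing the cycle
        q_out = cv * (T4 - T1)

        w_net = q_in - q_out           # net work per kg of mixture per cycle
        efficiency = w_net / q_in      # equals 1 - r**(1 - gamma) for this ideal cycle

        print(f"net work per cycle: {w_net / 1e3:.0f} kJ/kg")
        print(f"thermal efficiency: {efficiency:.2%}")

    With these numbers the sketch gives roughly 1,080 kJ/kg of net work and a thermal efficiency of about 60%, which is the theoretical ceiling for this compression ratio; real engines fall well short of it because of friction, heat losses, and incomplete combustion.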

    It’s easy to understand the value of machines like this. They’re the reason we have vehicles that we can drive in different ways using our hands, legs, and our senses and in relative comfort. As long as we refill the fuel tank once in a while, engines can repeatedly perform mechanical work using their fuel combustion cycles. It’s understandable then why scientists have been trying to build quantum engines. While conventional engines operate on classical physics, quantum engines are machines that use the ideas of quantum physics. For now, however, these machines remain futuristic because scientists don’t yet understand their working principles well enough. University of Kaiserslautern-Landau professor Artur Widera told me the following in September 2023, after he and his team published a paper reporting that they had developed a new kind of quantum engine:

    Just observing the development and miniaturisation of engines from macroscopic scales to biological machines and further potentially to single- or few-atom engines, it becomes clear that for few particles close to the quantum regime, thermodynamics as we use in classical life will not be sufficient to understand processes or devices. In fact, quantum thermodynamics is just emerging, and some aspects of how to describe the thermodynamical aspects of quantum processes are even theoretically not fully understood.

    This said, recent advances in ultracold atomic physics have allowed physicists to control substances called quantum gases in so-called low-dimensional regimes, laying the ground for them to realise and study quantum engines. Two recent studies exemplify this progress: the study by Widera et al. in 2023 and a new theoretical study reported in Physical Review E. Both have explored engines based on ultracold quantum gases but have approached the concept of quantum energy conversion from complementary perspectives.

    The Physical Review E work investigated a ‘quantum thermochemical engine’ operating with a trapped one-dimensional (1D) Bose gas in the quasicondensate regime as the working fluid — just like the fuel-air mixture in the internal combustion engine of a petrol-powered car. A Bose gas is a quantum system made of particles called bosons, which in this case are whole atoms. The ‘1D’ simply means they are limited to moving back and forth on a straight line, i.e. a single spatial dimension. This restriction dramatically changes the bosons’ physical and quantum properties.

    According to the paper’s single author, University of Queensland theoretical physicist Vijit Nautiyal, the resulting engine can operate on an Otto cycle where the compression and expansion steps — which dictate the work the engine can do — are implemented by tuning how strongly the bosons interact, instead of changing the volume as in a classical engine. In order to do this, the quantum engine needs to exchange not heat with its surroundings but particles. That is, the particles flow from a hot reservoir to the working boson gas, allowing the engine to perform net work.

    Energy enters and leaves the system in the A-B and C-D steps, respectively, when the engine absorbs and releases particles from the hot reservoir. The engine consumes work during adiabatic compression (D-A) and performs work during adiabatic expansion (B-C). The difference between these steps is the engine’s net work output. Credit: arXiv:2411.13041v2
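
    To get a feel for that accounting, here is a deliberately crude toy in Python; it is not Nautiyal’s many-body simulation, which solves the full quantum dynamics of the 1D Bose gas. I have assumed, purely for illustration, that the gas’s interaction energy scales as 0.5 × g × N², where g is the interaction strength and N the number of particles: ‘compression’ and ‘expansion’ are then changes of g at fixed N, and the reservoir’s contribution enters through changes in N at fixed g.

        # A toy caricature of the interaction-driven cycle described above, NOT
        # Nautiyal's actual simulation. Assumption (for illustration only): the
        # working gas's interaction energy is E = 0.5 * g * N**2, where g is the
        # interaction strength and N the particle number.

        def interaction_energy(g, n):
            """Illustrative interaction energy of the working gas (assumed form)."""
            return 0.5 * g * n**2

        g_low, g_high = 1.0, 3.0     # interaction strengths (arbitrary units)
        n_low, n_high = 100, 140     # particle number before/after exchange with the reservoir

        # D -> A, 'compression': raise g at fixed (low) particle number; work is consumed
        w_in = interaction_energy(g_high, n_low) - interaction_energy(g_low, n_low)

        # A -> B: absorb particles from the hot reservoir at fixed g; energy flows in
        e_in = interaction_energy(g_high, n_high) - interaction_energy(g_high, n_low)

        # B -> C, 'expansion': lower g at fixed (high) particle number; work is produced
        w_out = interaction_energy(g_high, n_high) - interaction_energy(g_low, n_high)

        # C -> D: release particles back to the reservoir at fixed g, closing the cycle
        net_work = w_out - w_in

        print(f"net work per cycle: {net_work:.0f} (arbitrary units)")
        print(f"net work / energy in: {net_work / e_in:.2f}")  # equals 1 - g_low/g_high here

    Even this caricature captures the qualitative point: if the particle numbers before and after the exchange are equal, the work recovered during expansion exactly cancels the work spent on compression and the net output is zero; it is the particle exchange with the reservoir that lets the cycle close with positive net work.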

    Nautiyal’s study focused on the engine’s performance in two regimes: one where the strength of interaction between bosons is suddenly quenched in order to maximise the engine’s power at the cost of its efficiency, and another where the engine operates at maximum efficiency but produces negligible power. Nautiyal studied both regimes using advanced numerical simulations.

    The simulations showed that if the engine only used heat but didn’t absorb particles from the hot reservoir, it couldn’t really produce useful energy at finite temperatures. This was because of complicated quantum effects and uneven density in the boson gas. But when the engine was allowed to gain or lose particles from/to the reservoir, it got the extra energy it needed to work properly. Surprisingly, this particle exchange allowed the engine to operate very efficiently even when it ran fast. Usually, engines have to choose between going fast and losing efficiency or going slow and being more efficient. The particle exchange allowed Nautiyal’s quantum thermochemical engine to avoid that trade-off. Letting more particles flow in and out also made the engine produce more energy and be even more efficient.

    Finally, unlike regular engines where higher temperature usually means better efficiency, increasing the temperature of the quantum thermochemical engine too much actually lowered its efficiency, speaking to the important role chemical work played in this engine design.

    In contrast, the 2023 experimental study — which I wrote about in The Hindu — realised a quantum engine that, instead of relying on conventional heating and cooling with thermal reservoirs, operated by cycling a gas of particles between two quantum states, a Bose-Einstein condensate and a Fermi gas. The process was driven by adiabatic changes (i.e. changes that happen while keeping the entropy fixed) that converted the fundamental difference in total energy distribution arising from the two states into usable work. The experiment demonstrated that this energy difference, called the Pauli energy, constituted a significant resource for thermodynamic cycles.

    The theoretical 2025 paper and the experimental 2023 work are intimately connected as complementary explorations of quantum engine operation using ultracold atomic gases. Both have taken advantage of the unique quantum effects accessible in such systems while focusing on distinct energy resources and operational principles.

    The 2025 work emphasised the role of chemical work arising from particle exchange in a one-dimensional Bose gas, exploring the balance of efficiency and power in finite-time quantum thermochemical engines. It also provided detailed computational frameworks to understand and optimise these engines. The 2023 experiment, for its part, physically realised a related but conceptually different mechanism: moving lithium atoms between two states and converting their Pauli energy into work. This approach highlighted how the fundamental differences between the two states could be a direct energy source, rather than conventional heat baths, and one operating with little to no production of entropy.

    Together, these studies broaden the scope of quantum engines beyond traditional heat-based cycles by demonstrating the usefulness of intrinsically quantum energy forms such as chemical work and Pauli energy. Such microscopic ‘machines’ also herald a new class of engines that harness the fundamental laws of quantum physics to convert energy between different forms more efficiently than the best conventional engines can manage with classical physics.

    Physics World asked Nautiyal about the potential applications of his work:

    … Nautiyal referred to “quantum steampunk”. This term, which was coined by the physicist Nicole Yunger Halpern at the US National Institute of Standards and Technology and the University of Maryland, encapsulates the idea that as quantum technologies advance, the field of quantum thermodynamics must also advance in order to make such technologies more efficient. A similar principle, Nautiyal explains, applies to smartphones: “The processor can be made more powerful, but the benefits cannot be appreciated without an efficient battery to meet the increased power demands.” Conducting research on quantum engines and quantum thermodynamics is thus a way to optimize quantum technologies.

  • Trade rift today, cryogenic tech yesterday

    US President Donald Trump recently imposed substantial tariffs on Indian goods, explicitly in response to India’s continued purchase of Russian oil during the ongoing Ukraine conflict. These penalties, reaching an unprecedented cumulative rate of 50% on targeted Indian exports, have been described by Trump as a response to what his administration has called an “unusual and extraordinary threat” posed by India’s trade relations with Russia. The official rationale for these measures centres on national security and foreign policy priorities, and they are designed to coerce India into aligning with US policy goals vis-à-vis the Russia-Ukraine war.

    The enforcement of these tariffs is notable among other things for its selectivity. While India faces acute economic repercussions, other major importers of Russian oil such as China and Turkey have thus far not been subjected to equivalent sanctions. The impact is also likely to be immediate and severe since almost half of Indian exports to the US, which is in fact India’s most significant export market, now encounter sharply higher costs, threatening widespread disruption in sectors such as textiles, automobile parts, pharmaceuticals, and electronics. Thus the tariffs have provoked a strong diplomatic response from the Government of India, which has characterised the US’s actions as “unfair, unjustified, and unreasonable,” while also asserting its primary responsibility to protect the country’s energy security.

    This fracas is reminiscent of US-India relations in the early 1990s regarding the former’s denial of cryogenic engine technology. In this period, the US government actively intervened to block the transfer of cryogenic rocket engines and associated technologies from Russia’s Glavkosmos to ISRO by invoking the Missile Technology Control Regime (MTCR) as justification. The MTCR was established in 1987 and was intended to prevent the proliferation of missile delivery systems capable of carrying weapons of mass destruction. In 1992, citing non-proliferation concerns, the US imposed sanctions on both ISRO and Glavkosmos, effectively stalling a deal that would have allowed India to acquire not only fully assembled engines but also the vital expertise for indigenous production in a much shorter timeframe than what transpired.

    The stated US concern was that cryogenic technology could potentially be adapted for intercontinental ballistic missiles (ICBMs). However, experts have been clear that cryogenic engines are unsuitable for ICBMs because they’re complex, difficult to operate, and can’t be deployed on short notice. In fact, critics as well as later historical analyses have said that the US’s strategic objective was less about preventing missile proliferation and more about restricting advances in India’s ability to launch heavy satellites, thus protecting American and allied commercial and strategic interests in the global space sector.

    The response in both eras, economic plus technological coercion, suggests a pattern of American policy: punitive action when India’s sovereign decisions diverge from perceived US security or geoeconomic imperatives. The explicit justifications have also shifted from non-proliferation in the 1990s to support for Ukraine in the present, yet in both cases the US has singled India out for selective enforcement while comparable actions by other states have been allowed to proceed largely unchallenged.

    Thus, both actions have produced parallel outcomes. India faced immediate setbacks: export disruptions today; delays in its space launch programme three decades ago. There is an opportunity, however. The technology denial in the 1990s catalysed an ambitious indigenous cryogenic engine programme, culminating in landmark achievements for ISRO in the following decades. Similarly, the current trade rift could accelerate India’s efforts to diversify its partnerships and supply chains if it proactively forges strategic trade agreements with emerging and established economies, invests in advanced domestic manufacturing capabilities, incentivises innovation across critical sectors, and fortifies logistical infrastructure.

    Diplomatically, however, each episode has strained US-India relations even as their mutual interests have at other times fostered rapprochement. Whenever India’s independent strategic choices appear to challenge core US interests, Washington has thus far used the levers of market access and technology transfers as the means of compulsion. But history suggests that these efforts, rather than yield compliance, could prompt adaptive strategies, whether through indigenous technology development or by recalibrating diplomatic and economic alignments.

    Featured image: I don’t know which rocket that is. Credit: Perplexity AI.

  • What keeps the red queen running?

    AI-generated definition based on ‘Quantitative and analytical tools to analyze the spatiotemporal population dynamics of microbial consortia’, Current Opinion in Biotechnology, August 2022:

    The Red Queen hypothesis refers to the idea that a constant rate of extinction persists in a community, independent of the duration of a species’ existence, driven by interspecies relationships where beneficial mutations in one species can negatively impact others.

    Encyclopedia of Ecology (second ed.), 2008:

    The term is derived from Lewis Carroll’s Through the Looking Glass, where the Red Queen informs Alice that “here, you see, it takes all the running you can do to keep in the same place.” Thus, with organisms, it may require multitudes of evolutionary adjustments just to keep from going extinct.

    The Red Queen hypothesis serves as a primary explanation for the evolution of sexual reproduction. As parasites (or other selective agents) become specialized on common host genotypes, frequency-dependent selection favors sexual reproduction (i.e., recombination) in host populations (which produces novel genotypes, increasing the rate of adaptation). The Red Queen hypothesis also describes how coevolution can produce extinction probabilities that are relatively constant over millions of years, which is consistent with much of the fossil record.

    Also read: ‘Sexual reproduction as an adaptation to resist parasites (a review).’, Proceedings of the National Academy of Sciences, May 1, 1990.

    ~

    In nature, scientists have found that very similar strains of bacteria constantly appear and disappear even when their environment doesn’t seem to change much. This is called continual turnover. In a new study in PRX Life, Aditya Mahadevan and Daniel Fisher of Stanford University make sense of how this ongoing change happens, even without big differences between species or dramatic changes in the environment. Their jumping-off point is the Red Queen hypothesis.

    While the hypothesis has usually been used to talk about ‘arms races’, like between hosts and parasites, the new study asked: can continuous red queen evolution also happen in communities where different species or strains overlap a lot in what they do and where there aren’t obvious teams fighting each other?

    Mahadevan and Fisher built mathematical models to mimic how communities of microbes evolve over time. These models allowed the duo to simulate what would happen if a population started with just one microbial strain and new strains appeared over time due to random changes in their genes (i.e. mutations). Some of these new strains could invade the community, competing for the residents’ resources, and survive, while others were forced to extinction.

    The models focused especially on ecological interactions, meaning how strains or species affected each other’s survival based on how they competed for the same food.

    When they ran the models, the duo found that even when there were no clear teams (like host v. parasite), communities could enter a red queen phase. The overall number of coexisting strains stayed roughly constant, but which strains were present kept changing, like a continuous evolutionary game of musical chairs.

    The continual turnover happened most robustly when strains interacted in a non-reciprocal way. As ICTS biological physicist Akshit Goyal put it in Physics:

    … almost every attempt to model evolving ecological communities ran into the same problem: One organism, dubbed a Darwinian monster, evolves to be good at everything, killing diversity and collapsing the community. Theorists circumvented this outcome by imposing metabolic trade-offs, essentially declaring that no species could excel at everything. But that approach felt like cheating because the trade-offs in the models needed to be unreasonably strict. Moreover, for mathematical convenience, previous models assumed that ecological interactions between species were reciprocal: Species A affects species B in exactly the same way that B affects A. However, when interactions are reciprocal, community evolution ends up resembling the misleading fixed fitness landscape. Evolution is fast at first but eventually slows down and stops instead of going on endlessly.

    Mahadevan and Fisher solved this puzzle by focusing on a previously neglected but ubiquitous aspect of ecological interactions: nonreciprocity. This feature occurs when the way species A affects species B differs from the way B affects A—for example, when two species compete for the same nutrient, but the competition harms one species more than the other
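
    A toy sketch in Python can make ‘nonreciprocity plus invasions’ concrete. The code below is my own simplification, not the authors’ model: it runs generalised Lotka-Volterra competition in which the effect of strain i on strain j and the effect of j on i are drawn independently, introduces one low-abundance mutant per epoch, and removes strains that fall below an extinction threshold. The interaction statistics, thresholds, and time steps are assumptions chosen only to illustrate the kind of turnover described above.

        # A minimal toy of invasion and turnover with nonreciprocal competition,
        # in the spirit of (but much simpler than) the Mahadevan-Fisher models.
        # Strains compete via a random Lotka-Volterra matrix A in which A[i, j]
        # and A[j, i] are drawn independently, so the harm strain j does to
        # strain i need not equal the harm i does to j.

        import numpy as np

        rng = np.random.default_rng(0)
        EXTINCTION = 1e-6        # abundance below which a strain is removed
        STEPS_PER_EPOCH = 2000   # ecological relaxation steps between invasions
        DT = 0.01

        def relax(x, A, r):
            """Run competition dynamics: dx_i/dt = x_i * (r_i - sum_j A_ij x_j)."""
            for _ in range(STEPS_PER_EPOCH):
                x = np.clip(x + DT * x * (r - A @ x), 0.0, None)
            return x

        # start from a single strain at its carrying capacity
        x, r, A = np.array([1.0]), np.array([1.0]), np.array([[1.0]])

        for epoch in range(200):
            # a new mutant invades at low abundance, with independently drawn
            # (hence nonreciprocal) interactions with every resident strain
            n = len(x)
            A_new = np.ones((n + 1, n + 1))
            A_new[:n, :n] = A
            A_new[n, :n] = rng.normal(1.0, 0.3, n)  # how residents affect the mutant
            A_new[:n, n] = rng.normal(1.0, 0.3, n)  # how the mutant affects residents
            A, r, x = A_new, np.append(r, 1.0), np.append(x, 1e-3)

            x = relax(x, A, r)

            alive = x > EXTINCTION                  # prune extinct strains
            x, r, A = x[alive], r[alive], A[np.ix_(alive, alive)]

            if epoch % 20 == 0:
                print(f"epoch {epoch:3d}: {len(x)} strains coexist")

    The printout tracks how many strains survive after each wave of invasions; the thing to look for is whether that number settles into a rough plateau even as the identities of the survivors keep changing, which is the musical-chairs signature of the red queen phase.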

    Next, despite the continual turnover, there was a cap on the number of strains that could coexist. This depended on the number of different resources available and how strains interacted, but as new strains invaded others, some old ones had to go extinct, keeping diversity within limits.

    If some strains started off much better (i.e. with higher fitness), over time the evolving competition narrowed these differences and only strains with similar overall abilities managed to stick around.

    Finally, if the system got close to being perfectly reciprocal, the dynamics could shift to an oligarch phase in which a few strains dominated most of the population and continual turnover slowed considerably.

    Taken together, these results point to the study’s main conclusion: there doesn’t need to be a constant or elaborate ‘arms race’ between predator and prey, or dramatic environmental changes, to keep evolution going in bacterial communities. Such evolution can arise naturally when species or strains interact asymmetrically as they compete for resources.

    Featured image: “Now, here, you see, it takes all the running you can do, to keep in the same place.” Credit: Public domain.

  • A limit of ‘show, don’t tell’

    The virtue of ‘show, don’t tell’ in writing, including in journalism, lies in its power to create a more vivid, immersive, and emotionally engaging reading experience. Instead of simply providing information or summarising events, the technique encourages writers to use evocative imagery, action, dialogue, and sensory details to invite readers into the world of the story.

    The idea is that once readers are in there, they’ll do much of the work of staying engaged on their own.

    However, perhaps this depends on the world the reader is being invited to enter.

    There’s an episode in season 10 of ‘Friends’ where a palaeontologist tells Joey she doesn’t own a TV. Joey is confused and asks, “Then what’s all your furniture pointed at?”

    Most of the (textual) journalism of physics I’m seeing these days frames narratives around the application of some discovery or concept. For example, here’s the last paragraph of one of the top articles on Physics World today:

    The trio hopes that its technique will help us understand polaron behaviours. “The method we developed could also help study strong interactions between light and matter, or even provide the blueprint to efficiently add up Feynman diagrams in entirely different physical theories,” Bernardi says. In turn, it could help to provide deeper insights into a variety of effects where polarons contribute – including electrical transport, spectroscopy, and superconductivity.

    I’m not sure if there’s something implicitly bad about this framing but I do believe it gives the impression that the research is in pursuit of those applications, which in my view is often misguided. Scientific research is incremental, and theories and data often take many turns before they can be stitched together cleanly enough for a technological application in the real world.

    Yet I’m also aware that, just like pointing all your furniture at the TV can simplify your decisions about arranging your house, drafting narratives in order to convey the relevance of some research for specific applications can help hold readers’ attention better. Yes, this is a populist approach to the extent that it panders to what readers know they want rather than what they may not know, but it’s useful — especially when the communicator or journalist is pressed for time and/or doesn’t have the mental bandwidth to craft a thoughtful narrative.

    But this narrative choice may also imply a partial triumph of “tell, don’t show” over “show, don’t tell”. This is because the narrative has an incentive to restrict itself to whatever physics is required to describe the technology, and still be considered complete, rather than wade into waters that could complicate the story.

    A closely related issue here is that a lot of physics worth knowing about — if for no reason other than that it offers a window into scientists’ spirit and ingenuity — is quite involved. (It doesn’t help that it’s also mostly mathematical.) The concepts are simply impossible to show, at least not without the liberal use of metaphors and, inevitably, some oversimplification.

    Of course, it’s not possible to compare a physics news piece in Physics World with one in The Hindu: the former can show more simply by telling, because its target audience is physicists and other scientists, who will see more detail in the word “polaron” than readers of The Hindu can be expected to. But even if The Hindu’s readers need more showing, I can’t show them the physics without expecting them to be interested in complicated theoretical ideas.

    In fact, I’d be hard-pressed to communicate better than I could by simply telling. Thus my lesson is that ‘show, don’t tell’ isn’t always a virtue. Sometimes what you show can bore or even scare readers off, and for reasons that have nothing to do with your skills as a communicator. Obviously the point isn’t to condescend to readers here. Instead, we need to acknowledge that telling is virtuous in its own right, and in the proper context may be the more engaging way to communicate science.