India has won 22 Ig Nobel prizes to date. These awards, given annually at Harvard University by the magazine Annals of Improbable Research, honour studies that “first make people laugh, and then make them think” — a description that can make the prizes sound like little more than jokes, even though the research they reward is genuine.
Many of the Indian wins are in the sciences, and they highlight an often unacknowledged truth: even if the country hasn’t produced a Nobel laureate in science since C.V. Raman in 1930, Indian labs continue to generate knowledge of consequence by pursuing questions that appear odd at first sight. In 2004, for example, IIT Kanpur researchers won an Ig Nobel prize for studying why people spill coffee when they walk. They analysed oscillations and resonance in liquid-filled containers, extending the principles of fluid dynamics into daily life.
Eleven years later, another team won a prize for measuring the friction coefficients of banana skins, showing why people who step on them are likely to fall. In 2019, doctors in Chennai were feted for documenting how cockroaches can survive inside human skulls, a subject of study drawn from real instances where medical workers had to respond to such challenges in emergency rooms. In 2022, biologists examined how scorpion stings are treated in rural India and compared traditional remedies against science-based pharmacology. More recently, researchers were honoured for describing the role of nasal hair in filtering air and pathogens.
The wins thus demonstrate core scientific virtues as well as reflect the particular conditions in which research often happens in India. Most of the work wasn’t supported by lavish grants, nor was it published in élite journals with high citation counts. Instead, it emerged from scientists choosing to follow curiosity rather than institutional incentives. In this sense, the Ig Nobel prizes are less a distraction and more an index of how ‘serious’ science might actually begin.
Of course it’s also important to acknowledge that India’s research landscape is crowded with work of indifferent quality. A large share of papers are produced to satisfy promotion requirements, with little attention to design or originality, and many find their way into predatory journals where peer review is nonexistent or a joke. Such publications seldom advance knowledge, whether in curiosity-driven or application-oriented paradigms, and they dilute the credibility of the system as a whole.
Then again, whimsy isn’t foreign to the Nobel Prizes themselves, which are otherwise quite sombre affairs. For example, in 2016, the chemistry prize was awarded to researchers who designed molecular rotors and elevators constructed from just a handful of atoms. The achievement was profound but it also carried the air of play. The prize-giving committee compared the laureates’ work to the invention of the electric motor in the 1830s, noting that even if practical applications come only later, or not at all, the first step remains the act of imagining, not unlike a child. If the Nobel Committee can reward such imaginative departures, India’s Ig Nobel prize wins should be seen as more evidence that playful research is a legitimate part of the scientific enterprise.
The larger question is whether curiosity-driven research has a place in national science policy. Some experts have argued that in a country like India, with pressing social and economic needs and allegedly insufficient funding to support research, scientists must focus on topics that are immediately useful: better crops, cheaper drugs, new energy sources, etc. But this is too narrow a view. Science doesn’t have to be useful in the short term to be valuable. The history of discovery is filled with examples that seemed obscure at the time but later transformed technology and society, including X-rays, lasers, and the structure of DNA. Equally importantly, the finitude of resources to which science administrators and lawmakers have often appealed is likely a red herring set up to make excuses for diverting funds away from scientific research.
Measuring why banana skins are slippery didn’t solve a crisis but it advanced scientists’ understanding of biomechanics. Analysing why coffee spills while walking generated models in fluid mechanics that researchers could apply to a range of fluid systems. Together with documenting cockroaches inside skulls and studying scorpion sting therapies, none of this research was wasteful or should be seen that way. More importantly, the freedom to pursue such questions is vital. If nothing else, winning a Nobel Prize can’t be engineered by restricting scientists to specific questions. The prizes often go to scientists who are well connected, work in well-funded laboratories, and publish in highly visible journals — yet bias and visibility explain only part of the pattern. Doing good science depends on an openness to ideas that its exponents can’t be expected to plan in advance.
This is the broader reason the Ig Nobel prizes matter: they are reminders that curiosity remains alive among Indian scientists, even in a system that often discourages it. They also reveal what we stand to lose when research freedom is curtailed. The point isn’t that every odd question will lead to a breakthrough but that no one can predict in advance which questions will. We don’t know what we don’t know, and the only way to find out is to explore.
India’s 22 Ig Nobel wins in this sense are indicators of a culture of inquiry that deserves more institutional support. If the country wants to achieve scientific recognition of the highest order — the Indian government has in fact been aspiring to “science superpower” status — it must learn to value curiosity as a public good. What may appear whimsical today could prove indispensable tomorrow.
Maxwell’s demon is one of the most famous thought experiments in the history of physics, a puzzle first posed in the 1860s that continues to shape scientific debates to this day. I’ve struggled to make sense of it for years. Last week I had some time and decided to hunker down and figure it out, and I think I succeeded. The following post describes the fruits of my efforts.
At first sight, the Maxwell’s demon paradox seems odd because it presents a supernatural creature tampering with molecules of gas. But if you pare down the imagery and focus on the technological backdrop of the time of James Clerk Maxwell, who proposed it, a profoundly insightful probe of the second law of thermodynamics comes into view.
The thought experiment asks a simple question: if you had a way to measure and control molecules with perfect precision and at no cost, would you be able to make heat flow backwards, as if running a heat engine in reverse?
Picture a box of air divided into two halves by a partition. In the partition is a very small trapdoor. It has a hinge so it can swing open and shut. Now imagine a microscopic valve operator that can detect the speed of each gas molecule as it approaches the trapdoor, decide whether to open or close the door, and actuate the door accordingly.
The operator follows two simple rules: let fast molecules through from left to right and let slow molecules through from right to left. The temperature of a gas is, roughly speaking, the average kinetic energy of its constituent particles. So as the operator goes about its work, the right side will heat up over time and the left side will cool down — thus producing a temperature gradient for free. Where there’s a temperature gradient, it’s possible to run a heat engine. (The internal combustion engine in fossil-fuel vehicles is a common example.)
A schematic diagram of the Maxwell’s demon thought experiment. Htkym (CC BY-SA)
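To see how far the two sorting rules alone go, here is a toy simulation I’m adding for illustration (it isn’t from Maxwell or from anyone else): molecules get random speeds, the ‘demon’ applies the two rules once, and the average kinetic energy of each half stands in for its temperature. The threshold speed and the speed distribution are arbitrary choices.

```python
import random

# Toy illustration: a box of gas molecules split into two halves, with a "demon"
# that lets fast molecules pass left-to-right and slow molecules pass right-to-left.
# Average kinetic energy per molecule stands in for temperature.

random.seed(0)

N = 10_000
THRESHOLD = 1.0  # speed separating "fast" from "slow" (arbitrary units)

# Start with molecules distributed evenly between the two halves, each with a random speed.
left = [random.expovariate(1.0) for _ in range(N // 2)]
right = [random.expovariate(1.0) for _ in range(N // 2)]

def avg_kinetic_energy(speeds):
    """Average kinetic energy per molecule, with mass set to 1."""
    return sum(0.5 * v * v for v in speeds) / len(speeds)

print("before sorting:", avg_kinetic_energy(left), avg_kinetic_energy(right))

# The demon's two rules, applied once to every molecule that approaches the door.
fast_movers = [v for v in left if v > THRESHOLD]    # fast ones pass left -> right
slow_movers = [v for v in right if v <= THRESHOLD]  # slow ones pass right -> left

left = [v for v in left if v <= THRESHOLD] + slow_movers
right = [v for v in right if v > THRESHOLD] + fast_movers

print("after sorting: ", avg_kinetic_energy(left), avg_kinetic_energy(right))
# The right half now has a higher average kinetic energy (it is "hotter") and the
# left half a lower one: a temperature gradient created purely by sorting.
```

Running it shows the right half ending up ‘hotter’ than the left with no energy added; that is the free temperature gradient the paradox turns on.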
But the possibility that this operator can detect and sort the molecules, thus creating the temperature gradient without consuming any energy of its own, seems to break the second law of thermodynamics. The second law states that the entropy of a closed system never decreases over time — whereas the operator ensures that the entropy will decrease, violating the law. This was the Maxwell’s demon thought experiment, with the demon as a whimsical stand-in for the operator.
The paradox was made compelling by the silent assumption that the act of sorting the molecules could have no cost — i.e. that the imagined operator didn’t add energy to the system (the air in the box) but simply allowed molecules that are already in motion to pass one way and not the other. In this sense the operator acted like a valve or a one-way gate. Devices of this kind — including check valves, ratchets, and centrifugal governors — were already familiar in the 19th century. And scientists assumed that if they were scaled down to the molecular level, they’d be able to work without friction and thus separate hot and cold particles without drawing more energy to overcome that friction.
This detail is in fact the fulcrum of the paradox, and the thing that’d kept me all these years from actually understanding what the issue was. Maxwell et al. assumed that it was possible that an entity like this gate could exist: one that, without spending energy to do work (and thus increase entropy), could passively, effortlessly sort the molecules. Overall, the paradox stated that if such a sorting exercise really had no cost, the second law of thermodynamics would be violated.
The second law had been established only a few decades before Maxwell thought up this paradox. If entropy is taken to be a measure of disorder, the second law states that if a system is left to itself, heat will not spontaneously flow from cold to hot and whatever useful energy it holds will inevitably degrade into the random motion of its constituent particles. The second law is the reason why perpetual motion machines are impossible, why the engines in our cars and bikes can’t be 100% efficient, and why time flows in one specific direction (from past to future).
Yet Maxwell’s imagined operator seemed to be able to make heat flow backwards, sifting molecules so that order increases spontaneously. For many decades, this possibility challenged what physicists thought they knew. While some brushed it off as a curiosity, others contended that the demon itself must expend some energy to operate the door and that this expense would restore the balance. However, Maxwell had been careful when he conceived the thought experiment: he specified that the trapdoor was small and moved without friction, so operating it would in principle cost a negligible amount of energy. The real puzzle lay elsewhere.
In 1929, the Hungarian physicist Leó Szilard sharpened the problem by boiling it down to a single-particle machine. This so-called Szilard engine imagined one gas molecule in a box with a partition that could be inserted or removed. By observing on which side the molecule lay and then allowing it to push a piston, the operator could apparently extract work from a single particle at uniform temperature. Szilard showed that the key step was not the movement of the piston but the acquisition of information: knowing where the particle was. That is, Szilard reframed the paradox to be not about the molecules being sorted but about an observer making a measurement.
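A standard result that the post doesn’t spell out, added here for concreteness: treat the lone molecule as a one-particle ideal gas at temperature $T$ and let it push the piston from half the box’s volume to the full volume. The maximum work it can deliver is

$$W = \int_{V/2}^{V} \frac{k_B T}{V'}\,\mathrm{d}V' = k_B T \ln 2,$$

i.e. one bit of information about which half the molecule is in is worth at most $k_B T \ln 2$ of extracted work.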
(Aside: Szilard was played by Máté Haumann in the 2023 film Oppenheimer.)
A (low-res) visualisation of a Szilard engine. Its simplest form has only one atom (i.e. N = 1) pushing against a piston. Credit: P. Fraundorf (CC BY-SA)
The next clue to cracking the puzzle came in the mid-20th century from the growing field of information theory. In 1961, the German-American physicist Rolf Landauer proposed a principle that connected information and entropy directly. Landauer’s principle states that while it’s possible in principle to acquire information in a reversible way — i.e. in a way that can be undone without dissipating energy — erasing information from a device with memory has a non-zero thermodynamic cost that can’t be avoided. That is, the act of resetting a one-bit memory register to a standard state generates a small amount of entropy (at least Boltzmann’s constant multiplied by the natural logarithm of two).
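In symbols (a textbook statement of the principle, not a quotation): erasing one bit at temperature $T$ dissipates heat and generates entropy of at least

$$Q \ge k_B T \ln 2, \qquad \Delta S \ge k_B \ln 2,$$

where $k_B$ is Boltzmann’s constant. Note that this is exactly the $k_B T \ln 2$ of work the Szilard engine appears to extract per measurement, which is the heart of the resolution described next.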
The American physicist and information theorist Charles H. Bennett later built on Landauer’s principle and argued that Maxwell’s demon could gather information and act on it — but in order to continue indefinitely, it’d have to erase or overwrite its memory, and this act of resetting would generate exactly the entropy needed to compensate for the apparent decrease, ultimately preserving the second law of thermodynamics.
Taken together, these arguments defeated Maxwell’s demon not through the mechanics of the trapdoor but through the thermodynamic cost of processing information. Specifically, the decrease in entropy as a result of the molecules being sorted by their speed is compensated for by the increase in entropy due to the operator’s rewriting or erasure of information about the molecules’ speed. Thus a paradox that’d begun as a challenge to thermodynamics ended up enriching it — by showing information could be physical. It also revealed to scientists that entropy isn’t just disorder in matter and energy but is also linked to uncertainty and information.
Over time, Maxwell’s demon also became a fount of insight across multiple branches of physics. In thermodynamics, for example, entropy came to represent a measure of the probability that the system could exist in different combinations of microscopic states — that is, the likelihood that a given set of molecules could be arranged in one way instead of another. In statistical mechanics, Maxwell’s demon gave scientists a concrete way to think about fluctuations. In any small system, random fluctuations can briefly reduce entropy in a small portion of it. While the demon seemed to exploit these fluctuations, the laws of probability were found to ensure that on average, entropy would increase. So the demon became a metaphor for how selection based on microscopic knowledge could alter outcomes but also why such selection can’t be performed without paying a cost.
For information theorists and computer scientists, the demon was an early symbol of the deep ties between computation and thermodynamics. Landauer’s principle showed that erasing information imposes a minimum entropy cost — an insight that matters for how computer hardware should be designed. The principle also influenced debates about reversible computing, where the goal is to design logic gates that don’t ever erase information and thus approach zero energy dissipation. In other words, Maxwell’s demon foreshadowed modern questions about how energy-efficient computing could really be.
Even beyond physics, the demon has seeped into philosophy, biology, and social thought as a symbol of control and knowledge. In biology, the resemblance between the demon and enzymes that sort molecules has inspired metaphors about how life maintains order. In economics and social theory, the demon has been used to discuss the limits of surveillance and control. The lesson has been the same in every instance: that information is never free and that the act of using it imposes inescapable energy costs.
I’m particularly taken by the philosophy that animates the paradox. Maxwell’s demon was introduced as a way to dramatise the tension between the microscopic reversibility of physical laws and the macroscopic irreversibility encoded in the second law of thermodynamics. I found that a few questions in particular — whether the entropy increase due to the use of information is a matter of an observer’s ignorance (i.e. because the observer doesn’t know which particular microstate the system occupies at any given moment), whether information has physical significance, and whether the laws of nature really guarantee the irreversibility we observe — have become touchstones in the philosophy of physics.
In the mid-20th century, the Szilard engine became the focus of these debates because it refocused the second law from molecular dynamics to the cost of acquiring information. Later figures such as the French physicist Léon Brillouin and the Hungarian-British physicist Dennis Gabor claimed that it’s impossible to measure something without spending energy. Critics, however, countered that these arguments leaned on the limitations of particular measurement technologies rather than on any fundamental principle. That is to say, the debate among philosophers became whether Maxwell’s demon was prevented from breaking the second law by deep and hitherto hidden principles or merely by engineering challenges.
This gridlock was broken when physicists observed that even a purely mechanical, demon-free sorting machine must leave some physical trace of its interactions with the molecules. That is, any device that sorts particles will end up in different physical states depending on the outcome, and to complete a thermodynamic cycle those states must be reset. Here, the entropy cost arises not from the informational content itself but from the logical structure of memory. Landauer solidified this with his principle that logically irreversible operations such as erasure carry a minimum thermodynamic cost. Bennett extended this by showing that measurements can be made reversibly but erasure can’t. The philosophical meaning of both these arguments is that entropy increase isn’t just about ignorance but also about parts of information processing being irreversible.
In the quantum domain, the philosophical puzzles became more intense. When an object is measured in quantum mechanics, it isn’t just that an observer updates the information they have about the object — the act of measuring also seems to alter the object’s quantum state. For example, in the Schrödinger’s cat thought experiment, checking on the cat inside the box forces it into one of two states: dead or alive. Quantum physicists have recreated Maxwell’s demon in new ways in order to check whether the second law continues to hold. And over the course of many experiments, they’ve concluded that indeed it does.
The second law didn’t break even when Maxwell’s demon could exploit phenomena that aren’t available in the classical domain, including quantum entanglement, superposition, and tunnelling. This was because, among other reasons, quantum mechanics has some restrictive rules of its own. For one, some physicists have tried to design “quantum demons” that use quantum entanglement between particles to sort them without expending energy. But these experiments have found that as soon as the demon tries to reset its memory and start again, it must erase the record of what happened before. This step destroys the advantage and the entropy cost returns. The overall result is that even a “quantum demon” gains nothing in the long run.
For another, the no-cloning theorem states that you can’t make a perfect copy of an unknown quantum state. If the demon could freely copy every quantum particle it measured, it could retain flawless records while still resetting its memory, thus avoiding the usual entropy cost. The theorem blocks this strategy by forbidding perfect duplication, ensuring that information can’t be ‘multiplied’ without limit. Similarly, the principle of unitarity implies that a system will always evolve in a way that preserves overall probabilities. As a result, quantum phenomena can’t selectively amplify certain outcomes while discarding others. For the demon, this means it can’t secretly squeeze the range of possible states the system can occupy into a smaller, lower-entropy set, because unitarity guarantees that the full spread of possibilities is preserved across time.
All these rules together prevent the demon from multiplying or rearranging quantum states in a way that would allow it to beat the second law.
Then again, these ‘blocks’ that prevent Maxwell’s demon from breaking the second law of thermodynamics in the quantum realm raise a puzzle of their own: is the second law of thermodynamics guaranteed no matter how we interpret quantum mechanics? ‘Interpreting quantum mechanics’ means working out what the rules of quantum mechanics say about reality, a topic I covered at length in a recent post. Some interpretations say that when we measure a quantum system, its wavefunction “collapses” to a definite outcome. Others say collapse never happens and that measurement merely entangles the system with its environment, a process called decoherence. The Maxwell’s demon thought experiment thus forces the question: is the second law of thermodynamics safe in a particular interpretation of quantum mechanics or in all interpretations?
Landauer’s idea, that erasing information always carries a cost, also applies to quantum information. Even if Maxwell’s demon used qubits instead of bits, it wouldn’t be able to escape the fact that to reuse its memory, it must erase the record, which will generate heat. But the question becomes more subtle in quantum systems because qubits can be entangled with each other, and their delicate coherence — the special quantum link between quantum states — can be lost when information is processed. This means scientists need to carefully separate two different ideas of entropy: one based on what we as observers don’t know (our ignorance) and another based on what the quantum system itself has physically lost (by losing coherence).
The lesson is that the second law of thermodynamics doesn’t just guard the flow of energy. In the quantum realm it also governs the flow of information. Entropy increases not only because we lose track of details but also because the very act of erasing and resetting information, whether classical or quantum, forces a cost that no demon can avoid.
Then again, some philosophers and physicists have resisted the move to information altogether, arguing that ordinary statistical mechanics suffices to resolve the paradox. They’ve argued that any device designed to exploit fluctuations will be subject to its own fluctuations, and thus in aggregate no violation will have occurred. In this view, the second law is self-sufficient and doesn’t need the language of information, memory or knowledge to justify itself. This line of thought is attractive to those wary of anthropomorphising physics even if it also risks trivialising the demon. After all, the demon was designed to expose the gap between microscopic reversibility and macroscopic irreversibility, and simply declaring that “the averages work out” seems to bypass the conceptual tension.
Thus, the philosophical significance of Maxwell’s demon is that it forces us to clarify the nature of entropy and the second law. Is entropy tied to our knowledge/ignorance of microstates, or is it ontic, tied to the irreversibility of information processing and computation? If Landauer is right, handling information and conserving energy are ‘equally’ fundamental physical concepts. If the statistical purists are right, on the other hand, then information adds nothing to the physics and the demon was never a serious challenge. Quantum theory can further stir both pots by suggesting that entropy is closely linked to the act of measurement, to quantum entanglement, and to how quantum systems ‘collapse’ to classical ones through decoherence. The demon debate therefore tests whether information is a physically primitive entity or a knowledge-based tool. Either way, however, Maxwell’s demon endures as a parable.
Ultimately, what makes Maxwell’s demon a gift that keeps giving is that it works on several levels. On the surface it’s a riddle about sorting molecules between two chambers. Dig a little deeper and it becomes a probe into the meaning of entropy. If you dig even further, it seems to be a bridge between matter and information. Just as the Schrödinger’s cat thought experiment dramatised the oddness of quantum superposition, Maxwell’s demon dramatised the subtleties of thermodynamics by invoking a fantastical entity. And while Schrödinger’s cat forces us to ask what it means for a macroscopic system to be in two states at once, Maxwell’s demon forces us to ask what it means to know something about a system and whether that knowledge can be used without consequence.
Carbon is famous for its many solid forms. It’s the soot in air pollution, the graphite in pencil leads, and the glittering diamond in expensive jewellery. It’s also the carbon nanotubes in biosensors and fullerenes in organic solar cells.
However, despite its ability to exist in various shapes as a solid, carbon’s liquid form has been a long-standing mystery. The main reason is that carbon is very difficult to liquefy: doing so takes around 4,500º C and about a hundred atmospheres of pressure — conditions not found even inside a blast furnace. Scientists have thus struggled to see what molten carbon actually looks like.
The question of its structure isn’t only a matter of scientific curiosity. Liquid carbon shows up in laser-fusion experiments and in the manufacture of nanodiamonds. The substance probably also exists deep inside planets like Uranus and Neptune.
In a 2020 review of the topic in Chemical Physics Letters, three researchers from the University of California, Berkeley, wrote:
[M]any intriguing unanswered questions remain regarding the properties of liquid carbon. While theory has produced a wide array of predictions regarding the structure, phase diagram, and electronic nature of the liquid, as of yet, few of these predictions have been experimentally tested, and several of the predicted properties of the liquid remain controversial.
In a major step forward, an international collaboration of researchers from China, Europe, the UK, and the US recently reported that they had managed to liquefy carbon only briefly — but long enough to observe the internal arrangement of its atoms in detail. They achieved the feat by blasting a carbon wafer with a powerful laser, then X-raying it in real time.
The researchers used glassy carbon, a hard form of carbon that absorbs laser energy evenly.
To create the extreme conditions required to liquefy carbon, the team used the European XFEL (EuXFEL) research facility in Germany. Here, a powerful laser fired 515-nm light at the front of a glassy carbon wafer. The pulse lasted 5-10 nanoseconds, was roughly one-fourth of a millimetre wide, and carried up to 35 joules of energy.
That’s just one-tenth of the energy required to melt 1 g of ice. But because it was delivered in such a concentrated fashion, the pulse launched a shockwave through the wafer. The shock compression simultaneously squeezed and heated the material, quickly driving pressures to between 7 lakh and 16 lakh times the earth’s atmospheric pressure. The temperature in the wafer also soared above 6,000 K — well into the liquid-carbon regime.
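A quick check of that comparison, using the textbook latent heat of fusion of ice (about 334 J/g, a figure not quoted in the article):

$$\frac{35\ \text{J}}{334\ \text{J/g}} \approx 0.1\ \text{g},$$

so 35 J is indeed only about a tenth of what’s needed to melt 1 g of ice.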
Then, a device recorded the speed of the shockwaves and confirmed the wave stayed flat and steady across the region to be blasted by X-rays. With the wave speed and the sample’s thickness, the team calculated the pressure inside the sample to within about 98,000 atm.
While the shockwaves were still rippling through the sample, the EuXFEL facility launched a 25-femtosecond-long flash of X-rays at the same spot.
The liquid carbon state lasted for only a few nanoseconds, but a femtosecond is a million times shorter than even a nanosecond.
The X-rays scattered off the carbon atoms and were caught by two large detectors, where the radiation produced patterns called diffraction rings. Each ring encoded the distances between atoms, like a fingerprint of the sample’s internal structure.
Because each X-ray flash was so intense, a single flash revealed enough data to analyse the liquid’s structure.
For good measure, the team also varied the laser power and the wafer’s thickness to collect data across a range of physical parameters. For example, the pressure varied from 1 atm (for reference) to 15 lakh atm. Each pressure level corresponded to a separate, single-shot X-ray measurement, so the whole dataset was assembled shot by shot.
At pressures of 7.5-8.2 lakh atm, the glassy carbon began turning into crystalline diamond. At 10-12 lakh atm, the diffraction signatures of diamond weakened while broad humps characteristic of the liquid phase emerged and grew. The scientists interpreted this as evidence of a mixed state where solid diamond and liquid carbon coexist.
Then, at about 15 lakh atm, the signatures of the diamond form vanished completely, leaving only the broad humps of the liquid. The sample was now fully molten. According to the team, this means carbon under shock melts between roughly 9.8 lakh and 16 lakh atm in the experiment.
Then, to convert the diffraction patterns into information about the arrangement of atoms, the team used maths and simulations.
The team members calculated the static structure factor, a mathematical function that describes how the atoms in liquid carbon scattered the X-ray radiation. Then they used the factor to estimate the chance of finding another carbon atom at a given distance from a reference atom. This revealed the separations the atoms preferred to keep and the average number of nearest neighbours.
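For reference (this relation isn’t written out in the article), the textbook formula connecting the static structure factor $S(q)$, which comes from the diffraction pattern, to the pair distribution function $g(r)$, the relative chance of finding another atom at distance $r$ from a reference atom, is

$$g(r) = 1 + \frac{1}{2\pi^2 \rho r} \int_0^\infty q\,[S(q) - 1]\,\sin(qr)\,\mathrm{d}q,$$

where $\rho$ is the number density of atoms. The peaks of $g(r)$ mark the preferred interatomic separations, and integrating over the first peak (weighted by $4\pi\rho r^2$) gives the average number of nearest neighbours.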
Next, they used quantum density-functional theory molecular dynamics (DFT-MD) to simulate how 64 carbon atoms move at a chosen density and temperature. The simulations produced static structure factors that the researchers compared directly with the measured ones. By adjusting the density and temperature in the simulation, they found the best-fit values that matched each experiment.
The team performed this comparison because it could rule out models of liquid carbon’s structure that were incompatible with the findings. For example, the Lennard-Jones model predicted the average number of neighbouring carbon atoms to be 11-12, contrary to the data.
The team estimated that carbon melted at a temperature of around 6,700 K and a pressure of 12 lakh atm. When fully molten, each carbon atom had about four immediate neighbours on average. This is reminiscent of the way carbon atoms are arranged in diamond, although the bonds in liquid carbon are constantly breaking and reforming.
The near-perfect fit between experiment and DFT-MD for the structure factor indicated that existing quantum simulation techniques could capture liquid carbon’s behaviour accurately at high pressure. The success will give researchers confidence when using the same methods to predict even harsher conditions, such as those inside giant exoplanets.
Indeed, ice-giants like Neptune may contain layers where methane breaks down and carbon forms a liquid‐like ocean. Knowing the density and the atomic arrangement in such a liquid can help predict the planet’s magnetic field and internal heat flow.
Similarly, in inertial-confinement fusion — of the type that recently breached the break-even barrier at a US facility — a thin diamond shell surrounds the fuel. Designers must know exactly how that shell melts during the first shocks to generate power more efficiently.
Many advanced carbon materials such as nanotubes and nanodiamonds form when liquid carbon cools rapidly. Understanding how the liquid’s atoms establish short-range order could suggest pathways to tailor these materials.
Finally, the team wrote in its paper, the experiment showed that single-shot X-ray diffraction combined with a high-repetition laser can map the liquid structure of any light element at extreme pressure and temperature. Running both the laser and EuXFEL at their full capabilities could thus allow scientists to put together large datasets in minutes rather than weeks.
At 6 am on September 13, the CSIR handle on X.com published the following post about an “anti-diabetic medicine” called either “Daiba 250” or “Diabe 250”, developed at the CSIR-Indian Institute of Integrative Medicine (IIIM):
Its “key features”, according to the CSIR, are that it created more than 250 jobs and that Prime Minister Narendra Modi “mentioned the startup” to which it has been licensed in his podcast ‘Mann ki Baat’. What of the clinical credentials of Diabe-250, however?
Diabe-250 is being marketed on India-based online pharmacies like Tata 1mg as an “Ayurvedic” over-the-counter tablet “for diabetes support/healthy sugar levels”. The listing also claims Diabe-250 is backed by a US patent granted to an Innoveda Biological Solutions Pvt. Ltd. Contrary to the CSIR post calling Diabe-250 “medicine”, some listings also carry the disclaimer that it’s “a dietary nutritional supplement, not for medicinal use”.
(“Ayurveda” is within double-quotes throughout this post because, like most similar products in the market that are also licensed by the Ministry of AYUSH, there’s no evidence that Diabe-250 is actually Ayurvedic. It may be, it may not be — and until there’s credible proof, the Ayurvedic identity is just another claim.)
Second, while e-commerce and brand pages use the spellings “Diabe 250” or “Diabe-250” (with or without the hyphen), the CSIR’s social media posts refer to it as “Daiba 250”. The latter also describe it as an anti-diabetic developed/produced with the CSIR-IIIM in the context of incubation and licensing. These communications don’t constitute clinical evidence but they might be the clearest public basis to link the “Daiba” or “Diabe” spellings with the CSIR.
Multiple product pages also credit Innoveda Biological Solutions Pvt. Ltd. as a marketer and manufacturer. Corporate registry aggregators corroborate the firm’s existence (its CIN is U24239DL2008PTC178821). Similarly, the claim that Diabe-250 is backed by a US patent can be traced most directly to US8163312B2 for “Herbal formulation for prevention and treatment of diabetes and associated complications”. Its inventor is listed as one G. Geetha Krishnan and Innoveda Biological Solutions (P) Ltd. is listed as the current assignee.
The patent text describes combinations of Indian herbs for diabetes and some complications. Of course no patent is proof of efficacy for any specific branded product or dose.
The ingredients in Diabe-250 vary by retailer and there’s no consistent, quantitative per-tablet composition on public pages. This said, multiple listings name the following ingredients:
“Vidanga” (Embelia ribes)
“Gorakh buti” (Aerva lanata)
“Raj patha” (Cyclea peltata)
“Vairi” or “salacia” (often Salacia oblonga), and
“Lajalu” (Biophytum sensitivum)
The brand page also asserts a “unique combination of 16 herbs” and describes additional “Ayurveda” staples such as a berberine source, turmeric, and jamun. However, there doesn’t appear to be a full label image or a quantitative breakdown of the composition of Diabe-250.
Retail and brand pages also claim Diabe-250 “helps maintain healthy sugar levels”, “improves lipid profile/reduces cholesterol”, and “reduces diabetic complications”, sometimes also including non-glycaemic effects such as “better sleep” and “regular bowel movement”. Several pages also include the caveat that it’s a “dietary nutritional supplement” and that it’s “not for medicinal use”. However, none of these sources cites a peer-reviewed clinical trial of Diabe-250 itself.
In fact, there appear to be no peer-reviewed, product-specific clinical trials of Diabe-250 or Daiba-250 in humans; there are also no clinical trial registry records specific to this brand. If such a trial exists and its results were published in a peer-reviewed journal, it hasn’t been cited on the sellers’ or brand pages or in accessible databases.
Some ingredient classes in Diabe-250 are interesting even if they don’t validate Diabe-250 as a finished product. For instance, Salacia spp., especially S. reticulata, S. oblonga, and S. chinensis, have been known to be α-glucosidase inhibitors. In vitro studies and chemistry reviews have also reported that Salacia spp. can be potent inhibitors of maltase, sucrase, and isomaltase.
In one triple-blind, randomised crossover trial in 2023, biscuits fortified with S. reticulata extract reduced HbA1c levels by around 0.25% (2.7 mmol/mol) over three months versus the placebo, with an acceptable safety profile. In post-prandial studies involving healthy volunteers and people with type 2 diabetes, several randomised crossover designs found lower post-meal glucose and insulin area under the curve when Salacia extract was co-ingested along with carbohydrate.
Similarly, berberine-based nutraceuticals (such as those including Berberis aristata) have shown glycaemic improvements in the clinical literature (at large, not specific to Diabe-250) in people with type 2 diabetes. However, these effects were often reported for combinations with other compounds, and researchers have indicated they depend strongly on formulation and dose.
Finally, a 2022 systematic review of “Ayurvedic” medicines in people with type 2 diabetes reported heterogeneous evidence, including some promising signals, but also emphasised methodological limitations and the need for randomised controlled trials of higher quality.
Right now, if Diabe-250 works as advertised, there’s no scientific proof of it in the public domain, especially in the form of product-specific clinical trials that define its composition, dosage, and endpoints.
In India, Ayurvedic drugs come under the Drugs & Cosmetics Rules 1945. Labelling provisions under Rule 161 require details such as the manufacturer’s address, batch number, and manufacturing and expiry dates, while practice guides also note that the product licence number should appear on the label of “Ayurvedic” drugs. However, several retail pages for Diabe-250 display it as a “dietary nutritional supplement” and add that it’s “not for medicinal use”, implying that it’s being marketed with supplement-style claims rather than as an Ayurvedic “medicine” in the narrow regulatory sense — which runs against the claim in the CSIR post on X.com. Public pages also don’t display an AYUSH license number for Diabe-250. I haven’t checked a physical pack.
A well-known study in JAMA in 2008, of “Ayurvedic” products purchased over the internet, found that around 20% of them contained lead, mercury or arsenic, and public-health advisories and case reports that have appeared since have echoed these concerns. This isn’t a claim about Diabe-250 specifically but a category-level risk of “Ayurvedic” products available to buy online, one compounded by the unclear composition of Diabe-250. The inconsistent naming also opens the door to counterfeit products, which are more likely to be contaminated.
Materials published by the Indian and state governments, including the Ministry of AYUSH, have framed “Ayurveda” as complementary to allopathic medicine. In keeping with that framing, if a person with diabetes chooses to try “Ayurvedic” support, the standard advice is to not discontinue prescribed therapy and to monitor one’s glucose, especially if the individual is using α-glucosidase-like agents that alter the post-prandial response.
In sum, Diabe-250 is a multi-herb “Ayurvedic” tablet marketed by Innoveda for glycaemic support and has often been promoted with a related US patent owned by the company. However, patents are not clinical trials and patent offices don’t clinically evaluate drugs described in patent applications. That information can only come from clinical trials, especially when a drug is being touted as “science-led”, as the CSIR has vis-à-vis Diabe-250. But there are no published clinical trials of the product. And while there’s some evidence for some of its constituents, particularly Salacia, to reduce postprandial glucose and to effect small changes in the HbA1c levels over a few months, there’s no product-specific proof.
On September 11, the Supreme Court was asked to urgently hear a petition that sought to cancel the Asia Cup T20 match between India and Pakistan scheduled for September 14 in the UAE. The petition, filed by four law students, claimed that playing the match so soon after the Pahalgam terror attack and Operation Sindoor would demean the sacrifices of armed personnel and was “against national interest”.
The Court declined to intervene. “It’s a match, let it be,” Justice J.K. Maheshwari remarked, refusing to elevate the petition into a question of constitutional urgency. That refusal, however, doesn’t end the matter: the call to stop the match points to the fraught place cricket occupies in India today, where the sport is no longer just a sport but an annex of politics itself.
The petitioners also argued that the Board of Control for Cricket in India (BCCI) must be brought under the Ministry of Youth Affairs and Sports, in line with the new National Sports Governance Act 2025. For many decades the BCCI has prided itself on being a private body, formally outside government control, yet informally intertwined with it through patronage, appointments, and access to resources. Over the years, this hybrid arrangement has allowed political parties to capture the administration of Indian cricket without subjecting it to the mechanisms of accountability under public law. The outcome is an entity that’s a chimaera: neither purely autonomous nor transparently regulated.
This political capture has contributed to a situation in which the sport has become indistinguishable from political theatre. If the BCCI were more genuinely independent and if its leadership were less frequently a stepping-stone for politicians, (men’s) cricket in India might still have been able to separate itself from the ebbs and flows of diplomatic posturing. Instead, the BCCI has invited politics onto the field by making itself an extension of political patronage.
To be sure, cricket has always been more than a game. Since the colonial era, it has carried the weight of identity and nationalism. In The Tao of Cricket, Ashis Nandy argued that cricket in India became a way of playing with colonial inheritance rather than rejecting it. Matches against England in the mid-20th century were arenas where newly independent Indians performed parity with their former rulers. With Pakistan, the sport inherited and refracted the trauma of Partition. Every bilateral series has carried more baggage than bat and ball.
Yet the history of India-Pakistan matches is also one of conviviality. For every moment when politicians have sought to cancel tours, there have been times when cricketing exchanges have thawed frozen relations. India’s tours of Pakistan in 2004 and Pakistan’s participation in the 1996 World Cup hosted in India were moments when ordinary spectators could cheer a cover drive irrespective of the batsman’s passport. The very fact that governments have sometimes chosen to use cricket as a tool of rapprochement suggests that the sport holds a special capacity to transcend political divides.
Sport itself has always sat at the junction of rivalry and fellowship. Aristotle saw games as part of leisure, necessary for the cultivation of civic virtue. The Olympic Truce of ancient Greece, revived in modern times, embodied the idea that contests on the field could suspend contests off of it. The South African example after apartheid, when Nelson Mandela donned a Springbok jersey at the 1995 Rugby World Cup, showed how sport could heal a wounded polity.
Against this backdrop, the call to cancel the India-Pakistan match risks impoverishing cricket of its potential to build bridges. To say that playing Pakistan dishonours Indian soldiers is to treat sport as a mere extension of politics. Sport is not reducible to politics: it’s also a space where citizens can experience one another as competitors, not enemies. That distinction matters. A good game of cricket can remind people that beyond the rhetoric of national security, there are human beings bowling yorkers and lofting sixes, acts that spectators from both sides can cheer, grumble about, and analyse over endless replays.
This isn’t to deny that politics already suffuses cricket. The selection of venues, the sponsorship deals, the choreography of opening ceremonies — all carry political weight. Nor can one ignore that militant groups have sometimes targeted cricket precisely because of its symbolic importance. But to cancel matches on the grounds that politics exists is to double down on cynicism. It is to concede that no space can remain where ordinary citizens of India and Pakistan might encounter each other beyond the logic of hostility.
The BCCI’s long entanglement with political elites makes it harder to resist such calls. When cricket administrators behave like political courtiers, it becomes easier for petitioners to argue that cricket is an extension of the state and must therefore obey the same dictates of foreign policy. But precisely because the BCCI has failed to safeguard cricket’s autonomy, the rest of us must insist that the game not be reduced to a political pawn.
The petitioners invoked “national interest” and “national dignity” yet the Constitution of India doesn’t enshrine dignity in the form of cancelling sports fixtures. It enshrines dignity through the protection of rights, the pursuit of fraternity, and the preservation of liberty. Article 51 even enjoins the state to foster respect for international law and promote peace. Seen in that light, playing cricket with Pakistan is not an affront to dignity but an affirmation of the constitutional aspiration to fraternity across borders.
If anything undermines dignity, it’s the reduction of sport to a theatre of grievance. It’s the refusal to allow people an arena where they can cheer together, even if for rival teams. National interest is not served by foreclosing every possible space of conviviality: it’s served by demonstrating that India is confident enough in its own constitutional foundations to play, to lose, to win, and to play again.
The Supreme Court was right to dismiss the petition with a simple phrase: “It’s a match, let it be.” That lightness is what cricket needs in India today. To insist that every over bowled is a statement of geopolitics is to impoverish both politics and cricket.
At the heart of particle physics lies the Standard Model, a theory that has stood for nearly half a century as the best description of the subatomic realm. It tells us what particles exist, how they interact, and why the universe is stable at the smallest scales. The Standard Model has correctly predicted the outcomes of several experiments testing the limits of particle physics. Even then, however, physicists know that it’s incomplete: it can’t explain dark matter, why matter dominates over antimatter, and why the force of gravity is so weak compared to the other forces. To settle these mysteries, physicists have been conducting very detailed tests of the Model, each of which has either tightened their confidence in a hypothetical explanation or has revealed a new piece of the puzzle.
A central character in this story is a subatomic particle called the W boson — the carrier of the weak nuclear force. Without it, the Sun wouldn’t shine because particle interactions involving the weak force are necessary for nuclear fusion to proceed. W bosons are also unusual among force carriers: unlike photons (the particles of light), they’re massive, about 80 times heavier than a proton. This difference — between the massless photon and the massive W boson — arises due to a process called the Higgs mechanism. Physicists first proposed this mechanism in 1964 and confirmed it was real when they found the Higgs boson particle at the Large Hadron Collider (LHC) in 2012.
The particles of the Standard Model of particle physics. The W bosons are shown among the force-carrier particles on the right. The photon is denoted γ. The electron (e) and muon (µ) are shown among the leptons. The corresponding neutrino flavours are shown on the bottom row, denoted ν. Credit: Daniel Dominguez/CERN
But finding the Higgs particle was only the beginning. To prove that the Higgs mechanism really works the way the theory says, physicists need to check its predictions in detail. One of the sharpest tests involves how W bosons scatter off each other at high energies. The key to this test is the W boson’s polarisation states. Like the photon, the W boson carries one unit of a property called quantum spin, but because it’s massive it has an extra polarisation state that the massless photon lacks. If a W boson’s polarisation points sideways, perpendicular to its direction of travel, the particle is said to be transversely polarised; if it points along the direction of travel, the particle is longitudinally polarised. The longitudinal ones are special because their behaviour is directly tied to the Higgs mechanism.
Specifically, if the Higgs mechanism and the Higgs boson don’t exist, calculations involving longitudinal W bosons scattering off each other quickly give rise to nonsensical mathematical results in the theory. The Higgs boson acts like a regulator, preventing the mathematics from ‘blowing up’. In fact, in the 1970s, the theoretical physicists Benjamin Lee, Chris Quigg, and Hugh Thacker showed that without the Higgs boson, the weak force would become uncontrollably powerful at high energies, leading to the breakdown of the theory. Their work was an important theoretical pillar that justified building the colossal LHC machine to search for the Higgs boson particle.
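Schematically, and with numerical factors dropped (my gloss, not a calculation from the paper), the trouble is that without a Higgs boson the amplitude for longitudinally polarised W bosons to scatter off each other grows with the square of the collision energy $E$:

$$\mathcal{M}(W_L W_L \to W_L W_L) \sim \frac{E^2}{v^2}, \qquad v \approx 246\ \text{GeV},$$

where $v$ is the electroweak vacuum expectation value. Probabilities built from an amplitude that keeps growing like this eventually exceed 1, which is the nonsensical result. Diagrams that include the Higgs boson contribute a term that cancels the $E^2$ growth, leaving a remainder that depends on the Higgs boson’s mass instead, which is what allowed Lee, Quigg, and Thacker to place an upper bound on that mass.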
The terms Higgs boson, Higgs field, and Higgs mechanism describe related but distinct ideas. The Higgs field is a kind of invisible medium thought to fill all of space. Particles like W bosons and Z bosons interact with this field as they move and through that interaction they acquire mass. This is the Higgs mechanism: the process by which particles that would otherwise be massless become heavy.
The Higgs boson is different: it’s a particle that represents a vibration or a ripple in the Higgs field, just as a photon is a ripple in the electromagnetic field. Its discovery in 2012 confirmed that the field is real and not just something that appears in the mathematics of the theory. But discovery alone doesn’t prove the mechanism is doing everything the theory demands. To test that, physicists need to look at situations where the Higgs boson’s balancing role is crucial.
The scattering of longitudinally polarised W bosons is a good example. Without the Higgs boson, the probabilities of these scatterings grow uncontrollably at higher energies, but with the Higgs boson in the picture, they stay within sensible bounds. Observing longitudinally polarised W bosons behaving as predicted is thus evidence for the particle as well as a check on the field and the mechanism behind it.
Imagine a roller-coaster without brakes. As it goes faster and faster, there’s nothing to stop it from flying off the tracks. The Higgs mechanism is like the braking system that keeps the ride safe. Observing longitudinally polarised W bosons in the right proportions is equivalent to checking that the brakes actually work when the roller-coaster speeds up.
Another path that physicists once considered and that didn’t involve a Higgs boson at all was called technicolor theory. Instead of a single kind of Higgs boson giving the W bosons their mass, technicolor proposed a brand-new force. Just as the strong nuclear force binds quarks into protons and neutrons, the hypothetical technicolor force would bind new “technifermion” particles into composite states. These bound states would mimic the Higgs boson’s job of giving particles mass, while producing their own new signals in high-energy collisions.
The crucial test to check whether some given signals are due to the Higgs boson or due to technicolor lies in the behaviour of longitudinally polarised W bosons. In the Standard Model, their scattering is kept under control by the Higgs boson’s balancing act. In technicolor, by contrast, there is no Higgs boson to cancel the runaway growth. The probability of the scattering of longitudinally polarised W bosons would therefore rise sharply with more energy, often leaving clearly excessive signals in the data.
Thus, observing longitudinally polarised W bosons at rates consistent with the predictions of the Standard Model, and not finding any additional signals, would also strengthen the case for the Higgs mechanism and weaken that for technicolor and other “Higgs-less” theories.
At the Large Hadron Collider, the cleanest way to look for such W bosons is in a phenomenon called vector boson scattering (VBS). In VBS, two protons collide and the quarks inside them emit W bosons. These W bosons then scatter off each other before decaying into lighter particles. The leftover quarks form narrow sprays of particles, or ‘jets’, that fly far forward.
If the two W bosons happen to have the same electric charge — i.e. both positive or both negative — the process is even more distinctive. This same-sign WW scattering is quite rare, and that’s an advantage: few other processes produce the same signature, so it’s easier to spot in the debris of particle collisions.
Both ATLAS and CMS, the two giant detectors at the LHC, had previously observed same-sign WW scattering without breaking the signal down by polarisation. In 2021, the CMS collaboration reported the first hint of longitudinal polarisation but at a statistical significance of only 2.3 sigma, which isn’t good enough (particle physicists prefer at least 3 sigma). So after the LHC completed its second run in 2018, collecting data from around 10 quadrillion collisions between protons, the ATLAS collaboration set out to analyse it and deliver the evidence. This group’s study was published in Physical Review Letters on September 10.
The layout of the Large Hadron Collider complex at CERN. Protons (p) are pre-accelerated to higher energies in steps — at the Proton Synchrotron (PS) and then the Super Proton Synchrotron (SPS) — before being injected into the LHC ring. The machine then draws two opposing beams of protons from the SPS and accelerates them to nearly the speed of light before colliding them head-on at four locations, under the gaze of the four detectors. ATLAS and CMS are two of them. Credit: Arpad Horvath (CC BY-SA)
The challenge of finding longitudinally polarised W bosons is like finding a particular needle in a very large haystack where most of the needles look nearly identical. So ATLAS designed a special strategy.
When one W boson decays, the result is one electron or muon and one neutrino. If the W boson is positively charged, for example, the decay could be to one anti-electron and one electron-neutrino or to one anti-muon and one muon-neutrino. (Anti-electrons and anti-muons are positively charged.) If the W boson is negatively charged, the products could be one electron and one electron-antineutrino or one muon and one muon-antineutrino. So first, ATLAS zeroed in on events with two electrons, two muons, or one of each, both carrying the same electric charge. Neutrinos, however, are really hard to catch and study, so the ATLAS group looked for their absence rather than their presence. In all these particle interactions, the law of conservation of momentum holds — which means the momenta of all the visible particles in a collision, measured transverse to the beam, should add up to zero. If they don’t, the missing amount would have been carried away by neutrinos, like money unaccounted for in a ledger.
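Here’s a minimal sketch of that bookkeeping, written by me for illustration rather than taken from ATLAS’s software. Each visible particle is reduced to its transverse momentum and azimuthal angle; the ‘missing’ transverse momentum is whatever is needed to balance the books.

```python
import math

# Toy illustration: infer "missing transverse momentum" from the visible particles
# in a collision. Each particle is represented only by the magnitude of its
# transverse momentum (pt, in GeV) and its azimuthal angle (phi, in radians).

def missing_transverse_momentum(visible_particles):
    """Return the magnitude and direction of the transverse-momentum imbalance.

    visible_particles: list of (pt, phi) tuples for every reconstructed particle.
    In a perfectly measured event with no neutrinos, the vector sum is zero;
    a large imbalance hints at one or more neutrinos escaping undetected.
    """
    px = sum(pt * math.cos(phi) for pt, phi in visible_particles)
    py = sum(pt * math.sin(phi) for pt, phi in visible_particles)
    # The missing transverse momentum is minus the vector sum of what was seen.
    met = math.hypot(px, py)
    met_phi = math.atan2(-py, -px)
    return met, met_phi

# Example event: two same-sign leptons and two forward jets (made-up numbers).
event = [(45.0, 0.3), (38.0, 2.9), (120.0, 1.1), (95.0, -2.0)]
met, met_phi = missing_transverse_momentum(event)
print(f"missing transverse momentum: {met:.1f} GeV at phi = {met_phi:.2f} rad")
```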
This analysis also required an event of interest to have at least two jets (reconstructed from streams of particles) with a combined energy above 500 GeV and separated widely in rapidity (which is a measure of their angle relative to the beam). This particular VBS pattern — two electrons/muons, two jets, and missing momentum — is the hallmark of same-sign WW scattering.
Second, even with these strict requirements, impostors creep in. The biggest source of confusion is WZ production, a process in which another subatomic particle called the Z boson decays invisibly or one of its decay products goes unnoticed, making the event resemble WW scattering. Other sources include electrons having their charges mismeasured, jets masquerading as electrons/muons, and some quarks producing electrons/muons that slip into the sample. To control for all this noise, the ATLAS group focused on control regions: subsets of events dominated by one distinct kind of background, which the group could use to estimate that noise and cleanly ‘subtract’ it from the data to reveal same-sign WW scattering, thus also reducing uncertainty in the final results.
Third, and this is where things get nuanced: the differences between transverse and longitudinally polarised W bosons show up in distributions — i.e. how far apart the electrons/muons are in angle, how the jets are oriented, and the energy of the system. But since no single variable could tell the whole story, the ATLAS group combined them using deep neural networks (a toy sketch of such a classifier follows the list below). These machine-learning models were fed up to 20 kinematic variables — including jet separations, particle angles, and missing momentum patterns — and trained to distinguish between three groups:
(i) Two transversely polarised W bosons;
(ii) One transversely polarised W boson and one longitudinally polarised W boson; and
(iii) Two longitudinally polarised W bosons.
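Here is the toy classifier promised above: a small neural network built with scikit-learn rather than ATLAS’s actual software, trained on made-up stand-ins for the roughly 20 kinematic variables. It only shows the shape of the approach, not the analysis itself.

```python
# Minimal sketch: a small neural network sorting simulated events into the three
# polarisation categories, in the spirit of (but far simpler than) the deep
# networks ATLAS used. The features and labels here are random stand-ins, so the
# network learns nothing physical; the point is the structure of the pipeline.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_events, n_features = 10_000, 20   # ~20 kinematic variables per event

X = rng.normal(size=(n_events, n_features))   # pretend jet separations, angles, missing momentum...
y = rng.integers(0, 3, size=n_events)         # pretend labels: 0 = TT, 1 = TL, 2 = LL

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=50, random_state=0)
clf.fit(X_train, y_train)

# In the real analysis it is the per-class scores, not hard labels, that feed
# the maximum-likelihood fit described next.
scores = clf.predict_proba(X_test)
print(scores[:3])
```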
Fourth, the group combined the outputs of these neural networks and fit them to the data with a maximum likelihood method. When physicists make measurements, they often don’t directly see what they’re measuring. Instead, they see data points that could have come from different possible scenarios. A likelihood is a number that tells them how probable the data is in a given scenario. If a model says “events should look like this,” they can ask: “Given my actual data, how likely is that?” The maximum likelihood method helps them pick the parameters that make the given data most likely to occur.
For example, say you toss a coin 100 times and get 62 heads. You wonder: is the coin fair or biased? If it’s fair, the chance of exactly 62 heads is small. If the coin is slightly biased (heads with probability 0.62), the chance of 62 heads is higher. The maximum likelihood estimate is to pick the bias, or probability of heads, that makes your actual result most probable. So here the method would say, “The coin’s bias is 0.62” — because this choice maximises the likelihood of seeing 62 heads out of 100.
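The coin example can be worked out numerically in a few lines; this sketch simply scans candidate biases and picks the one that maximises the likelihood of seeing 62 heads in 100 tosses.

```python
# The coin example above, done numerically: scan candidate biases and pick the
# one that maximises the likelihood of 62 heads in 100 tosses.
import numpy as np
from scipy.stats import binom

n_tosses, n_heads = 100, 62
candidate_biases = np.linspace(0.01, 0.99, 99)

likelihoods = binom.pmf(n_heads, n_tosses, candidate_biases)
best = candidate_biases[np.argmax(likelihoods)]
print(f"maximum-likelihood bias ≈ {best:.2f}")   # ≈ 0.62, as argued above
```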
In their analysis, the ATLAS group used the maximum likelihood method to check whether the LHC data ‘preferred’ a contribution from longitudinal scattering, after subtracting what background noise and transverse-only scattering could explain.
The results are a milestone in experimental particle physics. In the September 10 paper, ATLAS reported evidence for longitudinally polarised W bosons in same-sign WW scattering with a significance of 3.3 sigma — close to the 4 sigma significance expected from the Standard Model’s predictions. This means the data behaved as theory predicted, with no unexpected excess or deficit.
It’s also bad news for technicolor theory. By observing longitudinal W bosons at exactly the rates predicted by the Standard Model, and not finding any additional signals, the ATLAS data strengthens the case that it is the Higgs mechanism, rather than a technicolor force, that keeps the W bosons’ scattering probability in check.
The measured cross-section for events with at least one longitudinally polarised W boson was 0.88 femtobarns, with an uncertainty of 0.3 femtobarns. These figures essentially mean that there were only a few hundred same-sign WW scattering events in the full dataset of around 10 quadrillion proton-proton collisions. The fact that ATLAS could pull this signal out of such a background-heavy environment is a testament to the power of modern machine learning working with advanced statistical methods.
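As a rough, back-of-the-envelope illustration (not a number from the paper): the expected event count is the cross-section multiplied by the integrated luminosity, and ATLAS’s full second-run dataset corresponds to roughly 140 inverse femtobarns.

```python
# A rough check, not a figure from the ATLAS paper: expected events equal the
# cross-section multiplied by the integrated luminosity. The ~140 inverse
# femtobarns for ATLAS's full Run 2 dataset is an approximate, assumed value.
cross_section_fb = 0.88     # fb, events with >=1 longitudinally polarised W (from the text)
luminosity_inv_fb = 140     # fb^-1, approximate Run 2 integrated luminosity

expected_events = cross_section_fb * luminosity_inv_fb
print(f"~{expected_events:.0f} such events expected, before detector acceptance and efficiency")
```

That hundred-odd expected events with a longitudinal W sits inside the larger same-sign WW sample of a few hundred events mentioned above.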
The group was also able to quantify the composition of signals. Among others:
About 58% of events were genuine WW scattering;
Roughly 16% were from WZ production; and
Around 18% arose from stray electrons/muons, charge misidentification, or energetic photons mistaken for electrons.
One way to appreciate the importance of these findings is by analogy: imagine trying to hear a faint melody being played by a single violin in the middle of a roaring orchestra. The violin is the longitudinal signal; the orchestra is the flood of background noise. The neural networks are like sophisticated microphones and filters, tuned to pick out the violin’s specific tone. The fact that ATLAS could not only hear it but also measure its volume to match the score written by the Standard Model is remarkable.
Perhaps in the same vein, these results are more than just another tick mark for the Standard Model. They’re a direct test of the Higgs mechanism in action. The discovery of the Higgs boson in 2012 was groundbreaking but proving that the Higgs mechanism performs its theoretical role requires demonstrating that it regulates the scattering of W bosons. By finding evidence for longitudinally polarised W bosons at the expected rate, ATLAS has done just that.
The results also set the stage for the future. The LHC is currently being upgraded to a form called the High-Luminosity LHC; when it begins operating later this decade, it will collect datasets about 10x larger than the LHC did in its second run. With that much more data, physicists will be able to study differential distributions, i.e. how the rate of longitudinal scattering varies with energy, angle or jet separation. These patterns are sensitive to hitherto unknown particles and forces, such as additional Higgs-like particles or modifications to the Higgs mechanism itself. That is, even small deviations from the Standard Model’s predictions could hint at new frontiers in particle physics.
Indeed, history has reminded physicists that such precision studies often uncover surprises. Physicists didn’t discover neutrino oscillations by finding a new particle but by noticing that the number of neutrinos arriving from the Sun at detectors on Earth didn’t match expectations. Similarly, minuscule mismatches between theory and observations in the scattering of W bosons could someday reveal new physics — and if they do, the seeds will have been planted by studies like that of the ATLAS group.
On the methodological front, the analysis also showcases how particle physics is evolving. ‘Classical’ analyses once banked on tracking single variables; now, deep learning plays a starring role, combining many variables into a single discriminant and allowing ATLAS to pull the faint signal of longitudinally polarised W bosons from the noise. This approach will only become more important as both datasets and physicists’ ambitions expand.
Perhaps the broadest lesson in all this is that science often advances by the unglamorous task of verifying the details. The discovery of the Higgs boson answered one question but opened many others; among them, measuring how it affects the scattering of W bosons is one of the more direct ways to probe whether the Standard Model is complete or just the first chapter of a longer story. Either way, the pursuit exemplifies the spirit of checking, rechecking, testing, and probing until scientists truly understand how nature works at extreme precision.
Featured image: The massive mural of the ATLAS detector at CERN painted by artist Josef Kristofoletti. The mural is located at the ATLAS Experiment site and shows on two perpendicular walls the detector with a collision event superimposed. The event on the large wall shows a simulation of an event that would be recorded in ATLAS if a Higgs boson was produced. The cavern of the ATLAS Experiment with the detector is 100 m directly below the mural. The height of the mural is about 12 m. The actual ATLAS detector is more than twice as big. Credit: Claudia Marcelloni, Michael Barnett/CERN.
Since Union finance minister Nirmala Sitharaman’s announcement last week that India’s Goods and Services Tax (GST) rates will be rationalised anew from September 22, I’ve been seeing a flood of pieces all in praise — and why not?
The GST regime has been somewhat controversial since its launch because, despite simplifying compliance for businesses and industry, it increased costs for consumers. The Indian government exacerbated that pain point by undermining the fiscal federalism of the Union: it increased its own revenues at the expense of the states’ and cut allocations to them.
While there is (informed) speculation that the next Finance Commission will further undercut the devolution of funds to the states, GST 2.0 offers some relief to consumers in the form of making various products more affordable. Populism is popular, after all.
However, increasing affordability isn’t always a good thing even if your sole goal is to increase consumption. This is particularly borne out in the food and nutrition domain.
For example, under the new tax regime, from September 22, the GST on pizza bread will slip from 5% to zero, and this applies to both sourdough pizza bread and maida (refined flour) pizza bread. However, because the populace is more familiar with maida as an ingredient than with sourdough, and because maida as a result enjoys greater economies of scale and is thus less expensive (before tax), the demand for maida bread is likely to increase more than the demand for sourdough bread.
This is unfortunate: ideally, sourdough bread should be more affordable — or, alternatively, the two breads should be equally affordable and carry threshold-based front-of-pack labelling. That is to say, liberating consumers to buy new food products or more of the old ones without simultaneously empowering them to make more informed choices could tilt demand in favour of unhealthier foods.
Ultimately, the burden of non-communicable diseases in the population will increase, as will consumers’ expenses on healthcare, dietary interventions, and so on. I explained this issue in The Hindu on September 9, 2025, and set out solutions that the Indian government must implement in its food regulation apparatus posthaste.
Without these measures, GST 2.0 will likely be bad news for India’s dietary and nutritional ambitions.
Rubidium isn’t respectable. It isn’t iron, whose strength built railways and bridges, and it isn’t silicon, whose valley became a dubious shrine to progress. Rubidium explodes in water. It tarnishes in air. It’s awkward, soft, and unfit for the neat categories by which schoolteachers tell their students how the world is made. And yet, precisely because of this unruly character, it insinuates itself into the deepest places of science, where precision, control, and prediction are supposed to reign.
For centuries astronomers counted the stars, then engineers counted pendulums and springs — all good and respectable. But when humankind’s machines demanded nanosecond accuracy, it was rubidium, a soft metal that no practical mind would have chosen, that became the metronome of the world. In its hyperfine transitions, coaxed by lasers and microwave cavities, the second is carved more finely than human senses can comprehend. Without rubidium’s unstable grace, GPS collapses, financial markets fall into confusion, trains and planes drift out of sync. The fragile and the explosive have become the custodians of order.
What does this say about the hierarchies of knowledge? Textbooks present a suspiciously orderly picture: noble gases are inert, alkali metals are reactive, and their properties can be arranged neatly in columns of the periodic table, they say. Thus rubidium is placed there like a botanical specimen. But in practice, scientists turned to it not because of its box in a table but because of accidents, conveniences, and contingencies. Its resonance lines happen to fall where lasers can reach them easily. Its isotopes are abundant enough to trap, cool, and measure. The entire edifice of atomic clocks and exotic Bose-Einstein condensates rests not on an inevitable logic of discovery but on this convenient accident. Had rubidium’s levels been slightly different, perhaps caesium or potassium would have played the starring role. Rational reconstruction will never admit this. It prefers tidy sequences and noble inevitabilities. Rubidium, however, laughs at such tidiness.
Take condensed matter. In the 2010s, solar researchers sought efficiency in perovskite crystals. These crystals were fragile, prone to decomposition, but again rubidium slipped in: a small ion among larger ones, it stabilised the lattice. A substitution here, a tweak there, and suddenly the efficiency curve rose. Was this progress inevitable? No; it was bricolage: chemists trying one ion after another until the thing worked. And the journals now describe rubidium as if it were always destined to “enhance stability”. But destiny is hindsight dressed as foresight. What actually happened was messy. Rubidium’s success was contingent, not planned.
Then there’s the theatre of optics. Rubidium’s spectral lines at 780 nm and 795 nm became the experimentalist’s playground. When lasers cooled atoms to microkelvin temperatures and clouds of rubidium atoms became nearly motionless, they merged into collective wavefunctions and formed the first Bose-Einstein condensates. The textbooks now call this a triumph of theory, the “inevitable” confirmation of quantum statistics. Nonsense! The condensates weren’t predicted as practical realities — they were curiosities, dismissed by many as impossible in the laboratory. What made them possible was a melange of techniques: magnetic traps, optical molasses, evaporative cooling. And rubidium, again, happened to be convenient, its transitions accessible, its abundance generous, its behaviour forgiving. Out of this messiness came a Nobel Prize and an entire field. Rubidium teaches us that progress comes not from the logical unfolding of ideas but from playing with elements that allegedly don’t belong.
Rubidium rebukes dogma. It’s neither grand nor noble, yet it controls time, stabilises matter, and demonstrates the strangest predictions of quantum theory. It shows science doesn’t march forward by method alone. It stumbles, it improvises, it tries what happens to be at hand. Philosophers of science prefer to speak of method and rigour yet their laboratories tell a story of messy rooms where equipment is tuned until something works, where grad students swap parts until the resonance reveals itself, where fragile metals are pressed into service because they happen to fit the laser’s reach.
Rubidium teaches us that knowledge is anarchic. It isn’t carved from the heavens by pure reason but coaxed from matter through accidents, failures, and improvised victories. Explosive in one setting, stabilising in another; useless in industry, indispensable in physics — the properties of rubidium are contradictory and it’s precisely this contradiction that makes it valuable. To force it into the straitjacket of predictable science is to rewrite history as propaganda. The truth is less comfortable: rubidium has triumphed where theory has faltered.
And yet, here we are. Our planes and phones rely on rubidium clocks. Our visions of renewable futures lean on rubidium’s quiet strengthening of perovskite cells. Our quantum dreams — of condensates, simulations, computers, and entanglement — are staged with rubidium atoms as actors. An element kings never counted and merchants never valued has become the silent arbiter of our age. Science itself couldn’t have planned it better; indeed, it didn’t plan at all.
Rubidium is the fragment in the mosaic that refuses to fit yet holds the pattern together. It’s the soft yet explosive, fragile yet enduring accident that becomes indispensable. Its lesson is simple: science also needs disorder, risk, and the unruliness of matter to thrive.
Featured image: A sample of rubidium metal. Credit: Dnn87 (CC BY).
In science, paradoxes often appear when familiar rules are pushed into unfamiliar territory. One of them is Parrondo’s paradox, a curious mathematical result showing that when two losing strategies are combined, they can produce a winning outcome. This might sound like trickery but the paradox has deep connections to how randomness and asymmetry interact in the physical world. In fact its roots can be traced back to a famous thought experiment explored by the US physicist Richard Feynman, who analysed whether one could extract useful work from random thermal motion. The link between Feynman’s thought experiment and Parrondo’s paradox demonstrates how chance can be turned into order when the conditions are right.
Imagine two games. Each game, when played on its own, is stacked against you. In one, the odds are slightly less than fair, e.g. you win 49% of the time and lose 51%. In another, the rules are even more complex, with the chances of winning and losing depending on your current position or capital. If you keep playing either game alone, the statistics say you will eventually go broke.
But then there’s a twist. If you alternate the games — sometimes playing one, sometimes the other — your fortune can actually grow. This is Parrondo’s paradox, proposed in 1996 by the Spanish physicist Juan Parrondo.
The answer to how combining losing games can result in a winning streak lies in how randomness interacts with structure. In Parrondo’s games, the rules are not simply fair or unfair in isolation; they have hidden patterns. When the games are alternated, these patterns line up in such a way that random losses become rectified into net gains.
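A quick simulation makes this concrete. The sketch below uses the textbook version of Parrondo’s two games (a simple biased coin for game A, a capital-dependent rule for game B), with the parameter choices commonly quoted in the literature; it isn’t drawn from any particular paper.

```python
# The textbook version of Parrondo's games (eps = 0.005). Game A is a slightly
# unfavourable coin toss; game B's odds depend on whether your capital is a
# multiple of 3. Each loses on its own; mixed at random, the pair wins.
# This is a generic illustration, not code from any particular paper.
import random

EPS = 0.005

def play_A(capital):
    return capital + 1 if random.random() < 0.5 - EPS else capital - 1

def play_B(capital):
    p_win = (0.10 - EPS) if capital % 3 == 0 else (0.75 - EPS)
    return capital + 1 if random.random() < p_win else capital - 1

def play_mixed(capital):
    return play_A(capital) if random.random() < 0.5 else play_B(capital)

def average_final_capital(strategy, rounds=10_000, trials=200):
    total = 0
    for _ in range(trials):
        capital = 0
        for _ in range(rounds):
            capital = strategy(capital)
        total += capital
    return total / trials

random.seed(1)
print("game A alone:      ", average_final_capital(play_A))      # negative on average
print("game B alone:      ", average_final_capital(play_B))      # negative on average
print("A and B at random: ", average_final_capital(play_mixed))  # positive on average
```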
Say there’s a perfectly flat surface in front of you. You place a small bead on it and then you constantly jiggle the surface. The bead jitters back and forth. Because the noise you’re applying to the bead’s position is unbiased, the bead simply wanders around in different directions on the surface. Now, say you introduce a switch that alternates the surface between two states. When the switch is ON, an ice-tray shape appears on the surface. When the switch is OFF, it becomes flat again. This ice-tray shape is special: the cups are slightly lopsided because there’s a gentle downward slope from left to right in each cup. At the right end, there’s a steep wall. If you’re jiggling the surface when the switch is OFF, the bead diffuses a little towards the left, a little towards the right, and so on. When you throw the switch to ON, the bead falls into the nearest cup. Because each cup is slightly tilted towards the right, the bead eventually settles near the steep wall there. Then you move the switch to OFF again.
As you repeat these steps with more and more beads over time, you’ll see they end up a little to the right of where they started. This is Parrondo’s paradox in physical form. The jittering motion you applied to the surface caused each bead to move randomly. The switch you used to alter the shape of the surface allowed you to expend some energy in order to rectify the beads’ randomness.
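The switching-surface picture can also be simulated in a few lines. In this toy version, the cup width, the position of the low point near the steep wall, and the size of the jiggle are all made-up numbers chosen only to show the drift.

```python
# A toy version of the switching-surface picture above. Cups have width 1.0 and
# their low point (next to the steep wall) sits at 0.9 within each cup; these
# numbers are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

CUP_WIDTH = 1.0
LOW_POINT = 0.9          # where a bead settles inside its cup (near the right wall)
JIGGLE = 0.3             # spread of the random motion while the switch is OFF

def flash_once(positions):
    # Switch OFF: the surface is flat, so the beads diffuse symmetrically
    positions = positions + rng.normal(0.0, JIGGLE, size=positions.shape)
    # Switch ON: each bead falls into its current cup and settles near the steep wall
    cup_index = np.floor(positions / CUP_WIDTH)
    return cup_index * CUP_WIDTH + LOW_POINT

beads = np.zeros(10_000)           # all beads start at position 0
for _ in range(100):               # 100 OFF/ON cycles
    beads = flash_once(beads)

print("average displacement:", beads.mean())   # drifts to the right (positive)
```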
The reason why Parrondo’s paradox isn’t just a mathematical trick lies in physics. At the microscopic scale, particles of matter are in constant, jittery motion because of heat. This restless behaviour is known as Brownian motion, named after the botanist Robert Brown, who observed pollen grains dancing erratically in water under a microscope in 1827. At this scale, randomness is unavoidable: molecules collide, rebound, and scatter endlessly.
Scientists have long wondered whether such random motion could be tapped to extract useful work, perhaps to drive a microscopic machine. This was Feynman’s thought experiment as well, involving a device called the Brownian ratchet, a.k.a. the Feynman-Smoluchowski ratchet. The Polish physicist Marian Smoluchowski dreamt up the idea in 1912 and Feynman popularised it in a lecture 50 years later, in 1962.
Picture a set of paddles immersed in a fluid, constantly jolted by Brownian motion. A ratchet and pawl mechanism is attached to the paddles (see video below). The ratchet allows the paddles to rotate in one direction but not the other. It seems plausible that the random kicks from molecules would turn the paddles, which the ratchet would then lock into forward motion. Over time, this could spin a wheel or lift a weight.
In one of his famous physics lectures in 1962, Feynman analysed the ratchet. He showed that the pawl itself would also be subject to Brownian motion. It would jiggle, slip, and release under the same thermal agitation as the paddles. When everything is at the same temperature, the forward and backward slips would cancel out and no net motion would occur.
This insight was crucial: it preserved the rule that free energy can’t be extracted from randomness at equilibrium. If motion is to be biased in only one direction, there needs to be a temperature difference between different parts of the ratchet. In other words, random noise alone isn’t enough: you also need an asymmetry, or what physicists call nonequilibrium conditions, to turn randomness into work.
Let’s return to Parrondo’s paradox now. The paradoxical games are essentially a discrete-time abstraction of Feynman’s ratchet. The losing games are like unbiased random motion: fluctuations that on their own can’t produce net gain because the gains cancel out. But when they’re alternated cleverly, they mimic the effect of adding asymmetry. The combination rectifies the randomness, just as a physical ratchet can rectify the molecular jostling when a gradient is present.
This is why Parrondo explicitly acknowledged his inspiration from Feynman’s analysis of the Brownian ratchet. Where Feynman used a wheel and pawl to show how equilibrium noise can’t be exploited without a bias, Parrondo created games whose hidden rules provided the bias when they were combined. Both cases highlight a universal theme: randomness can be guided to produce order.
The implications of these ideas extend well beyond thought experiments. Inside living cells, molecular motors like kinesin and myosin actually function like Brownian ratchets. These proteins move along cellular tracks by drawing energy from random thermal kicks with the aid of a chemical energy gradient. They demonstrate that life itself has evolved ways to turn thermal noise into directed motion by operating out of equilibrium.
Parrondo’s paradox also has applications in economics, evolutionary biology, and computer algorithms. For example, alternating between two investment strategies, each of which is poor on its own, may yield better long-term outcomes if the fluctuations in markets interact in the right way. Similarly, in genetics, when harmful mutations alternate in certain conditions, they can produce beneficial effects for populations. The paradox provides a framework to describe how losing at one level can add up to winning at another.
Feynman’s role in this story is historical as well as philosophical. By dissecting the Brownian ratchet, he demonstrated how deeply the laws of thermodynamics constrain what’s possible. His analysis reminded physicists that intuition about randomness can be misleading and that only careful reasoning could reveal the real rules.
In 2021, a group of scientists from Australia, Canada, France, and Germany wrote in Cancers that the mathematics of Parrondo’s paradox could also illuminate the biology of cancerous tumours. Their starting point was the observation that cancer cells behave in ways that often seem self-defeating: they accumulate genetic and epigenetic instability, devolve into abnormal states, sometimes stop dividing altogether, and often migrate away from their original location and perish. Each of these traits looks like a “losing strategy” — yet cancers that use these ‘strategies’ together are often persistent.
The group suggested that the paradox arises because cancers grow in unstable, hostile environments. Tumour cells deal with low oxygen, intermittent blood supply, attacks by the immune system, and toxic drugs. In these circumstances, no single survival strategy is reliable. A population of only stable tumour cells would be wiped out when the conditions change. Likewise a population of only unstable cells would collapse under its own chaos. But by maintaining a mix, the group contended, cancers achieve resilience. Stable, specialised cells can exploit resources efficiently while unstable cells with high plasticity constantly generate new variations, some of which could respond better to future challenges. Together, the team continued, the cancer can alternate between the two sets of cells so that it can win.
The scientists also interpreted dormancy and metastasis of cancers through this lens. Dormant cells are inactive and can lie hidden for years, escaping chemotherapy drugs that are aimed at dividing cells. Once the drugs have faded, they restart growth. And while a migrating cancer cell has a high chance of dying off, even one success can seed a tumour in a new tissue.
On the flip side, the scientists argued that cancer therapy can also be improved by embracing Parrondo’s paradox. In conventional chemotherapy, doctors repeatedly administer strong drugs, creating a strategy that often backfires: the therapy kills off the weak, leaving the strong behind — but in this case the strong are the very cells you least want to survive. By contrast, adaptive approaches that alternate periods of treatment with rest or that mix real drugs with harmless lookalikes could harness evolutionary trade-offs inside the tumour and keep it in check. Just as cancer may use Parrondo’s paradox to outwit the body, doctors may one day use the same paradox to outwit cancer.
On August 6, physicists from Lanzhou University in China published a paper in Physical Review E discussing just such a possibility. They focused on chemotherapy, which is usually delivered in one of two main ways. The first, called the maximum tolerated dose (MTD), uses strong doses given at intervals. The second, called low-dose metronomic (LDM), uses weaker doses applied continuously over time. Each method has been widely tested in clinics and each one has drawbacks.
MTD often succeeds at first by rapidly killing off drug-sensitive cancer cells. In the process, however, it also paves the way for the most resistant cancer cells to expand, leading to relapse. LDM on the other hand keeps steady pressure on a tumour but can end up either failing to control sensitive cells if the dose is too low or clearing them so thoroughly that resistant cells again dominate if the dose is too strong. In other words, both strategies can be losing games in the long run.
The question the study’s authors asked was whether combining these two flawed strategies in a specific sequence could achieve better results than deploying either strategy on its own. This is the sort of situation Parrondo’s paradox describes, even if not exactly. While the paradox is concerned with combining outright losing strategies, the study has discussed combining two ineffective strategies.
To investigate, the researchers used mathematical models that treated tumours as ecosystems containing three interacting populations: healthy cells, drug-sensitive cancer cells, and drug-resistant cancer cells. They applied equations from evolutionary game theory that tracked how the fractions of these groups shifted in different conditions.
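For a flavour of how such a comparison is set up, here is a deliberately crude, discrete-time caricature of three competing populations under different dosing schedules. The growth rates, kill rates, and schedules are invented for illustration and are not the equations the Lanzhou group actually used, so the numbers it prints shouldn’t be read as reproducing their results.

```python
# A crude, discrete-time caricature of three competing populations: healthy (H),
# drug-sensitive (S), and drug-resistant (R) cells sharing limited space. The
# growth rates, kill rates, and schedules are invented for illustration; they
# are NOT the evolutionary game theory equations used in the actual study.

def simulate(dose_schedule, steps=300):
    H, S, R = 0.60, 0.25, 0.05                  # initial fractions of the shared space
    growth = {"H": 0.03, "S": 0.06, "R": 0.04}  # assumed: resistance carries a growth cost
    for t in range(steps):
        dose = dose_schedule(t)
        crowding = max(1.0 - (H + S + R), 0.0)  # room left under a shared carrying capacity
        H += growth["H"] * H * crowding - 0.01 * dose * H
        S += growth["S"] * S * crowding - 0.10 * dose * S   # the drug hits sensitive cells hardest
        R += growth["R"] * R * crowding                     # the drug doesn't touch resistant cells
        H, S, R = (max(x, 0.0) for x in (H, S, R))
    return H, S, R

mtd = lambda t: 1.0 if (t // 25) % 2 == 0 else 0.0                # strong pulses with breaks
ldm = lambda t: 0.35                                              # weak, continuous dosing
alternating = lambda t: ldm(t) if (t // 50) % 2 == 0 else mtd(t)  # LDM phase first

for name, schedule in [("MTD only", mtd), ("LDM only", ldm), ("alternating", alternating)]:
    H, S, R = simulate(schedule)
    print(f"{name:12s} final fractions: healthy={H:.2f} sensitive={S:.2f} resistant={R:.2f}")
```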
The models showed that in a purely MTD strategy, the resistant cells soon took over, and in a purely LDM strategy, the outcomes depended strongly on drug strength but still ended badly. But when the two schedules were alternated, the tumour behaved differently. The drug-sensitive cells were suppressed but not eliminated, and their persistence prevented the resistant cells from proliferating quickly. The team also found that the healthy cells survived longer.
Of course, tumours are not well-mixed soups of cells; in reality they have spatial structure. To account for this, the team put together computer simulations where individual cells occupied positions on a grid; grew, divided or died according to fixed rules; and interacted with their neighbours. This agent-based approach allowed the team to examine how pockets of sensitive and resistant cells might compete in more realistic tissue settings.
Their simulations confirmed the earlier results. A therapeutic strategy that alternated between MTD and LDM schedules extended the time before the resistant cells took over as well as the period during which the healthy cells dominated. When the model started with the LDM phase in particular, the sensitive cancer cells competed with the resistant cancer cells, and the arrival of the MTD phase then applied even more pressure on the latter.
This is an interesting finding because it suggests that the goal of therapy may not always be to eliminate every sensitive cancer cell as quickly as possible but, paradoxically, that sometimes it may be wiser to preserve some sensitive cells so that they can compete directly with resistant cells and prevent them from monopolising the tumour. In clinical terms, alternating between high- and low-dose regimens may delay resistance and keep tumours tractable for longer periods.
Then again this is cancer — the “emperor of all maladies” — and in silico evidence from a physics-based model is only the start. Researchers will have to test it in real, live tissue in animal models (or organoids) and subsequently in human trials. They will also have to assess which cancers, paired with which combinations of drugs, stand to benefit more (or less) from taking the Parrondo’s paradox route.
[University of London mathematical oncologist Robert] Noble … says that the method outlined in the new study may not be ripe for a real-world clinical setting. “The alternating strategy fails much faster, and the tumor bounces back, if you slightly change the initial conditions,” adds Noble. Liu and colleagues, however, plan to conduct in vitro experiments to test their mathematical model and to select regimen parameters that would make their strategy more robust in a realistic setting.
Neutrinos are among the most mysterious particles in physics. They are extremely light, electrically neutral, and interact so weakly with matter that trillions of them pass through your body each second without leaving a trace. They are produced in the Sun, nuclear reactors, the atmosphere, and by cosmic explosions. In fact neutrinos are everywhere — yet they’re almost invisible.
Despite their elusiveness, they have already upended physics. In the late 20th century, scientists discovered that neutrinos can oscillate, changing from one type to another as they travel, which is something that the simplest version of the Standard Model of particle physics — the prevailing theory of elementary particles — doesn’t predict. Because oscillations require neutrinos to have mass, this discovery revealed new physics. Today, scientists study neutrinos for what they might tell us about the universe’s structure and for possible hints of particles or forces yet unknown.
When neutrinos travel through space, they are known to oscillate between three types. This visualisation plots the composition of neutrinos (of 4 MeV energy) by type at various distances from a nuclear reactor. Credit: Public domain
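For a sense of the curve such a visualisation traces, here is the standard two-flavour approximation of the survival probability of reactor electron antineutrinos, evaluated with representative mixing parameters (the values below are illustrative, not taken from the image).

```python
# Two-flavour approximation of the survival probability of reactor electron
# antineutrinos. The mixing parameters are representative textbook values,
# used here only for illustration.
import math

def survival_probability(L_km, E_MeV, sin2_2theta=0.85, dm2_eV2=7.5e-5):
    # Standard two-flavour formula: P = 1 - sin^2(2θ) · sin^2(1.27 Δm² L / E),
    # with Δm² in eV², L in km and E in GeV.
    E_GeV = E_MeV / 1000.0
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

for L in [1, 10, 50, 100, 180]:   # distances from the reactor, in km
    print(f"{L:3d} km: P(still an electron antineutrino) = {survival_probability(L, 4.0):.2f}")
```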
However, detecting neutrinos is very hard. Because they rarely interact with matter, experiments must build massive detectors filled with dense material in the hopes that a small fraction of neutrinos will collide inside with atoms. One way to detect such collisions uses Cherenkov radiation, a bluish glow emitted when a charged particle moves through a medium like water or mineral oil faster than light does in that medium.
(This is allowed. The only speed limit is that of light in vacuum: 299,792,458 m/s.)
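The Cherenkov condition is easy to state in numbers: light is emitted only when the particle’s speed, as a fraction of the speed of light, exceeds 1/n, where n is the medium’s refractive index, and the light emerges at an angle given by cos θ = 1/(nβ). The refractive index of mineral oil used below is an assumed, typical value.

```python
# The Cherenkov condition made concrete: light is emitted only when the particle's
# speed beta = v/c exceeds 1/n, and it comes out at an angle cos(theta) = 1/(n*beta).
# The refractive index of mineral oil used here (~1.47) is an assumed, typical value.
import math

n_oil = 1.47                      # assumed refractive index of mineral oil
beta_threshold = 1.0 / n_oil      # minimum speed (as a fraction of c) for Cherenkov light
print(f"threshold speed: {beta_threshold:.3f} c")

beta = 0.99                       # a fast charged particle
theta = math.degrees(math.acos(1.0 / (n_oil * beta)))
print(f"Cherenkov angle at beta = {beta}: {theta:.1f} degrees")
```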
The MiniBooNE experiment at Fermilab used a large mineral-oil Cherenkov detector. When neutrinos from the Booster Neutrino Beamline struck the atomic nuclei in the mineral oil, the interaction released charged particles, which sometimes produced rings of Cherenkov radiation (like ripples) that the detector recorded. In MiniBooNE’s data, the detection events were classified by the type of light ring produced. An “electron-like” event was one that looked like it had been caused by an electron. But because photons can also produce nearly identical rings (they convert into electron-positron pairs in the oil, creating showers that resemble an electron’s), the detector couldn’t always tell the difference. A “muon-like” event, on the other hand, had the distinctive ring pattern of a muon, which is a subatomic particle like the electron but about 200 times heavier, and which travels in a straighter, longer track. To be clear, these labels described the detector’s view; they didn’t guarantee which particle was actually present.
MiniBooNE began operating in 2002 to test an anomaly that had been reported at the LSND experiment at Los Alamos, which had recorded more “electron-like” events than predicted. MiniBooNE went on to record an excess of its own, especially at low energies below about 600 MeV. This came to be called the “low-energy excess” and has become one of the most puzzling results in particle physics. It raised the possibility that neutrinos might be oscillating into a hitherto unknown neutrino type, sometimes called the sterile neutrino — or it might have been a hint of unexpected processes that produced extra photons. Since MiniBooNE couldn’t reliably distinguish electrons from photons, the mystery remained unresolved.
To address this, scientists built the MicroBooNE experiment at Fermilab. It uses a very different technology: the liquid argon time-projection chamber (LArTPC). In a LArTPC, charged particles streak through an ultra-pure mass of liquid argon, leaving a trail of ionised atoms in their wake. An applied electric field causes these trails to drift towards fine wires, where they are recorded. At the same time, the argon emits light that provides the timing of the interaction. This allows the detector to reconstruct interactions in three dimensions with millimetre precision. Crucially, it lets physicists see where the particle shower begins, so they can tell whether it started at the interaction point or some distance away. This capability prepared MicroBooNE to revisit the “low-energy excess” anomaly.
MicroBooNE also had broader motivations. With an active mass of about 90 tonnes of liquid argon inside a 170-tonne cryostat, and 8,256 wires in its readout planes, it was the largest LArTPC in the US when it began operating. It served as a testbed for the much larger detectors that scientists are developing for the Deep Underground Neutrino Experiment (DUNE). And it was also designed to measure the rate at which neutrinos interacted with argon atoms, to study nuclear effects in neutrino scattering, and to contribute to searches for rare processes such as proton decay and supernova neutrino bursts.
(When a star goes supernova, it releases waves upon waves of neutrinos before it releases photons. Scientists were able to confirm this when the star Sanduleak -69 202 exploded in 1987.)
This image, released on February 24, 2017, shows Supernova 1987a (centre) surrounded by dramatic red clouds of gas and dust within the Large Magellanic Cloud. This supernova, first discovered on February 23, 1987, blazed with the power of 100 million Suns. Since that first sighting, SN 1987A has continued to fascinate astronomers with its spectacular light show. Caption and credit: NASA, ESA, R. Kirshner (Harvard-Smithsonian Centre for Astrophysics and Gordon and Betty Moore Foundation), and M. Mutchler and R. Avila (STScI)
Initial MicroBooNE analyses using partial data had already challenged the idea that MiniBooNE’s excess was due to extra electron-neutrino interactions. However, the collaboration didn’t cover the full range of parameters until recently. On August 21, MicroBooNE published results from five years of operations, corresponding to 1.11 × 10²¹ protons on target, about a 70% increase over previous analyses. This complete dataset, together with higher sensitivity and better modelling, has provided the most decisive test so far of the anomaly.
The MicroBooNE detector recorded interactions of neutrinos produced by the Booster Neutrino Beamline using its LArTPC, which operated at about 87 K inside a cryostat. Charged particles from neutrino interactions produced ionisation electrons that drifted across the detector and were recorded by the wires. Simultaneous flashes of argon scintillation light, seen by photomultiplier tubes, gave the precise time of each interaction.
In neutrino physics, a category of events grouped by what the detector sees in the final state is called a channel. Researchers call it a signal channel when it matches the kind of event they are specifically looking for, as opposed to background signals from other processes. With MicroBooNE, the team stayed on the lookout for two signal channels: (i) one electron and no visible protons or pions (abbreviated as 1e0p0π) and (ii) one electron and at least one proton above 40 MeV (1eNp0π). These categories reflect what MiniBooNE would’ve seen as electron-like events while exploiting MicroBooNE’s ability to identify protons.
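As a small illustration of how these channel definitions translate into a classification rule, here is a toy function that tags a reconstructed final state; the 40 MeV proton threshold comes from the description above, while the event format is invented.

```python
# The two signal channels as a simple classification rule. The 40 MeV proton
# threshold comes from the text; the event format here is invented for illustration.

PROTON_THRESHOLD_MEV = 40

def signal_channel(event):
    """Return '1e0p0pi', '1eNp0pi', or None for a reconstructed final state."""
    electrons = event.get("electrons", [])
    pions = event.get("pions", [])
    visible_protons = [p for p in event.get("proton_energies_mev", []) if p > PROTON_THRESHOLD_MEV]

    if len(electrons) != 1 or pions:       # exactly one electron, no pions
        return None
    if not visible_protons:
        return "1e0p0pi"                   # one electron, no visible protons
    return "1eNp0pi"                       # one electron, at least one proton above 40 MeV

# Example with one electron and protons of 12 and 65 MeV
print(signal_channel({"electrons": ["e-"], "pions": [], "proton_energies_mev": [12, 65]}))  # 1eNp0pi
```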
One important source of background noise the team had to cut from the data was cosmic rays — high-energy particles from outer space that strike Earth’s atmosphere, creating particle showers that can mimic neutrino signals. In 2017, MicroBooNE added a suite of panels around the detector. For the full dataset, the panels cut an additional 25.4% of background noise in the 1e0p0π channel while preserving 98.9% of signal events.
When a cosmic-ray proton collides with a molecule in the upper atmosphere, it produces a shower of particles that includes pions, muons, photons, neutrons, electrons, and positrons. Credit: SyntaxError55 (CC BY-SA)
In the final analysis, the MicroBooNE data showed no evidence of an anomalous excess of electron-like events. When both channels were combined, the observed events matched the expectations of the Standard Model of particle physics well. The agreement was especially strong in the 1e0p0π channel.
In the 1eNp0π channel, MicroBooNE actually detected slightly fewer events than the Model predicted: 102 events v. 134. This shortfall, of about 24%, isn’t enough to claim a new effect but is enough to draw attention. Rather than confirming MiniBooNE’s excess, this result suggests there’s some tension in the models the scientists use to simulate how neutrinos and argon atoms interact. Argon has a large and complex nucleus, which makes accurate predictions challenging. The scientists have in fact stated in their paper that the deficit may reflect these uncertainties rather than new physics.
The new MicroBooNE results have far-reaching consequences. Foremost, they reshape the sterile-neutrino debate. For two decades, the LSND and MiniBooNE anomalies had been cited together as signs that neutrinos were oscillating into a previously undetected state. By finding no excess of electron-like interactions in its own data, MicroBooNE has shown that MiniBooNE’s ‘extra’ events were not caused by excess electron neutrinos. This in turn casts doubt on the simplest explanation: sterile neutrinos.
As a result, theoretical models that once seemed straightforward now face strong tension. While more complex scenarios remain possible, the easy explanation is no longer viable.
The MicroBooNE cryostat inside which the LArTPC is placed. Credit: Fermilab
Second, they demonstrate the maturity of the LArTPC technology. The MicroBooNE team successfully operated a large detector for years, maintaining the argon’s purity and low-noise electronics required for high-resolution imaging. Its performance validates the design choices for larger detectors like DUNE, which use similar technology but at kilotonne scales. The experiment also showcases innovations such as cryogenic electronics, sophisticated purification systems, protection against cosmic rays, and calibration with ultraviolet lasers, proving that such systems can deliver reliable data over long periods of operation.
Third, the modest deficit in the 1eNp0π channel points to the importance of better understanding neutrino-argon interactions. Argon’s heavy nucleus produces complicated final states where protons and neutrons may scatter or be absorbed, altering the visible event. These nuclear effects can lead to mismatches between simulation and data (possibly including the 24% deficit in the 1eNp0π signal channel). For DUNE, which will also use argon as its target, improving these models is critical. MicroBooNE’s detailed datasets and sideband constraints will continue to inform these refinements.
Fourth, the story highlights the value of complementary detector technologies. MiniBooNE’s Cherenkov detector recorded more events but couldn’t tell electrons from photons; MicroBooNE’s LArTPC recorded fewer events but with much greater clarity. Together, they show how one experiment can identify a puzzle and another can test it with a different method. This multi-technology approach is likely to continue as experiments worldwide cross-check anomalies and precision measurements.
Finally, the MicroBooNE results show how science advances. A puzzling anomaly inspired new theories, new technology, and a new experiment. After five years of data-taking and with the most complete analysis yet, MicroBooNE has shown that the MiniBooNE anomaly was not due to electron-neutrino interactions. The anomaly itself remains unexplained, but the field now has a sharper focus. Whether the cause lies in photon production, detector effects, or genuinely new physics, the next generation of experiments can start on firmer footing.