Scicomm

  • What is a supersolid?

    The names that scientists, especially physicists, give to things have long been a source of humour or irritation, depending on your point of view. Between observatories named the Very Large Telescope and the Extremely Large Telescope, its successor, and Murray Gell-Mann naming the quarks after a word that appears only once in James Joyce’s Finnegans Wake, I’m firmly on the irritated side. There is no reason that the colour charge, for example, should be called that, considering it has nothing to do with colour. Nor should the theory describing how subatomic particles are affected by the colour charge be called quantum chromodynamics.

    For another example, a superfluid is a quantum phase of matter that flows without any resistance – so the name makes sense. But a supersolid is a quantum phase of matter that has the ordered structure of a solid yet can flow like a superfluid! (A ‘quantum phase’ is a phase of matter that exists at extremely low temperatures, in quantum systems – i.e. systems where quantum-mechanical effects dominate.) Supersolids are clearly inappropriately named – and their properties are just as counterintuitive. In the 1960s, scientists worked out the math and concluded that supersolids should exist – but they weren’t able to create them in the lab until the last decade or so.

    A crystal is a solid whose constituent atoms are arranged in a fixed, repeating pattern. The grid of atoms is called the lattice and each point occupied by an atom is called a node. When a node is empty, i.e. when an atom isn’t present, the site is called a vacancy. When you cool a substance to lower and lower temperatures, you take away its energy until, at absolute zero, it should classically have none left. But quantum systems retain some energy even at absolute zero. This residual energy, intrinsic to the system, is called the zero-point energy. It allows atoms to hop from occupied nodes in the lattice to nearby vacancies.

    Sometimes there could be a cluster of such vacancies. When this cluster moves as a group through the material, it is equivalent to a group of atoms in the lattice moving in the opposite direction. (If you’re sitting at spot 1 on the couch and move to spot 2, it’s equivalent to the vacancy on the couch moving from spot 2 to spot 1.) When this happens, the cloud of vacancies constitutes a supersolid: the cluster maintains its fixed structure, defined by the lattice, yet it moves without resistance through the material.

    The first several successful attempts to create a supersolid were confined to one dimension. This is because many of them used a common method: assemble a bunch of atoms of a particular element, “turn them into a superfluid and then add a crystalline structure by altering the interactions between the atoms” (source). This technique doesn’t work well for creating two-dimensional supersolids because the “add a crystalline structure” step weakens the fragile superfluid state.

    In 2021, a group of physicists from Austria and Germany attempted to overcome this barrier by using magnetic atoms that formed small clumps while also repelling each other, so that the clumps arranged themselves in a two-dimensional array. The jump from one dimension to two is significant because it allows physicists to explore other predicted features of supersolids. For example, theoretical calculations say that supersolids can have vortices on their surface. A one-dimensional supersolid doesn’t have a surface per se but a two-dimensional one does. Physicists can also study other features depending on the number of atoms involved. This said, the researchers’ method was cumbersome and didn’t produce a supersolid of sufficient quality.

    In a new study, published on May 13, 2022 (preprint paper), some members of the 2021 group reported that they were able to create a supersolid disc. This is also a two-dimensional supersolid but with a different geometry (the 2021 effort had produced a roughly rhombus-shaped supersolid). More importantly, the researchers devised a new method to synthesise it. While the previous method first introduced the superfluidity and then the crystallinity, in the new method, the physicists introduced both together.

    When you sweat in warm weather, water gets on your skin and then evaporates. As it changes from liquid to vapour, it takes away some heat, thus cooling your skin. This is called evaporative cooling. When you start with a cloud of atoms, take away their energy and progressively remove the most energetic atoms at each stage, you likewise progressively reduce the average energy of the system, and thus the overall temperature. This is evaporative cooling with atoms. In their study, the research team developed a theory to explain how this form of cooling could be used to create a supersolid inside a circular trap. Then they tested the theory by creating a roughly hexagonal supersolid of a few tens of thousands of dysprosium atoms. (Dysprosium is highly magnetic, so its atoms can be clustered by modifying the magnetic field.)
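
    As a toy numerical illustration of the principle – my own sketch, not the team’s method – you can simulate a cloud of atoms, repeatedly skim off the most energetic ones and watch the average energy of the remainder fall:

    ```python
    import numpy as np

    # Toy model: atom energies drawn from an exponential (Boltzmann-like)
    # distribution, in arbitrary units. Real evaporative cooling also lets the
    # remaining atoms re-thermalise between cuts; this sketch skips that step.
    rng = np.random.default_rng(seed=1)
    energies = rng.exponential(scale=1.0, size=100_000)

    for step in range(1, 6):
        cutoff = np.quantile(energies, 0.9)      # locate the top 10% most energetic atoms
        energies = energies[energies < cutoff]   # 'evaporate' them out of the trap
        print(f"step {step}: {energies.size} atoms left, "
              f"mean energy = {energies.mean():.3f}")
    ```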

    Considering the way in which physicists have achieved this new supersolid, the name seems all the more confusing.

  • A tale of two myopias, climate change and the present participle

    The Assam floods are going on. One day, they will stop. The water will subside in many parts of the state but the things that caused the floods will continue to work, ceaselessly, and will cause them to occur again next year, and the year after and so on for the foreseeable future.

    Journalists, politicians and even civil society members have become adept at seeing the floods in space. Every year, as if on cue, there have been reports on the cusp of summer of floodwaters inundating many districts in the state, including those containing and surrounding the Kaziranga national park; displacing lakhs of people and killing hundreds; destroying home, crop, cattle and soil; encouraging the spread of diseases; eroding banks and shores; and prompting political leaders to promise all the help that they can muster for the affected people. But the usefulness of the spatial cognition of the Assam floods has run its course.

    Instead, we now need to inculcate a temporal cognition, whether by itself or as part of a spatio-temporal one. The reason is that, more than the floods themselves, we are currently submerged by the effects of two myopias, like two rocks tied around our necks, dragging us to the bottom. The first one is sustained by members of our political class, such as Assam CM Himanta Biswa Sarma and Union home minister Amit Shah, when they say that they will extend all possible support and restitution to displaced people and the relatives of those killed directly or indirectly by the floods.

    The floods are not the product of climate change but of mindless infrastructure ‘development’: the construction of dikes and embankments, encroachment of wetlands and plains, destruction of forests and the over-extraction of resources, and their consequences. A flood happens when the water levels rise, but destruction is the result of objects of human value being in the waters’ way. More and more human property is being located in places where the water used to go, and is thus being rendered vulnerable to being washed away.

    When political leaders offer support to the people after every flood (which is the norm), it is akin to saying, “I will shoot you with a gun and then I will pay for your care.” Offering people support is not helpful, at least not when it stops there, followed by silence. Everyone – from parliamentary committees to civil society members – should follow the utterances of Shah, Sarma & co. (both BJP and non-BJP leaders, including those of the Congress, CPI(M), DMK, TMC, etc.) through time, acknowledge the seasonality of their proclamations, and bring them to book for failing to prevent the floods from occurring every year, instead of giving them brownie points for providing support on each occasion post facto.

    The second myopia exists on the part of many journalists, especially in the Indian mainstream press, and their attitude towards cyclones, which can be easily and faithfully extrapolated to floods as well. Every year for the last two decades at least, there has been a cyclone or two that ravaged two states in particular: Andhra Pradesh and West Bengal (the list included Odisha but it has done well to mitigate the consequences). And some time after every occasion, reports have appeared in newspapers and magazines of fisherpeople in dire straits with their boats broken, nets torn and stomachs empty; of coastal properties laid to waste; and, soon after, of fuel and power subsidies, loan waivers and – if you wait long enough – sobering stories of younger fishers migrating to other parts of the country looking for other jobs.

    These stories are all important and necessary – but they are not sufficient. We also need stories about something new – stories that are mindful of the passage of time, of people growing old, the rupee becoming less valuable, the land becoming more recalcitrant, and of the world itself passing them all by. We need the present participle.

    This is not a plea for media houses to commoditise tragedy and trade in interestingness but a plea to consider that these stories miss something: the first myopia, the one that our political leaders espouse. By keeping the focus on problem X, we also keep the focus on the solutions for X. Now ask yourself what X might be if all the stories appearing in the mainstream press are about post-disaster events, and thus which solutions – or, indeed, points of accountability – we tend to focus on to the exclusion of others. We also need stories – ranging in type from staff reports to reported features, from hyperlocal dispatches to literary essays – of everything that has happened in the aftermath of a cyclone making landfall near, say, Nellore or North 24 Parganas; whether things have got better or worse with time; whether politicians have kept their promises to ameliorate the conditions of the people there (especially those not living inside concrete structures and/or whose livelihoods depend directly on natural resources); and whether, by restricting ourselves to supporting a people after a storm or a flood has wreaked havoc, we are actually dooming them.

    We need timewise data and we need timewise first-hand accounts. To adapt the wisdom of Philip Warren Anderson, we may know how a shrinking wetland may exacerbate the intensity of the next flood, but we cannot ever derive from this relationship knowledge of the specific ways in which people, and then the country, suffer, diminish and fade away.

    The persistence of these two myopias also feeds the bane of incrementalism. By definition, incremental events occur orders of magnitude more often than significant ones, so it is more efficient to evolve to monitor and record the former. This applies as much to our memories as it does to the economics of newsrooms. We tend to get caught up in the day-to-day and are capable within weeks of forgetting something that happened last year; unscrupulous politicians play to this gallery by lying through their teeth about something happening when it didn’t (or vice versa), offending the memories of all those who have died because of a storm or a flood, and of others who survived but live on the brink of tragedy. On the other hand, newsrooms are staffed with more journalists attuned to the small details than journalists able to piece all of them together into the politically and economically inconvenient big picture (there are exceptions, of course).

    I am not sure when we entered the crisis period of climate change but in mid-2022, it is plain that we are in the thick of it – the thick of a beast that assails us both in space and through time. In response, we must change the way we cognise disasters. The Assam floods are ongoing – and so are the Kosi, the Sabarmati and the Cauvery floods. We just haven’t seen the waters go wild yet.

  • Press releases and public duty

    From ‘Science vs Marketing’, published on In The Dark, on May 20, 2022:

    … there is an increasing tendency for university press offices to see themselves entirely as marketing agencies instead of informing and/or educating the public. Press releases about scientific research nowadays rarely make any attempt at accuracy – they are just designed to get the institution concerned into the headlines. In other words, research is just a marketing tool.

    What astrophysicist and blogger Peter Coles writes here is very true. It is not a recent phenomenon but it hasn’t been widely acknowledged either, especially in the community of journalists. I had reported in 2016 on a study by researchers at the Universities of Cardiff and Wollongong that concluded that university press releases bloated with hype don’t necessarily result in media reports that are also bloated with hype. The study was motivated in part by an attempt to find out whether there was a relationship between the two locations of hype: press releases and news reports. The study’s finding was a happy one because it indicated that science journalists at large were doing their jobs right, and were not being carried away by the rubbish that university press offices often printed.

    This said, the study also highlighted the presence of hype in science news reports, which I have also blogged about on many occasions. It typically exists in two contexts: when journalists turn into stenographers and print press releases either as is or with superficial rephrasing, and when journalists themselves uncritically buy into the hype. I find the former more forgivable in the Indian context in particular because there are many hapless science journalists here: journalists who are actually generalists, not bound to any particular beat, and whose editors (or their editors’ bosses) have forced them to write on topics with which they are not at all familiar (I strongly suspected this bizarre article in Indian Express – while not being based on a press release of any sort – to be a good example of some sort of editorial pressure). Such a failure reflects to my mind the state of Indian mainstream journalism more than Indian science journalism, the best versions of which are still highly localised to a handful of outlets.

    The latter – of science journalists willfully buying into the hype – is a cardinal sin, more so when it manifests among journalists who should self-evidently know better, as with Pallab Ghosh of the BBC. University press releases affect the former group more, and not the likes of Pallab Ghosh, although there are exceptional cases. Journalists of the former group are more numerous and are also employed by larger, wealthier newsrooms with audiences orders of magnitude larger than those of outlets that have adopted a more critical view of science. As a result, bad claims in bad press releases crafted by university press offices often reach more people than articles that properly interrogate those claims. So in addition to Coles’s charge that universities are increasingly concerned with “income”, “profit” and “marketing” over “education and research”, I’d add that universities that publish such press releases have also lost sight of their duty to the publics, and would rather be part of the problem.

  • Some comments on India’s heat

    On May 5, a couple of people from BBC World reached out to me, presumably after reading my piece last week on the heatwave in North India and the wet-bulb temperature, for a few comments on a story they were producing on the topic. They had five questions between them; I’m reproducing my answers roughly verbatim (since I spoke to them on the phone) below.

    Are these high temperatures usual?

    A: Yes and no. Yes because while these numbers are high, we’ve been hearing about them for a decade or so now – reading about them in news reports and hearing anecdotal accounts. This isn’t the first such heatwave to hit India. A few years ago, the peak summer temperature in Delhi touched 47º C or so and there were photos in the media of the asphalt on the roads having melted. That was worse – and it hasn’t happened this time, yet. That’s the ‘yes’ part. The ‘no’ part has to do with the fact that India is a large country and some parts of the country that are becoming hotter are probably also reaching these temperatures for the first time. E.g. Bangalore, where I live, is currently seeing daily highs of around 35º C. This is par for the course in Chennai and Delhi but it’s quite hot for Bangalore. This said, the high heat is starting sooner, on this occasion from mid-March or so itself, and lasting longer. That has changed our experience of the heat and our exposure to it. Of course, my answers are limited to urban India, especially to major cities. I don’t know off the top of my head what the situation in other parts is like.

    The government has said India has a national heat plan and some cities have adopted heat action plans. Are they effective?

    Hard to say. Only two score or so cities have adopted functional heat action plans – and they’re cities, which is not where most of India lives. Sure, the heat is probably worse in the urban centres because of the heat-island effect, but things are quite poor in rural areas as well, especially in the north. The heat also isn’t just heat – people experience its effects more keenly if they don’t have continuous power supply or access to running water, which is often the case in many parts of rural India. The benefits of these action plans accrue to those who are better off, typically those who are upper class and upper caste, which is hardly the point. When North India’s heatwave was underway last week, NDTV interviewed shopkeepers, small-scale traders, vendors, etc. about whether they could take time off. All of them without exception said ‘no’. Come rain or shine, they need to work. I remember there being vicious cyclones in Chennai and waking up in the morning to find the roads flooded, trees fallen and electric wires hanging loose – and the local mobile vegetable vendor doing his rounds. Also, in urban areas, do the heat action plans account for the plight of homeless people and beggars, and people living in slums, where – even if they’re indoors – they have poor air circulation and often erratic water and power supply?

    What should the government do?

    That’s a very broad question. Simply speaking, the government should give people who can’t afford to shut their businesses or take time off from work the money they’d lose if they did, plus rations. This is going to be very difficult but it is what should be done. It won’t happen, though. Even during the COVID-19 pandemic, the Indian government didn’t plan for the tens of thousands of migrant labourers and daily-wage earners in cities, who, once the lockdown came into effect, slowly migrated back to their home towns and villages in search of livelihoods. This sector remains invisible to the government.

    [I also wanted to say but didn’t have the time:] the experience of heat is also mediated by gender, geography and caste forces, so state interventions should also be mediated by them. For example, women in rural India, especially in Central and North India (where literacy is relatively lower), operate in settings where they have few rights and little if any financial and social independence. They can seldom buy or own land or go out to work, often labour indoors, performing domestic tasks in poorly ventilated residential spaces, and venture out to fetch water from often distant sources – a task performed almost exclusively by women and girls. Many also have to defecate in the open, and do so early in the day or late in the evening to avoid harassment and shame – which means they may avoid drinking water during the day so they won’t need to relieve themselves, in turn rendering them vulnerable to heat stress. If state interventions don’t bend around these realities, they will be useless.

    The moment you mention data or figures that you say you obtained from this government, the first thought that comes to mind is that they’re probably inaccurate, and likely underestimates. Even now, the Indian government has an ongoing dispute with the WHO over the number of people who died during the pandemic in India: India says half a million but the WHO, as well as many independent experts, has said it’s probably 3-5 million. For example, if the government is collecting data on heat-related illnesses at the institutional level (from hospitals, clinics, etc.), you immediately have a bias in terms of which people are able to, or intend to, access healthcare when they develop a heat-related illness. Daily-wagers don’t go to hospitals unless their conditions are acute – because they’d lose a day’s earnings, because their out-of-pocket expenses have increased, or both.

    Do you think parts of India will become unliveable in your lifetime?

    This is a good question. I’d say that ‘unliveable’ is a subjective thing. I have a friend in Seattle who recently bought a house in what she said was a nice part of the city, with lots of greenery, opportunities to go hiking and trekking on the weekend, clear skies, clean air and large water bodies nearby. Liveability to her is different from, say, liveability to someone living in New Delhi, where the air is already quite foul, summers are very hot and winters are likely to become colder in future. Liveability means different things to people living in Delhi, London and Seattle. Many parts of India have been unliveable for a long time now; we just put up with it – many people because they don’t have any other option – and our bar just keeps slipping lower.

  • Sci-Hub isn’t just for scientists

    Quite a few reporters from other countries have reached out to me, directly or indirectly, to ask about scientists to whom they can speak about how important Sci-Hub is to their work.

    This attention to Sci-Hub is commendable, against the backdrop of the case in the Delhi high court, filed by a consortium of three ‘legacy’ publishers of scientific papers, to have access to the website cut off in India. There has been a groundswell of support for Sci-Hub in India, to no one’s surprise, considering the exorbitant paywalls that legacy publishers have erected in front of the papers they publish. Before Sci-Hub, it was nearly impossible to access these papers outside of university libraries, and university libraries themselves paid through the nose to keep up their journal subscriptions. But as in drug development, the development of scientific knowledge also happens on government money for the most part, so legacy publishers effectively charge the public twice: first when taxpayer-funded scientists produce the papers they publish, and a second time when readers pay to get past the paywalls. The prices are also somewhat arbitrary, and often far removed from the costs publishers incur to publish each paper and/or to maintain their websites.

    All this said, I think one more demographic is often missing in this conversation about the importance of Sci-Hub, as a result of which the latter is also limited, unfairly, to scientists. This is the community of science writers, reporters, editors, etc. I have used Sci-Hub regularly since 2013 – to identify papers that I can report on, to write about cool scientific work on my blog, and to select data-heavy papers and attempt to replicate their findings by writing code of my own. We must also highlight Sci-Hub’s benefits for journalists, if only to remember that science can empower in more ways than one – including providing the means by which to test the validity of knowledge and reduce uncertainty, letting people learn the nature of facts and expertise based on what is considered valid or legitimate, and broadening access to the tools of science and the methods of proof beyond those whose careers depend on them.

  • Middle fingers to the NYT and NYer

    On April 18, celebrity journalist Ronan Farrow tweeted that he’d “spent two years digging” into the inside story of Pegasus, the spyware whose use by democratic governments around the world – including that of India – to spy on members of civil society, their political opponents and their dissenters was reported by an international collaboration that included The Wire. Yet Farrow credits the “Pegasus Project” in his story only once, and even then only to say that its reporting “reinforced the links between NSO Group and anti-democratic states” – mentioning nothing of what the collaboration’s journalists uncovered, probably to avoid admitting that his own piece overlaps significantly with the Project’s, even as it is cast as a revelatory investigation. Tell me, Mr Farrow, when you dug and dug, did you literally go underground? Or is this another form of your tendency to keep half the spotlight on yourself when your stories are published?

    This is the second instance just this week of an influential American publication re-reporting something one or more outlets in the “Orient” had already published, in both cases a substantial amount of time earlier, while making no mention that they’re simply following up. But worse, the New York Times, the second offender, whose Stephanie Nolen and Karan Deep Singh followed up on Amruta Byatnal’s report in Devex two weeks later, based on the same sources, wrote the story like it was breaking news. (The story: India wanted the WHO to delay the release of a report by 10 years because it said India had at least four times as many deaths during the COVID-19 pandemic as its official record claimed.)

    To make matters worse, India’s Union health ministry (in a government in which Prime Minister Narendra Modi calls all the shots) responded to the New York Times story but not to Devex (nor to The Wire Science’s re-reporting, based on comments from other sources and with credit to Byatnal and Devex). This BJP government and its ministers like to claim that they’re better than the West on one occasion and that India needs to overcome its awe of the West on another, yet when Western publications (re)report developments discovered by journalists working through the minefield that is India’s landscape of stories, the ministers turn into meerkats.


    For the journalists in between who first broke the stories, it’s a double whammy: American outlets that will brazenly steal their ideas and obscure the memory of their initiative, and an Indian government that will treat them as if they don’t exist.

  • MIT develops thermo-PV cell with 40% efficiency

    Researchers at MIT have developed a heat engine that can convert heat to electricity with 40% efficiency. Unlike traditional heat engines – a common example is the internal combustion engine inside a car – this device doesn’t have any moving parts. Second, this device has been designed to work with a heat source that has a temperature of 1,900º to 2,400º C. Effectively, it’s like a solar cell that has been optimised to work with photons from white-hot terrestrial sources rather than from the Sun – although its efficiency still sets it apart. If you know the history, you’ll understand why 40% is a big deal. And if you know a bit of optics and some materials science, you’ll understand how this device could be an important part of the world’s efforts to decarbonise its power sources. But first the history.

    We’ve known how to build heat engines for almost two millennia. They were first built to convert heat, generated by burning a fuel, into mechanical energy – so they’ve typically had moving parts. For example, the internal combustion engine combusts petrol or diesel and harnesses the energy produced to move a piston. However, the engine can extract only some of the heat as mechanical work – it can’t pump the rest back. If it did, it would have to ‘give back’ the work it just extracted, nullifying the engine’s purpose. So once the piston has moved, the engine dumps the waste heat and begins the next cycle of heat extraction from more fuel. (In the parlance of thermodynamics, the origin of the heat is called the source and its eventual resting place is called the sink.)

    The inevitability of this waste heat keeps the heat engine’s efficiency from ever reaching 100% – and the efficiency is dragged down further by the mechanical losses implicit in the moving parts (the piston, in this case). In 1824, the French engineer-physicist Nicolas Léonard Sadi Carnot worked out the formula for the maximum possible efficiency of a heat engine that works in this way. (The formula also assumes that the engine is reversible – i.e. that it can pump heat from the colder sink to the hotter source.) The number spit out by this formula is called the Carnot efficiency. No heat engine can have an energy efficiency greater than its Carnot efficiency. The internal combustion engines of today achieve an efficiency of around 37%; a steam generator at a large power plant can go up to 51%. Against this background, the heat engine that the MIT team has developed has a celebration-worthy efficiency of 40%.
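
    For reference, the Carnot limit is simple to compute. Here is a minimal sketch in Python – the temperatures are illustrative choices of mine, not from the paper:

    ```python
    def carnot_efficiency(t_source_kelvin: float, t_sink_kelvin: float) -> float:
        """Maximum possible efficiency of a heat engine, per Carnot's theorem."""
        return 1 - t_sink_kelvin / t_source_kelvin

    # A TPV emitter at ~2,400 deg C (2,673 K) rejecting heat at room temperature (300 K):
    print(f"{carnot_efficiency(2673, 300):.1%}")  # ~88.8% - the theoretical ceiling
    # The MIT cell's 40% sits well below this ceiling, as any real engine's efficiency must.
    ```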

    The other notable thing about it is the amount of heat with which it can operate. There are two potential applications of the new device that come immediately to mind: to use the waste heat from something that operates at 1,900-2,400º C and to take the heat from something that stores energy at those temperatures. There aren’t many entities in the world that maintain a temperature of 1,900-2,400º C as well as dump waste heat. Work on the device caught my attention after I spotted a press release from MIT. The release described one application that combined both possibilities in the form of a thermal battery system. Here, heat from the Sun is concentrated in graphite blocks (using lenses and mirrors) that are located in a highly insulated chamber. When the need arises, the insulation can be removed to a suitable extent for the graphite to lose some heat, which the new device then converts to electricity.

    On Twitter, user Scott Leibrand (@ScottLeibrand) also pointed me to a similar technology called FIRES – short for ‘Firebrick Resistance-Heated Energy Storage’ – proposed by MIT researchers in 2018. According to a paper they wrote, it “stores electricity as … high-temperature heat (1000–1700 °C) in ceramic firebrick, and discharges it as a hot airstream to either heat industrial plants in place of fossil fuels, or regenerate electricity in a power plant.” They add that “traditional insulation” could limit heat leakage from the firebricks to less than 3% per day and estimate a storage cost of $10/kWh – “substantially less expensive than batteries”. This is where the new device could shine, or better yet enable a complete power-production system: by converting heat deliberately leaked from the graphite blocks or firebricks to electricity, at 40% efficiency. Even given the fact that heat transfer is more efficient at higher temperatures, this is impressive – more so since such energy-storage options are also geared for the long term.
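
    To get a rough feel for how the pieces could fit together – the numbers below are my own back-of-the-envelope choices, not from either paper – consider a firebrick store leaking 3% of its heat per day, discharged through a 40%-efficient TPV cell after a week:

    ```python
    stored_heat_kwh = 100.0   # thermal energy banked initially (illustrative)
    daily_leakage = 0.03      # "less than 3% per day", per the FIRES paper
    tpv_efficiency = 0.40     # the new cell's reported efficiency
    days_stored = 7

    heat_left = stored_heat_kwh * (1 - daily_leakage) ** days_stored
    electricity_out = heat_left * tpv_efficiency
    print(f"heat left after {days_stored} days: {heat_left:.1f} kWh (thermal)")
    print(f"electricity recovered: {electricity_out:.1f} kWh")  # ~32 kWh
    ```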

    Let’s also take a peek at how the device works. It’s called a thermophotovoltaic (TPV) cell. The “photovoltaic” in the name indicates that it uses the photovoltaic effect to create an electric current. This is closely related to the photoelectric effect: in both cases, an incoming photon knocks out an electron in the material, creating a voltage that then supports an electric current. In the photoelectric effect, the electron is completely knocked out of the material; in the photovoltaic effect, the electron stays within the material and can be recaptured. To achieve the high efficiency, the research team wrote in its paper that it did three things. They involve a bunch of big words, but the words have straightforward implications, as I explain below, so don’t be put off.

    1. “The usage of higher bandgap materials in combination with emitter temperatures between 1,900 and 2,400 °C” – The band gap refers to the energy difference between a material’s valence and conduction bands. In semiconductors, for example, when electrons in the valence band are imparted enough energy, they can jump across the band gap into the conduction band, where they can flow around the material, conducting electricity. The same thing happens in the TPV cell, where incoming photons can ‘kick’ electrons into the material’s conduction band if they have the right amount of energy. Because the photon source is a very hot object, the photons are bound to have energies corresponding to the near-infrared wavelengths of light – around 1-1.5 electron-volts, or eV. So the corresponding TPV material also needs to have a bandgap of 1-1.5 eV (a quick sanity check of these numbers follows this list). This brings us to the second point.

    2. “High-performance multi-junction architectures with bandgap tunability enabled by high-quality metamorphic epitaxy” – Architecture refers to the configuration of the cell’s physical, electrical and chemical components, and epitaxy refers to the way in which the cell is made. In the new TPV cell, the MIT team used a multi-junction architecture that allowed the device to ‘accept’ photons of a range of wavelengths (corresponding to the temperature range). This is important because the incoming photons can have one of two effects: either kick out an electron or heat up the material. The latter is undesirable and should be avoided, so the multi-junction setup is designed to usefully absorb as many photons as possible. A related issue is that the power output per unit area of an object radiating heat scales as the fourth power of its temperature. That is, if its temperature increases x times, its power output per unit area will increase x^4 times. Since the heat source of the TPV cell is so hot, it will have a high power output, thus again privileging the multi-junction architecture (see the sketch after this list). The epitaxy is not interesting to me, so I’m skipping it. But I should note that cells like this one aren’t ubiquitous because making them is a highly intricate process.

    3. “The integration of a highly reflective back surface reflector (BSR) for band-edge filtering” – The MIT press release explains this part clearly: “The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold” – the BSR. “The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.”
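
    Here is the quick back-of-the-envelope check of points 1 and 2 promised above – my own arithmetic, not the paper’s – using Wien’s displacement law for the peak photon energy and the Stefan-Boltzmann law for the radiated power:

    ```python
    WIEN_B = 2.898e-3   # Wien's displacement constant, metre-kelvin
    HC_EV_NM = 1239.8   # photon energy (eV) for a wavelength given in nanometres
    SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)

    for t_celsius in (1900, 2400):
        t_kelvin = t_celsius + 273.15
        peak_nm = WIEN_B / t_kelvin * 1e9        # wavelength of peak emission
        peak_ev = HC_EV_NM / peak_nm             # corresponding photon energy
        power_mw_m2 = SIGMA * t_kelvin**4 / 1e6  # radiated power per unit area
        print(f"{t_celsius} deg C: peak ~{peak_nm:.0f} nm (~{peak_ev:.2f} eV), "
              f"~{power_mw_m2:.1f} MW/m^2 radiated")

    # Prints ~0.93-1.14 eV for the peak - consistent with the 1-1.5 eV band the
    # cell targets once the spectrum's high-energy tail is included - and ~1.3
    # to ~2.9 MW/m^2, thousands of times the intensity of direct sunlight.
    ```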

    While it seems obvious that technology like this will play an important part in humankind’s future, particularly given the attractiveness of maintaining a long-term energy store as well as the use of a higher-efficiency heat engine, the economics matter muchly. I don’t know how much the new TPV cell will cost, especially since it isn’t being mass-produced yet; in addition, the design of the thermal battery system will determine how many square feet of TPV cells will be required, which in turn will affect the cells’ design as well as the economics of the overall facility. This said, the fact that the system as a whole will have so few moving parts, together with the availability of both sunlight and graphite or firebricks – or even molten silicon, which has a high heat capacity – keeps the allure of MIT’s high-temperature TPVs alive.

    Featured image: A thermophotovoltaic cell (size 1 cm x 1 cm) mounted on a heat sink designed to measure the TPV cell efficiency. To measure the efficiency, the cell is exposed to an emitter and simultaneous measurements of electric power and heat flow through the device are taken. Caption and credit: Felice Frankel/MIT, CC BY-NC-ND.

  • At last, physicists report finding the ‘fourth sign’ of superconductivity

    Using an advanced investigative technique, researchers at Stanford University have found that cuprate superconductors – which become superconducting at higher temperatures than their better-known conventional counterparts – transition into this exotic state in a different way. The discovery provides new insights into the way cuprate superconductors work and eases the path to discovering a room-temperature superconductor one day.

    A superconductor is a material that can transport an electric current with zero resistance. The most well-known and also better understood superconductors are certain metallic alloys. They transition from their ‘normal’ resistive state to the superconducting state when their temperature is brought to a very low value, typically a few degrees above absolute zero.

    The theory that explains the microscopic changes that occur as the material transitions is called Bardeen-Cooper-Schrieffer (BCS) theory. As the material crosses its threshold temperature, called the critical temperature, BCS theory predicts four signatures of superconductivity. If these four signatures occur, we can be sure that the material has become superconducting.

    First, the material’s resistivity collapses and its electrons begin to flow without any resistance through the bulk – the electronic effect.

    Second, the material expels all magnetic fields within its bulk – the magnetic (a.k.a. Meissner) effect.

    Third, the amount of heat required to excite electrons to an arbitrarily higher energy is called the electronic specific heat. This number is lower for superconducting electrons than for non-superconducting electrons – but it increases as the material is warmed, only to drop abruptly to the non-superconducting value at the critical temperature. This is the effect on the material’s thermodynamic behaviour.

    Fourth, while the energies of the electrons in the non-superconducting state have a variety of values, in the superconducting state some energy levels become unattainable. This shows up as a gap in a chart mapping the energy values. This is the spectroscopic effect. (The prefix ‘spectro-‘ refers to anything that can assume a continuous series of values, on a spectrum.)

    Conventional superconductors are called so simply because scientists discovered them first and they defined the convention: among other things, they transition from their non-superconducting to superconducting states at very low temperature. Their unconventional counterparts are the high-temperature superconductors, which were discovered in the late 1980s and which transition at temperatures greater than 77 K. And when they do, physicists have thus far observed the corresponding electronic, magnetic and thermodynamic effects – but not the spectroscopic one.

    A new study, published on January 26, 2022, has offered to complete this record. And in so doing, the researchers have uncovered new information about how these materials transition into their superconducting states: it is not the way low-temperature superconductors do.

    The research team, at Stanford, reportedly did this by studying the thermodynamic effect and connecting it to the material’s spectroscopic effect.

    The deeper problem with zeroing in on the spectroscopic effect in high-temperature superconductors is that an electron energy gap shows up before the transition, when the material is not yet a superconductor, and persists into the superconducting phase.

    First, recall that at the critical temperature, the electronic specific heat stops increasing and drops suddenly to the non-superconducting value. The specific heat is directly related to the amount of entropy in the system (energy in the system that can’t be harnessed to perform work). The entropy is in turn related to the spectral function – an equation that dictates which energy states the electrons can and can’t occupy. So by studying changes in the specific heat, the researchers can understand the spectroscopic effect.

    Second, to study the specific heat, the researchers used a technique called angle-resolved photo-emission spectroscopy (ARPES). These are big words but they have a simple meaning. Photo-emission spectroscopy refers to a technique in which energetic photons are shot into a target material, where they knock out those electrons that they have the energy for. Based on the energies, positions and momenta of the electrons knocked out, scientists can piece together the properties of the electrons inside the material.

    ARPES takes this a step further by also recording the angle at which the electrons are knocked out of the material. This provides an insight into another property of the superconductor. Specifically, another way in which cuprates differ from conventional superconductors is the way in which the electrons pair up. In cuprates, the pairs break rotational symmetry, such that the energy required to break up a pair is not equal in all directions.

    This affects the way the thermodynamic and spectral effects look in the data. For example, photons fired at certain angles will knock out more electrons from the material than photons incoming at other angles.
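
    For the curious, here is a minimal sketch of the kinematics ARPES relies on – my own illustration, with made-up but plausible numbers: conservation of energy fixes the emitted electron’s kinetic energy, and the emission angle reveals its in-plane momentum.

    ```python
    import math

    H_BAR = 1.0545718e-34  # reduced Planck constant, J*s
    M_E = 9.1093837e-31    # electron mass, kg
    EV = 1.6021766e-19     # joules per electron-volt

    def kinetic_energy_ev(photon_ev, work_function_ev, binding_ev):
        """Energy conservation in photoemission: E_k = h*nu - phi - E_binding."""
        return photon_ev - work_function_ev - binding_ev

    def k_parallel_inv_angstrom(e_kinetic_ev, angle_deg):
        """In-plane momentum: k_par = sqrt(2*m*E_k)/h_bar * sin(theta)."""
        k = math.sqrt(2 * M_E * e_kinetic_ev * EV) / H_BAR
        return k * math.sin(math.radians(angle_deg)) * 1e-10  # per angstrom

    # Illustrative: a 6 eV ultraviolet laser, a 4.3 eV work function, an electron
    # bound 0.05 eV below the Fermi level, detected 30 degrees off the normal.
    e_k = kinetic_energy_ev(6.0, 4.3, 0.05)
    print(f"E_k = {e_k:.2f} eV, k_par = {k_parallel_inv_angstrom(e_k, 30):.3f} per angstrom")
    ```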

    Taking all this into account, the researchers reported that a cuprate superconductor called Bi-2212 (bismuth strontium calcium copper oxide) transitions to becoming a superconductor in two steps – unlike the single-step transition of low-temperature superconductors.

    According to BCS theory, the electrons in a conventional superconductor are encouraged to overcome their mutual repulsion and bind to each other in pairs when two conditions are met: the material’s lattice – the grid of atomic nuclei – has a vibrational energy of a certain frequency and the material’s temperature is lowered. These electron pairs then move around the material like a fluid of zero viscosity, thus giving rise to superconductivity.

    The Stanford team found that in Bi-2212, the electrons pair up with each other at around 120 K, but condense into the fluid-like state only at around 77 K. The former gives rise to an energy gap – i.e. the spectroscopic effect – even as the superconducting behaviour itself arises only at the 77-K mark, when the pairs condense.

    There are two distinct feats here: finding the spectroscopic effect and finding the two-step transition. Both – but the first more so – were the product of technological advancements. The researchers obtained their Bi-2212 samples, created with specific chemical compositions so as to help analyse the ARPES data, from their collaborators in Japan, and then studied them with two instruments at Stanford capable of performing ARPES studies: an ultraviolet laser and the Stanford Synchrotron Radiation Lightsource.

    Makoto Hashimoto, a physicist at Stanford and one of the study’s authors, said in a press statement: “Recent improvements in the overall performance of those instruments were an important factor in obtaining these high-quality results. They allowed us to measure the energy of the ejected electrons with more precision, stability and consistency.”

    The second finding, of the two-step transition, is important foremost because it is new knowledge of the way cuprate superconductors ‘work’ and because it tells physicists that they will have to achieve two things – instead of just one, as in the case of conventional, low-temperature superconductors – if they want to recreate the same effects in a different material.

    As Zhi-Xun Shen, the researcher who led the study at Stanford, told Physics World, “This knowledge will ultimately help us make better superconductors in the future.”

    Featured image: A schematic illustration of an ARPES setup. On the left is the head-on view of the manipulator holding the sample and at the centre is the side-on view. On the right is an electron energy analyser. Credit: Ponor/Wikimedia Commons, CC BY-SA 4.0.

  • Anonymity in journalism and a conflict of ethics

    I wrote the following essay at the invitation of a journal in December 2020. (This was the first draft. There were additional drafts that incorporated feedback from a few editors.) It couldn’t be published because I had to back out of the commission owing to limitations of time and health. I formally withdrew my submission on April 11, 2022, and am publishing it in full below.


    Anonymity in journalism and a conflict of ethics

    Tiger’s dilemma

    I once knew a person, whom I will call Tiger, who worked with the Government of India. Tiger was in a privileged position within the government, not very removed from the upper echelons in fact, and had substantial influence on policies and programmes lying in their domain. (Tiger was not a member of any political party.) Tiger’s work was also commendable: their leadership from within the state had improved the working conditions of and opportunities for people in the corresponding fields, so much so that Tiger was generally well-regarded by their peers and colleagues around the country. Tiger had also produced high-quality work in their own domain – all of which I say here to indicate Tiger’s all-round excellence.

    But while Tiger ascended through government ranks, the Government of India itself was becoming more detestable – feeding communal discontentment, promoting pseudoscience, advancing crony capitalism and arresting or harassing dissidents. At various points in time, the actions and words of ministers and senior party leaders outright conflicted with the work and the spirit that Tiger and their department stood for – yet Tiger never spoke a word against the state or the party. The more objectionable the government’s actions grew, the more conspicuous Tiger’s refusal to object became.

    I used to have trouble judging Tiger’s inaction because I had trouble settling a contest between two ethical loci: values versus outcomes. The question here was: in the face of a dire threat, such as a vengeful government, how much could I ask of my compatriots? It is undeniably crucial to join protests on the streets and demonstrate the strength of numbers – but if the government almost always responds by having police tear-gas protesters, or jail a few and keep them there on trumped-up charges under draconian laws for months on end, it becomes morally painful to insist that people join protests. I would wither under the burden of condemning anyone, but especially the less privileged, to such fates. (The more privileged, of course, can and should be expected to do more, and to fear the consequences of state viciousness less.)

    If Tiger had spoken up against the prime minister or any of the other offending ministers, Tiger would have lost their position within the government, could in fact have become persona non grata in the state’s eyes, and been earmarked for further disparagement. As symbols go, speaking up against an errant government is a powerful one – especially when it originates from a person like Tiger. However, speaking up would still only have been a symbol, not an outcome. If Tiger had stayed silent to retain their influential place within the government, there is a chance that Tiger’s department would have continued its good work. The implication here is that outcomes trump values.

    Then again, this presumes that the power of symbols is predictable, or even finite in any way, or that symbols are always inferior to action on the ground, so to speak. This need not be true. For example, if Tiger had spoken up, their peers could have been motivated to speak up as well, avalanching over time into a coordinated, collectivised refusal to cooperate with government initiatives that required their support. It is a remote possibility but it exists; more importantly, it is not for me to dismiss. And it is at least just as tempting to believe values trump outcomes, or certainly complement them.

    Now, depending on which relationship is true – values over outcomes or vice versa – we still have to contend with the same defining question before we can draw a line between whom to forgive and whom to punish. Put another way, when confronted with deadly force, how much can you ask of your compatriots? There can’t be shame in bending like grasses against a punishing wind, but at the same time someone somewhere must grow a spine. Then again, not everyone may draw the line between these two sides at the same place. This is useful context to consider issues surrounding anonymity and pseudonymity in journalism today.

    Anonymity in journalism

    Every now and then, The Wire and The Wire Science receive requests from authors to not have their names attached to their articles. In 2020, The Wire Science, which I edit, published at least three articles without a name or under a pseudonym. Anonymity as such has been common for much longer where government officials and experts are quoted saying sensitive things, and where individuals’ stories are worth sharing but their identities are not. It is nearly impossible to regulate journalism, without ‘breaking’ it, from anywhere but the inside. As evasive as this sounds, what is in the public interest is often too fragile to survive the same accountability and transparency we demand of government, or even what the law offers to protect. So the channels that compose and transport such information should be allowed to be as private as individual liberties and ethical obligations allow.

    Anonymity is, as a matter of principle, possible, and journalists (should) have the liberty, and also the integrity, to determine who deserves it. It may help to view anonymity as a duty instead of as a right. For example, we have all come across many stories this year in which reporters quoted unnamed healthcare workers and government officials to uncover important details of the Government of India’s response to the country’s COVID-19 epidemic. Without presuming to know the nature of the relationships between these ‘sources’ and the respective reporters, we can say they all likely share Tiger’s (erstwhile) dilemma: they are on the frontline and they are needed there, but if they speak up and have their identities known, they may lose their ability to stay there.

    The state of defence reporting in India could offer an important contrast. Unlike health (although this could be changing), India’s defence has always been shrouded in secrecy, especially on matters of nuclear weapons, terrorist plots, military installations, etc. Not too long ago, one defence reporter began citing unnamed sources to write up a series of articles about a new chapter of terrorist activities in India’s north. A mutual colleague at the time told me he was unsettled by the series: while unnamed sources are not new, the colleague explained, this reporter almost never named anyone – except perhaps those making banal statements.

    Many health-related institutions and activities in India need to abide by the requirements of the Right to Information Act, but defence has few such obligations. In such cases, the consumers of journalism – the people at large – have no way to ascertain the legitimacy of such reports, and in fact no option but to trust the reporter. But this doesn’t mean the reporter can do what they wish; there are some simple safeguards to prevent mistakes. One as ubiquitous as it is effective is to allow an offended party in the story to defend itself, with some caveats.

    A notable example from the last decade was the 2014 Rolling Stone investigation of an alleged rape on the University of Virginia campus. The reporter had trusted her source and hidden her identity in the article, using only the mononym ‘Jackie’. Jackie had alleged that she had been raped by a group of men during a fraternity party. However, other reporters subsequently noticed a series of inconsistencies that quickly snowballed into the alarming revelation that Jackie had probably fabricated the incident, and Rolling Stone had missed it. In this case, Rolling Stone itself claimed to have been duped, but managing editor Will Dana’s note to readers, published after a formal investigation had wound up, contains a telling passage:

    “In trying to be sensitive to the unfair shame and humiliation many women feel after a sexual assault, we made a judgment – the kind of judgment reporters and editors make every day. We should have not made this agreement with Jackie and we should have worked harder to convince her that the truth would have been better served by getting the other side of the story.”

    Another ‘defence’ is rooted in news literacy: as a reader, try when you can to consider multiple accounts of a common story, as reported by multiple outlets, and look for at least one independently verifiable detail. There should be something; if there isn’t, consider it a signal that the story is at best located halfway between truth and fiction, awaiting judgment. Fortunately (in a way), science, environment and health stories frequently pass this test – or at least they used to. While an intrepid Business Standard reporter might have tracked down a crucial detail by speaking to an expert who wished to remain unnamed, someone at The Wire or The Hindu, or an enterprising freelance journalist, would soon have been able to get someone else on the record, or find a document in the public domain attesting to the truth of the claim.

    Identity as privilege

    I use the past tense because, since 2014, the Bharatiya Janata Party (BJP) – which formed the national government then – has been vilifying any part of science that threatens the mythical history the party has sought to construct for itself and for the nation. The BJP is the ideological disciple of the Rashtriya Swayamsevak Sangh and the Vishwa Hindu Parishad, and since the BJP’s ascent, members of groups affiliated with these organisations have murdered at least three anti-superstition activists and disrupted many a gathering of scholars, even as senior ministers in government have embarked on a campaign to erode scientific temper, appropriate R&D activities into the party’s communal programme, and degrade or destabilise the scope for research guided by researchers’ interests in favour of bureaucrats’.

    Under the party-friendly vice-chancellorship of M. Jagadesh Kumar, the Jawaharlal Nehru University in New Delhi has slid from being a national jewel to being blanketed in misplaced suspicions of secessionist activity. In January, students affiliated with the BJP’s student-politics wing went on a violent spree within the JNU campus, assaulting students and damaging university property, while Kumar did nothing to stop them. In November, well-known professors of the university’s school of physical sciences alleged that Kumar was interfering in unlawful ways in the school’s administration. Moushumi Basu, secretary of the teachers’ association, called the incident a first, since many faculty members had assumed Kumar wouldn’t interfere with the school of physical sciences, being a physical-sciences teacher himself.

    (Edit, April 11, 2022: Kumar was succeeded in February 2022 by Santishree Pandit, and at the end of the first week of April, members of the Akhil Bharatiya Vidyarthi Parishad assaulted JNU students on campus with stones over cooking non-vegetarian food on the occasion of Ram Navami.)

    Shortly before India’s COVID-19 epidemic really bloomed, the Union government revoked the licence of the Manipal Institute of Virology to use foreign money to support its stellar, but in India insufficiently supported, research on viruses, on charges that remain unclear. The party’s government has confronted many other institutes with similar fates – triggering a chilling effect among scientists and pushing them further into their ivory towers.

    In January 2020, I wrote about the unsettling case of a BJP functionary who had shot off an email asking university and institution heads to find out which of their students and faculty members had signed a letter condemning the Citizenship (Amendment) Act 2019. In the course of my reporting, I discovered two details useful to understanding the reasonable place of anonymous authorship in journalism. First, a researcher at one of the IISERs told me that the board of governors of their institute seemed to be amenable to the argument that since the institute receives funds via the education ministry (formerly the human resource development ministry), it does not enjoy complete autonomy. Second, while the Central Civil Services (Conduct) Rules 1964 do prevent employees of centrally funded institutions, including universities and research facilities, from commenting negatively on the government, they are vague at best about whether employees can protest on issues concerning their rights as citizens of the country.

    These two conditions together imply that state-funded practitioners of scientific activities – from government hospital coroners to spokespersons of billion-dollar research facilities, from PhD scholars to chaired professors – can be arbitrarily denied opportunities to engage as civilians on important issues concerning all people, even as their rights on paper seem straightforward.

    But even under unimaginable pressure to conform, I have found that many of India’s young scientists are still willing to – even insistent on – speaking up, joining public protests, writing and circulating forthright letters, championing democratic and socialist programmes, and tipping off journalists like myself to stories that need to be told. This makes my job as a journalist much easier, but I can’t treat their courage as permission to take advantage. They are still faced with threats whose full magnitude they may comprehend only later, or they may be unaware of methods that don’t require them to endanger their lives or careers.

    Earlier, postdoctoral scholars and young scientists may have been wary most of all of rubbing senior scientists the wrong way by, say, voicing concerns about a department or institute in the latter’s charge. Today, the biggest dangers facing them are indefinite jail time, police brutality and avoidance by institutes that may wish to stay on the party’s good side. (And this is speaking only of the more privileged male scientists; others have had it progressively worse.)

    Once again: how much can we ask of our compatriots? How much in particular can we ask of those who have reason to complain even as they stand to lose the most – the Dalits, the women, transgender people, the poor, the Adivasi, the non-English non-Hindi speakers, environmentalists, healthcare workers, migrant labourers, graveyard and crematorium operators, manual scavengers, the Muslims, Christians and members of other minority denominations, farmers and agricultural traders, cattle-rearers, and indeed just about anyone who is not male, rich and Brahmin? All of these people have stories worth sharing, yet their identities have been increasingly isolated, stigmatised and undermined. All of these people, young scientists included, thus deserve to be quoted or published anonymously or pseudonymously – or their views may never be heard.

    Paying the price of fiction

    There are limitations, of course, and this is where ethical and responsible journalism can help. It is hard to trust an anonymous Twitter user issuing scandalous statements about a celebrity, and even harder to trust an anonymous writer laying claim to the credibility that comes with identifying as a scientist while making unsubstantiated claims about other scientists – as necessary as such a tactic may seem to be. The safest and most responsible way forward is for a ‘source’ to work with a journalist such that the journalist tells the story, with the source supplying one set of quotes. This way, the source’s account will enjoy the benefit of being located in a journalistic narrative, in the company of other viewpoints, before it is broadcast. The journalist’s fundamental role here is to rescue doubts about one’s rights from the grey area they occupy in the overlap between India’s laws and the wider political context.

    However, it is often also necessary to let scientists, researchers, professors, doctors, etc. say what they need to say themselves, so that they may bring to bear the full weight of their authority as well as the attitudes they don as topical experts. There is certainly a difference between writing about Pushpa Mittra Bhargava’s statements on the one hand and allowing Pushpa Mittra Bhargava to express himself directly on the other. Another example, one that doesn’t appeal to celebrity culture (such as it is in the halls of science!), is to let a relatively unknown but surely qualified epidemiologist write a thousand words, in the style and voice of their choice, about, say, the BJP’s attempts to communalise the epidemic. The message here is contained within the article’s arguments as well as in the writer’s credentials – but again, not necessarily in the writer’s religious or ethnic identity. Or, as the case may be, in their identity as powerless young scientists.

    Ultimately, the most defensible request for anonymity is one backed by evidence of a reasonable risk of injury – physical or otherwise – and the BJP government has been steadily increasing this risk since 2014. Then again, none of this means those who have already received licence to write anonymously or pseudonymously also have licence to shoot their mouths off. Journalists have a responsibility to be as selective as they reasonably can in identifying those who deserve to have their names hidden – with, for example, at least two editors signing off on the request instead of the commissioning editor alone – and to remind those who are selected that the protection they have received is only for the performance of a necessary duty. Anonymity or even pseudonymity introduces one fiction into the narrative, and all fictions, no matter how trivial, are antithetical to narratives that offer not just important knowledge but also a demonstration of what good journalism is capable of. So it is important not to see this device as a reason for the journalist to invent more excuses to leave out or obfuscate yet other details in the name of fear or privacy. In fact, the inclusion of one fiction should force every other detail in the narrative to be that much more self-evidently true.

    Though some authors may not like it, the decision to grant anonymity must also be balanced against the importance and uniqueness of the article in question. While anonymity may grant a writer the freedom to not pull their punches, the privilege also foists more responsibility on the editor to ensure it is being granted for something that is in the public interest and can’t be obtained through any other means. One particular nuance is important here: the author should convince the editor that they are compelled to speak up – anonymity shouldn’t be the only reason the article is being written. Otherwise, anonymity or pseudonymity easily become excuses to fire from behind the publication’s shoulders. This may seem like a crude calculus, but it lies firmly in the realm of due diligence.

    We may not be able to ask too much of our compatriots, but it is necessary to make sure the threats facing them are real and that they will not attempt to gain unfair advantages. In addition: the language must at all points be civil and devoid of polemic; every claim and hypothesis must be substantiated to the extent possible; if the author has had email or telephone conversations with other people, the call records and reporting notes must be preserved; and the author can’t say anything substantial that doesn’t require their identity to be hidden. The reporter or the editor should include in the article the specific reason anonymity has been granted. Finally, the commissioning editor reserves the right to back out of the arrangement any time they become unsure. This condition simply reflects the author’s responsibility to convince the editor of the need for anonymity, even if specific details may never make it to the copy.

    At the same time, in times as fraught as ours, it may be unreasonable to expect reporters and editors to never make a mistake, even one of Rolling Stone proportions (although I admit the Columbia University report on Rolling Stone’s failure is unequivocal in its assessment that every mistake the magazine made was avoidable). The straightforward checks that journalists employ to weed out as many mistakes as possible can never be 100% perfect, particularly during a pandemic of a new virus. Some mistakes can be discovered only in hindsight – such as when one needs to prove a negative, or when a journalist caught between the views of two accomplished scientists realises only later which view was right.

    Instead, we should expect those who make mistakes to be prompt, honest and reflective about them – especially smaller organisations that can’t yet afford independent fact-checkers. A period in which anonymous authorship is becoming more necessary, irrespective of its ad hoc moral validity, ought also to be a period in which newsroom managers and editors treat mistakes not as cardinal sins but as opportunities to strengthen the compact with their readers. One simple first step is to acknowledge post-publication corrections and modifications with a note plus a timestamp. Because, let’s face it: journalists are duty-bound to work through the same doubts, ambiguities and fears that punctuate their stories.

  • Better nuclear fusion – thanks to math from biology

    There’s an interesting new study, published on February 23, 2022, that discusses a way to make nuclear fusion devices called stellarators more efficient by applying equations borrowed from, of all places, systems biology.

    The Wikipedia article about stellarators is surprisingly well-written; I’ve often found that I’ve had to bring my undergraduate engineering lessons to bear to understand the physics articles. Not here. Let me quote at length from the sections describing why physicists need stellarators, which also serves to explain how these machines work.

    Heating a gas increases the energy of the particles within it, so by heating a gas into hundreds of millions of degrees, the majority of the particles within it reach the energy required to fuse. … Because the energy released by the fusion reaction is much greater than what it takes to start it, even a small number of reactions can heat surrounding fuel until it fuses as well. In 1944, Enrico Fermi calculated the deuterium-tritium reaction would be self-sustaining at about 50,000,000 °C.

    Materials heated beyond a few tens of thousand degrees ionize into their electrons and nuclei, producing a gas-like state of matter known as plasma. According to the ideal gas law, like any hot gas, plasma has an internal pressure and thus wants to expand. For a fusion reactor, the challenge is to keep the plasma contained. In a magnetic field, the electrons and nuclei orbit around the magnetic field lines, confining them to the area defined by the field.

    A simple confinement system can be made by placing a tube inside the open core of a solenoid.

    A solenoid is a wire in the shape of a spring. When an electric current is passed through the wire, it generates a magnetic field running through the centre.
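    For a sense of the numbers involved, the field inside an ideal solenoid is B = μ₀nI, where n is the number of turns of wire per metre and I is the current. Here’s a quick sketch – the turn density and current are illustrative values I’ve picked, not figures from any actual device:

    ```python
    # Field inside an ideal solenoid: B = mu0 * n * I.
    # The turn density and current below are illustrative, not from a real device.
    MU0 = 1.25663706e-6  # vacuum permeability, in T·m/A

    def solenoid_field(turns_per_metre: float, current_amps: float) -> float:
        """Magnitude of the uniform field along the solenoid's axis, in tesla."""
        return MU0 * turns_per_metre * current_amps

    print(solenoid_field(1000, 10))  # ~0.0126 T at 1,000 turns/m and 10 A
    ```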

    The tube can be evacuated and then filled with the requisite gas and heated until it becomes a plasma. The plasma naturally wants to expand outwards to the walls of the tube, as well as move along it, towards the ends. The solenoid creates magnetic field lines running down the center of the tube, and the plasma particles orbit these lines, preventing their motion towards the sides. Unfortunately, this arrangement would not confine the plasma along the length of the tube, and the plasma would be free to flow out the ends.

    The obvious solution to this problem is to bend the tube around into a torus (a ring or donut) shape.

    A nuclear fusion reactor of this shape is called a tokamak.

    Motion towards the sides remains constrained as before, and while the particles remain free to move along the lines, in this case, they will simply circulate around the long axis of the tube. But, as Fermi pointed out, when the solenoid is bent into a ring, the electrical windings would be closer together on the inside than the outside. This would lead to an uneven field across the tube, and the fuel will slowly drift out of the center. Since the electrons and ions would drift in opposite directions, this would lead to a charge separation and electrostatic forces that would eventually overwhelm the magnetic force. Some additional force needs to counteract this drift, providing long-term confinement.

    [Lyman] Spitzer’s key concept in the stellarator design is that the drift that Fermi noted could be canceled out through the physical arrangement of the vacuum tube. In a torus, particles on the inside edge of the tube, where the field was stronger, would drift up. … However, if the particle were made to alternate between the inside and outside of the tube, the drifts would alternate between up and down and would cancel out. The cancellation is not perfect, leaving some net drift, but basic calculations suggested drift would be lowered enough to confine plasma long enough to heat it sufficiently.

    These calculations are not simple, because this is how a stellarator can look:

    When a stellarator is operating and nuclear fusion reactions are underway, impurities accumulate in the plasma. These include ions that have formed but can’t fuse with other particles, and atoms that have entered the plasma from the reactor lining. These pollutants are typically found in the plasma’s outer layer.

    An additional device called a diverter is used to remove them. The heavy ions that form in the reactor plasma are also called ‘fusion ash’, and the diverter is the ashtray.

    It works like a pencil sharpener. The graphite is the plasma and the blade is the diverter. It scrapes off the wood around the graphite until the latter is fully exposed and clean. But accomplishing this inside a stellarator is easier said than done.

    In the image above, let’s isolate just the plasma (yellow stuff), slice a small section of it and look at it from the side. Depending on the shape of the stellarator, it will probably look like a vertical ellipse, an elongated egg – a blob, basically. By adjusting the magnetic field near the bottom of the stellarator, operators can change the shape of the plasma there to pinch off its bottom, making the overall shape more like an inverted droplet.

    At the bottom-most point, called the X-point, the magnetic field lines shaping the plasma intersect with each other. More precisely, some field lines intersect each other while others approach one another without fully criss-crossing; the latter are in contact with the surface of the reactor. (In the image below, the boundary between these two layers of the plasma is called the separatrix.)

    Diverter plates are installed near this crossover point to ‘drain’ the plasma moving along the non-intersecting field lines.

    In the new study, physicists addressed the problem of diverter overheating. The heat removed at the diverter is considered ‘waste’, not a part of the fusion reactor’s output. The diverter’s primary purpose is to take away the impure plasma, so the cooler the plasma that reaches it, the longer the diverter will be able to operate without replacement.

    The researchers used the Large Helical Device in Gifu, Japan, to conduct their tests. It is the world’s second-largest stellarator (the largest is the Wendelstein 7-X). Their solution was to stop heating the plasma just before it hit the diverter plates, to allow the ions and electrons to recombine into atoms. The energy of the recombined atom is lower than that of the free ions and electrons, so less heat reaches the diverter plates.

    How to achieve this cooling? There were different options, but the physicists resorted to arranging additional magnetic coils around the stellarator such that, just before the plasma hit the diverter, its periphery would detach into a smaller blob that, being separated from the overall plasma, could cool. These smaller blobs are called magnetic islands.

    When they ran tests with the Large Helical Device, they found that the diverter removed heat from the plasma chamber in short bursts, instead of continuously. They interpreted this to mean the magnetic islands didn’t exist in a steady state but attached to and detached from the plasma at a regular frequency. The physicists also found that they could model the rate of attachment using the so-called predator-prey equations.

    These are the famous Lotka-Volterra equations. They describe how the populations of two species – one predator and one prey – vary over time. Say we have a small ecosystem in which crows feed on worms. As they do, the crow population increases, but due to overfeeding, the population of worms dwindles. This forces the crow population to shrink as well. But once there are fewer crows around, the number of worms increases again, which then allows more crows to feed on worms and become more populous. And so the cycle goes.
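    In their standard form – writing x for the prey (the worms) and y for the predators (the crows) – the equations read:

    $$\frac{dx}{dt} = \alpha x - \beta xy, \qquad \frac{dy}{dt} = \delta xy - \gamma y$$

    Here α is the rate at which the prey multiply, β the rate at which they are eaten, δ the rate at which the predators multiply per prey consumed, and γ the rate at which the predators die off. The solutions are two staggered oscillations, with the predator population rising and falling a beat behind the prey’s.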

    Similarly, the researchers found that the Lotka-Volterra equations (with some adjustments) could model the attachment frequency if they assumed the magnetic islands to be the predators and an electric current in the plasma to be the prey. This current, the result of electrons moving around in the plasma, is what the authors call a “bootstrap current”.

    When the strength of the bootstrap current increases, the magnetic island expands. At the same time, the confining magnetic field resists the expansion, forcing the current to dwindle. This allows the island to shrink as well, receding from the field. But then this allows the bootstrap current to increase once more to expand the island. And so the cycle goes.
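    To see this cycle play out numerically, here is a minimal sketch – emphatically not the researchers’ model; the equations are the plain Lotka-Volterra pair and every rate constant is an arbitrary choice of mine – with the bootstrap current as the prey and the island’s size as the predator:

    ```python
    # Toy predator-prey dynamics for the island/current analogy.
    # prey     -> strength of the bootstrap current
    # predator -> size of the magnetic island
    # All rate constants are illustrative, not fitted to the Large Helical Device.
    import numpy as np
    from scipy.integrate import solve_ivp

    ALPHA, BETA = 1.0, 0.5   # current growth; suppression by the island
    DELTA, GAMMA = 0.4, 0.6  # island growth fed by the current; island decay

    def island_current(t, state):
        current, island = state
        return [ALPHA * current - BETA * current * island,
                DELTA * current * island - GAMMA * island]

    sol = solve_ivp(island_current, (0, 50), [1.0, 0.5],
                    t_eval=np.linspace(0, 50, 2000))

    # Estimate the cycle's period from successive peaks in the island size -
    # the analogue of the attach-detach frequency measured in the experiment.
    island = sol.y[1]
    peaks = [i for i in range(1, len(island) - 1)
             if island[i - 1] < island[i] > island[i + 1]]
    print("period =", np.mean(np.diff(sol.t[peaks])), "(arbitrary units)")
    ```

    Plotting sol.y[0] and sol.y[1] against sol.t shows the two staggered oscillations; changing the four rate constants changes the frequency – which, with real coils and currents in place of made-up parameters, is essentially the control knob the researchers are after.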

    The researchers reported in their paper that while they observed a frequency of 40 Hz (i.e. 40 attach-detach cycles per second) in the Large Helical Device, the equations on paper predicted a frequency of around 20 Hz. However, they have interpreted this to mean there is “qualitative agreement” between their idea and their observation. They also wrote that they expect the numbers to align once they fine-tune their math to account for various other specifics of the stellarator’s operation.

    They eventually aim to find a way to control the attachment rate so that the diverters can operate for as long as possible – and at the same time take away as much ‘useless’ energy from the plasma as possible.

    I also think that, ultimately, it’s a lovely union of physics, mathematics, biology and engineering. This is thanks in part to the Lotka-Volterra equations, which are a specific instance of the more general Kolmogorov model of interacting populations. Kolmogorov’s name is also attached to a framework of equations and principles that describes how a stochastic process evolves in time – a stochastic process being simply one that depends on variables whose values change randomly.

    In 1931, the Soviet mathematician Andrei Kolmogorov described two kinds of stochastic processes. In 1949, the Croatian-American mathematician William Feller described them thus:

    … the “purely discontinuous” type of … process: in a small time interval there is an overwhelming probability that the state will remain unchanged; however, if it changes, the change may be radical.

    … a “purely continuous” process … there it is certain that some change will occur in any time interval, however small; only, here it is certain that the changes during small time intervals will be also small.

    Kolmogorov derived a pair of ‘forward’ and ‘backward’ equations for each type of stochastic process, depending on the direction of evolution we need to understand. Together, these four equations have been adapted to a diverse array of fields and applications – including quantum mechanics, financial options and biochemical dynamics.
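    For the ‘purely continuous’ type – a diffusion whose state x drifts at a rate μ(x, t) and jitters with a noise strength σ(x, t) – the forward equation is better known today as the Fokker-Planck equation. For the probability density p(x, t) of the process, it reads:

    $$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\big[\mu(x,t)\,p\big] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\big[\sigma^2(x,t)\,p\big]$$

    The backward equation asks the reverse question – fix where the process must end up, and track, over earlier times s, the expected value u(x, s) of some function of that final state:

    $$-\frac{\partial u}{\partial s} = \mu(x,s)\,\frac{\partial u}{\partial x} + \frac{1}{2}\sigma^2(x,s)\,\frac{\partial^2 u}{\partial x^2}$$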

    Featured image: Inside the Large Helical Device stellarator. Credit: Justin Ruckman, Infinite Machine/Wikimedia Commons, CC BY 2.0.