Month: December 2022

  • Something more foolish than completing phase 3 trials in 1.5 months?

    That the Union government and the Indian Space Research Organisation (ISRO) had entered into a more intimate, but not necessarily more beneficial, relationship became evident in 2019, when then ISRO chairman K. Sivan trotted out a series of dubious claims to massage the narrative around the Chandrayaan 2 mission, whose lunar surface component had obviously failed. Anyone who follows Indian spaceflight news is familiar with the adage ‘space is hard’, and all of them abide by it (there’s an argument that we shouldn’t extend the same courtesy to more mature space programmes). Yet Sivan was determined to salvage even more, going so far at one point as to call the whole mission (orbiter + lander) a “98% success”.

    Shortly after news of the lander’s fate became clear to ground control, Prime Minister Narendra Modi, who was present as the chief guest, consoled Sivan with his customary hug even as ISRO at large withdrew into a shell of silence, offering only the occasional scrap of what it knew had happened to the lander. The vacuum of information allowed a trickle of speculation, which was soon overwhelmed by a swell of conspiracies and, as is inevitable these days, a virtual barrier erected by right-wing commentators and bots that suppressed all questions asking for more information in the public domain. This ISRO, and the attendant public experience of India’s spaceflight programme, was markedly different from the ISRO of before – a feeling that Sivan deepened with other claims about the amount of time ISRO would need to realise its ‘Gaganyaan’ human spaceflight mission, which has already been delayed by three years. Perhaps Sivan had unknowingly underestimated the time required, perhaps he deliberately communicated a shorter duration, perhaps he communicated the actual estimate only for government officials to reject it – or something else happened. The first possibility would’ve been unlikely were it not for the COVID-19 pandemic – but then it would seem that even if Sivan’s successor, S. Somanath, were to push back and ask for more time, the government has made up its mind: New Indian Express reported on December 8 that ISRO had received “instructions from the government” to send Indian astronauts to space on its GSLV Mk III rocket before the 2024 Lok Sabha elections! This has to be the second most unintelligent decision the government has made in the limited context of large-scale undertakings involving science and the lives of people, after Balram Bhargava’s subsequently rescinded threat in mid-2020 for researchers to complete the Covaxin phase 3 clinical trial in time for Prime Minister Modi’s Independence Day address less than two months away.
It’s not clear if the government will rescind its demand of ISRO; the report itself is brief and doesn’t mention any resistance from the spaceflight mission team. But how this squares with minister Jitendra Singh’s statement in parliament last week – that the first crewed mission will only lift off in late 2024 and that “crew safety is paramount” – is unclear. Assuming that the government will continue to push ISRO to launch in the first half of 2024, a flight based on a schedule modified to accommodate the demand may surpass the foolishness of Bhargava’s ask.

    Every human spaceflight mission is inordinately complex. ISRO will have to design and test every component of the launch vehicle, crew capsule, mission profile, ground systems and crew management beforehand, in different conditions. It has to anticipate all possible failure scenarios and arrange for both failure-avoidance systems and failsafes. The timeline may have been more flexible in the early days of the undertaking, when the systems being tested were less composite, but not so today. When the government “instructs” ISRO to launch the ‘Gaganyaan’ crewed flight before the 2024 Lok Sabha elections (which are around 18 months away), it’s practically asking ISRO to devise a testing schedule that will be completed – irrespective of the tests’ outcomes – in this period, all so it can use the mission’s outcomes (developed with government funds) as part of its election campaign. It’s effectively asking ISRO to sideline science, safety standards and good sense. Imagine a safety test going awry – one that ISRO might, in other circumstances, have liked to fix and redo. With “instructions” like those of the government, it won’t be able to – jeopardising the mission itself as well as the lives of the astronauts and the reputation of the Indian space programme in the international arena. The government simply shouldn’t make such a frighteningly asinine demand, and should instead allow ISRO to take all the time it needs (within reasonable limits) to successfully complete its first human spaceflight mission.

    ISRO has of late also embarked on programmes to increase its commercial revenue, even though it’s a “space research organisation”. If a crewed mission fails because the organisation let itself be cowed by the national government into trimming its testing process, all so a political party could use the launch as part of its poll propaganda, all of the organisation’s other rockets will confront doubts about their safety and whether they might threaten satellites worth hundreds of millions of dollars. A lot of ISRO’s work on ‘Gaganyaan’ has also happened to the exclusion of other launch vehicles and scientific missions, including (but not limited to) the reusable launch vehicle, the semi-cryogenic engine and the Aditya L1 space-probe. Its low rate of production of new rockets recently forced it to postpone the Chandrayaan 3 mission to accommodate the OneWeb satellites (in a commercial contract) in its launch manifest. Setting aside questions of ISRO’s relatively low funding and internal priorities, even if ‘Gaganyaan’ succeeds by sheer luck, these adversely affected projects will suffer at least further reputational consequences. If ‘Gaganyaan’ fails, the future will be a lot worse.

    Just as the Covaxin incident opened a window into how the Indian government was thinking about the COVID-19 vaccination drive and the role of science in shaping it, a demand that ISRO realise its human spaceflight mission by a hard deadline opens a window into the Indian government’s considerations on ‘Gaganyaan’. The BJP government revived ISRO’s proposal for a human spaceflight mission in 2014, approved it in 2017 and allocated Rs 10,000 crore in 2018. Did it do so only because of how the mission’s success, should it come to pass, would help the party win elections? It’s desirable for a party’s goals and the country’s goals to be aligned – until the former crimps the latter. But more importantly, should we be concerned about the government’s heuristic for selecting and rejecting which spaceflight missions to fund? And should we be concerned about which publicly funded projects it will seek more accountability on?

    There have been standing committee and audit reports calling ISRO out for slow work on this or that matter but the government at large, especially the incumbent one since 2019, has taken pains to maintain a front of amicability. It might be mildly amusing if a political party promises in its pre-poll manifesto to get ISRO in shape, and then in line, by readying a reusable launch vehicle for commercial missions by 2025 or launching five scientific missions in the next four years – but achieving that would take more than a knack for translating between public sentiment and technological achievement. It would require breaking a longstanding tradition of cosying up to ISRO, granting it autonomy while simultaneously underfunding it. We need the national government, most of all, to pay more attention to all ISRO projects on which there is evidence of dilly-dallying, and grapple honestly with the underlying issues, rather than poke its nose into the necessarily arduous safety-rating process of a crewed mission.

    Featured image: A GSLV Mk III rocket lifts off on its first orbital flight, July 2017. Credit: ISRO.

  • 2022 in retrospect

    2022 has been my worst year on record on several fronts. I had COVID-19 in the first month, which did a freaky number on my heart. My unexplained weight loss from 2021 continued its run into six months of 2022 before stopping suddenly, although all test reports came back normal and doctors were stumped. Bharat Biotech sued me and many others for defamation in a Telangana court. Stress and humiliation scaled new heights. Efforts to get a grip on my depression came to naught. Two long-term relationships came to an unexpected end. The blog was infiltrated once, possibly once more. I lost access to a decade-old email account and, for some time, to my blog’s domain. I didn’t write as much as I would have liked, leading to the fewest words published in a year since 2016 (see the customary annual numbers below). On the bright side, I have many new beginnings to which I’m looking forward. I have officially blogged a million words (point #8 here is why this matters). It has been four years since I quit The Thing. I’ll be starting at The Hindu on January 2. I’ll be in Chennai again after five years and closer to some friends I have missed very much. (I’ll be speaking Tamil on a daily basis!) My health is getting better. I have room for new relationships and habits. I’m quite looking forward to 2023, and looking forward to looking back on 2022. I hope you have a wonderful year, too. Mask up, be good, and thanks for reading. 🙂

    Made with Datawrapper.
  • The identity of scientific papers

    This prompt arose in response to Stuart Ritchie’s response to a suggestion in an editorial “first published last year but currently getting some attention on Twitter” – that scientists should write their scientific papers as if they were telling a story, with a beginning, middle and end. The act of storytelling produces something entertaining by definition, but it isn’t the same as when people build stories around what they know. That is, people build stories around what they know but that knowledge, when it is first produced, isn’t and in fact can’t be reliably produced through acts of storytelling. This is Ritchie’s point, and it’s clearly true. As Ash Jogalekar commented on Twitter, on Ritchie’s post:

    (This is different from saying scientific knowledge shouldn’t be associated with stories – or that only it should be, a preference that philosopher of science Robert P. Crease calls “scientific gaslighting”.)

    Ritchie’s objection arises from a problematic recommendation in the 2021 editorial: that when writing their papers, scientists present the “take-home messages” first, then “select” the methods and results that produced those messages, and then conclude with an introduction-discussion hybrid. To Ritchie, scientists writing their papers face little resistance to cherry-picking from their data to support predetermined conclusions, other than their own integrity. This is perfectly reasonable, especially considering how the absence of such resistance manifested in science’s sensational replication crisis.

    But are scientific papers congruent with science itself?

    The 2021 editorial’s authors don’t do themselves any favours in their piece, writing:

    “The scientific story has a beginning, a middle, and an end. These three components can, and should, map onto the typical IMRaD structure. However, as editors we see many manuscripts that follow the IMRaD structure but do not tell a good scientific story, even when the underlying data clearly can provide one. For example, many studies present the findings without any synthesis or an effort to place them into a wider context. This limits the reader’s ability to gain knowledge and understanding, hence reducing the papers impact.”

    Encouraging scientists to do such things as build tension and release it with a punchline, say, could be a recipe for disaster. The case of Brian Wansink in fact fits Ritchie’s concerns to a T. In the most common mode of scientific publishing today, narrative control is expected to lie beyond scientists – and (coming from a science journalist) lies with science journalists. Or at least: the opportunities to shape science-related narratives are available in large quantities to us.

    A charitable interpretation of the editorial is that its authors would like scientists to take a step that they believe to be marginal (“right there,” as they say) in terms of the papers’ narratives but which has extraordinary benefits – but I’m disinclined. Their words hew frustratingly but unsurprisingly close to suggesting that scientists’ work isn’t properly represented in the public imagination. The most common suggestions I’ve encountered in my experience are that science journalists don’t amplify the “right” points and that they dwell on otherwise trivial shortcomings. The criticisms generally disregard the socio-political context in which science operates and to which journalists are required to be attuned.

    This said, and as Ritchie also admits, the scientific paper itself is not science – so why can’t it be repurposed to ends that scientists are better off meeting than one that’s widely misguided? Ritchie writes:

    “Science isn’t a story – and it isn’t even a scientific paper. The mere act of squeezing a complex process into a few thousand words … is itself a distortion of reality. Every time scientists make a decision about “framing” or “emphasis” or “take-home messages”, they risk distorting reality even further, chipping away at the reliability of what they’re reporting. We all know that many science news articles and science books are over-simplified, poorly-framed, and dumbed-down. Why push scientific papers in the same direction?”

    That is, are scientific papers the site of knowledge production? With the advent of preprint papers, research preregistration and open-data and data-sharing protocols, many papers of today are radically different from those a decade or two ago. Especially online, and on the pages of more progressive journals like eLife, papers are accompanied by peer-reviewers’ comments, links to the raw data (code as well as multimedia), ways to contact the authors, a comments section, a ready-reference list of cited papers, and links to other articles that have linked to it. Sometimes some papers deemed to be more notable by a journal’s editors are also published together with commentary by an independent scientist on the papers’ implications for the relevant fields.

    Scientific papers may have originated as, and for a long time have been, the ‘first expression’ of a research group’s labour to produce knowledge, and thus perfectly subject to Ritchie’s concerns about transforming them to be more engaging. But today, given the opportunities that are available in some pockets of research assessment and publishing, they’re undeniably the sites of knowledge consumption – and in effect the ‘first expression’ of researchers’ attempts to communicate with other scientists as well as, in many cases, the public at large.

    It’s then effectively down to science journalists – and the resistance offered by their integrity – to report on papers responsibly, although even then we should beware the “seduction of storytelling”.

    I think the 2021 editorial is targeting the ‘site of knowledge consumption’ identity of the contemporaneous scientific paper, and offers ways to engage its audience better. But when the point is to improve it, why continue to work with, in Ritchie’s and the editorial’s words, a “journal-imposed word count” and structure?

    A halfway point between the editorial’s recommendations and Ritchie’s objections (in his post, but more in line with his other view that we should do away with scientific papers altogether) is to publish the products of scientific labour taking full advantage of what today’s information and communication technologies allow: no paper per se, but a concise description of the methods and the findings, an explicitly labelled commentary by the researchers, the raw code, multimedia elements with tools to analyse them in real-time, replication studies, even honest (and therefore admirable) retraction reports if they’re warranted. The commentary can, in the words of the editorial, have “a beginning, a middle and an end”; and in this milieu, in the company of various other knowledge ‘blobs’, readers – including independent scientists – should be able to tell straightforwardly if the narrative fits the raw data on offer.

    All this said, I must add that what I have set out here are far from where reality is at the moment; in Ritchie’s words,

    “Although those of us … who’ve been immersed in this stuff for years might think it’s a bit passé to keep going on about “HARKing” and “researcher degrees of freedom” and “p-hacking” and “publication bias” and “publish-or-perish” and all the rest, the word still hasn’t gotten out to many scientists. At best, they’re vaguely aware that these problems can ruin their research, but don’t take them anywhere near seriously enough.”

    I don’t think scientific papers are co-identifiable with science itself, or they certainly needn’t be. The latter is concerned with reliably producing knowledge of increasingly higher quality while the former explains what the researchers did, why, when and how. Their goals are different, and there’s no reason the faults of one should hold the other back. However, a research communication effort that has completely and perfectly transitioned to embodying the identity of the modern research paper (an anachronism) as the site of, among other things, knowledge consumption is a long way away – but it helps to bear it in mind, to talk about it and to improve it.

  • New LHC data puts ‘new physics’ lead to bed

    One particle in the big zoo of subatomic particles is the B meson. It has a very short lifetime once it’s created. In rare instances it decays to three lighter particles: a kaon, a lepton and an anti-lepton. There are many types of leptons and anti-leptons; two are electrons/anti-electrons and muons/anti-muons. According to the existing theory of particle physics, the two pairs should appear as decay products with equal probability: a B meson should decay to a kaon, an electron and an anti-electron as often as it decays to a kaon, a muon and an anti-muon (after adjusting for mass, since the muon is heavier).

    In the last 13 years, physicists studying B meson decays had found on four occasions that it decayed to a kaon, electron and anti-electron more often. They were glad for it, in a way. They had worked out the existing theory, called the Standard Model of particle physics, from the mid-20th century in a series of Nobel Prize-winning papers and experiments. Today, it stands complete, explaining the properties of a variety of subatomic particles. But it still can’t explain what dark matter is, why the Higgs boson is so heavy or why there are three ‘generations’ of quarks, not more or less. If the Standard Model is old physics, particle physicists believe there could be a ‘new physics’ out there – some particle or force they haven’t discovered yet – which could really complete the Standard Model and settle the unresolved mysteries.

    Over the years, they have explored various leads for ‘new physics’ in different experiments, but eventually, with more data, the findings have all been found to be in line with the predictions of the Standard Model. Until 2022, the anomalous B meson decays were thought to be a potential source of ‘new physics’ as well. A 2009 study in Japan found that some B meson decays created electron/anti-electron pairs more often than muon/anti-muon pairs – as did a 2012 study in the US and a 2014 study in Europe. The last one involved the Large Hadron Collider (LHC), operated by the European Organisation for Nuclear Research (CERN) in France, and a detector on it called LHCb. Among other things, the LHCb tracks B mesons. In March 2021, the LHCb collaboration released data qualitatively significant enough to claim ‘evidence’ that some B mesons were decaying to electron/anti-electron pairs more often than to muon/anti-muon pairs.

    But the latest data from the LHC, released on December 20, appears to settle the question: it’s still old physics. The formation of different types of lepton/anti-lepton particle pairs with equal probability is called lepton-flavour universality. Since 2009, physicists had been recording data that suggested some B meson decays were violating lepton-flavour universality – in the form of a previously unknown particle or force acting on the decay process. In the new data, physicists analysed B meson decays in the current pathway as well as one other, each at two different energy levels – thus, as the official press release put it, “yielding four independent comparisons of the decays”. The more data there is to compare, the more robust the findings will be.
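To get an intuition for what such a comparison involves, here is a hedged toy sketch: the decay counts below are invented, and a real LHCb analysis involves efficiency corrections, backgrounds and likelihood fits far beyond this.

```python
import math

# Toy lepton-flavour-universality test: compare how often a B meson decays
# to the muon mode versus the electron mode. The counts are made up.

def universality_ratio(n_muon: int, n_electron: int) -> tuple[float, float]:
    """Ratio of muon-mode to electron-mode decay counts, with a crude
    Poisson error propagated in quadrature."""
    r = n_muon / n_electron
    rel_err = math.sqrt(1 / n_muon + 1 / n_electron)
    return r, r * rel_err

# The Standard Model predicts a ratio of ~1 (after mass corrections).
# A value consistent with 1 within its uncertainty, as in the 2022 data,
# means no sign of 'new physics' in this channel.
r, err = universality_ratio(n_muon=980, n_electron=1000)
print(f"R = {r:.3f} +/- {err:.3f}")  # consistent with 1
```

The interesting part is the uncertainty: with only hundreds of rare decays per mode, a few-percent deviation from 1 is indistinguishable from statistical noise, which is why these anomalies took over a decade to resolve.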

    This data was collected over the last five years. Every time the LHC operates, it’s called a ‘run’. Each run generates several terabytes of data that physicists, with the help of computers, comb through in search of evidence for different hypotheses. The data for the new analysis was collected over two runs. And it led physicists to conclude that B meson decays do not violate lepton-flavour universality. The Standard Model still stands and, perhaps equally importantly, a 13-year-old ‘new physics’ lead has been returned to dormancy.

    The LHC is currently in its third run; scientists and engineers working with the machine perform maintenance and install upgrades between runs, so each new cycle of operations is expected to produce more as well as more precise data, leading to more high-precision analyses that could, physicists hope, one day reveal ‘new physics’.

  • Notes on the NIF nuclear fusion breakthrough

    My explainer/analysis of the US nuclear fusion breakthrough was published today. Some stuff didn’t make it to the final draft due to space and tone constraints; I’m publishing that below.

    1. While most US government officials present at the announcement of the NIF’s results, including the president’s science advisor Arati Prabhakar (and with the exception of energy secretary Jennifer Granholm), were clear that a power plant was a long way off, they weren’t sufficiently clear that the road from the achievement to such a power station was neither well-understood nor straightforward even as they repeatedly invoked the prospect of commercial power production. LLNL director Kim Budil even said she expects the technology to be ready for commercialisation within five decades. Apart from overstating the prospect as a result, their words also created a stark contrast with how the US government has responded to countries’ demand for more climate financing and emissions cuts. It’s okay with playing up a potential source of clean energy that can only be realised well after global warming has shot past the Paris Agreement threshold of 1.5º C (if at all) but dances all around its contributions to the $100 billion fund it promised to pay into, and around demands to cut emissions – both within the country and in the form of investments around the world – before 2050.

    Also read: US fusion bhashan

    2. A definitive prerequisite for a fusion setup to have achieved ignition [i.e. the fusion yield being higher than the input energy] is the Lawson criterion, named for nuclear engineer John D. Lawson, who derived it in 1955. It stipulates a minimum value for the product of the ion density and the confinement time for different fuels. For the deuterium-tritium reaction mixture at the NIF, for example, the product must be at least 10¹⁴ s/cm³. In words, this means the temperature must be high enough for long enough to allow the ions to get closer to each other given they are packed densely enough, achieved by compressing the capsule that contains them. The Lawson criterion in effect tells us why high temperature and high pressure are prerequisites for inertial confinement fusion and why we can’t easily compromise on them on the road to higher gain.
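For intuition, the criterion reduces to a one-line product check. A minimal sketch – the threshold is from this note, while the sample densities and confinement times are invented illustrative values, not NIF figures:

```python
# Checking the Lawson criterion for a deuterium-tritium plasma.
LAWSON_DT_THRESHOLD = 1e14  # s/cm^3, minimum n*tau for the D-T mixture

def meets_lawson_criterion(ion_density_per_cm3: float, confinement_time_s: float) -> bool:
    """Return True if the density-time product clears the D-T threshold."""
    return ion_density_per_cm3 * confinement_time_s >= LAWSON_DT_THRESHOLD

# Inertial confinement trades a very short confinement time for very high density:
print(meets_lawson_criterion(1e25, 1e-10))  # dense, briefly confined plasma
# A magnetically confined plasma instead holds a thinner plasma for much longer:
print(meets_lawson_criterion(1e14, 2.0))
print(meets_lawson_criterion(1e14, 0.5))  # 5e13 s/cm^3 falls short
```

The two passing cases illustrate why both inertial and magnetic confinement are viable routes to the same criterion: one maximises density, the other time.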

    3. Mentions of “gain” in the announcement on December 13 referred to the scientific gain of the fusion test: the ratio of the fusion energy output to the energy delivered by the lasers. Its value is thus a reflection of the challenges of heating plasma, sources of heat loss during ignition and fusion, and increasing fusion yield. While government officials at the announcement were careful to note that the NIF result was a “scientific breakthrough”, other scientists told this correspondent that a scientific gain of 1 was a matter of time and that the real revolution would be a higher engineering gain. This is the ratio of the power supplied by an inertial confinement fusion power plant to the grid to the plant’s recirculating power – i.e. the power consumed to create, maintain and heat the fusion plasma and to operate other facilities. This metric is more brutal than the scientific gain because it includes the latter’s challenges as well as the challenges to reducing energy loss in electric engineering equipment.

    4. One plasma physicist likened the NIF’s feat to “the Kitty Hawk moment for the Wright brothers” to The Washington Post. But in a January 2022 paper, scientists from the US Department of Energy wrote that their “Kitty Hawk moment” would be the wall-plug gain reaching 1, instead of the scientific gain, for fusion energy. The wall-plug gain is the ratio of the power from fusion to the power drawn from the wall-plug to run the power plant.
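The three metrics in points 3 and 4 are all simple ratios that differ only in what counts as the ‘input’. A hedged sketch, using the widely reported shot figures of 3.15 MJ fusion yield against 2.05 MJ of laser energy; the ~300 MJ wall-plug draw is my rough assumption for illustration:

```python
# Three "gain" metrics for a fusion facility, distinguished by the denominator.

def scientific_gain(fusion_yield_mj: float, laser_energy_mj: float) -> float:
    # Fusion energy out / laser energy delivered to the target.
    return fusion_yield_mj / laser_energy_mj

def wall_plug_gain(fusion_yield_mj: float, wall_plug_energy_mj: float) -> float:
    # Fusion energy out / total energy drawn from the grid to fire the shot.
    return fusion_yield_mj / wall_plug_energy_mj

def engineering_gain(power_to_grid_mw: float, recirculating_power_mw: float) -> float:
    # Power a plant sends to the grid / the power it recirculates to create,
    # maintain and heat the plasma and run its other facilities.
    return power_to_grid_mw / recirculating_power_mw

# Why scientific gain > 1 is only a first step: the lasers themselves are
# inefficient, so the wall-plug gain can remain far below 1.
print(scientific_gain(3.15, 2.05))   # ~1.54: "ignition" in the scientific sense
print(wall_plug_gain(3.15, 300.0))   # ~0.01: nowhere near a power plant
```

The gap between the two printed numbers is the entire distance between the December 13 result and the DOE scientists’ “Kitty Hawk moment”.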

    5. The mode of operation of the inertial confinement facility at NIF is indirect-drive and uses central hotspot ignition. Indirect-drive means the laser pulses don’t directly strike the capsule holding the ions but the hohlraum holding the capsule. When the lasers strike the capsule directly, they need to do so as symmetrically as possible to ensure uniform compression on all sides. Any asymmetry leads to a Rayleigh-Taylor instability that rapidly reduces the yield. Achieving such pinpoint accuracy is quite difficult: the capsule is only 2 mm wide, so even a sub-millimetre deviation in a single pulse can tamp the output to an enormous degree. Once the laser pulses have heated up the hohlraum’s inside surface, the latter emits X-rays, which then uniformly compress and heat the capsule from all sides.

    A schematic of the laser, hohlraum and capsule setup for indirect-drive inertial confinement fusion at the National Ignition Facility. Source: S.H. Glenzer et al. Phys. Rev. Lett. 106, 085004

    6. However, this doesn’t heat all of the fuel to the requisite high temperature. The fuel is arranged in concentric layers, and the heat and pressure cause the 20 µg of deuterium-tritium mix in the central portion to fuse first. This sharply increases the temperature and launches a thermonuclear “burn wave” into the rest of the fuel, which triggers additional reactions. The wisdom for this technique arises from the fact that fusing a deuterium nucleus with a tritium nucleus requires a temperature corresponding to 5-10 keV of energy (a few million kelvin) whereas the yield is 17,600 keV. So supplying the energy for just one fusion reaction could yield enough energy for hundreds more. Its downside in the inertial confinement context is that a not-insignificant fraction of the energy needs to be diverted to compressing the nuclei instead of heating them, which reduces the gain.
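The arithmetic behind “hundreds more” in point 6, as a quick back-of-the-envelope check (figures from the text above):

```python
# Energy supplied to ignite one D-T fusion reaction versus the energy
# that reaction returns.
IGNITION_COST_KEV = 10.0      # upper end of the ~5-10 keV per-reaction cost
FUSION_YIELD_KEV = 17_600.0   # energy released per D-T fusion reaction

# Each ignited reaction releases enough energy, in principle, to bring
# many more nuclei pairs up to ignition temperature:
reactions_payable = FUSION_YIELD_KEV / IGNITION_COST_KEV
print(int(reactions_payable))  # 1760 -> "hundreds more", with room for losses
```

Even at the cheaper 5 keV end the multiplier only grows, which is why a self-sustaining burn wave is plausible despite the many loss channels in a real capsule.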

    7. As the NIF announcement turns the world’s attention to the prospect of nuclear fusion, ITER’s prospects are also under scrutiny. According to [Shishir Deshpande of IPR Gandhinagar], who is also former project director of ITER-India, the facility is 75% complete and “key components under manufacturing” will arrive in the “next three to five years”. It has already overrun several cost estimates and deadlines (India is one of its funding countries) – but [according to another scientist’s] estimate, it has made “great progress” and will “deliver”. Extending the “current experiments” – referring to the NIF’s tests – “is not a direct path to a power station, unlike ITER, which is far more advanced in being an integrated power station. Many engineering issues which ITER is built to address are not even topics yet for laser fusion, such as survival of key components under high-intensity radiation environments.”

  • Skyward light, wayward light

    This is welcome news:

    … even if it’s curious that three of the four officially stated reasons for designating this ‘dark sky reserve’ aren’t directly related to the telescopes, and that telescopes had to come up in the area for the local government, the Indian Institute of Astrophysics (IIA) and whoever else to acknowledge that it deserved to have dark skies. I believe that ‘doing’ astronomy with telescopes shouldn’t be a prerequisite to “promoting livelihoods through … astro-tourism” and “spreading awareness and education about astronomy”. And that’s why I wonder if there are other sites in the country that are favourable to a popular science-driven activity, where the locals can be taught to guide tourists to pleasurably perform that activity, but where this hasn’t happened because scientists aren’t there doing it themselves.

    But frankly, the government should declare as much of the country a dark-sky reserve as possible*, in consultation with local stakeholders – or at least a new kind of ‘reserve’ where, say, light, noise and other neglected forms of pollution are limited to a greater degree than is common by law and to encourage sustainability along these axes as well. This is in opposition to dealing with these irritants in piecemeal or ad hoc fashion, where each type of pollution is addressed in isolation (even when they have common sources, like factories), and – to a lesser extent – not just because scientists require certain conditions for their work.

    (* I’m obviously cynical about instituting large-scale behavioural change that’d preclude the need for such reserves.)

    Case in point: the new Hanle dark-sky reserve hasn’t been designated as such under law but through an MoU between the UT of Ladakh, the IIA and the Ladakh Autonomous Hill Development Council, with a commitment to fulfilling requirements defined by the International Dark Sky Association, based in the US. Fortunately – but sadly, considering we had to wait for an extraneous prompt – one of the association’s requirements is “current/planned legislation to protect the area”.

    Such ‘reserves’ also don’t have to be set up at the expense of development, principally because many of the ways to reduce light (and noise) pollution can do so without coming in the way of development or of our right as citizens to enjoy public spaces in all the ways in which we’re entitled. (I’m asking for ‘less’ knowing the Indian government’s well-known reluctance to take radical steps to protect natural resources, but we’re also at a point, from the PoV of the climate crisis, where every gain is good gain. I’m open to being persuaded otherwise, however.)

    One of the simplest ways is in fact to have no public lighting installation cast light upward, into the sky – every fixture should face down. Doing this will subtract the installation’s contribution to light pollution, improve energy-use efficiency by not ‘wasting’ any light thrown upwards, and reduce the power consumed by limiting it to what’s required to illuminate only what needs to be illuminated, together with surfaces that limit the amount of light scattered upward.

    Other similarly simple ways include turning off all lights when you have no need for them (such as when you leave the room), to prefer energy-efficient lighting solutions and to actively limit the use of decorative lighting – but the ‘turn the lamps downward’ bit is both sensible and surprising in its general non-achievement. Hanle of course will be subject to more stringent restrictions, including requiring people to keep the colour temperature under 3,000 kelvin and the light flux of unshielded lamps to 500 lumen. Here’s an example of the difference to be made:

    That’s a (visibly) necessary extremum, in a manner of speaking – to maintain suitable viewing conditions for the ground-based telescopes in the area. On the other hand, India’s (and the UAE’s for that matter, since I was there recently) industrialisation and urbanisation are creating an unnecessary extremum, giving seemingly trivial concerns like light pollution the slip. A 2016 study found that less than 10% of India is exposed to “very high nighttime light intensities with no dark adaption for human eyes” – but also that around 80% of the population is exposed to anything from light levels “from 1 to 8% above the natural light” to a complete lack of access to “true night because it is masked by an artificial twilight”.

    The tragedy, if we can call it that, is exacerbated when even trivial fixes aren’t implemented properly. Or is it when an industrialist might look at this chart and think, “We’ve still got a lot of white to go”?

  • US fusion bhashan

    At 8.30 pm on December 13, US Department of Energy officials announced that the federally funded National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory (LLNL) in California had conducted a fusion test in which the energy yield was greater than that supplied to start it.

    All of them seemed eager to say that this is what US leadership looks like, that this is proof of the US gunning for what was once thought impossible, that the US is where the world’s most brilliant minds work, that according to Joe Biden the US is the land of possibility – and it was hilarious.

    The announcement pertains to a scientific demonstration that the NIF’s mode of achieving controlled fusion, called inertial confinement, works. After this come more tests and modelling, manufacturing key components to very high quality standards, and scaling up from the bare essentials to bigger tests, leading up to designing a commercial facility, then building and finally operating it – assuming success at every step. LLNL director Kim Budil said at the presser that commercial inertial confinement could be three or four decades away.

    All this said, the test is actually far removed from “zero-carbon abundant fusion energy powering our society”, in the words of energy secretary Jennifer Granholm. My forthcoming article for The Hindu (Thursday) explains why. One important requirement is the energy gain: the ratio of the output energy to the input. The new test achieved a gain of around 1.5 – but only relative to the energy that started the fusion reactions, not the energy that the lasers consumed to produce and deliver it.

    More importantly, for inertial confinement fusion to be practicable, it needs to achieve a gain in excess of at least 100. If scientists at NIF find that they’re unable to go past, say, a gain of 50, that will be the end of the road for commercial ICF using the NIF’s setup. So there’s a long, long way to go even before researchers conduct a test that’s a faithful proof of concept for practical nuclear fusion power.
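    To make the gain arithmetic concrete, here’s a minimal sketch using the widely reported figures from the December 2022 NIF shot – about 2.05 MJ of laser energy delivered to the target and about 3.15 MJ of fusion yield – with the wall-plug figure (roughly 300 MJ drawn from the grid to fire the lasers) being an approximate, commonly cited estimate:

    ```python
    # Gain relative to the laser energy delivered to the target (the figure
    # announced) versus gain relative to the energy the laser system consumed.
    # The wall-plug value is an approximate, commonly cited estimate.
    delivered_mj = 2.05   # laser energy delivered to the fuel capsule
    yield_mj = 3.15       # fusion energy released
    wall_plug_mj = 300.0  # approximate grid energy consumed to fire the lasers

    target_gain = yield_mj / delivered_mj      # ~1.5: the announced gain
    wall_plug_gain = yield_mj / wall_plug_mj   # ~0.01: far below break-even

    print(f"target gain: {target_gain:.2f}")
    print(f"wall-plug gain: {wall_plug_gain:.3f}")
    ```

    The two ratios differ by a factor of more than a hundred, which is why a “gain of 1.5” is a scientific milestone rather than a step toward the grid.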

    But even more importantly, it’s spellbinding how the US government will stake its claims to being the country that achieves the impossible, etc. but will make all sorts of excuses to disguise its failure of leadership to mobilise $100 billion a year from economically developed countries for poorer countries to use to weather the climate crisis; to disguise its attempts to undermine, modify or defy commitments made under the Paris Agreement; and to evade, stall and deny efforts to set up a ‘loss and damage’ fund at COP27.

    It’s a shame that the Conferences of the Parties to the UN FCCC have been spending bigger chunks of their agenda of late just to push back on the recalcitrance of the US et al. Yet here we are, with government officials blaring their trumpets for a proof of a proof of concept with several caveats (as I spell out in The Hindu). Granholm even called the result “one of the most impressive scientific feats of the 21st century”, to applause from the audience, and said Joe Biden called it a BFD.

    Of course it is. It’s an unexpectedly big umbrella that the US has got to unfurl over its climate action obligations.

  • Science’s humankind shield

    We need to reconsider where the notion that “science benefits all humans” comes from and whether it is really beneficial.

    I was prompted to this after coming upon a short article in Sky & Telescope about the Holmdel Horn antenna in New Jersey being threatened by a local redevelopment plan. In the 1960s, Arno Penzias and Robert Wilson used the Holmdel Horn to record the first observational evidence of the cosmic microwave background, which is radiation left over from – and therefore favourable evidence for – the Big Bang event. In a manner of speaking, then, the Holmdel Horn is an important part of the story of humans’ awareness of their place in the universe.

    The US government designated the site of the antenna a ‘National Historic Landmark’ in 1989. On November 22, 2022, the Holmdel Township Committee nonetheless petitioned the planning board to consider redeveloping the locality where the antenna is located. According to the Sky & Telescope article, “If the town permits development of the site, most likely to build high-end residences, the Horn could be removed or even destroyed. The fact that it is a National Historic Landmark does not protect it. The horn is on private property and receives no Federal funds for its upkeep.” Some people have responded to the threat by suggesting that the Holmdel Horn be moved to the sprawling Green Bank Telescope premises in Virginia. This would separate it from the piece of land that can then be put to other use.

    Overall, based on posts on Twitter, the prevailing sentiment appears to be that the Holmdel Horn antenna is a historic site worthy of preservation. One commenter, an amateur astronomer, wrote under the article:

    “The Holmdel Horn Antenna changed humanity’s understanding of our place in the universe. The antenna belongs to all of humanity. The owners of the property, Holmdel Township, and Monmouth County have a historic responsibility to preserve the antenna so future generations can see and appreciate it.”

    (I think the commenter meant “humankind” instead of “humanity”.)

    The history of astronomy involved, and involves, thousands of antennae and observatories around the world. Even with an arbitrarily high threshold to define the ‘most significant’ discoveries, there are likely to be hundreds (if not more) of facilities that made them and could thus be deemed to be worthy of preservation. But should we really preserve all of them?

    Among all scientists, astronomers are perhaps the most keenly aware of the importance of land to the scientific enterprise. Land is a finite resource that is crucial to most, if not all, realms of the human enterprise. Astronomers experienced this firsthand when the Indigenous peoples of Hawai’i protested the construction of the Thirty Meter Telescope on Mauna Kea, leading to a long-overdue reckoning with the legacy of telescopes on this and other landmarks that are culturally significant to the locals, but whose access to these sites has come to be mediated by the needs of astronomers. In 2020, Nithyanand Rao wrote an informative article about how “astronomy and colonialism have a shared history”, with land and access to clear skies as the resources at its heart.


    One argument that astronomers have used in favour of building or retaining these controversial telescopes is that the fruits of science “belong to all of humankind”, including to the locals. This claim is dubious in at least two ways.

    First, are the fruits really accessible to everyone? This doesn’t just mean that the papers astronomers publish based on work using these telescopes should be openly and freely available. It also requires that the topics astronomers work on be based on the consensus of all stakeholders, not just the astronomers. Also, who does and doesn’t get observation time on the telescope? What does the local government expect the telescope to achieve? What are the sorts of studies the telescope can and can’t support? Are the ground facilities equally accessible to everyone? There are more questions to ask, but I think you get the idea: claiming the fruits of scientific labour – at least astronomical labour – are available to everyone is disingenuous simply because there are many axes of exclusion in the instrument’s construction and operation.

    Second, who wants a telescope? More specifically, what are the terms on which it might be fair for a small group of people to decide what “all of humankind” wants? Sure, what I’m proposing sounds comical – a global consensus mechanism just to make a seemingly harmless statement like “science benefits everyone” – but the converse seems equally comical: to presume benefits for everyone when in fact they really accrue to a small group and to rely on self-fulfilling prophecies to stake claims to favourable outcomes.

    Given enough time and funds, any reasonably designed international enterprise, like housing development or climate financing, is likely to benefit humankind. Scientists have advanced similar arguments when advocating for building particle supercolliders: that the extant Large Hadron Collider (LHC) in Europe has led to advances in medical diagnostics, distributed computing and materials science, apart from confirming the existence of the Higgs boson. All these advances are secondary goals, at best, and justify neither the LHC nor its significant construction and operational costs. Also, who’s to say we wouldn’t have made these advances by following any other trajectory?

    Scientists, or even just the limited group of astronomers, often advance the idea that their work is for everyone’s good – elevating it to a universally desirable thing, propping it up like a shield in the face of questions about whether we really need an expensive new experiment – whereas on the ground its profits are disseminated along crisscrossing gradients, limited by borders.

    I’m inclined to harbour a similar sentiment towards the Holmdel Horn antenna in the US: it doesn’t belong to all of humanity, and if you (astronomers in the US, e.g.) wish to preserve it, don’t do it in my name. I’m indifferent to the fate of the Horn because I recognise that what we do and don’t seek to preserve is influenced by its significance as an instrument of science (in this case) as much as by ideas of national prestige and self-perception – and this is a project in which I have never had any part. A plaque installed on the Horn reads: “This site possesses national significance in commemorating the history of the United States of America.”

    I also recognise the value of land and, thus, must acknowledge the significance of my ignorance of the history of the territory that the Horn currently occupies as well as the importance of reclaiming it for newer use. (I am, however, opposed in principle to the Horn being threatened by the prospect of “high-end residences” rather than affordable housing for more people.) Obviously others – most others, even – might feel differently, but I’m curious if a) scientists anywhere, other than astronomers, have ever systematically dealt with push-back along this line, and b) the other ways in which they defend their work at large when they can’t or won’t use the “benefits everyone” tack.

  • Links (1)