Science

  • Some facts are bigger than numbers – a story

    Some facts are just boring, like 1 + 1 = 2. You already knew them before they were presented as such, and now that you do, it’s hard to know what to do with them. Some facts are clearly important, even if you don’t know how you can use them, like the spark plug fires after there’s fuel in the chamber. These two kinds of facts may seem far apart but you also know on some level that by repeatedly applying the first kind of fact in different combinations, to different materials in different circumstances, you get the second (and it’s fun to make this journey).

    Then there are some other facts that, while seemingly simple, provoke in your mind profound realisations – not something new as much as a way to understand something deeply, so well, that it’s easy for you to believe that that single neural pathway among the multitude in your head has forever changed. It’s an epiphany.

    I came across such a fact this morning when reading an article about a star that may have gone supernova. The author packs the fact into one throwaway sentence.

    Roughly every second, one of the observable Universe’s stars dies in a fiery explosion.

    The observable universe is 90-something billion lightyears wide. The universe was born only 13.8 billion years ago but it has been expanding since, pushed faster and faster apart by dark energy. This is a vast, vast space – too vast for the human mind to comprehend. I’m not just saying that. Scientists must regularly come up against numbers like 8E50 (8 followed by 50 zeroes), but they don’t have to be concerned about comprehending the full magnitude of those numbers. They don’t need to know how big it is in some dimension. They have the tools – formulae, laws, equations, etc. – to tame those numbers into submission, to beat them into discoveries and predictions that can be put to human use. (Then again, they do need to deal with monstrous moonshine.)

    But for the rest of us, the untameability can be terrifying. How big is a number like 8E50? In kilograms, it’s about 100 times lower than the mass of the observable universe. It’s the estimated volume of the galaxy NGC 1705 in cubic metres. It’s approximately the lifespan of a black hole with the mass of the Sun. You know these facts, yet you don’t know them. They’re true but they’re also very, very big, so big that they’re well past the point of true comprehension, into the realm of the I’d-rather-not-know. Yet the sentence above affords a way to bring these numbers back.

    The author writes that every second or so, a star goes supernova. According to one estimate, 0.1% of stars have enough mass to eventually become a black hole. The observable universe has 200 billion trillion stars. This means there are 2E20 stars in the universe that could become a black hole, if they’re not already. Given that the universe has lived around 38% of its life, and assuming a uniform rate of black hole formation (a big assumption, but it should suffice to illustrate my point), the universe should be visibly darkening by now – photons of light shouldn’t have to travel far before encountering a black hole.
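
    If you’d like to check the arithmetic, here’s a minimal sketch (the roughly five-minute reading time, used for the estimate at the end of this post, is my assumption for illustration):

    ```python
    # Back-of-the-envelope check of the figures in this post
    stars_in_observable_universe = 2e23              # "200 billion trillion" stars
    fraction_that_could_become_black_holes = 0.001   # the 0.1% estimate cited above

    potential_black_holes = stars_in_observable_universe * fraction_that_could_become_black_holes
    print(f"{potential_black_holes:.0e} potential black holes")    # 2e+20

    supernovae_per_second = 1                        # "roughly every second"
    reading_time_in_seconds = 5 * 60                 # assumed reading time
    print(supernovae_per_second * reading_time_in_seconds, "explosions while you read this")   # ~300
    ```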

    But it isn’t. The simple reason is that that’s how big the universe is. We learn about stars, other planets, black holes, nebulae, galaxies and so forth. There are lots and lots of them, sure, but you know what there is the most of? The things we often discuss the least: the interstellar medium, the space between stars, and the intergalactic medium, the space between galaxies. Places where there isn’t anything big enough, ironically, to be able to catch the popular imagination. One calculation, based on three assumptions, suggests matter occupies an incomprehensibly low fraction of the observable universe (1. 85% of this is supposed to be dark matter; 2. please don’t assume atoms are also mostly empty).

    In numbers, the bigness of all this transcends comprehension – but knowing that billions upon billions of black holes still only trap a tiny amount of the light going around can be… sobering. And enlivening. Why, in the time you’ve taken to read this article, 300 more black holes will have formed. Pfft.

  • An Indian paper retracted for ‘legal reasons’

    The Editor-in-Chief has retracted this article because it was published in error before the peer review process was completed. The content of this article has been removed for legal reasons. The authors have been offered to submit a revised manuscript for further peer review. All authors agree with this retraction.

    This is the notice accompanying the retraction of a paper published in Springer Nature’s Journal of the Indian Society of Remote Sensing. The editor in chief is Shailesh Nayak, the director of the National Institute of Advanced Studies on the IISc campus in Bengaluru. As Retraction Watch reported, the paper – about “suspicious activities” on the Indo-China border in 2020 – was being retracted because, legal reasons aside, it was replete with grammatical errors. The excerpt on the Retraction Watch page also suggests it reads less like a research paper and more like an internal submission; the paper’s corresponding author, one Aditya Kakde of the University of Petroleum and Energy Studies, a private institute in Dehradun, didn’t comment on the retraction, and isn’t contesting it either.

    The comment by Nayak, the editor in chief, is interesting: he says the badly-written paper had been published before it was peer-reviewed. First, how is this possible?

    Second, I’m personally convinced Nayak is trying to protect his journal’s reputation by implying that the mistake was processual in nature, and that their functional peer-review system would have caught the paper’s quality problem. But this is also an ex post facto explanation that makes Nayak’s claim hard to believe, considering the process error was a big one.

    Third, if you think you need an exercise as formally defined and intensive as a peer-review to catch such low-quality papers, I doubt your credentials as an editor.

    Fourth, and to continue from my previous post, when editors publish bad papers like this, instead of helping authors correct their mistakes and thus avoid a retraction later for bad language, they’re practically setting up the authors to incur a retraction against their names.

    Finally, why – in Nayak’s telling – was the paper retracted for “legal reasons”? It seems like a ridiculous, but also devious, thing to say. Considering the paper’s authors, including Kakde, haven’t been accused of other issues, I assume the paper’s contents are legitimate: that the authors have developed an image-analysis tool that purports to eliminate one step of some military surveillance procedure (although the images in the paper look quite simplistic). At the same time, one of the hallmarks of the current Indian government is its, and its supporters’, tendency to threaten their detractors with vexatious police and court cases, especially under draconian anti-terrorism and sedition provisions in Indian law.

    So Nayak’s allusion to “legal reasons” can’t easily be dismissed as an attempt to be ambiguous and beyond reproach at the same time – although that’s just as possible (note: he’s a “distinguished scientist” in the Ministry of Earth Sciences).

  • Science shouldn’t animate the need for social welfare

    This is an interesting discovery: a study, reported by the New York Times, found that monthly cash payments to poor mothers in the US were associated with changes in their infants’ brain activity.

    First, it’s also a bad discovery (note: there’s a difference between right/wrong and good/bad). It is useful to found specific interventions on scientific findings – such as that providing pregnant women with iron supplements in a certain window of the pregnancy could reduce the risk of anaemia by X%. However, that the state should provide iron supplements to pregnant women belonging to certain socio-economic groups across the country shouldn’t be founded on scientific findings. Such welfarist schemes should be based on the implicit virtues of social welfare itself. In the case of the new study: the US government should continue with cash payments for poor mothers irrespective of their babies’ learning outcomes. The programme can’t stop if any of their babies are slow learners.

    Second, I think the deeper problem in this example lies with the context in which the study’s findings could be useful. Scientists and economists have the liberty to study what they will, as well as report what they find (see third point). But consider a scenario in which lawmakers are presented with two policies, both rooted in the same ideologies and both presenting equally workable solutions to a persistent societal issue. Only one, however, has the results of a scientific study to back up its ability to achieve its outcomes (let’s call this ‘Policy A’). Which one will the lawmakers pick to fund?

    Note here that this isn’t a straightforward negotiation between the lawmakers’ collective sensibilities and the quality of the study. The decision will also be influenced by the framework of accountability and justification within which the lawmakers operate. For example, those in small, progressive nations like Finland or New Zealand, where the general scientific literacy is high enough to recognise the ills of scientism, may have the liberty to set the study aside and then decide – but those in India, a large and nationalist nation with generally low scientific literacy, are likelier than not to construe the very availability of scientific backing, of any quality, to mean Policy A is better.

    This is how studies like the one above could become a problem: by establishing a pseudo-privilege for policies that have ‘scientific findings’ to back up their promises. It also creates a rationalisation of the Republican Party’s view that by handing out “unconditional aid”, the state will discourage the recipients from working. While the Republicans’ contention is speculative in principle, in policy and, just to be comprehensive, in science, scientific studies that find the opposite play nicely into their hands – even in as straightforward a case as that of poor mothers. As the New York Times article itself writes:

    Another researcher, Charles A. Nelson III of Harvard, reacted more cautiously, noting the full effect of the payments — $333 a month — would not be clear until the children took cognitive tests. While the brain patterns documented in the study are often associated with higher cognitive skills, he said, that is not always the case.

    “It’s potentially a groundbreaking study,” said Dr. Nelson, who served as a consultant to the study. “If I was a policymaker, I’d pay attention to this, but it would be premature of me to pass a bill that gives every family $300 a month.”

    A temporary federal program of near-universal children’s subsidies — up to $300 a month per child through an expanded child tax credit — expired this month after Mr. Biden failed to unite Democrats behind a large social policy bill that would have extended it. Most Republicans oppose the monthly grants, citing the cost and warning that unconditional aid, which they describe as welfare, discourages parents from working.

    Sharing some of those concerns, Senator Joe Manchin III, Democrat of West Virginia, effectively blocked the Biden plan, though he has suggested that he might support payments limited to families of modest means and those with jobs. The payments in the research project, called Baby’s First Years, were provided regardless of whether the parents worked.

    Third, and in continuation, it’s ridiculous to attach the approval of policies whose principles are clear and sound to the quality of data originating from scientific studies, which in turn depends on the quality of the theoretical and experimental instruments scientists have at their disposal (“We hypothesized that infants in the high-cash gift group would have greater EEG power in the mid- to high-frequency bands and reduced power in a low-frequency band compared with infants in the low-cash gift group.”). And let’s not forget that it also depends on scientists coming along in time to ask the right questions.

    Fourth, do scientists and economists really have the liberty to study and report what they will? There are two ways to slice this. First: clarify the limited context in which this question is worth considering at all – not in almost all cases, and only when a study uncovers the scientific basis for something that isn’t well-served by such a basis. This principle is recursive: it should preclude the need for a scientific study of whether support for certain policies has been set back by the presence or absence of scientific studies. Second: ask where the demand for these studies originates. Clearly someone somewhere thought, “Do we know the policy’s effects in the population?” Science can provide quick answers in some cases but not in others, and in the latter, it should be prevented from creating the impression that the absence of evidence is the evidence of absence.

    Who bears that responsibility? I believe it has fallen on the shoulders of politicians, social scientists, science communicators and exponents of the humanities alone for too long; scientists also need to exercise the corresponding restraint, and refrain from conducting studies in which they don’t specify the precise context (and not just the scientific one) in which their findings are valid, if at all. In the current case, the NYT called the study’s findings “modest”: the “researchers likened them in statistical magnitude to moving to the 75th position in a line of 100 from the 81st”. Modest results are also results, sure, but as COVID-19 research has repeatedly reminded us, don’t conduct poor studies – and by extension don’t conduct studies of a social-science concept in a scientific way and expect them to be useful.

  • Getting ahead of theory, experiment, ourselves

    Science journalist Laura Spinney wrote an article in The Guardian on January 9, 2022, entitled ‘Are we witnessing the dawn of post-theory science?’. This excerpt from the article captures its points well, I thought:

    Or take protein structures. A protein’s function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein’s action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn’t the lack of a theory that will stop drug designers using it. “What AlphaFold does is also discovery,” she says, “and it will only improve our understanding of life and therapeutics.”

    Essentially, the article is concerned with machine-learning’s ability to parse large amounts of data, find patterns in them and use them to generate theories – taking over an important realm of human endeavour. In keeping with tradition, it doesn’t answer the question in its headline with a definitive ‘yes’ but with a hard ‘maybe’ to a soft ‘no’. Spinney herself ends by quoting Picasso: “Computers are useless. They can only give you answers” – although the para right before belies the painter’s confidence with a prayer that the human way to think about theories is still meaningful and useful:

    The final objection to post-theory science is that there is likely to be useful old-style theory – that is, generalisations extracted from discrete examples – that remains to be discovered and only humans can do that because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.

    I’m personally cynical about such claims. If we think we are going to be obsolete, there must be a part of the picture we’re missing.

    There was an idea partly similar to this ‘post-theory hypothesis’ a few years ago, and pointing the other way. In 2013, philosopher Richard Dawid wrote a 190-page essay attempting to make the case that string theory shouldn’t be held back by the lack of experimental evidence, i.e. that it was post-empirical. Of course, Spinney is writing about machines taking over the responsibility of, but not precluding the need for, theorising – whereas Dawid and others have argued that string theory doesn’t need experimental data to stay true.

    The idea of falsifiability is important here. If you can design an experiment that would reveal a flaw in a theory, should one exist, the theory is said to be falsifiable. A theory can be flawless within its domain but still falsifiable: Newton’s theory of gravity, for example, is complete and useful in a limited context, yet it can’t explain the precession of the perihelion of Mercury’s orbit. An example of an unfalsifiable theory is the one underlying astrology. In science, falsifiable theories are said to be better than unfalsifiable ones.

    I don’t know what impact Dawid’s book-length effort had, although others before and after him have argued that scientific theories need no longer be falsifiable in order to be legitimate – Sean Carroll, for one. While I’m not familiar enough with criticisms of the philosophy of falsifiability, I found a better case for trusting the validity of string theory sans experimental evidence in a June 2017 preprint paper by Eva Silverstein:

    It is sometimes said that theory has strayed too far from experiment/observation. Historically, there are classic cases with long time delays between theory and experiment – Maxwell’s and Einstein’s waves being prime examples, at 25 and 100 years respectively. These are also good examples of how theory is constrained by serious mathematical and thought-experimental consistency conditions.

    Of course electromagnetism and general relativity are not representative of most theoretical ideas, but the point remains valid. When it comes to the vast theory space being explored now, most testable ideas will be constrained or falsified. Even there I believe there is substantial scientific value to this: we learn something significant by ruling out a valid theoretical possibility, as long as it is internally consistent and interesting. We also learn important lessons in excluding potential alternative theories based on theoretical consistency criteria.

    This said, Dawid’s book, entitled String Theory and the Scientific Method, was perhaps the most popular pronouncement of his views in recent years (at least in terms of coverage in the non-technical press), even though by then he’d been propounding them for nine years and his supporters included a bevy of influential physicists. Very simply put, an important part of Dawid’s argument was that string theory, as a theory, has certain characteristics that make it the only possible theory for all the epistemic niches that it fills – so as long as we expect all those niches to be filled by a single theory, string theory may be true by virtue of being the sole possible option.

    It’s not hard to see the holes in this line of reasoning, but again, I’ve considerably simplified his idea. This said, physicist Peter Woit has been (from what little I’ve seen) the most vocal critic of string theorists’ appeals to ‘post-empirical realism’, and has often directed his ire against the uniqueness hypothesis – not least because accepting it would endanger, for the sake of just one theory’s survival, the foundation upon which almost every other valid scientific theory stands. You must admit this is a powerful argument, and to my mind a more persuasive one than Silverstein’s.

    In the words of another physicist, Carlo Rovelli, from September 2016:

    String theory is a proof of the dangers of relying excessively on non-empirical arguments. It raised great expectations thirty years ago, when it promised to [solve a bunch of difficult problems in physics]. Nothing of this has come true. String theorists, instead, have [made a bunch of other predictions to explain why it couldn’t solve what it set out to solve]. All this was false.

    From a Popperian point of view, these failures do not falsify the theory, because the theory is so flexible that it can be adjusted to escape failed predictions. But from a Bayesian point of view, each of these failures decreases the credibility in the theory, because a positive result would have increased it. The recent failure of the prediction of supersymmetric particles at LHC is the most fragrant example. By Bayesian standards, it lowers the degree of belief in string theory dramatically. This is an empirical argument. Still, Joe Polchinski, prominent string theorist, writes that he evaluates the probability of string theory to be correct at 98.5% (!).

    Scientists that devoted their life to a theory have difficulty to let it go, hanging on non-empirical arguments to save their beliefs, in the face of empirical results that Bayes confirmation theory counts as negative. This is human. A philosophy that takes this as an exemplar scientific attitude is a bad philosophy of science.
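
    As an aside, the Bayesian point Rovelli makes can be put in concrete terms with a toy calculation – mine, with made-up numbers, not anything from his paper. A failed prediction doesn’t falsify a sufficiently flexible theory, but it should still lower our credence in it:

    ```python
    def update_after_failed_prediction(prior, p_fail_if_true, p_fail_if_false=1.0):
        """Bayes' rule: posterior credence in a theory after one of its predictions fails."""
        evidence = p_fail_if_true * prior + p_fail_if_false * (1 - prior)
        return p_fail_if_true * prior / evidence

    credence = 0.5        # starting credence in the theory (made up)
    for _ in range(3):    # three failed predictions in a row
        # Suppose that, were the theory true, this prediction would have failed only 40% of
        # the time, and that, were it false, the prediction was bound to fail.
        credence = update_after_failed_prediction(credence, p_fail_if_true=0.4)
        print(round(credence, 3))   # 0.286, then 0.138, then 0.06 – each failure chips away
    ```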

  • On science, religion, Brahmins and a book

    I’m partway through Renny Thomas’s new book, Science and Religion in India: Beyond Disenchantment. Its description on the Routledge page reads:

    This book provides an in-depth ethnographic study of science and religion in the context of South Asia, giving voice to Indian scientists and shedding valuable light on their engagement with religion. Drawing on biographical, autobiographical, historical, and ethnographic material, the volume focuses on scientists’ religious life and practices, and the variety of ways in which they express them. Renny Thomas challenges the idea that science and religion in India are naturally connected and argues that the discussion has to go beyond binary models of ‘conflict’ and ‘complementarity’. By complicating the understanding of science and religion in India, the book engages with new ways of looking at these categories.

    To be fair to Renny as well as to prospective readers, I’m hardly familiar with scholarship in this area of study and in no position to be able to confidently critique the book’s arguments. I’m reading it to learn. With this caveat out of the way…

    I’ve been somewhat familiar with Renny’s work, and my expectation that his new book would be informative and insightful has been more than met. I like two things in particular, based on the approximately 40% I’ve read so far (and not necessarily from the beginning). First, Science and Religion quotes generously from the scientists Renny spoke to for insights. A very wise man told me recently that in most cases it’s possible to get the gist of (non-fiction) books written by research scholars about their areas of work just by reading the introductory chapter. I think this book may be the exception that proves the rule for me. On occasion Renny also quotes from books by other scientists and scholars to make his point – which I mention to imply that for readers like me, who are interested in but haven’t had the chance to formally study these topics, Science and Religion can be a sort of introductory text as well.

    For example, in one place, Renny quotes some 150 words from Raja Ramanna’s autobiography, where the latter – a distinguished physicist and one of the more prominent endorsers of the famous 1981 ‘statement on scientific temper’ – recalls in spirited fashion his visit to Gangotri. The passage reminded me of an article by American historian of science Daniel Sarewitz published many years ago, in which he described his experience of walking through the Angkor Wat temple complex in Cambodia. I like to credit Sarewitz’s non-academic articles for getting me interested in the sociology of science, especially critiques of science as a “secularising medium”, to use Renny’s words, but I have also been guilty of having entered this space of thought and writing through accounts of spiritual experiences written by scientists from countries other than India. But now, thanks to Science and Religion, I have the beginnings of a resolution.

    Second, the book’s language is extremely readable: undergraduate students who are enthusiastic about science should be able to read it for pleasure (and I hope students of science and engineering do). I myself was interested in reading it because I’ve wanted, and still want, to understand what goes on in the minds of people like ISRO chairman K. Sivan when they insist on visiting Tirupati before every major rocket launch. And Renny clarifies his awareness of these basic curiosities early in the book:

    … scientists continue to be the ‘special’ folk in India. It is this image of ‘special’ folk and science’s alleged relationship with ‘objectivity’ which makes people uneasy when scientists go to temple, engage in prayer, and openly declare their allegiance to religious beliefs. The dominance and power of science and its status as a superior epistemology is part of the popular imagination. The continuing media discussion on ISRO (Indian Space Research Organisation) scientists when they offer prayer before any mission is an example.

    Renny also clarifies the religious and caste composition of his interlocutors at the outset as well as dedicates a chapter to discussing the ways in which caste and religious identities present themselves in laboratory settings, and the ways in which they’re acknowledged and dismissed – but mostly dismissed. An awareness of caste and religion is also important to understand the Sivan question, according to Science and Religion. Nearly midway through the book, Renny discusses a “strategic adjustment” among scientists that allows them to practice science and believe in gods “without revealing the apparent contradictions between the two”. Here, one scientist identifies one of the origins of religious belief in an individual to be their “cultural upbringing”; but later in the book, in conversations with Brahmin scientists (and partly in the context of an implicit belief that the practice of science is vouchsafed for Brahmins in India), Renny reveals that they don’t distinguish between cultural and religious practices. For example, scientists who claim to be staunch atheists are also strict vegetarians, don the ‘holy thread’ and, most tellingly for me, insist on getting their sons and daughters married off to people belonging to the same caste.

    They argued that they visited temples and pilgrimage centres not for worship but out of an architectural and aesthetic interest, to marvel at the architectural beauty. As Indians, they are proud of these historical places and pilgrimage centres. They happily invite their guests from other countries to these places with a sense of pride and historicity. Some of the atheist scientists I spoke to informed me that they would offer puja and seek darshan while visiting the temples and historically relevant pilgrimage places, especially when they go with their family; “to make them happy.” They argued that they wouldn’t question the religious beliefs and practices of others and professed that it was a personal choice to be religious or non-religious. They also felt that religion and belief in God provided psychological succor to believers in their hardships and one should not oppose them. Many of the atheist scientists think that festivals such as Diwali or Ayudha Puja are cultural events.

    In their worldview, the distinction between religion and culture has dissolved – which clearly emphasises the importance of considering the placedness of science just as much as we consider the placedness of religion. By way of example, Science and Religion finds both religion and science at work in laboratories, but en route it also discovers that to do science in certain parts of India – especially South India, where many of the scientists in the book are located – is to do science in a particular milieu distorted by caste: here, the “lifeworld” is to Brahmins as water is to fish. Perhaps this is how Sivan thinks too, although he is likely performing the subsequent rituals more passively, deliberately and in self-interest – assuming he derives his sense of social standing, and his claim to social support, from the wider community of fellow Brahmins: we must pray and make some offerings to god because that’s how we always did it growing up.

    At least, these are my preliminary thoughts. I’m looking forward to finishing Science and Religion this month (I’m a slow reader) and looking forward to learning more in the process.

  • On anticipation and the history of science

    In mid-2012, shortly after physicists working with the Large Hadron Collider (LHC) in Europe had announced the discovery of a particle that looked a lot like the Higgs boson, there was some clamour in India over news reports not paying enough attention or homage to the work of Satyendra Nath Bose. Bose and Albert Einstein together developed Bose-Einstein statistics, a framework of rules and principles that describe how fundamental particles called bosons behave. (Paul A.M. Dirac named these particles in Bose’s honour.) The director-general of CERN, the institute that hosts the LHC, had visited India shortly after the announcement and said in a speech in Kolkata that in honour of Bose, he and other physicists had decided to capitalise the ‘b’ in ‘boson’.
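
    For reference – this is textbook physics rather than anything from the news reports of the time – the central result of Bose-Einstein statistics is the average number of bosons occupying a state of energy ε in a system at temperature T:

    ```latex
    \langle n(\varepsilon) \rangle = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1}
    ```

    Here μ is the chemical potential and k_B is Boltzmann’s constant; the −1 in the denominator, as opposed to the +1 for fermions, is what lets any number of bosons pile into the same state.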

    It was a petty victory of a petty demand, but few realised that it was also misguided. Bose made the first known (or at least published) attempts to understand the particles that would come to be called bosons – but neither he nor Einstein anticipated the existence of the Higgs boson. There have also been some arguments (justified, I think) that Bose wasn’t awarded a Nobel Prize for his ideas because he didn’t make testable predictions; Einstein received the 1921 Nobel Prize for physics in part for explaining the photoelectric effect. The point is that it was unreasonable to expect Bose’s work to be highlighted, much less attributed, as some had demanded at the time, every time we find a new boson particle.

    All such demands did was signal an expectation that the reflection of every important contribution by an Indian scientist ought to be found in every major discovery or invention. Such calls detrimentally affect the public perception of science because they are essentially contextless.

    Let’s imagine that the discovery of the Higgs boson was the result of a series of successes, depicted thus:

    O—o—o—o—o—O—O—o—o—O—o—o—o—O

    An ‘O’ shows a major success and an ‘o’ shows a minor success, where major/minor could mean the relative significance within particle physics communities, the extent to which physicists anticipated it or simply the amount of journal/media coverage it received. In this sequence, Bose’s paper on a certain class of subatomic particles could be the first ‘O’ and the discovery of the Higgs boson the last ‘O’. And looking at this sequence, one could say Bose’s work led to a lot of the work that came after and ultimately led to the Higgs boson. However, doing that would diminish the amount of study, creativity and persistence that went into each subsequent finding – and would also ignore the fact that we have identified only one branch of endeavour, leading from Bose’s work to the Higgs boson, whereas in reality there are hundreds of branches crisscrossing each other at every o, big or small – and then there are countless epiphanies, ideas and flashes, each one less the product of following the scientific method and more of a mysterious combination of science and intuition.

    By reducing the celebration of Bose’s work to just the Higgs boson point on the branch, we lose the opportunity to know and celebrate its importance for all the points in between – especially the points that we still haven’t taken the trouble to understand.

    Recently, a couple of people forwarded to me a video on WhatsApp about an Indian-American electrical engineer named Nasir Ahmed. I learnt when in college (studying engineering) that Ahmed was the co-inventor, along with K. Ramamohan Rao, of the discrete cosine transform, a technique that makes it possible to transmit a given amount of information using fewer bits than the information itself contains. The video introduced Ahmed’s work as the basis for our being able to take video-conferencing for granted; the discrete cosine transform allows audiovisual data to be compressed by two, maybe three orders of magnitude, making its transmission across the internet much less resource-intensive than if it had to be transmitted without compression.
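
    To make the compression idea concrete, here’s a toy sketch of the principle behind the DCT – mine, not anything from Ahmed and Rao’s 1974 paper or from real codecs, which work blockwise in two dimensions and add quantisation and entropy coding on top. For smooth, correlated signals, most of the energy lands in the first few DCT coefficients, so the rest can be discarded with little loss:

    ```python
    import numpy as np

    def dct2(x):
        """Naive, unnormalised DCT-II of a 1-D signal (O(N^2), written for clarity)."""
        N = len(x)
        n = np.arange(N)
        return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k)) for k in range(N)])

    def idct2(X):
        """Inverse of the unnormalised DCT-II above."""
        N = len(X)
        k = np.arange(1, N)
        return np.array([X[0] / N + (2.0 / N) * np.sum(X[1:] * np.cos(np.pi / N * (n + 0.5) * k))
                         for n in range(N)])

    # A smooth, correlated signal – say, one row of pixel brightnesses
    x = np.sin(np.linspace(0, np.pi, 64)) + 0.1 * np.linspace(0, 1, 64)

    X = dct2(x)
    X_kept = np.where(np.arange(64) < 8, X, 0.0)   # keep only 8 of the 64 coefficients
    x_approx = idct2(X_kept)

    # Worst-case error after throwing away 7/8ths of the coefficients: a small
    # fraction of the signal's amplitude (~1)
    print(f"max reconstruction error: {np.max(np.abs(x - x_approx)):.4f}")
    ```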

    However, the video did little to address the immediate aftermath of Ahmed’s and Rao’s paper, the other work by other scientists that built on it, or its use in other settings, and rested on drawing just one connection between two fairly unrelated events (the discrete cosine transform and its derivatives, many of them created in the same decade, heralded signal compression, but they didn’t particularly anticipate today’s forms of communication).

    This flattening of the history of science, and technology as the case may be, may be entertaining but it offers no insights into the processes at work behind these inventions, and certainly doesn’t admit any other achievements before each development. In the video, Ahmed reads out tweets by people reacting to his work as depicted on the show This Is Us. One of them says that it’s because of him, and because of This Is Us, that people are now able to exchange photos and videos of each other around the world, without worrying about distance. But… no; Ahmed himself says in the video, “I couldn’t predict how fast the technology would move” (based on his work).

    Put simply, I find such forms of communication – and with them the way we are prompted to think about science – objectionable because they are content with the ‘what’ and aren’t interested in the ‘when’, ‘why’ or ‘how’. And simply enumerating the ‘what’ is practically non-scientific, more so when a few particularly sensational ‘whats’ are privileged over others in ways that encourage us to ignore the inconvenient details. Other similar recent examples were G.N. Ramachandran, whose work on protein structure, especially Ramachandran plots, has been connected to pharmaceutical companies’ quest for new drugs and vaccines, and Har Gobind Khorana, whose work on synthesising RNA has been connected to mRNA vaccines.

  • A false union in science journalism

    At what point does a journalist become a stenographer? Most people would say it’s when the journalist stops questioning claims and reprints them uncritically, as if they were simply a machine. So at what point does a science journalist become a stenographer? You’ll probably say at the same point – when they become uncritical of claims. I disagree: I believe the gap between being critical and being non-critical is smaller when it comes to science journalism simply because of the nature of its subject.

    The scientific enterprise in itself is an attempt to arrive at the truth by critiquing existing truths in different contexts and by simultaneously subtracting biases. The bulk of what we understand to be science journalism is aligned with this process: science journalists critique the same material that scientists do as well, even when they’re following disputes between groups of scientists, but seldom critique the scientists’ beliefs and methods themselves. This is not a distinction without a difference or even a finer point about labels.

    One might say, “There aren’t many stories in which journalists need to critique scientists and/or their methods” – this would be fair, but I have two issues on this count.

    First, both the language and the narrative are typically deferential towards scientists and their views, and steer clear of examining how a scientist’s opinions may have been shaped by extra-scientific considerations, such as their socio-economic location, or whether their accomplishments were the product of certain unique privileges. Second, at the level of a collection of articles, science journalists who haven’t critiqued science will likelier than not have laid tall, wide bridges between scientists and non-scientists but won’t have called scientists, or the apparatuses of science itself, out on their bullshit.

    One way or another, a science journalism that’s uncritical of science often leads to the impression that the two enterprises share the same purpose: to advance science, whether by bringing supposedly important scientific work to the attention of politicians or by building public support for good scientific work. And this impression is wrong. I don’t think science journalists have an obligation to help science, nor do I think they should.

    As it happens, science journalism is often treated differently from, say, journalism that’s concerned with political or financial matters. I completely understand why. But I don’t think there has been much of an effort to flip this relationship and consider whether the conception and practice of science has been improved by the attention of science journalists the way the practices of governance and policymaking have been improved by the attention of those reporting on politics and economics. If I were a wagering man, I’d wager ‘no’, at least not in India.

    And the failure to acknowledge this corollary of the relationship between science and science journalism, leave alone one’s responsibility as a science journalist, is to my mind a deeper cause for the persistence of both stenographic and pro-science science journalism in some quarters. I thought to write this down when reading a new editorial by Holden Thorp, the editor of Science. He says here:

    It’s not just a matter of translating jargon into plain language. As Kathleen Hall Jamieson at the University of Pennsylvania stated in a recent article, the key is getting the public to realize that science is a work in progress, an honorably self-correcting endeavor carried out in good faith.

    Umm, no. Science is a work in progress, sure, but I have neither reason nor duty to explain that the practice of science is honourable or that it is “carried out in good faith”. (It frequently isn’t.) Granted, the editorial focuses on communicators, not journalists, but I’d place communicators on the journalism side of the fence, instead of on the science side: the purposes of journalists and communicators deviate only slightly, and for the most part both groups travel the same path.

    The rest of Thorp’s article focuses on the fact that not all scientists can make good communicators – a fact that bears repeating if only because some proponents of science communication tend to go overboard with their insistence on getting scientists to communicate their work to a non-expert audience. But in restricting his examples to full-blown articles, radio programmes, etc., he creates a bit of a false binary (if earlier he created a false union): that you’re a communicator only if you’ve produced ‘packages’ of that size or scope. But I’ve always marvelled at the ability of some reporters, especially at the New York Times’ science section, to elicit some lovely quotes from experts. Here are three examples:

    This is science communication as well. Of course, not all scientists may be able to articulate things so colourfully or arrive at poignant insights in their quotes, but surely there are many more scientists who can do this than there are scientists who can write entire articles or produce engaging podcasts. And a scientist who allows your article to say interesting things is, I’m sure you’ll agree, an invaluable resource. Working in India, for example, I continue to have to give the reporters I commission extra time to file their stories because many scientists don’t want to talk – and while there are many reasons for this, a big and common one is that they believe communication is pointless.

    So overall, I think there needs to be more leeway in what we consider to be communication – if only so it encourages scientists to speak to journalists (whom they trust, of course) instead of being put off by the demands of a common yet singular form of this exercise – as well as in what we imagine the science journalist’s purpose to be. If we like to believe that science communication and/or journalism creates new knowledge, as I do, instead of simply being adjacent to science itself, then it must also craft a purpose of its own.

    Featured image credit: Conol Samuel/Unsplash.

  • About vaccines for children and Covaxin…

    I don’t understand his penchant for late-night announcements, much less one at 10 pm on Christmas night, but Prime Minister Narendra Modi has just said the government will roll out vaccines for those aged 15-18 years from January 3, 2022 – around the same time I received a press release from Bharat Biotech saying the drug regulator had approved the company’s COVID-19 vaccine, Covaxin, for emergency use among those aged 12-18 years.

    I think there’s a lot we don’t know about Covaxin at this time – similar to (but hopefully not to the same extent as) when the regulator approved it for emergency use among adults on January 3, 2021. But what grates on me more now is this: more than any other vaccine meant to protect against COVID-19, Covaxin has been the Indian government’s pet project.

    This favour has manifested in the form of numerous government officials endorsing its use and advantages without nearly enough supporting evidence, and in the form of help the vaccine didn’t deserve at the time the government extended it – primarily the emergency-use approval for adults. Most of all, Covaxin has become a victim of India’s vaccine triumphalism.

    And I’m wary that Prime Minister Modi’s 10 pm announcement is a sign that a similar sort of help is in the offing. Until recently – up to December 24, in fact – officials including Rajesh Bhushan, Vinod K. Paul and Balram Bhargava said the government was being guided by science on the need to vaccinate children. Yet Modi’s announcement coincides with the drug regulator’s approval for Covaxin’s emergency use among children.

    I admit this isn’t much to go on, but it isn’t an allegation either. It’s the following doubt: given the recent political history of Covaxin and its sorry relationship with the Indian government, will we stand to lose anything by ignoring the timing of the prime minister’s announcement? Put another way – and even if pulling at this thread turns out to be an abortive effort – did the government wait to change its policy on vaccinating those aged younger than 18 years until it could be sure Covaxin was in the running? (The drug regulator had approved another vaccine for children in August, Zydus Cadila’s ZyCoV-D – another train-wreck.)

    Modi’s announcement also has him making a deceptively off-handed comment that today is Atal Bihari Vajpayee’s birth anniversary. Such an alignment of dates has never been a coincidence in Modi’s term as prime minister. Makes one wonder what else isn’t a coincidence…

  • Charles Lieber case: A high-energy probe of science

    There’s a phenomenon in high-energy particle physics that I’ve found instructive as a metaphor to explain some things whose inner character may not be apparent to us but whose true nature is exposed in extreme situations. For example, consider the case of Charles Lieber, an American chemist whom a jury found guilty earlier today of lying to the US government about participating in a Chinese science programme and about having a Chinese bank account.

    Through our everyday interactions with protons and neutrons – sitting in the nuclei of their respective atoms – we’d have no reason to believe that they’re made up of smaller particles. But when you probe a proton with another particle at an extremely high energy, such a probe can reveal that the proton is really made of smaller particles called up and down quarks.

    Similarly, Lieber’s case is an extreme instance of a national government clashing with the nation’s scientific enterprise for engaging in a science-related activity with immutable political implications. In our everyday interactions, there is no reason to believe that the government, or any other relatively more powerful political entity, could have a problem with what some scientist is working on or has to say. But sparks start to fly the moment the scientist’s work, words or even thoughts begin to have political implications.

    It’s not like the protons are not made of up and down quarks when probed at lower energies; it’s that the latter don’t reveal themselves. Similarly, it’s not like science isn’t a political activity even when it lacks political implications; it’s that the relationship between science and politics, in that limited context, is too feeble to matter. But it’s there.

    According to a New York Times article explaining Lieber’s case – by Ellen Barry, so you know it’s well-written – the Trump-era ‘China Initiative’ to “root out scientists suspected of sharing sensitive information with China” has been accused of “prosecutorial overreach”, but Lieber also shot himself in the foot by denying his involvement in the Chinese programme when “he was specifically asked about his participation”.

    Barry’s article makes the point that scientists are scared because the US government has criminalised otherwise innocuous activities – activities that scientists have spent decades learning not to fear. At the same time, it would be unfair to spare Lieber – an accomplished nanoscience expert employed at Harvard University – the expectation that he’d know what the consequences of his actions might be, and the risk of ignoring them.

    Perhaps he harboured a sense of exceptionalism vis-à-vis his cause; perhaps he thought the ‘China Initiative’ that had knocked on the doors of other scientists wouldn’t knock on his; perhaps he just assumed it wouldn’t matter. But any which way, more than just being “about scaring the scientific community”, as one of Lieber’s former students says in the article, the initiative’s victory in the Charles Lieber case should also remind scientists that the best way to beat the initiative is for the scientific community to proactively engage in political issues.

    Lieber’s excuse, according to tapes of his interrogation by FBI officers, was that he wished to train younger scientists in a technology he had developed and thus increase his chances of winning a Nobel Prize. This is the science-politics link coming back to bite Lieber, and others like him (notably Brian Keating, whose act of ‘coming clean’ on this sentiment I continue to find admirable), who risk ruining their careers just to win the prize (see addendum).

    One major impediment to acknowledging that politics is suffused in every human enterprise – including science – that happens in any organised society whose people govern themselves is that people often misunderstand politics to be “what their politicians say/do” instead of “the practice of self-governance”. But by understanding it to be the former, there’s a hoopla every time some political leader or other apparently oversteps their remit.


    Addendum

    Three comments.

    First, somewhere between the early 20th century and the early 21st, the prize’s perception went from being “do good work and you’ll win it” to “do good work and then hack your way to winning it”.

    Second, I’ve seen this tendency to go ‘over and beyond’ to ensure one wins a Nobel Prize predominantly among scientists of the US – which in turn is hard to separate from the fact that most winners of the science Nobel Prizes have been from the US. There is perhaps an academic-cultural issue at work, and there’s certainly a competition issue at work. People are first nominated for a prize by eminent individuals and former laureates, and thanks to a historical skew in the laureates’ countries of citizenship (in favour of the US, owing to the rise of Nazism in Europe) and the way industry and the scientific publishing enterprise are organised today, both these groups of people as well as new laureates are skewed US-ward. What happens when a country produces “too much” good work for one prize – with its inexplicable rule of awarding no more than three people at a time – to consider? Surely Lieber believed this and wanted to get ahead of others, leading to his bullheaded actions?

    Third, dismantle the Nobel Prizes.

    Featured image: Charles M. Lieber. Credit: Kris Snibbe/Wikimedia Commons, CC BY-SA 4.0.

  • Some thoughts on Robert Downey, Jr.’s science funding idea

    On December 12, Iron Man, a.k.a. Robert Downey, Jr., and David Lang coauthored an op-ed in Fast Company that announced a grant-giving initiative of theirs designed to help fund scientists doing work too important to wait for the bureaucracy to catch up. Their article opened with a paragraph that, to my eye, seemed to have many flaws in reasoning, or at least overlooked them, perhaps in favour of getting to their limited point.

    If there were a Nobel Prize for Overcoming Bureaucratic Adversity, do you know who would win it? Katalin Karikó. Her story of enduring decades of little to no support for her research into the properties of mRNA, which led to the development of the COVID-19 vaccines, has transcended science. It exposes a blind spot of our current scientific institutions to find and nurture every passionate scientist and line of inquiry.

    Except it isn’t a blind spot.

    I think it’s a romantic ideal that dreams of funding every idea scientists have. You could do that – there’s nothing wrong with it – except you’d need lots of money. The current system is designed, even if it hasn’t always been implemented that way, to ensure at least a certain percentage of good ideas are identified and funded at the right time, and in parallel to maximise that percentage. What Iron Man and Lang imagine in their article is a system that will fund all good ideas, including those that The System has let slip. It’s a welcome move, perhaps, but it isn’t more virtuous, even when it rewards adversity that, again, The System has let slip. This is simply because The System’s way – which is effectively the tax-funded government’s way in most parts of the world – is the most efficient given its limited corpus of funds and its responsibility to organise research output to maximise societal good, directly or indirectly, instead of letting it all be open-ended.

    Granted, in times of great adversity it might be foolish to wait for evidence before acting, and a ‘wartime’ funding paradigm during a pandemic makes some sense, even if it’s a solution designed for wartime alone. At the same time, the COVID-19 pandemic – and the ‘fast grants’ for pandemic research that seem to have inspired Downey, Jr. and Lang – is a different kind of adversity than climate change. The latter is longer-lasting and more persistent, is a wicked problem (i.e. has multiple interrelated and/or emergent causes), has significant social implications that complicate the relationships between causes and effects, and is decidedly inter- and multi-generational. These differences could in turn render unbridled rapidity counterproductive.

    A part of the reason for the authors’ outlook, concerned with ‘catching’ good ideas before it’s too late, sticks out in the first sentence, in which Iron Man and Lang single out Katalin Karikó for praise for her work on mRNA vaccines as well as signal that they consider the Nobel Prizes to be the ultimate reward. If you’ve been reading this blog, you know where I stand on these prizes. But more importantly in the current context, the use of these prizes in particular and the choice of Katalin Karikó as an example of the sort of scientist they’d like to fund is… jarring.

    Iron Man and Lang seem to believe, as they write, that it’s important to catch brilliant ideas quickly (and that “the major impediments” to funding scientific work “are the obvious limitations of decision-making by committee”). First, one cause, among many, of the bureaucracy’s slowness is the bureaucracy’s need to be accountable to the polity about how it spends the polity’s money. And I don’t know if Iron Man and Lang are making room for any kind of slowness, and the corresponding paperwork, in their grant-funding programme. ‘Risk-seeking’ shouldn’t become an excuse for ‘accountability-avoiding’. On a related note, to zero in on ‘speed of funding’ as the principal problem with not funding the “right” kind of environmental research is also to ignore other, potentially more fundamental problems hiding behind the slowness – like “the party currently in power is not interested”.

    Second, many of us have lambasted others for singling out individuals – typically white men – as the sole originators of great discoveries. However, many people have identified Karikó more than anyone else with creating the idea of mRNA vaccines, aided by long profiles published by major newspapers about her work and her role at BioNTech – yet these identifications haven’t elicited the same or even similar reactions. If adversity is our measure – i.e. “we’re going to credit the person who struggled the most to make a meaningful contribution to an important idea” – then Karikó is by no means alone, nor is she likely to be, as just the post-war history of science has taught us, if we’re focusing on women. She couldn’t have worked alone, and even if the people we’re ignoring as a result are old white men, it’s still problematic to say Katalin Karikó is deserving of a Nobel Prize – at least not without, at the same time, admitting that it would be legitimate for the Nobel Prizes to award two or three people for the invention of mRNA vaccines.

    (I discovered that Nature News published a deep-dive in October on the “tangled history of mRNA vaccines” after I started writing this post, discussing the work of a long line of people, including Karikó, who contributed to this enterprise. So on a related note, if Karikó’s story is being used to illustrate new science-funding ideas, what might the professional experiences of all those other people say about how science is funded – as well as about how we apportion credit?)

    Third, it’s kind of a bummer that, heartening though it is for major Hollywood actors to get interested in the relatively more obscure problems of science administration and funding, and in turn to become part of a concrete solution instead of running their mouths on Twitter, this new initiative refuses to break from the tradition of devising new solutions to old problems instead of fixing existing solutions – an admittedly much less glamorous enterprise. The only other person who’s compared to Iron Man as frequently as Robert Downey, Jr., one Elon Musk, is infamous for this kind of thinking vis-à-vis ‘revolutionising’ personal transport. Musk wants more people to own cars – especially the ones his company makes – but will go so far as to dream up Hyperloop and The Boring Company to avoid considering fixing existing public transportation options.

    Similarly, Downey, Jr. and Lang, and their supporters, will go to the extent of setting up a whole new platform, or getting on a relatively new platform (same difference), instead of building on the things The System is already getting right. And this is a problem for at least three reasons. First, the new system will set up its own forms of discrimination and in-ness. For example, Downey, Jr.’s and Lang’s idea goes like this:

    FootPrint Coalition is funding early research in brand new environmental fields, and doing it under the direction of esteemed Science Leads who can move quickly and fund at their discretion. The FootPrint Coalition Science Engine builds off suggestions made in the Funding Risky Research paper. It operationalizes the “loose-play funding for early-stage risky explorations” but doesn’t bind it to universities.

    We’re doing it “in public” on the Experiment funding platform, a website for crowdfunding science research projects, so anyone can participate as a cofunder.

    As a platform that you get on, describe your idea and convince potential funders that your work is worth funding, ‘Experiment’ fundamentally requires you to be able to communicate clearly and with the same sensibilities as your future funders, most of whom are likely to be English-speakers of the US or Europe, if you expect to be successful. This in turn quickly eliminates a panoply of scientists who aren’t great communicators or aren’t even fluent in English. And in the specific case of the ‘Science Engine’, your work needs to appeal to the ‘Science Lead’ and fit into their sense of what’s important and what isn’t. A version of this problem already exists with scientific journals – where major journals’ editorial boards are often filled with editors who turn down papers because they’re not as enthusiastic as the authors might be about, say, the nutritional properties of an ant species endemic to Odisha.

    In addition, not all ideas to save the environment are great ideas. For example, climate geoengineering is popular with the US government because it needs to make up for historical emissions without compromising on current economic growth, it needs to placate the local, powerful energy industry and it wields the clout to disregard how much geoengineering solutions could screw up the weather in other parts of the world.

    Second, as a system designed to patch “leaks” in the “scientific talent funnel”, it still presumes the existence of a funnel for its own success even as it does nothing to fix the funnel itself. This is self-serving. And third, allowing scientific work to achieve success based solely on what gets funded quickly – that too based on descriptions on platforms on the internet, unmoderated by the criticism of other scientists (have you visited PubPeer?) or even by the critical attention of competent science journalists, and based on what people who are already rich think is “cool” – can be a short path for things the world could really do without to get funded.

    So, do I think Iron Man’s and Lang’s pitch is a good idea? I still don’t know.

    Featured image: A screenshot of Iron Man in action in Avengers: Infinity War (2018). Source: Hotstar.