Month: July 2021

  • The problem with rooting for science

    The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

    More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust scientists they’re talking to to make sense. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency, reflexivity, etc., so that what they produce I take only with the smallest pinch of salt, and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with the socially developed markers of reliability.

    I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith if only to emphasise how it stands opposed to reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

    Sometimes, such faith is (mostly) harmless, such as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to cognise superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics to a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

    As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

    Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

    However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges from thinking we know something when in fact we’re in denial about how it is that we know it. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

    The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

    First, we trust scientists instead of presuming to know, or actually knowing, that we can vouch for their work. It would be vacuous to claim science is superior in any way to another enterprise that demands our faith when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, graphene layers superconducting electrons or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

    Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27%, it’s good enough to be evidence; if the chance of X being false is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in line with Karl Popper’s philosophy) is that a result that is expected to be true, and is subsequently found to be true, is true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
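
    Those two percentages correspond to what particle physicists conventionally call the 3-sigma (‘evidence’) and 5-sigma (‘discovery’) thresholds: the probability that a random fluctuation of the background alone would produce a signal at least that far from the expected value. A minimal sketch of the arithmetic, assuming the two-sided convention that reproduces the figures quoted above (the post doesn’t spell this out):

    ```python
    from math import erfc, sqrt

    def two_sided_tail_probability(n_sigma: float) -> float:
        """Chance that a normally distributed fluctuation lands at least
        n_sigma standard deviations away from the mean, in either direction."""
        return erfc(n_sigma / sqrt(2))

    # The post's 0.27% and 0.00006% match the two-sided 3-sigma and 5-sigma tail
    # probabilities; the one-sided values physicists often quote are half of these.
    for n_sigma, label in [(3, "evidence"), (5, "discovery")]:
        p = two_sided_tail_probability(n_sigma)
        print(f"{n_sigma} sigma ({label}): {100 * p:.5f}%")

    # Output:
    # 3 sigma (evidence): 0.26998%
    # 5 sigma (discovery): 0.00006%
    ```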

    (Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

    Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

    We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

    (Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with a pinch of salt until independent scientists have successfully replicated them.)

    Later from the same paper:

    Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

    In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

    So rooting for science per se is not just insufficient – it could harm public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They don’t seem dubious at first glance, not least because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either for insisting on a degree of certainty in the results that neither exists nor is achievable, or for making pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those who support science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

    Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

    He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

    To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

    • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, also self-explanatory. Example: I’m not fluent in the physics of cryogenic engines but I’m aware that they’re desirable because liquefied hydrogen has the highest specific impulse of all rocket fuels.
    • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
    • Abstraction: 1. perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

    An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

    Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

    Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

    But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English (the language of modern science) and mathematics, beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

  • Boron nitride, tougher than it looks

    During World War I, a British aeronautical engineer named A.A. Griffith noticed something odd about glass. He found that the atomic bonds in glass needed 10,000 megapascals of stress to break apart – but a macroscopic mass of glass could be broken apart by a stress of 100 megapascals. Something about glass changed between the atomic level and the bulk, making it more brittle than its atomic properties suggested.

    Griffith attributed this difference to small imperfections in the bulk, like cracks and notches. He also realised the need for a new way to explain how brittle materials like glass fracture, since the atomic properties alone can’t explain it. He drew on thermodynamics to derive an equation based on two forms of energy: elastic energy and surface energy. The elastic energy is energy stored in a material when it is deformed – like the potential energy of a stretched rubber-band. The surface energy is the energy of molecules at the surface, which is always greater than that of molecules in the bulk. The greater the surface area of an object, the more surface energy it has.

    Griffith took a block of glass, subjected it to a tensile load (i.e. a load that stretches the material without breaking it) and then etched a small crack in it. He found that the introduction of this flaw reduced the material’s elastic energy but increased its surface energy. He also found that the free energy – the surface energy minus the elastic energy released – increased up to a point as he increased the crack length, before falling back down as the crack grew longer still. A material fractures, i.e. breaks, when its cracks grow beyond the length at which this peak occurs – equivalently, when the applied stress exceeds the value at which an existing flaw reaches that critical length.
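
    In symbols, and assuming the usual textbook idealisation of Griffith’s argument – a through-crack of length 2a in a thin plate of unit thickness under a remote tensile stress σ, a detail the post doesn’t specify – the energy balance looks like this:

    ```latex
    % Change in free energy when a crack of length 2a is introduced:
    % surface energy gained minus elastic strain energy released.
    \[
      \Delta U(a) \;=\; \underbrace{4 a \gamma_s}_{\text{surface energy}}
                  \;-\; \underbrace{\frac{\pi \sigma^2 a^2}{E}}_{\text{elastic energy released}}
    \]

    % \Delta U rises with the crack length a, peaks, then falls. The peak defines the
    % critical crack length a_c; equivalently, for a flaw of size a, the fracture stress:
    \[
      \frac{\mathrm{d}\,\Delta U}{\mathrm{d}a} = 0
      \;\Longrightarrow\;
      a_c = \frac{2 E \gamma_s}{\pi \sigma^2},
      \qquad
      \sigma_f = \sqrt{\frac{2 E \gamma_s}{\pi a}}
    \]
    ```

    Cracks shorter than the critical length cost more energy to grow than they release, so they stay put; cracks longer than it release more energy than they cost, and run away.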

    Through experiments, engineers have also been able to calculate the fracture toughness of materials – a number essentially denoting the ability of a material to resist the propagation of surface cracks. Brittle materials usually have higher strength but lower fracture toughness. That is, they can withstand high loads without breaking or deforming, but when they do fail, they fail in catastrophic fashion. No half-measures.

    If a material’s fracture characteristics are in line with Griffith’s theory, it’s usually brittle. For example, glass has a strength of 7 megapascals (with a theoretical upper limit of 17,000 megapascals) – but a fracture toughness of only 0.6-0.8 megapascal square-root metres (MPa·√m).

    Graphene is a 2D material, composed of a sheet of carbon atoms arranged in a hexagonal pattern. And like glass, it shows a large gap between the two figures: a strength of 130,000 megapascals but a fracture toughness of just 4 MPa·√m – the difference arising similarly from small flaws in the bulk material. Many people have posited graphene as a material of the future for its wondrous properties. Recently, scientists have been excited about the weird behaviour of electrons in graphene and the so-called ‘magic angle’. However, the fact that it is brittle automatically limits graphene’s applications to environments in which material failure can’t be catastrophic.
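
    A rough back-of-the-envelope, taking the quoted strength and toughness figures at face value and assuming the standard centre-crack relation σ_f = K_IC/√(πa) (an idealisation; real geometries differ), shows why small flaws matter so much:

    ```python
    from math import pi

    def critical_flaw_half_length_m(k_ic_mpa_sqrt_m: float, strength_mpa: float) -> float:
        """Half-length (in metres) of a centre crack at which the idealised
        fracture stress, sigma_f = K_IC / sqrt(pi * a), equals the quoted strength."""
        return (k_ic_mpa_sqrt_m / strength_mpa) ** 2 / pi

    # Figures quoted in the post, taken at face value:
    materials = {
        "glass":    {"k_ic": 0.7, "strength": 7.0},        # MPa*sqrt(m), MPa
        "graphene": {"k_ic": 4.0, "strength": 130_000.0},  # MPa*sqrt(m), MPa
    }

    for name, p in materials.items():
        a = critical_flaw_half_length_m(p["k_ic"], p["strength"])
        print(f"{name}: critical flaw half-length ~ {a:.1e} m")

    # glass: ~3e-03 m, i.e. millimetre-scale flaws suffice
    # graphene: ~3e-10 m, i.e. near-atomic-scale flaws suffice at its intrinsic strength
    ```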

    Another up-and-coming material is hexagonal boron nitride (h-BN). As its name indicates, h-BN is a grid of boron and nitrogen atoms arranged in a hexagonal pattern. (Boron nitride has two other common forms: cubic, with a sphalerite-like structure, and wurtzite.) h-BN is already used as a lubricant because it is very soft. It can also withstand high temperatures before losing its structural integrity, making it useful in applications related to spaceflight. However, since monolayer h-BN’s atomic structure is similar to that of graphene, it was expected to be brittle as well – with small flaws in the bulk material compromising the strength arising from its atomic bonds.

    But a new study, published on June 2, has found that h-BN is not brittle. Scientists from China, Singapore and the US have reported that cracks in “single-crystal monolayer h-BN” don’t propagate according to Griffith’s theory, but that they do so in a more stable way, making the material tougher.

    Even though h-BN is sometimes called ‘white graphene’, many of its properties are different. Aside from being able to withstand up to 300° C more in air before oxidising, h-BN is an insulator (graphene is a semimetal) and is more chemically inert. In 2017, scientists from Australia, China, Japan, South Korea, the UK and the US also reported that while graphene’s strength dropped by 30% as the number of stacked layers was increased from one to eight, that of h-BN was pretty much constant. This suggested, the scientists wrote, “that BN nanosheets are one of the strongest insulating materials, and more importantly, the strong interlayer interaction in BN nanosheets, along with their thermal stability, make them ideal for mechanical reinforcement applications.”

    The new study further cements this reputation, and in fact lends itself to the conclusion that h-BN is one of the thermally, chemically and mechanically toughest insulators that we know.

    Here, the scientists found that when a crack is introduced in monolayer h-BN, the resulting release of energy is dissipated more effectively than is observed in graphene. And as the crack grows, they found that unlike in graphene, it gets deflected instead of proceeding along a straight path, and also sprouts branches. This way, monolayer h-BN redistributes the elastic energy released in a way that allows the crack length to increase without fracturing the material (i.e. without causing catastrophic failure).

    According to their paper, this behaviour is the result of h-BN being composed of two different types of atoms, of boron and nitrogen, whereas graphene is composed solely of carbon atoms. As a result, when a bond between boron and nitrogen breaks, two types of crack-edges are formed: those with boron at the edge (B-edge) and those with nitrogen at the edge (N-edge). The scientists write that based on their calculations, “the amplitude of edge stress [along N-edges] is more than twice of that [along B-edges]”. Every time a crack branches or is deflected, the direction in which it propagates is determined according to the relative position of B-edges and N-edges around the crack tip. And as the crack propagates, the asymmetric stress along these two edges causes the crack to turn and branch at different times.

    The scientists summarise this in their paper thus: h-BN dissipates more energy by introducing “more local damage” – as opposed to global damage, i.e. fracturing – “which in turn induces a toughening effect”. “If the crack is branched, that means it is turning,” Jun Lou, one of the paper’s authors and a materials scientist at Rice University, Texas, told Nanowerk. “If you have this turning crack, it basically costs additional energy to drive the crack further. So you’ve effectively toughened your material by making it much harder for the crack to propagate.” The paper continues:

    [These two mechanisms] contribute significantly to the one-order of magnitude increase in effective energy release rate compared with its Griffith’s energy release rate. This finding that the asymmetric characteristic of 2D lattice structures can intrinsically generate more energy dissipation through repeated crack deflection and branching, demonstrates a very important new toughening mechanism for brittle materials at the 2D limit.

    To quote from Physics World:

    The discovery that h-BN is also surprisingly tough means that it could be used to add tear resistance to flexible electronics, which Lou observes is one of the niche application areas for 2D-based materials. For flexible devices, he explains, the material needs to be mechanically robust before you can bend it around something. “That h-BN is so fracture-resistant is great news for the 2D electronics community,” he adds.

    The team’s findings may also point to a new way of fabricating tough mechanical metamaterials through engineered structural asymmetry. “Under extreme loading, fracture may be inevitable, but its catastrophic effects can be mitigated through structural design,” [Huajian Gao, also at Rice University and another member of the study], says.

    Featured image: A representation of hexagonal boron nitride. Credit: Benjah-bmm27/Wikimedia Commons, public domain.

  • On tabletop accelerators

    Tabletop accelerators are an exciting new field of research in which physicists use devices the size of a shoe box, or something just a bit bigger, to accelerate electrons to high energies. The ‘conventional way’ to do this has been to use machines that are as big as small buildings, but are often bigger as well. The world’s biggest machine, the Large Hadron Collider (LHC), uses thousands of magnets, copious amounts of electric current, sophisticated control systems and kilometres of beam pipes to accelerate protons from their rest energy of roughly 0.001 TeV (938 MeV) to 7 TeV. Tabletop accelerators can’t push electrons to such high energies, required to probe exotic quantum phenomena, but they can attain energies that are useful in medical applications (including scanners and radiation therapy).

    They do this by skipping the methods that ‘conventional’ accelerators use, and instead take advantage of decades of progress in theoretical physics, computer simulations and fabrication. For example, some years ago, there was a group at Stanford University that had developed an accelerator that could sit on your fingertip. It consisted of narrow channels etched on glass, and a tuned infrared laser shone over these ‘mountains’ and ‘valleys’. When an electron passed over a mountain, it would get pushed more than it would slow down over a valley. This way, the group reported an acceleration gradient – the energy gain imparted per unit distance – of 300 MV/m. This means the electrons gain 300 MeV of energy for every metre travelled. This was comparable to some of the best, but gigantic, electron accelerators.

    Another type of tabletop accelerator uses a clump of electrons or a laser fired into a plasma, setting off a ripple of energy that the trailing electrons, from the plasma, can ‘ride’ and be accelerated on. (This is a grossly simplified version; a longer explanation is available here.) In 2016, physicists in California proved that it would be possible to join two such accelerators end to end and accelerate the electrons more – although not twice as much, since there is a cost associated with the plasma’s properties.

    The biggest hurdle between tabletop accelerators and the market is also something that makes the label of ‘tabletop’ meaningless. Today, just the part of the device where electrons accelerate can fit on a tabletop. The rest of the machine is still too big. For example, the team behind the 2016 study realised that they’d need enough of their shoebox-sized devices to span 100 m to accelerate electrons to 0.1 TeV. In early 2020, the Stanford group improved their fingertip-sized accelerator to make it more robust and scalable – but in the process the device’s acceleration gradient dropped by a factor of 10 and it required pre-accelerated electrons to work. The machines required for the latter are as big as rooms.
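
    The scaling implicit in these numbers is just gradient times length. A crude sketch using only the figures quoted above – it ignores injection energy, staging losses and everything else that makes real machines hard:

    ```python
    def length_needed_m(target_energy_gain_mev: float, gradient_mev_per_m: float) -> float:
        """Distance over which a constant accelerating gradient delivers a given energy gain."""
        return target_energy_gain_mev / gradient_mev_per_m

    # At the fingertip accelerator's reported 300 MeV/m, reaching 0.1 TeV (100,000 MeV)
    # would take roughly 333 m:
    print(length_needed_m(100_000, 300))

    # Conversely, the 2016 estimate of ~100 m of plasma stages for 0.1 TeV implies an
    # average effective gradient of about 1,000 MeV/m (1 GeV/m):
    print(100_000 / 100)
    ```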

    More recently, Physics World published an article on July 12 headlined ‘Table-top laser delivers intense extreme-ultraviolet light’. In the fifth paragraph, however, we find that this table needs to be around 2 m long. Is this an acceptable size for a table? I don’t want to discriminate against bigger tables but I thought ‘tabletop accelerator’ meant something like my study table (pictured above). The article reports that this new device’s performance “exceeds the performance of existing, far bulkier XUV sources”, that “simulations done by the team suggest that further improvements could boost [its output] intensity by a factor of 1000,” and that it shrinks something that used to be 10 m wide to a fifth of that size. These are all good, but if by ‘tabletop’ we’re to include banquet-hall tables as well, the future is already here.

  • NCBS fracas: In defence of celebrating retractions

    Continuing from here

    Irrespective of Arati Ramesh’s words and actions, I find every retraction worth celebrating because of how hard-won retractions in general have been, in India and abroad. I don’t know how often papers coauthored by Indian scientists are retracted and how high or low that rate is compared to the international average. But I know that the quality of scientific work emerging from India is grossly disproportionate (in the negative sense) to the size of the country’s scientific workforce, which is to say most of the papers published from India, irrespective of the journal, contain low-quality science (if they contain science at all). It’s not for nothing that Retraction Watch has a category called ‘India retractions’, with 196 posts.

    Second, it’s only recently that the global scientific community’s attitude towards retractions started changing, and even now most of it is localised to the US and Europe. And even there, there is a distinction: between retractions for honest mistakes and those for dishonest mistakes. Our attitudes towards retractions for honest mistakes have been changing. Retractions for dishonest conduct, or misconduct, have in fact been harder to secure, and continue to be.

    The work of science integrity consultant Elisabeth Bik allows us a quick take: the rate at which sleuths are spotting research fraud is far higher than the rate at which journals are retracting the corresponding papers. Bik herself has often said on Twitter and in interviews how most journal editors simply don’t respond to complaints, or quash them with weak excuses and zero accountability. Between 2015 and 2019, a group of researchers identified papers that had been published in violation of the CONSORT guidelines in journals that endorsed the same guidelines, and wrote to those journals’ editors. From The Wire Science’s report:

    … of the 58 letters sent to the editors, 32 were rejected for different reasons. The BMJ and Annals published all of those addressed to them. The Lancet accepted 80% of them. The NEJM and JAMA turned down every single letter.

    According to JAMA, the letters did not include all the details it required to challenge the reports. When the researchers pointed out that JAMA’s word limit for the letter precluded that, they never heard back from the journal.

    On the other hand, NEJM stated that the authors of reports it published were not required to abide by the CONSORT guidelines. However, NEJM itself endorses CONSORT.

    The point is that bad science is hard enough to spot, and getting stakeholders to act on it is even harder. It shouldn’t have to be, but it is. In this context, every retraction is a commendable thing – no matter how obviously warranted it is. It’s also commendable when a paper ‘destined’ for retraction is retracted sooner (than the corresponding average) because we already have some evidence that “papers that scientists couldn’t replicate are cited more”. Even if a paper in the scientific literature dies, other scientists don’t seem to be able to immediately recognise that it is dead, and continue to cite it in their own work as evidence of this or that thesis. These are called zombie citations. Retracting such papers is a step in the right direction – insufficient to prevent all sorts of problems associated with endeavours to maintain the quality of the literature, but necessary.

    As for the specific case of Arati Ramesh: she defended her group’s paper on PubPeer in two comments that offered more raw data and seemed to be founded on a conviction that the images in the paper were real, not doctored. Some commentators have said that her attitude is a sign that she didn’t know the images had been doctored while some others have said (and I tend to agree) that this defence of Ramesh is baffling considering both of her comments came after detailed descriptions of forgery. Members of the latter group have also said that, in effect, Ramesh tried to defend her paper until it was impossible to do so, at which point she published her controversial personal statement in which she threw one of her lab’s students under the bus.

    There are a lot of missing pieces here when it comes to ascertaining the scope and depth of Ramesh’s culpability – given that she is the lab’s principal investigator (PI), that she has since started to claim that her lab doesn’t have access to the experiments’ raw data, and that the now-retracted paper says that she “conceived the experiments, performed the initial bioinformatic search for Sensei RNAs, supervised the work and wrote the manuscript”.

    [Edit, July 11, 2021, 6:28 pm: After a conversation with Priyanka Pulla, I edited the following paragraph. The previous version appears below, struck through.]

    Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? Yes and no… Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. This is the ‘no’ part. The ‘yes’ arises from Ramesh’s actions on PubPeer, to ‘keep going until one can go no longer’, so to speak, which suggests, among other things – and I’m shooting in the dark here – that she somehow couldn’t spot the problem right away. So giving her credit for the retraction would set a low, if also weird, bar; I think credit belongs on this count with the fastidious commenters of PubPeer. Ramesh would still have had to sign off on a document saying “we’ve agreed to have the paper retracted”, as journals typically require, but perhaps we can also speculate as to whom we should really thank for this outcome – anyone/anything from Ramesh herself to the looming threat of public pressure.

    Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? No. Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. Perhaps we can speculate as to whom we should thank for this outcome – Arati Ramesh herself, someone else in her lab, members of the internal inquiry committee that NCBS set up, some others members of the institute or even the looming threat of public pressure. We don’t have to give Ramesh credit here beyond her signing off on the decision (as journals typically require) – and we still need answers on all the other pieces of this puzzle, as well as accountability.

    A final point: I hope that the intense focus that the NCBS fracas has commanded – and could continue to command, considering Bik has flagged one more paper coauthored by Ramesh and others have flagged two coauthored by her partner Sunil Laxman (published in 2005 and 2006), both on PubPeer, for potential image manipulation – will widen to encompass the many instances of misconduct popping up every week across the country.

    NCBS, as we all know, is an elite institute as India’s centres of research go: it is well-funded (by the Department of Atomic Energy, a government body relatively free from bureaucratic intervention), staffed by more-than-competent researchers and students, has published commendable research (I’m told), has a functional outreach office, and its scientists often feature in press reports commenting on this or that other study. As such, it is overrepresented in the public imagination and easily gets attention. However, the problems assailing NCBS vis-à-vis the reports on PubPeer are not unique to the institute, and should in fact force us to rethink our tendency (mine included) to give such impressive institutes – often, and by no coincidence, Brahmin strongholds – the benefit of the doubt.

    (1. I have no idea how things are at India’s poorly funded state and smaller private universities, but even there – and in fact at institutes that are less elite overall but still “up there” in terms of fortunes, like the IISERs – Brahmins have been known to dominate the teaching and professorial staff, if not the student body, and have still been found guilty of misconduct, often sans accountability. 2. There’s a point to be made here about plagiarism, the graded way in which it is ‘offensive’, access to good-quality English education for people of different castes in India, the resulting access to, plus inheritance of, cultural and social capital, and the funneling of students with such capital into elite institutes.)

    As I mentioned earlier, Retraction Watch has an ‘India retractions’ category (although to be fair, there are also similar categories for China, Italy, Japan and the UK, but not for France, Russia, South Korea or the US; all these countries ranked in the top 10 on the list of countries with the most scientific and technical journal publications in 2018). Its database lists 1,349 papers with at least one author affiliated with an Indian institute that have been retracted – including five retracted since the NCBS paper met its fate. The latest one was retracted on July 7, 2021 (after being published on October 16, 2012). Again, these are just instances in which a paper was retracted. Further up the funnel, we have retractions that Retraction Watch missed, papers that editors are deliberating on, complaints that editors have rejected, complaints that editors have ignored, complaints that editors haven’t yet received, and journals that don’t care.

    So, retractions – and retractors – deserve brownie points.

  • NCBS retraction – addenda

    My take on the NCBS paper being retracted, and the polarised conversation that has erupted around the incident, is here. The following are some points I’d like to add.

    a. Why didn’t the editorial and peer-review teams at Nature Chemical Biology catch the mistakes before the paper was published? As the work of famous research-fraud detective Dr Elisabeth Bik has shown, detecting image manipulation is sometimes easy and sometimes hard. But what is untenable is the claim by some scientists, and journals as well, that peer review is a non-negotiable requirement to ensure the scientific literature remains of ‘high quality’. Nature Chemical Biology also tries to launder its image by writing in its retraction notice that the paper was withdrawn because the authors could not reproduce its results. Being unable to reproduce results is a far less egregious offence than manipulating images. What the journal is defending here is its peer-review process.

    b. Nature Chemical Biology continues to hold the retracted paper behind a paywall ($9 to rent, EUR 55.14 to subscribe to the journal for a year). I expect readers of this blog to know the background to why paywalls are bad, etc., but I would have thought a retracted paper would be released into the public domain. It’s important for everyone to know the ways in which a paper was flawed post-retraction, especially one that has commanded so much public attention (at least as retractions go). Unless of course this is Nature Chemical Biology acknowledging that paywalls are barriers more than anything else, and that the journal’s editors can hide their own failure, and that of their peer-review process, this way.

    c. The (now retracted) Arati Ramesh et al result was amazing, etc., but given that some social media conversations have focused on why Ramesh didn’t double-check a result so significant as to warrant open celebration once the paper was published, some important background: the result was great but not entirely unexpected. In April 2020, Jianson Xu and Joseph Cotruvo reported that a known riboswitch that bound to nickel and cobalt ions also had features that allowed it to bind to iron. (Ramesh et al’s paper also cites another study from 2015 with a similar claim.) Ramesh et al reported that they had found just such behaviour in a riboswitch (present in a different bacterial species). However, many of the images in their paper appeared to be wholly manipulated, undermining the results. It’s still possible (I think) that someone else could make a legitimate version of the same discovery.

  • Pseudoscientific materials and thermoeconomics

    The Shycocan Corp. took out a full-page jacket ad in the Times of India on June 22 – the same day The Telegraph (UK) had a story about GBP 2,900 handbags by Gucci that exist only online, in some videogame. The Shycocan product’s science is questionable, at best, though its manufacturers have disagreed vehemently with this assessment. (Anusha Krishnan wrote a fantastic article for The Wire Science on this topic). The Gucci ‘product’ is capitalism redigesting its own bile, I suppose – a way to create value out of thin air. This is neither new nor particularly exotic: I have paid not inconsiderable sums of money in the past for perks inside videogames, often after paying for the games themselves. But thinking about both products led me to a topic called thermoeconomics.

    This may be too fine a point, but the consumerism implicit in the pixel-handbags as well as in Shycocan and other medical devices of unproven efficacy has a significant thermodynamic cost. While pixel-handbags may represent a minor offence, so to speak, in the larger scheme of things, their close cousins, the non-fungible tokens (NFTs) of the cryptocurrency universe, are egregiously energy-intensive. (More on this here.) NFTs represent an extreme case of converting energy into monetary value, bringing into sharp focus the relationships between economics and thermodynamics that we often ignore because they are too muted.

    Free energy, entropy and information are three of the many significant concepts at the intersection of economics and thermodynamics. Free energy is the energy available to perform useful work. Entropy is a measure of disorder – of energy that has become so dispersed it can’t be used to perform useful work. Information, a form of negative entropy, and the other two concepts taken together are better illustrated by the following excerpt, from this paper:

    Consider, as an example, the process of converting a set of raw materials, such as iron ore, coke, limestone and so forth, into a finished product—a piece of machinery of some kind. At each stage the organization (information content) of the materials embodied in the product is increased (the entropy is decreased), while global entropy is increased through the production of waste materials and heat. For example:

    Extraction activities start with the mining of ores, followed by concentration or beneficiation. All of these steps increase local order in the material being processed, but only by using (dissipating) large quantities of available work derived from burning fuel, wearing out machines and discarding gangue and tailings.

    Metallurgical reduction processes mostly involve the endothermic chemical reactions to separate minerals into the desired element and unwanted impurities such as slag, CO2 and sulfur oxides. Again, available work in the form of coal, oil or natural gas is used up to a much greater extent than is embodied in metal, and there is a physical wear and tear on machines, furnaces and so forth, which must be discarded eventually.

    Petroleum refining involves fractionating the crude oil, cracking heavier fractions, and polymerizing, alkylating or reforming lighter ones. These processes require available work, typically 10% or so of the heating value of the petroleum itself. Petrochemical feedstocks such as olefins or alcohols are obtained by means of further endothermic conversion processes. Inorganic chemical processes begin by endothermic reduction of commonplace salts such as chlorides, fluorides or carbonates into their components. Again, available work (from electricity or fuel) is dissipated in each step.

    Fabrication involves the forming of materials into parts with desirable forms and shapes. The information content, or orderliness, of the product is increased, but only by further expending available work.

    Assembly and construction involves the linking of components into complex subsystems and systems. The orderliness of the product continues to increase, but still more available work is used up in the processes. The simultaneous buildup of local order and global entropy during a materials processing sequence is illustrated in figure 4. Some, but not all of the orderliness of the manufactured product is recoverable as thermodynamically available work: Plastic or paper products, for example, can be burned as fuel in a boiler to recover their residual heating value and convert some of that to work again. Using scrap instead of iron ore in the manufacture of steel or recycled aluminum instead of bauxite makes use of some of the work expended in the initial refining of the ore.
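
    One way to make the ‘information as negative entropy’ idea concrete – my gloss, not the excerpt’s – is the standard statistical-mechanics bookkeeping: Boltzmann’s formula ties entropy to the number of microstates consistent with what we know about a system, and Landauer’s bound puts a minimum energy price on erasing a bit of information.

    ```latex
    % Boltzmann entropy: the fewer the accessible microstates \Omega
    % (i.e. the more we know about the system's state), the lower the entropy.
    \[
      S = k_B \ln \Omega
    \]

    % Landauer bound: erasing one bit of information at temperature T
    % dissipates at least this much energy as heat.
    \[
      \Delta E \geq k_B T \ln 2
    \]
    ```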

    Some years ago, I read an article about a debate between a physicist and an economist; I’m unable to find the link now. The physicist says infinite economic growth is impossible because the laws of thermodynamics forbid it. Eventually, we will run out of free energy and entropy will become more abundant, and creating new objects will exact very high, and increasing, resource costs. The economist counters that what a person values doesn’t have to be encoded as objects – that older things can re-acquire new value or become more valuable, or that we will be able to develop virtual objects whose value doesn’t incur the same costs that their physical counterparts do.

    This in turn recalls the concept of eco-economic decoupling – the idea that we can continue and/or expand economic activity without increasing environmental stresses and pollution at the same time. Is this possible? Are we en route to achieving it?

    The Solar System – taken to be the limit of Earth’s extended neighbourhood – is very large but still finite, and the laws of thermodynamics stipulate that it can thus contain a finite amount of energy. What is the maximum number of dollars we can extract through economic activities using this energy? A pro-consumerist brigade believes absolute eco-economic decoupling is possible; at least one of its subscribers, a Michael Liebreich, has written that in fact infinite growth is possible. But NFTs suggest we are not at all moving in the right direction – nor does any product that exacts a significant thermodynamic cost with incommensurate returns (and not just economic ones). Pseudoscientific hardware – by which I mean machines and devices that claim to do something but have no evidence to show for it – belongs in the same category.

    This may not be a productive way to think of problematic entities right now, but it is still interesting to consider that, given we have a finite amount of free energy, and that increasing the efficiency with which we use it is closely tied to humankind’s climate crisis, pseudoscientific hardware can be said to have a climate cost. In fact, the extant severity of the climate crisis already means that even if we had an infinite amount of free energy, thermodynamic efficiency is more important right now. I already think of flygskam in this way, for example: airplane travel is not pseudoscientific, but it can be irrational given its significant carbon footprint, and the privileged among us need to undertake it only with good reason. (I don’t agree with the idea the way Greta Thunberg does, but that’s a different article.)

    To quote physicist Tom Murphy:

    Let me restate that important point. No matter what the technology, a sustained 2.3% energy growth rate would require us to produce as much energy as the entire sun within 1400 years. A word of warning: that power plant is going to run a little warm. Thermodynamics require that if we generated sun-comparable power on Earth, the surface of the Earth—being smaller than that of the sun—would have to be hotter than the surface of the sun! …

    The purpose of this exploration is to point out the absurdity that results from the assumption that we can continue growing our use of energy—even if doing so more modestly than the last 350 years have seen. This analysis is an easy target for criticism, given the tunnel-vision of its premise. I would enjoy shredding it myself. Chiefly, continued energy growth will likely be unnecessary if the human population stabilizes. At least the 2.9% energy growth rate we have experienced should ease off as the world saturates with people. But let’s not overlook the key point: continued growth in energy use becomes physically impossible within conceivable timeframes. The foregoing analysis offers a cute way to demonstrate this point. I have found it to be a compelling argument that snaps people into appreciating the genuine limits to indefinite growth.
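
    Murphy’s claim is easy to check on the back of an envelope. The round numbers below – present-day primary power use of about 18 TW and a solar output of about 3.8 × 10^26 W – are my assumptions, not figures from his post:

    ```python
    from math import log

    current_power_w = 18e12       # assumed: ~18 TW of global primary power use
    solar_luminosity_w = 3.8e26   # assumed: the sun's total output
    growth_rate = 0.023           # the quoted 2.3% per year

    years = log(solar_luminosity_w / current_power_w) / log(1 + growth_rate)
    print(f"~{years:.0f} years")  # ~1350 years, consistent with "within 1400 years"
    ```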

    And … And Then There’s Physics:

    As I understand it, we can’t have economic activity that simply doesn’t have any impact on the environment, but we can choose to commit resources to minimising this impact (i.e., use some of the available energy to avoid increasing entropy, as Liebreich suggests). However, this would seem to have a cost and it seems to me that we mostly spend our time convincing ourselves that we shouldn’t yet pay this cost, or shouldn’t pay too much now because people in the future will be richer. So, my issue isn’t that I think we can’t continue to grow our economies while decoupling economic activity from environmental impact, I just think that we won’t.

    A final point: information is considered negative entropy because it describes certainty – something we know that allows us to organise materials in such a way as to minimise disorder. However, what we consider to be useful information, thanks to capitalism, nationalism (it is not for nothing that Shycocan’s front-page ad ends with a “Jai Hind”), etc., has become all wonky, and all forms of commercialised pseudoscience are good examples of this.

  • Scicommers as knowledge producers

    Reading the latest edition of Raghavendra Gadagkar’s column in The Wire Science, ‘More Fun Than Fun’, about how scientists should become communicators and communicators should be treated as knowledge-producers, I began wondering if the knowledge produced by the latter is in fact not the same knowledge but something entirely new. The idea that communicators simply make the scientists’ Promethean fire more palatable to a wider audience has led, among other things, to a belief widespread among scientists that science communicators are adjacent to science and aren’t part of the enterprise producing ‘scientific knowledge’ itself. And this perceived adjacency often belittles communicators by trivialising the work that they do and hiding the knowledge that only they produce.

    Explanatory writing that “enters into the mental world of uninitiated readers and helps them understand complex scientific concepts”, to use Gadagkar’s words, takes copious and focused work. (And if it doesn’t result in papers, citations and h-indices, just as well: no one should become trapped in bibliometrics the way so many scientists have.) In fact, describing the work of communicators in this way dismisses a specific kind of proof of work that is present in the final product – in much the same way scientists’ proofs of work are implicit in new solutions to old problems, development of new technologies, etc. The knowledge that people writing about science for a wider audience produce is, in my view, entirely distinct, even if the nature of the task at hand is explanatory.

    In his article, Gadagkar writes:

    Science writers should do more than just reporting, more than translating the gibberish of scientists into English or whatever language they may choose to write in. … Science writers are in a much better position to make lateral comparisons, understand the process of science, and detect possible biases and conflicts of interest, something that scientists, being insiders, cannot do very well. So rather than just expect them to clean up our messy prose, we should elevate science writers to the role of knowledge producers.

    My point is about knowledge arising from a more limited enterprise – i.e. explanation – though I think it can be generalised to all of journalism as well (and to other expository enterprises). And in making this point, I hope my two-pronged deviation from Gadagkar’s view is clear. First, science journalists should be treated as knowledge producers, but not in the limited confines of the scientific enterprise and certainly not just to expose biases; instead, communicators as knowledge producers exist in a wider arena – that of society, including its messy traditions and politics, itself. Here, knowledge is composed of much more than scientific facts. Second, science journalists are already knowledge producers, even when they’re ‘just’ “translating the gibberish of scientists”.

    Specifically, the knowledge that science journalists produce differs from the knowledge that scientists produce in at least two ways: it is accessible and it makes knowledge socially relevant. What scientists find is not what people know. Society broadly synthesises knowledge from information that it weights together with extra-scientific considerations, including biases like “which university is the scientist affiliated with” and concerns like “will the finding affect my quality of life”. Journalists are influential synthesisers who work with or around these and other psychosocial stressors to contextualise scientific findings, and thus science itself. Even when they write drab stories about obscure phenomena, they make an important choice: “this is what the reader gets to read, instead of something else”.

    These properties taken together encompass the journalist’s proof of work, which is knowledge accessible to a much larger audience. The scientific enterprise is not designed to produce this particular knowledge. Scientists may find that “leaves use chlorophyll to harvest sunlight for photosynthesis”; a skilled communicator will ensure that more people know this, know why it matters and know how they can put such knowledge to use, thus fostering a more empowered society. And the latter is entirely new knowledge – akin to an emergent object that is greater than the sum of its scientific bits.