Scicomm

  • A false union in science journalism

    At what point does a journalist become a stenographer? Most people would say it’s when the journalist stops questioning claims and reprints them uncritically, as if they were simply a machine. So at what point does a science journalist become a stenographer? You’ll probably say at the same point – when they become uncritical of claims. I disagree: I believe the gap between being critical and being uncritical is narrower in science journalism, simply because of the nature of its subject.

    The scientific enterprise in itself is an attempt to arrive at the truth by critiquing existing truths in different contexts and by simultaneously subtracting biases. The bulk of what we understand to be science journalism is aligned with this process: science journalists critique the same material that scientists do as well, even when they’re following disputes between groups of scientists, but seldom critique the scientists’ beliefs and methods themselves. This is not a distinction without a difference or even a finer point about labels.

    One might say, “There aren’t many stories in which journalists need to critique scientists and/or their methods” – this would be fair, but I have two issues on this count.

    First, both the language and the narrative are typically deferential towards scientists and their views, and steer clear of examining how a scientist’s opinions may have been shaped by extra-scientific considerations, such as their socio-economic location, or whether their accomplishments were the product of certain unique privileges. Second, at the level of a collection of articles, science journalists who haven’t critiqued science will likelier than not have laid tall, wide bridges between scientists and non-scientists but won’t have called scientists, or the apparatuses of science itself, out on their bullshit.

    One way or another, a science journalism that’s uncritical of science often leads to the impression that the two enterprises share the same purpose: to advance science, whether by bringing supposedly important scientific work to the attention of politicians or by building public support for good scientific work. And this impression is wrong. I don’t think that science journalists have an obligation to help science, nor do I think that they should.

    As it happens, science journalism is often treated differently from, say, journalism that’s concerned with political or financial matters. I completely understand why. But I don’t think there has been much of an effort to flip this relationship – to consider whether the conception and practice of science have been improved by the attention of science journalists the way the practices of governance and policymaking have been improved by the attention of those reporting on politics and economics. If I were a wagering man, I’d wager ‘no’, at least not in India.

    And the failure to acknowledge this corollary of the relationship between science and science journalism, let alone one’s responsibility as a science journalist, is to my mind a deeper cause for the persistence of both stenographic and pro-science science journalism in some quarters. I thought to write this down when reading a new editorial by Holden Thorp, the editor-in-chief of Science. He says here:

    It’s not just a matter of translating jargon into plain language. As Kathleen Hall Jamieson at the University of Pennsylvania stated in a recent article, the key is getting the public to realize that science is a work in progress, an honorably self-correcting endeavor carried out in good faith.

    Umm, no. Science is a work in progress, sure, but I have neither reason nor duty to explain that the practice of science is honourable or that it is “carried out in good faith”. (It frequently isn’t.) Granted, the editorial focuses on communicators, not journalists, but I’d place communicators on the journalism side of the fence, instead of on the science side: the purposes of journalists and communicators deviate only slightly, and for the most part both groups travel the same path.

    The rest of Thorp’s article focuses on the fact that not all scientists can make good communicators – a fact that bears repeating if only because some proponents of science communication tend to go overboard with their insistence on getting scientists to communicate their work to a non-expert audience. But in restricting his examples to full-blown articles, radio programmes, etc., he creates a bit of a false binary (if earlier he created a false union): that you’re a communicator only if you’ve produced ‘packages’ of that size or scope. But I’ve always marvelled at the ability of some reporters, especially at the New York Times’ science section, to elicit some lovely quotes from experts. Here are three examples:

    This is science communication as well. Of course, not all scientists may be able to articulate things so colourfully or arrive at poignant insights in their quotes, but surely there are many more scientists who can do this than there are scientists who can write entire articles or produce engaging podcasts. And a scientist who allows your article to say interesting things is, I’m sure you’ll agree, an invaluable resource. Working in India, for example, I continue to have to give the reporters I commission extra time to file their stories because many scientists don’t want to talk – and while there are many reasons for this, a big and common one is that they believe communication is pointless.

    So overall, I think there needs to be more leeway in what we consider to be communication – if only so it encourages scientists to speak to journalists (whom they trust, of course) instead of being put off by the demands of a common yet singular form of this exercise – as well as in what we imagine the science journalist’s purpose to be. If we like to believe that science communication and/or journalism creates new knowledge, as I do, instead of simply being adjacent to science itself, then it must also craft a purpose of its own.

    Featured image credit: Conol Samuel/Unsplash.

  • The omicron variant and scicomm

    Somewhere between the middle of India’s second major COVID-19 outbreak in March-May this year and today, a lot of us appear to have lost sight of a fact that was central to our understanding of COVID-19 outbreaks in 2020: that the only way a disease outbreak, especially of the novel coronavirus, can be truly devastating is if the virus collaborates with poor public health infrastructure and a subpar state response. (Similarly, even a variant deemed mild in, say, the UK could lead to disaster in Chennai.) The virus alone doesn’t lead to catastrophic outcomes.

    Just as India’s second outbreak was picking up speed, there was considerable awareness that the delta variant was wreaking as much havoc as we were letting it. In fact, the Indian government was more than letting it. But since the outbreak began to subside in kurtotic fashion and, much later, as the omicron variant appeared on the scene, the focus on the latter has appeared to overwhelm – at least in public discourse – the extent to which we’re prepared (or not) to face it. Put another way, the focus on the omicron variant and the contexts in which it has been discussed have remained far too scientific. I’m not saying that it should become less scientific but that the social should find mention more often.

    I realise that everyone is weary of the pandemic and would like it to end already, and – together with the fact that most people in India’s cities have received their two doses of some COVID-19 vaccine – it might seem to everyone that there’s sufficient ground to persist with the idea that the omicron variant couldn’t possibly be devastating, and that we can all return to some kind of normal soon. Now, this is one kind of fatigue. There appears to be a second kind also, based on the fact that the delta variant was the first “major” variant, in a manner of speaking, and that the way we talked about it and acted in its potential (and menacing) presence co-evolved with its dispersal through the population.

    The omicron variant, on the other hand, affords both scientists and science communicators the option to simply refer to the narratives and discourses we developed with the delta variant, simply updated to match what we’re finding out about omicron. And this, not surprisingly, has led to a bit of laziness as well. The form I find most lazy, and most annoying, is some scientists’ insistence on pointing to graphs of the number of cases over time in different countries and saying, “If this doesn’t shake us out of our slumber, what will?”

    This is scientism, pure and simple, even if it’s not on the nose: pointing to case trends alone isn’t going to solve anything, especially not in the face of the sort of significant, population-wide yearning for a ‘new normal’, or in fact any kind of normal, instead of more and more upheavals. In fact, consider that for most of 2020, most poor people in India believed that if the novel coronavirus had an infection fatality rate of just 1%, it was no big whoop, and that they would continue going to work and eking out a living. Let’s be clear, this is perfectly reasonable. The idea of letting the virus take its course through the population went sideways in Sweden, but in India, if something has a 1% chance of getting you really sick – or even killing you – it’s tragically the case that it quickly falls down a long list of threats, most of which are often much more lethal, beginning, in too many parts of the country, with breathing the air around you or drinking the water that’s available to you.

    To repeat in this context exhortations based solely on graphs printed in English and shared on Twitter that rapidly rising case-loads elsewhere on the planet should suffice to nudge us out of the Indian subcontinent’s collective torpor is a deference to facts that, I’m very tempted to say, understand only 1% of what is going on. Even if these exhortations are directed at state leaders and government officials, they are really misdirected: as I have written before in the context of Anthony Fauci’s senseless interview responses, if the government hasn’t done something that’s obvious to everyone, the reason just can’t be that it hasn’t seen the chart or the numbers you’ve seen to reach your conclusions. The only way such statements could make some sense is if they are intended to galvanise public opinion, but even then, I’m not convinced.

    And seeing these scientists do what they do, it strikes me that just as much as we’d like to encourage scientists to communicate science as often as possible, there may be virtue in casting science communication as much in terms of what it does as in terms of what it doesn’t. For example, as the number of cases due to the omicron variant of the novel coronavirus increases in different parts of the world, socially responsible science communication requires us to not stop at pointing at graphs but to continue to reflect on and articulate how much – or how little – the greater transmissibility of the variant means in and of itself. And in my view, not doing this would just be socially anti-responsible communication: sticking to the science, and accomplishing little overall.

  • Specific impulse, etc.

    Hydrolox – a combination of liquid hydrogen, the fuel, and liquid oxygen, the oxidiser – is one of the best rocket propellants there is. This might seem a bit unexpected because most (if not all) other fuels performing the same function are compounds, not elements – like 1,1-dimethylhydrazine.

    An engine that uses hydrolox as its propellant is necessarily a cryogenic engine. This is because storing and transporting gaseous hydrogen, which is its natural form at ambient temperature and pressure, is very difficult. Hydrogen has the lowest density of all elements, and thus occupies a large volume for a given mass; is difficult to pump; and reacts explosively with oxygen to form water. To use it in an engine, then, engineers typically cool it until it becomes a liquid, at -253º C, and store and move it in containers that are also constantly maintained at this temperature. The same condition applies to the oxygen used to oxidise the hydrogen in the engine’s combustion chamber; oxygen becomes liquid at -183º C. ‘Cryogenic’ today typically refers to systems that operate below -150º C or so.

    The spaceflight industry goes to all this trouble for hydrogen because hydrolox affords one of the highest specific impulses of all rocket propellants, at least with the engine designs currently in use. In technical terms, specific impulse refers to the efficiency with which the engine converts its reaction mass – the propellant, in this case liquid hydrogen plus liquid oxygen – into thrust. The higher the specific impulse, the more efficiently the engine uses the reaction mass, and the more acceleration the rocket gets per unit of propellant consumed.

    The concept of specific impulse applies to jet engines as well, with a small difference. Rockets carry both the fuel and its oxidiser; jet engines, on the other hand, carry only the fuel and ‘breathe’ the oxygen from the atmosphere. So a jet engine’s reaction mass, for this purpose, is only the fuel. In rocket engines, the specific impulse is directly proportional to the effective velocity at which the exhaust exits the engine’s nozzle; in jet engines, the relationship is similar, with a small modification to account for the momentum of the air the engine takes in.

    But in both cases, the specific impulse depends on the exhaust velocity, which in turn depends on the difference between the combustion chamber pressure and the ambient pressure. The exhaust velocity is highest when the engine operates in vacuum, because that is when this difference is greatest. And in both cases, the specific impulse also depends on how the engine itself works.

    For example, the CE7.5 cryogenic engine that the Indian Space Research Organisation (ISRO) uses on its GSLV Mk 2 rockets has a specific impulse of 454 seconds in vacuum. The most common way to express specific impulse is in terms of seconds – that is, “for how many seconds can one kilogram of propellant produce one kilogram-force of thrust”. (1 kgf ≈ 9.8 N; 1 N = 1 kg m/s².) So this means the CE7.5 engine allows one kilogram of its reaction mass – hydrolox – to produce one kilogram-force of thrust for 454 seconds in vacuum. The CE20 engine that ISRO uses onboard its GSLV Mk 3 rockets has a specific impulse of 443 seconds in vacuum – and the Aerojet Rocketdyne RS-25 engine, which the Space Shuttle used, has a specific impulse of 453 seconds in vacuum. Yet all three engines use hydrolox as their propellant.
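
    To make the ‘seconds’ figure a little more concrete, here’s a minimal sketch – based on the standard definition of specific impulse rather than on anything specific to these engines – that converts the quoted vacuum values into effective exhaust velocities:

    ```python
    # Specific impulse (in seconds) is usually defined as Isp = F / (mdot * g0),
    # where F is thrust, mdot is the propellant mass flow rate and g0 is standard
    # gravity. Equivalently, the effective exhaust velocity is v_e = Isp * g0.

    G0 = 9.80665  # m/s^2, standard gravity

    def exhaust_velocity(isp_seconds):
        """Effective exhaust velocity (m/s) from a specific impulse in seconds."""
        return isp_seconds * G0

    for name, isp in [("CE7.5", 454), ("CE20", 443), ("RS-25", 453)]:
        print(f"{name}: {exhaust_velocity(isp):.0f} m/s")
    # Prints roughly 4452, 4344 and 4442 m/s: hydrolox engines throw their exhaust
    # out at about 4.4 km/s. For a jet engine, the same definition counts only the
    # fuel's mass flow (the oxygen is 'free'), which is one reason the quoted
    # jet-engine figures run into the tens of thousands of seconds.
    ```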

    Jet engines typically have much higher specific impulses. For example, the GE GEnx and the Rolls Royce Trent 1000 engines, both built for the Boeing 787 Dreamliner, use aviation fuel and have specific impulses of 12,650 and 13,200 seconds, respectively.

    One reason for the higher specific impulse of rocket engines that work with propellants that have liquid hydrogen as the fuel is hydrogen’s high heating value – the amount of energy released when a fixed mass of it is combusted. The theoretically highest heating value of hydrocarbon fuels is 42 megajoule per kilogram (MJ/kg) in air. The heating value of dihydrogen, or H2, is 143 MJ/kg in air. This is the highest of all known fuels, for rockets or otherwise, and is also why hydrogen has been increasingly touted as “key” to the world’s energy transition.

    I wanted to see if I could break my writer’s block by writing something. This was it. Thanks for reading. 🙂

    Featured image credit: NASA.

  • Is mathematics real?

    I didn’t think to think about the realism of mathematics until I got to high school, and encountered quantum mechanics.

    Mathematics was at first just another subject, before becoming a tool with which to think intelligently about money and, later, with advanced statistical concepts in the picture, to understand the properties of groups of objects that couldn’t be deduced from those of individual ones. But by this time, mathematics – taken here to mean the systematic manipulation of numbers according to a fixed and rigid system of rules – seemed to be a world unto its own, separated cleanly from our physical reality akin to the way “a map is not the territory”.

    Put another and limited way, mathematics seemed to me to be a post facto system of rationalisation that people used to understand forces and outcomes whose physical forms weren’t available for direct observation (through one, some or all of the human senses). For example, (a + b)² = a² + b² + 2ab. To what does this translate in the real world? Perhaps I had 10 rupees in one pocket and 20 rupees in the other, and 29 other people turn up with the same combination of funds in their pockets. We could use this formula to quickly calculate the total amount of money there is in all of our pockets. But other than finding application of this sort, I didn’t think the formulae could have any other purpose – and that, certainly, knowing the formula wouldn’t allow us to predict anything new about the world (ergo post facto).

    I was constantly on the cusp of concluding mathematics was made up, a contrivance fashioned to fit our observations, and not real. But in high school, I came upon a form of mathematics-based reasoning that suggested I should think about it differently, if only for the sake of my own productivity. In class XI, my physics teacher at school introduced Wolfgang Pauli’s exclusion principle.

    The principle itself is simple, at least at the outset. Every particle has a fixed set of quantum numbers. An electron in an atom, for example, has four quantum numbers. Each quantum number can take a range of discrete values. A particular combination of the numbers is called a quantum state (i.e. the combination confers the particle with some possibilities and impossibilities). The principle is that no two particles in the same system can occupy the same quantum state.

    Now, it is Pauli’s principle – a logical relationship between various facts – that animates the idea, and not any mathematical rule or prescription. At the same time, the principle itself is arrived at by solving mathematical problems. Why do electrons in atoms have four quantum numbers? Because historically we started off with one, because we perceived the need for one, and over time we added a second, then a third and finally a fourth – all based on experiments in which the electrons behaved in certain ways. But because direct physical observation was out of the question, we invented mathematical relationships between the particles’ parameters in different contexts and ascribed meaning to them.

    It was still ‘only’ empirical: scientists tried different things and those that worked stuck. There may be another way to make sense of the particles’ behaviour with, say, five dim sum (🥟) numbers, and reorganise the rest of quantum mechanics to fit in this paradigm. Even then, only the mathematical features of the topic will have changed – the physical features, or more broadly the specific ways in which particles are real, will have not. But this view of mine changed when I read about experiments that proved Pauli’s principle was real. A mathematical system we set up eventually led to the creation of a fixed set (not more, not less) of quantum numbers, which Wolfgang Pauli then combined into a common principle. If scientists had proved that the principle was true and therefore real, could the mathematics undergirding the principle be true and real as well?

    Not all fundamental particles obey Pauli’s exclusion principle. The four quantum numbers of an electron in an atom are: principal (n), azimuthal (l), magnetic (ml) and spin (s). Of these, the spin quantum number is the one that sorts particles into two broad classes: it can take half-integer values (1/2, 3/2, …) or integer values (0, 1, 2, …). Particles with half-integer spin are called fermions, and the rules describing their behaviour are defined by Fermi-Dirac statistics. They obey Pauli’s exclusion principle. Particles with integer spin are called bosons, and the rules describing their behaviour are defined by Bose-Einstein statistics. They don’t obey Pauli’s exclusion principle.
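
    As an aside, the counting behind “no two electrons in the same quantum state” is easy to make concrete. Here’s a small sketch – using the textbook rules for the four quantum numbers, with the fourth written as the spin projection ±1/2 – that enumerates the states available in each shell:

    ```python
    # Enumerate the quantum states (n, l, m_l, m_s) available to an electron in an
    # atom, using the standard rules: l = 0 ... n-1, m_l = -l ... +l, m_s = +/- 1/2.
    # Pauli's exclusion principle says each combination can hold at most one
    # electron, which is why the n-th shell holds at most 2 * n^2 electrons.

    def quantum_states(n):
        states = []
        for l in range(n):                    # azimuthal quantum number
            for m_l in range(-l, l + 1):      # magnetic quantum number
                for m_s in (-0.5, +0.5):      # spin projection
                    states.append((n, l, m_l, m_s))
        return states

    for n in (1, 2, 3):
        print(n, len(quantum_states(n)))      # prints 2, 8, 18 -> 2 * n^2
    ```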

    When some kinds of heavy stars can no longer sustain fusion reactions in their cores, they collapse into a neutron star – an ultra-dense ball of neutrons. Neutrons are fermionic particles – they have half-integer spin – which means they obey Pauli’s exclusion principle, and can’t occupy common quantum states. So the neutrons in a neutron star are tightly packed against each other. Their combined mass generates gravity that tries to pull them even closer together – but at the same time Pauli’s exclusion principle forces them to stay apart and remain stuck in their existing quantum states, creating a counter-force called neutron degeneracy pressure.

    We wouldn’t have neutron stars, or electronic goods or even heavy elements in the periodic table, if Pauli’s exclusion principle didn’t exist.

    Most recently, three separate groups of scientists described a new physical manifestation of the principle, called Pauli blocking. Most atoms are fermions (as a whole); each group first created a gas of such atoms and cooled them to a very low temperature – to ensure that in each gaseous system, all of the lowest available quantum states were occupied. (The higher a particle’s quantum state, the more energy it has.)

    A group at JILA, in Colorado, used strontium-87 atoms. A group from the University of Otago, New Zealand, used potassium-40 atoms. And a group from MIT used lithium-6 atoms. (The last one includes Wolfgang Ketterle, whose work I have discussed before).

    Usually, when a photon collides with an atom, the photon is scattered off into a different direction while the atom absorbs some of the photon’s energy and recoils. The absorbed energy forces the atom into a higher quantum state, with a different combination of the quantum numbers than the one it had before the collision. In an ultra-cold fermionic gas in which the particles have occupied the lowest available quantum states, and are packed tightly together as if in a solid, there is no room for any atom to absorb a small amount of energy imparted by a photon because all of the ‘nearby’ quantum states are taken. So the atoms allow the photons to sail right through, and the gas appears to be transparent.

    This barrier, in the form of the atoms being ‘blocked’ from scattering the photons, is called Pauli blocking. And in the three experiments, its effects were directly observable, without their validity having to be mediated through the use of mathematics.

    My views in high school and through college being what they were, I don’t have any serious position on the matter of whether mathematics is real. In fact, my reasoning could have been flawed in ways that I’m yet to realise but which a philosopher who has seriously studied this question may consider trivial. (Update, December 10, 2024: More than three years later, I can think of one. Both the theoretical description of X and the experimental verification of X — where X is any phenomenon grounded in the exclusion principle, e.g. neutron degeneracy pressure, Pauli blocking, etc. — are founded on a mathematical description of a physical reality, i.e. neither activity/event directly accesses the physical condition of X but deals only with the way we’ve chosen to describe such activity/event mathematically, and thus it’s no surprise that the experimental verification of X holds up the mathematical description of X.)

    This said, having to work my way through different concepts in high-energy, astroparticle and condensed-matter physics (as a science communicator) has forced me to accept not anything about mathematics as much as the importance we place on the distinction between something being real versus non-real, and the consequences of that on what mathematics is and isn’t allowed to tell us about the real world. Ultimately, dwelling on the distinction and its consequences distracted from what I found to be the most worthwhile part of discovery: the discovery itself. Even this post was motivated by an article in Physics World about the three experiments, whose second paragraph (as the second paragraphs of most such articles tend to do) focused on potential, far-in-the-future applications of cold fermionic gases displaying Pauli blocking. I don’t care, and I think that from time to time, no one should.

  • The problem with rooting for science

    The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

    More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust scientists they’re talking to to make sense. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency, reflexivity, etc., so that what they produce I take only with the smallest pinch of salt, and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with the socially developed markers of reliability.

    I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith if only to amplify its anti-polarity with reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

    Sometimes, such faith is (mostly) harmless, such as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to grasp superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics to a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

    As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

    Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

    However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges from the assumption that we think we know something when in fact we’re in denial about how it is that we know that thing. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

    The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

    First, we trust scientists instead of presuming to know, or actually knowing, that we can vouch for their work. It would be vacuous to claim science is superior in any way to another enterprise that demands our faith when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, electrons superconducting in stacked graphene layers or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

    Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27%, it’s good enough to be evidence; if the chance of X being false is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in the line of Karl Popper’s philosophy) is that a result expected to be true, and subsequently found to be true, is true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
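
    Incidentally, those two percentages aren’t arbitrary: they’re the two-sided tail probabilities of a Gaussian at three and five standard deviations – the conventional ‘evidence’ and ‘discovery’ thresholds in particle physics. A quick check:

    ```python
    # Two-sided Gaussian tail probabilities at 3 and 5 standard deviations, the
    # particle-physics conventions for "evidence" and "discovery" respectively.
    from scipy.stats import norm

    for sigma in (3, 5):
        p = 2 * norm.sf(sigma)   # chance of a fluctuation at least this large
        print(f"{sigma} sigma: {100 * p:.5f}%")
    # 3 sigma: 0.26998%  (the 0.27% above)
    # 5 sigma: 0.00006%
    ```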

    (Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

    Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

    We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

    (Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with salt until independent scientists have successfully replicated them.)

    Later from the same paper:

    Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

    In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

    So rooting for science per se is not just not enough, it could be harmful vis-à-vis the public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They’re not dubious at first glance, not least because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either for insisting on a measure of certainty in the results that neither exists nor is achievable, or for making pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those supporting science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

    Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

    He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

    To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

    • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, self-explanatory also. Example: I’m not fluent with the physics of cryogenic engines but I’m aware that they’re desirable because liquefied hydrogen has the highest specific impulse of all rocket fuels.
    • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
    • Abstraction: 1. perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

    An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

    Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

    Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

    But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English (the language of modern science) and mathematics, beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

  • Boron nitride, tougher than it looks

    During World War I, a British aeronautical engineer named A.A. Griffith noticed something odd about glass. He found that the atomic bonds in glass needed 10,000 megapascals of stress to break apart – but a macroscopic mass of glass could be broken apart by a stress of 100 megapascals. Something about glass changed between the atomic level and the bulk, making it more brittle than its atomic properties suggested.

    Griffith attributed this difference to small imperfections in the bulk, like cracks and notches. He also realised the need for a new way to explain how brittle materials like glass fracture, since the atomic properties alone can’t explain it. He drew on thermodynamics to work out an equation based on two forms of energy: elastic energy and surface energy. The elastic energy is the energy stored in a material when it is deformed – like the potential energy of a stretched rubber band. The surface energy is the energy of molecules at the surface, which is always greater than that of molecules in the bulk. The greater the surface area of an object, the more surface energy it has.

    Griffith took a block of glass, subjected it to a tensile load (i.e. a load that stretches the material without breaking it) and then etched a small crack in it. He found that the introduction of this flaw reduced the material’s elastic energy but increased its surface energy. He also found that the free energy – the surface energy minus the elastic energy – increased up to a point as he increased the crack length, before falling back down as the crack grew longer still. A material fractures, i.e. breaks, once the applied stress and the crack length together push the system past this peak: beyond it, the crack can grow while lowering the free energy, so it keeps growing.
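
    Written out, Griffith’s balance leads to a simple expression for the fracture stress. Here’s a minimal sketch – the textbook plane-stress form for a through crack in a large plate, with illustrative material values I’ve assumed (they’re not from Griffith’s experiments):

    ```python
    # For a through crack of half-length a in a large plate under tensile stress
    # sigma, the elastic energy released grows as pi * sigma^2 * a^2 / E while the
    # surface energy grows as 4 * a * gamma_s (per unit thickness). Setting the
    # derivative of the free energy to zero gives the Griffith fracture stress:
    # sigma_f = sqrt(2 * E * gamma_s / (pi * a)).
    import math

    def griffith_fracture_stress(E, gamma_s, a):
        """Fracture stress (Pa), given Young's modulus E (Pa), surface energy
        gamma_s (J/m^2) and crack half-length a (m)."""
        return math.sqrt(2 * E * gamma_s / (math.pi * a))

    # Assumed, textbook-ish values for glass: E ~ 70 GPa, gamma_s ~ 1 J/m^2.
    for a in (1e-9, 1e-6, 1e-3):  # 1 nm, 1 micrometre, 1 mm
        sigma_f = griffith_fracture_stress(70e9, 1.0, a) / 1e6
        print(f"crack half-length {a:g} m -> fracture stress ~{sigma_f:.0f} MPa")
    # Roughly 6700 MPa, 210 MPa and 7 MPa: the longer the flaw, the weaker the glass.
    ```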

    Through experiments, engineers have also been able to calculate the fracture toughness of materials – a number essentially denoting the ability of a material to resist the propagation of surface cracks. Brittle materials usually have higher strength but lower fracture toughness. That is, they can withstand high loads without breaking or deforming, but when they do fail, they fail in catastrophic fashion. No half-measures.

    If a material’s fracture characteristics are in line with Griffith’s theory, it’s usually brittle. For example, glass has a strength of 7 megapascals (with a theoretical upper limit of 17,000 megapascals) – but a fracture toughness of only 0.6-0.8 megapascal square-root metres (MPa·√m).

    Graphene is a 2D material, composed of a sheet of carbon atoms arranged in a hexagonal pattern. And as with glass, there is a gulf between its strength – 130,000 megapascals – and its fracture toughness – 4 MPa·√m – the difference arising similarly from small flaws in the bulk material. Many people have posited graphene as a material of the future for its wondrous properties. Recently, scientists have been excited about the weird behaviour of electrons in graphene and the so-called ‘magic angle’. However, the fact that it is brittle automatically limits graphene’s applications to environments in which material failure can’t be catastrophic.
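
    One way to see how unforgiving this is (a back-of-the-envelope sketch using the numbers quoted above and the standard relation K_IC = σ√(πa) for a crack of half-length a):

    ```python
    # Critical crack half-length at which a material held at a given stress
    # fractures, from a_c = (K_IC / sigma)^2 / pi.
    import math

    def critical_flaw_size(K_IC, stress):
        """Critical crack half-length (m); K_IC in MPa*sqrt(m), stress in MPa."""
        return (K_IC / stress) ** 2 / math.pi

    print(critical_flaw_size(0.7, 17000))   # glass near its theoretical strength: ~5e-10 m
    print(critical_flaw_size(4, 130000))    # graphene at its quoted strength: ~3e-10 m
    # In both cases, flaws barely an atom or two across are enough to undercut the
    # ideal strength -- which is the sense in which these materials are brittle.
    ```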

    Another up-and-coming material is hexagonal boron nitride (h-BN). As its name indicates, h-BN is a grid of boron and nitrogen atoms arranged in a hexagonal pattern. (Boron nitride has two other forms: sphalerite and wurtzite.) h-BN is already used as a lubricant because it is very soft. It can also withstand high temperatures before losing its structural integrity, making it useful in applications related to spaceflight. However, since monolayer h-BN’s atomic structure is similar to that of graphene, it was likely to be brittle as well – with small flaws in the bulk material compromising the strength arising from its atomic bonds.

    But a new study, published on June 2, has found that h-BN is not brittle. Scientists from China, Singapore and the US have reported that cracks in “single-crystal monolayer h-BN” don’t propagate according to Griffith’s theory, but that they do so in a more stable way, making the material tougher.

    Even though h-BN is sometimes called ‘white graphene’, many of its properties are different. Aside from being able to withstand up to 300º C more in air before oxidising, h-BN is an insulator (graphene is a semiconductor) and is more chemically inert. In 2017, scientists from Australia, China, Japan, South Korea, the UK and the US also reported that while graphene’s strength dropped by 30% as the number of stacked layers was increased from one to eight, that of h-BN was pretty much constant. This suggested, the scientists wrote, “that BN nanosheets are one of the strongest insulating materials, and more importantly, the strong interlayer interaction in BN nanosheets, along with their thermal stability, make them ideal for mechanical reinforcement applications.”

    The new study further cements this reputation, and in fact lends itself to the conclusion that h-BN is one of the thermally, chemically and mechanically toughest insulators that we know.

    Here, the scientists found that when a crack is introduced in monolayer h-BN, the resulting release of energy is dissipated more effectively than is observed in graphene. And as the crack grows, they found that unlike in graphene, it gets deflected instead of proceeding along a straight path, and also sprouts branches. This way, monolayer h-BN redistributes the elastic energy released in a way that allows the crack length to increase without fracturing the material (i.e. without causing catastrophic failure).

    According to their paper, this behaviour is the result of h-BN being composed of two different types of atoms, of boron and nitrogen, whereas graphene is composed solely of carbon atoms. As a result, when a bond between boron and nitrogen breaks, two types of crack-edges are formed: those with boron at the edge (B-edge) and those with nitrogen at the edge (N-edge). The scientists write that based on their calculations, “the amplitude of edge stress [along N-edges] is more than twice of that [along B-edges]”. Every time a crack branches or is deflected, the direction in which it propagates is determined according to the relative position of B-edges and N-edges around the crack tip. And as the crack propagates, the asymmetric stress along these two edges causes the crack to turn and branch at different times.

    The scientists summarise this in their paper: h-BN dissipates more energy by introducing “more local damage” – as opposed to global damage, i.e. fracturing – “which in turn induces a toughening effect”. “If the crack is branched, that means it is turning,” Jun Lou, one of the paper’s authors and a materials scientist at Rice University, Texas, told Nanowerk. “If you have this turning crack, it basically costs additional energy to drive the crack further. So you’ve effectively toughened your material by making it much harder for the crack to propagate.” The paper continues:

    [These two mechanisms] contribute significantly to the one-order of magnitude increase in effective energy release rate compared with its Griffith’s energy release rate. This finding that the asymmetric characteristic of 2D lattice structures can intrinsically generate more energy dissipation through repeated crack deflection and branching, demonstrates a very important new toughening mechanism for brittle materials at the 2D limit.

    To quote from Physics World:

    The discovery that h-BN is also surprisingly tough means that it could be used to add tear resistance to flexible electronics, which Lou observes is one of the niche application areas for 2D-based materials. For flexible devices, he explains, the material needs to [be] mechanically robust before you can bend it around something. “That h-BN is so fracture-resistant is great news for the 2D electronics community,” he adds.

    The team’s findings may also point to a new way of fabricating tough mechanical metamaterials through engineered structural asymmetry. “Under extreme loading, fracture may be inevitable, but its catastrophic effects can be mitigated through structural design,” [Huajian Gao, also at Rice University and another member of the study], says.

    Featured image: A representation of hexagonal boron nitride. Credit: Benjah-bmm27/Wikimedia Commons, public domain.

  • On tabletop accelerators

    Tabletop accelerators are an exciting new field of research in which physicists use devices the size of a shoe box, or something just a bit bigger, to accelerate electrons to high energies. The ‘conventional’ way to do this has been to use machines that are as big as small buildings, and often bigger. The world’s biggest machine, the Large Hadron Collider (LHC), uses thousands of magnets, copious amounts of electric current, sophisticated control systems and kilometres of beam pipes to accelerate protons from their rest energy of about 0.001 TeV to 7 TeV. Tabletop accelerators can’t push electrons to such high energies, required to probe exotic quantum phenomena, but they can attain energies that are useful in medical applications (including scanners and radiation therapy).

    They do this by skipping the methods that ‘conventional’ accelerators use, and instead take advantage of decades of progress in theoretical physics, computer simulations and fabrication. For example, some years ago, a group at Stanford University developed an accelerator that could sit on your fingertip. It consisted of narrow channels etched on glass, with a tuned infrared laser shone over these ‘mountains’ and ‘valleys’. When an electron passed over a mountain, it would get pushed more than it would be slowed down over a valley. This way, the group reported an acceleration gradient – the amount of acceleration per unit distance – of 300 MV/m. This means the electrons gain 300 MeV of energy for every metre travelled – comparable to some of the best, but gigantic, electron accelerators.
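
    A quick bit of arithmetic shows why the gradient is the number everyone cares about – this is just energy gain divided by gradient, ignoring injection, staging and everything else that makes real machines big:

    ```python
    # Length of accelerating structure needed for a given energy gain, if the
    # gradient were the only constraint.

    def length_needed(energy_gain_MeV, gradient_MV_per_m):
        return energy_gain_MeV / gradient_MV_per_m

    print(length_needed(300, 300))        # 1 m to gain 300 MeV at 300 MV/m
    print(length_needed(100_000, 300))    # ~333 m to reach 0.1 TeV at that gradient
    print(length_needed(100_000, 1000))   # ~100 m at a GeV/m-class plasma gradient
    ```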

    Another type of tabletop accelerator uses a clump of electrons or a laser fired into a plasma, setting off a ripple of energy that trailing electrons, from the plasma, can ‘ride’ and be accelerated on. (This is a grossly simplified version; a longer explanation is available here.) In 2016, physicists in California proved that it would be possible to join two such accelerators end to end and accelerate the electrons further – although not twice as much, since there is a cost associated with the plasma’s properties.

    The biggest hurdle between tabletop accelerators and the market is also something that makes the label of ‘tabletop’ meaningless. Today, just the part of the device where the electrons accelerate can fit on a tabletop. The rest of the machine is still too big. For example, the team behind the 2016 study realised that they’d need enough of their shoebox-sized devices to span 100 m in order to accelerate electrons to 0.1 TeV. In early 2020, the Stanford group improved their fingertip-sized accelerator to make it more robust and scalable – but in the process the device’s acceleration gradient dropped 10x and it came to require pre-accelerated electrons to work. The machines required for the latter are as big as rooms.

    More recently, Physics World published an article on July 12 headlined ‘Table-top laser delivers intense extreme-ultraviolet light’. In the fifth paragraph, however, we find that this table needs to be around 2 m long. Is this an acceptable size for a table? I don’t want to discriminate against bigger tables, but I thought ‘tabletop accelerator’ meant something like my study table (pictured above). This new device’s performance reportedly “exceeds the performance of existing, far bulkier XUV sources”, “simulations done by the team suggest that further improvements could boost [its output] intensity by a factor of 1000,” and it shrinks something that used to be 10 m wide to a fifth of its size. These are all good things, but if by ‘tabletop’ we’re to include banquet-hall tables as well, the future is already here.

  • Scicommers as knowledge producers

    Reading the latest edition of Raghavendra Gadagkar’s column in The Wire Science, ‘More Fun Than Fun’, about how scientists should become communicators and communicators should be treated as knowledge-producers, I began wondering if the knowledge produced by the latter is in fact not the same knowledge but something entirely new. The idea that communicators simply make the scientists’ Promethean fire more palatable to a wider audience has led, among other things, to a belief widespread among scientists that science communicators are adjacent to science and aren’t part of the enterprise producing ‘scientific knowledge’ itself. And this perceived adjacency often belittles communicators by trivialising the work that they do and hiding the knowledge that only they produce.

    Explanatory writing that “enters into the mental world of uninitiated readers and helps them understand complex scientific concepts”, to use Gadagkar’s words, takes copious and focused work. (And if it doesn’t result in papers, citations and h-indices, just as well: no one should become trapped in bibliometrics the way so many scientists have.) In fact, describing the work of communicators in this way dismisses a specific kind of proof of work that is present in the final product – in much the same way scientists’ proofs of work are implicit in new solutions to old problems, development of new technologies, etc. The knowledge that people writing about science for a wider audience produce is, in my view, entirely distinct, even if the nature of the task at hand is explanatory.

    In his article, Gadagkar writes:

    Science writers should do more than just reporting, more than translating the gibberish of scientists into English or whatever language they may choose to write in. … Science writers are in a much better position to make lateral comparisons, understand the process of science, and detect possible biases and conflicts of interest, something that scientists, being insiders, cannot do very well. So rather than just expect them to clean up our messy prose, we should elevate science writers to the role of knowledge producers.

    My point is about knowledge arising from a more limited enterprise – i.e. explanation – but which I think can be generalised to all of journalism as well (and to other expository enterprises). And in making this point, I hope my two-pronged deviation from Gadagkar’s view is clear. First, science journalists should be treated as knowledge producers, but not in the limited confines of the scientific enterprise and certainly not just to expose biases; instead, communicators as knowledge producers exist in a wider arena – that of society, including its messy traditions and politics, itself. Here, knowledge is composed of much more than scientific facts. Second, science journalists are already knowledge producers, even when they’re ‘just’ “translating the gibberish of scientists”.

    Specifically, the knowledge that science journalists produce differs from the knowledge that scientists produce in at least two ways: it is accessible and it makes knowledge socially relevant. What scientists find is not what people know. Society broadly synthesises knowledge from information that it weights together with extra-scientific considerations, including biases like “which university is the scientist affiliated with” and concerns like “will the finding affect my quality of life”. Journalists are influential synthesisers who work with or around these and other psychosocial stressors to contextualise scientific findings, and thus science itself. Even when they write drab stories about obscure phenomena, they make an important choice: “this is what the reader gets to read, instead of something else”.

    These properties taken together encompass the journalist’s proof of work, which is knowledge accessible to a much larger audience. The scientific enterprise is not designed to produce this particular knowledge. Scientists may find that “leaves use chlorophyll to photosynthesise sunlight”; a skilled communicator will find that more people know this, know why it matters and know how they can put such knowledge to use, thus fostering a more empowered society. And the latter is entirely new knowledge – akin to an emergent object that is greater than the sum of its scientific bits.

  • Looking for ghost particles in a frustrated world

    In some of the many objects and events involving electrons, it is helpful to think of each electron as being made up of three smaller particles, called spinons, holons and orbitons. Physicists call such notional particles quasiparticles. By assuming they exist, we can simplify our calculations of the electrons’ behaviour in these environments. Another example of a quasiparticle is the phonon – a carrier of sound energy in solid materials.

    One such object, and an exotic one at that, is a spin liquid. These are actually solid materials that are magnets, but are incapable of aligning the spins of their constituent electrons in one consistent way. In conventional ferromagnets, the electrons’ spins all line up in the same direction; in antiferromagnets, they settle into an alternating pattern. But in spin liquids, even in the presence of a magnetic field, the alignment of electron spins constantly changes in a dynamic pattern. Such materials are said to be frustrated: the spins have reasons to align, but competing interactions prevent any single arrangement from satisfying all of them. (The textbook picture is three spins on a triangle, each ‘wanting’ to point opposite to its neighbours – once two of them are anti-aligned, the third cannot be opposite to both.)

    Think of ripples in a closed tank of water bouncing between the walls: the height of the waves would be analogous to the extent to which the electrons’ spins are aligned. See this short 2017 video by the CENN Nanocenter, Slovenia, for a visual description.
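
    To make ‘frustration’ concrete, here is a minimal sketch – my own illustration, not something from the researchers – that enumerates the eight possible arrangements of three up-or-down spins on a triangle whose every bond prefers its two spins to be anti-aligned. No arrangement satisfies all three bonds at once, and six arrangements tie for the lowest energy, which is the seed of the restless, degenerate behaviour described above.

        # Toy model of geometric frustration: three Ising spins (+1 or -1) on a
        # triangle with antiferromagnetic coupling J > 0, so each bond "wants"
        # its two spins to point in opposite directions.
        from itertools import product

        J = 1.0  # antiferromagnetic coupling strength (arbitrary units)

        def energy(s1, s2, s3):
            # Each satisfied (anti-aligned) bond contributes -J to the energy;
            # each frustrated (aligned) bond contributes +J.
            return J * (s1 * s2 + s2 * s3 + s3 * s1)

        configs = list(product([+1, -1], repeat=3))
        energies = {c: energy(*c) for c in configs}
        lowest = min(energies.values())

        print("Lowest energy:", lowest)  # -1.0, not the -3.0 an unfrustrated system would reach
        print("Ground states:", [c for c, e in energies.items() if e == lowest])  # six of the eight

    The point is not the numbers but the structure: no single configuration keeps every bond happy, so the system is left with many equally good arrangements to wander between.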

    When studying spin liquids, scientists have found it useful to assume that each electron is made of a spinon and a holon. The spinon carries the electron’s spin and the holon carries its charge. (The orbiton is there but not involved.) Physicists have demonstrated the need for such quasiparticles through experiments in which electrons were subjected to extreme physical conditions. In 2009, researchers set up an experiment in which electrons would jump from the surface of a metal to a very narrow wire, inside a chamber held only a few fractions of a degree above absolute zero. When they jumped, the particles suddenly found themselves with much less room to move around without getting too close to other electrons (like charges repel). As a result, the electrons became more distended, in a manner of speaking, as their spinons and holons moved apart to adapt to their surroundings. Such spin-charge separation is rare but has been documented. (See also a similar result reported in 2006.)

    Now, in a new study (preprint here), physicists have reported yet more evidence, of a different kind, that the spinon-holon model is both legitimate and useful.

    Physicists from Princeton University, New Jersey, created a spin liquid in a crystal of ruthenium chloride. This is not simple: the crystal, first made ultra-pure, had to be maintained at 0.5 K (-272.65° C) inside a magnetic field of 7.3–11 tesla (more than a hundred thousand times as strong as Earth’s magnetic field at the surface) – the environment in which a stable spin liquid arises in this material. Next, they applied a small amount of heat along “one edge” of the crystal, and began recording its thermal conductivity – its ability to conduct heat.
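
    (A quick back-of-the-envelope check on that comparison, assuming the commonly cited range of roughly 25–65 microtesla for Earth’s field at the surface:

        7.3 T ÷ 65 µT ≈ 1.1 × 10^5        7.3 T ÷ 25 µT ≈ 2.9 × 10^5

    so even the weaker end of the experiment’s field range is over a hundred thousand times stronger than Earth’s.)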

    When heat flows through certain materials in the presence of a magnetic field, a temperature gradient – and with it a flow of heat – emerges in the direction perpendicular to both the original heat flow and the field. This is called the thermal Hall effect, and the material’s ability to conduct this transverse heat is its thermal Hall conductivity (symbol κ, the lowercase Greek kappa).
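
    In symbols – this is the standard textbook definition, not anything specific to the new experiment – the heat current density j^Q and the temperature gradient are related through a conductivity tensor:

        j^Q_i = -\sum_j \kappa_{ij} \, \partial_j T

    The diagonal components (κ_xx, κ_yy) describe ordinary conduction along the temperature gradient, while the off-diagonal component κ_xy is the thermal Hall conductivity, which is non-zero only when something – here, the magnetic field – picks out a preferred ‘sideways’ direction for the heat carriers.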

    According to a previously published theory, the presence of spinons in the material should show up as an oscillating pattern on a graph showing κ versus the magnetic field.

    This pattern is an analogue of the Shubnikov–de Haas effect: the electrical resistivity of a metal, a semimetal or certain semiconductors oscillates as the magnetic field is varied, provided the material is at a very low temperature and the field is intense. (However, the mechanism at work in those materials and in spin liquids is different.)
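
    For reference, the textbook form of such quantum oscillations – standard background, not a result from the new paper – has the resistivity wiggling periodically in the inverse of the field:

        \Delta\rho \propto \cos\left( \frac{2\pi F}{B} + \phi \right), \qquad F = \frac{\hbar A_F}{2\pi e}

    Here A_F is the extremal cross-sectional area of the metal’s Fermi surface and F the oscillation frequency, so the peaks repeat at equal intervals of 1/B rather than of B. A spin liquid has no conventional Fermi surface of mobile electrons to supply A_F – hence the note above that the mechanism must be different.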

    The physicists observed that in the ruthenium chloride crystal, the value of κ measured along one direction oscillated as long as the magnetic field stayed between 7.3 and 11 tesla – a signature they attribute to spinons associated with the spin liquid state. They also observed that the period of oscillation – the interval of field over which the pattern on the κ-versus-field graph repeats – varied in proportion to the inverse of the applied magnetic field. That is, if the magnetic field was weakened by some amount, the period would increase by a proportionate amount. This was an anomalous pattern; the researchers called it a “paradox” in their paper.

    Does this mean spinons are real?

    There’s a two-part answer to this question, and neither part arises from the new paper; both come from what we already know about quasiparticles, and about particles in general. But in the end, yes, they could be real.

    The first part is that instead of pondering the existence of quasiparticles, it may be more useful for us to discard the importance we accord to fundamental particles. We were taught in school that fundamental particles are indivisible. But what we take to be fundamental depends on the energy scale at which we probe these particles. Consider a closed tank of water that you keep heating. First, the liquid will vaporise, and at some point the water molecules in the vapour will break apart into atoms. Next, the atoms themselves will disintegrate into their constituent particles. If you kept heating the tank (while preserving its structural integrity) for long enough, at some point, with sophisticated instruments, you may be able to observe the protons and neutrons come apart into quarks and gluons.
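
    To put very rough numbers on that ladder – my ballpark figures for a sense of scale, not figures from the article – water boils at about 370 K; its molecules break apart and then ionise into a plasma somewhere between thousands and tens of thousands of kelvin; atomic nuclei disintegrate into protons and neutrons at billions of kelvin; and protons and neutrons themselves ‘melt’ into quarks and gluons only at around two trillion kelvin.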

    For many decades, we thought protons and neutrons were fundamental particles – until we developed methods to observe their behaviour at higher and higher energies. And at one point, using ultra-sophisticated machines like the Large Hadron Collider, we discovered the state of matter called a quark-gluon plasma. As physicist Vijay Shenoy of the Indian Institute of Science, Bengaluru, told me in 2017:

    Something may look fundamental to us at scales of energies that are accessible to us – but if we probe at higher energy scales, we may see that it is also made up of other even more fundamental things (neutrons/protons are really quarks held together by gluons). We will then say that the original ‘fundamental particle’ is a quasiparticle excitation of the system of ‘even more fundamental things’! You could actually ask where this will end, at what energy scales… We really do not know the answer to this question. This is why the concept of a ‘fundamental particle’ is not a very useful concept in physics.

    Second: physicists studying particles use quantum field theory (QFT) to make sense of the particles’ properties and behaviour. And in QFT, what we know to be ‘particles’ are really excitations – clumps of energy – of an underlying field. For example, electrons are excitations of the electron field; photons are excitations of the electromagnetic field; the hypothetical gravitons would be excitations of the gravitational field; and so on. In Shenoy’s words (emphasis in the original):

    An excitation is called a particle if, for a given momentum of the excitation, there is a well-defined energy. Quite remarkably, this definition of a particle embodies what we conventionally think of as a particle: small hard things that move about. … A ‘quasiparticle’ excitation is one that is very nearly a particle-like excitation: for the given momentum, it is a small spread of energy about some average value. The manifestation is such that, for practical purposes, if you watch this excitation over longer durations, it will behave like a particle in an experiment.
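
    One standard way to formalise this distinction – my gloss on the textbook picture, not Shenoy’s own notation – is through the spectral function A(p, ω), which records how an excitation’s energy is spread out at a given momentum p. A true particle shows up as an infinitely sharp peak, a quasiparticle as a narrow one:

        A(p, \omega) = \delta(\omega - E(p))                                   [particle]
        A(p, \omega) \approx \frac{\Gamma/\pi}{(\omega - E(p))^2 + \Gamma^2}   [quasiparticle]

    The width Γ is roughly the inverse of the excitation’s lifetime (in natural units): the smaller it is, the longer the excitation holds together and the more convincingly it behaves like a small hard thing that moves about.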

    Taking both parts together, it seems that instead of asking which particles are ‘fundamental’ and which are ‘imaginary’, physicists have found it more fruitful to focus on the fields that give rise to all these excitations in the first place.