Science

  • The scientist as inadvertent loser

    Twice this week, I’d had occasion to write about how science is an immutably human enterprise and therefore some of its loftier ideals are aspirational at best, and about how transparency is one of the chief USPs of preprint repositories and post-publication peer-review. As if on cue, I stumbled upon a strange case of extreme scientific malpractice that bore out both points of view.

    In an article published January 30, three editors of the Journal of Theoretical Biology (JTB) reported that one of their handling editors had engaged in the following acts:

    1. “At the first stage of the submission process, the Handling Editor on multiple occasions handled papers for which there was a potential conflict of interest. This conflict consisted of the Handling Editor handling papers of close colleagues at the Handling Editor’s own institute, which is contrary to journal policies.”
    2. “At the second stage of the submission process when reviewers are chosen, the Handling Editor on multiple occasions selected reviewers who, through our investigation, we discovered was the Handling Editor working under a pseudonym…”
    3. Many forms of reviewer coercion
    4. “In many cases, the Handling Editor was added as a co-author at the final stage of the review process, which again is contrary to journal policies.”

    On the back of these acts of manipulation, this individual – whom the editors chose not to name for unknown reasons, but whom one of them all but identified on Twitter as Kuo-Chen Chou (an identification backed up by an independent user) – proudly trumpets the following ‘achievement’ on his website:

    The same webpage also declares that Chou “has published over 730 peer-reviewed scientific papers” and that “his papers have been cited more than 71,041 times”.

    Without transparency[a] and without the right incentives, the scientific process – which I use loosely to denote all activities and decisions associated with synthesising, assimilating and organising scientific knowledge – becomes just as conducive to misconduct and unscrupulousness as any other enterprise if only because it allows people with even a little more power to exploit others’ relative powerlessness.

    [a] Ironically, the JTB article lies behind a paywall.

    In fact, Chou had also been found guilty of similar practices when working with a different journal, called Bioinformatics, and an article its editors published last year has been cited prominently in the article by JTB’s editors.

    Even if the JTB and Bioinformatics cases seem exceptional because their editors failed to weed out gross misconduct shortly after its first occurrence – they’re not exceptional; there are many such cases, though they are still likely to be in the minority (an assumption on my part) – a completely transparent review process eliminates such possibilities and, more importantly, naturally renders the process trustless[b]. That is, you shouldn’t have to trust a reviewer to do right by your paper; the system itself should be designed such that there is no opportunity for a reviewer to do wrong.

    [b] As in trustlessness, not untrustworthiness.

    Second, it seems Chou accrued over 71,000 citations because the number of citations has become a proxy for research excellence irrespective of whether the underlying research is actually excellent – a product of the unavoidable growth of a system in which evaluators replaced a complex combination of factors with a single number. As a result, Chou and others like him have been able to ‘hack’ the system, so to speak, and distort the scientific literature (which you might’ve seen as the stack of journals in a library representing troves of scientific knowledge).

    But as long as the science is fine, no harm done, right? Wrong.

    If you visualised the various authors of research papers as points and the citations connecting them to each other as lines, an inordinate number of lines would converge on the point representing Chou – and they would be wrong, drawn there not by Chou’s prowess as a scientist but by his abilities as a credit-thief and extortionist.
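    For those who’d like to make the picture concrete, here is a minimal sketch of the kind of graph described above, in Python using the networkx library; the author names and citation links are entirely invented for illustration.

    ```python
    # A minimal sketch of the citation graph described above: authors as points
    # (nodes), citations as lines (directed edges). All names and edges here
    # are made up for illustration; only the structure matters.
    import networkx as nx

    G = nx.DiGraph()
    # Each tuple (A, B) means "A cites B".
    citations = [
        ("author_1", "chou"), ("author_2", "chou"), ("author_3", "chou"),
        ("author_4", "chou"), ("author_2", "author_1"), ("author_5", "author_3"),
    ]
    G.add_edges_from(citations)

    # In-degree is the number of incoming citations; an evaluator who relies
    # on citation counts sees only this number, not how the edges came to be.
    for author, count in sorted(G.in_degree(), key=lambda x: -x[1]):
        print(author, count)
    ```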

    This graphing exercise isn’t simply a form of visual communication. Imagine your life as a scientist as a series of opportunities, where each opportunity is contested by multiple people and the people in charge of deciding who ‘wins’ at each stage aren’t necessarily well-trained, well-compensated or well-supported. If X ‘loses’ at one of the early stages and Y ‘wins’, Y has a commensurately greater chance of winning a subsequent contest, and X a lower one. Such contests often determine the level of funding, access to suitable guidance and even networking possibilities, so over multiple rounds – by virtue of the evaluators at each step having more reasons to be impressed by Y’s CV because, say, it listed more citations, and fewer reasons to be impressed with X’s – X ends up with more reasons to exit science and switch careers.

    Additionally, because of the resources that Y has received opportunities to amass, they’re in a better position to conduct even more research, ascend to even more influential positions and – if they’re so inclined – accrue even more citations through means both straightforward and dubious. To me, such prejudicial biasing resembles the evolution of a Lorenz attractor: the initial conditions might appear to be the same to some approximation, but for a single trivial choice, one scientist ends up being disproportionately more successful than another.
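    To make the Lorenz analogy concrete, here is a minimal sketch, assuming Python with numpy and scipy, that integrates the standard Lorenz system from two starting points differing by one part in a hundred million and prints how far apart the trajectories drift.

    ```python
    # Minimal sketch: two Lorenz-system trajectories that start a hair apart
    # diverge dramatically - the 'single trivial choice' with disproportionate
    # outcomes.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_eval = np.linspace(0, 40, 4001)
    a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-9, atol=1e-9)
    b = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0 + 1e-8],
                  t_eval=t_eval, rtol=1e-9, atol=1e-9)

    # Euclidean distance between the two trajectories at each sampled time.
    separation = np.linalg.norm(a.y - b.y, axis=0)
    for t in (0, 10, 20, 30, 40):
        i = np.searchsorted(t_eval, t)
        print(f"t = {t:>2}: separation = {separation[i]:.3e}")
    ```

    Within a few dozen time units the separation grows from one part in a hundred million to the full width of the attractor, which is the point of the analogy.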

    What can be done? Many things, of course, including finding better ways to evaluate and reward research – and two of them in turn have to be to eliminate the use of single numbers to denote human abilities and to make the journey of a manuscript from the lab to the wild as free as possible of opaque, and therefore potentially arbitrary, decision-making.

    Featured image: A still from an animation showing the divergence of nearby trajectories on a Lorenz system. Caption and credit: MicoFilós/Wikimedia Commons, CC BY-SA 3.0.

  • The chrysalis that isn’t there

    I wrote the following post while listening to this track. Perhaps you will enjoy reading it to the same sounds. Otherwise, please consider it a whimsical recommendation. 🙂

    I should really start keeping a log of different stories in the news all of which point to the little-acknowledged but all-too-evident fact that science – like so many things, including people – does not embody lofty ideals as much as the aspirations to those ideals. Nature News reported on January 31 that “a language analysis of titles and abstracts in more than 100,000 scientific articles,” published in the British Medical Journal (BMJ), had “found that papers with first and last authors who were both women were about 12% less likely than male-authored papers to include sensationalistic terms such as ‘unprecedented’, ‘novel’, ‘excellent’ or ‘remarkable’;” further, “The articles in each comparison were presumably of similar quality, but those with positive words in the title or abstract garnered 9% more citations overall.” The scientific literature, people!

    Science is only as good as its exponents, and there is neither meaning nor advantage to assuming that there is such a thing as a science beyond, outside of and without these people. Doing so inflates science’s importance beyond what it deserves, and suppresses its shortcomings and prevents them from being addressed. For example, the BMJ study prima facie points to gender discrimination but it also describes a scientific literature that you will never find out is skewed, and therefore unrepresentative of reality, unless you acknowledge that it is constituted by papers authored by people of two genders, on a planet where one gender has maintained a social hegemony for millennia – much like you will never know Earth has an axis of rotation unless you are able to see its continents or make sense of its weather.

    The scientific method describes a popular way to design experiments whose performance scientists can use to elucidate refined, and refinable, answers to increasingly complex questions. However, the method is an external object (of human construction) that only, and arguably asymptotically, mediates the relationship between the question and the answer. Everything that comes before the question and after the answer is mediated by a human consciousness undeniably shaped by social, cultural, economic and mental forces.

    Even the industry that we associate with modern science – composed of people who trained to be scientists over at least 15 years of education, then went on to instruct and/or study in research institutes, universities and laboratories, being required to teach a fixed number of classes, publish a minimum number of papers and accrue citations, and/or produce X graduate students, while drafting proposals and applying for grants, participating in workshops and conferences, editing journals, possibly administering scientific work and consulting on policy – is steeped in human needs and aspirations, and is even designed to make room for them, but many of us non-scientists are frequently and successfully tempted to regard the act of becoming a scientist as an act of transformation: characterised by an instant in time when a person changes into something else, a higher creature of sorts, as if a larva entered a magical chrysalis and exited a butterfly.

    But for a man to become a scientist has never meant the shedding of his identity or social stature; ultimately, to become a scientist is to terminate at some quasi-arbitrary moment the slow inculcation of well-founded knowledge crafted to serve a profitable industry. There is a science we know as simply the moment of discovery: it is the less problematic of the two kinds. The other, in the 21st century, is also funding, networking, negotiating, lobbying, travelling, fighting, communicating, introspecting and, inescapably, some suffering. Otherwise, scientific knowledge – one of the ultimate products of the modern scientific enterprise – wouldn’t be as well-organised, accessible and uplifting as it is today.

    But it would be silly to think that in the process of constructing this world-machine of sorts, we baked in the best of us, locked out the worst of us, and threw the key away. Instead, like all human endeavour, science evolves with us. While it may from time to time present opportunities to realise one or two ideals, it remains for the most part a deep and truthful reflection of ourselves. This assertion isn’t morally polarised, however; as they say, it is what it is – and this is precisely why we must acknowledge failures in the practice of science instead of sweeping them under the rug.

    One male scientist choosing more uninhibitedly to call his observation “unprecedented” than a female scientist might have been encouraged, among other things, by the peculiarities of a gendered scientific labour force and scientific enterprise, but many male scientists indulging just as freely in their evaluatory fantasies, such as they are, indicates a systemic corruption that transcends (but does not escape) science. The same goes, to take another recent example, for the view that science is self-correcting. It is not, because people are not, and they need to be pushed to be. In March 2019, for example, researchers uncovered at least 58 papers published in a six-week period whose authors had switched their desired outcomes between the start and end of their respective experiments to report positive, and to avoid reporting negative, results. When the researchers wrote to the authors as well as the editors of the journals that had published the problem papers, most of them denied there was an issue and refused to accept modifications.

    Again, the scientific literature, people!

  • A science for the non-1%

    David Michaels, an epidemiologist and a former US assistant secretary of labour for occupational safety and health under Barack Obama, writes in the Boston Review:

    [Product defence] operations have on their payrolls—or can bring in on a moment’s notice—toxicologists, epidemiologists, biostatisticians, risk assessors, and any other professionally trained, media-savvy experts deemed necessary (economists too, especially for inflating the costs and deflating the benefits of proposed regulation, as well as for antitrust issues). Much of their work involves production of scientific materials that purport to show that a product a corporation makes or uses or even discharges as air or water pollution is just not very dangerous. These useful “experts” produce impressive-looking reports and publish the results of their studies in peer-reviewed scientific journals (reviewed, of course, by peers of the hired guns writing the articles). Simply put, the product defence machine cooks the books, and if the first recipe doesn’t pan out with the desired results, they commission a new effort and try again.

    Members of the corporate class have played an instrumental role in undermining trust in science in the last century, and Michaels’s exposition provides an insightful glimpse of how they work, and why what they do works. However, the narrative Michaels employs, as illustrated above, treats scientists like minions – a group of people that will follow your instructions but will not endeavour to question how their research is going to be used as long as, presumably, their own goals are met – and also excuses them for it. This is silly: the corporate class couldn’t have done what it did without help from a sliver of the scientific class that sold its expertise to the highest bidder.

    Even if such actions may have been more the result of incompetence than of malice, for too long have scientists claimed vincible ignorance in their quasi-traditional tendency to prize unattached scientific progress more than scientific progress in step with societal aspirations. They need to step up, step out and participate in political programmes that deploy scientific knowledge to solve messy real-world problems, which frequently fail and just as frequently serve misguided ends (such as – but sure as hell not limited to – laundering the soiled reputation of a pedophile and convicted sex offender).

    But even so, even as the scientists’ conduct typifies the problem, the buck stops with the framework of incentives that guides them.

    Despite its connections with technologies that powered colonialism and war, science has somehow accrued a reputation of being clean. To want to be a scientist today is to want to make sense of the natural universe – an aspiration both simple and respectable – and to make a break from the piddling problems of here and now to the more spiritually refined omnipresent and eternal. However, this image can’t afford to maintain itself by taking the deeply human world it is embedded in for granted.

    Science has become the reason for state simply because the state is busy keeping science and politics separate. No academic programme in the world today considers scientific research to be on par with public engagement and political participation[a] when exactly this is necessary to establish science as an exercise through which, fundamentally, people construct knowledge about the world and then ensure it is used responsibly (as well as to demote it from the lofty pedestal where it currently lords over the social sciences and humanities). Instead, we have a system that encourages only the production of knowledge, tying it up with metrics of professional success, career advancement and, most importantly, a culture of higher education[b] and research that won’t brook dissent and tolerates activist-scientists as lesser creatures.

    [a] And it is to the government’s credit that political participation has become synonymous with electoral politics and the public expression of allegiance to political ideologies.

    [b] Indeed, the problem most commonly manifests as a jaundiced impression of the purpose of teaching.

    The perpetuators of this structure are responsible for the formation and subsequent profitability of “the strategy of manufacturing doubt”, which Michaels writes “has worked wonders … as a public relations tool in the current debate over the use of scientific evidence in public policy. … [The] main motivation all along has been only to sow confusion and buy time, sometimes lots of time, allowing entire industries to thrive or individual companies to maintain market share while developing a new product.”

    To fight the vision of these perpetuators, to at least rescue the fruits of the methods of science from inadvertent ignominy, we need publicly active scientists to be the rule, not the exceptions to the rule. We need structural incentives to change to accommodate the fact that, if they don’t, this group of people will definitely remain limited to members of the upper class and/or upper castes. We need a stronger, closer marriage of science, the social sciences, business administration and policymaking.

    To be sure, I’m saying neither that the mere presence of scientists in public debates will lead to swifter solutions nor that the absence of science alone in policymaking is responsible for so many of the crises of our times – but that their absence has left cracks so big, it’s quite difficult to see how they can be sealed any other way[c]. And yes, the world will slow down, the richer will become less rich and economic growth will become more halting, but these are all also excuses to maintain a status quo that has only exploited the non-1% for two centuries straight.

    [c] Michaels concludes his piece with a list of techniques the product-defence faction has used to sow doubt and, in the resulting moments of vulnerability, ‘sell science’ – i.e. techniques that represent the absence of guiding voices.

    Of course, there’s only so much one can do if the political class isn’t receptive to one’s ideas – but we must begin somewhere, and what better place to begin than at the knowledgeable place?

  • Another controversy, another round of blaming preprints

    On February 1, Anand Ranganathan, the molecular biologist more popular as a columnist for Swarajya, amplified a new preprint paper from scientists at IIT Delhi that (purportedly) claims the Wuhan coronavirus’s (2019-nCoV’s) genome appears to contain some sequences also found in the human immunodeficiency virus but not in any other coronaviruses. Ranganathan also chose to magnify the preprint paper’s claim that the sequences’ presence was “non-fortuitous”.

    To be fair, the IIT Delhi group did not properly qualify what they meant by the use of this term, but this wouldn’t exculpate Ranganathan and others who followed him: to first amplify with alarmist language a claim that did not deserve such treatment, and then, once he discovered his mistake, to wonder out loud about whether such “non-peer reviewed studies” about “fast-moving, in-public-eye domains” should be published before scientific journals have subjected them to peer-review.

    https://twitter.com/ARanganathan72/status/1223444298034630656
    https://twitter.com/ARanganathan72/status/1223446546328326144
    https://twitter.com/ARanganathan72/status/1223463647143505920

    The more conservative scientist is likely to find ample room here to revive the claim that preprint papers only promote shoddy journalism, and that preprints in the biomedical literature should be abolished entirely. This is bullshit.

    The ‘print’ in ‘preprint’ refers to the act of a traditional journal printing a paper for publication after peer-review. A paper is designated a ‘preprint’ if it hasn’t undergone peer-review yet, whether or not it has been submitted to a scientific journal for consideration. To quote from an article championing the use of preprints during a medical emergency, by three of the six cofounders of medRxiv, the preprints repository for the biomedical literature:

    The advantages of preprints are that scientists can post them rapidly and receive feedback from their peers quickly, sometimes almost instantaneously. They also keep other scientists informed about what their colleagues are doing and build on that work. Preprints are archived in a way that they can be referenced and will always be available online. As the science evolves, newer versions of the paper can be posted, with older historical versions remaining available, including any associated comments made on them.

    In this regard, Ranganathan’s ringing the alarm bells (with language like “oh my god”) the first time he tweeted the link to the preprint paper without sufficiently evaluating the attendant science was his decision, and not prompted by the paper’s status as a preprint. Second, the bioRxiv preprint repository where the IIT Delhi document showed up has a comments section, and it was brimming with discussion within minutes of the paper being uploaded. More broadly, preprint repositories are equipped to accommodate peer-review. So if anyone had looked in the comments section before tweeting, they wouldn’t have had reason to jump the gun.

    Third, and most important: peer-review is not fool-proof. Instead, it is a legacy method employed by scientific journals to filter legitimate from illegitimate research and, more recently, higher quality from lower quality research (using ‘quality’ from the journals’ oft-twisted points of view, not as an objective standard of any kind).

    This framing supports three important takeaways from this little scandal.

    A. Much like preprint repositories, peer-reviewed journals also regularly publish rubbish. (Axiomatically, just as conventional journals also regularly publish the outcomes of good science, so do preprint repositories; in the case of 2019-nCoV alone, bioRxiv, medRxiv and SSRN together published at least 30 legitimate and noteworthy research articles.) It is just that conventional scientific journals conduct the peer-review before publication and preprint repositories (and research-discussion platforms like PubPeer), after. And, in fact, conducting the review after allows it to be a continuous process able to respond to new information, and not a one-time event that culminates with the act of printing the paper.

    But notably, preprint repositories can recreate journals’ ability to closely control the review process and ensure only experts’ comments are in the fray by enrolling a team of voluntary curators. The arXiv preprint server has been successfully using a similar team to carefully eliminate manuscripts advancing pseudoscientific claims. As such, it makes more sense to ensure people are familiar with the preprint and post-publication review paradigm than to take advantage of their confusion and call for preprint papers to be eliminated altogether.

    B. Those who support the idea that preprint papers are dangerous, and argue that peer-review is a better way to protect against unsupported claims, are by proxy advocating for the persistence of a knowledge hegemony. Peer-review is opaque, sustained by unpaid and overworked labour, and discharges the same function that an open discussion often does, at larger scale and with greater transparency. Indeed, the transparency represents the most important difference: since peer-review has traditionally been the demesne of journals, supporting peer-review is tantamount to designating journals as the sole and unquestionable arbiters of what knowledge enters the public domain and what doesn’t.

    (Here’s one example of how such gatekeeping can have tragic consequences for society.)

    C. Given these safeguards and perspectives, and as I have written before, bad journalists and bad comments will be bad irrespective of the window through which an idea has presented itself in the public domain. There is a way to cover different types of stories, and the decision to abdicate one’s responsibility to think carefully about the implications of what one is writing can never have a causal relationship with the subject matter. The Times of India and the Daily Mail will continue to publicise every new paper discussing whatever coffee, chocolate and/or wine does to the heart, and The Hindu and The Wire Science will publicise research published as preprint papers because we know how to be careful and which risks to protect ourselves against.

    By extension, ‘reputable’ scientific journals that use pre-publication peer-review will continue to publish many papers that will someday be retracted.

    An ongoing scandal concerning the spider biologist Jonathan Pruitt offers a useful parable: journals don’t always publish bad science because of wilful negligence or poor peer-review alone, but such failures still do well to highlight the shortcomings of the latter. A string of papers based on work that Pruitt led was found to contain implausible data in support of some significant conclusions. Dan Bolnick, the editor of The American Naturalist, which became the first journal to retract Pruitt’s papers that it had published, wrote on his blog on January 30:

    I want to emphasise that regardless of the root cause of the data problems (error or intent), these people are victims who have been harmed by trusting data that they themselves did not generate. Having spent days sifting through these data files I can also attest to the fact that the suspect patterns are often non-obvious, so we should not be blaming these victims for failing to see something that requires significant effort to uncover by examining the data in ways that are not standard for any of this. … The associate editor [who Bolnick tasked with checking more of Pruitt’s papers] went as far back as digging into some of Pruitt’s PhD work, when he was a student with Susan Riechert at the University of Tennessee Knoxville. Similar problems were identified in those data… Seeking an explanation, I [emailed and then called] his PhD mentor, Susan Riechert, to discuss the biology of the spiders, his data collection habits, and his integrity. She was shocked, and disturbed, and surprised. That someone who knew him so well for many years could be unaware of this problem (and its extent), highlights for me how reasonable it is that the rest of us could be caught unaware.

    Why should we expect peer-review – or any kind of review, for that matter – to be better? The only thing we can do is be honest, transparent and reflexive.

  • The not-so-obvious obvious

    If your job requires you to pore through a dozen or two scientific papers every month – as mine does – you’ll start to notice a few every now and then couching a somewhat well-known fact in study-speak. I don’t mean scientific-speak, largely because there’s nothing wrong about trying to understand natural phenomena in the formalised language of science. However, there seems to be something iffy – often with humorous effect – about a statement like the following: “cutting emissions of ozone-forming gases offers a ‘unique opportunity’ to create a ‘natural climate solution’”[1] (source). Well… d’uh. This is study-speak – to rephrase mostly self-evident knowledge or truisms in unnecessarily formalised language, not infrequently in the style employed in research papers, without adding any new information but often including an element of doubt when there is likely to be none.

    [1] Caveat: These words were copied from a press release, so this could have been a case of the person composing the release being unaware of the study’s real significance. However, the words within single-quotes are copied from the corresponding paper itself. And this said, there have been some truly hilarious efforts to make sense of the obvious. For examples, consider many of the winners of the Ig Nobel Prizes.

    Of course, it always pays to be cautious, but where do you draw the line between stating the obvious and having to produce a scientific result simply because one is required to initiate a new course of action? For example, the Univ. of Exeter study, whose accompanying press release discussed the effect of “ozone-forming gases” on the climate, recommends cutting emissions of substances that combine in the lower atmosphere to form ozone, a compound form of oxygen that is harmful to both humans and plants. But this is as non-“unique” an idea as the corresponding solution that arises (of letting plants live better) is “natural”.

    However, it’s possible the study’s authors needed to quantify these emissions to understand the extent to which ambient ozone concentration interferes with our climatic goals, and to use their data to inform the design and implementation of corresponding interventions. Such outcomes aren’t always obvious but they are there – often because the necessarily incremental nature of most scientific research can cut both ways. The pursuit of the obvious isn’t always as straightforward as one might believe.

    The Univ. of Exeter group may have accumulated sufficient and sufficiently significant evidence to support their conclusion, allowing themselves as well as others to build towards newer, and hopefully more novel, ideas. A ladder must have rungs at the bottom irrespective of how tall it is. But when the incremental sword cuts the other way, often due to perverse incentives that require scientists to publish as many papers as possible to secure professional success, things can get pretty nasty.

    For example, the Cornell University consumer behaviour researcher Brian Wansink was known to advise his students to “slice” the data obtained from a few experiments in as many different ways as possible in search of interesting patterns. Many of the papers he published were later found to contain numerous irreproducible conclusions – i.e. Wansink had searched so hard for patterns that he’d found quite a few even when they really weren’t there. As the British economist Ronald Coase said, “If you torture the data long enough, it will confess to anything.”
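    A small simulation illustrates why this sort of ‘slicing’ works so well. This is a hypothetical sketch in Python (assuming numpy and scipy), not a reconstruction of Wansink’s actual analyses: generate pure noise, test enough arbitrary subgroups, and a few comparisons will cross the conventional p < 0.05 threshold by chance alone.

    ```python
    # Minimal sketch: pure-noise data, sliced into many arbitrary subgroups,
    # reliably yields a few 'significant' differences at p < 0.05 - by chance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_people, n_slices = 400, 40

    outcome = rng.normal(size=n_people)            # noise: no real effect anywhere
    treated = rng.integers(0, 2, size=n_people)    # random 'treatment' labels
    slices = rng.integers(0, 2, size=(n_slices, n_people))  # arbitrary subgroups

    false_positives = 0
    for s in slices:
        grp = s == 1
        t, p = stats.ttest_ind(outcome[grp & (treated == 1)],
                               outcome[grp & (treated == 0)])
        false_positives += p < 0.05

    print(f"{false_positives} of {n_slices} arbitrary slices look 'significant'")
    ```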

    The dark side of incremental research, and the virtue of incremental research done right, both stem from the fact that it is deceptively difficult to ascertain the truth of a finding when the strength of the finding is expected to be either so small that it tests the very notion of significance or so large – or so pronounced – that it transcends intuitive comprehension.

    For an example of the former, among particle physicists a result qualifies as ‘fact’ only if the chances of it being a fluke are about 1 in 3.5 million – the ‘five sigma’ threshold. So the Large Hadron Collider (LHC), which was built to discover the Higgs boson, had to have performed at least 3.5 million proton-proton collisions capable of producing a Higgs boson – collisions its detectors could observe and its computers could analyse – to attain this significance.
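    That 1-in-3.5-million figure is the one-sided tail probability of a fluctuation at least five standard deviations from the mean of a normal distribution; a quick check in Python with scipy:

    ```python
    # The 'five sigma' discovery convention: the probability of a fluctuation
    # at least five standard deviations above the mean of a normal distribution.
    from scipy.stats import norm

    p = norm.sf(5)                      # one-sided tail probability beyond 5 sigma
    print(f"p = {p:.2e}")               # about 2.87e-07
    print(f"about 1 in {1 / p:,.0f}")   # about 1 in 3.5 million
    ```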

    But while protons are available abundantly and the LHC can produce on the order of a billion collisions per second, imagine undertaking an experiment that requires human participants to perform actions according to certain protocols. It’s never going to be possible to enrol billions of them for millions of hours to arrive at a rock-solid result. In such cases, researchers design experiments based on very specific questions, and such that the experimental protocols suppress, or even eliminate, interference, sources of doubt and confounding variables, and accentuate the effects of whatever action, decision or influence is being evaluated.

    Such experiments often also require the use of sophisticated – but nonetheless well-understood – statistical methods to further eliminate the effects of undesirable phenomena from the data and, to the extent possible, leave behind information of good-enough quality to support or reject the hypotheses. In the course of navigating this winding path from observation to discovery, researchers are susceptible to, say, misapplying a technique, overlooking a confounder or – like Wansink – overanalysing the data so much that a weak effect masquerades as a strong one but only because it’s been submerged in a sea of even weaker effects.
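    To illustrate just one of those failure modes – overlooking a confounder – here is a hypothetical sketch in Python using numpy: a variable that drives both the ‘treatment’ and the outcome makes the two appear related even though one has no effect on the other, and including the confounder in the regression makes the apparent effect vanish.

    ```python
    # Minimal sketch: a confounder Z drives both X and Y. Naively, X appears to
    # 'affect' Y; once Z is included in the regression, X's coefficient collapses.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    z = rng.normal(size=n)                     # the confounder
    x = z + rng.normal(scale=0.5, size=n)      # 'treatment', driven by z
    y = 2 * z + rng.normal(scale=0.5, size=n)  # outcome, driven by z, not by x

    ones = np.ones(n)
    naive, *_ = np.linalg.lstsq(np.column_stack([ones, x]), y, rcond=None)
    adjusted, *_ = np.linalg.lstsq(np.column_stack([ones, x, z]), y, rcond=None)

    print(f"naive coefficient on x:    {naive[1]:+.2f}")     # large, spurious
    print(f"adjusted coefficient on x: {adjusted[1]:+.2f}")  # close to zero
    ```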

    Similar problems arise in experiments that require the use of models based on very large datasets, where researchers need to determine the relative contribution of each of thousands of causes on a given effect. The Univ. of Exeter study that determined ozone concentration in the lower atmosphere due to surface sources of different gases contains an example. The authors write in their paper (emphasis added):

    We have provided the first assessment of the quantitative benefits to global and regional land ecosystem health from halving air pollutant emissions in the major source sectors. … Future large-scale changes in land cover [such as] conversion of forests to crops and/or afforestation, would alter the results. While we provide an evaluation of uncertainty based on the low and high ozone sensitivity parameters, there are several other uncertainties in the ozone damage model when applied at large-scale. More observations across a wider range of ozone concentrations and plant species are needed to improve the robustness of the results.

    In effect, their data could be modified in future to reflect new information and/or methods, but in the meantime, and far from being a silly attempt at translating a claim into jargon-laden language, the study eliminates doubt to the extent possible with existing data and modelling techniques to ascertain something. And even in cases where this something is well known or already well understood, the validation of its existence could also serve to validate the methods the researchers employed to (re)discover it and – as mentioned before – generate data that is more likely to motivate political action than, say, demands from non-experts.

    In fact, the American mathematician Marc Abrahams, known much more for founding and awarding the Ig Nobel Prizes, identified this purpose of research as one of three possible reasons why people might try to “quantify the obvious” (source). The other two are being unaware of the obvious and, of course, to disprove the obvious.

  • A meeting with the PSA’s office

    The Office of the Principal Scientific Adviser (PSA) organised a meeting with science communicators from around India on January 27, in New Delhi. Some of my notes from the meeting are displayed below, published with three caveats.

    First, my notes are not to be treated as the minutes of the meeting; I only jotted down what I personally found interesting. Some 75% of the words in there are part of suggestions and recommendations advanced by different people; the remainder are, broadly, observations. They appear in no discernible order not because I jumbled them up but because participants offered both kinds of statements throughout. The meeting itself lasted for seven or so hours (including breaks for lunch and tea), so every single statement was also accompanied by extensive discussion. Finally, I have temporarily withheld some portions because I plan to discuss them in additional blog posts.

    Second, the meeting followed the Chatham House Rule, which means I am not at liberty to attribute statements uttered during the course of the meeting to their human originators. I have also not identified my own words where possible, not because I want to hide but because, by virtue of these ideas appearing on my blog, I take full responsibility (but not authorship) for their publicisation.

    Third, though the meeting was organised by the Office of the PSA, its members were not the only representatives of the government present at the meeting. Representatives of some other government-affiliated bodies were also in attendance. So statements obviously uttered by a government official – if any do come across that way – are not necessarily attributable to members of the Office of the PSA.


    “We invest a lot in science, we don’t use it imaginatively enough.”

    Three major science-related issues:

    1. Climate change
    2. Dramatic consequences of our growth on biodiversity
    3. Because of these two, how one addresses sustainable development
    • Different roles for journalists within and without the government
    • Meeting is about what each one of us can do — but what is that?
    • Each one of us can say “I could do better if only you could better empathise with what I do”
    • Need for skill-sharing events for science journalists/communicators
    • CSIR’s National Institute of Science Communication and Information Resources has a centre for science and media relations, and a national science library
    • Indian Council of Medical Research has a science communication policy but all press releases need to be okayed by health minister!
    • Knowledge making is wrapped up in identity
    • Regional language communicators don’t have access to press releases, etc. in regional languages, nor access to translators
    • Department of Science and Technology and IIT Kanpur working on machine-translations of scientific content of Wikipedia
    • Netherlands Science Foundation published a book compiling public responses to question ‘what do you think of science?’
    • In the process of teaching kids science, you can also get them to perform science and use the data (e.g. mapping nematode density in soil using Foldscope)
    • Slack group for science communicators, channels divided by topic
    • Leaders of scientific bodies need to be trained on how to deal with journalists, how to respond in interviews, etc.
    • Indian Space Research Organisation, Defence R&D Organisation and Department of Atomic Energy need to not be so shut off! What are they hiding? If nothing to hide, why aren’t they reachable?
    • Need structural reforms for institutional research outreach — can’t bank on skills, initiative of individual science communicators at institutes to ensure effective outreach
    • Need to decentralise PR efforts at institutions
    • People trained in science communication need to find jobs/employment
    • Pieces shortlisted for AWSAR award could be put on a CC BY-ND license so news publications can republish them en masse without edits
    • Please hold meetings like this at periodic intervals, let this not be a one-time thing
    • Issues with covering science: Lack of investment, few people covering science, not enough training opportunities, not enough science communication research in India
    • Need local meet-ups between journalists and scientists to get to know each other, facilitated by the government
    • Outreachers needn’t have to be highly regarded scientists, even grad students can give talks — and kids will come to listen
    • Twitter is an elite platform — science communicators that need to stay in touch need to do more; most science communicators don’t know each other!
    • Can we host one edition of the World Conference of Science Journalists in India?
    • What happened to the Indian Science Writers’ Association?
    • Today the mind is not without fear! The political climate is dire, people can’t freely speak their minds without fear of reprisal — only obvious that this should affect science journalism also
    • ISRO is a darling of the media, the government and the masses but has shit outreach! Rs 10,000 crore being spent on Gaganyaan but the amount of info on it in the public domain is poop.
    • CSIR’s Institute of Genomics and Integrative Biology is very open and accessible, director needs to be kept in the loop about some press interaction but that’s it; perhaps the same template can be recreated in other institutes?
    • Outreach at scientific institutions is a matter of trust: if director doesn’t trust scientists to speak up without permission, and if PR people don’t respond to emails or phone calls, impression is that there is no trust within the institute as well as that the institute would like journalists to not be curious
    • People trained in science communication (informally also) need a place to practice their newfound skills.
    • Private sector industry is in the blindspot of journalists
    • People can more easily relate to lived experiences; aesthetically pleasing (beautiful-looking) stories are important
    • Most people have not had access to the tools of science, we need to build more affordable and accessible tools
    • Don’t attribute to malfeasance what can be attributed to not paying attention, incompetence, etc.
    • Journalistic deep-dives are good but lack of resources to undertake, not many publications do it either, except maybe The Wire and Caravan; can science communicators and the government set up a longform mag together?
    • Create a national mentorship network where contact details of ‘mentors’ are shared and mentees enrolled in the programme can ask them questions, seek guidance, etc.
    • Consider setting up a ‘science media centre’ — but can existing and functional models in Australia and the UK be ported to India without facing any issues?
    • Entities like IndiaBioscience could handle biology research outreach for scientific institutes in, say, the South India region or Bangalore region with some support from the government. That would be better than an SMC-from-scratch.
    • Consider including science communication in government’s new draft Scientific Social Responsibility policy and other S&T innovation policies
    • Allocate a fixed portion of funding for research for public outreach and communication (such as 2%)
    • Need more formal recognition for science communication researchers within scientific institutions; members currently stuck in a limbo between outreach office and scientists, makes it difficult to acquire funds for work
    • Support individual citizen science initiatives
    • Need better distinction between outreach groups and press offices — we don’t have a good press office anywhere in the country! Press officers encourage journalistic activity, don’t just promote institute’s virtues but look out for the institute as situated in the country’s overall science and society landscape
    • Any plans to undertake similar deliberations on philosophy of science (including culture of research, ethics and moral responsibilities)?
    • Scientific institutions could consider hosting journalists for one day a month to get to know each other
    • What’s in it for the scientist to speak to a journalist about their work? Need stronger incentives — journalists can provide some of that by establishing trust with the scientist, but can journalists alone provide incentives? Is it even their responsibility?
    • Consider conducting a ‘scientific temper survey’ to understand science literacy as well as people’s perceptions of science — could help government formulate better policies, and communicators and journalists to better understand what exactly their challenges are
    • Need to formulate specific guidelines for science communication units at scientific research institutions as well as for funding agencies
    • Set up fellowships and grants for science communicators, but the government needs to think about attaching as few strings as possible to such assistance
    • Need for more government support for regional and local newspapers vis-à-vis covering science, especially local science
    • Need to use multimedia – especially short videos, podcasts, illustrations and other aids – to communicate science instead of sticking to writing; visuals in particular could help surmount language barrier right away

  • Science v. tech, à la Cixin Liu

    A fascinating observation by Cixin Liu in an interview in Public Books, to John Plotz and translated by Pu Wang (numbers added):

    … technology precedes science. (1) Way before the rise of modern science, there were so many technologies, so many technological innovations. But today technology is deeply embedded in the development of science. Basically, in our contemporary world, science sets a glass ceiling for technology. The degree of technological development is predetermined by the advances of science. (2) … What is remarkably interesting is how technology becomes so interconnected with science. In the ancient Greek world, science develops out of logic and reason. There is no reliance on technology. The big game changer is Galileo’s method of doing experiments in order to prove a theory and then putting theory back into experimentation. After Galileo, science had to rely on technology. … Today, the frontiers of physics are totally conditioned on the developments of technology. This is unprecedented. (3)

    Perhaps an archaeology or palaeontology enthusiast might have regular chances to see the word ‘technology’ used to refer to Stone Age tools, Bronze Age pots and pans, etc. but I have almost always encountered these objects only as ‘relics’ or such in the popular literature. It’s easy to forget (1) because we have become so accustomed to thinking of technology as pieces of machines with complex electrical, electronic, hydraulic, motive, etc. components. I’m unsure of the extent to which this is an expression of my own ignorance but I’m convinced that our contemporary view of and use of technology, together with the fetishisation of science and engineering education over the humanities and social sciences, also plays a hand in maintaining this ignorance.

    The expression of (2) is also quite uncommon, especially in India, where the government’s overbearing preference for applied research has undermined blue-sky studies in favour of already-translated technologies with obvious commercial and developmental advantages. So when I think of ‘science and technology’ as a body of knowledge about various features of the natural universe, I immediately think of science as the long-ranging, exploratory exercise that lays the railway tracks into the future that the train of technology can later ride. Ergo, less glass ceiling and predetermination, and more springboard and liberation. Cixin’s next words offer the requisite elucidatory context: advances in particle physics are currently limited by the size of the particle collider we can build.

    (3) However, he may not be able to justify his view beyond specific examples, simply because – to draw from the words of a theoretical physicist from many years ago, that they “require only a pen and paper to work” – it is possible to predict the world at a much lower cost than one would incur to build and study the future.

    Plotz subsequently, but thankfully briefly, loses the plot when he asks Cixin whether he thinks mathematics belongs in science, and to which Cixin provides a circuitous non-answer that somehow misses the obvious: science’s historical preeminence began when natural philosophers began to encode their observations in a build-as-you-go, yet largely self-consistent, mathematical language (my favourite instance is the invention of non-Euclidean geometry that enabled the theories of relativity). So instead of belonging within one of the two, mathematics is – among other things – better viewed as a bridge.

  • Necessity and sufficiency

    With apologies for recalling horrible people early in the day: I chanced upon this article quoting Lawrence Krauss talking about his friend Jeffrey Epstein from April 2011, and updated in July 2019. Excerpt (emphasis added):

    Renowned scientists whose research Epstein has generously funded through the years also stand by him. Professor Lawrence Krauss, a theoretical physicist …, has planned scientific conferences with Epstein in St. Thomas and remained close with him throughout his incarceration. “If anything, the unfortunate period he suffered has caused him to really think about what he wants to do with his money and his time, and support knowledge,” says Krauss. “Jeffrey has surrounded himself with beautiful women and young women but they’re not as young as the ones that were claimed. As a scientist I always judge things on empirical evidence and he always has women ages 19 to 23 around him, but I’ve never seen anything else, so as a scientist, my presumption is that whatever the problems were I would believe him over other people.” Though colleagues have criticized him over his relationship with Epstein, Krauss insists, “I don’t feel tarnished in any way by my relationship with Jeffrey; I feel raised by it.”

    Well, of course he felt raised by his friendship with Epstein. But more importantly, the part in bold is just ridiculous, and I hope Krauss was suitably slammed for saying such a stupid thing at the time.[a] It’s a subtle form of scientism commonly found in conversations that straddle two aggressively differing points of view – such as the line between believing and disbelieving the acts of a convicted sex offender or between right- and left-wing groups in India.

    Data is good, even crucial, as the numerical representation of experimental proof, and for this reason often immutable. But an insistence on data before anything else is foolish because it presupposes that the use of the scientific method – implied by the production and organisation of data – is a necessary as well as sufficient condition to ascertain an outcome. In truth, science is often necessary but almost never sufficient.

    Implying in turn that all good scientists should judge everything by empirical evidence isn’t doing science or scientists any favours. Instead, such assertions might abet the impression of a scientist as someone unmoved by sociological, spiritual or artistic experiences, and science as a clump of methods all of which together presume to make sense of everything you will ever encounter, experience or infer. However, it’s in fact a body of knowledge obtained by applying the scientific method to study natural phenomena.

    Make what you will of science’s abilities and limitations based on this latter description, and not Krauss’s insular and stunted view that – in hindsight – may have been confident in its assertion if only because it afforded Krauss a way to excuse himself. And it is because of people like him (necessity), who defer to scientific principles even as they misappropriate and misuse these principles to enact their defensive ploys, together with the general tendency among political shills to use overreaching rhetoric and exaggerated claims of harm (sufficiency), that the scientific enterprise itself takes a hit in highly polarised word-wars.

    [a] If Krauss insists on sticking to his scientistic guns, it might be prudent to remind him of counterfactual definiteness.

  • A sympathetic science

    If you feel the need to respond, please first make sure you have read the post in full.

    I posted the following tweet a short while ago:

    With reference to this:

    Which in turn was with reference to this:

    But a few seconds after publishing it, I deleted the tweet because I realised I didn’t agree with its message.

    That quote by Isaac Asimov is a favourite if only because it contains in those words a bigger idea that expands voraciously the moment it comes in contact with the human mind. Yes, there is a problem with understanding ignorance and knowledge as two edges of the same blade, but somewhere in this mixup, a half-formed aspiration to rational living lurks in silence.

    The author of another popular tweet commenting on the same topic did not say anything more than reproduce Kiran Bedi’s comment, issued after she shared her controversial ‘om’ tweet on January 4 (details here), that the chant is “worth listening to even if it’s fake”; the mocking laughter was implied, reaffirmed by invoking the name of the political party Bedi is affiliated to (the BJP – which certainly deserves the mockery).

    However, I feel the criticism from thousands of people around the country does not address the part of Bedi’s WhatsApp message that reaches beyond facts and towards sympathy. Granted, it is stupid to claim that that is what the Sun sounds like, just as Indians’ obsession with NASA is both inexplicable and misguided. That Bedi is a senior government official, a member of the national ruling party and has 12 million followers on Twitter doesn’t help.

    But what of Bedi suggesting that the controversy surrounding the provenance of the message doesn’t have to stand in the way of enjoying the message itself? Why doesn’t the criticism address that?

    Perhaps it is because people think it is irrelevant, that it is simply the elucidation of a subjective experience that either cannot be disputed or, more worryingly, is not worth engaging over. If it is the latter, then I fear the critics harbour an idea that what science – as the umbrella term for the body of knowledge obtained by the application of a certain method and allied practices – is not concerned with is not worth being concerned about. Even if all of the critics in this particular episode do not harbour this sentiment, I know from personal experience that there are even more out there who do.

    After publishing my tweet, I realised that Bedi’s statement that “it is worth listening to even if it’s fake” is not at odds with physicist Dibyendu Nandi’s words: that chanting the word ‘om’ is soothing and that its aesthetic benefits (if not anything greater) don’t need embellishment, certainly not in terms of pseudoscience and fake news. In fact, Bedi has admitted it is fake, and as a reasonable, secular and public-spirited observer, I believe that is all I can ask for – rather, that is all I can ask for from her in the aftermath of her regrettable action.

    If I had known what was going to happen earlier, my expectation would still have been limited – in a worst-case scenario in which she insists on sharing the chant – to asking her to qualify the NASA claim as being false. Twelve million followers is nothing to be laughed at.

    But what I can ask of others (including myself) is this: mocking Bedi is fine, but what’s the harm in chanting the ‘om’ even if the claims surrounding it are false? What’s the harm in asserting that?

    If the reply is, “There is no harm” – okay.

    If the reply is, “There is no harm plus that is not in dispute” or that “There is harm because the assertion is rooted in a false, and falsifiable, premise” – I would say, “Maybe the assertion should be part of the conversation, such that the canonical response can be changed from <mockery of getting facts wrong>[1] to <mockery of getting facts wrong> + <discussing the claimed benefits of chanting ‘om’ and/or commenting on the ways in which adherence to factual knowledge can contribute to wellbeing>.”

    The discourse of rational aspiration currently lacks any concern for the human condition, and while scientificity, or scientificness, is becoming a higher virtue by the day, it does not appear to admit that far from having the best interests of the people at heart, it presumes that whatever sprouts from its cold seeds should be nutrition enough.[2]

    [1] The tone of the response is beyond the scope of this post.

    [2] a. If you believe this is neither science’s purpose nor responsibility, then you must agree it must not be wielded sans the clarification either that it represents an apathetic knowledge system or that the adjudication of factitude does not preclude the rest of Bedi’s message. b. Irrespective of questions about science’s purpose, could this be considered to be part of the purpose of science communication? (This is not a rhetorical question.)

  • Sci-fi past the science

    There’s an interesting remark in the introductory portion of this article by Zeynep Tufekci (emphasis added):

    At its best, though, science fiction is a brilliant vehicle for exploring not the far future or the scientifically implausible but the interactions among science, technology and society. The what-if scenarios it poses can allow us to understand our own societies better, and sometimes that’s best done by dispensing with scientific plausibility.

    Given the context, such plausibility is likely predicated on the set of all pieces of knowledge minus the set of the unknown-unknown. This in turn indicates a significant divergence between scientific knowledge and knowledge of human society, philosophies and culture as we progress into the future, at least to the extent that there is a belief in the present that scientific knowledge already trails our knowledge of the sociological and political components required to build a more equitable society.

    This is pithy and non-trivial at the same time: pithy because the statement reaffirms the truism that science in and of itself lacks the moral centrifuge to separate good from bad, and non-trivial because it refutes the technoptimism that guides Elon Musk, Jeff Bezos, (the late) Paul Allen, etc.

    If you superimposed this condition on sci-fi the genre, it becomes clear that Isaac Asimov’s and Arthur Clarke’s works – which the world’s tech billionaires claim to have been inspired by in their pursuit of interplanetary human spaceflight, as Tufekci writes – were less about strengthening the role of science and technology in our lives and more about rendering it transparent, so we can look past the gadgets and the gadgetry towards the social structures they’re embedded in.

    In effect, Tufekci continues:

    Science fiction is sometimes denigrated as escapist literature, but the best examples of it are exactly the opposite.

    She argues in her short article, more of a long note, that this alternative reading of sci-fi and its purpose could encourage the billionaires to retool their ambitions and think about making life better on Earth. Food for thought, especially at the start of a new decade when there seems to be a blanket lien to hope – although I very much doubt the aspirations of Musk, Bezos and others were nurtured around such a simple fulcrum.