Some facts are just boring, like 1 + 1 = 2. You already knew them before they were presented as such, and now that you do, it’s hard to know what to do with them. Some facts are clearly important, even if you don’t know how you can use them, like the spark plug fires after there’s fuel in the chamber. These two kinds of facts may seem far apart but you also know on some level that by repeatedly applying the first kind of fact in different combinations, to different materials in different circumstances, you get the second (and it’s fun to make this journey).
Then there are some other facts that, while seemingly simple, provoke in your mind profound realisations – not something new as much as a way to understand something deeply, so well, that it’s easy for you to believe that that single neural pathway among the multitude in your head has forever changed. It’s an epiphany.
I came across such a fact this morning when reading an article about a star that may have gone supernova. The author packs the fact into one throwaway sentence.
Roughly every second, one of the observable Universe’s stars dies in a fiery explosion.
The observable universe is 90-something billion lightyears wide. The universe was born only 13.8 billion years ago but it has been expanding since, pushed faster and faster apart by dark energy. This is a vast, vast space – too vast for the human mind to comprehend. I’m not just saying that. Scientists must regularly come up against numbers like 8E50 (8 followed by 50 zeroes), but they don’t have to be concerned about comprehending the full magnitude of those numbers. They don’t need to know how big it is in some dimension. They have the tools – formulae, laws, equations, etc. – to tame those numbers into submission, to beat them into discoveries and predictions that can be put to human use. (Then again, they do need to deal with monstrous moonshine.)
But for the rest of us, the untameability can be terrifying. How big is a number like 8E50? In kilograms, it’s about 100 times lower than the mass of the observable universe. It’s the estimated volume of the galaxy NGC 1705 in cubic metres. It’s approximately the lifespan of a black hole with the mass of the Sun. You know these facts, yet you don’t know them. They’re true but they’re also very, very big, so big that they’re well past the point of true comprehension, into the realm of the I’d-rather-not-know. Yet the sentence above affords a way to bring these numbers back.
The author writes that every second or so, a star goes supernova. According to one estimate, 0.1% of stars have enough mass to eventually become a black hole. The observable universe has 200 billion trillion stars. This means there are 2E20 stars in the universe that could become a black hole, if they’re not already. Considering the universe has lived around 38% of its life and assuming a uniform rate of black hole formation (a big assumption, but one that should suffice to illustrate my point), the universe should be visibly darkening by now, given that photons of light shouldn’t have to travel far before encountering a black hole.
But it isn’t. The simple reason is that that’s how big the universe is. We learn about stars, other planets, black holes, nebulae, galaxies and so forth. There are lots and lots of them, sure, but you know what there is the most of? The things we often discuss the least: the interstellar medium, the space between stars, and the intergalactic medium, the space between galaxies. Places where there isn’t anything big enough, ironically, to catch the popular imagination. One calculation, based on three assumptions, suggests matter occupies an incomprehensibly low fraction of the observable universe (1. 85% of this is supposed to be dark matter; 2. please don’t assume atoms are also mostly empty).
In numbers, the bigness of all this transcends comprehension – but knowing that billions upon billions of black holes still only trap a tiny amount of the light going around can be… sobering. And enlivening. Why, in the time you’ve taken to read this article, 300 more stars will have died in supernovae. Pfft.
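The back-of-envelope arithmetic above is easy to check. Here is a minimal sketch in Python using the article’s own figures (200 billion trillion stars, a 0.1% black-hole fraction, roughly one supernova per second); the five-minute reading time is my own assumption:

```python
# Back-of-envelope check of the numbers in this section.
# Figures from the article: ~200 billion trillion stars in the
# observable universe, ~0.1% of stars massive enough to eventually
# become black holes, and roughly one supernova per second somewhere
# in the observable universe.

N_STARS = 200e9 * 1e12       # 200 billion trillion = 2e23 stars
BH_FRACTION = 0.001          # 0.1% of stars can end up as black holes
SUPERNOVA_RATE = 1.0         # supernovae per second (order of magnitude)
READING_TIME_S = 5 * 60      # ~5 minutes to read this article (assumed)

bh_progenitors = N_STARS * BH_FRACTION
supernovae_while_reading = SUPERNOVA_RATE * READING_TIME_S

print(f"{bh_progenitors:.0e} potential black holes")                # 2e+20
print(f"{supernovae_while_reading:.0f} supernovae while you read")  # 300
```

These are only order-of-magnitude estimates; the point is that even 2E20 black holes are dwarfed by the emptiness of the space between stars.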
The Editor-in-Chief has retracted this article because it was published in error before the peer review process was completed. The content of this article has been removed for legal reasons. The authors have been offered to submit a revised manuscript for further peer review. All authors agree with this retraction.
This is the notice accompanying the retraction of a paper published in Springer Nature’s Journal of the Indian Society of Remote Sensing. The editor in chief is Shailesh Nayak, the director of the National Institute of Advanced Studies at IISc campus in Bengaluru. As Retraction Watch reported, the paper – about “suspicious activities” on the Indo-China border in 2020 – was being retracted for, legal reasons aside, being replete with grammatical errors. The excerpt on the Retraction Watch page also suggests it’s qualitatively less like a research paper and more like an internal submission; the paper’s corresponding author, an Aditya Kakde of the University of Petroleum and Energy Studies, a private institute in Dehradun, also didn’t comment on the retraction, and isn’t contesting it either.
The comment by Nayak, the editor in chief, is interesting: he says the badly-written paper had been published before it was peer-reviewed. First, how is this possible?
Second, I’m personally convinced Nayak is trying to protect his journal’s reputation by implying that the mistake was processual in nature, and that their functional peer-review system would have caught the paper’s quality problem. But this is also an ex post facto explanation that makes Nayak’s claim hard to believe, considering the process error was a big one.
Third, if you think you need an exercise as formally defined and intensive as peer review to catch such low-quality papers, I doubt your credentials as an editor.
Fourth, and to continue from my previous post, when editors publish bad papers like this, instead of helping authors correct their mistakes and thus avoid a retraction later for bad language, they’re practically setting up the authors to incur a retraction against their names.
Finally, why – in Nayak’s telling – was the paper retracted for “legal reasons”? It seems like a ridiculous, but also devious, thing to say. Considering the paper’s authors, including Kakde, haven’t been accused of other issues, I assume the paper’s contents are legitimate: that the authors have developed an image-analysis tool that purports to eliminate one step of some military surveillance procedure (although the images in the paper look quite simplistic). At the same time, one of the hallmarks of the current Indian government is its, and its supporters’, tendency to threaten their detractors with vexatious police and court cases, especially under draconian anti-terrorism and sedition provisions in Indian law.
So Nayak’s allusion to “legal reasons” can’t be dismissed easily – though it may just as well be an attempt to be ambiguous and beyond reproach at the same time (note: he’s a “distinguished scientist” in the Ministry of Earth Sciences).
Plagiarism is a tricky issue. If it’s straightforward to you, ask yourself if you’re assuming that the plagiariser (plagiarist?) is fluent in reading and writing, but especially writing, English. The answer’s probably ‘yes’. This is because for someone entering an English-using universe for the first time, certain turns of phrase and certain ways to articulate complicated concepts stick the first time you read them, and when the time comes for you to spell out the same ideas and concepts, you passively, inadvertently recall them and reuse them. You don’t think – at least at first – that they’re someone else’s words, more so if you haven’t been taught, through no fault of yours, what academic plagiarism is and/or that it’s bad.
This is also why there’s a hierarchy of plagiarism. For example, if you’re writing a scientific paper and you copy another paper’s results, that’s worse than if you copy verbatim the explanation of a certain well-known idea. This is why former University Grants Commission chairman Praveen Chaddah wrote in 2014:
There are worse offences than text plagiarism — such as taking credit for someone else’s research ideas and lifting their results. These are harder to detect than copy-and-pasted text, so receive less attention. This should change. To help, academic journals could, for instance, change the ways in which they police and deal with such cases.
But if you’re fluent in writing English, if you know what plagiarism is and plagiarise anyway (without seeking resources to help you beat its temptation), and/or if you’re stealing someone else’s idea and calling it your own, you deserve the flak and (proportionate) sanctions coming your way. In this context, a new Retraction Watch article by David Sanders makes for interesting reading. According to Sanders, in 2018, he wrote to the editors of a journal that had published a paper in 2011 with lots of plagiarised text. After a back-and-forth, the editors told Sanders they’d look into it. He asked them again in 2019 and May 2021 and received the same reply on both occasions. Then on July 26 the journal published a correction to the 2011 article. Sanders wasn’t happy and wrote back to the editors, one of whom replied thus:
Thank you for your email. We went through this case again, and discussed whether we may have made the wrong decision. We did follow the COPE guidelines step by step and used several case studies for further information. This process confirmed that an article should be retracted when it is misleading for the reader, either because the information within is incorrect, or when an author induces the reader to think that the data presented is his own. As this is a Review, copied from other Reviews, the information within does not per se mislead the reader, as the primary literature is still properly cited. We agree that this Review was not written in a desirable way, and that the authors plagiarised a large amount of text, but according to the guidelines the literature must be considered from the point of view of the reader, and retractions should not be used as a tool to punish authors. We therefore concluded that a corrigendum was the best way forward. Hence, we confirm our decision on this case.
Thank you again for flagging this case in the first place, which allowed us to correct the record and gain deeper insights into publishing ethics, even though this led to a solution we do not necessarily like.
Sanders wasn’t happy: he wrote on Retraction Watch that “the logic of [the editor’s] message is troubling. The authors engaged in what is defined by COPE (the Committee on Publication Ethics) as ‘Major Plagiarism’ for which the prescribed action is retraction of the published article and contacting the institution of the authors. And yet the journal did not retract.” The COPE guidelines summarise the differences between minor and major plagiarism this way:
Not being fluent in English could render the decisions made using this table less than fair, for example because an author could plagiarise several paragraphs but honestly have no intention to deceive – simply because they didn’t think they needed to be that careful. I know this might sound laughable to a scientist operating in the US or Europe, out of a better-run, better-organised and better-funded institute, and who has been properly schooled in the ins and outs of academic ethics. But it’s true: the bulk of India’s scientists work outside the IITs, IISERs, DAE/DBT/DST-funded institutes and the more progressive private universities (although only one – Ashoka – comes to mind). Their teachers before them worked in the same resource-constrained environments, and for most of them the purpose of scientific work wasn’t science as much as an income. Most of them probably never used plagiarism-checking tools either, at least not until they got into trouble one time and then found out about such things.
I myself found out about the latter in an interesting way – when I reported that Appa Rao Podile, the former vice-chancellor of the University of Hyderabad, had plagiarised in some of his papers, around the time students at the university were protesting the university’s response to the death of Rohith Vemula. When I emailed Podile for his response, he told me he would like my help with the tools with which he could spot plagiarism. I thought he was joking, but after a series of unofficial enquiries over the next year or so, I learnt that plagiarism-checking software was not at all the norm in state-funded colleges and second-tier universities around the country, even if solutions like Copyscape were relatively cheap. I had no reason to let Podile off the hook – not because he hadn’t used plagiarism-checking software but because he was the vice-chancellor of a major university and should have done better than claim ignorance.
(I also highly recommend this November 2019 article in The Point, asking whether plagiarism is wrong.)
According to Sanders, the editor who replied didn’t retract the paper because he thought it wasn’t ‘major plagiarism’ according to COPE – whereas Sanders thought it was. The editor appears to have reasoned his way out of the allegation by saying that the material printed in the paper wasn’t misleading because it had been copied from non-misleading original material, and that the supposedly lesser issue was that while the material had been cited, it hadn’t been syntactically attributed as such (placed between double quotes, for example). The issue for Sanders, with whom I agree here, is that the authors had copied the material and presented it in a way that indicated they were its original creators. The lengths to which journal editors can go to avoid retracting papers, and therefore protect their journal’s reputation, ranking or whatever, are astounding. I also agree with Sanders when he says that by refusing to retract the article, the editors are practically encouraging misconduct.
I’d like to go a step further and ask: when journal editors think like this, where does that leave Indian scientists of the sort I’ve described above – who are likely to do better with the right help and guidance? In 2018, Rashmi Raniwala and Sudhir Raniwala wrote in The Wire Science that the term ‘predatory’, in ‘predatory journals’, was a misnomer:
… it is incorrect to call them ‘predatory’ journals because the term predatory suggests that there is a predator and a victim. The academicians who publish in these journals are not victims; most often, they are self-serving participants. The measure of success is the number of articles received by these journals. The journals provide a space to those who wanted easy credit. And a large number of us wanted this easy credit because we were, to begin with, not suitable for the academic profession and were there for the job. In essence, these journals could not have succeeded without an active participation and the connivance of some of us.
It was a good article at the time, especially in the immediate context of the Raniwalas’ fight to have known defaulters suitably punished. There are many bad-faith actors in the Indian scientific community and what the Raniwalas write about applies to them without reservation (ref. the cases of Chandra Krishnamurthy, R.A. Mashelkar, Deepak Pental, B.S. Rajput, V. Ramakrishnan, C.N.R. Rao, etc.). But I’m also confident enough to say now that predatory journals exist, typified by editors who place the journal before the authors of the articles that constitute it, who won’t make good-faith efforts to catch and correct mistakes at the time they’re pointed out. It’s marginally more disappointing that the editor who replied to Sanders replied at all; most don’t, as Elisabeth Bik has repeatedly reminded us. He bothered enough to engage – but not enough to give a real damn.
Cash payments for poor mothers increased brain function in babies, a study found, with potential implications for U.S. safety net policy. https://t.co/3rd06k0eih
First, it’s also a bad discovery (note: there’s a difference between right/wrong and good/bad). It is useful to found specific interventions on scientific findings – such as that providing pregnant women with iron supplements in a certain window of the pregnancy could reduce the risk of anaemia by X%. However, that the state should provide iron supplements to pregnant women belonging to certain socio-economic groups across the country shouldn’t be founded on scientific findings. Such welfarist schemes should be based on the implicit virtues of social welfare itself. In the case of the new study: the US government should continue with cash payments for poor mothers irrespective of their babies’ learning outcomes. The programme can’t stop if any of their babies are slow learners.
Second, I think the deeper problem in this example lies with the context in which the study’s findings could be useful. Scientists and economists have the liberty to study what they will, as well as report what they find (see third point). But consider a scenario in which lawmakers are presented with two policies, both rooted in the same ideologies and both presenting equally workable solutions to a persistent societal issue. Only one, however, has the results of a scientific study to back up its ability to achieve its outcomes (let’s call this ‘Policy A’). Which one will the lawmakers pick to fund?
Note here that this isn’t a straightforward negotiation between the lawmakers’ collective sensibilities and the quality of the study. The decision will also be influenced by the framework of accountability and justification within which the lawmakers operate. For example, those in small, progressive nations like Finland or New Zealand, where the general scientific literacy is high enough to recognise the ills of scientism, may have the liberty to set the study aside and then decide – but those in India, a large and nationalist nation with generally low scientific literacy, are likelier than not to construe the very availability of scientific backing, of any quality, to mean Policy A is better.
This is how studies like the one above could become a problem: by establishing a pseudo-privilege for policies that have ‘scientific findings’ to back up their promises. It also creates a rationalisation of the Republican Party’s view that by handing out “unconditional aid”, the state will discourage the recipients from working. While the Republicans’ contention is speculative in principle, in policy and, just to be comprehensive, in science, scientific studies that find the opposite play nicely into their hands – even in as straightforward a case as that of poor mothers. As the New York Times article itself writes:
Another researcher, Charles A. Nelson III of Harvard, reacted more cautiously, noting the full effect of the payments — $333 a month — would not be clear until the children took cognitive tests. While the brain patterns documented in the study are often associated with higher cognitive skills, he said, that is not always the case.
“It’s potentially a groundbreaking study,” said Dr. Nelson, who served as a consultant to the study. “If I was a policymaker, I’d pay attention to this, but it would be premature of me to pass a bill that gives every family $300 a month.”
A temporary federal program of near-universal children’s subsidies — up to $300 a month per child through an expanded child tax credit — expired this month after Mr. Biden failed to unite Democrats behind a large social policy bill that would have extended it. Most Republicans oppose the monthly grants, citing the cost and warning that unconditional aid, which they describe as welfare, discourages parents from working.
Sharing some of those concerns, Senator Joe Manchin III, Democrat of West Virginia, effectively blocked the Biden plan, though he has suggested that he might support payments limited to families of modest means and those with jobs. The payments in the research project, called Baby’s First Years, were provided regardless of whether the parents worked.
Third, and in continuation, it’s ridiculous to attach the approval of policies whose principles are clear and sound to the quality of data originating from scientific studies, which in turn depends on the quality of theoretical and experimental instruments scientists have at their disposal (“We hypothesized that infants in the high-cash gift group would have greater EEG power in the mid- to high-frequency bands and reduced power in a low-frequency band compared with infants in the low-cash gift group.”). And, let’s not forget, on scientists coming along in time to ask the right questions.
Fourth, do scientists and economists really have the liberty to study and report what they will? There are two ways to slice this. 1: To clarify the limited context in which this question is worth considering – not at all in almost all cases, and only when a study uncovers the scientific basis for something that isn’t well-served by such a basis. This principle is recursive: it should preclude the need for a scientific study of whether support for certain policies has been set back by the presence or absence of scientific studies. 2: Where does the demand for these studies originate? Clearly someone somewhere thought, “Do we know the policy’s effects in the population?” Science can provide quick answers in some cases but not in others, and in the latter, it should be prevented from creating the impression that the absence of evidence is the evidence of absence.
Who bears that responsibility? I believe that has fallen on the shoulders of politicians, social scientists, science communicators and exponents of the humanities alone for too long; scientists also need to exercise the corresponding restraint, and refrain from conducting studies in which they don’t specify the precise context (and not just that limited to science) in which their findings are valid, if at all. In the current case, NYT called the study’s findings “modest” – the “researchers likened them in statistical magnitude to moving to the 75th position in a line of 100 from the 81st”. Modest results are also results, sure, but as COVID-19 research has taught us, don’t conduct poor studies – and by extension don’t conduct studies of a social-science concept in a purely scientific way and expect them to be useful.
The Spanish architect Ricardo Bofill passed away on January 14, at the age of 82. I don’t know most of his work, which means this note of remembrance is less about Bofill the architect per se and more about Bofill the designer of La Muralla Roja, an apartment complex in the La Manzanera development in Calpe, Spain. Spanish for ‘The Red Wall’, La Muralla Roja is a brutalist-style complex in the shape of a Greek cross (all arms of equal length) with around 50 houses, designed to maximise the availability of space, natural light and social interactions. But the most striking things about La Muralla Roja are its colours – the walls are red, the courtyards are blue and pink, and everything else is violet – and that all its houses have a view of the Balearic Sea. As such, La Muralla Roja is Bofill the architect’s conception of utopian living. Ironically, if you didn’t know about this complex, you’ve probably still seen an architectural design it most likely inspired, in the dystopian show Squid Game. I’ll leave you with some beautiful images of the complex (all by beasty ./Unsplash). See here for more photos, including those provided by Bofill himself.
Featured image: A view of La Muralla Roja. Photo: Zhifei Zhou/Unsplash.
I rewatched Eternals today and had some time to collect some of my thoughts on it. Spoilers ahead (including one each for The Tomorrow War and Shang-Chi and the Legend of the Ten Rings).
Too much deus ex machina – it took 55 minutes to find out what the Eternals are not capable of. And this trope continues through the film: when Phastos makes the Uni-mind; when Ikaris brings the Domo down by putting a couple of dents in it (that thing had neither discernible engines nor an aerodynamic design, so what exactly got damaged that it stopped levitating?); when Phastos binds Ikaris; and when Sersi freezes the celestial. Specific to the last two instances: the bummer was that the audience is given no sense of how much power these individuals can be expected to wield (just as in Shang-Chi: great fight sequences, but no sense at the outset of what the rings are/aren’t capable of).
(Follow-up: How exactly did the deviants get trapped in ice? The same thing happens in The Tomorrow War, in which similar creatures get trapped in ice, but only because they were trapped inside containers trapped inside a crashed spaceship trapped in ice. In Eternals, wouldn’t the deviants have had to lie still for a very long time to get trapped in ice? Unless of course the Eternals caused an ice age.)
Every new narrative arc begins with them saving lives – gets very holier-than-thou very quickly.
“Conflicts lead to war, and war actually leads to advancements in life-saving technology and medicine.” This is Phastos’s rationalisation of Ajak’s anti-interventionist policy, but the policy’s been around for millennia while I thought this war-innovation nexus was at best two centuries old.
Too tropey.
The cast makes the film resemble a panel discussion with too many members: everyone gets one point in but that’s it – and because the event organiser wants them all to be great points, it’s mostly just big-picture points and nothing else.
Not sure whose side to take! Sure, the deviants are villainous by appearance, but hundreds of movies have taught us to look past that.
Why is Hindi spoken with an American accent (“nuch meri hero”)?! Also, good to see Hollywood’s Bollywood hasn’t changed much. Also, the whole valet thing didn’t sit well.
Ridiculous scene 1: when Gilgamesh finds out Ajak’s dead – morose music, serious dialogue – the pie slides off the pan onto his boot with the sort of sound befitting slapstick comedy.
Ridiculous scene 2: when the Amazon ambush is underway, Kingo fights off a few deviants and Karun (the valet) shouts, “Very nice, saaaar!”
(Follow-up: The film’s makers clearly tried to work in some comedy in between, or sometimes within, the action sequences, but it never works. The Eternals are just too serious the rest of the time for it, so they just come off a bit psychotic.)
Kingo, the Indian character, is often a doofus.
How’re they keeping track of where each Eternal ended up after five centuries of not being in touch? This isn’t trivial: in movies with human characters, barriers like this have often been insurmountable.
Maybe it’s just me but this was a dull, even insipid end-of-the-world story. I much prefer Last Contact by Stephen Baxter: like Eternals, it concerns itself with a very small group of people confronting the end of the world, but who do so in an unexpectedly comforting way.
Shallow characters – particularly Sprite, with her betrayal at the end, which no one saw coming: not in the way people don’t see a twist coming but later ask themselves why they didn’t suspect it, but in the way no one saw coming because they had zero reason to consider it.
At this point, including the end-credit scenes, it’s hard not to tire of the MCU – quite like we all tired of J.R.R. Tolkien’s Middle-earth saga even though the Tolkien Estate didn’t want us to, with its strategically spaced-out posthumous book releases. Like Discworld had turtles all the way down, the MCU apparently has turtles all the way to the top, and in increasingly predictable ways.
TV news anchor at the end: “The sudden appearance of an enormous stone figurine in the Indian ocean…”. There’s clearly daylight over the Indian Ocean at this point. But when Ikaris left Earth, en route to jumping into the Sun, he paused for a view of the Americas – which were also in daylight. How?
I also wrote a bit about the celestial, Tiamut, here.
Featured image: The opening scene of Eternals. Source: Hotstar.
Not something I usually blog about but boy is this funny. (At least) Star Sports has been airing an ad for an app called ‘Magicpin’ that I think helps you in some way when you shop for stuff. And this is how the ad goes:
So… a fight breaks out in the middle of the road over some pseudo-accident, as it usually does in India. A bystander starts to film the fight until one of the picaros charges at the camera-wielder, points to a side-street full of stores and says, “The magic isn’t happening here, it’s there.” And when the camera moves to view the stores, imaginary discount tags pop up onscreen, the music starts to play and the word ‘Magicpin’ blares in a medley of colours. I don’t know about you but it’s really hard for me to miss the allegory about an independent press here – as if to say, “a fight holding up the traffic for no good reason isn’t important, do look away, go shopping, get distracted”. Plus it’s apparently also what late capitalism expects of us, considering Magicpin thinks the skit is… amusing.
Science journalist Laura Spinney wrote an article in The Guardian on January 9, 2022, entitled ‘Are we witnessing the dawn of post-theory science?’. This excerpt from the article captures its points well, I thought:
Or take protein structures. A protein’s function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein’s action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn’t the lack of a theory that will stop drug designers using it. “What AlphaFold does is also discovery,” she says, “and it will only improve our understanding of life and therapeutics.”
Essentially, the article is concerned with machine learning’s ability to parse large amounts of data, find patterns in them and use them to generate theories – taking over an important realm of human endeavour. In keeping with tradition, it doesn’t answer the question in its headline with a definitive ‘yes’ but with a hard ‘maybe’ to a soft ‘no’. Spinney herself ends by quoting Picasso – “Computers are useless. They can only give you answers” – although the para right before belies the painter’s confidence with a prayer that the human way to think about theories is still meaningful and useful:
The final objection to post-theory science is that there is likely to be useful old-style theory – that is, generalisations extracted from discrete examples – that remains to be discovered and only humans can do that because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.
I’m personally cynical about such claims. If we think we are going to be obsolete, there must be a part of the picture we’re missing.
There was an idea partly similar to this ‘post-theory hypothesis’ a few years ago, and pointing the other way. In 2013, philosopher Richard Dawid wrote a 190-page essay attempting to make the case that string theory shouldn’t be held back by the lack of experimental evidence, i.e. that it was post-empirical. Of course, Spinney is writing about machines taking over the responsibility of, but not precluding the need for, theorising – whereas Dawid and others have argued that string theory doesn’t need experimental data to stay true.
The idea of falsifiability is important here. If a theory is flawed and you can design an experiment that would reveal that flaw, the theory is said to be falsifiable. A theory can be flawless but still falsifiable: Newton’s theory of gravity, for example, is complete and useful in a limited context but can’t explain the precession of the perihelion of Mercury’s orbit. An example of an unfalsifiable theory is the one underlying astrology. In science, falsifiable theories are said to be better than unfalsifiable ones.
I don’t know what impact Dawid’s book-length effort had, although others before and after him have supported the view that scientific theories needn’t be falsifiable in order to be legitimate – Sean Carroll, for one. While I’m not familiar enough with criticisms of the philosophy of falsifiability, I found a better case for trusting the validity of string theory sans experimental evidence in a June 2017 preprint paper by Eva Silverstein:
It is sometimes said that theory has strayed too far from experiment/observation. Historically, there are classic cases with long time delays between theory and experiment – Maxwell’s and Einstein’s waves being prime examples, at 25 and 100 years respectively. These are also good examples of how theory is constrained by serious mathematical and thought-experimental consistency conditions.
Of course electromagnetism and general relativity are not representative of most theoretical ideas, but the point remains valid. When it comes to the vast theory space being explored now, most testable ideas will be constrained or falsified. Even there I believe there is substantial scientific value to this: we learn something significant by ruling out a valid theoretical possibility, as long as it is internally consistent and interesting. We also learn important lessons in excluding potential alternative theories based on theoretical consistency criteria.
This said, Dawid’s book, entitled String Theory and the Scientific Method, was perhaps the most popular pronouncement of his views in recent years (at least in terms of coverage in the non-technical press), even if by then he’d been propounding them for nine years and his supporters included a bevy of influential physicists. Very simply put, an important part of Dawid’s argument was that string theory, as a theory, has certain characteristics that make it the only possible theory for all the epistemic niches that it fills – so as long as we expect all those niches to be filled by a single theory, string theory may be true by virtue of being the sole possible option.
It’s not hard to see the holes in this line of reasoning, though again, I’ve considerably simplified his idea. Physicist Peter Woit has been (from what little I’ve seen) the most vocal critic of string theorists’ appeals to ‘post-empirical realism’, and has often directed his ire against the uniqueness hypothesis – significantly because accepting it would endanger, for the sake of just one theory’s survival, the foundation upon which almost every other valid scientific theory stands. You must admit this is a powerful argument, and to my mind one more persuasive than Silverstein’s.
String theory is a proof of the dangers of relying excessively on non-empirical arguments. It raised great expectations thirty years ago, when it promised to [solve a bunch of difficult problems in physics]. Nothing of this has come true. String theorists, instead, have [made a bunch of other predictions to explain why it couldn’t solve what it set out to solve]. All this was false.
From a Popperian point of view, these failures do not falsify the theory, because the theory is so flexible that it can be adjusted to escape failed predictions. But from a Bayesian point of view, each of these failures decreases the credibility of the theory, because a positive result would have increased it. The recent failure of the prediction of supersymmetric particles at LHC is the most flagrant example. By Bayesian standards, it lowers the degree of belief in string theory dramatically. This is an empirical argument. Still, Joe Polchinski, a prominent string theorist, writes that he evaluates the probability of string theory to be correct at 98.5% (!).
Scientists who devoted their lives to a theory have difficulty letting it go, hanging on to non-empirical arguments to save their beliefs, in the face of empirical results that Bayesian confirmation theory counts as negative. This is human. A philosophy that takes this as an exemplary scientific attitude is a bad philosophy of science.
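The Bayesian point in the passage above can be made concrete with a toy calculation. The prior and likelihoods below are mine, purely illustrative; the sketch only shows how a run of null results drags a theory’s posterior credibility down, precisely because a positive result would have pushed it up:

```python
# A toy illustration (my numbers, purely hypothetical) of the Bayesian
# argument: each failed prediction lowers a theory's credibility,
# because a successful one would have raised it.

def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """Posterior credence in the theory after one observation."""
    evidence = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
    return prior * p_obs_if_true / evidence

# Suppose the theory predicts a new particle should show up in a given
# experiment with probability 0.6 if the theory is true, and only 0.1
# by chance if it is false. Each run returns a null result.
p = 0.5  # initial credence in the theory
for _ in range(4):
    p = bayes_update(p, 1 - 0.6, 1 - 0.1)  # update on the null result
print(round(p, 3))  # → 0.038
```

Four failed predictions in this toy setup take the credence from 50% to under 4% – without ever falsifying the theory in the Popperian sense.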
I’m partway through Renny Thomas’s new book, Science and Religion in India: Beyond Disenchantment. Its description on the Routledge page reads:
This book provides an in-depth ethnographic study of science and religion in the context of South Asia, giving voice to Indian scientists and shedding valuable light on their engagement with religion. Drawing on biographical, autobiographical, historical, and ethnographic material, the volume focuses on scientists’ religious life and practices, and the variety of ways in which they express them. Renny Thomas challenges the idea that science and religion in India are naturally connected and argues that the discussion has to go beyond binary models of ‘conflict’ and ‘complementarity’. By complicating the understanding of science and religion in India, the book engages with new ways of looking at these categories.
To be fair to Renny as well as to prospective readers, I’m hardly familiar with scholarship in this area of study and in no position to be able to confidently critique the book’s arguments. I’m reading it to learn. With this caveat out of the way…
I’ve been somewhat familiar with Renny’s work, and my expectation that his new book would be informative and insightful has been more than met. I like two things in particular, based on the approximately 40% I’ve read so far (and not necessarily from the beginning). First, Science and Religion generously quotes the scientists with whom Renny spoke to glean insights. A very wise man told me recently that in most cases, it’s possible to get the gist of (non-fiction) books written by research scholars about their areas of work just by reading the introductory chapter. I think this book may be the exception that proves the rule for me. On occasion Renny also quotes from books by other scientists and scholars to make his point, which I say to imply that for readers like me, who are interested in but haven’t had the chance to formally study these topics, Science and Religion can serve as an introductory text as well.
For example, in one place, Renny quotes some 150 words from Raja Ramanna’s autobiography, where the latter – a distinguished physicist and one of the more prominent endorsers of the famous 1981 ‘statement on scientific temper’ – recalls in spirited fashion his visit to Gangotri. The passage reminded me of an article by American historian of science Daniel Sarewitz published many years ago, in which he described his experience of walking through the Angkor Wat temple complex in Cambodia. I like to credit Sarewitz’s non-academic articles for getting me interested in the sociology of science, especially critiques of science as a “secularising medium”, to use Renny’s words, but I have also been guilty of having entered this space of thought and writing through accounts of spiritual experiences written by scientists from countries other than India. But now, thanks to Science and Religion, I have the beginnings of a resolution.
Second, the book’s language is extremely readable: undergraduate students who are enthusiastic about science should be able to read it for pleasure (and I hope students of science and engineering do). I myself was interested in reading it because I’ve wanted, and still want, to understand what goes on in the minds of people like ISRO chairman K. Sivan when they insist on visiting Tirupati before every major rocket launch. And Renny clarifies his awareness of these basic curiosities early in the book:
… scientists continue to be the ‘special’ folk in India. It is this image of ‘special’ folk and science’s alleged relationship with ‘objectivity’ which makes people uneasy when scientists go to temple, engage in prayer, and openly declare their allegiance to religious beliefs. The dominance and power of science and its status as a superior epistemology is part of the popular imagination. The continuing media discussion on ISRO (Indian Space Research Organisation) scientists when they offer prayer before any mission is an example.
Renny also clarifies the religious and caste composition of his interlocutors at the outset as well as dedicates a chapter to discussing the ways in which caste and religious identities present themselves in laboratory settings, and the ways in which they’re acknowledged and dismissed – but mostly dismissed. An awareness of caste and religion is also important to understand the Sivan question, according to Science and Religion. Nearly midway through the book, Renny discusses a “strategic adjustment” among scientists that allows them to practice science and believe in gods “without revealing the apparent contradictions between the two”. Here, one scientist identifies one of the origins of religious belief in an individual to be their “cultural upbringing”; but later in the book, in conversations with Brahmin scientists (and partly in the context of an implicit belief that the practice of science is vouchsafed for Brahmins in India), Renny reveals that they don’t distinguish between cultural and religious practices. For example, scientists who claim to be staunch atheists are also strict vegetarians, don the ‘holy thread’ and, most tellingly for me, insist on getting their sons and daughters married off to people belonging to the same caste.
They argued that they visited temples and pilgrimage centres not for worship but out of an architectural and aesthetic interest, to marvel at the architectural beauty. As Indians, they are proud of these historical places and pilgrimage centres. They happily invite their guests from other countries to these places with a sense of pride and historicity. Some of the atheist scientists I spoke to informed me that they would offer puja and seek darshan while visiting the temples and historically relevant pilgrimage places, especially when they go with their family; “to make them happy.” They argued that they wouldn’t question the religious beliefs and practices of others and professed that it was a personal choice to be religious or non-religious. They also felt that religion and belief in God provided psychological succor to believers in their hardships and one should not oppose them. Many of the atheist scientists think that festivals such as Diwali or Ayudha Puja are cultural events.
In their worldview, the distinction between religion and culture has dissolved – which clearly emphasises the importance of considering the placedness of science just as much as we consider the placedness of religion. By way of example, Science and Religion finds both religion and science at work in laboratories, but en route it also discovers that to do science in certain parts of India – especially South India, where many of the scientists in his book are located – is to do science in a particular milieu distorted by caste: here, the “lifeworld” is to Brahmins as water is to fish. Perhaps this is how Sivan thinks, too, although he is likely performing the subsequent rituals more passively, yet deliberately and in self-interest – assuming he derives his sense of social standing, and of his deservingness of social support, from the wider community of fellow Brahmins: we must pray and make some offerings to god because that’s how we always did it growing up.
At least, these are my preliminary thoughts. I’m looking forward to finishing Science and Religion this month (I’m a slow reader) and looking forward to learning more in the process.
In mid-2012, shortly after physicists working with the Large Hadron Collider (LHC) in Europe had announced the discovery of a particle that looked a lot like the Higgs boson, there was some clamour in India over news reports not paying enough attention or homage to the work of Satyendra Nath Bose. Bose and Albert Einstein together developed Bose-Einstein statistics, a framework of rules and principles that describe how fundamental particles called bosons behave. (Paul A.M. Dirac named these particles in Bose’s honour.) The director-general of CERN, the institute that hosts the LHC, had visited India shortly after the announcement and said in a speech in Kolkata that in honour of Bose, he and other physicists had decided to capitalise the ‘b’ in ‘boson’.
It was a petty victory of a petty demand, but few realised that it was also misguided. Bose made the first known (or at least published) attempts to understand the particles that would come to be called bosons – but neither he nor Einstein anticipated the existence of the Higgs boson. There have also been some arguments (justified, I think) that Bose wasn’t awarded a Nobel Prize for his ideas because he didn’t make testable predictions; Einstein received the Nobel Prize for physics in 1921 for explaining the photoelectric effect. The point is that it was unreasonable to expect Bose’s work to be highlighted, much less attributed, as some had demanded at the time, every time we find a new boson.
All such demands did was signal an expectation that a reflection of every important contribution by an Indian scientist ought to be found in every major discovery or invention. Such calls detrimentally affect the public perception of science because they are essentially contextless.
Let’s imagine that the discovery of the Higgs boson was the result of a series of successes, depicted thus:
O—o—o—o—o—O—O—o—o—O—o—o—o—O
An ‘O’ shows a major success and an ‘o’ shows a minor success, where major/minor could mean the relative significance within particle physics communities, the extent to which physicists anticipated it or simply the amount of journal/media coverage it received. In this sequence, Bose’s paper on a certain class of subatomic particles could be the first ‘O’ and the discovery of the Higgs boson the last ‘O’. And looking at this sequence, one could say Bose’s work led to a lot of the work that came after and ultimately led to the Higgs boson. However, doing that would diminish the amount of study, creativity and persistence that went into each subsequent finding – and would also ignore the fact that we have identified only one branch of endeavour, leading from Bose’s work to the Higgs boson, whereas in reality there are hundreds of branches crisscrossing each other at every o, big or small – and then there are countless epiphanies, ideas and flashes, each one less the product of following the scientific method and more that of a mysterious combination of science and intuition.
By reducing the celebration of Bose’s work to just the Higgs boson point on the branch, we lose the opportunity to know and celebrate the importance of Bose’s work for all the points in between – especially the points that we still haven’t taken the trouble to understand.
Recently, a couple of people forwarded to me a video on WhatsApp about an Indian-American electrical engineer named Nasir Ahmed. I learnt in college (studying engineering) that Ahmed was the co-inventor, along with K. Ramamohan Rao, of the discrete cosine transform, a technique that makes it possible to transmit a given amount of information using far fewer bits than its original representation contains. The video introduced Ahmed’s work as the basis for our being able to take video-conferencing for granted; the discrete cosine transform allows audiovisual data to be compressed by two, maybe three orders of magnitude, making its transmission across the internet much less resource-intensive than if it had to be transmitted without compression.
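To make the compression idea concrete – this is a minimal sketch, not Ahmed and Rao’s formulation, and the 16-sample signal is my own illustrative choice – the discrete cosine transform re-expresses a signal as a sum of cosines, after which most coefficients can be discarded with little loss:

```python
# A sketch of why the discrete cosine transform (DCT) enables
# compression: for smooth signals, most of the energy concentrates in
# the first few coefficients, so the rest can be zeroed with little
# loss. The 16-sample signal below is purely illustrative.
import math

def dct(x):
    """Unnormalised DCT-II of a sequence x."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct(X):
    """Inverse of dct() above (a scaled DCT-III)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

signal = [1.0 + math.sin(2 * math.pi * n / 16) for n in range(16)]
coeffs = dct(signal)

# 'Compress' by keeping only the 4 largest-magnitude coefficients.
kept = sorted(range(16), key=lambda k: -abs(coeffs[k]))[:4]
compressed = [c if k in kept else 0.0 for k, c in enumerate(coeffs)]

approx = idct(compressed)  # reconstruct from a quarter of the data
error = max(abs(a - b) for a, b in zip(signal, approx))
```

Keeping a quarter of the coefficients still reconstructs the signal closely; production codecs such as JPEG and video standards apply the same principle to blocks of pixels, with quantisation on top.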
However, the video did little to address the immediate aftermath of Ahmed’s and Rao’s paper, the work by other scientists that built on it, or its use in other settings, and rested on drawing just one connection between two fairly unrelated events (the discrete cosine transform and its derivatives, many of them created in the same decade, heralded signal compression, but they didn’t particularly anticipate different forms of communication).
This flattening of the history of science, and of technology as the case may be, may be entertaining, but it offers no insight into the processes at work behind these inventions, and certainly doesn’t acknowledge the other achievements that preceded each development. In the video, Ahmed reads out tweets by people reacting to his work as depicted on the show This Is Us. One of them says that it’s because of him, and because of This Is Us, that people are now able to exchange photos and videos of each other around the world, without worrying about distance. But… no; Ahmed himself says in the video, “I couldn’t predict how fast the technology would move” (based on his work).
Put simply, I find such forms of communication – and with them the way we are prompted to think about science – objectionable because they are content with the ‘what’ and aren’t interested in the ‘when’, ‘why’ or ‘how’. And simply enumerating the ‘what’ is practically non-scientific, more so when a few particularly sensational ‘whats’ are privileged over others, encouraging us to ignore the inconvenient details. Other recent examples were G.N. Ramachandran, whose work on protein structure, especially Ramachandran plots, has been connected to pharmaceutical companies’ quest for new drugs and vaccines, and Har Gobind Khorana, whose work on synthesising RNA has been connected to mRNA vaccines.