Science

  • An epistocracy

    The All India Council for Technical Education (AICTE) has proposed a new textbook that will discuss the ‘Indian knowledge system’ via a number of pseudoscientific claims about the supposed inventions and discoveries of ancient India, The Print reported on September 26. The Ministry of Human Resource Development (MHRD) signed off on the move, and the textbook – drawn up by the Bharatiya Vidya Bhavan educational trust – is set to be introduced in 80% of the institutions the AICTE oversees.

    According to the Bharatiya Vidya Bhavan website, “the courses of study” to be introduced via the textbook “were started by the Bhavan’s Centre for Study and Research in Indology under the Delhi Kendra after entering into an agreement with the AICTE”. They include “basic structure of Indian knowledge system; modern science and Indian knowledge system; yoga and holistic health care”, followed by “essence of Indian knowledge tradition covering philosophical tradition; Indian linguistic tradition; Indian artistic tradition and case studies”.

    In all, the textbook will be available to undergraduate students of engineering in institutions other than the IITs and the NITs – a group that, according to the Bhavan, still covers “over 3,000 engineering colleges in the country”.

    Although it is hard to fathom what is going on here, it is clear that the government is not allowing itself to be guided by reason. Otherwise, who would introduce a textbook that would render our graduates even more unemployable, or under-employed, than they already are? There is also a telling statement from an unnamed scholar at the Bhavan who was involved in drafting the textbook; as told to The Print: “For ages now, we have been learning how the British invented things because they ruled us for hundreds of years and wanted us to learn what they felt like. It is now high time to change those things and we hope to do that with this course”.

    The words “what they felt like” indicate that the people who have enabled the drafting and introduction of this book, including elected members of Parliament, harbour a sense of disenfranchisement and now feel entitled to their due: an India made great again under the light of its ancient knowledge, as if the last 2,000 years did not happen. It also does not matter whether the facts as embodied in that knowledge can be considered at par with the methods of modern science. What matters is that the Government of India has today created an opportunity for those who were disempowered by non-Hindu forces to flourish and that they must seize it. And they have.

    In other words, this is a battle for power. It is important for those trying to fight against the introduction of this textbook or whatever else to see it as such because, for example, MHRD minister Prakash Javadekar is not waiting to be told that drinking cow urine to cure cancer is pseudoscientific. It is not a communication gap; Javadekar in all likelihood is not going to drink it himself (even though he is involved in creating a platform to tell the masses that they should).

    Instead, the stakeholders of this textbook are attempting to fortify a power structure that prizes the exclusion of knowledge. Knowledge is power, after all – but an epistocracy cannot replace a democracy; “ignorance doesn’t oppress in the same way that knowledge does,” to adapt the words of David Runciman. For example, the textbook repeatedly references an older text called the ‘Yantra Sarvasva’ and endeavours to establish it as a singular source of certain “facts”. And who can read this text? The upper castes.

    In turn, by awarding funds and space for research to those who claim to be disseminating ancient super-awesome knowledge and shielding them from public scrutiny, the Narendra Modi government is subjecting science to power. A person who peddles a “fact” that Indians flew airplanes fuelled by donkey urine 4,000 years ago no longer need aspire to scholarly credentials; he only has to want to belong to a socio-religious grouping that wields power.

    A textbook that claims India invented batteries millennia before someone in Europe did is a weapon in this movement but does not embody the movement itself. Attempts to make this textbook go away will not make future textbooks go away, and attempts to counter the government’s messaging using the language of science alone will not suffice. For example, good education is key, and our teachers, researchers, educationists and civil society are a crucial part of the resistance. But even as they complain about rising levels of mediocrity and inefficiency, perpetrated by ceaseless administrative meddling, the government does not seek to solve the problem as much as use it as an excuse to perpetrate further mediocrity and discrimination.

    There was no greater proof of this than when a member of the National Steering Committee constituted by the Department of Science and Technology to “validate research on panchgavya” told The Wire in 2017, “With all-round incompetence [of the Indian scientific community], this is only to be expected. … If you had 10-12 interesting and well-thought-out good national-level R&D programmes on the table, [the ‘cowpathy’] efforts will be seen to be marginal and on the fringe. But with nothing on the table, this gains prominence from the government, which will be pushing such an agenda.”

    But we do have well-thought-out national-level R&D programmes. If they are not being picked up by the government, it must be forced to explain why, and to justify all of its decisions, instead of being allowed to bask in the privilege of our cynicism and use the excuse of our silence to sustain its incompetence. Bharatiya Vidya Bhavan’s textbook exists in the wider political economy of banning beef, lynching Dalits, inciting riots, silencing the media and subverting the law, and not in an isolated silo labelled ‘Science vs. Pseudoscience’. It is a call to action for academics and everyone else to protest the MHRD’s decision and – without stopping there – to vocally oppose all other moves by public institutions and officials to curtail our liberties.

    It is also important for us to acknowledge this because we will have to redraft the terms of our victory accordingly. To extend the metaphor of a weapon: the battle can be won by taking away the opponent’s guns, but the war will be won only when the opponent finds its cause to be hopeless. We must fight the battles but we must also end the war.

    The Wire
    September 27, 2018

  • US court settles bitter gene editing patent case

    On September 10, a US court settled an increasingly churlish patent dispute between two research institutions in the country, the University of California (UC) in Berkeley and the Broad Institute, Massachusetts, with great consequence for the commercial use of a powerful gene-editing technology called CRISPR-Cas9.

    The dispute centred on CRISPR’s usability in two different kinds of biological lifeforms: prokaryotes and eukaryotes. UC had devised a way to edit the genetic makeup of prokaryotes using CRISPR in 2012. Broad followed a year later with a method to use CRISPR in eukaryotes. UC subsequently contested the patentability of Broad’s method, saying it was a derivative of UC’s method and couldn’t be patented separately.

    The court, like the US patent office before it, disagreed and upheld Broad’s patent.

    The following FAQ breaks the case down to its nuclear components and assesses the verdict’s implications and future courses of action.

    What are CRISPR and CRISPR-Cas9?

    CRISPR is a natural defensive mechanism that prokaryotes use to protect themselves against viruses. Prokaryotes are smaller, less complex lifeforms and include bacteria and archaea: they are unicellular, and their cells lack a nucleus as well as membrane-bound organelles. The bigger, more complex lifeforms – including all multicellular organisms – are classified as eukaryotes, whose cells contain a membrane-bound nucleus along with other membrane-bound organelles. Because of these and other differences (see here, p. 8), it wasn’t clear if a CRISPR system built for use in prokaryotes could be adapted by a person of reasonable skill for use in eukaryotes with a reasonable chance of success. This distinction is at the heart of the patents dispute.

    CRISPR-Cas9 is essentially a technology that combines CRISPR with a protein called Cas9 to form a molecular tool. This tool can swim to a eukaryote’s DNA, pick out a specific section of the genes and snip it at both ends, removing it from the DNA sequence. Once the section is out, the DNA strand repairs itself to restore the genes. If the original section of genes was faulty (e.g. containing an undesirable mutation), then CRISPR-Cas9 can be used to remove it from the DNA and force the DNA strand to repair itself to a healthier version.

    Researchers have already reported that they are close to using this technology to treat a debilitating condition called Duchenne muscular dystrophy.

    Of course, there are other gene-editing technologies, including zinc-finger nucleases and transcription activator-like effector nucleases, but CRISPR-Cas9 has proved to be more efficient, effective and easier to use. At the same time, a few concerns are starting to emerge about unintended side-effects.

    What was the case timeline?

    The UC team, led by Jennifer Doudna, published a paper in August 2012 describing how an RNA-based system called CRISPR-Cas9 could be used to edit DNA in prokaryotic cells. Editing DNA is a lucrative prospect – then and now – because it allows us in theory to modify the fundamental constitution of biological life, curing debilitating illnesses as much as modifying crops. And what the UC team had found, together with Emmanuelle Charpentier, then of the University of Umea in Sweden, was the first tool that could achieve this. After their paper was published, the UC team filed a patent with the US Patent and Trademarks Office (USPTO) for the use of CRISPR in prokaryotes.

    While UC’s patent was pending, a team led by Feng Zhang from the Broad Institute, set up by the Massachusetts Institute of Technology and Harvard University, Boston, published a paper in 2013 and then built a CRISPR system in 2014 that could work in eukaryotes. Zhang and co. then filed for an expedited patent, which was granted in 2017. At this point, UC complained to the USPTO that the Broad patent infringed on its own – that, effectively, Zhang et al’s work was not patentably distinct from Doudna et al’s work. UC’s own patent for CRISPR use in prokaryotes was granted in early 2018.

    In late 2017, the Patent Trial and Appeal Board (PTAB) of the USPTO upheld the Broad patent, effectively stating that the DNA-editing technologies used in prokaryotes and eukaryotes were “patentably distinct”. Specifically, it had ruled that there was no interference-in-fact, i.e. that UC’s general description of the use of CRISPR in biological systems could not have anticipated, under reasonable circumstances, Broad’s more specific CRISPR invention for use in eukaryotes. An interference-in-fact check is pegged on a so-called ‘obviousness review’. A 1966 SCOTUS case defines four factors using which it can be undertaken:

    (1) the scope and content of the prior art; (2) the differences between the claims and the prior art; (3) the level of ordinary skill in the art; and (4) objective considerations of non-obviousness

    UC decided to appeal the PTAB’s verdict with the US Court of Appeals for the Federal Circuit (CAFC). The latter came to its decision on September 10, ruling in favour of the USPTO and upholding the Broad patent.

    What happens next?

    A lot of things. Let’s classify them as financial, academic, legal and administrative.

    Financial – Where there’s a patent, there’s money. However, there’s more money for Broad than for UC because almost all application of the CRISPR technology will happen in eukaryotes, a domain that includes humans and plants. And because the Broad patent has been upheld, this effectively means the UC patent can apply only to prokaryotes and not to eukaryotes.

    Public attitudes to this affirmation were partly reflected in the share values of three companies intent on commercialising CRISPR tech: Crispr Therapeutics AG (cofounded by Charpentier) and Intellia Therapeutics have licenses with UC and their shares fell by 5.3% and 2.5% respectively; Editas Medicine Inc., which has licenses with Broad, climbed by 6.8%.

    A review in 2017 stated that although “CRISPR IP ownership is claimed by at least seven different parties”, the Broad patent could be a “blocking patent” because of its ancestral nature. This is one reason why the Broad Institute has already issued 13 licenses, more than any of the other patent-holders. In all, the review estimated that the American gene-editing industry will be worth $3.5 billion by 2019, with CRISPR propelling biotechnology to the status of “second highest funded sector in the United States”.

    Academic – The contest between UC and Broad has only worsened the mutual, and deleterious, embitterment between the institutions. In 2015, Broad launched an acrimonious campaign to turn public opinion in its favour, which included attempts to rewrite the history of DNA-editing research and present Zhang’s achievement in a stronger light. Possibly the most damaging thing Broad did was to quote UC’s Doudna herself as having expressed frustration and doubt about whether a CRISPR system for use in bacteria could be adapted for use in eukaryotic cells.

    These quotes were used in Broad’s filings for the patent dispute, undermining UC’s case. However, scientists have argued that science is almost never free of frustration and that Doudna was also right to express doubt because that’s what any good scientist would do: lead with the uncertainty until something to the contrary could be demonstrated. In effect, Broad penalised Doudna for being a good scientist – an action that Michael Eisen, a biologist in Doudna’s department at UC, has said is rooted in universities being able to profit from patents created with taxpayer dollars.

    Legal – It’s important to recognise what UC has actually lost here. UC appealed the PTAB verdict, bringing it to the CAFC, which in turn ruled that the PTAB had not erred in its conclusion. The judge did not reevaluate the evidence and did not hear arguments from the two parties; no new evidence was presented. The court only affirmed that UC, in the eyes of the law, did not have grounds to contest the PTAB verdict. A salient portion from the judgment follows, where the judge writes that some parts of the CRISPR/Cas9 system as used in prokaryotes could have been adapted for use in eukaryotes but that that’s beside the point (emphasis added):

    UC expended substantial time and effort to convince this court that substantial evidence supports the view it would like us to adopt, namely, that a person of ordinary skill would have had a reasonable expectation of success in implementing the CRISPR-Cas9 system in eukaryotes. There is certainly evidence in the record that could support this position. The prior art contained a number of techniques that had been used for adapting prokaryotic systems for use in eukaryotic cells, obstacles adopting other prokaryotic systems had been overcome, and Dr. Carroll suggested using those techniques to implement CRISPR-Cas9 in eukaryotes. We are, however, an appellate body. We do not reweigh the evidence. It is not our role to ask whether substantial evidence supports fact-findings not made by the Board, but instead whether such evidence supports the findings that were in fact made. Here, we conclude that it does.

    Therefore, this is a judgment of the law, not a judgment of the science.

    According to Jacob Sherkow, a professor at the New York Law School, UC can either petition the CAFC for a rehearing or appeal to the Supreme Court. Sherkow added that neither strategy is likely to work because he doesn’t think “this case presents any *novel* legal issues” (emphasis in the original). This means UC will likely return to the patent office and attempt to “salvage what they can from their patent application”.

    There is also another problem. To quote Chemical and Engineering News,

    … it recently became clear that another CRISPR scientist, Virginijus Šikšnys of Vilnius University [Lithuania], filed a patent for CRISPR/Cas9 just weeks before UC Berkeley filed its patent in 2012. While UC Berkeley and Broad were entangled in their dispute, the Šikšnys patent was approved and made public, meaning that USPTO can now hold the Šikšnys patent against UC Berkeley. “That has the potential to sink whatever is left from Berkeley’s patent application,” Sherkow says.

    Administrative – This part is confusing. In the US, the USPTO upheld the Broad patent in February 2017. But in Europe, the European Patent Office (EPO) ruled in favour of Doudna and Charpentier in March 2017. So depending on the jurisdiction, companies that want to commercialise CRISPR technology (for eukaryotes) will have to work with UC in Europe and the Broad Institute in the US. At least one company, DowDuPont, which is using CRISPR to engineer corn and soybean crops to be cultivable without pesticides, has purchased licenses with both institutions.

    The different judgments arise from one difference in how the EPO and the USPTO evaluate ‘no interference-in-fact’. According to a May 2017 report by Sherkow, “In Europe, one is entitled to a broad patent on a new technique, if it demonstrates an ‘inventive step’ over prior methods, even if there [is] no guarantee that it will work for all of its claimed applications.” In the US, on the other hand, each “claimed application” has to be demonstrated and is separately patentable if one application doesn’t follow obviously from the previous. The EPO decision is open to challenge and Broad is likely to use the opportunity to do so.

    By the way, the country with the second-most patents related to CRISPR is China, after the US. Chinese research institutions and industry players have been focusing mostly on knockout mechanisms of CRISPR, which control how undesirable genes in a DNA sequence are removed. To quote at length from the 2017 review,

    The Chinese government has been actively involved in gene-editing funding. The National Natural Science Foundation of China (NNSF), invested $3.5 million in over 40 CRISPR projects during 2015. Through the NNSF and the National Basic Research Program, the Chinese government has funded the first use of CRISPR for the modification of human embryos. Additionally, Shenzhen Jinjia Color Printing Group Co., a public company, has pledged $0.5 million to fund Sun Yat-sen University for studying CRISPR in embryos.

    What do scientists say?

    Scientists’ reactions are still coming in, although no consensus is likely to emerge soon. In the meantime, awards make for a reasonable proxy to determine what scientists think is laudable. On this count, Doudna and Charpentier are clear leaders. Since 2014, Doudna has won 20 awards (excluding one from UNESCO), Charpentier has won dozens and Zhang, nine (although it must be noted that Doudna and Charpentier have been scientists for longer than Zhang has). Doudna, Charpentier and Šikšnys were also jointly awarded the 2018 Kavli Prize in Nanoscience.

    The Wire
    September 11, 2018

  • Criticism of ISRO

    The Statesman‘s editorial on India’s human spaceflight programme ends with the following line:

    Only after placing the seventh and the last satellite in the NavIC system costing Rs 1,400 crore did ISRO realise the atomic clocks in the satellites had become dysfunctional, rendering the fleet a dud.

    This line is wrong.

    1. ISRO could not have realised any earlier that some satellites in the constellation were having issues with their atomic clocks, so the ‘only’ is misplaced.
    2. Only two satellites out of seven were having issues with their clocks, and ISRO has made efforts to replace them.
    3. The fleet was temporarily rendered unusable, but calling it a dud may not be fair: the dysfunctional instruments have since been replaced and the constellation currently awaits operationalisation.

    There may have been other issues with the IRNSS, but the editorial’s last line doesn’t describe one of them.

    Another example: many people are of the impression that Narendra Modi’s announcement on August 15, that India will launch a human into space by 2022, caught ISRO chairman K. Sivan by surprise. This is true – but it was only the announcement that caught Sivan by surprise, not the ambition itself. ISRO has been preparing for human spaceflight for over a decade now. It is certainly not the sort of ambition that can be prepared for and achieved in four years.

    However, NewsBytes said:

    Commenting on Modi’s announcement, ISRO chief K Sivan had then said, “It came as a big surprise to us.” Yet, the question is, should it have come as a big surprise to ISRO? Logic says no. ISRO, being the agency responsible for all of India’s space missions, should ideally have been consulted before lofty promises were made. But, Modi went ahead with it anyway. Given the lack of notice, ISRO is now engaged in frantic attempts to recruit astronauts, improve existing technology, and develop new technology to meet the deadline set by Modi.

    I’m glad more newspersons are writing critically of ISRO. But space is a sector where there’s very little low-hanging fruit that can be plucked and juiced into a political analysis, so there’s a lot more work required to separate a critique of ISRO from chest-thumping and render the former meaningful.

    §

    Random thought: Facts can be assimilated into a bundle, and bundles lend themselves to interpretation. Now, there’s bound to be a correlation between the facts-to-interpretations (F-I) ratio and the correctness of news coverage. The larger the F-I ratio, the more likely one is to find many small mistakes across multiple news reports (i.e. on the topic of those facts) and big mistakes in a few – i.e. a bigger range. On the other hand, the smaller the F-I ratio, the fewer small as well as big mistakes there are likely to be – i.e. a smaller range. Now, by comparing these two ranges across press coverage of a variety of technical subjects where quantitative answers are common (e.g. in physics but not in sociology), and using normalised values of F-I if necessary, would it be possible to elicit the relative strengths and weaknesses of the mainstream media among those subjects?
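    If one actually wanted to run the comparison, the bookkeeping is simple – a toy sketch in Python, where every number and subject label is invented for illustration, and the hard part (counting facts, interpretations and per-report mistakes by hand from real coverage) is assumed away:

    ```python
    # Toy sketch of the proposed F-I comparison. All figures are made up;
    # in practice 'facts', 'interpretations' and per-report mistake counts
    # would have to be tallied by hand from actual coverage.

    coverage = {
        # subject: (facts, interpretations, mistakes counted in each report)
        "physics":   (40, 5,  [0, 1, 2, 0, 7]),
        "genetics":  (25, 10, [1, 1, 2, 3, 2]),
        "sociology": (10, 20, [1, 0, 1, 1, 0]),
    }

    max_fi = max(f / i for f, i, _ in coverage.values())

    for subject, (facts, interps, mistakes) in coverage.items():
        fi = facts / interps                    # the F-I ratio
        fi_norm = fi / max_fi                   # normalised, as suggested above
        spread = max(mistakes) - min(mistakes)  # the 'range' of mistakes
        print(f"{subject:10s}  F-I = {fi:4.1f}  (norm {fi_norm:.2f})  "
              f"mistake range = {spread}")
    ```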

    Featured image credit: Oleg Laptev/Unsplash.

  • How do you determine the naturalness of homosexuality?

    “Homosexual carnal intercourse between two consenting adults” is legal in India now. It wasn’t for lack of reason or scientific data that the item of legislation that rendered sodomy illegal – Section 377 – had been retained for so long. Instead, it was more a question of whether sodomy offended public decency and morality. On September 6, the Supreme Court of India said no consensual sexual act between adults, whether of the same gender or otherwise, could be considered illegal or offensive to public decency or morality.

    The US had this moment in 2003, but there, science did play a role. In the landmark case Lawrence v. Texas, the Supreme Court of the US was able to rule that homosexuality was not a sin against nature on the back of a growing body of evidence that homosexuality exists in nature. More broadly, science helped determine the construction of sexuality in human and non-human species and rescued it from the chokehold of religious ideals and the stigma it carried. CJI Dipak Misra and Justice A.M. Khanwilkar may not have laboured through the scientific evidence in their own judgment but the veins of rationalism are evident in their syntax. Consider this excerpt (from the full judgment; emphasis added):

    What nature gives is natural. That is called nature within. Thus, that part of the personality of a person has to be respected and not despised or looked down upon. The said inherent nature and the associated natural impulses in that regard are to be accepted. Non-acceptance of it by any societal norm or notion and punishment by law on some obsolete idea and idealism affects the kernel of the identity of an individual. Destruction of individual identity would tantamount to crushing of intrinsic dignity that cumulatively encapsulates the values of privacy, choice, freedom of speech and other expressions. It can be viewed from another angle. An individual in exercise of his choice may feel that he/she should be left alone but no one, and we mean, no one, should impose solitude on him/her.

    From another part of the same judgment:

    It is submitted on behalf of the petitioners and the intervenors that homosexuality, bisexuality and other sexual orientations are equally natural and reflective of expression of choice and inclination founded on consent of two persons who are eligible in law to express such consent and it is neither a physical nor a mental illness, rather they are natural variations of expression and free thinking process and to make it a criminal offence is offensive of the well established principles pertaining to individual dignity and decisional autonomy inherent in the personality of a person, a great discomfort to gender identity, destruction of the right to privacy which is a pivotal facet of Article 21 of the Constitution, unpalatable to the highly cherished idea of freedom and a trauma to the conception of expression of biological desire which revolves around the pattern of mosaic of true manifestation of identity. That apart, the phrase ‘order of nature’ is limited to the procreative concept that may have been conceived as natural by a systemic conservative approach and such limitations do not really take note of inborn traits or developed orientations or, for that matter, consensual acts which relate to responses to series of free exercise of assertions of one‘s bodily autonomy. … It is urged that the American Psychological Association has opined that sexual orientation is a natural condition and attraction towards the same sex or opposite sex are both naturally equal, the only difference being that the same sex attraction arises in far lesser numbers.

    Many of these arguments hinge on what it means to be natural. But what is nature, and what is naturalness*? The Wikipedia article on homosexual behaviour among animals carries an instructive line in this regard, and vis-a-vis the tenet of peccatum contra naturam (Latin for “sin against nature”): “The observation of homosexual behaviour in animals can be seen as both an argument for and against the acceptance of homosexuality in humans.” It’s ‘for’ because if animals do it, then it’s natural; it’s ‘against’ because humans are not meant to be like other animals. It’s a ridiculous position to be in. I find a quote originally about economics to be useful here:

    … if background conditions determine, in a way which in principle falls outside a theory, what counts as the events over which the theory ranges, the theory is at the mercy of changes in these conditions which at any moment can undermine the predictive power of the theory.

    The philosopher Richard Norman had intended to develop a theory that could predict how much, rather than what kind of, resistance certain technologies would meet from certain cultures based on what traditions each technology appeared to offend. He succeeded in that he was able to explain why some cultures struggled, and continue to struggle, with the acceptability of technologies like in vitro fertilisation and contraception, and what the latter might have in common with homosexuality. He pegged it on background conditions. Russell Blackford, a philosopher at the University of Newcastle, Australia, summarised Norman’s thesis thus in a 2006 review (emphasis added):

    According to Norman’s approach, anything that may threaten a culture’s basic assumptions about how ordinary human life works – especially assumptions about sex and its relationship with conception and birth, the development and rearing of children, the roles of men and women, the processes of ageing and death – is likely be disquieting to at least some people. For example, homosexual practices may seem to threaten a background condition that relates to sex and procreation. If there are recognised choices that include sexual acts with no possibility of pregnancy, then one of the background conditions has been lost.

    Blackford writes in another part of the review (emphasis added):

    Norman argues that the discomfort that some people feel about IVF and futuristic prospects such as that of biological immortality comes from a sense that important background conditions to choice – relating to procreation and death – are threatened. In this context, a “threat” to the background conditions seems to mean that certain conditions may no longer pertain. A sense that some background conditions are under threat can be expressed as a claim that nature is being interfered with. When such claims are made, nature is being equated with the background conditions recognised within the culture concerned. Norman, however, defends IVF on the basis that incremental changes to our own culture’s background conditions can be absorbed into our thinking.

    However, given that the Bharatiya Janata Party has refused to issue a statement on the historic SC verdict, signalling its moral ambiguity (at the very least) on the subject, it seems unlikely that the party’s members – i.e. the country’s ministers – will be open to making incremental changes in their worldview to accommodate the naturalisation of “unnatural” sexual acts, so to speak.

    *Not to be confused with the naturalness of particle physics, or maybe it is.

    The Wire
    September 7, 2018

    Featured image credit: gagnonm1993/pixabay.

  • Satire: Ducks in water could increase oxygen content

    According to Biplab Deb, the chief minister of Tripura, the oxygen content of water will increase if ducks swim in it. [Satire begins here] This is a sensational new discovery that has drastic implications for Earth’s future. The lawmaker’s thinking suggests it might have something to do with duck-farts.

    There are thousands of water bodies around the world where masses of ducks have been swimming for tens of millions of years, and which could now be flush with oxygen. As a result, our planet now appears to be due for major bio-ecological changes as the abundance of oxygen is likely to spur cascading evolutionary effects.

    In fact, it is being speculated that, in the aftermath of Deb’s confirmation, the sixth extinction of the Anthropocene epoch might just be halted in its tracks and forced to do a volte-face; all it will take is lots of ducks. This might explain why oil companies in Texas are confident that their proposal to have the government erect a $12-billion ‘sea wall’ to protect their coastal facilities against rising water levels will be taken seriously.

    At the same time, there also appears to be growing public resentment against scientists, with people wondering whether supposed researchers spending tax dollars might have kept this simple solution away from governments in an effort to maintain their self-importance. Major news publications like OneOp are reporting that this could be an urban naxal conspiracy and that a concerned ministry is expected to conduct raids soon. IndiaIndiaIndia reported that there’s a joke somewhere in here about going quack.

    According to Indian Express, it appears Deb had also discovered that the ducks would recycle the oxygen in the water and prevent its molecules from going to waste. Thankfully for the minister as well as for the rest of us, oxygen molecules don’t affect the pH value of water, or we would also be confronted with a major acidity/salinity catastrophe. In all, it’s good news for everyone, including the people who will supply the 50,000 ducklings Deb says he will distribute among Tripura’s fisherfolk.

    A senior scientist who didn’t wish to be named expressed surprise at the finding, and said he had applied for a grant to study the molecular chemistry of duck-farts. “I expect to hear back in five years,” he said. The same individual also expressed regret later. “We all had a chance to find this out before but we did not. It’s because we didn’t study the Vedas as thoroughly as we should have. Hopefully we will learn from this mistake. Om.”

    NPMC

    Featured image credit: Ryk Naves/Unsplash.

  • Absolute hot

    There’s only one absolute zero but there are multiple absolute ‘hots’, depending on the temperature at which various theories of physics break down. This is an interesting conception because, while absolute zero is very well-defined and perfectly understood, absolute hot simply stands for the exact opposite, not in a physical sense but in an epistemological one: it is the temperature at which the object of study resembles something not understood at all. According to the short Wikipedia article on it, there are two well-known absolute hots:

    1. Planck temperature – when the force of gravity becomes as strong as the other fundamental forces, leading to a system describable only by theories of quantum gravity, which don’t exist yet (see the sketch after this list)
    2. Hagedorn temperature – when the system’s energy becomes so large that instead of heating up further, it begins to produce hadrons (particles made up of quarks and gluons, like protons and neutrons) or turns into a quark-gluon plasma
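    Of the two, the Planck temperature at least can be computed from fundamental constants alone. A minimal sketch, using the standard formula T_P = √(ħc⁵/Gk_B²) and CODATA values for the constants:

    ```python
    import math

    # Planck temperature: T_P = sqrt(hbar * c^5 / (G * kB^2)).
    hbar = 1.054571817e-34   # reduced Planck constant, J s
    c = 2.99792458e8         # speed of light, m/s
    G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
    kB = 1.380649e-23        # Boltzmann constant, J/K

    T_planck = math.sqrt(hbar * c**5 / (G * kB**2))
    print(f"Planck temperature ~ {T_planck:.3e} K")   # ~1.417e32 K
    ```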

    Over drinks yesterday, the physicist known as The Soufflé provided the example of a black hole. Thermodynamics stipulates that there is an upper limit to the amount of energy that can be packed into a given volume of space-time. So if you keep heating this volume even after it has breached its energy threshold, then it will transform into a black hole (by the rules of general relativity). For this system, its absolute hot will have been reached – and, from the epistemological point of view, we don’t know the microscopic structure of black holes. So there.

    However, it seems not all physical systems behave this way, i.e. become something unrecognisable beyond their absolute hot temperature. Quantum thermodynamics describes such systems as having negative temperatures on the kelvin scale. You are probably thinking it is simply colder than absolute zero – a forbidden state in classical thermodynamics – but this is not it. There seems to be a paradox here but it is more a cognitive illusion. That is, the paradox comes undone when you acknowledge the difference between energy and entropy.

    The energy of a system is the theoretical maximum capacity it has to perform work. The entropy of the system is the amount of energy that cannot be used to do work, also interpreted as a degree of disorderliness. When a ‘conventional’ system is heated, its energy and entropy both increase. In a system with negative temperature, heating increases its energy while bringing its entropy down. In other words, a system with negative temperature becomes more energetic and is also able to dedicate a larger fraction of that energy towards work at higher temperatures.

    Such a system is believed to exist only when it can access quantum phenomena. More fundamentally, such a system is possible only if the number of high-energy states it has is limited. Classical systems – anything you can observe in daily life, such as a pot of tea – can be heated to as high a temperature as needed. But in the quantum realm, akin to what classical physics says about the birth of black holes – that their energy density becomes so high that space-time warps around the system – systems of elementary particles are often allowed to possess only certain energies. As a result, even if the system is heated beyond its absolute hot, its energy can’t change, or at least there will be nothing to show for it.
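    A toy model makes the sign flip concrete. Take N two-level systems with a bounded set of energy states, count the entropy at each total energy and read the temperature off 1/T = dS/dE – a minimal sketch, with the level spacing an arbitrary number chosen purely for illustration:

    ```python
    import math

    # N two-level systems, each in the ground state (energy 0) or the excited
    # state (energy eps). With n systems excited, E = n * eps and the entropy
    # is S = kB * ln C(N, n). Temperature follows from 1/T = dS/dE.
    kB = 1.380649e-23   # J/K
    eps = 1e-21         # J, arbitrary level spacing for illustration
    N = 1000

    def entropy(n):
        return kB * math.log(math.comb(N, n))

    for n in (100, 400, 600, 900):
        dS = entropy(n + 1) - entropy(n - 1)   # finite-difference slope of S(E)
        T = (2 * eps) / dS
        print(f"{n:3d} excited -> T = {T:+.2e} K")
    # Past half-filling (n > N/2), entropy falls as energy rises, so T < 0.
    ```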

    While it was a monumentally drab subject in college, thermodynamics – as I have learnt since – can be endlessly fascinating in the same way that, say, the study of financial instruments can be: where the latter illuminates the pulse of capitalism, thermodynamics – as in the study of heat, energy and entropy – encapsulates the physical pulse of the natural universe. You simply need to go where its laws take you to piece together many things about reality.

    Of course, a thermodynamic view of the world may not always be the most useful way to study it. At the same time, there will almost always be a way to translate some theory of the world into thermodynamic equivalents. In that sense, the laws and rules of thermodynamics allow its practitioners to speak a kind of universal language the way Douglas Adams’s Babel fish does.

    The most famous example of this in the popular conception of scientific research is the work of Stephen Hawking. Together with Jacob Bekenstein and others, Hawking used thermodynamic calculations to show (on paper) that black holes were mortal and in fact emitted radiation out into the universe, instead of sucking everything in. He also found that the total entropy contained inside a black hole – its overall disorderliness – was closely related to its surface area. This was in the 1970s, but the idea that there are opportunities to understand the insides of a black hole by studying its outsides is as profound today as it was then.
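    That entropy-area relation is the Bekenstein-Hawking formula, S = k_B·c³·A/(4Għ). A quick sketch of what it yields for a black hole of one solar mass – a textbook exercise, not anything from Hawking’s papers specifically:

    ```python
    import math

    # Bekenstein-Hawking entropy: S = kB * c^3 * A / (4 * G * hbar),
    # where A = 4 * pi * r_s^2 is the horizon area and
    # r_s = 2 * G * M / c^2 is the Schwarzschild radius.
    kB = 1.380649e-23        # J/K
    c = 2.99792458e8         # m/s
    G = 6.67430e-11          # m^3 kg^-1 s^-2
    hbar = 1.054571817e-34   # J s
    M_sun = 1.989e30         # kg

    r_s = 2 * G * M_sun / c**2
    A = 4 * math.pi * r_s**2
    S = kB * c**3 * A / (4 * G * hbar)
    print(f"horizon radius ~ {r_s/1000:.1f} km")
    print(f"entropy ~ {S:.2e} J/K (~{S/kB:.1e} in units of kB)")
    ```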

  • Collective spin modes in ultracold atoms

    Physicists created a Bose-Einstein condensate of chromium atoms, ensured the atomic spins were each aligned 90º to the condensate’s plane, applied a magnetic field gradient and separated the atoms by a small but relatively significant distance, fired radio pulses at the condensate to get the atoms’ spins to rotate – and then measured the way the atoms were spinning. They found that instead of each atom having its own direction of spin, they all exhibited a collective spin that they tried to maintain!

    This is fascinating because such behaviour has previously only been observed in solids and liquids, where atoms are more closely situated, and not in a Bose-Einstein condensate, which is more like a dilute gas. That it has been observed in the latter points to the presence of quantum mechanical phenomena that are reaching across atoms to influence them to behave collectively.

    A Bose-Einstein condensate is a group of particles that has been cooled to such a low temperature that the particles stop behaving individually and settle into a single shared quantum state. In this state, all of the particles acquire the same quantum numbers and coexist to form a new phase of matter: the condensate.

    There are four kinds of quantum numbers for every particle, and in a class of particles called fermions, no two particles in the same system can have the same set of four numbers – e.g. all the electrons in an atom have different values for each of these numbers. However, particles called bosons (such as the photon) are not bound by this rule, and when cooled to a really low temperature they form a Bose-Einstein condensate: a system of particles that all have the same four quantum numbers, i.e. occupying the same lowest energy state.

    In this state, all the particles in the condensate behave together like a single fluid, even though the condensate itself is more similar to a dilute gas. Physically this may sound boring, but a Bose-Einstein condensate is known to have unique quantum mechanical properties that particles don’t otherwise exhibit.

    In the experiment described above, physicists created a Bose-Einstein condensate by cooling approx. 40,000 chromium atoms to 400 nK and then confining them using an optical trap. While atoms aren’t exactly particles, and are instead imagined to be composed of them, the Stern-Gerlach experiment showed in 1922 that atomic-scale systems, including atoms, do exhibit quantum mechanical properties as a whole.

    The chromium atoms’ spins – for simplicity’s sake imagined to be the atoms’ individual orientation – were aligned perpendicular to the axis of the rugby-ball-shaped Bose-Einstein condensate. Next, using a technique similar to the Stern-Gerlach experiment, the physicists applied a graded, i.e. uneven, magnetic field along the plane of the condensate. This caused each atom’s spin to become coupled with – or affected by – those of its neighbours such that all the atoms were encouraged to have the same alignment (keeping the condensate in its ground state). The graded magnetic field also caused the atoms to move apart slightly. Finally, radio pulses were fired at the atoms such that they produced a torque that caused the atoms to spin, i.e. change their orientation.

    When the spins fall out of alignment, the spin coupling should also fall out of alignment, and the atoms would all become aligned differently. … at least this is what the physicists thought would happen. It didn’t. The atoms were found to be reorienting under the radio pulses’ assault in a spin wave. It was as if each atom’s spin was holding the hands of the two spins on either side of it and refusing to let go, causing the atoms to move together.

    In this video, looking upon the surface of the liquid is akin to looking upon a sea of atoms in the condensate. Imagine you were looking at the waterbody edge-on. The ripples would be the atomic spins bobbing up and down because of the radio pulses, which would be the metaphorical stones thrown in the water. According to the physicists, when the magnetic field’s gradient is smaller, the shape of the bobbing motion – a.k.a. the spin wave – would look more like the graph below:

    This is the first time such a phenomenon has been observed in a Bose-Einstein condensate – and in a dilute gas, no less. In their effort to understand what could be causing this so-called collective spin mode, the physicists also found some interesting connections. As they write in their preprint paper:

    Although complex oscillatory behaviours are obtained when b [the magnetic field gradient] is large, at low gradients we observe a rather simple damped oscillatory behaviour for both the population dynamics and the separation [between atoms], … The amplitude of oscillation also depends on b, and vanishes for b → 0. … These observations indicate that the interaction with magnetic field gradients has excited a collective mode which couples the [condensate’s] spin degrees of freedom to [its] spatial degrees of freedom. (emphasis mine)
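    To make “simple damped oscillatory behaviour” concrete: such traces are usually summarised by fitting a damped cosine and quoting its amplitude, frequency and damping time. A sketch on synthetic data – none of these numbers are from the experiment, and this is not the team’s analysis code:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # A damped cosine of the kind used to summarise oscillatory dynamics:
    # amplitude A, damping time tau, frequency f, phase phi, offset.
    def damped(t, A, tau, f, phi, offset):
        return A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi) + offset

    rng = np.random.default_rng(0)
    t = np.linspace(0, 50e-3, 200)                   # 50 ms of fake 'data'
    data = damped(t, 0.4, 20e-3, 120.0, 0.3, 0.5)    # invented true parameters
    data = data + rng.normal(0, 0.02, t.size)        # add measurement noise

    popt, _ = curve_fit(damped, t, data, p0=[0.3, 10e-3, 100.0, 0.0, 0.5])
    A, tau, f, phi, offset = popt
    print(f"amplitude = {A:.2f}, damping time = {tau*1e3:.1f} ms, f = {f:.0f} Hz")
    ```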

    Even more interestingly, according to the physicists, the condensate under these specially engineered circumstances behaved like a ferrofluid, a type of fluid that, in the words of Physics World, “becomes strongly magnetised when placed in a magnetic field”. They realised this was the case because they found that they could predict the condensate’s behaviour using the rules of ferrofluid hydrodynamics.

  • Remembering S. Pancharatnam

    Scientists have combined one atom of sodium (Na) and one of caesium (Cs) to form one molecule of NaCs, achieving the most precisely controlled chemical reaction in history. They did so using a fascinating bit of technology called a magneto-optical trap. While the trap itself has a sophisticated design, its essential modus operandi is founded on a deceptively simple technique called Doppler cooling.

    If a laser is shined on an atom that is moving towards the source of light, then the atom will absorb a photon (due to the Doppler effect). Because of the conservation of momentum, the atom ‘acquires’ the photon’s momentum as well, and its own momentum drops. The laser is tuned such that its frequency imparts the atom with a photon that kicks one electron to a higher energy state. When the electron drops back down to its original state, it emits the photon, and the atom spits it out.

    The emitted photon’s recoil gives the atom another momentum ‘kick’ (a la Newton’s third law), but because it happens in a random direction, the atom has been effectively slowed in the direction it was originally moving in. By repeating this process over and over, an atom can be slowed down considerably (from hundreds of metres per second to a few centimetres per second), dragging its kinetic energy down as well in the process.

    Since the kinetic energy of a set of atoms defines the temperature of the group, this Doppler cooling can effectively cool atoms down. The technique is best suited for atoms that have a simple electronic structure – where, for example, the electrons don’t have more than two possible states to be in: the ground state and one excited state. However, most atoms have a more complicated hyperfine structure, limiting the applications of Doppler cooling. Additionally, there is a limit to how far the technique can go: the atom’s kinetic energy can’t be lowered below the recoil energy imparted by the departing photon, which corresponds to a minimum ‘recoil temperature’.
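    To put numbers on these kicks and limits: a minimal sketch with textbook values for sodium, the classic Doppler-cooling atom. The momentum formulas are standard; the thermal speed is an assumed typical value, and conventions for the recoil temperature differ by factors of two.

    ```python
    import math

    h = 6.62607015e-34        # Planck constant, J s
    hbar = h / (2 * math.pi)
    kB = 1.380649e-23         # J/K
    m = 3.82e-26              # kg, mass of a sodium atom
    wavelength = 589e-9       # m, the sodium D line

    k = 2 * math.pi / wavelength
    v_recoil = hbar * k / m                  # velocity change per photon
    T_recoil = (hbar * k)**2 / (m * kB)      # recoil temperature (one convention)
    v_thermal = 570.0                        # m/s, typical speed in hot Na vapour

    print(f"recoil velocity    ~ {v_recoil*100:.1f} cm/s per photon")
    print(f"recoil temperature ~ {T_recoil*1e6:.1f} uK")
    print(f"photons to stop a {v_thermal:.0f} m/s atom: ~{v_thermal/v_recoil:,.0f}")
    ```

    Each kick is only a few centimetres per second, which is why tens of thousands of absorption-emission cycles are needed to bring a thermal atom nearly to rest.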

    One alternative is called Sisyphus cooling. Instead of constantly removing the kinetic energy of an atom, Sisyphus cooling uses a combination of lasers to create a jagged potential gradient such that an atom in motion is forced to move from a region of lower potential to one that is higher.

    Imagine this ‘jag’ as a series of mountains. The atom moves up the first mountain, in the process of which its kinetic energy is converted to potential energy. At the summit, an optical pump – a technique similar to Doppler cooling – removes this potential energy, dropping the atom to a state with lower energy than it had before climbing the mountain. And because the atom is still in motion, it begins to climb the second mountain, after which it is left with even lower energy.

    Once the atom has crossed a series of mountains, successive conversions of kinetic to potential energy, and successive pump-outs of this potential energy, leave it with very little energy to call its own. In short, it has been cooled to a sub-Doppler temperature. The title of ‘Sisyphus’ is self-explanatory at this point: like the Greek king cursed to roll a boulder uphill only for it to roll back down as he neared the peak, the atom is also forced to climb uphill only for the optical pump to send it back down each time.

    Interestingly, Claude Cohen-Tannoudji, the French physicist who devised Sisyphus cooling and won a piece of the physics Nobel Prize in 1997 for it, published a paper in Current Science on the subject in 1994. This issue of Current Science was dedicated to the work of Shivaramakrishnan Pancharatnam, a physicist noted for his work in optics. The foreword, penned by George William Series, with whom Pancharatnam worked from 1964 at St Catherine’s College, Oxford, until he died in 1969, states,

    [He] made some outstanding contributions to optics, first, in the fifties, in the area of polarisation and coherence phenomena in the classical regime, and then, in the sixties, in the study of atoms simultaneously interacting with resonant radiation and low frequency magnetic fields. His work in the latter area drew international attention before it was cut short by his early death at the age of thirty-five. … But it is fair to say that his work received renewed attention and acclaim only after the recognition, in the eighties, that he had derived and used the concept of geometric phases in his studies of the interference of polarised light.

    Cohen-Tannoudji acknowledges Pancharatnam’s research as part of the foundation on which more advanced cooling/trapping techniques, like the Sisyphus, rest. From his paper,

    All Pancharathnam’s works were done at a time where the only light sources available for optical pumping experiments were spectral lamps, excited by DC or microwave discharges and emitting a light with a broad spectral width and a weak intensity. The spectacular development of tunable laser sources, which started in the early seventies, stimulated several experimental and theoretical studies. … A new research field, called laser cooling and trapping of atoms, has appeared and is expanding very rapidly. … In this special issue dedicated to the memory of S. Pancharathnam, I would like to briefly describe two examples of recent developments which, I am sure, would have pleased him, because they use concepts which were quite familiar to him.

    Pancharatnam’s doctoral adviser was C.V. Raman, at the Raman Research Institute. Pancharatnam is best known for independently discovering the geometric phase in the study of waves, in 1956.

    All waves can be described by their phase and amplitude. When the values of both parameters are changed at the same time, and slowly, one can observe the wave evolving through different states. In some cases, when the phase and amplitude are cycled through a series of values and brought back to their original values, the wave looks different from what it did at the start. The effective shift in phase is called the geometric phase.
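    Pancharatnam’s version of this phase can be computed directly for polarised light – a minimal numpy sketch using three standard Jones vectors. The states and the half-the-solid-angle check are textbook results, not anything from the papers discussed here, and the overall sign depends on convention:

    ```python
    import numpy as np

    # Pancharatnam's geometric phase for a cycle of polarisation states,
    # from the Pancharatnam connection:
    #   gamma = arg( <psi1|psi2> <psi2|psi3> <psi3|psi1> )
    # The three Jones vectors below (horizontal, diagonal, right-circular)
    # span an octant of the Poincare sphere, whose solid angle is pi/2;
    # the geometric phase should come out to half that, i.e. pi/4.

    H = np.array([1, 0], dtype=complex)                 # horizontal
    D = np.array([1, 1], dtype=complex) / np.sqrt(2)    # diagonal (45 degrees)
    R = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # right-circular

    overlap = np.vdot(H, D) * np.vdot(D, R) * np.vdot(R, H)
    gamma = np.angle(overlap)
    print(f"geometric phase = {gamma:.4f} rad (pi/4 = {np.pi/4:.4f})")
    ```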

    The British physicist Michael Berry was able to provide a generalised description of the geometric phase in 1986, and it has since been commonly known as the Berry phase. He, too, had published an article in that issue of Current Science, in which he acknowledges that he couldn’t properly appreciate the relevance of Pancharatnam’s paper on the geometric phase until he visited Sivaraj Ramaseshan in Bangalore in 1987. Berry’s article concludes thus:

    Now, as we remember Pancharatnam’s untimely death in his creative prime, and celebrate his youthful achievements, it is time to look again through all his work. Who knows what further delicious physics this will reveal?

    Delicious indeed. Modern science – such as one that can guide two atoms, manoeuvred one by one, step by step, to strike a chemical bond under the watchful gaze of physicists trying to build better quantum computers – stands on the shoulders of many giants. One of them was Pancharatnam.

  • ‘Weak charge’ measurement holds up SM prediction

    Various dark matter detectors around the world, massive particle accelerators and colliders, powerful telescopes on the ground and in space all have their distinct agendas but ultimately what unites them is humankind’s quest to understand what the hell this universe is on about. There are unanswered questions in every branch of scientific endeavour that will keep us busy for millennia to come.

    Among them, physics seems to be suffering uniquely, as it stumbles even as we speak through a ‘nightmare scenario’: the most sensitive measurements we have made of the physical reality around us, at the largest and smallest scales, don’t agree with what physicists have been able to work out on paper. Something’s gotta give – but scientists don’t know where or how they will find their answers.

    The Qweak experiment at the Jefferson Lab, Virginia, is one of scores of experiments around the world trying to find a way out of the nightmare scenario. And Qweak is doing that by studying how the rate at which electrons scatter off a proton is affected by the electrons’ polarisation (a.k.a. spin polarisation: whether the spin of each electron is “left” or “right”).

    Unlike instruments like the Large Hadron Collider, which are very big, operate at much higher energies, are expensive and are used to look for new particles hiding in spacetime, Qweak and others like it make ultra-precise measurements of known values, in effect studying the effects of particles both known and unknown on natural phenomena.

    And if these experiments are able to find that these values deviate at some level from that predicted by the theory, physicists will have the break they’re looking for. For example, if Qweak is the one to break new ground, then physicists will have reason to suspect that the two nuclear forces of nature, simply called strong and weak, hold some secrets.

    However, Qweak’s latest – and possibly its last – results don’t break new ground. In fact, they assert that the current theory of particle physics is correct, the same theory that physicists are trying to break free of.

    Most of us are familiar with protons and electrons: they’re subatomic particles, carry positive and negative charges respectively, and are the stuff of one chapter of high-school physics. What students of science find out only later is that electrons are fundamental particles – they’re not made up of smaller particles – but protons are not: protons are made up of quarks and gluons.

    Interactions between electrons and quarks are mediated by two fundamental forces: the electromagnetic and the weak nuclear. The electromagnetic force is much stronger than the aptly named weak nuclear force. On the other hand, it is agnostic to the electron’s polarisation while the weak nuclear force is sensitive to it. In fact, the weak nuclear force is known to respond differently to left- and right-handed particles.

    When electrons are bombarded at protons, the electrons are scattered off. Scientists measure how often this happens and at what angle, together with the electrons’ polarisation – and try to find correlations between the two sets of data.

    An illustration showing the expected outcomes when left- and right-handed electrons, visualised as mirror-images of each other, scatter off of a proton. Credit: doi:10.1038/s41586-018-0096-0

    At Qweak, the electrons were accelerated to 1.16 GeV and bombarded at a tank of liquid hydrogen. A detector positioned near the tank picked up on electrons scattered at angles between 5.8º and 11.6º. By finely tuning different aspects of this setup, the scientists were able to up the measurement precision to 10 parts per billion.

    For example, they were able to achieve a detection rate of 7 billion electrons per second, a target luminosity of 1.7 × 10³⁹ cm⁻² s⁻¹ and provide a polarised beam of electrons at 180 µA – all considered high for an experiment of this kind.
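    Those rates matter because the statistical error on a measured asymmetry shrinks only as 1/√N. A back-of-the-envelope sketch – the detection rate is from the text, while the target uncertainty and the idealised scaling (no polarisation dilution, no systematics) are my assumptions:

    ```python
    rate = 7e9              # detected electrons per second (from the text)
    target_error = 1e-8     # desired absolute uncertainty, ~10 parts per billion

    # For a pure counting experiment, sigma_A ~ 1/sqrt(N) => N ~ 1/sigma_A^2.
    N_needed = 1.0 / target_error**2
    seconds = N_needed / rate
    print(f"events needed ~ {N_needed:.0e}")
    print(f"beam time     ~ {seconds/86400:.0f} days of continuous running")
    ```

    That works out to roughly 10¹⁶ scattering events, or a couple of weeks of uninterrupted beam, before systematics are even considered – which is why such precision experiments run for years.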

    The scientists were looking for patterns in the detector data that would tell them something about the proton’s weak charge: the strength with which it interacts with electrons via the weak nuclear force. (Its notation is Qweak, hence the experiment’s name.)

    At Qweak, they’re doing this by studying how the electrons are scattered versus their polarisation. The Standard Model (SM) of particle physics, the theory that physicists work with to understand the behaviour of elementary particles, predicts that the number of left- and right-handed electrons scattered should differ by one for every 10 million interactions. If this number is found to be bigger or smaller than predicted when measured in the wild, then the Standard Model will be in trouble – much to physicists’ delight.

    SM’s corresponding value for the proton’s weak charge is 0.0708. At Qweak, the value was measured to be 0.0719 ± 0.0045, i.e. between 0.0674 and 0.0764, completely agreeing with the SM prediction. Something’s gotta give – but it’s not going to be the proton’s weak charge for now.

    Paper: Precision measurement of the weak charge of the proton

    Featured image credit: Pexels/Unsplash.

  • Myth of harmful cell phone radiation is good business for IndiGo

    When I fly, I always fly IndiGo. They’re not perfect but they and their services have become familiar, from their website (where I book my tickets) to when I exit the airport at my destination. The efficiency with which the IndiGo staff works – rather the economy of processes they follow – has seemed well thought-out. (For example, the air hostesses are sweet but the pilot also chips in over the intercom, keeping passengers updated about how high and fast they’re flying, etc.).

    On my most recent flight, however, this facade of sanity was disturbed when I saw the following advertisement in their in-flight magazine:

    Credit: Vasudevan Mukunth

    You can see how that’d have gotten my goat. IndiGo strives to offer a highly optimised journey for the domestic traveller – including, it turns out, a healthy dose of pseudoscience. The funny thing is that the handheld extension plugged into the mobile phone has an electrical and electronic architecture similar to the one working inside the phone; the only difference is the absence of a signal receiver and emitter. It then follows that whatever radiation the phone is alleged to be serving as a hub of is all around us: if your phone is not on a call right now, some other phone in your vicinity surely is.

    Cell phone radiation is not harmful because it is not ionising radiation. It’s that simple. Only ionising radiation can harm the body. It’s okay to want to protect yourself from threats but to believe your mobile phone is giving your head or your genitals cancer is stupid. On top of this, the product being advertised – aptly called the Phoni3 – promises to cut out 95% of the nonexistent harmful radiation. *facepalm* This is consumerism at the peak of its sway.
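    The ionising/non-ionising distinction is a one-line calculation with Planck’s relation E = hf. A quick sketch comparing a single photon at typical mobile and Wi-Fi frequencies (my assumed examples) with the 13.6 eV needed to ionise hydrogen:

    ```python
    h = 6.62607015e-34      # Planck constant, J s
    eV = 1.602176634e-19    # joules per electronvolt

    for label, freq in [("GSM, 900 MHz", 0.9e9), ("Wi-Fi, 2.4 GHz", 2.4e9)]:
        E = h * freq / eV   # energy of one photon, in eV
        print(f"{label}: {E:.1e} eV per photon, "
              f"~{13.6/E:.0e}x short of ionising hydrogen")
    ```

    A phone’s photons fall short of ionisation energies by a factor of about a million; no quantity of them can knock electrons out of the molecules in your head.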

    In fact, I’m curious why neither the makers of Phoni3 nor IndiGo saw fit to speak about background radiation. Did you know that the radiation your body is exposed to in the course of a six-hour flight is 444-times higher than the dose it receives if you live within 80 km of a nuclear power plant for a year? The reason we don’t panic is because even this elevated dose poses no danger to the human body. And the reason we don’t see an advertisement for lead-lined jackets or portable Faraday cages to wear/carry during air travel in the in-flight magazine is because it will be bad for business.
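    For what it’s worth, the 444 figure is easy to reproduce – the two input values below are assumptions taken from widely circulated radiation-dose charts, not from any official dosimetry:

    ```python
    # Assumed values from commonly cited radiation-dose charts:
    flight_dose_usv = 40.0   # microsieverts for ~6 hours of flight (cosmic rays)
    plant_year_usv = 0.09    # microsieverts/year living within ~80 km of a plant

    print(f"ratio ~ {flight_dose_usv / plant_year_usv:.0f}x")   # ~444x
    ```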

    But anything short of hurting IndiGo can pass go. To wit, the following message is at the bottom of the same page containing the Phoni ad:

    Credit: Vasudevan Mukunth

    The government should ban advertisements for such products if only because, in this specific case, the Telecom Regulatory Authority of India (TRAI) has been working to dispel beliefs that cell phone radiation is harmful to the body. Unless the civil aviation authority bans such ads, TRAI’s efforts will be in vain. The IndiGo in-flight magazine is available for 180 passengers per flight of an Airbus A320, and the airline flies 131 such flights across the country a day (as of April 10, 2017). That’s more visibility than the TRAI can manage without significant effort.

    Featured image credit: Javier Cañada/Unsplash.