Uncategorized

  • A useful book to have around

    India’s Rise as a Space Power is a book by Prof. Udupi Ramachandra Rao, former Chairman of ISRO (1988-1994), that provides some useful historical context for the space research organization from a scientist’s perspective, not an administrator’s.

    Through it, Prof. Rao talks about how our space program was carefully crafted with a series of satellites and launch vehicles, and how each one of them has contributed to where the organization, as such, is today: an immutable symbol of power in the Third World and India’s pride. He starts with the foundation of ISRO, goes on to the visions of Vikram Sarabhai and Satish Dhawan, then introduces the story of Aryabhata, our first satellite, followed by Bhaskara I and II, the IRS series, the INSAT program, the ASLV, PSLV and GSLV, and finally, the contributions of all these instruments to the Indian economy. The period in which Prof. Rao served as Chairman coincided with an acceleration of innovations at ISRO – when he assumed the helm, the IRS was being developed; when he left, development of the cryogenic engine was underway.

    However, India’s Rise… leaves out the aspect of his work that he was best positioned to discuss all along: politics. The Indian polity is heavily invested in ISRO, and constantly looks to it for solutions to a diverse array of problems, from telecommunications to meteorology. While ISRO may never have struggled to receive government funding, its run-ins with the 11 governments over its 45-year tenure would have made for a telling story of the Indian government’s association with one of its most successful scientific and technological bodies. Where Prof. Rao does comment, it is usually on one of two things: either why scientists make better leaders of organizations like ISRO than administrators do, or how foreign governments floated or sank technology-transfer deals with India.

    … Mr. T.N. Seshan, who was the Additional Secretary in the Department of Space, a senior member of the negotiation team deputed under my leadership, made this trip [to Glavkosmos, a Soviet company that was to equip and provide the launcher for the first-generation Indian Remote Sensing satellites] unpleasant by throwing up tantrums just because he was not the leader of the Indian delegation. Subsequently, Prof. Dhawan had to tell him in no uncertain terms that any high-level delegation such as the above would only be led by a scientist and not an administrator, a healthy practice followed in [the Dept. of Space] from the very beginning. (p. 124)

    This aspect notwithstanding, India’s Rise… is a useful book to have around now, when ISRO seems poised to enter its next era: that of the successful use of its cryogenic engines to lift heavier payloads into higher orbits. It contains a lot of interesting information about different programs, and the attention to detail is distributed evenly, if sometimes unnecessarily. There is also an accompanying collection of possibly rare photographs; my favorite shows a rocket’s nose-cone being transported by bicycle to the launchpad. Overall, the book makes for an excellent reference, and thanks to Prof. Rao’s scientific background, the technical concepts are represented soundly. Here’s my review of it for The Hindu.

  • And the GSLV flew!

    The Copernican
    January 6, 2014

    Congratulations, ISRO, for successfully launching the GSLV-D5 (and the GSAT-14 satellite with it) on January 5. Even as I write this, ISRO has put out an update on its website: “First orbit raising operation of GSAT-14 is successfully completed by firing the Apogee Motor for 3,134 seconds on Jan 06, 2014.”

    With this launch comes the third success in eight launches of the GSLV program since 2001, and the first success with the indigenously developed cryogenic rocket-engine. As The Hindu reported, use of this technology widens India’s launch capability to include 2-2.5 tonne satellites. This propels India into becoming a cost-effective port for launching heavier satellites, not just lighter ones as before.

    The GSLV-D5 (which stands for ‘developmental flight 5’) is a variant of the GSLV Mark II rocket, the successor to the GSLV Mark I. Both these rockets have three stages: solid, liquid and cryogenic. The solid stage possesses the design heritage of the American Nike-Apache engine; the liquid stage, of the French Viking engine. The third, cryogenic upper stage was developed at the Liquid Propulsion Systems Centre, Tamil Nadu—ISRO’s counterpart of NASA’s JPL.

    There is a significant difference of capability based on which engines are used. ISRO’s other, more successful launch vehicle, the Polar Satellite Launch Vehicle (PSLV), uses four stages: alternating solid and liquid ones. Its payload capacity to the geostationary transfer orbit (GTO), from which the Mars Orbiter Mission was launched, is 1,410 kg. With the cryogenic engine, the GSLV’s capacity to the same orbit is 2,500 kg. By being able to lift more equipment, the GSLV paves the way for launching more sophisticated instruments in the future.

    The better engine

    The cryogenic engine’s complexity resides in its ability to enhance the fuel’s flow through the engine.

    An engine’s thrust—its propulsive force—is higher if the fuel flows faster through it. Solid fuels don’t flow, but they release more energy when burnt than liquid fuels. Gaseous fuels deliver little energy per unit volume and have to be stored in heavy, pressurised containers.

    Liquid fuels flow, have higher energy density than gases, and they can be stored in light tanks that don’t weigh the rocket down as much. The volume they occupy can be further reduced by pressurising them. Recall that the previous launch attempt of the GSLV-D5, in August 2013, was called off 74 minutes before take-off because fuel had leaked from the liquid stage during the pre-pressurisation phase.

    Even so, there seems no reason to use gaseous fuels. However, when hydrogen burns in the presence of oxygen, both gases at normal pressure and temperature, the energy released provides an effective exhaust velocity of 4.4 km/s—one of the highest (p. 23, ‘Cosmic Perspectives in Space Physics’, S. Biswas, 2000). It was to use them more effectively that cryogenic engines were developed.
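    Why the exhaust velocity matters so much can be made concrete with the Tsiolkovsky rocket equation, which relates the velocity change a stage can deliver to its effective exhaust velocity and mass ratio. A minimal sketch in Python; the stage masses and the 3.0 km/s figure for a conventional liquid propellant are purely illustrative, while 4.4 km/s is the hydrogen-oxygen figure quoted above:

```python
import math

def delta_v(exhaust_velocity, mass_initial, mass_final):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf)."""
    return exhaust_velocity * math.log(mass_initial / mass_final)

# Hypothetical stage: 100 tonnes at ignition, 20 tonnes at burnout.
# Hydrogen-oxygen (cryogenic): ~4.4 km/s effective exhaust velocity.
# Conventional liquid propellant: ~3.0 km/s (illustrative figure).
print(round(delta_v(4.4, 100, 20), 2))  # 7.08 km/s
print(round(delta_v(3.0, 100, 20), 2))  # 4.83 km/s
```

    The logarithm means the mass ratio gives diminishing returns, so raising the exhaust velocity is the more rewarding route to a higher payload capacity.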

    In a cryogenic engine, the gases are cooled to very low temperatures, at which point they become liquids—acquiring the benefits of liquid fuels also. However, not all gases are considered for use. Consider this excerpt from a NASA report written in the 1960s:

    A gas is considered to be cryogen if it can be changed to a liquid by the removal of heat and by subsequent temperature reduction to a very low value. The temperature range that is of interest in cryogenics is not defined precisely; however, most researchers consider a gas to be cryogenic if it can be liquefied at or below -240 degrees fahrenheit [-151.11 degrees celsius]. The most common cryogenic fluids are air, argon, helium, hydrogen, methane, neon, nitrogen and oxygen.

    The difficulties arose from accommodating tanks of super-cold liquid propellants—which include both the fuel and the oxidiser—inside a rocket engine. The liquefaction temperature for hydrogen is 20 kelvin, just above absolute zero; for oxygen, 89 kelvin.
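    The threshold in the NASA report and the liquefaction figures above sit on simple unit conversions, which can be checked directly:

```python
def fahrenheit_to_celsius(f):
    # Standard conversion: subtract 32, scale by 5/9.
    return (f - 32) * 5 / 9

def kelvin_to_celsius(k):
    # The kelvin and celsius scales differ by a fixed offset.
    return k - 273.15

# The NASA report's cryogenic threshold:
print(round(fahrenheit_to_celsius(-240), 2))  # -151.11
# Liquefaction points quoted above:
print(round(kelvin_to_celsius(20), 2))   # hydrogen: -253.15 deg C
print(round(kelvin_to_celsius(89), 2))   # oxygen: -184.15 deg C
```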

    Chain of problems

    For starters, cryopumps are used to trap the gases and cool them. Then, special pumps called turbopumps are required to move the propellants into the combustion chamber at higher flow-rates and pressures. Next, relatively expensive igniters are required to set off combustion, which also has to be controlled with computers to prevent them from burning off too soon. And so forth.

    Because using cryogenic technology drove advancements in one area of a propulsion system, other areas also required commensurate upgrades. Space engineers learnt many lessons from the American Saturn launch vehicles, whose advanced engines (for the time) were born of using cryogenic technology. They flew between 1961 and 1975.

    In the book ‘Rocket Propulsion Elements’ (2010) by George Sutton and Oscar Biblarz, some other disadvantages of using cryogenic propellants are described (p. 697):

    Cryogenic propellants cannot be used for long periods except when tanks are well insulated and escaping vapours are recondensed. Propellant loading occurs at the launch stand or test facility and requires cryogenic propellant storage facilities.

    With cryogenic liquid propellants there is a start delay caused by the time needed to cool the system flow passage hardware to cryogenic temperatures. Cryogenically cooled fluids also continuously vaporise. Moreover, any moisture in the same tank could condense as ice, adulterating the fluid.

    It was in simultaneously overcoming all these issues, with no help from other space-faring agencies, that ISRO took time. Now that the Mark II has been successfully launched, the organisation can set its eyes on loftier goals—such as successfully launching the next, mostly different variant of the GSLV: the Mark III, which is projected to have a payload capacity of 4,500-5,000 kg to GTO.

    While we are some way off from considering the GSLV for manned missions, which require mastery of reentry technology and spaceflight survival, the GSLV Mark III, if successful, could make India an invaluable hub for launching heavier satellites at costs lower than those of ESA’s Ariane program, which India has used in lieu of the GSLV.

    Good luck, ISRO!

  • Rethinking cryptocurrency

    I’m still unsure about bitcoins’ future as far as mainstream adoption is concerned, but such issues have been hogging the media limelight so much that people are missing out on why bitcoins are actually awesome. They’re not awesome because they’re worth about $800 apiece (at the time of writing this) or because they threaten to trivialize the existence of banks. These concerns have nothing to do with bitcoins – they’re simply anti-establishment frustrations in the post-recession era. Bitcoins, and other cryptocurrencies like them, are awesome because of their technical framework, which enables:

    1. Public verification of validity (as opposed to third-party verification)
    2. Zero transaction costs (although this is likely to change)

    Thinking about bitcoins as alternatives to dollars only five years into the cryptocurrency’s existence is stupid. Even scoffing at how steep the learning curve is (to learn how to acquire and mobilize bitcoins) is stupid. Instead, what we must focus on are the characteristics of the technology that make the two mentioned techniques possible, because they have great reformative potential in a country like India (if adopted correctly, which I suppose is a subjective ideal, but hey). Zero transaction costs enable individuals and small enterprises to avoid painful scaling costs, while public verification enables only value to be transferred across a network, instead of forcing two parties to share information unrelated to the transaction itself with a bank, etc. Here’s my OpEd on this idea for The Hindu.
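    The public-verification property rests on how a blockchain chains each entry to its predecessor by hash: anyone can recompute the chain and detect tampering, with no bank in the middle. A heavily simplified sketch, omitting signatures, proof-of-work and the peer-to-peer network; the block structure here is purely illustrative:

```python
import hashlib

def block_hash(prev_hash, payload):
    """Hash of a block: SHA-256 over the previous hash plus the payload."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, h = [], "0" * 64  # genesis placeholder
    for p in payloads:
        h = block_hash(h, p)
        chain.append((p, h))
    return chain

def verify(chain):
    """Anyone can replay the hashes; no trusted third party needed."""
    h = "0" * 64
    for payload, recorded in chain:
        h = block_hash(h, payload)
        if h != recorded:
            return False
    return True

ledger = build_chain(["A pays B 5", "B pays C 2"])
print(verify(ledger))  # True
# Tampering with an earlier entry invalidates every later hash:
tampered = [("A pays B 50", ledger[0][1])] + ledger[1:]
print(verify(tampered))  # False
```

    The point is that validity is a property of the data itself, checkable by any participant, rather than an assertion made by an intermediary.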

  • Predatory publishing, vulnerable prey

    On December 29, the International Conference on Recent Innovations in Engineering, Science and Technology (ICRIEST) kicked off in Pune. It’s not a very well-known conference, but it might as well have been, for all the wrong reasons.

    On December 16 and 20, Navin Kabra, from Pune, submitted two papers to ICRIEST. Both were accepted and, following a notification from the conference’s organizers, Mr. Kabra was told he could present the papers on December 29 if he registered himself at a cost of Rs. 5,000.

    Herein lies the rub. The papers that Mr. Kabra submitted are meaningless. They claim to be about computer science, but were created entirely by the SCIGen fake-paper generator available here. The first one, titled “Impact of Symmetries on Cryptoanalysis”, is rife with tautological statements, and could not possibly have cleared peer-review. However, in the acceptance letter that Mr. Kabra received by email, the paper is claimed to have been accepted after being subjected to some process of scrutiny, scoring 60, 70, 80 and 90.75 among some reviewers.

    Why would the conference accept such a paper, then? Is it subsisting on the incompetence of secretarial staff? Or is it so desperate for papers that rejection rates are absurdly low?

    Mr. Kabra’s second paper, “Use of cloud-computing and social media to determine box office performance”, might say otherwise. This one is even more brazen, containing these lines in its introduction:

    As is clear from the title of this paper, this paper deals with the entertainment industry. So, we do provide entertainment in this paper. So, if you are reading this paper for entertainment, we suggest a heuristic that will allow you to read this paper efficiently. You should read any paragraph that starts with the first 4 words in bold and italics – those have been written by the author in painstaking detail. However, if a paragraph does not start with bold and italics, feel free to skip it because it is gibberish auto-generated by the good folks at SCIGen.

    If this paragraph went through, then the administrators of ICRIEST are likely to possess no semblance of interest in academic research. In fact, they could be running the conference as a front to make some quick bucks.

    Mr. Kabra professes an immediate reason for his perpetrating this scheme. “Lots of students are falling prey to such scams, and I want to raise awareness amongst students,” he wrote in an email.

    He tells me that for the last three years, students pursuing a Bachelor of Engineering in a college affiliated with the University of Pune have been required to submit their final project to a conference, “a ridiculous requirement” thinks Mr. Kabra. As usual, not all colleges are enforcing this rule; those that are, on the other hand, are pushing students. Beyond falsifying data and plagiarizing reports to get them past evaluators, the next best thing to secure a good grade is to sneak it into some conference.

    Research standards in the university are likely not helping, either. The successful submissions hoped for by teachers at Indian institutions will never happen as long as the quality of research in the institutions themselves is low. Enough scientometric data exists from the last decade to support this, although I don’t know how it breaks down between graduate and undergraduate research.

    (While it may be argued that scientific output is not the only way to measure the quality of scientific research at an institution, you should know something’s afoot when the quantity of output is either very high or very low relative to, say, the corresponding number of citations and the country’s R&D expenditure.)

    Another reason to think neither the university nor the students’ ‘mentors’ are helping is that someone who spoke on behalf of the University to Mr. Kabra had no idea about ICRIEST. To quote from the Mid-Day article that covered this incident,

    “I don’t know of any research organisation named IRAJ. I am sorry, I am just not aware about any such conference happening in the city,” said Dr Gajanan Kharate, dean of engineering in the University of Pune.

    Does the University of Pune care if students have submitted papers to bogus journals? Does it check the contents of the research itself, or does it rely on whether students’ ‘papers’ are accepted or not? In any case, what will change now? I’m not sure. I won’t be surprised if nothing changes at all. However, there is a place to start.

    Prof. Jeffrey Beall is the Scholarly Initiatives Librarian at the University of Colorado, Denver, and he maintains an exhaustive list of questionable journals and publishers. This list is well-referenced, constantly updated, and commonly consulted to check for dubious characters that might have approached research scholars.

    On the list is the Institute for Research and Journals (IRAJ), which is organizing ICRIEST. In an article in The Hindu on September 26, 2012, Prof. Beall says, “They want others to work for free, and they want to make money off the good reputations of honest researchers.”

    Mr. Kabra told me he had registered himself for the presentation—and not before he was able to bargain with them, “like … with a vegetable vendor”, and avail a 50 per cent discount on the fees. As silly as it sounds, this is not the mark of a reputable institution but a telltale sign of a publisher incapable of understanding the indignity of such bargains.

    Another publisher on Prof. Beall’s list, Asian Journal of Mathematical Sciences, is sly enough to offer a 50 per cent fee-waiver because they “do not want fees to prevent the publication of worthy work”. Yet another journal, Academy Publish, is just honest: “We currently offer a 75 per cent discount to all invitees.”

    Other signs, of course, are the use of words with incorrect spellings, as in “Dear Sir/Mam”.

    At the end of the day, Mr. Kabra did not go ahead with the presentation because, he said, he was depressed by the sight of Masters students at ICRIEST, some of whom had come there, on the west coast, from the eastern coastal state of Odisha. That’s the journey they’re willing to make when pushed by the lure of grades on one side and the existence of conferences like ICRIEST on the other.

  • Solving mysteries, by William & Adso

    The following is an excerpt from The Name of the Rose, Umberto Eco’s debut novel from 1980. The story is set in an Italian monastery in 1327, and is an intellectually heady murder mystery doused in symbolism and linguistic ambivalence. Two characters, William of Baskerville and Adso of Melk, are conversing about using deductive reasoning to solve mysteries.

    “Adso,” William said, “solving a mystery is not the same as deducing from first principles. Nor does it amount simply to collecting a number of particular data from which to infer a general law. It means, rather, facing one or two or three particular data apparently with nothing in common, and trying to imagine whether they could represent so many instances of a general law you don’t yet know, and which perhaps has never been pronounced. To be sure, if you know, as the philosopher says, that man, the horse, and the mule are all without bile and are all long-lived, you can venture the principle that animals without bile live a long time. But take the case of animals with horns. Why do they have horns? Suddenly you realize that all animals with horns are without teeth in the upper jaw. This would be a fine discovery, if you did not also realize that, alas, there are animals without teeth in the upper jaw who, however, do not have horns: the camel, to name one. And finally you realize that all animals without teeth in the upper jaw have four stomachs. Well, then, you can suppose that one who cannot chew well must need four stomachs to digest food better. But what about the horns? You then try to imagine a material cause for horns—say, the lack of teeth provides the animal with an excess of osseous matter that must emerge somewhere else. But is that sufficient explanation? No, because the camel has no upper teeth, has four stomachs, but does not have horns. And you must also imagine a final cause. The osseous matter emerges in horns only in animals without other means of defense. But the camel has a very tough hide and doesn’t need horns. So the law could be …”

    “But what have horns to do with anything?” I asked impatiently. “And why are you concerned with animals having horns?”

    “I have never concerned myself with them…”

    When I first read this book almost seven years ago, I remember reading these lines with awe (I was reading my first books on the philosophy of science then). Like a fool on whom the sense of the lines was lost but somehow not their meaning, I memorized them, and then promptly forgot the context in which they appeared. While randomly surfing the web today, I found them once more, so here they are. They belong to the chapter titled “In which Alinardo seems to give valuable information, and William reveals his method of arriving at a probable truth through a series of unquestionable errors.”

  • CNR Rao, a faceted gem

    Inquisitions are sure to follow if you’ve won India’s highest civilian honor on the back of a little-known career. At the same time, if that career’s been forged on scientific research, then blame all that’s little-known on media apathy, flick away what fleeting specks of guilt persist, and congratulate the winner for years of “great work” (which of course you didn’t hear about till news portals “broke” the news – even to the point of getting things, as usual, terribly wrong).

    Yesterday, it was announced that Prof. C.N.R. Rao of the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), a chemist, was being awarded the Bharat Ratna for his prolific research and, presumably, his contributions to science education in India. In a career spanning more than 50 years, Rao helped set up the JNCASR and was pivotal in establishing the five IISERs (Kolkata, Pune, Mohali, Bhopal and Thiruvananthapuram). In between, he was the Chairman of the Scientific Advisory Council under four Indian Prime Ministers: Rajiv Gandhi, Deve Gowda, I.K. Gujral and Manmohan Singh.

    As a researcher, Rao works in solid-state and structural chemistry and superconductivity, with more than 1,500 published papers and an h-index of 90. He was made a Fellow of the Royal Society in 1982, and received the Hughes Medal in 2000, the Indian Science Award in 2004, and the French Legion of Honour in 2005. He’s received various other awards, too, and has honorary PhDs from over 50 universities the world over. All these distinctions, and more, have been covered by journalists in their reports published, and continuing to be published, hours after the PMO announced that he’d be conferred India’s highest civilian award.
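    For readers unfamiliar with the metric: a researcher’s h-index is the largest h such that h of their papers have at least h citations each. A quick sketch of the computation; the citation counts below are toy values, not Rao’s actual record:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    citations = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(citations, start=1):
        if count >= rank:  # the rank-th best paper still has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 2]))  # 3
# An h-index of 90 requires at least 90 papers with 90+ citations each:
print(h_index([100] * 90))  # 90
```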

    What was conspicuously missing from the coverage was Rao’s involvement in a series of plagiarism charges levelled against him for papers of his published through 2011 and 2012. India’s two most widely read English dailies didn’t include it in their reports, while a third, smaller publication mentioned it in its last line (I didn’t bother to check other publications – but imagine: between the two biggies, some 9.8 million readers in the country haven’t been reminded about what Rao was engaged in). Why would news organizations choose to leave it out? Some reasons could be…

    1. Rao didn’t engage in plagiarism, just that one of his co-authors, a student tasked with writing the introductory elements of the paper, did.
    2. Rao has published over 1,500 papers; even in the papers where plagiarised content was found, the experiments and results were original. These charges are, thus, freak occurrences.
    3. It’s a tiny blip on an illustrious career, and with a Bharat Ratna in the picture, minor charges of plagiarism can be left out because they don’t contribute to the “effect” of the man.

    This is where I’d remind you about a smart op-ed by IMS researcher Rahul Siddharthan that appeared in The Hindu on March 9, 2012. Here’s a line from the piece that points to the concerns I have with Rao:

    Unfortunately, the senior authors (Rao, who was the last author, and S.B. Krupanidhi of IISc, Bangalore) did three other things. They both publicly blamed the first author, a graduate student of Krupanidhi. They both denied that it was plagiarism. And Rao declared that he had had little personal involvement with this paper.

    If any of the three excuses listed above are being cited by journalists, Siddharthan’s piece defeats them, instead drawing forth a caricature of Rao and his character that seems disagreeable. I would like to think that Rao was simply absent-minded, but I’m unable to. Siddharthan’s words make Rao sound as if he was disgruntled with an unexpected outcome, that it was the result of simply neglecting to supervise work that he nonetheless wanted to take credit for – no matter that the experiments and results presented in the paper were original.

    To wit, here’s another paragraph from Siddharthan’s piece:

    Rao and his colleagues were undoubtedly aware of the previous paper, since they plagiarised from it; yet they cite it only once, briefly and without discussion, in the introduction. Not only do they fail to compare their results with a very relevant prior publication: they nowhere even hint to the reader that such work exists.

    To be clear, my grouse isn’t with C.N.R. Rao winning the Bharat Ratna but the lightness with which newspapers have chosen to suppress the fact that Rao, in some way, was unaware (or, equally bad, aware) about plagiarised content in his work.

    Worse, in an article by K.S. Jayaraman in Nature in February 2012, Rao speaks about the importance of good language skills among students, and the need for an institutional mechanism to enforce it. In an interview published in Current Science in May 2011, he talks about the importance of grooming youngsters and providing the supportive environment he thinks mandatory for them to succeed. Is this Rao leading by example, then, to show the dire need for such mechanisms and environments?

    *

    While an exalted picture of him persists into Day 2 in the Indian mainstream media, I remember that at the moment of announcement, many of my scientist- and science-writing-friends expressed mild confusion over the choice. First thought: Surely there were others? A few minutes later: But why? An hour later: Is he in the league of Raman or Kalam? These giants of Indian science and technology commanded a public perception that transcended their work.

    Then again, are all these questions being raised simply in the wake of years of media apathy toward Rao’s work in the public sphere?

  • ‘No string theorists in non-elite institutions’

    Shiraz Naval Minwalla, a professor of theoretical physics at the Tata Institute of Fundamental Research (TIFR), Mumbai, won the New Horizons in Physics Prize for 2013 on November 5. The prize – which recognizes ‘promising researchers’ and comes with a cash prize of $100,000 – is awarded by the Fundamental Physics Prize Foundation, set up by Russian billionaire Yuri Milner in 2012.

    Shiraz has been cited for his contributions to the study of string theory and quantum field theory, particularly for improving our understanding of the equations governing fluid dynamics, and using them to verify the predictions of all quantum field theories as opposed to a limited class of theories before.

    On November 12, Shiraz was also awarded the Infosys Foundation Prize in the physical sciences category. He was the youngest among this year’s winners.

    I interviewed him over Skype for The Hindu (major hat-tip to Akshat Rathi), which is where this interview first appeared (on November 13, 2013). Shiraz had some important things to say, including the now-proverbial ‘the Indian elementary school system sucks’, and that India is anomalously strong in string theory research – though its output doesn’t yet match the US’s qualitatively, and almost none of it happens in non-elite institutions.

    Here we go.

    Why do you work with string theory and quantum field theory? Why are you interested in these subjects?

    Because it seems like one of the roads to completing one element of the unfinished task of physics. In the last century, there have been two big developments in physics. The quantum revolution, which established the language of quantum mechanics for dealing with physical systems, and the general theory of relativity, which established the dynamic nature of spacetime and recognized that it was responsible for gravity. These two paradigms have been incredibly successful in their domains of applicability. Quantum theory is ubiquitous in physics, and is also the basis for theories of elementary particle physics. The general relativity way of thinking has been successful with astrophysics and cosmology, i.e. successful at larger scales.

    These paradigms have been individually confirmed and individually very successful, yet we have no way of putting them together, no single mathematically consistent framework. This is why I work with string theory and quantum field theory because I think it is the correct path to realize a unified quantum theory of gravity.

    What’s the nature of your work that has snagged the New Horizons Prize? Could you describe it in simpler terms?

    The context for this discussion is the AdS/CFT correspondence of string theory. AdS/CFT asserts that certain conformal quantum field theories admit a reformulation as higher dimensional theories of gravity under appropriate circumstances. Now it has long been expected that the dynamics of any quantum field theory reduces, under appropriate circumstances, to the equations of hydrodynamics. If you put these two statements together it should follow that Einstein’s equations of gravity reduce, under appropriate circumstances, to the equations of hydrodynamics.

    My collaborators and I were able to directly verify this expectation. The equations of hydrodynamics that Einstein’s equations reduce to have particular values of transport coefficients. And there was a surprise here. It turns out that the equations of charged relativistic hydrodynamics that came out of this procedure were slightly different in form from those listed in textbooks on the subject, like the text of [Lev] Landau and [Evgeny] Lifshitz. The resolution of this apparent paradox was obtained by [Dam] Son and [Piotr] Surowka and in subsequent work, where it was demonstrated that the textbook expectations for the equations of hydrodynamics are incomplete. The correct equations sometimes have more terms, in agreement with our constructions.

    The improved understanding of the equations of hydrodynamics is general in nature; it applies to all quantum field theories, including those like quantum chromodynamics that are of interest to real world experiments. I think this is a good (though minor) example of the impact of string theory on experiments. At our current stage of understanding of string theory, we can effectively do calculations only in particularly simple – particularly symmetric – theories. But we are able to analyse these theories very completely; do the calculations completely correctly. We can then use these calculations to test various general predictions about the behaviour of all quantum field theories. These expectations sometimes turn out to be incorrect. With the string calculations as a guide, you can then correct these predictions. The corrected general expectations then apply to all quantum field theories, not just those very symmetric ones that string theory is able to analyse in detail.

    How do you see the Prize helping your research work? Does this make it easier for you to secure grants, etc.?

    It pads my CV. [Laughs] So… anything I apply for henceforth becomes a little more likely to work out, but it won’t have a transformative impact on my career nor influence it in any way, frankly. It’s a great honour, of course. It makes me happy, it’s an encouragement. But I’m quite motivated without that. [After being asked about winning the Infosys Foundation Prize] I’m thrilled, but I’m also a little overwhelmed. I hope I live up to all the expectations. About being young – I hope this means that my best work is ahead of me.

    What do you think about the Fundamental Physics Prize in general? About what Yuri Milner has done for the world of physics research?

    Until last week, I hadn’t thought about it very much at all. The first thing to say is when Milner explained to me his motivations in constituting this prize, I understood it. Let me explain. As you know, Milner was a PhD student in physics before he left the field to invest in the Internet, etc. He said he left because he felt he wasn’t good enough to do important work.

    He said one motivation was that people who are doing well in physics shouldn’t have to leave and found Internet companies. This is his personal opinion; one should respect that. Second: he felt that 70 or 80 years ago, physicists were celebrities who played a large role in motivating some young people to do science. Nowadays, there are no such people. I think I agree. Milner wants to do what he can to turn the clock back on that. Third: Milner is uniquely well-positioned because he understands physics research, thanks to his own background, and he understands the world of business. So, he wanted to bridge these worlds. All these are reasonable ways of looking at the world.

    If I had a lot of money, this isn’t the way I would have gone about it. There are many more efficient ways. For instance, more, smaller prizes for younger people make more sense than a few big prizes for well-established people. Some of the money could have gone out as grants. I haven’t seriously thought about this, though. The fact is Milner didn’t have to do this, but he did. It’s a good thing. This is his gesture, and I’m glad.

    Are the Fundamental Physics Prizes in any way bringing “validity” to your areas of research? Are they bringing more favourable attention you wouldn’t have been able to get otherwise?

    Well, of late, it has become fashionable sometimes to attack string theory in certain parts of the world of physics. In such an environment, it is nice to see there are other people who think differently.

    What are your thoughts on the quality of physics research stemming from India? Are there enough opportunities for researchers at all levels of their careers?

    Let me start with string-theoretic work, which I’m aware of, and then extrapolate. String theory work done in India is pretty good. If you compared the output from India to the US’s, the work emerging from the US is way ahead qualitatively. But if you compared it to Japan’s output, I would say it’s clear that India does better. Japan has a large string theory community supported by American-style salaries whereas India runs on a shoestring. Given that, and the fact that India is a very poor country, that’s quite remarkable. There’s no other country with a GDP per capita comparable to India’s whose string-theoretic output is anywhere near as good. In fact, the output is better than that of any single country in the European Union, though at the same time not comparable to the EU’s as a whole. So you get an idea of the scale: reasonably good, not fantastic.

    The striking weakness of research in India is that research happens by and large only in a few elite institutions. But in the last five years, it has been broadening out a bit. TIFR and the Harish-Chandra Research Institute [HRI] have good research groups; there are some reasonably good young groups at the Indian Institute of Science [IIS], Bengaluru, and the Institute of Mathematical Sciences, Chennai; and there are small groups at the Chennai Mathematical Institute, IIT-Madras, IIT-Bombay and IIT-Kanpur, all growing in strength. The Indian Institute of Science Education and Research (IISER), Pune, has also made good hires in string theory.

    So, it’s spreading out. The good thing is young people are being hired in many good places. What is striking is we don’t yet have participation from universities; there are no string theorists in non-elite institutions. Delhi University has a few, very few. This is in striking contrast with the US, where there are many groups in many universities, which gives the community great depth of research.

    If I were to give Indian research a grade sheet, I’d say: not bad, but could do much better. There are 1.2 billion people in the country, so we should be producing commensurate output in research. We shouldn’t content ourselves by thinking we’re doing better than [South] Korea. Of course it is an unfair thing to ask for, but that should be the aim. For example, at TIFR, when we interview students for admission, we find that we usually have very few really good candidates. It’s not that they aren’t smart; people are smart everywhere. It comes down to one reason: the elementary school system in the country is abysmal. Most Indians come out of school unable to contribute meaningfully to any intellectual activity. Even Indian colleges have the same quality of output. The obvious thing is to make every school in India a reasonable school [laughs]. Such an obvious thing, but we don’t do it.

    Is there sufficient institutional and governmental support for researchers?

    At the top levels, yes. I feel that places with the kind of rock-solid support that TIFR gives its faculty are few and far between. In the US, many such places exist. But if you went to the UK, the only comparable places are perhaps Cambridge and Oxford. Whereas if you went to a second-tier place like Durham University, you’d see it’s not as good a place to be as TIFR. In fact, this is true for most universities around the world.

    Institutions like TIFR, IIS, HRI and the National Centre for Biological Sciences give good support, and scientists should recognize this. There are few comparable places in the Third World. What we’re missing, however, is depth. The US research community has got so good because of its depth. Genuine, exciting research is not done just in the Ivy League institutions; even small places might have a Nobel Laureate teaching there. So, India may have lots of universities but they are somehow not able to produce good work.

    We’ve had a couple of Indians already in what’s going to be three years of the Fundamental Physics Prizes – before you, there was Ashoke Sen. But in the Nobel Prizes in physics, we’ve had a stubborn no-show since Subrahmanyan Chandrasekhar won it in 1983. Why do you think that is?

    There are two immediate responses. First: as I mentioned, India has an anomalously strong string theory presence – why, I don’t know – and the Fundamental Physics Prize Foundation has so far had some focus on this area. The Nobel Prizes, on the other hand, require experimental verification of hypotheses. So, for as long as the Foundation has focused on the mathematics in physics, India has done well.

    What are you going to do with your $100,000?

    I haven’t seriously thought about it.

    At the time of my interview, I had no idea he was about to win the Infosys Foundation Prize as well. It seems he’s in great demand! Good luck, Shiraz. 🙂

  • Why do we need dark matter?

    The first thing that goes wrong whenever a new discovery is reported, an old one is invalidated, or some vaguely important scientific result is announced often has to do with its misrepresentation in the mainstream media. Right now, we’re in the aftermath of one such event: the October 30 announcement of results from a very sensitive dark matter detector. The detector, called the Large Underground Xenon Experiment (LUX), is installed at the Sanford Underground Research Facility, in the Black Hills of South Dakota.

    It is often the case that what gets scientists excited may not get the layman excited, too – unless the media wants it to. So also with the announcement of results from LUX:

    • The detector hasn’t found dark matter
    • It hasn’t found a particular particle that some scientists thought could be dark matter in a particular energy range
    • It hasn’t ruled out that some other particles could be dark matter.

    Unfortunately, as Matt Strassler noted, the BBC gave its report on the announcement a very misleading headline. We’re not so much figuring out what dark matter is as what it isn’t. Both aspects are important because once we know dark matter isn’t something, we can fix our theories and start looking for something else. As for what dark matter is… here goes.

    What is dark matter?

    Dark matter is a kind of matter that is thought to make up a little more than 80 per cent of all the matter in this universe.

    Why is it called ‘dark matter’?

    This kind of matter’s name has to do with a property that scientists believe it should have: it does not absorb or emit light, remaining (optically) dark to our search for it.

    What is dark matter made of?

    We don’t know. Some scientists think it could be composed of exotic, as-yet-undiscovered particles. Others think it could be composed of known particles that are for some reason behaving differently. At the moment, the leading candidate is a particle called the WIMP (weakly interacting massive particle). A WIMP gets its name because it interacts with other matter particles only through gravity and the weak nuclear force, which makes it very hard to detect.

    We don’t know how heavy or light WIMPs are, or even what each WIMP’s mass could be. So, using different detectors, scientists are combing through different mass-ranges. And by ‘combing’, what they’re doing is using extremely sensitive instruments hidden thousands of feet under rocky terrain (or orbiting the planet in a satellite), in an environment kept so clean that (to some extent) even undesired particles cannot interact with the detector. In this state, the detector remains on ‘full alert’ to note the faintest interactions its components have with certain passing particles – such as WIMPs.

    The LUX detector team, in its October 30 announcement, ruled out the existence of WIMPs in the ~10 GeV/c² mass range (because its components stayed silent while trying to pick up particles in that range). This is important because results from some other detectors around the world had suggested that a WIMP could be found in this range.

    Can we trust LUX’s result?

    Pretty much, but not entirely – as is the case with most measurements in particle physics experiments. Physicists announcing these results are only saying there aren’t likely to be any other entities masquerading as what they’re looking for. It’s a chance, and never really 100 per cent. But you’ve got to draw the line at some point. Even if there’s always going to be a 0.000…01 per cent chance of something happening, the quantity of observations and the quality of the detector should give you an idea about when to move on.
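This ‘drawing the line’ is usually done in terms of standard deviations (sigma): physicists quote the probability that random noise alone could mimic a signal of a given size. Here’s a minimal sketch of that idea, assuming the fluctuations follow a normal distribution (real analyses refine this considerably):

```python
import math

def gaussian_tail(sigma):
    """One-sided probability that random noise fluctuates at least
    `sigma` standard deviations above the mean (normal distribution)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# The higher the sigma, the less likely the 'signal' is a fluke
for s in (1, 3, 5):
    print(f"{s} sigma: {gaussian_tail(s):.2e}")
```

At 5 sigma – the particle physics convention for claiming a discovery – the chance of a fluke is below one in a million, which is roughly where the community agrees it is safe to move on.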

    Where are the other detectors looking for dark matter?

    Some are in orbit, some are underground. Check out Fermi-LAT, the Alpha Magnetic Spectrometer, PAMELA (the Payload for Antimatter Exploration and Light-nuclei Astrophysics), XENON100, CDMS, the Large Hadron Collider, CoGeNT, etc.

    So how was BBC wrong with its headline?

    We’re not nearing the final phase of the search for dark matter. We’re only starting to consider the possibility that WIMPs might not be the dark matter particle candidates we should be looking for. Time to look at other candidates like axions. Of course, it wasn’t just BBC. CBS and Popular Science got it wrong, too, together with a sprinkling of other news websites.

    Why do we need dark matter?

    We haven’t been able to directly detect it, we think it has certain (unverified) properties to explain why it evades detection, we don’t know what it’s made of, and we don’t really know where to look if we think we know what it’s made of. Why then do we still cling to the idea of there being dark matter in the universe, that too in amounts overwhelming ‘normal’ matter by almost five times?

    Answer: Because it’s the simplest explanation we can come up with to explain certain anomalous phenomena that existing theories of physics can’t.

    Phenomenon #1

    When the universe was created in the Big Bang, matter was released into it, and sound waves propagated through it as ripples. The early universe was very, very hot, and electrons hadn’t yet condensed and become bound to atomic nuclei. They freely scattered radiation, whose intensity was also affected by the sound waves around it.

    About 380,000 years after the Bang, the universe cooled and electrons became bound to matter. After this event, some radiation pervading throughout the universe was left behind like residue, observable to this day. When scientists used their knowledge of these events and their properties to work backwards to the time of the Bang, they found that the amount of matter that should’ve carried all that sound didn’t match up with what we could account for today.

    They attributed the rest to what they called dark matter.

    Phenomenon #2

    Another way this mass deficiency manifests is in observations of gravitational lensing. When light from a distant object passes near a massive object, such as a galaxy or a cluster of galaxies, its gravitational pull bends the light around it. When this bent beam reaches an observer on Earth, the image it carries will appear larger because it will have undergone angular magnification. If these clusters didn’t contain dark matter, physicists would observe much weaker lensing than they actually do.
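To get a feel for why more mass means stronger lensing, here is a minimal Python sketch of the Einstein radius of an idealised point-mass lens. All the numbers – the lens mass, the distances, the factor of five for dark matter – are illustrative assumptions, not measurements:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
MPC = 3.086e22    # one megaparsec in metres
M_SUN = 1.989e30  # one solar mass, kg

def einstein_radius_arcsec(mass_kg, d_lens, d_source):
    """Einstein radius of a point-mass lens, in arcseconds:
    theta_E = sqrt(4GM/c^2 * (d_source - d_lens) / (d_lens * d_source)),
    with both distances measured from the observer."""
    d_ls = d_source - d_lens
    theta = math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens * d_source))
    return math.degrees(theta) * 3600

visible = 1e12 * M_SUN            # stars and gas alone (assumed)
with_dm = 5 * visible             # adding ~5x as much dark matter
d_l, d_s = 500 * MPC, 1000 * MPC  # illustrative distances
print(einstein_radius_arcsec(visible, d_l, d_s))  # weaker lensing
print(einstein_radius_arcsec(with_dm, d_l, d_s))  # stronger lensing
```

Adding five times as much (dark) mass widens the lensing ring by a factor of √5 ≈ 2.2 – the kind of discrepancy between visible mass and observed lensing that points to missing mass.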

    Phenomenon #3

    That’s not all. The stars in a galaxy rotate around the galactic centre, where most of its visible mass is located. According to theory, the velocity of the stars should drop off the farther they get from the centre. However, observations have revealed that, instead of dropping off, the velocity stays almost constant even far from the centre. So, something is pulling the outermost stars inward, holding them together and keeping them from flying out of the galaxy. Astrophysicists think this inward force could be the gravitational pull of dark matter.
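To see how sharply theory and observation disagree here, consider a minimal sketch of the expected Keplerian fall-off, assuming (unrealistically, but illustratively) that a galaxy’s visible mass of roughly 10¹¹ solar masses all sits at its centre:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_GALAXY = 2e41  # assumed central mass, kg (~1e11 solar masses)
KPC = 3.086e19   # one kiloparsec in metres

def keplerian_speed(r):
    """Orbital speed at radius r if all mass sat at the centre: v = sqrt(GM/r)."""
    return math.sqrt(G * M_GALAXY / r)

# Expected: speed falls as 1/sqrt(r); observed: it stays roughly flat
for r_kpc in (5, 10, 20, 40):
    print(f"{r_kpc:>3} kpc: {keplerian_speed(r_kpc * KPC) / 1000:6.1f} km/s")
```

The sketch predicts the speed halving every time the radius quadruples; real spiral galaxies instead show roughly constant speeds of a couple of hundred km/s out to large radii – this is the rotation-curve anomaly that dark matter is invoked to explain.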

    So… what next?

    LUX is a very sensitive dark matter detector – the most sensitive in existence, actually. However, its sensitivity is attuned to low-mass WIMPs, and its first results rule out anything in the 5-20 GeV/c² range. WIMPs of a higher mass are still a possibility and, who knows, might be found at detectors that work with the CERN collider.

    Moreover, agreement between the various detectors about the mass of WIMPs has also been iffy. For example, detectors like CDMS and CoGeNT have hinted that a ~10 GeV/c² WIMP should exist. LUX has only now ruled this out; the XENON100 detector, on the other hand, has been around since 2008 and has been unable to find WIMPs in this mass-range at all, even though it’s more sensitive than CDMS or CoGeNT.

    What’s next is some waiting, and letting LUX carry on with its surveys. In fact, LUX has its peak sensitivity at 33 GeV/c². Maybe there’s something there. Another thing to keep in mind is that we’ve only just started looking for dark matter particles. Remember how long it took us to figure out ‘normal’ matter particles? Perhaps future, more sensitive detectors (like XENON1T and LUX-ZEPLIN) will have something for us.

    (This post first appeared at The Copernican on November 3, 2013.)

  • V for vendetta

    The Hindu published an article on October 28, 2013, titled “He has arrears in engineering, PhD in physics“. The article spoke of one Rohit Gunturi, a 19-year-old final-year student of engineering at Anna University who supposedly already held a PhD in physics from UC Berkeley. Inspiring as this sounds, the story at first glance throws up many alarms, which I’m surprised were missed by the reporter, Ms. V.

    A clarifying story was published later the same day by the reporter admitting that Mr. Gunturi’s claims were false.

    This episode struck me for the following reasons:

    From the audience’s perspective

    • The newspaper is not allowed to slip up – howsoever little, whenever it may be.
    • If and when the audience finds that a mistake has been committed in the newspaper, it turns very self-righteous.
    • Reporters are almost always remembered for their mistakes, not the lot of other things that they get right.
    • It is okay to publicly shame the reporter for one slip-up.
    • If the reporter slips up, he/she is stupid.

    From the reporter’s perspective

    • It was admissible to assume that statements could be taken at face-value.
    • It was okay to comment on scientific research without checking with an expert in that field.
    • It was permissible to profile an individual without checking for conflicts of interest.
    • The information came from the Vice Chancellor of a large university, so it was true.*

    From the newspaper management’s perspective

    • You’d think these guys would be more careful – but the same, original, uncorrected version of the story appeared the next day in the Coimbatore edition

    All these occurrences came together to blow up the issue in the public sphere. In essence, the whole thing has played out as a second Sokal Affair, this one a rap on the knuckles for Indian newspapers as such – although I doubt incidents such as this are uncommon.

    Moreover, the caustic reaction from engineering students around the city was appalling. Any stone Ms. V has to throw now at Anna University or IIT Madras will almost invariably hit a student who has either made fun of her or has read something that did. Of course, I have no idea of her prior relations with these people.

    At the same time, I’m given to understand the students are not happy that she has written stories in the past about there being a lack of water in hostel toilets, or a lack of fruits in their diets, etc. Do you think that’s silly? I’d like to know what things are like in the two largest government educational institutions in Chennai, my city. And if you disagree, your tiff should be with The Hindu, not with Ms. V for doing her job.

    And last: See the starred statement above (under the points of the reporter’s perspective)? How do you guard against people of that stature making half-true statements to a journalist? You can’t, really, but that doesn’t mean Ms. V is free to go. She claims she was presented with a document by the VC showing Mr. Gunturi had been awarded a PhD by UC Berkeley. She later admitted it was unsigned. This should serve as caution that nobody is above a fact-check.

  • The literature of metaphysics (or, ‘Losing your marbles’ )

    For a while now, I’ve been intent on explaining stuff from particle physics.

    A lot of it is intuitive if you go beyond the mathematics and are ready to look at packets of energy as extremely small marbles. And then, you’ll find out some marbles have some charge, some the opposite charge, and some have no charge at all, and so forth. And then, it’s just a matter of time before you figure out how these properties work with each other (“Like charges repel, unlike charges attract”, etc).

    These things are easy to explain. In fact, they’re relatively easy to demonstrate, too, and that’s why there aren’t a lot of people out there who need to read and understand this kind of stuff. They already get it.

    Where particle physics gets really messed up is in the math. Why the math, you might ask, and I wouldn’t say that’s a good question. Given how particle physics is studied experimentally – by smashing together those little marbles at almost the speed of light and then furtively looking for exotic fallout from the resulting debris – math is necessary to explain a lot of what happens the way it does.

    This is because the marbles, a.k.a. the particles, also differ in ways that cannot be physically perceived in many circumstances but whose consequences are physical enough. These unobservable differences are pretty neatly encapsulated by mathematics.

    It’s like a magician’s sleight of hand. He’ll stick a coin into a pocket in his pants and then pull the same coin out from his mouth. If you’re sitting right there, you’re going to wonder “How did he do that?!” Until you figure it out, it’s magic to you.

    Theoretical particle physics, which deals with a lot of particulate math, is like that. Weird particles are going to show up in the experiments. The experimental physicists are going to be at a loss to explain why. The theoretician, in the meantime, is going to work out how the “observable” coin that went into the pocket came out of the mouth.

    The math just makes this process easy because it helps put down on paper information about something that may or may not exist. And if it really doesn’t exist, then the math’s going to come out awry.

    Math is good… if you get it. There’s definitely going to be a problem learning math the way it’s generally taught in schools: as a subject. We’re brought up to study math, not really to use it to solve problems. There’s not much to study once you go beyond the basic laws, some set theory, geometry, and the fundamentals of calculus. After that, math becomes a tool and a very powerful one at that.

    Math becomes a globally recognised way to put down the most abstract of your thoughts, fiddle around with them, see if they make sense logically, and then “learn” them back into your mind whence they came. When you can use math like this, you’ll be ready to tackle complex equations, too, because you’ll know they’re not complex at all. They’re just somebody else’s thoughts in this alpha-numerical language that’s being reinvented continuously.

    Consider, for instance, the quantum chromodynamic (QCD) factorisation theorem from theoretical particle physics:
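(The equation appeared as an image in the original post and has not survived; a standard schematic form of the theorem – consistent with the description below, though not necessarily the exact rendering that was shown – is:

```latex
F_2(x, Q^2) \;=\; \sum_i \int_x^1 \frac{d\xi}{\xi}\, f_i(\xi, \mu)\,
\hat{F}_2^{\,i}\!\left(\frac{x}{\xi}, \frac{Q^2}{\mu^2}\right)
```

where F₂ is the nucleonic structure function, the fᵢ are the parton distribution functions at scale µ, and the sum runs over the parton species i.)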

    This hulking beast of an equation implies that, *deep breath*, at a given scale (µ) and a value of the Bjorken scaling variable (x), the nucleonic structure function is given by the overlap between the function describing the probability of finding a parton inside a nucleon (f(x, µ)) and the sum (Σ) of the corresponding functions for all the partons within the nucleon, *phew*.

    In other words, it only describes how a fast incoming particle collides with a target particle based on how probable certain outcomes are!

    The way I see it, math is the literature of metaphysics.

    For instance, when we’re tackling particle physics and the many unobservables that come with it, there’s going to be a lot of creativity, imagination and thinking involved. There’s no way we’d have as much order as we do in the “zoo of particles” today without some ingenious ideas from some great physicists – or, the way I see it, great philosophers.

    For instance, the American philosopher Murray Gell-Mann and the Israeli philosopher Yuval Ne’eman independently observed in the 1960s that their peers were overlooking an inherent symmetry among particles. Gell-Mann’s solution, called the Eightfold Way, demonstrated how different kinds of mesons, a type of particle, were related to each other in simple ways if you laid them out around a hexagon (with two particles at its centre).

    A complex mechanism of interaction was done away with by Gell-Mann and Ne’eman, and substituted with a much simpler one, all through a little bit of creativity and some geometry. The meson octet is well-known today because it brought to light a natural symmetry in the universe. Looking at the hexagon, we can see it’s symmetrical across the three diagonals that connect directly opposite vertices.

    The study of these symmetries, and of what physics could lie behind them, gave birth to the quark model, and won Gell-Mann the 1969 Nobel Prize in physics.

    What we perceive as philosophy, mathematics and science today were earlier all simply subsumed under natural philosophy. Before the advent of instruments to interact with the world, it was easier, and much more logical, for humans to observe what was happening around them and find patterns. This involved the use of our senses, and this school of philosophy is called empiricism.

    At the time, as it is today, the best way to tell if one process was related to another was by finding common patterns. As more natural phenomena were observed and more patterns came to light, classifications became more organised. As they grew in size and variations, too, something had to be done for philosophers to communicate their observations easily.

    And so, numbers and shapes were used first – they’re the simplest level of abstraction; let’s call it “0”. Then, where they knew numbers were involved but not what their values were, variables were brought in: “1”. When many variables were involved, and some relationships between variables came to light, equations were used: “2”. When a group of equations was observed to be able to explain many different phenomena, they became classifiable into fields: “3”. When a larger field could be broken down into smaller, simpler ones, derivatives were born: “4”. When a lot of smaller fields could be grouped in such a way that they could work together, we got systems: “5”. And so on…

    Today, we know that there are multitudes of systems – an ecosystem of systems! The construction of a building is a system, the working of a telescope is a system, the breaking of a chair is a system, and the constipation of bowels is a system. All of them are governed by a unifying natural philosophy, what we facilely know today as the laws of nature.

    Because of the immense diversification born as a result of centuries of study along the same principles, different philosophers like to focus on different systems so that, in one lifetime, they can learn it, then work with it, and then use it to craft contributions. This trend of specialising gave birth to mathematicians, physicists, chemists, engineers, etc.*

    But the logical framework we use to think about our chosen field, the set of tools we use to communicate our thoughts to others within and without the field, is one: mathematics. And as the body of all that thought-literature expands, we get different mathematical tools to work with.

    Seen this way – which is how I see it – I’m not reluctant to use equations in what I write. There is no surer way than math to explain what someone was really thinking when they came up with something. Looking at an equation, you can tell which fields it addresses, and by extension “where the author is coming from”.

    Unfortunately, the more popular perception of equations is way uglier, leading many a reader to simply shut the browser tab if it throws up an equation as part of an answer. Didn’t Hawking, after all, famously warn that each equation in a book would halve the book’s sales?

    That belief has to change, and I’m going to do my bit one equation at a time… It could take a while.

    (*Here, an instigatory statement by philosopher Paul Feyerabend comes to mind:

    The withdrawal of philosophy into a “professional” shell of its own has had disastrous consequences. The younger generation of physicists, the Feynmans, the Schwingers, etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrodinger, Boltzmann, Mach and so on. But they are uncivilized savages, they lack in philosophical depth — and this is the fault of the very same idea of professionalism which you are now defending.“)

    (This blog post first appeared at The Copernican on December 27, 2013.)