Scicomm

  • Why covering ISRO is a pain

    The following is a bulleted list of reasons why covering developments on the Indian spaceflight programme can be nerve-wracking.

    • ISRO does not have a media engagement policy that lays out when it will communicate information to journalists and how, so there is seldom a guarantee of correctness when reporting developing events
    • ISRO’s updates themselves are haphazard: sometimes they’re tweeted, sometimes they’re issued as single lines on its website, sometimes there’s a ‘media release’, sometimes there’s a PIB release, and so on
    • Unlike the organisation itself, individual ISRO members can be gabby – but you can never tell exactly who is going to be gabby or when
    • Some ISRO scientists insert important information in the middle of innocuous speeches delivered at minor events in schools and colleges
    • Every once in a while, one particular publication will become ‘blessed’ with sources within the org. and churn out page after page of updates+
    • Like the male superstars of Tamil cinema, ISRO benefits from the pernicious jingoism it is almost always surrounded with but does nothing to dispel (ref. the mental cost of walking some beats over others)
    • There is a policy that says employees of Indian institutions don’t have to seek their superiors’ permission to speak to the press unless they intend to speak ill of the institution; ISRO’s own, more stringent policy supersedes it
    • There are four ways to acquire any substantive information (beyond getting close to officials and following the ‘blessed’ publications): bots that crawl the isro.gov.in domain looking for PDFs, Q&A records of the Lok/Rajya Sabha, Indian language newspapers that cover local events, and former employees
    • If a comprehensive history of ISRO exists, it is bound to be in someone’s PhD thesis, locked up in the annals of a foreign publication or found scattered across the Indian media landscape, so covering ISRO has to be a full-time job that leaves little room or time for anything else
    • Information, and even commentary, will flow freely when everything is going well; when shit hits the fan, there is near-complete silence
    • In a similar vein, journalists who publish any criticism of ISRO almost never hear from officials within the org.
    • (A relatively minor point in this company) I don’t think anyone knows what the copyright restrictions on ISRO-produced images and videos are, so much so that NASA’s images of ISRO’s assets are easier to use

    + I say this without disparaging the journalist, who must have worked hard to cultivate such a network. The problem is that ISRO has constantly privileged such networks over more systematic engagement, forcing journalists to resort to access journalism.

  • Firstpost’s selfish journalism

    I’m sure you’ve heard of the concept of false balance, which is based on the conviction that there are two sides to every story even when there aren’t or when it’s not clear to anyone what the other side is. I’m also sure you’re aware of how journalism based on false balance can legitimise fake news and pseudoscience, as we used to see so often with climate change until the mid-2010s.

    The problem with believing there exists a balance between two viewpoints where there is actually none is rooted in the belief that both points are equally valid, which in turn is rooted in ignorance and/or prejudice. However, it would appear there is another form of false-balance reportage that is rooted in selfishness and/or apathy – one where a publication publishes an article that, at some point, acknowledges that A and B are not equally valid but whose headline and lede declare that they are. Here’s a fresh example from Firstpost:

    The lede goes thus:

    After months of delay in its launch, the Indian Space Research Organisation (ISRO) said that the country’s second moon mission — the Rs 800 crore ‘Chandrayaan-2’ — is designed to hunt for deposits of Helium-3 — a waste-free nuclear energy that could answer many of Earth’s energy problems.

    Chandrayaan 2 isn’t going to prospect the Moon for helium-3, or any other potential sources of clean energy for that matter, if only because we don’t have the wherewithal to use such materials to produce energy. Second, the problem with C2, as with many of ISRO’s space science missions at the moment, is that there is no roadmap. I don’t know what or who Firstpost‘s sources were for it to have pieced together this BS.

    However, after talking about this as if any of it made sense, the article quotes my article in The Wire to say “even if we are successful in bringing back huge deposits of Helium-3 from the moon, we are far away from having the technology to harness it”.

    So what has Firstpost done here? a) It reignited the pseudo-debate over ISRO’s non-existent plans to mine the Moon for helium-3; b) it re-legitimised Sivan’s, and others’, ridiculous point of view that India should lead the way in this endeavour; and, most importantly, c) it cashed in on the fallacy even as it suggested it may have recognised that the helium-3 story is erected entirely on speculation and daydreams.

    In effect, this is nauseatingly selfish and, insofar as it is journalism, apathetic. It does not have the public interest in mind; in fact, it completely disregards it. And in case someone demands to know how I can claim to know better than K. Sivan, who claimed last year that it’s important for India to be at the forefront of helium-3 mining, only that anecdote about what Bertrand Russell – a staunch atheist – would say should he come face to face with god comes to mind: “Well, I would say that you did not provide much evidence.”

  • Diversifying into other beats

    I delivered my annual talk-and-AMA at the NCBS science writing workshop yesterday. While the questions the students asked were mostly the same as last year (and the year before that), I also took the opportunity to ask them to consider diversifying into other subjects. Most, if not all, journalists entering India’s science journalism space every year want to write stories about the life sciences and/or ecology. As a result, while there are numerous journalists to write about issues in these areas, there are fewer than a handful to deal with developments in all the other ones – from theoretical particle physics to computer science to chemical engineering.

    This gives the impression to the consumers of journalism that research in these areas isn’t worth writing about or, more perniciously, that developments in these areas aren’t to be discussed (and debated, if need be) in the public domain. And this in turn contributes to a vicious cycle, where “there are no stories about physics” and “there is no interest in publishing stories about physics” successively keep readers/editors and journalists, respectively, at bay.

    However, from an editor’s perspective, the problem has an eminently simple solution: induct, and then publish, reporters who produce work on research in these subjects. These don’t always have to be newly minted reporters; existing ones could also actively diversify into beats other than their first choices over the course of a few years.

    This sort of diversification doesn’t happen regularly, but when it does, it can also benefit younger journalists looking to make their presence felt. For example, it’s easier to stand out from the crowd writing about, say, semiconductor fabrication than about ecological research (although this isn’t to say one is more important than the other). When more such writing is produced, editors also stand to gain because they can offer readers more even coverage of research in the country instead of painting a lopsided picture.

    One might argue that there needs to be demand from readers as well, but the relationship between editors and readers isn’t a straightforward demand-supply contest. If it were, the news would have become synonymous with populist drivel a long time ago. Instead, it’s more about progressively creating newer interests in the longer run – stories that are both informative and interesting. Put another way, the editor should be able to bypass the ‘interestedness indicator’ once in a while to publish stories that readers didn’t know they needed (such as The Wire‘s piece on quantum biology earlier this month).

    Such a thing obviously wouldn’t be possible without journalists pitching stories other than what they usually do, and of course editors who have signalled that they are willing to take such risks.

  • A different kind of refrigerator

    Say the ground floor of your house is ankle-deep in water – a common sight in most Indian cities during the monsoons. So you grab a pail and start throwing the water out in a four-step process:

    1. You dip the empty pail into the water and fill it up
    2. You carry the pail to the drain
    3. You empty the pail
    4. You bring it back to the flooded place

    You repeat these four steps over and over until all the water has been removed.

    The refrigerator has a similar working principle. Instead of water, there’s heat. The more heat you remove from inside the machine, the more it cools down. And instead of a pail, there’s a fluid called the refrigerant. These are the four steps it follows through to do its job:

    1. You flow the cool refrigerant through a pipe that wraps around the box where the food is kept; it absorbs heat from the box
    2. You pump the refrigerant to a component that will cool it back down by removing the heat
    3. The component does its thing
    4. You pump the refrigerant back to the pipe wrapped around the box

    Your air conditioner works the same way, except instead of wrapping the refrigerant-filled pipe around the whole room, it cools small quantities of the room’s air as they pass over the pipe.

    This cycle of four steps used with refrigerators and air-conditioners is called the vapour compression cycle. It is a type of heat-pump cycle, which is the broader class of cycles that machines use to move heat from a cooler environment into a warmer environment. (Heat flows naturally from warmer to cooler environments so you don’t need a machine to do that.)
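
    Not from the post, but a useful yardstick for what a heat-pump cycle has to achieve: the coefficient of performance (COP) of an ideal, reversible refrigerator – the textbook Carnot limit – for temperatures typical of a household fridge. Real vapour-compression machines manage only a fraction of this. A minimal sketch, with illustrative temperatures:

        # Carnot (ideal) coefficient of performance of a refrigerator:
        # COP = T_cold / (T_hot - T_cold), with temperatures in kelvin.
        # The numbers below are illustrative, not taken from the post.

        def carnot_cop(t_cold_k: float, t_hot_k: float) -> float:
            """Upper bound on heat moved per unit of work input."""
            return t_cold_k / (t_hot_k - t_cold_k)

        fridge_interior = 277.0   # ~4 deg C
        kitchen_air = 303.0       # ~30 deg C

        print(f"Ideal COP: {carnot_cop(fridge_interior, kitchen_air):.1f}")
        # Ideal COP: about 10.7 -- real machines fall well short of this limit.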

    The vapour-compression cycle is employed by many millions of machines around the world – from small household refrigerators to industrial scale warehouses. However, its popularity is slowly declining because the most common refrigerants are environmental pollutants, and their manufacture and use involve processes and other materials that are polluting in their own right.

    Scientists and engineers have been looking for more climate-friendly alternatives along different lines of inquiry. Such multiplicity exists because the way heat-pump cycles are conventionally executed is improvident and, in some ways, contrived. There are lots of moving parts, each with its own failings, that interact with each other to give rise to multiple ways in which the system can lose energy. As a result, these machines have low efficiency. This also means there is a lot that can be improved.

    One promising class of alternatives is materials that exhibit caloric effects. Typically, this means that when the material is exposed to an external energy field, like a magnetic field, it releases/absorbs heat into/from its surroundings, and vice versa when the field is removed. Materials that respond like this to a magnetic field are said to exhibit the magnetocaloric effect. The element gadolinium is a famous example.

    Other types of caloric effects include the electrocaloric effect, the mechanocaloric effect, the barocaloric effect and, of particular interest in the current case, the elastocaloric effect. It is exactly what it sounds like: the external ‘field’ applied to elicit the caloric effect takes the form of mechanical strain. That is, when the material is strained, it heats up; when the strain is released, it cools down.

    As with all the other caloric effects, executing the elastocaloric effect doesn’t require multiple parts. The hardware that acts on the refrigerant is the same as the refrigerant itself: the material. And instead of the refrigerant undergoing energy-intensive phase transitions through different states of matter, from liquid to gas and back again, the heat is moved through changes in the way the material’s atoms are arranged.

    These are called structural phase transitions. The shape and architecture of the atomic lattice confers different mechanical and electrical properties, among others, on the overall material. The way they are arranged also determines the amount of potential energy contained in the arrangement as a whole. Different structural phases stand for different amounts of energy. So a material that can easily move between two arrangements with different potential energies can be used to absorb and release heat.

    Scientists have known of such materials since the early 1980s. The real challenge today is to find a material that matches the efficiency and the lifetime of the vapour compression cycle together with a mechanism that applies the strain as well as possible. In other words, the material should be able to undergo the heat-pump cycle millions of times while being at least as efficient as the vapour compression cycle before it fails. Second, the device used to apply and release the stress should do its job with the least consequence for the overall energy efficiency.

    Towards the end of a material with appreciable temperature performance, researchers from China, Spain and the US have created an alloy of nickel, manganese and titanium that exhibits a colossal elastocaloric effect. The advantage here is that this material is greener than the refrigerants used in the vapour-compression cycle. Its underlying structural phase transition is called the martensitic transformation: when a mechanical strain is applied, the atoms slide into a different arrangement, and the material releases heat into its surroundings in the process (and absorbs it back when the strain is removed).

    The alloy’s composition on paper is as follows: in every 1,000 atoms, 500 are of nickel, 315 are of manganese and 185 are of titanium; the researchers also added a little bit of boron to improve stability. When they applied 700 MPa of stress to an ingot of the alloy, its volume changed by 2% and it warmed by 26.9 K. When they removed the stress, it cooled by 31.5 K. These numbers, the researchers write in their paper, “far [exceed] that directly measured in all elastocaloric, electrocaloric, and barocaloric materials in any form (thin film, wire, bulk, etc.).” The numbers are also nearly equal in value, which means the elastocaloric effect is reversible: the alloy can alternately gain and lose similar amounts of heat.
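
    A quick back-of-the-envelope script, using nothing beyond the numbers quoted above, to restate the composition as atomic percentages and compare the heating and cooling swings:

        # Composition per 1,000 atoms, as given in the post.
        atoms = {"Ni": 500, "Mn": 315, "Ti": 185}
        total = sum(atoms.values())

        for element, count in atoms.items():
            print(f"{element}: {100 * count / total:.1f} at.%")
        # Ni: 50.0 at.%, Mn: 31.5 at.%, Ti: 18.5 at.%

        # Adiabatic temperature changes reported for 700 MPa of stress.
        heating_on_loading = 26.9    # K, when the stress is applied
        cooling_on_unloading = 31.5  # K, when the stress is removed

        print(f"Asymmetry: {cooling_on_unloading - heating_on_loading:.1f} K")
        # The two swings are close in size, which is what makes the effect
        # usefully reversible over many cycles.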

    The paper was published on June 26, 2019.

    In effect, the elastocaloric four-step heat-pump cycle would go like this (a toy numerical sketch follows the list):

    1. Apply stress to the alloy; as its atoms slide into the new arrangement, it warms up
    2. Drain this heat from the material into the surroundings
    3. Release the stress; the material cools to below its starting temperature
    4. Expose the cold material to the volume to be cooled so it absorbs heat from it – then repeat
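
    Here is a toy numerical sketch of that cycle. The masses and heat capacities are made-up placeholders, and the only number borrowed from the study is the roughly 30 K temperature swing; the point is only to show how repeating the four steps walks the cold volume’s temperature down.

        # Toy elastocaloric cooling loop. All masses and heat capacities are
        # illustrative assumptions, not values from the paper.

        ALLOY_MASS = 0.1       # kg
        ALLOY_CP = 500.0       # J/(kg K), placeholder specific heat
        BOX_MASS = 0.5         # kg of load to be cooled, placeholder
        BOX_CP = 1000.0        # J/(kg K), placeholder

        SWING = 30.0           # K, roughly the reported elastocaloric swing
        AMBIENT = 300.0        # K

        box_temp = AMBIENT
        for cycle in range(1, 11):
            # Steps 1-2: stress the alloy away from the box and dump the
            # released heat into the surroundings; the alloy returns to ambient.
            alloy_temp = AMBIENT
            # Step 3: release the stress; the alloy cools below ambient.
            alloy_temp -= SWING
            # Step 4: bring the cold alloy into contact with the box and let
            # them equilibrate (simple lumped-heat-capacity mixing).
            total_heat_capacity = ALLOY_MASS * ALLOY_CP + BOX_MASS * BOX_CP
            box_temp = (ALLOY_MASS * ALLOY_CP * alloy_temp
                        + BOX_MASS * BOX_CP * box_temp) / total_heat_capacity
            print(f"cycle {cycle}: box at {box_temp:.1f} K")
        # The box temperature creeps down towards (ambient - swing) as the
        # cycle repeats.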

    The researchers have also worked out a way to figure out which materials can exhibit such a large elastocaloric effect – or, in fact, just large caloric effects. The secret is a combination of three factors, all of which depend on the atomic arrangement. First, the material has to be ferroelastic: when mechanical stress is applied, the configuration of atoms needs to change spontaneously. Second, it needs to have “good mechanical properties” (quoted from the paper).

    The third factor depends on the unit cell volume. The unit cell is the smallest repeating unit of the arrangement. In the martensitic transformation shown above, it is one square in the undeformed grid; in an actual material, it would be a cube. According to the researchers, the more the volume of the unit cell changes during the martensitic transformation the better.

    So by maximising the contribution of each of these three factors, and then determining what the material composition would have to be for all of them to occur together, the researchers have given their peers some new tools to use to uncover other materials that display giant caloric effects. This way, they hope, new materials can be discovered to build the perfect elastocaloric refrigerator or air-conditioner. It is just another way to make our world a better place, though one you probably haven’t heard of.

  • Groundwater extinction

    In a report published on June 14, 2018, NITI Aayog, a policy think-tank established by the Government of India, claimed that 21 Indian cities would run out of their supply of groundwater by 2020. The report, especially this statistic, went on to be widely cited as a figure representing the water crisis currently facing the country (including multiple reports on The Wire). However, it appears now that this claim may not in fact be accurate.

    Joanna Slater, the India bureau chief of The Washington Post, reported through a series of tweets on June 28 that NITI Aayog’s claim could be the result of a questionable extrapolation of district-level data provided by the Central Ground Water Board (CGWB), a body under the Union ministry of water resources. The claim in the report itself is attributed to the World Bank, the World Resources Institute (WRI), Hindustan Times and The Hindu.

    However, according to Slater’s follow-ups, the WRI wasn’t the source of the claim, whereas other news reports had attributed it to the World Bank. When Slater reached out to the organisation, it denied knowledge of the claim’s provenance. After she reached out to NITI Aayog, it pointed its finger at the CGWB, which in turn denied having claimed that the 21 cities would not have access to groundwater after 2020.

    The eventual source turned out to be a CGWB report published in June 2017, a year before NITI Aayog’s report was out, with data updated until March 2013. It provided data showing that Indian cities (gauged at the district level) are using their respective supplies of groundwater faster than the resource is being replenished; the ongoing crisis in the city of Chennai is proof that this is true. But the report doesn’t account for groundwater replenishment efforts after 2013, or for contributions from “sources like lakes and reservoirs” (to use Slater’s words).

    Slater and others have said that faulty claims are not the way to illustrate this crisis, even if the crisis itself may be real. One unintended side-effect is that such reports might give the impression that we are in more trouble than we really are, which in turn could leave people feeling helpless, despondent and unwilling to act further.

    Second, at a time when both the state and central governments are being forced to pay attention to water issues, making a problem seem worse than it actually is could support solutions we don’t need at the expense of addressing problems that we ignored.

    For example, the BBC published a report in February last year stating that Bengaluru would soon run out of drinking and bathing water because the lakes surrounding the city weren’t clean enough. However, S. Vishwanath, a noted proponent of the sustainable use of water in the city, rebutted it on Citizen Matters, focusing on four reasons the BBC’s claim diverted attention from the actual problems (quoting verbatim):

    1. “Bengaluru never has depended on its lakes and tanks formally for its water supply since the commissioning of the Hesarghatta project in 1896
    2. Even if we imagine the population of the Bengaluru metropolitan area to be 2.5 crores, rainwater itself [comes up to] 109 litres per head per day
    3. Wastewater treatment and recycling is picking up, thanks to sustained pressure from civil society and courts
    4. Most … doomsday predictions actually don’t take into account that the groundwater table is pretty high in the city centre … due to the availability of Cauvery water and leakages getting recharged in the ground”

    In a similar vein, the Tamil Nadu state government plans to set up two more desalination plants to quench Chennai’s thirst. Given that the real problem in Chennai is that the city destroyed the rivers it banked on and paved over natural groundwater recharge basins, water-related crises in the future become opportunities for the government to usher in ‘development’ projects without addressing the underlying causes.

    The Wire, June 29, 2019

  • To be a depressed person reading about research on depression

    It’s a strangely unsettling experience to read about research on an affliction that one has, to understand how scientists are obtaining insights into it using a variety of techniques that allow them to look past the walls of the human body and into the mind, so to speak, with the intention of developing new therapeutic techniques or improving old ones. This is principally because it suggests, to me, that we – humankind – don’t scientifically know about X in toto whereas I – the individual sufferer – claim to understand what it is like to live with X.

    Of course, I concede that the experiment in question is an exercise in quantification and doesn’t seek (at least if its authors so intend) to displace my own experience of the condition. Nonetheless, the tension exists, especially when scientists claim to be able to model X with a set of equations.

    Do they suggest I’m a set of equations, that they claim to understand how I have been living my life for eight years using a bunch of symbols on paper through which they think they could divine my entire being?

    I have been learning, writing and reading about physics for the last decade and have been a science journalist and editor since 2012. Experiences in this time have allowed me a privileged view (mostly for the short span in which it could be assimilated) of what the scientific enterprise is, how it works, how scientific knowledge is organised, etc. As a result, I believe I am better placed to understand, for example, the particular mode of reductionism employed when scientists simulate a predetermined part of this or that condition in order to understand it better.

    This isn’t a blanket empathy, however; it’s more an admission of open-mindedness, such as it is. While not speaking about a specific experiment, I have come to understand that such de facto reductive experiments are necessary – especially when the evolution of certain significant parameters can be carefully controlled – because the corresponding results are otherwise impossible to deduce through other means, at least with the same quality. In fact, in my view, this is less reductionism and invisibilisation and more ansatz and heuristics.

    This is why I also see a flip side: the way scientists approach the problem, so to speak, has potential to redefine some aspects of my relationship with the affliction for the better. (It was a central part of my CBT programme.) To be clear, this isn’t about the prescriptive nature of what the scientists have been able to conclude through their studies and experiments but about the questions they chose to ask and the ways in which they decided to answer, and evaluate, them.

    For example, on June 17, the journal Nature Human Behaviour published a paper that concluded, based on reinforcement learning techniques, that “anxious or depressed humans change their behaviour much faster after something bad happens”, to quote from an explanatory post written by one of the authors. They were able to do so because, “for each real person – those with mood and anxiety symptoms and those without – we [could] generate an artificial computerised agent that mimics their behaviour.”

    Without commenting at all on the study’s robustness or the legitimacy of the paper, I’d say this sounds about right from personal experience: I display “mood and anxiety symptoms” and tend to play things very safe, which often means I’m very slow to have new experiences. Now, I have the opportunity to conduct a few experiments of my own to better ascertain that this is the case and then devise solutions, assisted by the study’s methods, that will help me eliminate this part of the problem. As the same note states, “Developing a deeper understanding of [how] symptoms emerge may eventually allow us to close [the] treatment gap” (with reference to the success rate of CBT and medication, apparently about 66-75%).
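
    To make the ‘changes behaviour faster after something bad happens’ idea concrete, here is a minimal sketch of a reinforcement-learning-style value update with separate learning rates for better-than-expected and worse-than-expected outcomes. This is my own illustrative toy, not the model used in the paper, and every number in it is made up.

        import random

        def update(value, outcome, lr_good=0.1, lr_bad=0.4):
            """Rescorla-Wagner-style update with asymmetric learning rates.
            A larger lr_bad means behaviour shifts faster after bad outcomes."""
            error = outcome - value
            lr = lr_good if error >= 0 else lr_bad
            return value + lr * error

        random.seed(0)
        value = 0.5  # initial expectation of a 'good' outcome
        for trial in range(10):
            outcome = 1.0 if random.random() < 0.7 else 0.0  # a mostly-good world
            value = update(value, outcome)
            print(f"trial {trial}: outcome={outcome:.0f}, expectation={value:.2f}")
        # With lr_bad > lr_good, a single bad outcome drags the expectation down
        # much more than a single good outcome pulls it back up.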

    Which brings me to the other thing about research on an affliction that one has: it exposes you. This may not seem like a significant problem but from the individual’s perspective, it can be. When a discovery that is specific to my condition is broadcast, I often feel, if only at first, that I am no longer in control of what people do and don’t know about me. Maybe “it’s textbook”, as they say, but I will never acknowledge that about myself even if it is, at whichever level, true, nor would I like others to believe that I am as predictable as a set of equations would have it – but at the same time I don’t want anyone to believe the method of interrogation employed in the study is illegitimate.

    Thankfully, this feeling often dissipates quickly because the public narrative, at least among scientists, who are also likely to be discussing the findings for longer, is often depersonalised. However, there is that brief period of heightened apprehension – a sense of social nudity, as it were – and I have wondered if it tempts people into conforming with preset templates of public conduct vis-à-vis their affliction: either be completely open about it or completely closed off. I chose to be open about it; fortunately, I am also very comfortable with being this way.

  • Can gravitational waves be waylaid by gravity?

    Yesterday, I learnt the answer is ‘yes’. Gravitational waves can be gravitationally lensed. It seems obvious once you think about it, but not something that strikes you (assuming you’re not a physicist) right away.

    When physicists solve problems relating to the spacetime continuum, they imagine it as a four-dimensional manifold: three of space and one of time. Objects exist in the bulk of this manifold and visualisations like the one below are what two-dimensional slices of the continuum look like. This unified picture of space and time was a significant advancement in the history of physics.

    While Hendrik Lorentz and Hermann Minkowski first noticed this feature in the early 20th century, they did so only to rationalise empirical data. Albert Einstein was the first physicist to fully figure out the why of it, through his theories of relativity.

    Specifically, according to the general theory, massive objects bend the spacetime continuum around themselves. Because light travels through the continuum, its path curves with the continuum when it passes near massive bodies. Seen head-on, a massive object – like a black hole – appears to encircle a light-source in its background in a ring of light. This is because the black hole’s mass has caused spacetime to curve around the black hole, creating a cosmic mirage of the light emitted by the object in its background (see video below) as seen by the observer. By focusing light flowing in different directions around it towards one point, the black hole has effectively behaved like a lens.
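
    For a rough sense of scale (not from the post): the angular radius of that ring of light is given by the standard point-lens formula θ_E = √(4GM/c² · D_ls/(D_l·D_s)). A small sketch, with the lens mass and distances as purely illustrative placeholders:

        import math

        G = 6.674e-11          # m^3 kg^-1 s^-2
        C = 2.998e8            # m/s
        SOLAR_MASS = 1.989e30  # kg
        MPC = 3.086e22         # metres in a megaparsec

        def einstein_radius(mass_kg, d_lens, d_source):
            """Angular Einstein radius (radians) of a point lens.
            Distances in metres; assumes d_lens < d_source and ignores
            cosmological corrections."""
            d_ls = d_source - d_lens
            return math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens * d_source))

        # Illustrative example: a million-solar-mass black hole halfway to a
        # source 100 Mpc away (all of these numbers are assumptions).
        theta = einstein_radius(1e6 * SOLAR_MASS, 50 * MPC, 100 * MPC)
        print(f"Einstein radius: {math.degrees(theta) * 3600 * 1000:.1f} milliarcseconds")
        # about 9 milliarcseconds for these made-up inputs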

    So much is true of light, which is a form of electromagnetic radiation. And just the way electrically charged particles emit such radiation when they accelerate, massive particles emit gravitational waves when they accelerate. These gravitational waves are said to carry gravitational energy.

    Gravitational energy is effectively the potential energy of a body due to its mass. Put another way, a more massive object would pull a smaller body in its vicinity towards itself faster than a less massive object would. The difference between these abilities is quantified as a difference between the objects’ gravitational energies.

    Such energy is released through the spacetime continuum when the mass of a massive object changes. For example, when the two black holes of a binary merge to form a larger one, the larger one usually has less mass than the two lighter ones put together. The difference arises because some of the mass has been converted into gravitational energy. In another example, when a massive object accelerates, it distorts its gravitational field; these distortions propagate outwards through the continuum as gravitational energy.
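
    A minimal sketch of that mass-to-energy bookkeeping, using E = Δm·c². The three-solar-mass figure is an illustrative choice (roughly the deficit reported for the first LIGO detection), not a number from this post.

        C = 2.998e8            # speed of light, m/s
        SOLAR_MASS = 1.989e30  # kg

        def radiated_energy(mass_deficit_solar):
            """E = (delta m) * c^2, in joules."""
            return mass_deficit_solar * SOLAR_MASS * C**2

        # Illustrative: a merger in which about three solar masses' worth of
        # mass is converted into gravitational waves.
        print(f"{radiated_energy(3.0):.2e} J")  # ~5.4e47 J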

    Scientists and engineers have constructed instruments on Earth to detect gravitational energy in the form of gravitational waves. When an object releases gravitational energy into the spacetime continuum, the energy ripples through the continuum the way a stone dropped in water instigates ripples on the surface. And just the way the ripples alternately stretch and compress the water, gravitational waves alternately stretch and compress the continuum as they move through it (at the speed of light).

    Instruments like the twin Laser Interferometer Gravitational-wave Observatories (LIGO) are designed to pick up on these passing distortions while blocking out all others. That is, when LIGO records a distortion passing through the parts of the continuum where its detectors are located, scientists will know it has just detected a gravitational wave. Because the frequency of a wave is directly proportional to its energy, scientists can use the properties of the gravitational wave as measured by LIGO to deduce the properties of its original source.

    (As you might have guessed, even a cat running across the room emits gravitational waves. However, the frequency of these waves is so very low that it is almost impossible to build instruments to measure them, nor are we likely to find such an exercise useful.)

    I also learnt that it is possible for instruments like LIGO to detect the gravitational lensing of gravitational waves. When an object like a black hole warps the spacetime continuum around it, it lenses light – and it is easy to see how it would lens gravitational waves as well. The lensing effect is the result not of the black hole’s ‘direct’ interaction with light as much as its distortion of the continuum. Ergo, anything that traverses the continuum, including gravitational waves, is bound to be lensed by the black hole.

    The human body evolved eyes to receive information encoded in visible light, so we can directly see lensed visible-light. However, we don’t possess any organs that would allow us to do the same thing with gravitational waves. Instead, we will need to use existing instruments, like LIGO, to detect these particular distortions. How do we do that?

    When two black holes are rapidly revolving around each other, getting closer and closer, they shed more and more of their potential energy as gravitational waves. In effect, the frequency of these waves is quickly increasing together with their amplitude, and LIGO registers this as a chirp (see video below). Once the two black holes have merged, both frequency and amplitude drop to zero (because a solitary spinning black hole does not emit gravitational waves).

    In the event of a lensing, however, LIGO will effectively detect two sets of gravitational waves. One set will arrive at LIGO straight from the source. The second set – originally sent off in a different direction – will become lensed towards LIGO. And because the lensed wave will effectively have travelled a longer distance, it will arrive a short while after the direct wave.

    However, LIGO will not register two chirps; in fact, it will register no chirps at all. Instead, the direct wave and the lensed wave will interfere with each other inside the instrument to produce a characteristically mixed signal. By the laws of wave mechanics, this signal will have increasing frequency, as in the chirp, but uneven amplitude. If it were sonified, the signal’s sound would climb in pitch but have irregular volume.
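
    A toy illustration of why the superposed signal would climb in pitch but have an uneven envelope: add a chirp to a delayed, weakened copy of itself. The waveform, delay and amplitudes below are all made up; this is not a model of a real LIGO strain.

        import numpy as np

        fs = 4096                      # samples per second
        t = np.arange(0, 1.0, 1 / fs)  # one second of signal

        # A toy chirp whose frequency climbs from ~30 Hz to ~330 Hz.
        direct = np.sin(2 * np.pi * (30 * t + 150 * t**2))

        # The 'lensed' copy: the same chirp, delayed and slightly weaker.
        delay = int(0.05 * fs)         # 50 ms; an arbitrary choice
        lensed = np.zeros_like(direct)
        lensed[delay:] = 0.6 * direct[:-delay]

        combined = direct + lensed

        # Approximate the envelope as the peak amplitude in 25 ms blocks. The
        # two copies drift in and out of phase as the frequency climbs, so the
        # envelope swings widely even though the pitch only ever rises.
        block = fs // 40
        n_blocks = len(combined) // block
        peaks = np.abs(combined[:n_blocks * block]).reshape(n_blocks, block).max(axis=1)
        print("envelope between", peaks.min().round(2), "and", peaks.max().round(2))
        # For the unlensed chirp alone, the block peaks would all be close to 1.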

    A statistical analysis published in early 2018 (in a preprint paper) claimed that LIGO should be able to detect gravitationally lensed gravitational waves at the rate of about once per year (and the proposed Einstein Telescope, at about 80 per year!). A peer-reviewed paper published in January 2019 suggested that LIGO’s design specs allow it to detect lensing effects due to a black hole weighing 10-100,000-times as much as the Sun.

    Just like ‘direct’ gravitational waves give away some information about their sources, lensed gravitational waves should also give something away about the objects that deflected them. So if we become able to use LIGO, and/or other gravitational wave detectors of the future, to detect gravitationally lensed gravitational waves, we will have the potential to learn even more about the universe’s inhabitants than gravitational-wave astronomy currently allows us to.

    Thanks to inputs from Madhusudhan Raman, @ntavish, @alsogoesbyV and @vaa3.

  • Alt-M.O.M.

    Posters for a new TV show called M.O.M. – The Women Behind Mission Mangal, produced by Ekta Kapoor and distributed by AltBalaji, look strange. One poster shows four women, presumably the show’s protagonists, flanking a large rocket in the centre that appears to be a Russian Soyuz launcher. Another shows their faces lined up over an ascending NASA Space Shuttle. However, ISRO launched the Mars Orbiter Mission (MOM) in November 2013 with a PSLV XL rocket.

    I wrote this up for The Wire, using it as an opportunity to discuss ISRO’s image-sharing policies and the still-ambiguous guidelines that surround it (over and beyond the Indian government’s occasional tendency to change URL structures on official websites without so much as a 302 redirect). Once my piece was published, I promptly received a call from an AltBalaji spokesperson who said they were “contractually obligated” to not use any official symbols or names because the show was a fictional adaptation.

    This was news to me, if only because Kapoor had written on Instagram that the show was ‘partly fictional’ – and for another reason as well. AltBalaji’s marketing exercise clearly wants to ride the wave of popularity that ISRO’s MOM continues to enjoy. If it didn’t, it wouldn’t have tried to shoehorn the show’s name into the same acronym, instead of picking one of the 17,575 other options it had. According to AltBalaji’s statement, their M.O.M. stands for “Mission Over Mars”, which doesn’t even make sense – but hey.

    With some snooping around, I also found that while NASA has a pretty relaxed image-sharing policy, which covers the use of the Space Shuttle image on poster #2, Roscosmos is stricter: reusing its images for commercial purposes requires permission first. Based on my conversation with AltBalaji, it didn’t seem like they’d obtained such permission. As @zingaroo pointed out on Twitter, the producers could simply have used the image of a completely made-up rocket, obviating the need for anyone’s permission.

    They didn’t, which only makes it seem more and more like there’s an opportunism at work here that AltBalaji won’t admit to but will still cash in on, all the while providing a confused picture of what really is going on.

  • Making history at the speed of light

    Last week, Sophia Gad-Nasr, an astroparticle physicist and PhD student at University of California, Irvine, tweeted this question:

    To which I replied:

    Once you start thinking about it, this is a really mind-boggling thing. A part of history – as in the past – has a physical character. This is because the fastest anything in the universe can travel, including information, is at the speed of light.

    In this regard, history is like the blockchain: it’s regarded as history only if multiple people, and not just you, are able to agree on what exactly happened (just like a cryptocurrency transaction is acknowledged only if all members of the blockchain have registered it individually). So if you know something and you’d like to have your friend know it as well, you ping them on WhatsApp, make a call, shout it across the room, etc. None of these messages can travel faster than at the speed of light in vacuum.

    As a result, history itself – as information encoded in physical mediums – cannot propagate faster than at the speed of light. Of course, you can nitpick that history doesn’t travel and that it’s communication that’s limited to the speed of light, to which I’d retort with the claim that history is made at the speed of light. And this claim has many, many consequences for our knowledge of the universe.

    For example, we know that the universe is expanding because a mysterious form of energy, called dark energy, is pulling it apart, faster and faster. While the effects thus far can only be experienced at the intergalactic scale, it’s plausible that there is a point of time in the future when the universe will be expanding so fast that its pace will outstrip the speed at which we can communicate, leaving us stranded in a volume of spacetime that we can never, ever communicate beyond and past which information from the outside won’t reach us. (I discussed this in greater detail in June 2016.)

    For another, astronomers and cosmologists who want to know more about what the early universe could have looked like need simply to build more powerful telescopes that gaze deeper into the cosmos. This is evident in the formulation of the unit of distance called the light-year: it is the distance light travels in one year (in vacuum, about 9.46 trillion km). Therefore, light that is 100 years away from reaching us is likely to carry information from a century ago. Light that is billions of years away from reaching us is likely to carry information encoded billions of years ago.
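
    The arithmetic behind that figure, spelled out:

        c = 299_792_458                     # speed of light in vacuum, m/s
        seconds_per_year = 365.25 * 24 * 3600

        light_year_km = c * seconds_per_year / 1000
        print(f"1 light-year ≈ {light_year_km:.2e} km")
        # ≈ 9.46e12 km, i.e. about 9.46 trillion km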

    And to find this light – these photons – we need telescopes that can look billions of light-years into the depths of space. (Note: By ‘look’, I don’t mean that these telescopes snatch distant photons and transport them to our location; instead, they’re simply instruments that are sensitive enough to register photons considerably weakened in the course of their long voyage.) As of today, the farthest object astronomers have observed, and verified, is a galaxy named GN-z11 at a distance of 32 billion light-years.

    If you’re wondering how this is possible when the universe formed only 13.8 billion years ago, it’s because the universe has been expanding since. In fact, the farthest astronomers can observe today (on paper, at least) is a distance of about 46.5 billion light-years in any direction, making up a sphere known as the observable universe. Its outermost edge corresponds to a time 378,000 years after the Big Bang. Thanks to dark energy, the fraction this sphere constitutes of the whole universe is shrinking. Anyway, this means GN-z11 formed less than half a billion years after the Big Bang.
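
    If you want to check the 32-billion-light-year figure yourself, a short sketch using astropy’s built-in Planck 2018 cosmology does the job – assuming astropy (version 4.0 or later) is installed, and taking z ≈ 11.1, roughly the redshift reported for GN-z11 when it was discovered.

        # Requires astropy >= 4.0 for the Planck18 cosmology.
        from astropy.cosmology import Planck18
        import astropy.units as u

        z = 11.1  # approximately the redshift reported for GN-z11

        # Comoving distance: how far away the galaxy is 'today', accounting
        # for the expansion of the universe since its light was emitted.
        d = Planck18.comoving_distance(z).to(u.lyr)
        print(f"{d.value / 1e9:.1f} billion light-years")  # ~32

        # Lookback time: how long that light has been travelling.
        print(f"{Planck18.lookback_time(z):.2f}")          # ~13.4 Gyr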

    In 1941, Isaac Asimov published his short story Nightfall, whose plot centres on the moment the light of the last star visible in a world’s sky goes out and darkness falls. The expanding universe promises a similar moment in our own far future – when the light of the last star beyond our horizon twinkles out, never to be seen again. Though that moment will come to be because of the increasing vastness of space, Asimov’s story rightly identifies it as the onset of a perpetual claustrophobia, comparing it to the journey of a group of people through a dark tunnel for 15 minutes.

    ‘What was the matter with these people?’ asked Theremon finally.

    ‘Essentially the same thing that was the matter with you when you thought the walls of the room were crushing in on you in the dark. There is a psychological term for mankind’s instinctive fear of the absence of light. We call it “claustrophobia”, because the lack of light is always tied up with enclosed places, so that fear of one is fear of the other. You see?’

    ‘And those people of the tunnel?’

    ‘Those people of the tunnel consisted of those unfortunates whose mentality did not quite possess the resiliency to overcome the claustrophobia that overtook them in the Darkness. Fifteen minutes without light is a long time; you only had two or three minutes, and I believe you were fairly upset.

    ‘The people of the tunnel had what is called a “claustrophobic fixation”. Their latent fear of darkness and enclosed places had crystalized and become active, and, as far as we can tell, permanent. That’s what fifteen minutes in the dark will do.’

    There was a long silence, and Theremon’s forehead wrinkled slowly into a frown. ‘I don’t believe it’s that bad.’

    ‘You mean you don’t want to believe,’ snapped Sheerin. ‘You’re afraid to believe. Look out the window!’

    Theremon did so, and the psychologist continued without pausing. ‘Imagine darkness – everywhere. No light, as far as you can see. The houses, the trees, the fields, the earth, the sky – black! And stars thrown in, for all I know – whatever they are. Can you conceive it?’

    ‘Yes, I can,’ declared Theremon truculently.

    And Sheerin slammed his fist down upon the table in sudden passion. ‘You lie! You can’t conceive that. Your brain wasn’t built for the conception any more than it was built for the conception of infinity or of eternity. You can only talk about it. A fraction of the reality upsets you, and when the real thing comes, your brain is going to be presented with the phenomenon outside its limits of comprehension. You will go mad, completely and permanently! There is no question of it!’

    He added sadly, ‘And another couple of millennia of painful struggle comes to nothing. Tomorrow there won’t be a city standing unharmed…’

  • To explain the world

    Simplicity is a deceptively simple thing. Recently, a scientist who was trying to explain something in general relativity to me did so in the following way:

    One simple way to understand … is as follows. Imagine that one sets up spherical polar coordinates, so that space is described by r, theta, phi and time is described by t. Then in this frame what one would normally call a non-rotating observer is one who has no angular velocity in theta and phi i.e. if the proper time of the observer is tau, then {d theta over d tau} = {d phi over d tau} = 0.

    (Emphasis added)

    This is anything but simple, and this problem isn’t limited to this scientist alone. Lots of them regularly conflate explanation with elaboration. More recently, another scientist – by way of describing a peer’s achievements – simply listed them in chronological order. It was the perfect example of ‘tell, don’t show’:

    Starting with the discovery of strangeness, called Gell-Mann-Nishijima formula, the Eightfold Way of SU(3), current algebra, he finally reached the theory of strong interactions, namely quantum chromodynamics. So his name is there in all the components of the theory of strong interactions, now a part of Standard Model. His other fundamental contributions are in renormalisation group, an important part of quantum field theory and in the V-A form of weak interaction. He also proposed a mechanism by which neutrinos acquire very small masses, the so called the See-Saw mechanism. He had broad interests going beyond his contributions in theoretical physics.

    Explanation requires the explainer to speak multiple languages. For example, explaining the event horizon to someone in class X means being able to translate what you know from the language of graduate-level physics to the language of Newtonian mechanics, first principles of optics, simple geometric shapes and carefully chosen metaphors. It means enabling the listener to synthesise knowledge in other contexts based on what you have said. But not doing any of this, sticking to just one language and using more and more words from that language, cannot be an act of explanation, or even simplification, unless your interlocutor also speaks that language fluently.

    Ultimately, it seems that while not all scientists can also be good science writers, there is a part of the writing process on display here that precedes the writing itself, and which is less difficult to execute: the way you think. To be able to teach well and explain well, I think one needs to be able to think in ways that will mitigate epistemological disparities between two people such that the person with more knowledge empowers the one with less to climb up the knowledge ladder.

    This in turn requires one to examine the precise differences between why you know what you know and why your audience doesn’t know what you know. This is not the same as “the difference between what you know and what the audience knows”, because that is simply an exercise in comparison – an exercise in preserving the status quo, even. Instead, to know the why of the difference is also to know how the difference can be bridged – resulting in an exercise in eliminating disparity.