Scicomm

  • Disastrous hype

    This is one of the worst press releases accompanying a study I’ve seen:

    The headline and the body appear to have nothing to do with the study itself, which explores the creative properties of an explosion with certain attributes. However, the press office of the University of Central Florida has drafted a popular version that claims researchers – who are engineers more than physicists – have “detailed the mechanisms that could cause the [Big Bang] explosion, which is key for the models that scientists use to understand the origin of the universe.” I checked with a physicist, who agreed: “I don’t see how this is relevant to the Big Bang at all. Considering the paper is coming out of the department of mechanical and aerospace engineering, I highly doubt the authors intended for it to be reported on this way.”

    Press releases that hype results are often the product of an overzealous university press office working without inputs from the researchers who obtained those results, and this is probably the case here as well. The paper’s abstract and some quotes by one of the researchers, Kareem Ahmed of the University of Central Florida, indicate the study isn’t about the Big Bang but about similarities between “massive thermonuclear explosions in space and small chemical explosions on Earth”. However, the press release’s author slipped in a reference to the Big Bang because, hey, it was an explosion too.

    The Big Bang was unlike any stellar explosion: its material constituents were vastly different from anything that goes boom today – whether on Earth or in space – and physicists have various ideas about what could have motivated the bang to happen in the first place. The first supernovas are also thought to have occurred only a few hundred million years after the Big Bang. This said, Ahmed was quoted saying something that could have used more clarification in the press release:

    We explore these supersonic reactions for propulsion, and as a result of that, we came across this mechanism that looked very interesting. When we started to dig deeper, we realized that this is relatable to something as profound as the origin of the universe.

    Err…

  • The climate and the A.I.

    A few days ago, the New York Times and other major international publications sounded the alarm over a new study that claimed various coastal cities around the world would be underwater to different degrees by 2050. However, something seemed off; it couldn’t have been straightforward for the authors of the study to plot how much the sea-level rise would affect India’s coastal settlements. Specifically, the numbers required to calculate how many people in a city would be underwater aren’t readily available in India, if they exist at all. Without this bit of information, it’s easy to over- or underestimate certain outcomes for India on the basis of simulations and models. And earlier this evening, as if on cue, this thread appeared:

    This post isn’t a declaration of smugness (although that is tempting) but an attempt to turn your attention to one of Palanichamy’s tweets in the thread:

    One of the biggest differences between the developed and the developing worlds is clean, reliable, accessible data. There’s a reason USAfacts.org exists whereas in India, data discovery is as painstaking a part of the journalistic process as is reporting on it and getting the report published. Government records are fairly recent. They’re not always available at the same location on the web (data.gov.in has been remedying this to some extent). They’re often incomplete or not machine-readable. Every so often, the government doesn’t even publish the data – or changes how it’s obtained, rendering the latest dataset incompatible with previous versions.

    This is why attempts to model, in the same study, situations in significantly different parts of the world – i.e. developed and developing, not India and, say, Mexico – are likely to deviate from reality: the authors might have extrapolated the data for the Indian situation using methods derived from non-native datasets. According to Palanichamy, the sea-level rise study took AI’s help for this – and herein lies the rub. With this study as an example, there are only going to be more – and potentially more sensational – efforts to determine the effects of continued global heating on coastal assets, whether cities or factories, paralleling greater investments to deal with the consequences.

    In this scenario, AI, and algorithms in general, will only play a more prominent part in determining how, when and where our attention and money should be spent, and in controlling the extent to which people think scientists’ predictions and reality are in agreement. Obviously the deeper problem here lies with the entities that are responsible for collecting and publishing the data but aren’t doing so. However, given how the climate crisis is forcing the world’s governments to rapidly globalise their action plans, the developing world needs to muster the courage and clarity to slow down, and scrutinise the AI and other tools scientists use to offer their recommendations.

    It’s not a straightforward road from having the data to knowing what it implies for a city in India, a city in Australia and a city in Canada.

  • India’s Delhi-only air pollution problem

    I woke up this morning to a PTI report telling me Delhi’s air quality had fallen to ‘very poor’ on Deepavali, the ostensible Hindu festival of lights, with many people defying the Supreme Court’s direction to burst firecrackers only between 8 pm and 10 pm. This defiance is unsurprising: the Supreme Court’s direction didn’t stick in Delhi because, and not even though, the response to the pollution has been just Delhi-centric.

    In fact, the pollution is probably only considered a problem because Delhi is having trouble breathing, even though the national capital is the eleventh-most polluted city in the world, behind eight other Indian ones.

    The report also noted, “On Saturday, the Delhi government launched a four-day laser show to discourage residents from bursting firecrackers and celebrating Diwali with lights and music. During the show, laser lights were beamed in sync with patriotic songs and Ramayana narration.”

    So the air pollution problem rang alarm bells and the government solved just that problem. Nothing else was a problem so it solved nothing else. The beams of light the Delhi government shot up into the sky would have caused light pollution, disturbing insects, birds and nocturnal creatures. The sound would no doubt have been loud, disturbing animals and people in the area. It’s a mystery why we don’t have familial, intimate celebrations.

    There is a concept in environmental philosophy called the hyperobject: a dynamic super-entity that lots of people can measure and feel at the same time but not see or touch. Global warming is a famous hyperobject, described by certain attributes, including its prevalence and its shifting patterns. Delhi’s pollution has two hyperobjects. One is what the urban poor experience – a beast that gets in the way of daily life, that you can’t wish away (let alone fight), and which is invisible to everyone else. The other is the one in the news: stunted, inchoate and classist, it includes only air pollution because its effects have become unignorable; sound and light don’t feature in it – nor does anything even a degree removed from the singular sources of smoke and fumes.

    For example, someone (considered smart) recently said to me, “The city should collect trash better to avoid roadside garbage fires in winter.” Then what about the people who set those fires for warmth because they don’t have warm shelter for the night? “They will find another way.”

    The Delhi-centrism is also visible with the ‘green firecrackers’ business. According to the CSIR National Environmental Engineering Research Institute (NEERI), which developed the crackers, its scientists “developed new formulations for reduced emission light and sound emitting crackers”. But it turns out the reduction doesn’t apply to sound.

    The ‘green’ crackers’ novel features include “matching performance in sound (100-120 dBA) with commercial crackers”. A sound level of 100-120 dBA is debilitating. The non-crazy crackers clock about 60-80 dBA. (dB stands for decibels, a logarithmic measure of sound pressure; the ‘A’ corresponds to A-weighting, a scale that adjusts measurements according to the human ear’s sensitivity to different frequencies.)

    In 2014, during my neighbours’ spate of cracker-bursting, I “used an app to make 300 measurements over 5 minutes” from a distance of about 80 metres, and obtained the following readings:

    Min: 41.51 dB(A)
    Max: 83.88 dB(A)
    Avg.: 66.41 dB(A)
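
    To put those readings, and NEERI’s numbers, in perspective on the logarithmic scale: the formula below is the standard definition of sound pressure level, while the specific comparisons are my own quick sketch.

    ```python
    def pressure_ratio(db_difference):
        # Sound level L = 20 * log10(p / p0), so a difference of D dB
        # corresponds to a sound-pressure ratio of 10 ** (D / 20).
        return 10 ** (db_difference / 20)

    # 'Green' crackers (100-120 dBA) vs the non-crazy ones (60-80 dBA):
    print(pressure_ratio(100 - 80))       # 10x the sound pressure
    print(pressure_ratio(120 - 60))       # 1,000x at the extremes

    # The maximum reading above vs the average:
    print(pressure_ratio(83.88 - 66.41))  # ~7.5x
    ```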

    The Noise Pollution (Regulation and Control) Rules 2000 limit noise in the daytime (6 am to 10 pm) to 55 dB(A), and the fine for breaking the rules was just Rs 100, or $1.5, before the Supreme Court stepped in, taking cognisance of the air pollution during Deepavali. This penalty is all the more laughable considering Delhi was ranked the world’s second-noisiest city in 2017. There’s only so much the Delhi police, including the traffic police, can do with the 15 noise meters they’ve been provided.

    In February 2019, Romulus Whitaker, India’s ‘snake man’, expressed his anguish over a hotel next door to the Madras Crocodile Bank Trust blasting loud music that was “triggering aberrant behaviour” among the animals (to paraphrase him). If animals don’t concern you: the 2014 Heinz Nixdorf Recall study found noise is a risk factor for atherosclerosis. Delhi’s residents also have the “maximum amount of hearing loss proportionate to their age”.

    As Dr Deepak Natarajan, a Delhi-based cardiologist, wrote in 2015, “It is ironic that the people setting out to teach the world the salutatory effects of … quietness celebrate Yoga Day without a thought for the noise that we generate every day.”

    Someone else tweeted yesterday, after purchasing some ‘green’ firecrackers, that science “as always” (or something similar) provided the solution. But science has no agency: like a car, people drive it. It doesn’t ask questions about where the driver wants to go or complain when he drives too rashly. And in the story of fixing Delhi’s air pollution, the government has driven the car like Salman Khan.

  • New Scientist violates the laws of physics (updated)

    A new article in the New Scientist begins with a statement of Newton’s third law that is blissfully ignorant of the irony. The article’s headline is:

    The magazine is notorious for its use of sensationalist headlines and seems to have done it again. Jon Cartwright, the author of the article, has done a decent job of explaining the ‘helical drive’ proposed by a manager at NASA named David Burns, and hasn’t himself suggested that the drive violates any laws of physics. It seems more like someone else was responsible for the headline and decided to give it the signature New Scientist twist.

    The featured image is a disaster, showing concept art of Roger Shawyer’s infamous em-drive. Shawyer had claimed the device could in fact violate the laws of physics by converting the momentum of microwaves confined in a chamber into thrust. Various experts have debunked the em-drive as fantasy, but their caution against suggesting the laws of physics could be broken so easily appears to have been lost on the New Scientist.

    Update, 7.06 am, October 16, 2019: In a new article, Chris Lee at Ars Technica has explained why the helical drive won’t work, and comes down harshly on Burns for publicising his idea before getting it checked with his peers at NASA, which would’ve spared him the embarrassment that Lee dished out. That said, Lee is a professional physicist, and perhaps Cartwright isn’t entirely in the clear if the answer to why the helical drive won’t work is as straightforward as Lee makes it out to be.

    With the helical drive, Burns proposes to use an object that moves back and forth inside a box, bouncing off either end. Each bounce imparts momentum to the box but the net momentum after two bounces is zero because they’re in equal and opposite directions. But if the object could become heavier just before it strikes one end and lighter before it strikes the other, the box would receive a net ‘kick’ at one end and start moving in that direction.

    Burns then says if we could replace the object with a particle and the box with a particle accelerator, it should be possible to accelerate the particle in one direction, let it bounce off, then decelerate it in the other direction and recover most of the energy imparted to it, and repeat. This way, the whole setup can be made to constantly accelerate in one direction.

    The flip side is that mass-energy equivalence is central to Burns’s idea – but according to the theory of special relativity, in which it’s embedded, it’s really a mass-energy-momentum equivalence. As Lee put it, special relativity conserves energy and momentum together, which means a heavier particle bouncing off one end of the setup won’t keep accelerating the setup in its direction. Instead, when the particle becomes heavier and acquires more momentum, it does so by absorbing virtual photons from an omnipresent energy field. When the particle slows down, it emits these photons into the field around it.
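
    The relevant relation is the standard energy-momentum relation of special relativity – textbook physics rather than anything specific to Lee’s article – which ties a particle’s energy E, momentum p and rest mass m together:

    $$E^2 = (pc)^2 + (mc^2)^2$$

    A particle can’t simply become heavier mid-flight: any change on the mass side has to be balanced by energy and momentum exchanged with something else, which is what the next paragraph’s accounting describes.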

    According to special relativity and Newton’s third law, the release process will accelerate the setup and the absorption process will decelerate it. The particle knocking on either end is just incidental.

  • A revolutionary exoplanet

    In 1992, Aleksander Wolszczan and Dale Frail became the first astronomers to publicly announce that they had discovered planets outside the Solar System, orbiting the dense core of a dead star about 2,300 lightyears away. This event is considered to be the first definitive detection of exoplanets, short for ‘extrasolar planets’. However, Michel Mayor and Didier Queloz were recognised today with one half of the 2019 Nobel Prize for physics for discovering an exoplanet three years after Wolszczan and Frail did. This might be confusing – but it becomes clear once you stop to consider the planet itself.

    51 Pegasi b orbits a star named 51 Pegasi about 50 lightyears away from Earth. In 1995, Queloz and Mayor were studying the light and other radiation coming from the star when they noticed that it was wobbling ever so slightly. By measuring the star’s radial velocity and using an analytical technique called Doppler spectroscopy, Queloz and Mayor realised there was a planet orbiting it. Further observations indicated that the planet was a ‘hot Jupiter’, a giant planet with a surface temperature of ~1,000º C orbiting really close to the star.
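
    To get a sense of how small this wobble is, here’s a rough estimate of the star’s radial velocity using the standard semi-amplitude formula, assuming a circular, edge-on orbit and a planet much lighter than its star; the parameter values are approximate published figures for 51 Pegasi b, so treat the output as a ballpark:

    ```python
    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # solar mass, kg
    M_JUP = 1.898e27   # Jupiter mass, kg

    def rv_semi_amplitude(period, m_planet, m_star):
        """Radial-velocity semi-amplitude K = (2*pi*G/P)**(1/3) * m_p / m_star**(2/3)
        for a circular, edge-on orbit with m_planet << m_star."""
        return (2 * math.pi * G / period) ** (1 / 3) * m_planet / m_star ** (2 / 3)

    P = 4.23 * 86400        # orbital period: ~4.23 days, in seconds
    m_p = 0.47 * M_JUP      # roughly half of Jupiter's mass
    m_star = 1.06 * M_SUN   # 51 Pegasi, a Sun-like star

    print(rv_semi_amplitude(P, m_p, m_star))  # ~57 m/s
    ```

    A to-and-fro drift of about 57 m/s, in an object as massive as a star, is the signal Queloz and Mayor teased out of their spectra.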

    In 2017, Dutch and American astronomers studied the planet in even greater detail. They found its atmosphere was 0.01% water (a significant amount), it weighed about half as much as Jupiter and orbited 51 Pegasi once every four days.

    This was surprising. 51 Pegasi is a Sun-like star, meaning its brightness and colour are similar to the Sun’s. However, this ‘foreign’ system looked nothing like our own Solar System. It contained a giant planet much like Jupiter but which was a lot closer to its star than Mercury is to the Sun.

    Astronomers were startled because their ideas of what a planetary system should look like were based on what the Solar System looked like: the Sun at the centre, four rocky planets in the inner system, followed by gas- and ice-giants and then a large, ringed debris field in the form of an outer asteroid belt. Many researchers even thought hot Jupiters couldn’t exist. But the 51 Pegasi system changed all that.

    It was so different that Queloz and Mayor were at first met with some skepticism, including questions about whether they’d misread the data and whether the wobble they’d seen was some quirk of the star itself. However, as time passed, astronomers only became more convinced that they indeed had an oddball system on their hands. David Gray had penned a paper in 1997 arguing that 51 Pegasi’s wobble could be understood without requiring a planet to orbit it. He published another paper in 1998 correcting himself and lending credence to Queloz’s and Mayor’s claim. The duo received bigger support as they inspired other astronomers to take a second look at their own data and check if they’d missed any telltale signs of a planet. In time, these astronomers would discover more hot Jupiters, also called pegasean planets, orbiting conventional stars.

    Through the next decade, it would become increasingly clear that the oddball system was in fact the Solar System. To date, astronomers have confirmed the existence of over 4,100 exoplanets. None of them belong to planetary systems that look anything like our own. More specifically, the Solar System appears to be unique because it doesn’t have any planets really close to the Sun; it doesn’t have any planets heavier than Earth but lighter than Neptune – an unusually large mass gap; and most of its planets revolve in nearly circular orbits.

    Obviously the discovery forced astronomers to rethink how the Solar System could have formed versus how typical exoplanetary systems form. For example, scientists were able to develop two competing models for how hot Jupiters could have come to be: either by forming farther away from the host star and then migrating inwards or by forming much closer to the star and just staying there. But as astronomers undertook more observations of stars in the universe, they realised the region closest to the star often doesn’t have enough material to clump together to form such large planets.

    Simulations also suggest that when a Jupiter-sized planet migrates from 5 AU to 0.1 AU, its passage could make way for Earth-mass planets to later form in the star’s habitable zone. The implication is that planetary systems that have hot Jupiters could also harbour potentially life-bearing worlds.

    But there might not be many such systems. It’s notable that fewer than 10% of exoplanets are known to be hot Jupiters (only seven of them have an orbital period of less than one Earth-day). They’re just more prominent in the news as well as in the scientific literature because astronomers think they’re more interesting objects of study, further attesting to the significance of 51 Pegasi b. But even in their low numbers, hot Jupiters have been raising questions.

    For example, according to data obtained by the NASA Kepler space telescope, which looked for the fleeting shadows that planets passing in front of their stars cast on the starlight, only 0.3-0.5% of the stars it observed had hot Jupiters. But observations using the radial velocity method, which Queloz and Mayor had also used in 1995, indicated a prevalence of 1.2%. Jason Wright, an astronomer at the Pennsylvania State University, wrote in 2012 that this discrepancy signalled a potentially deeper mystery: “It seems that the radial velocity surveys, which probe nearby stars, are finding a ‘hot-Jupiter rich’ environment, while Kepler, probing much more distant stars, sees lots of planets but hardly any hot Jupiters. What is different about those more distant stars? … Just another exoplanet mystery to be solved…”.

    All of this is the legacy of the discovery of 51 Pegasi b. And given the specific context in which it was discovered and how the knowledge of its existence transformed how we think about our planetary neighbourhoods and neighbourhoods in other parts of the universe, it might be fair to say the Nobel Prize for Queloz and Mayor is in recognition of their willingness to stand by their data, seeing a planet where others didn’t.

    The Wire
    October 8, 2019

  • Disentangling entanglement

    There has been considerable speculation about whether the winners of this year’s Nobel Prize for physics, due to be announced at 2.30 pm IST on October 8, will include Alain Aspect and Anton Zeilinger. They’ve both made significant experimental contributions related to quantum information theory and the fundamental nature of quantum mechanics, including entanglement.

    Their work, at least the potentially prize-winning part of it, is centred on a class of experiments called Bell tests. If you perform a Bell test, you’re essentially checking the extent to which the rules of quantum mechanics are compatible with the rules of classical physics.

    Whether or not Aspect, Zeilinger and/or others win a Nobel Prize this year, what they did achieve is worth putting in words. Of course, many other writers, authors and scientists have already performed this activity. I’d like to redo it, if only because writing helps commit things to memory – and because the various performers of Bell tests are likely to win some prominent prize eventually, given how modern technologies like quantum cryptography are inflating the importance of their work, and when that happens I’ll have ready reference material.

    (There is yet another reason Aspect and Zeilinger could win a Nobel Prize. As with the medicine prizes, many of whose laureates previously won a Lasker Award, many of the physics laureates have previously won the Wolf Prize. And Aspect and Zeilinger jointly won the Wolf Prize for physics in 2010 along with John Clauser.)

    The following elucidation is divided into two parts: principles and tests. My principal sources are Wikipedia, some physics magazines, Quantum Physics for Poets by Leon Lederman and Christopher Hill (2011), and a textbook of quantum mechanics by John L. Powell and Bernd Crasemann (1998).

    §

    Principles

    From the late 1920s, Albert Einstein began to publicly express his discomfort with the emerging theory of quantum mechanics. He claimed that a quantum mechanical description of reality allowed “spooky” things that the rules of classical mechanics, including his theories of relativity, forbade. He further contended that classical mechanics and quantum mechanics couldn’t both be true at the same time and that there had to be a deeper theory of reality with its own, thus-far hidden variables.

    Remember the Schrödinger’s cat thought experiment: place a cat in a box with a flask of poison rigged to shatter if a radioactive atom decays, and close the lid; until you open the box to make an observation, the cat may be considered to be both alive and dead. Erwin Schrödinger came up with this example to ridicule the implications of Niels Bohr’s and Werner Heisenberg’s idea that the quantum state of a subatomic particle, like an electron, was described by a mathematical object called the wave function.

    The wave function has many unique properties. One of these is superposition: the ability of an object to exist in multiple states at once. Another is decoherence (although this isn’t a property as much as a phenomenon common to many quantum systems): when you observe the object, it probabilistically collapses into one fixed state.
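
    In the standard notation – a generic two-state example of my own, not something from the sources above – a superposition reads:

    $$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

    where a measurement yields the state |0⟩ with probability |α|² and the state |1⟩ with probability |β|², and the system thereafter stays in whichever state it collapsed into.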

    Imagine having a box full of billiard balls, each of which is both blue and green at the same time. But the moment you open the box to look, each ball decides to become either blue or green. This (metaphor) is on the face of it a kooky description of reality. Einstein definitely wasn’t happy with it; he believed that quantum mechanics was just a theory of what we thought we knew and that there was a deeper theory of reality that didn’t offer such absurd explanations.

    In 1935, Einstein, Boris Podolsky and Nathan Rosen advanced a thought experiment based on these ideas that seemed to yield ridiculous results, in a deliberate effort to provoke their ‘opponents’ to reconsider their ideas. Say there’s a heavy particle with zero spin – a property of elementary particles – inside a box in Bangalore. At some point, it decays into two smaller particles. One of these ought to have a spin of 1/2 and the other a spin of -1/2 to abide by the conservation of spin. You send one of these particles to your friend in Chennai and the other to a friend in Mumbai. Until these people observe their respective particles, each particle is to be considered to be in a superposition of the two spin states. In the final step, your friend in Chennai observes her particle and measures a spin of -1/2. This immediately implies that the particle sent to Mumbai should have a spin of 1/2.

    If you’d performed this experiment with two billiard balls instead, one blue and one green, the person in Bangalore would’ve known which ball went to which friend. But in the Einstein-Podolsky-Rosen (EPR) thought experiment, the person in Bangalore couldn’t have known which particle was sent to which city, only that each particle existed in a superposition of two states, spin 1/2 and spin -1/2. This situation was unacceptable to Einstein because it was inimical to certain assumptions on which the theories of relativity were founded.

    The moment the friend in Chennai observed her particle to have spin -1/2, the one in Mumbai would have known without measuring her particle that it had a spin of 1/2. If it didn’t, the conservation of spin would be violated. If it did, then the wave function of the Mumbai particle would have collapsed to a spin 1/2 state the moment the wave function of the Chennai particle had collapsed to a spin -1/2 state, indicating faster-than-light communication between the particles. Either way, quantum mechanics could not produce a sensible outcome.

    Two particles whose wave functions are linked the way they were in the EPR paradox are said to be entangled. Einstein memorably described entanglement as “spooky action at a distance”. He used the EPR paradox to suggest quantum mechanics couldn’t possibly be legit, certainly not without messing with the rules that made classical mechanics legit.

    So the question of whether quantum mechanics was a fundamental description of reality or whether there were any hidden variables representing a deeper theory stood for nearly thirty years.

    Then, in 1964, a Northern Irish physicist at CERN named John Stewart Bell figured out a way to answer this question using what has since been called Bell’s theorem. He defined a set of inequalities – statements of the form “P is greater than Q” – that were definitely true for classical mechanics. If an experiment conducted with electrons, for example, also concluded that “P is greater than Q”, it would support the idea that quantum mechanics (vis-à-vis electrons) has ‘hidden’ parts that would explain things like entanglement more along the lines of classical mechanics.

    But if an experiment couldn’t conclude that “P is greater than Q“, it would support the idea that there are no hidden variables, that quantum mechanics is a complete theory and, finally, that it implicitly supports spooky actions at a distance.
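
    The best-known inequality of this form is the CHSH inequality, which bounds a combination S of correlations to |S| ≤ 2 in any local hidden-variable theory. Here’s a minimal sketch, assuming the textbook quantum prediction E(a, b) = −cos(a − b) for spin measurements on a singlet pair at detector angles a and b; the angles below are the standard choices that maximise the quantum violation:

    ```python
    import math

    def E(a, b):
        # Quantum mechanics' predicted correlation between spin measurements
        # on a singlet pair, with detectors set at angles a and b (radians).
        return -math.cos(a - b)

    def chsh(a, a2, b, b2):
        # CHSH combination: any local hidden-variable theory keeps |S| <= 2.
        return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

    # Detector settings that maximise the quantum violation.
    a, a2 = 0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    print(abs(chsh(a, a2, b, b2)))  # ~2.828, i.e. 2*sqrt(2) > 2
    ```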

    The theorem itself took the form of a statement. To quote myself from a 2013 post (emphasis added):

    for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or [faster-than-light] communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed [like electrons or protons].

    Zeilinger and Aspect, among others, are recognised for having performed these experiments, called Bell tests.

    Technological advancements through the late 20th and early 21st centuries have produced more and more nuanced editions of different kinds of Bell tests. However, one thing has been clear from the first tests, in 1981, to the last: they have all consistently violated Bell’s inequalities, indicating that quantum mechanics does not have hidden variables and our reality does allow bizarre things like superposition and entanglement to happen.

    To quote from Quantum Physics for Poets (p. 214-215):

    Bell’s theorem addresses the EPR paradox by establishing that measurements on object a actually do have some kind of instant effect on the measurement at b, even though the two are very far apart. It distinguishes this shocking interpretation from a more commonplace one in which only our knowledge of the state of b changes. This has a direct bearing on the meaning of the wave function and, from the consequences of Bell’s theorem, experimentally establishes that the wave function completely defines the system in that a ‘collapse’ is a real physical happening.


    §

    Tests

    Though Bell defined his inequalities in such a way that they would lend themselves to study in a single test, experimenters often stumbled upon loopholes in the result as a consequence of the experiment’s design not being robust enough to evade quantum mechanics’ propensity to confound observers. Think of a loophole as a caveat; an experimenter runs a test and comes to you and says, “P is greater than Q but…”, followed by an excuse that makes the result less reliable. For a long time, physicists couldn’t figure out how to get rid of all these excuses and just be able to say – or not say – “P is greater than Q”.

    If millions of photons are entangled in an experiment, the detectors used to detect, and observe, the photons may not be good enough to detect all of them or the photons may not survive their journey to the detectors properly. This fair-sampling loophole could give rise to doubts about whether a photon collapsed into a particular state because of entanglement or if it was simply coincidence.

    To prevent this, physicists could bring the detectors closer together but this would create the communication loophole. If two entangled photons are separated by 100 km and the second observation is made more than 0.0003 seconds after the first, it’s still possible that optical information could’ve been exchanged between the two particles. To sidestep this possibility, the two observations have to be separated by a distance greater than what light could travel in the time it takes to make the measurements. (Alain Aspect and his team also pointed their two detectors in random directions in one of their tests.)
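
    That 0.0003-second figure is just the light-travel time across 100 km, as a one-line check confirms:

    ```python
    c = 299_792_458   # speed of light, m/s
    print(100e3 / c)  # ~0.00033 s for light to cross 100 km
    ```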

    Third, physicists can tell if two photons received in separate locations were in fact entangled with each other, and not other photons, based on the precise time at which they’re detected. So unless physicists precisely calibrate the detection window for each pair, hidden variables could have time to interfere and induce effects the test isn’t designed to check for, creating a coincidence loophole.

    If physicists perform a test such that detectors repeatedly measure the particles involved in, say, two labs in Chennai and Mumbai, it’s not impossible for statistical dependencies to arise between measurements. To work around this memory loophole, the experiment simply has to use different measurement settings for each pair.

    Apart from these, experimenters also have to minimise any potential error within the instruments involved in the test. If they can’t eliminate the errors entirely, they will then have to modify the experimental design to compensate for any confounding influence due to the errors.

    So the ideal Bell test – the one with no caveats – would be one where the experimenters are able to close all loopholes at the same time. In fact, physicists soon realised that the fair-sampling and communication loopholes were the more important ones.

    In 1972, John Clauser and Stuart Freedman performed the first Bell test by entangling photons and measuring their polarisation at two separate detectors. Aspect led the first group that closed the communication loophole, in 1982; he subsequently conducted more tests that improved his first results. Anton Zeilinger and his team made advancements on the fair-sampling loophole.

    One particularly important experimental result showed up in August 2015: Ronald Hanson and his team at the Delft University of Technology, in the Netherlands, had found a way to close the fair-sampling and communication loopholes at the same time. To quote Zeeya Merali’s report in Nature News at the time (lightly edited for brevity):

    The researchers started with two unentangled electrons sitting in diamond crystals held in different labs on the Delft campus, 1.3 km apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There, the two photons were entangled with each other – and this caused both their partner electrons to become entangled, too. … the team managed to generate 245 entangled pairs of electrons over … nine days. The team’s measurements exceeded Bell’s bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole, too.

    By December 2015, Anton Zeilinger and co. were able to close the communication and fair-sampling loopholes in a single test with a 1-in-2-octillion chance of error, using a different experimental setup from Hanson’s. In fact, Zeilinger’s team actually closed three loopholes including the freedom-of-choice loophole. According to Merali, this is “the possibility that hidden variables could somehow manipulate the experimenters’ choices of what properties to measure, tricking them into thinking quantum theory is correct”.

    But at the time Hanson et al announced their result, Matthew Leifer, a physicist at the Perimeter Institute in Canada, told Nature News (in the same report) that because “we can never prove that [the converse of freedom of choice] is not the case, … it’s fair to say that most physicists don’t worry too much about this.”

    We haven’t gone into much detail about Bell’s inequalities themselves but if our goal is to understand why Aspect and Zeilinger, and Clauser too, deserve to win a Nobel Prize, it’s because of the ingenious tests they devised to test Bell’s, and Einstein’s, ideas and the implications of what they’ve found in the process.

    For example, Bell crafted his test of the EPR paradox in the form of a ‘no-go theorem’: if it satisfied certain conditions, a theory was designated non-local, like quantum mechanics; if it didn’t satisfy all those conditions, the theory would be classified as local, like Einstein’s special relativity. So Bell tests are effectively gatekeepers that can attest whether or not a theory – or a system – is behaving in a quantum way, and each loophole is like an attempt to hack the attestation process.

    In 1991, Artur Ekert, who would later be acknowledged as one of the inventors of quantum cryptography, realised this perspective could have applications in securing communications. Engineers could encode information in entangled particles, send them to remote locations, and allow detectors there to communicate with each other securely by observing these particles and decoding the information. The engineers can then perform Bell tests to determine if anyone might be eavesdropping on these communications using one or some of the loopholes.
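
    Here’s a conceptual sketch of the idea – a simplification of Ekert’s 1991 scheme, not its full specification, and the toy model below captures only the protocol’s bookkeeping, not real quantum correlations. Alice and Bob measure their halves of each entangled pair in randomly chosen bases; rounds where the bases match yield perfectly anti-correlated bits for the key, while the mismatched rounds are sacrificed to a CHSH-style test whose value, if it falls short of the quantum bound, flags possible eavesdropping.

    ```python
    import random

    def measure_pair():
        """Toy model of one entangled pair: random basis choices, with
        perfectly anti-correlated outcomes when the bases happen to match."""
        a_basis, b_basis = random.randint(0, 1), random.randint(0, 1)
        a_bit = random.randint(0, 1)
        b_bit = 1 - a_bit if a_basis == b_basis else random.randint(0, 1)
        return a_basis, b_basis, a_bit, b_bit

    key, test_rounds = [], []
    for _ in range(1000):
        a_basis, b_basis, a_bit, b_bit = measure_pair()
        if a_basis == b_basis:
            key.append(a_bit)  # Bob flips his bit to recover Alice's
        else:
            test_rounds.append((a_bit, b_bit))  # fodder for the Bell test

    # In the real protocol, the test rounds feed a CHSH estimate; a value
    # short of 2*sqrt(2) warns the pairs may have been measured in transit.
    print(len(key), "key bits;", len(test_rounds), "rounds for the Bell test")
    ```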

  • The virtues and vices of reestablishing contact with Vikram

    There was a PTI report yesterday that the Indian Space Research Organisation (ISRO) is still trying to reestablish contact with the Vikram lander of the Chandrayaan 2 mission. The lander had crashed onto the lunar surface on September 7 instead of touching down. The incident severed its communications link with ISRO ground control, leaving the organisation unsure about the lander’s fate although all signs pointed to it being kaput.

    Subsequent attempts to photograph the designated landing site using the Chandrayaan 2 orbiter as well as the NASA Lunar Reconnaissance Orbiter didn’t provide any meaningful clues about what could’ve happened except that the crash-landing could’ve smashed Vikram to pieces too small to be observable from orbit.

    When reporting on ISRO or following the news about developments related to it, the outside-in view is everything. It’s sort of like a mapping between two sets. If the first set represents the relative significance of various projects within ISRO and the second the significance as perceived by the public according to what shows up in the news, then Chandrayaan 2, human spaceflight and maybe the impending launch of the Small Satellite Launch Vehicle are going to look like moderately sized objects in set 1 but really big in set 2.

    The popular impression of what ISRO is working on is skewed towards projects that have received greater media coverage. This is a pithy truism but it’s important to acknowledge because ISRO’s own public outreach is practically nonexistent, so there are no ‘normalising’ forces working to correct the skew.

    This is why it seems like a problem when ISRO – after spending over a week refusing to admit that the Chandrayaan 2 mission’s surface component had failed, and with its chairman K. Sivan echoing an internal review’s claim that the mission had in fact succeeded to the extent of 98% – says it’s still trying to reestablish contact without properly describing what that means.

    Reestablishing contact is all you hear about vis-à-vis the Indian space programme in the news these days, if not astronaut training or the ‘mini-PSLV’ having had a customer even before it had a test flight. This could contribute to the unfortunate impression that these are ISRO’s priorities at the moment when in fact the relative significance of these missions – i.e. their size within set 1 – is arranged differently.

    For example, the idea of trying to reestablish contact with the Vikram lander has been featured in at least three news reports in the last week, subsequently amplified through republishing and syndication, whereas the act of reestablishing contact could be as simple as one person pointing an antenna in the general direction of the Vikram lander, blasting a loud ‘what’s up’ message at radio frequencies and listening intently for a ‘not much’ reply. On the other hand, there’s a bunch of R&D, manufacturing practices and space-science discussions ISRO’s currently working on but which receive little to no coverage in the mainstream press.

    So when Sivan repeatedly states across many days that they’re still trying to reestablish contact with Vikram, or when he’s repeatedly asked the same question by journalists with no imagination about ISRO’s breadth and scope, it may not necessarily signal a reluctance to admit failure in the face of overwhelming evidence that the mission has in fact failed (e.g., apart from not being able to visually spot the lander, the lander’s batteries aren’t designed to survive the long and freezing lunar night, so it’s extremely unlikely that it has power to respond to the ‘what’s up’). It could just be that either Sivan, the journalists or both – but it’s unlikely to be the journalists unless they’re aware of the resources it takes to attempt to reestablish contact – are happy to keep reminding the people that ISRO’s going to try very, very hard before it can abandon the lander.

    Such metronomic messaging is politically favourable as well, maintaining the Chandrayaan 2 mission’s place in the nationalist techno-pantheon. But it should also be abundantly clear at this point that Sivan’s decision to position himself as the organisation’s sole point of contact for media professionals at the first hint of trouble, his organisation’s increasing opacity to public view, if not scrutiny, and many journalists’ inexplicable lack of curiosity about things to ask the chairman all feed one another, ultimately sidelining other branches of ISRO and the public interest itself.

  • Authority, authoritarianism and a scicomm paradox

    The case of Ustad – the tiger shifted from its original habitat in the Ranthambore sanctuary to Sajjangarh Zoo in 2015 after it killed three people – gave me a sharp reminder to better distinguish between activists and experts, irrespective of how right the activists appear to be. Local officials were in favour of the relocation to make life easier for villagers whose livelihoods depended on the forest, whereas activists wanted Ustad brought back to Ranthambore, citing procedural irregularities and poor living conditions, and presuming to know what was best for the animal.

    One vocal activist at the agitation’s forefront, to whose suggestions I had deferred when covering this story, turned out to be a dentist in Mumbai, far removed from the rural reality that Ustad and the villagers cohabited as well as from the opinions and priorities of conservationists about how Ustad should be handled. As I would later find out, almost all experts (excluding the two or three I’d spoken to) agreed Ustad had to be relocated and that doing so wasn’t as big a deal as the activists made it out to be, notwithstanding the irregularities.

    I have never treated activists as experts since but many other publications continue to make the same mistake. There are many problems with this false equivalence, including the equation of expertise with amplitude, insofar as it pertains to scientific activity, for example conservation, climate change, etc. Another issue is that activists – especially those who live and work in a different area and who haven’t accrued the day-to-day experiences of those whose rights they’re shouting for – tend to make decisions on principle and disfavour choices motivated by pragmatic thinking.

    Third, when some experts join forces with activists to render themselves or their possibly controversial opinions more visible, the journalist’s – and by extension the people’s – road to the truth becomes even more convoluted than it should be. Finally, of course, using activists in place of experts in a story isn’t fair to activists themselves: activism has its place in society, and it would be a disservice to depict activism as something it isn’t.

    This alerts us to the challenge of maintaining a balancing act.

    One of the trends of the 21st century has been the democratisation of information – to liberate it from technological and economic prisons and make it available and accessible to people who are otherwise unlikely to have access to it. This in turn has made many people self-proclaimed experts of this or that, from animal welfare to particle physics. And this in turn is mostly good because, in spite of faux expertise and the proliferation of fake news, democratising the availability of information (but not its production; that’s a different story) empowers people to question authority.

    Indeed, it’s possible fake news is as big a problem as it is today because many governments and other organisations have deployed it as a weapon against the availability of information and distributed mechanisms to verify it. Information is wealth after all and it doesn’t bode well for authoritarian systems predicated on the centralisation of power to have the answers to most questions available one Google, Sci-Hub or Twitter search away.

    The balancing act comes alive in the tension between preserving authority without imposing an authoritarian structure. That is, where do you draw the line?

    For example, Eric Balfour isn’t the man you should be listening to to understand how killer whales interpret and exercise freedom (see the tweet below); you should be speaking to an animal welfare expert instead. However, the question arises whether the expert is a hegemon here, furthering an agenda on behalf of the research community to which she belongs by delegitimising knowledge obtained from sources other than her textbooks. (Cf. scientism.)

    This impression is solidified when scientists don’t speak up, choosing to remain within their ivory towers, and weakened when they do speak up. This isn’t to say all scientists should also be science communicators – that’s a strawman – but that all scientists should be okay with sharing their comments with the press with reasonable preconditions.

    In India, for example, very, very few scientists engage freely with the press and the people, and even fewer speak up against the government when the latter misfires (which is often). Without dismissing the valid restrictions and reservations that some of them have – including not being able to trust many journalists to know how science works – it’s readily apparent that the number of scientists who do speak up is minuscule relative to the number of scientists who can.

    An (English-speaking) animal welfare expert is probably just as easy to find in India as they might be in the US but consider palaeontologists or museologists, who are harder to find in India (sometimes you don’t realise that until you’re looking for a quote). When they don’t speak up – to journalists, even if not of their own volition – during a controversy, even as they also assert that only they can originate true expertise, the people are left trapped in a paradox, sometimes even branded fools to fall for fake news. But you can’t have it both ways, right?

    These issues stem from two roots: derision and ignorance, both of science communication.

    Of the scientists endowed with sufficient resources (including personal privilege and wealth): some don’t want to undertake scicomm, some don’t know enough to make a decision about whether to undertake scicomm, and some wish to undertake scicomm. Of these, scientists of the first type – who actively resist communicating research, whether theirs or others’, believing it to be a lesser or even undesirable enterprise – wish to perpetuate their presumed authority and their authoritarian ‘reign’ by hoarding their knowledge. They are responsible for the derision.

    These people are responsible at least in part for the emergence of Balfouresque activists: celebrity-voices that amplify issues but wrongly, with or without the support of larger organisations, often claiming to question the agenda of an unholy union of scientists and businesses, alluding to conspiracies designed to keep the general populace from asking too many questions, and ultimately secured by the belief that they’re fighting authoritarian systems and not authority itself.

    Scientists of the second type, who are unaware of why science communication exists and its role in society, are obviously the ignorant.

    For example, when scientists from the UK had a paper published in 2017 about the Sutlej river’s connection to the Indus Valley civilisation, I reached out to two geoscientists for comment, after having ascertained that they weren’t particularly busy or anything. Neither had replied after 48 hours, not even with a ‘no’. So I googled “fluvio-deltaic morphology”, picked the first result that was a university webpage and emailed the senior-most scientist there. This man, Maarten Kleinhans at the University of Utrecht, wrote back almost immediately and in detail. One of the two geoscientists wrote me a month later: “Please check carefully, I am not an author of the paper.”

    More recently, the 2018 Young Investigators’ Meet in Guwahati included a panel discussion on science communication (of which I was part). After fielding questions from the audience – mostly from senior scientists already convinced of the need for good science communication, such as B.K. Thelma and Roop Malik – and breaking for tea, another panelist and I were mobbed by young biologists completely baffled as to why journalists wanted to interrogate scientific papers when that’s exactly why peer-review exists.

    All of this is less about fighting quacks bearing little to no burden of proof and more about responding to the widespread and cheap availability of information. Like it or not, science communication is here to stay because it’s one of the more credible ways to suppress the undesirable side-effects of implementing and accessing a ‘right to information’ policy paradigm. Similarly, you can’t have a right to information together with a right to withhold information; the latter has to be defined in the form of exceptions to the former. Otherwise, prepare for activism to replace expertise.

  • Good writing is an atom

    https://twitter.com/HochTwit/status/1174875013708746752

    The act of writing well is like an atom, or the universe. There is matter but it is thinly distributed, with lots of empty space in between. Removing this seeming nothingness won’t help, however. Its presence is necessary for things to remain the way they are and work just as well. Similarly, writing is not simply the deployment of words. There is often the need to stop mid-word and take stock of what you have composed thus far and what the best way to proceed could be, even as you remain mindful of the elegance of the sentence you are currently constructing and its appropriate situation in the overarching narrative. In the end, there will be lots of words to show for your effort but you will have spent even more time thinking about what you were doing and how you were doing it. Good writing, like the internal configuration of a set of protons, neutrons and electrons, is – physically speaking – very little about the labels attached to describe them. And good writing, like the vacuum energy of empty space, acquires its breadth and timelessness because it encompasses a lot of things that one cannot directly see.

  • Moon landings

    Ahead of Chandrayaan 2’s date with the lunar surface on September 7, the following line has been bandied about in the Indian as well as foreign media:

    Only three countries – the US, Russia and China – have attempted and succeeded in soft-landing a payload on the Moon.

    If Chandrayaan 2’s Vikram lander succeeds in its mission, India will be in elite company.

    However, before we rush to attribute this to the technological prowess of the Indian space programme, and the Indian Space Research Organisation (ISRO), I wonder if there is a confounding factor that should give us pause.

    As India is on the cusp of becoming only the fourth country to have attempted and succeeded in soft-landing a payload on the Moon, how many countries have attempted this feat in total?

    That number is only four: the US, Russia, China and Israel. A private Israeli mission named Beresheet attempted and failed to soft-land on the Moon in April this year.

    Without diminishing the magnitude of what the Vikram lander is going to attempt, it seems fair to state that India is also in elite company in terms of having a space programme big and mature enough to aim at soft-landing payloads on the Moon in the first place.

    This in turn prompts the consideration that attempting to soft-land on the Moon is the prerogative of large space programmes – which is evident. However, does this imply that what we’re comparing here is not the specific technological achievement of soft-landing on the Moon but in fact the relative sizes of different national space programmes?

    The Americans and Russians have together tried around 20 soft-landings and succeeded 16 times. On the flip side, these two countries pioneered the technologies required to achieve this feat at an accelerated pace during the Cold War space race, so perhaps an adjustment must be made for the failure rate.

    Either way, ISRO’s prospective feat would have been indisputably impressive if, say, a dozen countries had attempted to soft-land on the Moon and failed. But that is not the case, so how can we be sure a lunar soft-landing isn’t simply something that any sufficiently well-equipped space programme can achieve?

    Put differently, can a lunar soft-landing be used as a reliable indicator that a national space programme has simply graduated on a technological scale, from one ‘ability level’ to the next?