Scicomm

  • A tale of vortices, skyrmions, paths and shapes

    There are many types of superconductors. Some of them can be explained by an early theory of superconductivity called Bardeen-Cooper-Schrieffer (BCS) theory.

    In these materials, vibrations in the atomic lattice force the electrons to overcome their mutual repulsion and team up in pairs – if the material’s temperature is below a particular (very low) threshold. These pairs of electrons, called Cooper pairs, have some properties that individual electrons can’t have. One of them is that all Cooper pairs together form an exotic state of matter called a Bose-Einstein condensate, which can flow through the material with much less resistance than individual electrons experience. This is the gist of BCS theory.

    When the Cooper pairs are involved in the transmission of an electric current through the material, the material is an electrical superconductor.

    Some of the properties of the two electrons in each Cooper pair can influence the overall superconductivity itself. One of them is the pair’s relative orbital angular momentum. If the two electrons’ orbital angular momenta are equal in magnitude but point in opposite directions, the pair’s relative orbital angular momentum is 0. Such materials are called s-wave superconductors.

    Sometimes, in s-wave superconductors, some of the electric current – or supercurrent – starts flowing in a vortex within the material. If these vortices can be coupled with a magnetic structure called a skyrmion, physicists believe they can give rise to some new behaviour previously not seen in materials, some of them with important applications in quantum computing. Coupling here implies that a change in the properties of the vortex should induce changes in the skyrmion, and vice versa.

    However, physicists have had a tough time creating a vortex-skyrmion coupling that they can control. As Gustav Bihlmayer, a staff scientist at the Jülich Research Centre, Germany, wrote for APS Physics, “experimental studies of these systems are still rare. Both parts” of the structures bearing these features “must stay within specific ranges of temperature and magnetic-field strength to realise the desired … phase, and the length scales of skyrmions and vortices must be similar in order to study their coupling.”

    In a new paper, a research team from Nanyang Technological University, Singapore, has reported that they have achieved just such a coupling: they created a skyrmion in a chiral magnet and used it to induce the formation of a supercurrent vortex in an s-wave superconductor. In their observations, they found this coupling to be stable and controllable – important attributes to have if the setup is to find practical application.

    A chiral magnet is a material whose internal magnetic field “typically” has a spiral or swirling pattern. A supercurrent vortex in an electrical superconductor is analogous to a skyrmion in a chiral magnet; a skyrmion is a “knot of twisting magnetic field lines” (source).

    The researchers sandwiched an s-wave superconductor and a chiral magnet together. When the magnetic field of a skyrmion in the chiral magnet interacted with the superconductor at the interface, it induced a spin-polarised supercurrent (i.e. one in which the participating electrons’ spins are aligned along a certain direction). This phenomenon is called the Rashba-Edelstein effect, and it essentially converts electric charge to electron spin and vice versa. To do so, the effect requires the two materials to be in contact and depends, among other things, on properties of the skyrmion’s magnetic field.

    There’s another mechanism of interaction in which the chiral magnet and the superconductor don’t have to be in touch, and this is the one the researchers recreated. They preferred this mechanism, called stray-field coupling, to demonstrate a skyrmion-vortex system for a variety of practical reasons. For example, the chiral magnet is placed in an external magnetic field during the experiment. Taking the Rashba-Edelstein route would mean that, to achieve “stable skyrmions at low temperatures in thin films”, the field would need to be stronger than 1 T. (Earth’s magnetic field measures 25-65 µT.) Such a field could damage the s-wave superconductor.

    For the stray-field coupling mechanism, the researchers inserted an insulator between the chiral magnet and the superconductor. Then, when they applied a small magnetic field, Bihlmayer wrote, the field “nucleated” skyrmions in the structure. “Stray magnetic fields from the skyrmions [then] induced vortices in the [superconducting] film, which were observed with scanning tunnelling spectroscopy.”


    Experiments like this one reside at the cutting edge of modern condensed-matter physics. Much of their complexity lies in closely controlling the conditions in which different quantum effects play out, in using similarly advanced tools and techniques to understand what could be going on inside the materials, and in picking the right combination of materials to begin with.

    For example, the heterostructure the physicists used to manifest the stray-field coupling mechanism had the following composition, from top to bottom:

    • Platinum, 2 nm (layer thickness)
    • Niobium, 25 nm
    • Magnesium oxide, 5 nm
    • Platinum, 2 nm

    The next four layers are repeated 10 times in this order:

    • Platinum, 1 nm
    • Cobalt, 0.5 nm
    • Iron, 0.5 nm
    • Iridium, 1 nm

    Back to the overall stack:

    • Platinum, 10 nm
    • Tantalum, 2 nm
    • Silicon dioxide (substrate)

    The platinum and niobium layers at the top make up the superconducting side, the magnesium oxide is the insulator, and the rest (except the substrate) make up the chiral magnet.
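
    If it helps to see the stack as data: here’s a minimal Python sketch that tallies the layers listed above (the names and thicknesses are from the list; the bookkeeping is mine):

    ```python
    # Layer stack of the heterostructure, top to bottom, thicknesses in nm.
    top = [("Pt", 2), ("Nb", 25), ("MgO", 5), ("Pt", 2)]
    repeat_unit = [("Pt", 1), ("Co", 0.5), ("Fe", 0.5), ("Ir", 1)]  # repeated 10x
    bottom = [("Pt", 10), ("Ta", 2)]  # SiO2 substrate below, not counted

    stack = top + repeat_unit * 10 + bottom
    total_nm = sum(thickness for _, thickness in stack)
    print(f"{len(stack)} layers, {total_nm} nm thick (excluding substrate)")
    # -> 46 layers, 76.0 nm thick (excluding substrate)
    ```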

    It’s possible to build a stack like this through trial and error, with no deeper understanding dictating the choice of materials. But when the universe of possibilities – of elements, compounds and alloys, their shapes and dimensions, and the ambient conditions in which they interact – is so vast, the exercise could take many decades. Yet here we are, at a time when scientists have explored various properties of materials and their interactions, and are able to engineer novel behaviours into existence, blurring the line between discovery and invention. Even in the absence of applications, such observations are nothing short of fascinating.

    Applications aren’t wanting, however.


    A quasiparticle is a packet of energy that behaves like a particle in a specific context even though it isn’t actually one. The proton, for example, is sometimes described this way because it’s really a clump of smaller particles (quarks and gluons) that together behave in a fixed, predictable way. A phonon is a quasiparticle that represents some vibrational (or sound) energy being transmitted through a material. A magnon is a quasiparticle that represents some magnetic energy being transmitted through a material.

    On the other hand, an electron is said to be a particle, not a quasiparticle – as are neutrinos, photons, Higgs bosons, etc.

    Now and then physicists abstract packets of energy as particles in order to simplify their calculations.

    (Aside: I’m aware of the blurred line between particles and quasiparticles. For a technical but – if you’re prepared to Google a few things – fascinating interview with condensed-matter physicist Vijay Shenoy on this topic, see here.)

    We understand how these quasiparticles behave in three-dimensional space – the space we ourselves occupy. Their properties are likely to change if we study them in lower or higher dimensions. (Even if directly studying them in such conditions is hard, we know their behaviour will change because the theory describing it predicts as much.) But there is one kind of quasiparticle that exists in two dimensions and differs from the others in a strange way: the anyon.

    Say you have two electrons in an atom orbiting the nucleus. If you exchanged their positions with each other, the measurable properties of the atom would stay the same. If you swapped the electrons once more, bringing them back to their original positions, the properties would still remain unchanged. However, if you switched the positions of two anyons in a quantum system, something about the system would change. More broadly, if you started with a bunch of anyons in a system and successively exchanged their positions until they had a specific final arrangement, the system’s properties would have changed differently depending on the sequence of exchanges.

    This is called path dependence, and anyons that exhibit it are, in technical language, called non-Abelian quasiparticles. They’re interesting for many reasons, but one application stands out. Quantum computers are devices that use the quantum mechanical properties of particles, or quasiparticles, to execute logical decisions (the same way ‘classical’ computers use semiconductors). Anyons’ path dependence is useful here. Arranging anyons in one sequence to achieve a final arrangement can be mapped to one piece of information (e.g. 1), and arranging them by a different sequence to achieve the same final arrangement can be mapped to different information (e.g. 0). This way, what information can be encoded depends on the availability of different paths to a common final state.
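
    A toy example might make the path dependence concrete. In the sketch below, two ‘exchange’ operations are represented as non-commuting matrices – purely illustrative numbers, not the actual braid matrices of any real anyon system – so applying them in different orders leaves the system in different final states:

    ```python
    import numpy as np

    # Toy 'exchange' operations as 2x2 unitary matrices (illustrative only).
    # For non-Abelian anyons, exchanging particles applies such matrices to
    # the system's state; non-commuting matrices = path dependence.
    theta = np.pi / 4
    U1 = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])  # a rotation
    U2 = np.array([[1, 0],
                   [0, 1j]], dtype=complex)           # a phase gate

    state = np.array([1, 0], dtype=complex)

    path_a = U2 @ (U1 @ state)  # exchange 1 first, then exchange 2
    path_b = U1 @ (U2 @ state)  # exchange 2 first, then exchange 1

    print(np.allclose(path_a, path_b))  # False: the order of exchanges matters
    ```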

    In addition, an important issue with existing quantum computers is that they are too fragile: even a slight interaction with the environment can cause the devices to malfunction. Using anyons for the qubits could overcome this problem because the information stored doesn’t depend on the qubits’ instantaneous states but on the paths they have taken to get there. So as long as the paths have been executed properly, environmental interactions that may disturb the anyons’ final states won’t matter.

    However, creating such anyons isn’t easy.

    Now, recall that s-wave superconductors are characterised by the relative orbital angular momentum of electrons in the Cooper pairs being 0 (i.e. equal but in opposite directions). In some other materials, it’s possible that the relative value is 1. These are the p-wave superconductors. And at the centre of a supercurrent vortex in a p-wave superconductor, physicists expect to find non-Abelian anyons.

    So the ability to create and manipulate these vortices in superconductors, as well as, more broadly, explore and understand how magnet-superconductor heterostructures work, is bound to be handy.


    The Nanyang team’s paper calls the vortices and skyrmions “topological excitations”. An ‘excitation’ here is an accumulation of energy in a system over and above what the system has in its ground state. Ergo, it’s excited. A topological excitation refers to energy manifested in changes to the system’s topology.

    On this subject, one of my favourite bits of science is topological phase transitions.

    I usually don’t quote from Wikipedia but communicating condensed-matter physics is exacting. According to Wikipedia, “topology is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling and bending”. For example, no matter how much you squeeze or stretch a donut (without breaking it), it’s going to be a ring with one hole. Going one step further, your coffee mug and a donut are topologically similar: they’re both objects with one hole.

    I also don’t like the Nobel Prizes but some of the research that they spotlight is nonetheless awe-inspiring. In 2016, the prize was awarded to Duncan Haldane, John Kosterlitz and David Thouless for “theoretical discoveries of topological phase transitions and topological phases of matter”.

    Quoting myself from 2016:

    There are four popularly known phases of matter: plasma, gas, liquid and solid. If you cooled plasma, its phase would transit to that of a gas; if you cooled gases, you’d get a liquid; if you cooled liquids, you’d get a solid. If you kept cooling a solid until you were almost at absolute zero, you’d find substances behaving strangely because, suddenly, quantum mechanical effects show up. These phases of matter are broadly called quantum phases. And their phase transitions are different from when plasma becomes a gas, a gas becomes a liquid, and so on.

    A Kosterlitz-Thouless transition describes a type of quantum phase transition. A substance in the quantum phase, like all substances, tries to possess as low energy as possible. When it gains some extra energy, it sheds it. And how it sheds it depends on what the laws of physics allow. Kosterlitz and Thouless found that, at times, the surface of a flat quantum phase – like the surface of liquid helium – develops vortices, akin to a flattened tornado. These vortices always formed in pairs, so the surface always had an even number of vortices. And at very low temperatures, the vortices were always tightly coupled: they remained close to each other even when they moved across the surface.

    The bigger discovery came next. When Kosterlitz and Thouless raised the temperature of the surface, the vortices moved apart and moved around freely, as if they no longer belonged to each other. In terms of thermodynamics alone, the vortices being alone or together shouldn’t have depended on the temperature, so something else was at play. The duo had found a kind of phase transition – because it did involve a change in temperature – that didn’t change the substance itself but only effected a topological shift in how it behaved. In other words, the substance was able to shed energy by coupling the vortices.

    Reality is so wonderfully weird. It’s also curious that some concepts that seemed significant when I was learning science in school (like invention versus discovery) and in college (like particle versus quasiparticle) – concepts that seemed meaningful and necessary to understand what was really going on – don’t really matter in the larger scheme of things.

  • Physicists produce video of time crystal in action 😱

    Have you heard of time crystals?

    A crystal is any object whose atoms are arranged in a fixed pattern in space, with the pattern repeating itself. So what we typically know to be crystals are really space crystals. We didn’t have to bother with the prefix because space crystals were the only kind of crystals we knew until time crystals came along.

    Time crystals are crystalline objects whose atoms exhibit behaviour that repeats itself in time, as periodic events. The atoms of a time crystal spin in a fixed and coordinated pattern, changing direction at fixed intervals.

    Physicists sometimes prefer to quantify these spin patterns as quasiparticles to simplify their calculations. Quasiparticles are not particles per se. To understand what they are, consider a popular one called phonons. Say you strike a metal spoon on the table, producing a mild ringing sound. This sound is the result of sound waves propagating through the metal’s grid of atoms, carrying vibrational energy. You could also understand each wave to be a particle instead, carrying the same amount of energy that each sound wave carries. These quasiparticles are called phonons.

    In the same way, patterns of spinning charged particles also carry some energy. Each electron in an atom, for example, generates a tiny magnetic field around itself as it spins. The directions in which the electrons in a material spin collectively determine many properties of the material’s macroscopic magnetic field. Sometimes, shifts in some electrons’ magnetic fields could set off a disturbance in the macroscopic field – like waves of magnetic energy rippling out. You could quantify these ‘spin waves’ in the form of quasiparticles called magnons. Note that magnons quantify spin waves; the waves themselves can be from electrons, ions or other charged particles.

    As quasiparticles, magnons behave like a class of particles called bosons, which include nature’s force-carriers. Photons are bosons that mediate the electromagnetic force; W and Z bosons mediate the weak nuclear force, responsible for radioactivity; gluons mediate the strong nuclear force, the source of the energy released by nuclear weapons; scientists have hypothesised the existence of gravitons, for gravity, but haven’t found them yet. Like all bosons, magnons don’t obey Pauli’s exclusion principle, and they can be made to form exotic states of matter like superfluids and Bose-Einstein condensates.

    Other quasiparticles include excitons and polarons (useful in the study of electronic circuits), plasmons (of plasma) and polaritons (of light-matter interactions).

    Physicist Frank Wilczek proposed the existence of time crystals in 2012. One reason time crystals are interesting to physicists is that they break time-translation symmetry in their ground state.

    This statement has two important parts. The first concerns time-translation symmetry-breaking. Scientists assume the laws of physics are the same at all points and in all directions in space – yet we still have objects like crystals, whose atoms are arranged in specific patterns that repeat themselves. Say the atoms of a crystal are arranged in a hexagonal pattern. If you kept the position of one atom fixed and rotated the atomic lattice around it, or if you moved to the left or right of that atom, in both cases by an arbitrary amount, your view of the lattice would change. This happens because crystals break spatial symmetry. Similarly, time symmetry is broken if an event repeats itself in time – like, say, a magnetic field whose structure changes between two shapes over and over.

    The second part of the statement concerns the (thermodynamic) ground state – the state of any quantum mechanical system when it has its lowest possible energy. (‘Quantum mechanical system’ is a generic term for any system – like a group of electrons – in which quantum mechanical effects have the dominant influence on the system’s state and behaviour. An example of a non-quantum-mechanical system is the Solar System, where gravity dominates.) Wilczek revived interest in time crystals as objects that break time-translation symmetry in their ground states. Put another way, they are quantum mechanical systems whose constituent particles perform a periodic activity without changing the overall energy of the system.

    The advent of quantum mechanics and relativity theory in the early 20th century alerted physicists to the existence of various symmetries and, through the work of Emmy Noether, their connection to different conservation laws. For example, a system in which the laws of nature are the same throughout history and the future – i.e. one that preserves time-translation symmetry – will also conserve energy. Does this mean time crystals violate the law of conservation of energy? No. The atoms’ or electrons’ spin is not the result of their kinetic energy but is an inherent quantum mechanical property. This energy can’t be used to perform work the way, say, a motor can pump water. The system’s total energy is still conserved.

    Now, physicists from Germany have reported that they have observed a time crystal ‘in action’ – a feat notable on three levels. First, it’s impressive that they created a time crystal at all (even if they are not the first to do so). The researchers passed radio-frequency waves through a strip of nickel-iron alloy a few micrometres wide. According to ScienceAlert, this ‘current’ “produced an oscillating magnetic field on the strip, with magnetic waves travelling onto it from both ends”. As a result, they “stimulated the magnons in the strip, and these moving magnons then condensed into a repeating pattern”.

    Second, while quasiparticles are not actual particles per se, they exhibit some properties of particles. One of them is scattering, like two billiard balls might bounce off each other to go off in different directions at different speeds. Similarly, the researchers created more magnons and scattered them off the magnons involved in the repeating pattern. The post-scatter magnons had a shorter wavelength than they did originally, in line with expectations, and the researchers also found that they could control this wavelength by adjusting the frequency of the stimulating radio waves.

    An ability to control such values often means the process could have an application. The ability to precisely manipulate systems involving the spin of electrons has evolved into a field called spintronics. Like electronics makes use of the electrical properties of subatomic particles, spintronics is expected to leverage spin-related properties and enable ultra-fast hard-drives and other technologies.

    Third, the researchers were able to produce a video showing the magnons moving around. This is remarkable because the thing that makes a time crystal so unique is the result of quantum mechanical processes, which are microscopic in nature. It’s not often that you can observe their effects on the macroscopic scale. The principal reason the researchers were able to achieve this feat is the method they used to create the time crystal.

    Previous efforts to create time crystals have used systems like quantum gases and Bose-Einstein condensates, both of which require sophisticated apparatuses to work with, in ultra-cold conditions, and whose behaviour researchers can track only by carefully measuring their physical and other properties. On the other hand, the current experiment works at room temperature and uses a more ‘straightforward’ setup that is also fairly large-scale – enough to be visible under an X-ray microscope.

    Working this microscope is no small feat, however. Charged particles emit radiation when they’re accelerated along a circular path. An accelerator called BESSY II in Berlin uses this principle to produce X-rays. Then the microscope, called MAXYMUS, focuses the X-rays onto an extremely small spot – a few nanometres wide – and “scans across the sample”, according to its official webpage. A “variety of X-ray detectors”, including a camera, observe how the X-rays interact with the sample to produce the final images. Here’s the resulting video of the time crystal, captured at 40 billion frames per second:

    I asked one of the paper’s coauthors, Joachim Gräfe, a research group leader in the department of modern magnetic systems at the Max Planck Institute for Intelligent Systems, Stuttgart, two follow-up questions. He was kind enough to reply in detail; his answers are reproduced in full below:

    1. A time crystal represents a system that breaks time translation symmetry in its ground state. When you use radio-frequency waves to stimulate the magnons in the nickel-iron alloy, the system is no longer in its ground state – right?

    The ground state debate is the interesting part of the discussion for theoreticians. Our paper is more about the experimental observation and an interaction towards a use case. It is argued that a time crystal cannot be a thermodynamic ground state. However, it is in a ground state in a periodically alternating potential, i.e. a dynamic ground state. The intriguing thing about time crystals is that they are in ground states in these periodically alternating potentials, but they do not/will not necessarily have the same periodicity as the alternating potential.

    The condensation of the magnonic time crystal is a ground state of the system in the presence of the RF field (the periodically alternating potential), but it will dissipate through damping when the RF field is switched off. However, even in a system without damping, it would not form without the RF field. It really needs the periodically alternating potential. It is really a requirement to have a dynamic system to have a time crystal. I hope I have not confused you more than before my answer. Time crystals are quite mind boggling. 😵🤯

    2. Previous experiments to observe time crystals in action have used sophisticated systems like quantum gases and Bose-Einstein condensates (BECs). Your experiment’s setup is a lot more straightforward, in a manner of speaking. Why do you think previous research teams didn’t just use your setup? Or does your setup have any particular difficulty that you overcame in the course of your study?

    Interesting question. With the benefit of hindsight: our time crystal is quite obvious, why didn’t anybody else do it? Magnons only recently have emerged … as a sandbox for bosonic quantum effects (indeed, you can show BEC and superfluidity for magnons as well). So it is quite straightforward to turn towards magnons as bosons for these studies. However, our X-ray microscope (at the synchrotron light source) was probably the only instrument at the time to have the required spatial and temporal resolution with magnetic contrast to shoot a video of the space-time crystal. Most other magnon detection methods (in the lab) are indirect and don’t yield such a nice video.

    On the other hand, I believe that the interesting thing about our paper is not that it was incredibly difficult to observe the space time crystal, but that it is rather simple to create one. Apparently, you can easily create a large (magnonic) space time crystal at room temperature and do something with it. Showing that it is easy to create a space time crystal opens this effect up for technological exploitation.

  • Anti-softening science for the state

    The group of ministers (GoM) report on “government communication” has recommended that the government promote “soft topics” in the media like “yoga” and “tigers”. We can only speculate what this means, and that shouldn’t be hard. The overall spirit of the document is insecurity and paranoia, manifested as fantasies of reining in the country’s independent media into doing the government’s bidding. The promotion of “soft” stories is in line with this aspiration – “soft” here can only mean stories that don’t criticise the government, its actions or policies, and be like ‘harmless entertainment’ for a politically inert audience. It’s also no coincidence that the two examples on offer of such stories skirt the edges of health and environmental journalism; other examples are sure to include reports of scientific discoveries.

    Science is closely related to the Indian state in many ways. The current government in particular, in power since 2014, has been promoting application-oriented R&D (a bias especially visible in budgetary allocations); encouraging ill-prepared research facilities to self-finance; privileging certain private interests (esp. the Reliance and Adani groups) vis-à-vis natural resources like coal, coastal zones and spectrum allocations; pillaging India’s ecological commons for industrialisation; promoting pseudoscience (which further disempowers those closer to society’s margins); interfering at universities by appointing vice-chancellors friendly to the ruling party (and if that doesn’t work, jailing students on ridiculous charges that include dissent); curtailing academic freedom; and hounding after scientists and institutions that threaten its preferred narratives.

    With this in mind, it’s important for science journalism outlets and science journalists to not become complicit – inadvertently or otherwise – in the state project to “soften” science, and to start reporting, if they aren’t already, on issues with a closer eye on their repercussions for the wider society. The idea that science journalism can or should be objective the way science is, is nonsensical, because the idea that science is an objective enterprise is itself nonsensical. The scientific method is a technique to obtain information about the natural universe while steadily subtracting the influence of human biases and other limitations. However, what scientists choose to study, how they design their studies and what is ultimately construed to be knowledge are all deeply human enterprises.

    On top of this, science journalism is driven by journalists’ sense of good and bad: we write favourably about the former and argue against the latter. We write about some telescope unravelling a long-standing cosmogonic problem and also publish an article calling out homeopathy’s bullshit. We write about a scientific paper that uses ingenious methods to prove its point and also call out Indian academia as an unsafe space for queer-trans people.

    Some have advanced a defence that simply focusing on “good science” can inculcate in the audience a sense of what is “worthy” and “desirable” while denying “bad science” the platform and publicity it seeks. This is objectionable on two counts.

    First, who decides what is “worthy”? For example, some scientists, especially in the ‘senior’ cadre and the more influential and/or powerful for it, make this choice by deferring to the wisdom of scientific journals, chosen according to their impact factors, and what the journals have deemed worthy of publishing. But abiding by this heuristic only means we continue to participate in and extend the lifetime of the existing ways of knowledge production that privilege white scientists, male scientists and richer scientists – and sensational positive results on topics that the scientists staffing the journals’ editorial boards would like to focus on.

    Second, being limited to goodness at a time when badness abounds is bad – or at least severely tone-deaf (but I’m disinclined to be so charitable). Very broadly, that science is inherently amoral is a pithy factoid by this point. There have been far too many incidents in history for anyone to still be able to overlook, in good faith, the fact that science’s prescriptions unguided by human morals and values are quite likely to lead to humanitarian disasters. We may even be living through one such. Scientists’ rapid and successful development of new vaccines against a new pathogen was followed by a global rush to acquire enough doses. But the world’s industrial and economic powers have ensured that the strongest among them have enough to vaccinate their entire populations more than once, have blocked petitions at global fora to loosen patents on these vaccines to expand manufacturing and distribution, have forced desperate countries to purchase doses at prices higher than those for developed blocs like the EU, and have allowed corporate behemoths to make monumental profits even as they force third-world nations to pledge sovereign assets to secure supplies. It’s fallacious to claim scientific labour makes the world a better place when the fruits of such labour must still be filtered, like so much else, through the capitalist sieve.

    There are many questions for the science journalist to consider here: why have some communities in certain countries been affected more than others? Why is there so little data on the vaccines’ consequences for pregnant women? Do we know enough to discuss the pandemic’s effects on women? Why, at a time when so many scientists and engineers were working to design new ventilators, was there no unified standard to ensure usability? If the world has demonstrated that it’s possible to design, test, manufacture and administer vaccines against a new virus in such a short time, why have we been waiting so long for effective defences against neglected tropical diseases? How do the racial, gender and ethnic identities of clinical-trial participants affect trial outcomes? Is it ethical for countries that hosted vaccine clinical trials to get the first doses? Should we compulsorily prohibit patents on drugs, therapies and devices important to ending pandemics? If so, what might the consequences be for drug development? And what good is a vaccine if we can’t also ensure all the world’s 7.x billion people can be vaccinated simultaneously?

    The pandemic isn’t a particularly ‘easy’ example either. For example, if the government promises to develop new supercomputers, who can use them and what problems will they be used to solve? How can we improve the quality and quantity of research conducted at institutes funded by state governments? Why do so many scientists at public universities plagiarise scientific papers? On what basis are the winners of the S.S. Bhatnagar Award chosen? Should we formally do away with subscription-funded scientific journals in favour of open-access publishing, overlay journals and post-publication peer-review? Is methane really a “clean fuel” even though its extraction and transportation will impose a considerable dirty cost? Why can’t we have more GM foods in the market even though the science is ‘good’? Is it worthwhile to invest Rs 10,000 crore in a human spaceflight programme that lacks long-term vision? And so forth.

    Simply focusing on “good science” at our present time is not enough. I also reject the argument that it’s not for science journalists to protect or defend science simply because science, whatever it’s interpreted to mean, is not the preserve of scientists. As an enterprise rooted in its famous method, science is a tool of empowerment: it encourages discovery and deliberation; I’m not sure if it’s fair to say it encourages dissent as well but there is evidence that science can accommodate it without resorting to violence and subjugation.

    It’s not for nothing that I’m more comfortable holding up an aspirin tablet for someone with a headache than a jar of leaves from the Patanjali Ayurved stable: being able to know how and why something works is power in the same way knowing how the pharmaceutical industry manipulates markets, how to file an RTI application, what makes an FIR valid or invalid, what the election commission’s model code of conduct stipulates or what kind of land a mall can be built on is power. All of it represents control, especially the ability to say ‘no’ and mean it.

    This is ultimately what the GoM report fantasises about – and what the present government desires: the annulment of individual and institutional resistance, one subset of which is the neutralisation of science’s ability to provoke questions about atoms and black holes as much as about the circumstances in which scientists study them, about the nature, utility and purpose of knowledge, and the relationships between science, capital and the state.


    Addendum

    In January 2020, the Office of the Principal Scientific Adviser (PSA) to the Government of India organised a meeting with science journalists and communicators from around the country to discuss what the two parties could do for each other. We journalists and communicators aired a lot of grievances during the meeting, as well as suggestions on fixing long-standing and/or particularly thorny problems (some notes here).

    In light of the government’s renewed attention on curbing press freedom, and ludicrous suggestions in the report – such as one by S. Gurumurthy that the news should be a “mixture of truth and untruth” – I’m not sure where that leaves the PSA’s plans for future consultation, nor, considering parts of the report seem designed to manufacture consent, whether good-faith consultation will be possible going ahead. I can only hope that members of this community at least keep the faith.

  • The commentariot

    The following post is an orange flag – a quieter alarm raised in anticipation of something worse that hasn’t transpired yet but is likely in the offing. Earlier today, at the end of a call with a scientist for a story, the scientist implied that my job – as science journalist – required nothing of me but to be a commentator, whereas his required him to be a ‘maker’ and that that was superior. At the outset, this is offensive because if you don’t think journalism requires both creative and non-creative work to conduct ethically, you either don’t know what journalism is or you’re taking its moving parts for granted.

    But the scientist’s comment merited an orange flag, I thought, because it’s the fourth time I’ve heard something like that in the last three months – and is a point of view I can’t help but think is attached in some way to our present national government and the political climate it has engendered. (All four scientists worked for government-funded institutes but I say this only because of the slant of their own views.)

    The Modi government is, among many other things, a cult of personality centred on the prime minister and his fabled habit of getting things done, even if they’re undemocratic or just unconstitutional. Many of the government’s reforms today are often cast as being in stark contrast to the Congress’s rule of the country – that “Modi did what no other prime minister had dared.” The illegitimacy of these boasts aside, the government and its supporters are obviously proud of their ability to act swiftly and have rendered inaction in any form a sin (to the point where this government has also been notorious for repackaging previous governments’ schemes as its own).

    They have also cast many other activities as sinful because their practice is much too tempered for their taste, or because they believe the outcomes “don’t go far enough”. Journalism is one of them. A conversation a few months ago with a person who was both scientist and government official alerted me to how real this sentiment might be in government circles, when they said: “I have real work unlike you and I will get back to you with a concrete answer in two or three days.” The other scientists also said something similar. The right-wing has often cast the mainstream Indian journalism establishment as elite, classist, corrupt and apologist, and the accusation that it doesn’t do any real work – “certainly not to the nation’s benefit” – simply extends this view.

    But for scientists to denigrate the work of science journalists, especially since their training should have alerted them to different ways in which science is both good and hard, is more than dispiriting. It’s a sign that “journalists don’t do good work” is more than just an ideological spearpoint used to undermine adversarial journalism, that it is something at least parts of the establishment believe to be true. And it also suggests that the stories we publish are being read as nothing more than the babble of a lazy commentariot.

  • The clocks that used atoms and black holes to stay in sync

    You’re familiar with clocks. There’s probably one if you look up just a little, at the upper corner of your laptop or smartphone screen, showing you what time of day it is, allowing you to quickly grasp the number of daytime or nighttime hours, depending on your needs.

    There are some other clocks that are less concerned with displaying ‘clock time’ and more with measuring the passage of time itself. These devices are useful for applications designed to understand this dimension in a deeper sense. The usefulness of these clocks also depends more strongly on the timekeeping techniques they employ.

    For example, consider the caesium atomic clock. Like all clocks, it is a combination of three things: an oscillator, a resonator and a detector. The oscillator is a finely tuned laser that shines on an ultra-cold gas of caesium atoms in a series of pulses. If the laser has the right frequency, an electron in a caesium atom will absorb a corresponding photon and jump to a higher energy level, before jumping back to its original level by emitting radiation of exactly 9,192,631,770 Hz. This radiation is the resonator.

    The detector will be looking for radiation of this frequency – and the moment it has counted 9,192,631,770 waves (crest to crest), it will signal that one second has passed. This is also why, technically, a caesium clock can be used to measure out about a nine-billionth of a second.

    Scientists need even more precise clocks, clocks that use extremely stable resonators and, increasingly of late, clocks that combine both advantages. This is why scientists developed optical atomic clocks. The caesium atomic clock has a resonant frequency of 9,192,631,770 Hz, which lies in the microwave part of the electromagnetic spectrum. Optical atomic clocks use resonators whose frequencies lie in the optical part, which are much higher.

    For example, physicists at the Inter-University Centre for Astronomy and Astrophysics and the Indian Institute of Science Education and Research, both in Pune, are building clocks that use ytterbium and strontium ions, respectively, with resonator frequencies of 642,121,496,772,645 Hz and 429,228,066,418,009 Hz. So technically, these clocks can measure out about a 642-trillionth and a 429-trillionth of a second, respectively – allowing scientists ultra-precise insights into how long very short-lived events really last or how closely theoretical predictions and experimental observations match up.
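
    The arithmetic behind these ‘measure out’ claims is just the reciprocal of each resonator frequency – a quick sketch, using the frequencies quoted above:

    ```python
    # Period (seconds per cycle) = 1 / resonator frequency (Hz)
    clocks = {
        "caesium (microwave)": 9_192_631_770,
        "ytterbium (optical)": 642_121_496_772_645,
        "strontium (optical)": 429_228_066_418_009,
    }
    for name, freq_hz in clocks.items():
        print(f"{name}: one cycle lasts {1 / freq_hz:.3e} s")
    # caesium:   ~1.088e-10 s (about a nine-billionth of a second)
    # ytterbium: ~1.557e-15 s
    # strontium: ~2.330e-15 s
    ```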

    In fact, because we haven’t managed to measure the kilogram, the metre or any other SI unit to a comparable precision, time is currently the most precisely measured physical quantity.


    Sometimes, scientists need to use multiple atomic clocks in the course of an experiment or to ascertain how synchronised they are. This is not a trivial exercise.

    For example, say you have two clocks whose performance you need to compare. If they are simple digital clocks, you could check how precisely each one of them records the amount of time between, say, astronomical dawn and astronomical dusk (the moments when the Sun is 18º below the horizon before sunrise and after sunset, respectively). Here, you take the act of looking at each clock face for granted. If the clocks are right in front of you, light travels nearly instantaneously between your eye and the display. And because the clocks tick one second at a time, you can repeat the task of checking their synchronisation as often as you need to just by looking.

    What do you do if you need to know how well two optical atomic clocks are matched up continuously and if they are separated by, say, a thousand kilometres? Scientists in Europe demonstrated one solution to this problem in 2015.

    They had optical clocks in Paris and Braunschweig connected by fibre optic cables to a processing station in Strasbourg. The resonant frequency of each clock was encoded in a ‘transfer laser’ that was then beamed through the cables to Strasbourg, where a detector measured the two laser pulses to decode the relative beat of each clock in real-time. The total length of the fibre optic cables in this case was 1,415 km. With this “all-optical” setup plus signal-processing techniques, the research team reported a precision of three parts in 10¹⁹ after an averaging time of just 1,000 seconds – a cutting-edge feat.

    But scientists are likely to need one step better, if only because they also anticipate that the advent of optical atomic clocks at facilities around the world is likely to lead to a redefinition of the SI unit of time. The second’s current definition – “the time duration of 9,192,631,770 periods of the radiation” emitted by electrons transitioning between two particular energy levels of a caesium-133 atom – originated in 1967, when microwave atomic clocks were the state of the art.

    Today, optical atomic clocks have this honour – and because they are more stable and use a higher resonator frequency than their microwave counterparts, it only makes sense to update the definition of a second. When this happens, optical clocks around the world will have to speak to each other constantly to make sure what each of them is measuring to be one second is the same everywhere.

    Some of these clocks will be a few hundred kilometres apart, and others a lot more. In fact, scientists have figured it would be useful to have a way for two optical atomic clocks located on different continents to be able to work with each other. This represents the current version of the coordination problem, and scientists in Europe and Japan recently demonstrated a solution. It involves astronomy, because astronomy has a similar problem.


    Everything in the universe is constantly in motion, which means telling the position of one moving object from another – like that of Venus from Earth – is bound to be more complicated from the start than knowing where your friend lives in a different city.

    But astronomers have still figured out a way to establish a fixed reference frame that provides useful information about the location of different cosmic objects through space and time. They call it the International Celestial Reference Frame (ICRF). Its centre is located at the barycentre of the Solar System – the point around which all the planets in the Solar System orbit. Each of its three axes points in the direction of groups of objects called defining sources.

    Many of these objects are quasars. ‘Quasar’ is short for ‘quasi-stellar radio source’: it’s the name of the region at the centre of a galaxy where a supermassive black hole is surrounded by a highly energised disk of gas and dust. Quasars are, as such, extremely bright. Astronomers spotted the first of them because they showed up in radio-telescope data as previously unknown star-like sources of radio waves. Because each galaxy can technically have only one quasar, the number of quasars in the sky is not very high (relatively speaking), and most quasars are located at such great distances that the radio waves they emit become very weak by the time they reach Earth’s radio telescopes.

    So on Earth, physicists either use very powerful telescopes to detect them or a collection of telescopes that work together using a technique called very-long baseline interferometry (VLBI). The idea is elegant but the execution is complicated.

    Say some process in the accretion disk around the black hole at the Milky Way’s centre emits radio waves into space. These waves propagate through the universe. At some point, after many thousands of years, they reach radio telescopes on Earth. Because the telescopes are located at vastly different locations – in Maharashtra, the Canary Islands and Hawaii, say – they will each detect and measure the radio-wave signals at slightly different points of time. There may also be slight differences in the waves’ characteristics because they are likely to have moved through different forms and densities of matter in their journey through space.

    Computers combine the exact times at which the signals arrive at each telescope and the signals’ physical properties (like frequency, phase, etc.) with a sophisticated technique called cross-correlation to produce a better-resolved picture of the source that emitted them than if they had used data from only one telescope.
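
    Cross-correlation itself is easy to demonstrate. Here’s a minimal sketch with a synthetic signal – the sampling, noise and delay figures are invented, and real VLBI correlators are vastly more sophisticated – that recovers the delay between two stations’ recordings by finding the lag at which their cross-correlation peaks:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # A synthetic 'radio signal' and two noisy recordings of it (invented numbers).
    n, true_delay = 4096, 250                      # delay in samples
    signal = rng.normal(size=n)
    station_a = signal + 0.1 * rng.normal(size=n)                    # telescope A
    station_b = np.roll(signal, true_delay) + 0.1 * rng.normal(size=n)  # telescope B

    # Cross-correlate and find the lag with the strongest correlation.
    corr = np.correlate(station_b, station_a, mode="full")
    lags = np.arange(-n + 1, n)
    estimated_delay = lags[np.argmax(corr)]
    print(estimated_delay)  # 250: station B records the signal 250 samples late
    ```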

    In fact, the resolving power of a radio telescope is proportional to the telescope’s baseline. If scientists are using only one telescope to make an observation, the baseline is equal to the dish’s diameter. But with VLBI radio astronomy, the baseline is equal to the longest distance between two telescopes in the array. This is why this technique is so powerful.
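
    In numbers: the smallest angle a telescope can resolve is roughly the observing wavelength divided by the baseline (θ ≈ λ/B). A quick sketch, with round illustrative figures rather than any mission’s specs:

    ```python
    import math

    def resolution_microarcsec(wavelength_m, baseline_m):
        """Diffraction-limited angular resolution, theta ~ lambda / B,
        converted from radians to microarcseconds."""
        theta_rad = wavelength_m / baseline_m
        return math.degrees(theta_rad) * 3600 * 1e6

    wavelength = 1.3e-3  # 1.3 mm, a typical millimetre-VLBI observing wavelength
    print(resolution_microarcsec(wavelength, 30.0))   # single 30 m dish: ~8.9 million uas
    print(resolution_microarcsec(wavelength, 1.0e7))  # ~10,000 km VLBI baseline: ~27 uas
    ```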

    For example, to capture the first direct image of a black hole – the supermassive black hole at the centre of the galaxy M87, some 55 million lightyears away – astronomers combined an array of eight telescopes located in North America, South America, Hawaii, Europe and the South Pole to form the Event Horizon Telescope. At any given time, the baseline would be determined by two telescopes that could observe the black hole simultaneously. And as Earth rotated, different pairs of telescopes would work together to keep observing the black hole even as their own view of it changed.

    Each telescope would record a signal together with a very precise timestamp, provided by an atomic clock installed at the same facility or nearby, in a hard-drive. Once an observing run ended, all the hard-drives would be shipped to a processing facility, where computers would combine the signal and time data from them to create an image of the source.

    As it happens, the image of the black hole the Event Horizon collaboration released in 2019 could have been available sooner if not for the fact that there are no flights from April to October from the South Pole. So astrophysics also has some coordination problems, but astrophysicists have been able to figure them out thanks to tools like VLBI. Perhaps it’s not surprising then that scientists have thought to use VLBI to solve optical atomic clocks’ coordination problem as well.

    According to a paper published in July 2020, the current version of the ICRF is the third iteration, adopted on January 1, 2019, and uses 4,588 sources. Of these, the positions of exactly 500 sources – including some quasars – are known with “extreme accuracy”. Using this information, the European-Japanese team reversed the purpose of VLBI to serve atomic clocks.

    Using VLBI to measure the positions and features of distant astronomical objects is called VLBI astrometry. Doing the same to measure distances on Earth, like the European-Japanese team has done, is called VLBI geodesy. In the former, astronomers use VLBI to reduce uncertainties about distant sources of radio waves by being as certain as possible about the distance between the telescopes (and other mitigating factors like atmospheric distortion). Flip this: if you are as certain as possible about the distance from Earth to a particular quasar, you can use VLBI to reduce uncertainties about the distance between two atomic clocks instead.

    And the science and technologies we have available today have allowed astronomers to resolve details down to a few billionths of a degree in astrometry – and to a few millimetres in geodesy.

    The European-Japanese team implemented the same idea. The team members used three radio telescopes. Two of them, located in Medicina (Italy) and Koganei (Japan), were small, with dishes of diameter 2.4 m, but with a total baseline of 8,700 km. The Medicina telescope was connected to a ytterbium optical atomic clock in Torino and the Koganei telescope to a strontium optical atomic clock in the same facility.

    First, the Torino clock’s resonator frequency was converted from the optical part of the spectrum to the microwave part using a device called a frequency comb.

    (To quote myself from an older article: “A frequency comb is an advanced laser whose output radiation lies in multiple, evenly-spaced frequencies. This output can be used to convert high-frequency optical signals into more easily countable lower-frequency microwave signals.”)
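
    To sketch that conversion: a comb’s teeth sit at evenly spaced frequencies f(n) = f_ceo + n × f_rep, so an optical frequency can be compared against the nearest tooth and the leftover ‘beat’ counted electronically. The comb parameters below are invented for illustration; only the ytterbium frequency comes from the text:

    ```python
    # Comb teeth: f(n) = f_ceo + n * f_rep (comb parameters invented)
    f_rep = 250e6        # repetition rate, 250 MHz
    f_ceo = 20e6         # carrier-envelope offset, 20 MHz
    f_optical = 642_121_496_772_645  # ytterbium clock frequency (Hz), from the text

    n = round((f_optical - f_ceo) / f_rep)  # index of the nearest comb tooth
    f_tooth = f_ceo + n * f_rep
    f_beat = f_optical - f_tooth            # beat note against that tooth
    print(n, f_beat)  # tooth ~2,568,486; beat ~ -23 MHz, an easily counted microwave signal
    ```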

    This microwave frequency is transferred to a laser that is beamed through a fibre optic cable to the Medicina telescope. Similarly, at Koganei, the strontium clock’s resonator frequency is converted using a frequency comb to a corresponding microwave counterpart. At this point, both telescopes have time readings from optical atomic clocks in the form of more easily counted microwave radiation.

    In the second step, the scientists used VLBI to determine as accurately as possible the time difference between the two telescopes. For this, the telescopes observed a quasar whose position was known to a high degree of accuracy in the ICRF system.

    Since quasars are inherently far away and the two telescopes are quite small (as radio telescopes go), they were able to detect the quasar signal only weakly. To adjust for this, the team connected both telescopes via high-speed internet links to a large 34-m radio telescope in Kashima, also in Japan. This way, the team writes in its paper published in October 2020,

    “the delay observable between the transportable stations can be calculated as the difference of the two delays with the large antenna after applying a small correction factor”.
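
    In other words, if each small station’s delay is measured against the large Kashima antenna, the delay between the two small stations falls out by subtraction. Schematically (the variable names and figures below are mine, purely illustrative):

    ```python
    # Delays of each small station relative to the large Kashima antenna,
    # from observing the same quasar (illustrative numbers, in seconds):
    tau_medicina_kashima = 1.2345678e-2
    tau_koganei_kashima = 3.4567890e-3
    correction = 1.0e-9  # the paper's "small correction factor" (placeholder value)

    # Delay between the two small, transportable stations:
    tau_medicina_koganei = tau_medicina_kashima - tau_koganei_kashima + correction
    print(tau_medicina_koganei)
    ```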

    Once the scientists had a delay figure, they worked backwards to estimate when exactly the two telescopes ought to have recorded their respective signals. From this they could calculate the ratio of the microwave frequencies, and from that the ratio of the two clocks’ optical frequencies – autonomously, in real-time. To quote once again from the team’s paper:

    “One node was installed at NICT headquarters in Koganei (Japan) while the other was transported to the Radio Astronomical Observatory operated by INAF in Medicina (Italy), forming an intercontinental baseline of 8,700 km. Observational data at Medicina and Koganei were stored on hard-disk drives at each station and transferred over high-speed internet networks to the correlation centre in Kashima for analysis. Ten frequency measurements were performed via VLBI between October 2018 and February 2019, and from these we calculated the frequency difference between the reference clocks at the two stations: the local hydrogen masers in Medicina and Koganei. Each session lasted from 28 h to 36 h and included at least 400 scans observing between 16 and 25 radio sources in the ICRF list.”

    This way, they reported the ability to determine the frequency ratio with an uncertainty of 10⁻¹⁶ after ten thousand seconds, and perhaps as low as 10⁻¹⁷ after a longer averaging time of ten days.

    This is very good – but more importantly it’s better than the uncertainty arising from directly comparing the frequencies of two optical atomic clocks by relaying data through satellites. An uncertainty of 10⁻¹⁷ also means physicists can use multiple optical atomic clocks to study extremely slow changes, and potentially be confident about the results down to one part in 10¹⁷ (0.00000000000000001).
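
    Those two numbers are consistent with how frequency comparisons typically average down with time – roughly as 1/√τ when white frequency noise dominates. A quick check, assuming that scaling law (the paper isn’t quoted on it here):

    ```python
    # If instability averages down as 1/sqrt(tau) (white frequency noise),
    # extrapolate the 1e-16 @ 10,000 s figure out to ten days:
    tau_short, sigma_short = 1e4, 1e-16
    tau_long = 10 * 24 * 3600  # ten days in seconds

    sigma_long = sigma_short * (tau_short / tau_long) ** 0.5
    print(f"{sigma_long:.1e}")  # ~1.1e-17, in line with the quoted 1e-17
    ```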


    The architecture of the solution also presents some unique advantages, as well as food for thought.

    The setup effectively requires optical atomic clocks to be connected to small, even portable, radio telescopes as long as these telescopes are then connected to a larger one located somewhere else through a high-speed internet connection. These small instruments “can be operated without the need for a radio transmission licence,” the team writes in the paper, and “where laboratories lack the facilities or sky coverage to house a VLBI station, they can be connected by local optical-fibre links” like the one between Medicina and Torino.

    The scientists have effectively used existing methods to solve a new problem instead of finding an altogether new solution. This isn’t to say new solutions are disfavoured but only that the achievement, apart from being relatively low cost and well-understood, is ingenious, and keeps the use of optical atomic clocks for all the applications they portend from becoming too resource-intensive.

    It’s also fascinating that the clocks participating in this exercise are effectively a group of machines translating between processes playing out at two vastly different scales – one of minuscule electrons emitting tiny amounts of radiation over short distances, and the other of radiation of similar provenance emerging from the extreme neighbourhoods of colossal black holes, travelling for many millennia at the speed of light through the cosmos.

    Perhaps this was to be expected, considering the idea of using a clock is fundamentally a quest for a foothold, a way to translate the order lying at the intersection of seemingly chaotic physical processes, all directed by the laws of nature, to a metronome that the human mind can tick to.

    Featured image: A simulation of a black hole from the 2014 film ‘Interstellar’. Source: YouTube.

  • Reading fog data from INSAT 3DR

    At 7.57 am today, the India Meteorological Department’s Twitter handle posted this lovely image of fog over North India on January 21, as captured by the INSAT 3DR satellite. However, it didn’t bother explaining what the colours meant or how the satellite captured this information. So I dug a little.

    At the bottom right of the image is a useful clue: “Night Microphysics”. According to this paper, the INSAT 3D satellite has an RGB (red, green, blue) imager whose colours are determined by two factors: solar reflectance and brightness temperature. Solar reflectance is a ratio of the amount of solar energy reflected by a surface and the amount of solar energy incident on it. Brightness temperature has to do with the relationship between the temperature of an object and the corresponding brightness of its surface. It is different from temperature as we usually understand it – by touching a glass of hot tea, say – because brightness temperature also has to do with how the tea glass emits the thermal radiation: at different frequencies in different directions.

    INSAT 3D’s ‘day microphysics’ data component combines signals at three wavelengths: 0.5 µm (visible), 1.6 µm (shortwave infrared) and 10.8 µm (thermal infrared). The strength of the visible signal determines the amount of green; the strength of the shortwave infrared signal, the amount of red; and the strength of the thermal infrared signal, the amount of blue. This way, the INSAT 3D computer determines the colour of each point in the image.

    According to the paper:

    The major applications of this colour scheme are an analysis of different cloud types, initial stages of convection, maturing stages of a thunderstorm, identification of snow area and the detection of fires.

    The authors also note that INSAT 3D is useful for imaging snow: while the solar reflectance of snow and of clouds is similar in the visible part of the spectrum, snow absorbs radiation of 1.6 µm strongly. As a result, when the satellite images snow, the red component of the colour scheme becomes very weak.

    The night microphysics scheme is a little more involved. Here, two colours are determined not by a single signal but by the difference between two signals. The computer determines the amount of red according to the difference between two thermal infrared signals, at 12 µm and 10.8 µm. The amount of green varies according to the difference between a thermal infrared and a middle infrared signal: 10.8 µm and 3.9 µm. The amount of blue is not set by a difference but by the strength of a single thermal infrared signal, of wavelength 10.8 µm.
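    To make the scheme concrete, here is a minimal sketch of how such a pixel could be composed in code. The channel-to-colour mappings follow the paper’s description above, but the numerical ranges (and the choice to make colder cloud tops bluer) are illustrative assumptions, not INSAT 3D’s operational calibration; the day scheme would be analogous, with single signals in place of differences.

    ```python
    import numpy as np

    def scale(x, lo, hi):
        """Map a signal linearly onto [0, 1], clipping values outside the range."""
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    def night_microphysics_rgb(bt12, bt108, bt39):
        """Compose one night-microphysics pixel from brightness temperatures (K).

        Red and green encode differences between channels; blue encodes a single
        channel. All (lo, hi) ranges are invented for illustration.
        """
        red = scale(bt12 - bt108, -4.0, 2.0)           # 12 um minus 10.8 um
        green = scale(bt108 - bt39, 0.0, 10.0)         # 10.8 um minus 3.9 um
        blue = 1.0 - scale(bt108, 243.0, 293.0)        # 10.8 um alone; inverted
                                                       # so colder tops look bluer
        return red, green, blue

    # A very cold, high cloud top: strong red and blue, little green.
    print(night_microphysics_rgb(bt12=212.0, bt108=210.0, bt39=208.0))
    ```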

    For example, in the image above, the data indicates three kinds of clouds. (‘K’ denotes the temperature differences in kelvin.) A mature cumulonimbus cell, possibly part of a tropical storm, hangs over West Bengal; it is visible mostly in red, but its blue component indicates it is also very cold. Somewhere north of Delhi, flecks of green dominate, indicating a preponderance of lower clouds. Further north, the sky is dominated by a heavy, high cloud system that encompasses lower clouds as well.

    By combining day and night microphysics data, atmospheric scientists can track moisture droplets of different shapes and temperatures over time, and in turn the formation, evolution and dissipation of cyclones and other weather events.

    For example, taking advantage of the fact that INSAT 3D can produce images based on signals of multiple wavelengths, the authors of the paper have proposed combinations of day and night microphysics data that they say can indicate a thunderstorm impending in the next one to three hours.

    Both INSAT 3D and INSAT 3DR use radiometers to make their spectral measurements. A radiometer is a device that measures various useful properties of radiation, typically by taking advantage of radiation’s interaction with matter (e.g. in the form of temperature or electrical activity).

    Both satellites also carry atmospheric sounders, which measure temperature, humidity and water vapour as functions of height above the ground.

    Scientists combine the radiometer and sounder measurements to understand various atmospheric characteristics.

    According to the INSAT 3DR brochure, its radiometer is an upgraded version of the very-high-resolution radiometer (VHRR) that the Kalpana 1 and INSAT 3A satellites used (launched in 2002 and 2003, respectively).

    The Space Application Centre’s brief for INSAT 3A states: “For meteorological observation, INSAT-3A carries a three channel Very High Resolution Radiometer (VHRR) with 2 km resolution in the visible band and 8 km resolution in thermal infrared and water vapour bands.” The radiometers onboard 3D and 3DR have “significant improvements in spatial resolution, number of spectral channels and functionality”.

    The Kalpana 1 and INSATs 3A, 3D and 3DR satellites aided India’s weather monitoring and warning services with the best technology available in the country at the time, with each new satellite being an improved as well as better-equipped version of the previous one. So while Kalpana 1 had a launch mass of 1,060 kg and carried an early VHRR and a data-relay transponder, INSAT 3DR had a launch mass of 2,211 kg – in 2016 – and carried an upgraded VHRR, a sounder, a data-relay transponder and a search-and-rescue transponder.

    India deactivated Kalpana 1 in September 2017, after 15 years in orbit. The INSAT 3A, 3D and 3DR satellites are currently active in geostationary orbit, stationed at longitudes of 93.5º, 82º and 74º E respectively.

  • The Wire Science is hiring

    Location: Bengaluru or New Delhi

    The Wire Science is looking for a sub-editor to conceptualise, edit and produce high-quality news articles and features in a digital newsroom.

    Requirements

    • Good faculty with the English language
    • Excellent copy-editing skills
    • A strong news sense
    • A strong interest in new scientific findings
    • Know how to read scientific papers
    • Familiarity with concepts related to the scientific method and scientific publishing
    • Familiarity with popular social media platforms and their features
    • Familiarity with the WordPress content management system (CMS)
    • Ability to handle data (obtaining data, sorting and cleaning datasets, using tools like Flourish to visualise)
    • Strong reasoning skills
    • 1-3 years’ work experience
    • Optional: have a background in science or engineering

    Responsibilities

    • Edit articles according to The Wire Science’s requirements, within tight deadlines
    • Make editorial decisions in reasonable time and communicate them constructively
    • Liaise with our reporters and freelancers, and work together to produce stories
    • Work with The Wire Science’s editor to develop ideas for stories
    • Compose short news stories
    • Work on multimedia rendering of published stories (i.e. convert text stories to audio/video stories)
    • Work with the tech and audience engagement teams to help produce and implement features

    Salary will be competitive.

    Dalit, Adivasi, OBC and minority candidates are encouraged to apply.

    If you’re interested, please write to Vasudevan Mukunth at science@thewire.in. Mention you’re applying for The Wire Science sub-editor position in the subject line of your email. In addition to attaching your resumé or CV, please include a short cover letter in the email’s body describing why you think you should be considered.

    If your application is shortlisted, we will contact you for a written test followed by an interview.

  • A Q&A about my job and science journalism

    A couple of weeks ago, some students from a university in South India got in touch to ask a few questions about my job and about science communication. The correspondence was entirely over email, and I’m pasting it in full below (with permission). I’ve edited a few parts in one of two ways – to make myself clearer or to hide sensitive information – and removed one question because its purpose was clarificatory.

    1) What does your role as a science editor look like day to day?

    My day as science editor begins at around 7 am. I start off by catching up on the day’s headlines and other news, especially in the major newspapers and on social media channels. I also handle a part of The Wire Science’s social media presence, so I schedule some posts in the first hour.

    Then, from 8 am onwards, I begin going through the publishing schedule – which is a document I prepare on the previous evening, listing all the articles that writers are expected to file on that day, as well as what I need to edit/publish and in which position on the homepage. At 9.30 am, my colleagues and I get on a conference call to discuss the day’s top stories and to hear from our reporters on which stories they will be pursuing that day (and any stories we might be chasing ourselves). The call lasts for about an hour.

    From 10.30-11 am onwards, I edit articles, reply to emails, commission new articles, discuss potential story ideas with some reporters, scientists and my colleagues, check on the news cycle every now and then, make sure the site is running smoothly, discuss changes or tweaks to be made to the front-end with our tech team, and keep an eye on my finances (how much I’ve commissioned for, who I need to pay, payment deadlines, pending allocations, etc.).

    All of this ends at about 4.30 pm. I close my laptop at that point but I continue to have work until 6 pm or so, mostly in the form of emails and maybe some calls. The last thing I do is prepare the publishing schedule for the next day. Then I shut shop.

    2) With leading global newspapers restructuring the copy desk, what are the changes the Indian newspapers have made in the copy desk after the internet boom?

    I’m not entirely familiar with the most recent changes because I stopped working with a print establishment six years ago. When I was part of the editorial team at The Hindu, the most significant change related to the advent of the internet had less to do with the copy desk per se and more to do with the business model. At least the latter seemed more pressing to me.

    But this said, in my view there is a noticeable difference between how one might write for a newspaper and for the web. So a more efficient copy-editing team has to be able to handle both styles, as well as be able to edit copy to optimise for audience engagement and readability both online and offline.

    3) Indian publications are infamous for mistakes in the copy. Is this a result of competition for breaking news or a lack of knack for editing?

    This is a question I have been asking myself since I started working. I think a part of the answer you’re looking for lies in the first statement of your question. Indian copy-editors are “infamous for mistakes” – but mistakes according to whom?

    The English language is not homegrown in India; it arrived here in different ways. British colonists brought English to India, so it took root as the language of administration. English is the de facto language worldwide for the conduct of science, so scientists have to learn it. Similarly, there are other ways in which the use of English has been rendered useful, important and necessary. English wasn’t all these things in and of itself, not without its colonial underpinnings.

    So today, in India, English is – among other things – the language you learn to be employable, especially with MNCs and the like. And because of its historical relationships, English is taught only in certain schools, schools that typically have mostly students from upper-caste/upper-class families. English is also spoken only by certain groups of people who may wish to guard it as a class symbol, etc. I’m speaking very broadly here. My point is that English is reserved typically for people who can afford it, both financially and socio-culturally. Not everyone speaks ‘good’ English (as defined by one particular lexicon or whatever), nor can they be expected to.

    So what you may see as mistakes in the copy may just be the product of people not being fluent in English, and as a result composing sentences in ways other than you might. India has a contested relationship with English, and that should only be expected to show up at the level of newsrooms as well.

    However, if your question had to do with carelessness among copy-editors – I don’t know if that is a very general problem (nor do I know what the issues might be in a newsroom publishing in an Indian language). Yes, in many establishments, the management doesn’t pay as much attention to the quality of writing as it should, perhaps in an effort to cut costs. And in such cases, there is a significant quality cost.

    But again, we should ask ourselves whom that affects. If a poorly edited article is impossible to read, or uses words and ideas carelessly, or twists facts, that is just bad. But if a poorly composed article is able to get its points across without misrepresenting anyone, whom does that affect? No one, in my opinion, so that is okay. (It could also be the case that the person whose work you’re editing sees the way they write as a political act of sorts, and if you think such an issue might be in play, it becomes important to discuss it with them.)

    Of course, the matter of getting one’s point across is very subjective, and as a news organisation we must ensure the article is edited to the extent that there can be no confusion whatsoever – and edited that much more carefully if it’s about sensitive issues, like the results of a scientific study. And at the same time we must also stick to a word limit and think about audience engagement.

    My job as the editor is to ensure that people are understood, but in order to help them be understood better and better, I must be aware of my own privileges and keep subtracting them from the editorial equation (in my personal case: my proficiency with the English language, which includes many Americanisms and Britishisms). I can’t impose my voice on my writers in the name of helping them. So there is a fine line here that editors need to tread carefully.

    4) What are the key points that a science editor should keep in mind while dealing with copy?

    Aside from the points I raised in my previous answer, there are some issues that are specific to being a good science editor. I don’t claim to be good (that is for others to say) – but based on what I have seen in the pages of other publications, I would only say that not every editor can be a science editor without some specific training first. This is because there are some things that are specific to science as an enterprise, as a social affair, that are not immediately apparent to people who don’t have a background in science.

    For example, the most common issue I see is in the way scientific papers are reported – as if they are the last word on that topic. Many people, including many journalists, seem to think that if a scientific study has found coffee cures cancer, then it must be that coffee cures cancer, period. But every scientific paper is limited by the context in which the experiment was conducted, by the limits of what we already know, etc.

    I have heard some people define science as a pursuit of the truth, but in reality it’s a sort of opposite – science is a way to subtract uncertainty. Imagine shining a torch around a room as you look for something, except the torch can only find things that you don’t want, so you can throw them away. Then you turn on the lights. Papers are frequently superseded and/or updated to yield new results. This seldom makes the previous paper fraudulent or even wrong in any blameworthy sense; it’s just the way science works. And this perspective on science can help you think through what a science editor’s job is as well.

    Another thing that’s important to know is that science progresses in incremental fashion and that the more sensational results are either extremely unlikely or simply misunderstood.

    If you are keen on plumbing deeper depths, you could also consider questions about where authority comes from and how it is constructed in a narrative, the importance of indeterminate knowledge-states, the pros and cons of scientism, what constitutes scientific knowledge, how scientific publishing works, etc.

    A science editor has to know all these things and ensure that in the process of running a newsroom or editing a publication, they don’t misuse, misconstrue or misrepresent scientific work and scientists. And in this process, I think it’s important for a science editor to not be considered to be subservient to the interests of science or scientists. Editors have their own goals, and more broadly speaking science communication in all forms needs to be seen and addressed in its own right – as an entity that doesn’t owe anything to science or scientists, per se.

    5) In a country where press freedom is often sacrificed, how does one deal with political pieces, especially when there is proof against a matter concerning the government?

    I’m not sure what you mean by “proof against a matter concerning the government.” But in my view, the likelihood of different outcomes depends on the business model. If, for example, you the publisher make a lot of money from a hotshot industrialist and his company, then obviously you are going to tread carefully when handling stories about that person or the company. How you make your money dictates who you are ultimately answerable to. If you make your money by selling newspapers to your readers, or collecting donations from them like The Wire does, you are answerable to your readers.

    In this case, if we are handling a story in which the government is implicated in a bad way, we will do our due diligence and publish the story. This ‘due diligence’ is important: you need to be sure you have the requisite proof, that all parts of the story are reliable and verifiable, that you have documentary evidence of your claims, and that you have given the implicated party a chance to defend themselves (e.g. by being quoted in the story).

    This said, absolute press freedom is not so simple to achieve. It doesn’t just need brave editors and reporters. It also needs institutions that will protect journalists’ rights and freedoms, and also shield them reliably from harm or malice. If the courts are not likely to uphold a journalist’s rights or if the police refuse proper protection when the threat of physical violence is apparent, blaming journalists for “sacrificing” press freedom is ignorant. There is a risk-benefit analysis worth having here, if only to remember that while the benefit of a free press is immense, the risks shouldn’t be taken lightly.

    6) Research papers are lengthy and editors have deadlines. How do you make sure to communicate information with the right context for a wider audience?

    Often the quickest way to achieve this is to pick your paper and take it to an independent scientist working in the same field. These independent comments are important for the story. But specific to your question, these scientists – if they have the time and are so inclined – can often also help you understand the paper’s contents properly, and point out potential issues, flaws, caveats, etc. These inputs can help you compose your story faster.

    I would also say that if you are an editor looking for an article on a newly published research paper, you would be better off commissioning a reporter who is familiar, to whatever extent, with that topic. Obviously if you assign a business reporter to cover a paper about nanofluidic biosensors, the end result is going to be somewhere between iffy and disastrous. So to make sure the story has got its context right, I would begin by assigning the right reporter and making sure they’ve got comments from independent scientists in their copy.

    7) What are some of the major challenges faced by science communicators and reporters in India?

    This is a very important question, and I can’t hope to answer it concisely or even completely. In January this year, the office of the Principal Scientific Advisor to the Government of India organised a meeting with a couple dozen science journalists and communicators from around India. I was one of the attendees. Many of the issues we discussed, which would also be answers to your question, are described here.

    If, for the purpose of your assignment, you would like me to pick one – I would go with the fact that science journalism, and science communication more broadly, is not widely acknowledged as an enterprise in its own right. As a result, many people don’t see the value in what science journalists do. A second and closely related issue is that scientists often don’t respond on time, even if they respond at all. I’m not sure of the extent to which this is an etiquette issue. But by calling it an etiquette issue, I also don’t want to overlook the possibility that some scientists don’t respond because they don’t think science journalism is important.

    I was invited to attend the Young Investigators’ Meeting in Guwahati in March 2019. There, I met a big bunch of young scientists who really didn’t know why science journalism exists or what its purpose is. One of them seemed to think that since scientific papers pass through peer review and are published in journals, science journalists are wasting their time by attempting to discuss the contents of those papers with a general audience. This is an unnecessary barrier to my work – but it persists, so I must constantly work around or over it.

    8) What are the consequences if a research paper has been misreported?

    The consequence depends on the type and scope of misreporting. If you have consulted an independent scientist in the course of your reporting, you give yourself a good chance of avoiding reporting mistakes.

    But of course mistakes do slip through. And with an online publication such as The Wire – if a published article is found to have a mistake, we usually correct the mistake once it has been pointed out to us, along with a clarification at the bottom of the article acknowledging the issue and recording the time at which the change was made. If you write an article that is printed and is later found to have a mistake, the newspaper will typically issue an erratum (a small note correcting a mistake) the next day.

    If an article is found to have a really glaring mistake after it is published – and I mean an absolute howler – the article could be taken down or retracted from the newspaper’s record along with an explanation. But this rarely happens.

    9) In many ways, copy editing disconnects you from your voice. Does it hamper your creativity as a writer?

    It’s hard to find room for one’s voice in a news publication. About nine-tenths of the time, each of us is working on news copy, in which a voice is neither expected nor can it add much value of its own. This said, when there is room to express oneself more – to write in one’s voice, so to speak – copy-editing doesn’t have to remove it entirely.

    Working with voices is a tricky thing. When writers pitch or write articles in which their voices are likely to show up, I always ask them beforehand as to what they intend to express. This intention is important because it helps me edit the article accordingly (or decide whether to edit it at all). The writer’s voice is part of this negotiation. Like I said before, my job as the editor is to make sure my writers convey their points clearly and effectively. And if I find that their voice conflicts with the message or vice versa, I will discuss it with them. It’s a very contested process and I don’t know if there is a black-and-white answer to your question.

    It can happen, of course, that you’re working with a bad editor who simply remodels your work to suit their needs without checking with you. But short of that, it’s a negotiation.

  • How do you study a laser firing for one-quadrillionth of a second?

    I’m grateful to Mukund Thattai, at the National Centre for Biological Sciences, Bengaluru, for explaining many of the basic concepts at work in the following article.

    An important application of lasers today is in the form of extremely short-lived laser pulses, used to illuminate extremely short-lived events that often play out across extremely short distances. The liberal use of ‘extreme’ here is justified: these pulses last for no more than one-quadrillionth of a second each. By the time you blink your eye once, 100 trillion of these pulses could have been fired. Some of the more advanced applications even require pulses that last 1,000 times shorter.

    In fact, thanks to advances in laser physics, there are branches of study today called attophysics and femtochemistry that employ such fleeting pulses to reveal hidden phenomena that many of the most powerful detectors may be too slow to catch. The atto- prefix denotes an order of magnitude of -18. That is, one attosecond is 1 × 10⁻¹⁸ seconds and one attometer is 1 × 10⁻¹⁸ metres. To quote from this technical article, “One attosecond compares to one second in the way one second compares to the age of the universe. The timescale is so short that light in vacuum … travels only about 0.3 nanometers during 1 attosecond.”
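    As a quick check of that last claim: light covers c × Δt = (3 × 10⁸ m/s) × (10⁻¹⁸ s) = 3 × 10⁻¹⁰ m in one attosecond – 0.3 nm, roughly the width of a single small molecule.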

    One of the more common applications is the pump-probe technique. An ultra-fast laser pulse is first fired at, say, a group of atoms, which causes the atoms to move in an interesting way. This is the pump. After an extremely short, precisely controlled delay, a similarly short ‘probe’ pulse is fired at the atoms to discern their positions. By repeating this process many times over, and fine-tuning the delay between the pump and probe shots, researchers can figure out exactly how the atoms responded across very short timescales.
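    The logic of the delay scan is simple enough to sketch in a few lines of code. The damped oscillation below is an invented stand-in for whatever the pumped atoms actually do; everything else mirrors the procedure just described:

    ```python
    import numpy as np

    def atomic_response(t_fs):
        """Invented stand-in for the pumped atoms' motion: a damped oscillation.

        t_fs is the time since the pump pulse, in femtoseconds.
        """
        return np.exp(-t_fs / 50.0) * np.cos(2 * np.pi * t_fs / 10.0)

    # Fire pump + probe many times, increasing the pump-probe delay each shot;
    # each probe shot samples the response at one instant.
    delays_fs = np.arange(0.0, 200.0, 1.0)
    trace = np.array([atomic_response(d) for d in delays_fs])

    # 'trace' now reconstructs the ultrafast dynamics point by point, even
    # though no detector followed any single shot in real time.
    print(trace[:5])
    ```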

    In this application and others like it, the pulses have to be fired at controllable intervals and to deliver very predictable amounts of energy. The devices that generate these pulses often provide these features, but it is often necessary to independently study the pulses and fine-tune them according to different applications’ needs. This post discusses one such way and how physicists improved on it.

    As electromagnetic radiation, every laser pulse is composed of an electric field and a magnetic field oscillating perpendicular to each other. Of these, consider the electric field (only because it’s easier to study; thanks to Maxwell’s equations, what we learn about the electric field can be inferred accordingly for the magnetic field as well):

    The blue line depicts the oscillating electric wave, also called the carrier wave (because it carries the energy). The dotted line around it depicts the wave’s envelope. It’s desirable to have the carrier’s crest and the envelope’s crest coincide – i.e. for the carrier wave to peak at the same point the envelope as a whole peaks. However, trains of laser pulses, generated for various applications, typically drift: the crest of every subsequent carrier wave is slightly more out of step with the envelope’s crest. According to one paper, this arises “due to fluctuations of dispersion, caused by changes in path length, and pump energy experienced by consecutive pulses in a pulse train.” In effect, the researcher can’t know the exact amount of energy contained in each pulse, or how that may affect the target.

    The extent to which the carrier wave and the envelope are out of step is expressed in terms of the carrier-envelope offset (CEO) phase, measured in degrees (or radians). Knowing the CEO phase is crucial for experiments that involve ultra-precise measurements because the phase is likely to affect the measurements in question, and needs to be adjusted for. According to the same paper, “Fluctuations in the [CEO phase] translate into variations in the electric field that hamper shot-to-shot reproducibility of the experimental conditions and deteriorate the temporal resolution.”
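    In the usual notation (mine, not the paper’s), the pulse is written E(t) = A(t) · cos(ωt + φ), where A(t) is the envelope, ω the carrier frequency and φ the CEO phase. A small sketch shows why a drifting φ matters; all numerical values are invented:

    ```python
    import numpy as np

    def pulse_field(t_fs, ceo_phase_rad, fwhm_fs=5.0, carrier_period_fs=2.7):
        """Electric field of one pulse: Gaussian envelope times carrier wave.

        E(t) = A(t) * cos(w*t + phi). All parameter values are illustrative.
        """
        envelope = np.exp(-4 * np.log(2) * (t_fs / fwhm_fs) ** 2)
        carrier = np.cos(2 * np.pi * t_fs / carrier_period_fs + ceo_phase_rad)
        return envelope * carrier

    # A drifting pulse train: each pulse's carrier slips by a fixed offset
    # relative to the envelope, so the peak field differs from shot to shot.
    t = np.linspace(-10, 10, 2000)
    slip_per_pulse = 0.3  # radians; illustrative
    for n in range(4):
        field = pulse_field(t, ceo_phase_rad=n * slip_per_pulse)
        print(f"pulse {n}: peak |E| = {np.abs(field).max():.3f}")
    ```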

    This is why, in turn, physicists have developed techniques to measure the CEO phase and other properties of propagating waves. One of them is called attosecond streaking. Physicists stick a gas of atoms in a container and fire a laser at it to ionise the atoms and release electrons. The field to be studied is then fired into this gas, so that its electric-wave component pushes on these electrons. Specifically, as the electric field’s waves rise and fall, they accelerate the electrons to different extents over time, giving rise to streaks of motion – and the technique’s name. A time-of-flight spectrometer measures this streaking to determine the field’s properties. (The magnetic field also affects the electrons, but it suffices to focus on the electric field for this post.)
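    In the textbook streaking picture (a standard result, not specific to this post), an electron set free at time t ends up with a momentum shift proportional to the laser’s vector potential at that instant, Δp = −eA(t). A one-dimensional sketch, in arbitrary units:

    ```python
    import numpy as np

    # Streaking in one dimension. An electron released at t_release picks up
    # a momentum shift proportional to the vector potential at that moment.
    omega = 1.0   # carrier angular frequency (arbitrary units)
    A0 = 0.2      # vector-potential amplitude (arbitrary units)

    def momentum_shift(t_release):
        return -A0 * np.cos(omega * t_release)  # electron charge folded into A0

    # Electrons released at different phases of the wave end up with different
    # momenta: the spectrometer reads the field's oscillation off the electrons.
    for t in np.linspace(0, 2 * np.pi, 5):
        print(f"t = {t:4.2f}  ->  delta_p = {momentum_shift(t):+.3f}")
    ```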

    This sounds straightforward but the setup is cumbersome: the study needs to be conducted in a vacuum and electron time-of-flight spectrometers are expensive. But while there are other ways to measure the wave properties of extreme fields, attosecond streaking has been one of the most successful (in one instance, it was used to measure the CEO phase at a shot frequency of 400,000 times per second).

    As a workaround, physicists from Germany and Canada recently reported in the journal Optica a simpler way, based on one change. Instead of setting up a time-of-flight spectrometer, they propose using the pushed electrons to induce an electric current in electrodes, in such a way that the properties of the current contain information about the CEO phase. This way, researchers can drop both the spectrometer and, because the electrons aren’t being investigated directly, the vacuum chamber.

    The researchers used fused silica, a material with a wide band-gap, for the electrodes. The band-gap is the amount of energy that must be imparted to a material’s electrons for them to ‘jump’ from the valence band to the conduction band, turning the material into a conductor. The band-gap in metals is zero: if you place a metallic object in an electric field, it will develop an internal current linearly proportional to the field strength. Semiconductors have a small band-gap, which means some electric fields can give rise to a current while others can’t – a feature that modern electronics exploit very well.

    Dielectric materials have a (relatively) large band-gap. When exposed to a weak electric field, a dielectric won’t conduct electricity, but its internal arrangement of positive and negative charges will shift slightly, creating a minor internal electric field. When the field strength crosses a particular threshold, however, the material will ‘break down’ and become a conductor – like a bolt of lightning piercing the air.

    Next, the team circularly polarised the laser pulse to be studied. Polarisation refers to the electric field’s orientation in space, and the effect of circular polarisation is to cause the electric field to rotate. And as the field moves forward, its path traces a spiral, like so:

    The reason for doing this, according to the team’s paper, is that when the circularly polarised laser pulse knocks electrons out of atoms, the electrons’ momentum is “perpendicular to the direction of the maximum electric field”. So as the CEO phase changes, the electrons’ direction of drift also changes. The team used an arrangement of three electrodes, connected to each other in two circuits (see diagram below), such that electrons flowing in different directions induce currents of proportionately different strengths in the two arms. Amplifiers attached to the electrodes then magnify these currents and open them up for further analysis. Since the envelope’s peak can be determined beforehand and doesn’t drift over time, the CEO phase can be calculated straightforwardly.
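    A toy model of this readout, with invented proportionality constants: if the electrons’ drift direction rotates with the CEO phase, two orthogonal electrode pairs see currents proportional to its cosine and sine, and the phase falls out of an arctangent:

    ```python
    import numpy as np

    def electrode_currents(ceo_phase_rad, k=1.0):
        """Toy model: with circular polarisation, the electrons' drift direction
        rotates with the CEO phase, so two orthogonal electrode pairs see
        currents proportional to its cosine and sine. k is an invented
        responsivity constant."""
        i_x = k * np.cos(ceo_phase_rad)
        i_y = k * np.sin(ceo_phase_rad)
        return i_x, i_y

    def recover_phase(i_x, i_y):
        """Invert the two amplified currents back to a CEO phase."""
        return np.arctan2(i_y, i_x)

    phi_in = 1.2  # radians; a deliberately inserted phase, as in the team's check
    print(recover_phase(*electrode_currents(phi_in)))  # ~1.2
    ```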

    (The experimental setup, shown below, is a bit different: since the team had to check if their method works, they deliberately insert a CEO phase in the pulse and check if the setup picks up on it.)

    The team writes towards the end of the paper, “The most important asset of the new technique, besides its striking simplicity, is its potential for single-shot [CEO phase] measurements at much higher repetition rates than achievable with today’s techniques.” It attributes this to the fact that attosecond streaking is limited by the speed of the time-of-flight spectrometer, whereas the new setup is limited in the kHz range only by the time the amplifiers need to boost the electric signals, and in the “multi-MHz” range by the ability of the volume of gas being struck to respond sufficiently rapidly to the laser pulses. The team also states that its electrode-mediated measurement method renders the setup favourable to radiation of longer wavelengths as well.

    Featured image: A collection of lasers of different frequencies in the visible-light range. Credit: 彭嘉傑/Wikimedia Commons, CC BY 2.5 Generic.

  • Powerful microscopy technique brings proteins into focus

    Cryo-electron microscopy (cryo-EM) as a technology has become more important because the field it revolutionised – structural biology – has become more important. The international scientific community acknowledged this rise in fortunes, so to speak, when the Nobel Prize for chemistry was awarded to three people in 2017 for perfecting the technique’s use to study important biomolecules and molecular processes.

    (Who received the prize is immaterial, considering more than just three people are likely to have contributed to the development of cryo-EM; however, the prize-giving committee’s choice of field to spotlight is a direction worth following.)

    In 2015, two separate groups of scientists used cryo-EM to resolve features 2.8 Å and 2.2 Å wide (1 nm is one-billionth of a metre; 1 Å is one-tenth of this). These distances are considered atomic because they represent the ability to image features about as big as a smallish atom, comparable to that of, say, sodium. Before cryo-EM, scientists could image such distances only with X-ray crystallography, which requires the samples being studied to be crystallised first. This isn’t always possible.

    But though cryo-EM didn’t require specimens to be crystallised, they had to be placed in a vacuum first. In vacuum, water evaporates, and when water evaporates from biological objects like tissue, the specimen could lose its structural integrity and collapse or deform. The trio that won the chemistry prize in 2017 developed multiple workarounds for this and other problems. Taken together, their innovations allowed scientists to find cryo-EM to be more and more valuable for research.

    One of the laureates, Joachim Frank, developed computational techniques in the 1970s and 1980s to enhance, correct and in other ways modify images obtained with cryo-EM. And one of these techniques in turn was particularly important.

    An object will reflect a wave if the object’s size is comparable to the wave’s wavelength. Humans see a chair or table because the chair or table reflects visible light, and our eyes detect the reflected electromagnetic waves. A cryo-EM ‘sees’ its samples using electrons, which have a smaller wavelength than photons and can thus reveal even smaller objects.

    However, there’s a catch. The more energetic an electron is, the shorter its wavelength, and the smaller the feature it can resolve – but a high-energy electron can also damage the specimen altogether. Frank’s contributions allowed scientists to reduce the number of electrons, or their energy, and still obtain equally good images of their specimens, leading to resolutions of 2.2 Å.
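    The relation at work here is the de Broglie wavelength, corrected for relativity at the accelerating voltages electron microscopes use. A quick calculation (standard physics, not from the paper):

    ```python
    import math

    # Relativistic de Broglie wavelength of an electron accelerated through V volts:
    # lambda = h / sqrt(2*m*e*V * (1 + e*V / (2*m*c^2)))
    h = 6.626e-34   # Planck constant, J s
    m = 9.109e-31   # electron mass, kg
    e = 1.602e-19   # elementary charge, C
    c = 2.998e8     # speed of light, m/s

    def electron_wavelength_pm(volts):
        ke = e * volts  # kinetic energy in joules
        return 1e12 * h / math.sqrt(2 * m * ke * (1 + ke / (2 * m * c * c)))

    for kv in (100, 200, 300):
        print(f"{kv} kV -> {electron_wavelength_pm(kv * 1e3):.2f} pm")
    # 300 kV gives ~1.97 pm, far below the 1-2 Angstrom features being resolved;
    # in practice, resolution is limited by damage and noise, not wavelength.
    ```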

    Today, structural biology continues to be important, but its demands have become more exacting. To elucidate the structures of smaller and smaller molecules, scientists need cryo-EM and other tools to be able to resolve smaller and smaller features, but come up against significant physical barriers.

    For example, while Frank’s techniques allowed scientists to reduce the number of electrons required to obtain the image of a sample, using fewer probe particles also meant a lower signal-to-noise ratio (SNR). So the need for new techniques, new solutions, to these old problems has become apparent.

    In a paper published online on October 21, a group of scientists from Belgium, the Netherlands and the UK describe “three technological developments that further increase the SNR of cryo-EM images”. These are a new kind of electron source, a new energy filter and a new electron camera.

    The electron source is something the authors call a cold field emission electron gun (CFEG). Some electron microscopes use field emission guns (FEGs) to shoot sharply focused, coherent beams of electrons optimised to have energies that will produce a bright image. A CFEG is a FEG that trades some of this brightness for a smaller spread of energies among the electrons. The larger this energy spread, the more blurred the image.

    The authors’ pitch is that FEGs help produce brighter but more blurred images than CFEGs, and that CFEGs help produce significantly better images when the goal is to image features smaller than 2 Å. Specifically, they write, the SNR increases 2.5x at a resolution of 1.5 Å and 9.5x at 1.2 Å.

    The second improvement has to do with the choice of electrons used to compose the final image. The electrons fired by the gun (CFEG or otherwise) go on to have one of two types of collisions with the specimen. In an elastic collision, the electron’s kinetic energy doesn’t change – i.e. it doesn’t impart its kinetic energy to the specimen. In an inelastic collision, the electron’s kinetic energy changes because the electron has passed on some of it to the specimen itself. This energy transfer can produce noise, lower the SNR and distort the final image.

    The authors propose using a filter that removes electrons that have undergone inelastic collisions from the final assessment. In simple terms, the filter comprises a slit through which only electrons of a certain energy can pass, and a prism that bends their path towards a detector. This said, they do acknowledge that it will be interesting to explore in future whether inelastically scattered electrons can be better accounted for instead of being eliminated altogether – akin to silencing a classroom by expelling unruly children versus retaining them and teaching them to keep quiet.
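    Schematically, this zero-loss filtering amounts to keeping only the electrons inside a narrow energy window around the incident energy. A sketch, with an invented slit width:

    ```python
    import numpy as np

    def zero_loss_filter(energies_ev, e0_ev=300e3, slit_width_ev=10.0):
        """Keep only electrons whose energy sits within the slit's window around
        the incident energy e0 - i.e. those that scattered elastically.
        The 10 eV slit width is illustrative, not the instrument's spec."""
        mask = np.abs(energies_ev - e0_ev) < slit_width_ev / 2
        return energies_ev[mask]

    # Elastic electrons cluster at e0; inelastic ones have lost tens of eV.
    energies = np.array([300e3, 300e3 - 0.5, 300e3 - 25.0, 300e3 - 80.0])
    print(zero_loss_filter(energies))  # only the two near-300-keV electrons survive
    ```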

    The final improvement is to use the “next-generation” Falcon 4 direct-electron detector. This is the latest iteration in a line of products developed by Thermo Fisher Scientific to count, as accurately as possible, the number of electrons impinging on a surface and their relative locations, at a desirable exposure. The Falcon 4 has a square detection area 14 µm to a side, a sampling frequency of 248 Hz and a “sub-pixel accuracy” (according to the authors) that allows the device to not lose track of electrons even if they impinge close to each other on the detector.
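    The authors don’t spell out how the sub-pixel accuracy is achieved, but a common approach in electron counting – not necessarily the Falcon 4’s – is to locate each electron at the intensity-weighted centroid of the small cluster of pixels it lights up:

    ```python
    import numpy as np

    def centroid(patch):
        """Sub-pixel hit position: intensity-weighted centroid of the pixel
        cluster one electron produces. A generic counting approach; the
        Falcon 4's actual algorithm is not public."""
        ys, xs = np.indices(patch.shape)
        total = patch.sum()
        return (xs * patch).sum() / total, (ys * patch).sum() / total

    # One electron's charge, spread asymmetrically over a 3x3 pixel patch:
    patch = np.array([[0.0, 1.0, 0.0],
                      [1.0, 4.0, 2.0],
                      [0.0, 1.0, 0.0]])
    print(centroid(patch))  # x slightly right of the central pixel
    ```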

    Combining all three improvements, the authors write that they were able to image a human membrane protein called the β3 GABA_A receptor at a resolution of 1.7 Å, and mouse apoferritin at 1.22 Å. (The protein called ferritin binds to iron and stores/releases it; apoferritin is ferritin sans iron.)

    “The increased SNR of cryo-EM images enabled by the technology described here,” the authors conclude, “will expand [the technique] to more difficult samples, including membrane proteins in lipid bilayers, small proteins and structurally heterogeneous macromolecular complexes.”

    At these resolutions, scientists are closing in on images not just of macromolecules of biological importance but of parts of these molecules – and can in effect elucidate the structures that correspond to specific functions or processes. This is somewhat like going from knowing that viruses infect cells to determining the specific parts of a virus and a cell implicated in the infiltration process.

    A very germane example is that of the novel coronavirus. In April this year, a group of researchers from France and the US reported the cryo-EM structure of the virus’s spike glycoprotein, which binds to the ACE2 protein on the surface of some cells to gain entry. Knowing this structure, other researchers can design better inhibitors to disrupt the glycoprotein’s function, as well as vaccines that mimic its presence to provoke the desired immune response.

    In this regard, a resolution of 1-2 Å corresponds to the dimensions of individual covalent bonds. So by extending cryo-EM’s ability to decipher smaller and smaller features, researchers can strike at smaller, more precise molecular mechanisms to produce more efficient, perhaps more closely controlled and finely targeted, effects.

    Featured image: Scientists using a 300-kV cryo-EM at the Max Planck Institute of Molecular Physiology, Dortmund. Credit: MPI Dortmund.