Uncategorized

  • Tom Kibble (1932-2016)

    Featured image: From left to right: Tom Kibble, Gerald Guralnik, Richard Hagen, François Englert and Robert Brout. Credit: Wikimedia Commons.

    Sir Tom Kibble passed away on June 2; I learnt only this morning, with a bit of sadness, that I’d missed the news. It’s hard to write about someone in a way that prompts others either to find out more about that person or, if they knew him or his work, to recall their memories of him, when I myself would like only to do the former now. So let me quickly spell out why I think you should pay attention: Kibble was one of the six theorists who, in 1964, came up with the ABEGHHK’tH mechanism to explain how gauge bosons acquired mass. The ‘K’ in that string of letters stands for ‘Kibble’. However, we remember the mechanism only by the second ‘H’, which stands for Higgs; the other letters fell off for reasons not entirely clear – although convenience might’ve played a role. And while everyone refers to it as the Higgs mechanism, Peter Higgs, the man himself, continues to call it the ABEGHHK’tH mechanism.

    Anyway, Kibble was known for three achievements. The first was to co-formulate – alongside Gerald Guralnik and Richard Hagen – the ABEGHHK’tH mechanism. It was validated in early 2013, earning only Higgs and ‘E’, François Englert, the Nobel Prize for physics that year. The second came in 1967: an explanation of how the mechanism gives the W and Z bosons, the carriers of the weak nuclear force, their mass while leaving the photon massless. The solution was crucial to validating the electroweak theory, whose three conceivers (Sheldon Glashow, Abdus Salam and Steven Weinberg) won the Nobel Prize for physics in 1979. The third was the postulation of the Kibble-Żurek mechanism, which explains the formation of topological defects in the early universe by applying the principles of quantum mechanics to cosmological objects. This work was done with the Polish-American physicist Wojciech Żurek.

    I spoke to Kibble once, only for a few minutes, at a conference at the Institute of Mathematical Sciences, Chennai, in December 2013 (at the same conference where I met George Sterman as well). This was five months after Fabiola Gianotti had made the famous announcement at CERN that the LHC had found a particle that looked like the Higgs boson. I’d asked Kibble what he made of the announcement, and where we’d go from here. He said, as I’m sure he would’ve a thousand times before, that it was very exciting to be proven right after 50 years; that it’d definitively closed one of the biggest knowledge gaps in modern theoretical particle physics; and that there was still work to be done by studying the Higgs boson for more clues about the nature of the universe. He had to rush; a TV crew was standing next to me, nudging me to wrap up so they could have some time with him. I was glad to see it was Puthiya Thalaimurai, a Tamil-language news channel, because it meant the ‘K’ had endured.

    Rest in peace, Tom Kibble.

  • ‘Infinite in All Directions’, a science newsletter

    At 10 am (IST) every Monday, I will be sending out a list of links to science stories from around the web, curated by significance and accompanied by a useful blurb, as a newsletter. If you’re interested, please sign up here. If you’d like to know more before signing up, read on.

    It’s called Infinite in All Directions – a term coined by Freeman Dyson for nothing really except the notion behind this statement from his book of the same name: “No matter how far we go into the future, there will always be new things happening, new information coming in, new worlds to explore, a constantly expanding domain of life, consciousness and memory.”

    I will be collecting the links and sending the newsletter out on behalf of The Wire, whose science section I edit. And so, you can trust the links to not be to esoteric pieces (which I’m fond of) but to pieces I’d have liked to have covered at The Wire but couldn’t.

    More than that, the idea for the newsletter is essentially a derivative of a reading challenge a friend proposed a while ago, wherein a group of us would recommend books for each other to read, especially titles that we might not come upon by ourselves.

    Some of you might remember that a (rather, the same) friend and I used to send out the Curious Bends newsletter until sometime last year. The Infinite in All Directions newsletter will be similarly structured but won’t necessarily be India-centric. In fact, a (smaller than half) section of the newsletter may even be consistently skewed toward the history and philosophy of science. But you can trust that the issues will all be contemporary.

    Apart from my ‘touch’ coming through with the selection, I will also occasionally include my take on some topics (typically astro/physics). You’re welcome to disagree (just be nice) – all replies to the newsletter will land up in my inbox. You’re also more than welcome to send me links to include in future issues.

    Finally: Each newsletter will not have a fixed number of links – I don’t want to link you to pieces I myself haven’t been able to appreciate. At the same time, there will be at least five or so links. I think The Wire alone puts out that many good stories each week.

    I hope you enjoy reading the newsletter. As with this blog, Infinite in All Directions will be a labour of love. Please share it with your friends and anybody who might be interested in such a service. Again, here is the link to subscribe.

  • A universe out of sight

    Two things before we begin:

    1. The first subsection of this post assumes that humankind has colonised some distant extrasolar planet(s) within the observable universe, and that humanity won’t be wiped out in 5 billion years.
    2. Both subsections assume a pessimistic outlook, and neither of the projections they dwell on may ever come to pass while humanity still exists. Nonetheless, it’s still fun to consider them and their science, and, most importantly, their potential to fuel fiction.

    Cosmology

    Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field. Caption and credit: hubble_esa/Flickr, CC BY 2.0

    Note: An edited version of this post has been published on The Wire.

    A new study whose results were reported this morning made for a disconcerting read: it seems the universe is expanding 5-9% faster than we figured it was.

    That the universe is expanding at all – that it is growing in volume like a balloon, continuously birthing more emptiness within itself – is disappointing. Because of the suddenly larger distances between things, each passing day leaves us lonelier than we were yesterday. The universe’s expansion is accelerating, too, and that doesn’t simply mean objects getting farther away. It means some photons from those objects will never reach our telescopes despite travelling at lightspeed, doomed to yearn forever like Tantalus in Tartarus. At some point in the future, a part of the universe will become completely invisible to our telescopes, and it will remain that way no matter how hard we try.

    And the darkness will only grow, until a day out of an Asimov story confronts us: a powerful telescope bearing witness to the last light of a star before it is stolen from us for all time. Even if such a day is far, far into the future – the effect of the universe’s expansion is perceptible only on intergalactic scales, as the Hubble constant indicates, and simply negligible within the Solar System – the day exists.

    This is why we are uniquely positioned: to be able to see as much as we are able to see. At the same time, it is pointless to wonder how much more we are able to see than our successors because it calls into question what we have ever been able to see. Say the whole universe occupies a volume of X, that the part of it that remains accessible to us contains a volume Y, and what we are able to see today is Z. Then: Z < Y < X. We can dream of some future technological innovation that will engender a rapid expansion of what we are able to see, but with Y being what it is, we will likely forever play catch-up (unless we find tachyons, navigable wormholes, or the universe beginning to decelerate someday).

    Is the universe’s expansion speeding up or slowing down? There is a number that captures this, called the deceleration parameter:

    q = –(1 + Ḣ/H²),

    where H is the Hubble constant and Ḣ is its first derivative with respect to time. The Hubble constant is the speed at which an object one megaparsec from us is receding. So if q is positive, the universe’s expansion is slowing down. If q is zero, then 1/H is the time elapsed since the Big Bang. And if q is negative – as scientists have found to be the case – then the universe’s expansion is accelerating.
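    To make the sign convention concrete, here is a minimal sketch in Python; the values of H and Ḣ below are placeholders chosen only to illustrate the arithmetic, not measured ones.

    ```python
    # Deceleration parameter q = -(1 + Hdot/H^2).
    # H and H_dot below are placeholder values for illustration only.

    def deceleration_parameter(H, H_dot):
        """q from the Hubble parameter H (1/s) and its time-derivative H_dot (1/s^2)."""
        return -(1 + H_dot / H**2)

    H = 2.2e-18       # ~68 km/s/Mpc expressed in 1/s
    H_dot = -1.0e-36  # a hypothetical, slightly negative time-derivative

    q = deceleration_parameter(H, H_dot)
    if q > 0:
        print(q, "-> expansion is slowing down")
    elif q == 0:
        print(q, "-> coasting; the universe's age is 1/H")
    else:
        print(q, "-> expansion is accelerating")
    ```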

    The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterised by values of density parameters (Ω_M for matter and Ω_Λ for dark energy). Caption and credit: Wikimedia Commons

    We measure the expansion of the universe from our position: on its surface (because, no, we’re not inside the universe). We look at light coming from distant objects, like supernovae; we work out how much that light is ‘red-shifted’; and we compare that to previous measurements. Here’s a rough guide.

    What kind of objects do we use to measure these distances? Cosmologists prefer type Ia supernovae. In a type Ia supernova, a white dwarf (the dense core of a dead star, held up by its electrons) slowly sucks in matter from an object orbiting it until it becomes hot enough to trigger a runaway fusion reaction. In the next few seconds, the reaction expels 10⁴⁴ joules of energy, visible as a bright fleck in the gaze of a suitable telescope. Such explosions have a unique attribute: the mass of the white dwarf that goes boom is nearly uniform, which means type Ia supernovae across the universe are almost equally bright. This is why cosmologists refer to them as ‘standard candles’. Based on how faint these candles appear, you can tell how far away they are burning.
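    As a rough sketch of that ‘how faint, how far’ logic – assuming a made-up peak luminosity and measured flux, and ignoring redshift corrections and dust – the inverse-square law does the work:

    ```python
    import math

    # Standard-candle sketch: if every type Ia supernova peaks at (roughly) the same
    # intrinsic luminosity L, the observed flux F gives the distance via F = L / (4*pi*d^2).
    # Both numbers below are illustrative, not real measurements.

    L_PEAK = 1e36          # assumed peak luminosity, in watts

    def distance_from_flux(flux_w_per_m2, luminosity=L_PEAK):
        """Distance (in metres) at which a candle of the given luminosity looks this faint."""
        return math.sqrt(luminosity / (4 * math.pi * flux_w_per_m2))

    observed_flux = 1e-14  # hypothetical measured flux, W/m^2
    d_light_years = distance_from_flux(observed_flux) / 9.461e15
    print(f"distance ~ {d_light_years:.2e} light years")
    ```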

    After a type Ia supernova occurs, photons set off from its surface toward a telescope on Earth. However, because the universe is continuously expanding, the distance between us and the supernova is continuously increasing. The effective interpretation is that the explosion appears to be moving away from us, becoming fainter. How much it has moved away is derived from the redshift. The wave nature of radiation allows us to think of light as having a frequency and a wavelength. When an object that is moving away from us emits light toward us, the waves of light appear to become stretched, i.e. the wavelength seems to become distended. If the light is in the visible part of the spectrum when starting out, then by the time it reaches Earth, the increase in its wavelength will make it seem redder. And so the name.

    The redshift, z – technically known as the cosmological redshift – can be calculated as:

    z = (λ_observed – λ_emitted)/λ_emitted

    In English: the redshift is the fractional amount by which the observed wavelength is stretched relative to the emitted wavelength. If z = 1, the observed wavelength is twice the emitted wavelength. If z = 5, the observed wavelength is six times the emitted wavelength. The farthest galaxy we know (MACS0647-JD) is estimated to lie at a distance wherefrom z = 10.7 (corresponding to 13.3 billion lightyears).
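    A two-line sketch of the same arithmetic, with illustrative wavelengths:

    ```python
    def redshift(lambda_observed, lambda_emitted):
        """Cosmological redshift z = (observed - emitted) / emitted."""
        return (lambda_observed - lambda_emitted) / lambda_emitted

    # Illustrative wavelengths in nanometres:
    print(redshift(1000, 500))   # 1.0 -> observed wavelength is twice the emitted one
    print(redshift(3000, 500))   # 5.0 -> observed wavelength is six times the emitted one
    ```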

    Anyway, z is used to calculate the cosmological scale-factor, a(t). This is the formula:

    a(t) = 1/(1 + z)

    a(t) is then used to calculate the distance between two objects:

    d(t) = a(t) d0,

    where d(t) is the distance between the two objects at time t and d0 is the distance between them at some reference time t0. Since the scale factor would be constant throughout the universe, d(t) and d0 can be stand-ins for the ‘size’ of the universe itself.

    So, let’s say a type Ia supernova lit up at a redshift of 0.6. This gives a(t) = 0.625 = 5/8. So: d(t) = 5/8 * d0. In English, this means that the universe was 5/8th its current size when the supernova went off. Using z = 10.7, we infer that the universe was one-twelfth its current size when light started its journey from MACS0647-JD to reach us.
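    The same worked examples, as a quick sketch:

    ```python
    def scale_factor(z):
        """Cosmological scale factor a(t) = 1 / (1 + z) at the time the light was emitted."""
        return 1 / (1 + z)

    print(scale_factor(0.6))    # 0.625 -> the universe was 5/8th its current size
    print(scale_factor(10.7))   # ~0.0855 -> roughly one-twelfth (MACS0647-JD)
    ```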

    As it happens, residual radiation from the primordial universe is still around today – as the cosmic microwave background radiation. It originated 378,000 years after the Big Bang, following a period called the recombination epoch, 13.8 billion years ago. Its redshift is 1,089. Phew.

    The relation between redshift (z) and distance (in billions of light years). d_H is the comoving distance between you and the object you’re observing. Where it flattens out is the distance out to the edge of the observable universe. Credit: Redshiftimprove/Wikimedia Commons, CC BY-SA 3.0

    A curious redshift is z = 1.4, corresponding to a distance of about 4,200 megaparsec (~0.13 trillion trillion km). Objects that are already this far from us are moving away from us faster than the speed of light. However, this isn’t faster-than-light travel because it doesn’t involve travelling. It’s just a case of the distance between us and the object increasing at such a rate that, if that distance was once covered by light in time t0, light will now need t > t0 to cover it*. The corresponding a(t) = 0.42. I wonder at times if this is what Douglas Adams was referring to (… and at other times I don’t, because the exact z at which this happens is 1.69, which means a(t) = 0.37. But it’s something to think about).

    Ultimately, we will never be able to detect any electromagnetic radiation from before the recombination epoch 13.8 billion years ago; then again, the universe has since expanded, leaving the supposed edge of the observable universe 46.5 billion lightyears away in any direction. In the same vein, we can imagine there will be a distance (closing in) at which objects are moving away from us so fast that the photons from their surface never reach us. These objects will define the outermost edges of the potentially observable universe, nature’s paltry alms to our insatiable hunger.

    Now, a gentle reminder that the universe is expanding a wee bit faster than we thought it was. This means that our theoretical predictions, founded on Einstein’s theories of relativity, have been wrong for some reason; perhaps we haven’t properly accounted for the effects of dark matter? This also means that, in an Asimovian tale, there could be a twist in the plot.

    *When making such a measurement, Earthlings assume that Earth as seen from the object is at rest and that it’s the object that is moving. In other words: we measure the relative velocity. A third observer will notice both Earth and the object to be moving away, and her measurement of the velocity between us will be different.


    Particle physics

    Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN

    If the news that our universe is expanding 5-9% faster than we thought portends a stellar barrenness arriving sooner in the future, then another piece of news foretells a fecundity of opportunities: in the opening days of its 2016 run, the Large Hadron Collider produced more data in a single day than it did in the entirety of its first run (which led to the discovery of the Higgs boson).

    Now, so much about the cosmos was easy to visualise, abiding as it all did by Einstein’s conceptualisation of physics: as inherently classical, and never violating the principles of locality and causality. However, Einstein’s physics explains only one of the two infinities that modern physics has been able to comprehend – the other being the world of subatomic particles. And the kind of physics that reigns over the particles isn’t classical in any sense, and sometimes takes liberties with locality and causality as well. At the same time, it isn’t arbitrary either. How then do we reconcile these two sides of physics?

    Through the rules of statistics. Take the example of the Higgs boson: it is not created every time two protons smash together, no matter how energetic the protons are. It is created at a fixed rate – once every ~X collisions. Even better: we say that whenever a Higgs boson forms, it decays to a group of specific particles one-Yth of the time. The value of Y is related to a number called the coupling constant. The lower Y is, the higher the coupling constant is, and the more often the Higgs boson will decay into that group of particles. When estimating a coupling constant, theoretical physicists assess the various ways in which the decays can happen (e.g., Higgs boson → two photons).
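    A toy calculation of how X and Y set the number of collisions required; the values below are placeholders, not the real LHC numbers:

    ```python
    # A Higgs boson forms once every X collisions, and decays to the watched group
    # of particles once every Y decays. X and Y are illustrative placeholders.
    X = 10_000_000_000   # collisions per Higgs boson produced (illustrative)
    Y = 500              # Higgs decays per decay into the watched channel (illustrative)

    def expected_watched_decays(n_collisions):
        """Average number of decays into the watched channel after n_collisions."""
        return n_collisions / (X * Y)

    print(expected_watched_decays(1e15))   # 200.0 watched decays in 10^15 collisions
    ```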

    A similar interpretation is that the coupling constant determines how strongly a particle and a force acting on that particle will interact. Between the electron and the electromagnetic force is the fine-structure constant,

    α = e²/(2ε₀hc);

    and between quarks and the strong nuclear force is the constant defining the strength of the asymptotic freedom:

    α_s(k²) = [β₀ ln(k²/Λ²)]⁻¹
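    To put numbers on both formulas: the fine-structure constant follows directly from standard physical constants, while the strong-coupling sketch below assumes the one-loop β₀ with five quark flavours and a QCD scale Λ of about 0.2 GeV – illustrative choices, not precision values.

    ```python
    import math

    # Fine-structure constant, alpha = e^2 / (2 * epsilon_0 * h * c)
    e = 1.602176634e-19           # elementary charge, C
    epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m
    h = 6.62607015e-34            # Planck constant, J s
    c = 2.99792458e8              # speed of light, m/s

    alpha = e**2 / (2 * epsilon_0 * h * c)
    print(f"alpha ~ 1/{1 / alpha:.1f}")   # ~1/137

    # Running strong coupling, alpha_s(k^2) = 1 / (beta_0 * ln(k^2 / Lambda^2)).
    # beta_0 (one loop, n_f = 5 flavours) and Lambda ~ 0.2 GeV are assumptions
    # made for illustration only.
    n_f = 5
    beta_0 = (33 - 2 * n_f) / (12 * math.pi)
    Lambda = 0.2   # GeV

    def alpha_s(k_gev):
        """One-loop running coupling evaluated at momentum scale k (in GeV)."""
        return 1 / (beta_0 * math.log(k_gev**2 / Lambda**2))

    print(f"alpha_s(91 GeV) ~ {alpha_s(91.0):.2f}")   # ~0.13 near the Z mass
    ```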

    So, if the LHC’s experiments require P (number of) Higgs bosons decaying a certain way to make their measurements, and its detectors are tuned to detect that group of particles, then enough collisions ought to have happened to produce them – P scaled up by how rarely a Higgs boson forms and how rarely it decays that way. The LHC might be a bad example because it’s a machine on the Energy Frontier: it is tasked with attaining higher and higher energies so that, at the moment the protons collide, heavier and much shorter-lived particles can show themselves. A better example would be a machine on the Intensity Frontier: its aim would be to produce orders of magnitude more collisions to spot extremely rare processes, such as the production of particles that form only very rarely. Then again, it’s not as straightforward as just being prolific.

    It’s like rolling an unbiased die. The chance that you’ll roll a four is 1/6 (i.e. the coupling constant) – but it could happen that if you roll the die six times, you never get a four. This is because 1/6 is an average, not a guarantee; the same chance can also be represented as 10/60. Then again, you could roll the die 60 times and still never get a four (though the odds of that happening are even lower). So you decide to take it to the next level: you build a die-rolling machine that rolls the die a thousand times. You would surely have gotten some fours – but say the fraction of fours still hasn’t settled at one-sixth. So you take it up a notch: you make the machine roll the die a million times. The odds of a four should by now start converging toward 1/6. This is how a particle accelerator-collider aims to work, and succeeds.
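    A quick simulation of that die-rolling argument, using the same sample sizes:

    ```python
    import random

    def fraction_of_fours(n_rolls):
        """Roll an unbiased die n_rolls times and return the fraction of fours."""
        return sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 4) / n_rolls

    for n in (6, 60, 1000, 1_000_000):
        print(f"{n:>9} rolls -> fraction of fours = {fraction_of_fours(n):.4f}")
    # With 6 or 60 rolls the fraction jumps around (and can be zero);
    # with a million rolls it settles toward 1/6 ~ 0.1667.
    ```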

    And this is why the LHC producing as much data as it already has this year is exciting news. That much data means a lot more opportunities for ‘new physics’ – phenomena beyond what our theories can currently explain – to manifest itself. Analysing all this data completely will take many years (physicists continue to publish papers based on results gleaned from data generated in the first run), and all of it will be useful in some way even if very little of it ends up contributing to new ideas.

    The steady (logarithmic) rise in luminosity – the number of collision events detected – at the CMS detector on the LHC. Credit: CMS/CERN

    Occasionally, an oddball will show up – like a pentaquark, a state of five quarks bound together. As particles in their own right, they might not be as exciting as the Higgs boson, but in the larger scheme of things, they have a role to call their own. For example, the existence of a pentaquark teaches physicists about what sorts of configurations of the strong nuclear force, which holds the quarks together, are really possible, and what sorts are not. However, let’s say the LHC data throws up nothing. What then?

    Tumult is what. In the first run, the LHC used to smash two beams of billions of protons, each beam accelerated to 4 TeV and separated into 2,000+ bunches, head on at the rate of two opposing bunches every 50 nanoseconds. In the second run, after upgrades through early 2015, the LHC smashes bunches accelerated to 6.5 TeV once every 25 nanoseconds. In the process, the number of collisions per sq. cm per second increased tenfold, to 1 × 10³⁴. These heightened numbers mean new physics has fewer places to hide; we are on the verge of desperation to tease it out, to plumb the weakest coupling constants, because existing theories have not been able to answer all of our questions about fundamental physics (why things are the way they are, etc.). And even the barest hint of something new, something we haven’t seen before, will:

    • Tell us that we haven’t seen all that there is to see**, that there is yet more, and
    • Validate this or that speculative theory over a host of others, and point us down a new path to tread

    Axiomatically, these are the desiderata at stake should the LHC find nothing, even more so now that it has yielded a massive dataset. Of course, not all will be lost: larger, more powerful, more innovative colliders will be built – even as a disappointment will linger. Let’s imagine for a moment that all of them continue to find nothing, and that the long-prophesied day comes to be when the cosmos falls out of our reach, too. Wouldn’t that be maddening?

    **I’m not sure of what an expanding universe’s effects on gravitational waves will be, but I presume it will be the same as its effect on electromagnetic radiation. Both are energy transmissions travelling on the universe’s surface at the speed of light, right? Do correct me if I’m wrong.

  • Stenograph the science down

    A piece in Zee News, headlined ISRO to test next reusable launch vehicle after studying data of May 23 flight, begins thus:

    The Indian Space Research Organisation has successfully launched it’s first ever ‘Made-in-India’ space shuttle RLV-Technology Demonstrator on May 23, 2016. After the launch, the Indian space agency will now test the next reusable launch vehicle test after studying May 23 flight data. A senior official in the Indian space agency says that India will test the next set of space technologies relating to the reusable launch vehicle (RLV) after studying the data collected from the May 23 flight of RLV-Technology Demonstrator. “We will have to study the data generated from the May 23 flight. Then we have to decide on the next set of technologies to be tested on the next flight. We have not finalised the time frame for the next RLV flight,” K Sivan, director, Vikram Sarabhai Space Centre (VSSC) said on Wednesday.

    Apart from presenting very little new information with each passing sentence, the piece also buries an important quote, and what could well have been the piece’s real peg, more than half the way down:

    “As per data the RLV-TD landed softly in Bay of Bengal. As per our calculations it would have disintegrated at the speed at which it touched the sea,” Sivan said.

    It sounds like Sivan is admitting to a mistake in the calculations. There should have been a follow-up question at this point – asking him to elaborate on the mismatch – because this is valuable new information. Instead, the piece marches on as if Sivan had just commented on the weather. And in hindsight, the piece’s first few paragraphs present information that is blatantly obvious: of course results from the first test are going to inform the design of the second test. What new information are we to glean from such a statement?

    Or is it that we’re paying no attention to the science and instead reproducing Sivan’s words line by line because they’re made of gold?

    A tangential comment: The piece’s second, third and fourth sentences say the same thing. Sandwiching one meaty sentence between layers of faff is a symptom of writing for newspapers – where there is space to fill and attention to grab. At the same time, such writing is unthinkingly carried to the web because many publishers believe that staking a claim to ‘publishing on the web’ only means making podcasts and interactive graphics. What about concision?

  • No Space Age for us

    There’s a 500-word section on the Wikipedia page for the NASA Space Shuttle that describes the markings on the programme’s iconic orbiter vehicle (OV). Specifically, it talks about where the words ‘NASA’ and ‘USA’ appeared on the vehicle’s body, if there were any other markings, as well as some modifications to how the flag was positioned. Small-time trivia-hunters like myself love this sort of thing because, whether in my imagination or writing, being able to recall and describe these markings provides a strong sense of character to the OV, apart from making it more memorable to my readers as well as myself.

    These are the symbols in our memories, the emblem of choices that weren’t dictated by engineering requirements but by human wants, ambitions. And it’s important to remember that these signatures exist and even more so to remember them because of what they signify: ownership, belonging, identity.

    Then again, the markings on an OV are a part of its visual identity. A majority of humans have not seen the OV take off and land, and there are many of us who can’t remember what that looked like on TV either. For us, the visual identity and its attendant shapes and colours may not be very cathartic – but we are also among those who have consumed information of these fascinating, awe-inspiring vehicles through news articles, podcasts, archival footage, etc., on the internet. There are feelings attached to some vague recollections of a name; we recall feats as well as some kind of character, as if the name belonged to a human. We remember where we were, what we were doing when the first flights of iconic missions took off. We use the triggers of our nostalgia to personalise our histories. Using some symbol or other, we forge a connection and make it ours.

    This ourness is precisely what is lost, rather effectively diluted, through the use of bad metaphors, through ignorance and through silence. Great technology and great communication strive in opposite directions: the former is responsible, though in only an insentient and mechanistic way, for underscoring the distance – technological as much as physical – between starlight and the human eye that recognises it; the latter hopes to make us forget that distance. And in the absence of communication, our knowledge becomes clogged with noise and the facile beauty of our machines; without our symbols, we don’t see the imprints of humanity in the night sky but only our loneliness.

    Such considerations are far removed from our daily lives. We don’t stop (okay, maybe Dennis Overbye does) to think about what our journalism needs to demand from history-making institutions – such as the Indian Space Research Organisation (ISRO) – apart from the precise details of those important moments. We don’t question the foundations of their glories as much as enquire after the glories themselves. We don’t engender the creation of sanctions against long-term equitable and sustainable growth. We thump our chests when probes are navigated to Mars on a Hollywood budget but we’re not outraged when only one scientific result has come of it. We are gratuitous with our praise even when all we’re processing are second-hand tidbits. We are proud of ISRO’s being removed from bureaucratic interference and, somehow, we are okay with ISRO giving access only to those journalists who have endeared themselves by reproducing press releases for two decades.

    There’s no legislation that even says all knowledge generated by ISRO lies in the public domain. Irrespective of it being unlikely that ISRO will pursue legal action against me, I do deserve the right to use ISRO’s findings unto my private ends without anxiety. I’m reminded every once in a while that I, or one of my colleagues, could get into trouble for reusing images of the IRNSS launches from isro.gov.in in a didactic video we made at The Wire (or even the image at the top of this piece). At the same time, many of us are proponents of the open access, open science and open knowledge movements.

    We remember the multiwavelength astronomy satellite launched in September 2015 as “India’s Hubble” – which only serves to remind us how much smaller the ASTROSAT is than its American counterpart. How many of you know that one of the ASTROSAT instruments is one of the world’s best at studying gamma-ray bursts? We pounce, like hungry dogs, on ISRO’s first tests of a proto-RLV as “India’s space shuttle”; when, and if, we do have the RLV in 2030, wouldn’t we be thrilled to know that there is something wonderful about it not just of national provenance but of Indian provenance, too?

    Instead, what we are beginning to see is that India – with its strapped-on space programme – is emulating its predecessors, reliving jubilations from a previous age. We see that there is not so much an Indianness in them as an HDR recap of American and Soviet aspirations. Without communication, without the symbols of its progress being bandied about, without pride (and just a little bit of arrogance thrown in), it is becoming increasingly harder through the decades for us – as journalists or otherwise – to lay claim to something, a scrap of paper, a scrap of attitude, that will make a part of the Space Age feel like our own.

    At some point, I fear we will miss the starlight for the distance in between.

    Update: We are more concerned for our machines than for our dreams. Hardly anyone is helping put together the bigger picture; hardly anyone is taking control of what we will remember, leaving us to pick up on piecemeal details, to piece together a fragmented, disjointed memory of what ISRO used to be. There is no freedom in making up your version of a moment in history. There needs to be more information; there need to be souvenirs and memorabilia; and the onus of making them needs to be not on the consumers of this culture but the producers.

  • An identity for ISRO through a space agreement it may or may not sign

    Indians, regardless of politics or ideology, have a high opinion of the Indian Space Research Organisation (ISRO). Conversations centred on it usually retain a positive arc, sometimes even verging on the exaggerated in lay circles – in part because the organisation’s stunted PR policies haven’t given the people much to go by, in part because of pride. Then again, the numbers by themselves are impressive: Since 1993, there have been 32 successful PSLV launches with over 90 instruments sent into space; ISRO has sent probes to observe the Moon and Mars up close; launched a multi-wavelength space-probe; started work on a human spaceflight program; developed two active launch vehicles with two others still in the works; and it is continuing its work on cryogenic and scramjet engines.

    The case of the cryogenic engine is particularly interesting and, as it happens, relevant to a certain agreement that India and the US haven’t been able to sign for more than a decade now. These details and more were revealed when a clutch of diplomatic cables containing the transcript of conversations between officials from the Government of India, ISRO, the US Trade Representative (USTR) and other federal agencies surfaced on Wikileaks in the week of May 16. One of them delineates some concerns the Americans had about how the Indian public regarded US attempts to stall the transfer of cryogenic engines from the erstwhile USSR to India, and the complications that were born as a result.

    In 1986, ISRO initiated the development of a one-tonne cryogenic engine for use on its planned Geosynchronous Satellite Launch Vehicle (GSLV). Two years later, an American company offered to sell RL-10 cryogenic engines (used onboard the Atlas-Centaur Launch Vehicle) to ISRO but the offer was turned down because the cost was too high ($800 million) and an offer to give us the knowhow to make the engines was subject to approval by the US government, which wasn’t assured. Next, Arianespace, a French company, offered to sell two of its HM7 cryogenic engines along with the knowhow for $1,200 million. This offer was also rejected. Then, around 1989, a Soviet company named Glavkosmos offered to sell two cryogenic engines, transfer the knowhow as well as train some ISRO personnel – all for Rs.230 crore ($132 million at the time). This offer was taken up.

    However, 15 months later, the US government demanded that the deal be called off because it allegedly violated some terms of the Missile Technology Control Regime, a multilateral export control regime that Washington and Moscow are both part of. As U.R. Rao, former chairman of ISRO, writes in his book India’s Rise as a Space Power, “While the US did not object to the agreement with Glavkosmos at the time of signing, the rapid progress made by ISRO in launch vehicle technology was probably the primary cause which triggered [the delayed reaction 15 months later].” Officials on the Indian side were annoyed by the threat because solid- and liquid-fuel motors were preferred for use in rockets – not the hard-to-operate cryogenic engines – and because India had already indigenously developed such rockets (a concern that would be revived later). Nonetheless, after it became clear that the deal between Glavkosmos and ISRO wouldn’t be called off, the US imposed a two-year sanction from 1992 that voided all contracts between ISRO and the US and the transfer of any goods or services between them.

    Remembering the cryogenic engines affair

    This episode raised its ugly head once again in 2006, when India and the US – which had just issued a landmark statement on nuclear cooperation a year earlier – agreed on the final text of the Technical Safeguards Agreement (TSA) they would sign three years later. The TSA would “facilitate the launch of US satellite components on Indian space launch vehicles”. At this time, negotiations were also on for the Commercial Space Launch Agreement (CSLA), which would allow the launch of American commercial satellites onboard Indian launch vehicles. The terms of the CSLA were derived from the Next Steps in Strategic Partnership (NSSP), a bilateral dialogue that began during the Vajpayee government and defined a series of “quid-pro-quos” between the two countries that eventually led to the 2005 civilian nuclear deal. A new and niggling issue that crept in was that the US government was attempting to include satellite services in the CSLA – a move the Indian government was opposed to because it amounted to shifting the “carefully negotiated” NSSP goalposts.

    As negotiations proceeded, the cable, declassified by the then US ambassador David Mulford, reads:

    “Since the inception of the NSSP, reactionary holdouts within the Indian space bureaucracy and in the media and policy community have savaged the concept of greater ties with the US, pointing to the progress that India’s indigenous programs made without assistance from the West. The legacy of bitterness mingled with pride at US sanctions continues in the present debate, with commentators frequently referring to US actions to block the sale of Russian cryogenic engines in the 1990s as proof that American interest continues to focus on hobbling and/or displacing India’s indigenous launch and satellite capabilities.”

    The timing of the Glavkosmos offer, and the American intervention to block it, is important when determining how much the indigenous development of the cryogenic upper stage in the 2000s meant to India. After ISRO had turned down Arianespace’s HM7 engines offer, it had decided to develop a cryogenic engine from scratch by itself over eight years. As a result, the GSLV program would’ve been set back by at least that much. And it was this setback that Glavkosmos helped avoid (allowing the GSLV development programme to commence in 1990). Then again, with the more-US-friendly Boris Yeltsin having succeeded Mikhail Gorbachev in 1991, Glavkosmos was pressured by the new Russian government to renegotiate its ISRO deal. In December 1993, it was agreed that Glavkosmos would provide four operational cryogenic engines and two mockups at the same cost (Rs.230 crore), with three more for $9 million, but without any more technology transfer.

    The result was that ISRO had to fabricate its own cryogenic engines (with an initial investment of Rs.280 crore in 1993) with little knowledge of the challenges and solutions involved. The first successful test flight happened in January 2014 on board the GSLV-D5 mission.

    So a part of what we’re proud of about ISRO today, and repeatedly celebrate, is rooted in an act whose memories were potential retardants for a lucrative Indo-US space deal. Moreover, those memories would also entrench any concessions made on the Indian side in a language that was skeptical of the Americans by default. As the US cable notes:

    “While proponents point to ISRO’s pragmatism and scientific openness (a point we endorse), opponents of the [123] nuclear deal have accused ISRO of selling out India’s domestic prowess in space launch vehicles and satellite construction in order to serve the political goal of closer ties with the US. They compare ISRO’s “caving to political pressure” unfavorably with … Anil Kakodkar’s public statements drawing a red line on what India’s nuclear establishment would not accept under hypothetical civil-military nuclear separation plans.”

    How do we square this ‘problematic recall’ with, as the same cable also quotes, former ISRO chairman G. Madhavan Nair saying a deal with the US would be “central to India’s international outreach”? Evidently, agreements like the TSA and CSLA signal a reversal of priorities for the US government – away from the insecurities motivated by Cold-War circumstances and toward capitalising on India’s rising prominence in the Space Age. In the same vein, further considering what else could be holding back the CSLA throws more light on what another government sees as being problematic about ISRO.

    Seeing the need for the CSLA

    The drafting of the CSLA was motivated by an uptick in collaborations between Indian and American entities in areas of strategic interest. The scope of these collaborations was determined by the NSSP, which laid the groundwork for the civilian nuclear deal. While the TSA would allow for American officials to inspect the integration of noncommercial American payloads with ISRO rockets ahead of launch, to prevent their misuse or misappropriation, it wouldn’t contain the checks necessary to launch commercial American payloads with ISRO rockets. Enter CSLA – and by 2006, the Americans had started to bargain for the inclusion of satellite services in it. (Note: US communications satellites are excluded from the CSLA because their use requires separate clearances from the State Department.)

    However, the government of India wasn’t okay with the inclusion of satellite services in the CSLA because ISRO simply wasn’t ready for it and also because all other CSLAs that the US had signed didn’t include satellite services. The way S. Jaishankar – who was the MEA joint secretary dealing with North America at the time – put it: “As a market economy, India is entitled to an unencumbered CSLA with the US”. This, presumably, was also an allusion to the fact that Indian agencies were not being subsidised by their government in order to undercut international competitors.

    A cable tracking the negotiations in 2009 noted that:

    “ISRO was keen to be able to launch U.S. commercial satellites, but expected its nascent system to be afforded flexibility with respect to the market principles outlined in the CSLA. ISRO opposed language in the draft CSLA text on distorting competition, transparency, and improper business practices, but agreed to propose some alternate wording after Bliss made clear that the USG would not allow commercial satellites to be licensed in the same way as non-commercial satellites … indicating that commercial satellites licenses would either be allowed through the completion of a CSLA or after a substantial period of time has passed to allow the USG to evaluate ISRO’s pricing practices and determine that they do not create market distortions.”

    ISRO officials present at the discussion table on that day asked if the wording meant the US government was alleging that ISRO was unfairly undercutting prices (when it wasn’t), and if the CSLA was being drafted as a separate agreement from the TSA because it would allow the US government to include language that explicitly prevented the Indian government from subsidising PSLV launches. USTR officials countered that such language was used across all CSLAs and that it had nothing to do with how ISRO operated. (Interestingly, 2009 was also the year when SpaceX ditched its Falcon 1 rocket in favour of the bigger Falcon 9, opening up a gap in the market for a cheaper launcher – such as the PSLV.)

    Nonetheless, the underlying suspicion persists to this day. In September 2015, the PSLV C-30 mission launched ASTROSAT and six foreign satellites – including four cubesats belonging to an American company named Spire Global. In February 2016, US Ambassador Richard Verma recalled the feat in a speech he delivered at a conference in New Delhi; the next day, the Federal Aviation Administration reiterated its stance that commercial satellites shouldn’t be launched aboard ISRO rockets until India had signed the CSLA. In response to this bipolar behaviour, one US official told Space News, “On the one hand, you have the policy, which no agency wants to take responsibility for but which remains the policy. On the other, government agencies are practically falling over themselves to grant waivers.” Then, in April, private spaceflight companies in the US called for a ban on using the PSLV for launching commercial satellites because they suspected the Indian government was subsidising launches.

    A fork in the path

    India also did not understand the need for the CSLA in the first place because any security issues would be resolved according to the terms of the TSA (signed in 2009). It wanted to be treated the way Japan or the European Union were: by being allowed to launch American satellites without the need for an agreement to do so. In fact, at the time the US signed its agreement with Japan, Japan did not allow any private spaceflight entities to operate, and considered legislation to that end for the first time only in 2015. On both these counts, the USTR had argued that its agreement with India was much less proscriptive than the agreements it had struck with Russia and Ukraine, and that its need for an agreement at all was motivated by the need to specify ‘proper’ pricing practices given India’s space launches sector was ruled by a single parastatal organisation (ISRO), as well as to ensure that knowhow transferred to ISRO wouldn’t find its way to military use.

    The first news of any organisation other than ISRO being allowed to launch rockets to space from within India also only emerged earlier this year, with incumbent chairman A.S. Kiran Kumar saying he hoped PSLV operations could be privatised – through an industrial consortium in which its commercial arm, Antrix Corporation, would have a part – by 2020 so the rockets could be used on at least 18 missions every year. The move could ease the way to a CSLA. However, no word has emerged on whether the prices of launches will be set to market rates in the US or if ISRO is considering an absolute firewall between its civilian and military programmes. Recently, a group of universities developed the IRNSS (later NAVIC), India’s own satellite navigation system, alongside ISRO, ostensibly for reducing the Indian armed forces’ dependence on the American GPS system; before that was the GSAT-6 mission in August 2015.

    If it somehow becomes the case that ISRO doesn’t ever accede to the CSLA, then USTR doubts over its pricing practices will intensify and any commercial use of the Indian agency’s low-cost launchers by American firms could become stymied by the need for ever more clearances. At the same time, signing up to the CSLA will mean the imposition of some limits on what PSLV launches (with small, commercial American payloads) can be priced at. This may rob ISRO of its ability to use flexible pricing as a way of creating space for what is after all a “nascent” entity in global terms, besides becoming another instance of the US bullying a smaller player into working on its terms. However, either course means that ISRO will have to take a call about whether it still thinks of itself as vulnerable to getting “priced out” of the world market for commercial satellite launches or is now mature enough to play hardball with the US.

    Special thanks to Prateep Basu.

    The Wire
    May 23, 2016

  • So what’s ISRO testing on May 23?

    Apologies about the frequency of updates having fallen off. Work’s been hectic at The Wire – we’re expanding editorially, technologically and aesthetically – but more to the point, Delhi’s heat ensures my body has no surplus energy when I get back from work to blog (it’s a heartless 38 ºC at 10 pm). Even now, what follows is a Facebook Note I posted on The Wire‘s page yesterday (but which didn’t find much traction because of the buildup to today’s big news: the election results from five states).

    At about 9.30 am on Monday, May 23, a two-stage rocket will take off from the Sriharikota High Altitude Range and climb to an altitude of 48 km while reaching a speed of ~1,770 m/s. At that point, the first stage – a solid-fuel booster – will break off from the rocket and fall down into the Bay of Bengal. At the same time, the second stage will still be on the ascent, climbing to 70 km and attaining a speed of ~1,871.5 m/s. Once there, it will begin its plummet down and so kick off the real mission.

    Its designation is RLV-TD HEX1 – for Reusable Launch Vehicle Technology Demonstration, Hypersonic Experiment 1. The mission’s been in the works for about five years now, with an investment of Rs.95 crore, and is part of the Indian Space Research Organisation’s plans to develop a reusable launch vehicle in another 15 years. The HEX1 mission design suggests the vehicle won’t look anything like SpaceX’s reusable rockets (to be precise, reusable boosters). Instead, it will look more like NASA’s Space Shuttle (retired in 2011): with an airplane-like fuselage flanked by delta wings.

    Screenshot from a presentation made by M.V. Dhekane, deputy director of the Control Guidance & Simulation Entity, VSSC, in 2014.

    And the one that’ll be flying on Monday will be a version six times smaller in scale than what may ultimately be built (though still 6.5-m long and weighing 1.7 tonnes). This is because ISRO intends to test two components of the flight for which the RLV’s size can be smaller. The first (in no specific order) will be the ability of its body to withstand high temperatures while falling through Earth’s atmosphere. ISRO will be monitoring the behaviour of heat-resistant silica tiles affixed to the RLV’s underside and its nose cone, made of a special carbon composite, as they experience temperatures of more than 1,600º C.

    The second will be the RLV’s onboard computer’s ability to manoeuvre the vehicle to a designated spot in the Bay of Bengal before crashing into the water. That spot, in a future test designated LEX (for which a date hasn’t been announced), will hold a floating runway over 5 km long – on which the RLV will land like an airplane. A third test will check for the RLV’s ability to perform a ‘return flight experiment’ (REX) and the final one will check the scramjet propulsion system, currently under development.

    ISRO has said that the RLV, should it someday be deployed, will be able to bring down launch costs from $5,000 per kg to $2,000 per kg – the sort of cuts SpaceX CEO Elon Musk has repeatedly asserted are necessary to hasten the advent of interplanetary human spaceflight. However, the development of advanced technologies isn’t the only driver at the heart of this ambition. Private spaceflight companies in the US recently lobbied for a ban against the launch of American satellites onboard ISRO rockets “because it would be tough for them to compete against ISRO’s low-cost options, which they also alleged were subsidised by the Indian government”.

    Then again, an ISRO official has since clarified that the organisation isn’t competing against SpaceX either. Speaking to Sputnik News, K. Sivan, director of the Vikram Sarabhai Space Centre in Thiruvananthapuram, said on May 17, “We are not involved in any race with anybody. We have our own problems to tackle. ISRO has its own domestic requirements which we need to satisfy.”

    So, good luck for HEX1, ISRO!

    Featured image: The PSLV C33 mission takes off to launch the IRNSS 1G satellite. Credit: ISRO.

    Note: This post earlier stated that the HEX1 chassis would experience temperatures of 5,000º C during atmospheric reentry. It’s actually 1,600º C and the mistake has been corrected.

  • Has ‘false balance’ become self-evidently wrong?

    Featured image credit: mistermoss/Flickr, CC BY 2.0.

    Journalism’s engagement with a convergent body of knowledge is an interesting thing in two ways. From the PoV of the body, journalism is typically seen as an enabler, an instrument for furthering goals, adjacent at best until it begins to have an adverse effect on the dominant forces of convergence. From the PoV of journalism, the body of knowledge isn’t adjacent but more visceral – the flesh with which the narratives of journalistic expression manifest themselves. Both perspectives are borne out in the interaction between anthropogenic global warming (AGW) and its presence in the news. Especially from the PoV of journalism, covering AGW has been something of a slow burn because the assembly of its facts can’t be catalysed even as it maintains a high propensity to be derailed, requiring journalists to maintain a constant intensity over a longer span of time than would typically be accorded to other news items.

    When I call AGW a convergent body of knowledge, I mean that it is trying to achieve consensus on some hypotheses – and the moment that consensus is achieved will be the point of convergence. IIRC, the latest report from the Intergovernmental Panel on Climate Change says that the ongoing spate of global warming is 95% a result of human activities – a level of certainty that we’ll take to be just past the point of convergence. Now, the coverage of AGW until this point was straightforward: there were two sides, and both deserved to be represented equally. When the convergence eliminated one side, it was a technical elimination, a group of fact-seekers getting together and agreeing that what they had on their hands was indeed a fact even if they weren’t 100% certain.

    What this meant for journalism was that its traditional mode of creating balance was no longer valid. The principal narrative had shifted from being a conflict between AGW-adherents and AGW-deniers (“yes/no”) to becoming a conflict between some AGW-adherents and other AGW-adherents (“less/more”). And if we’re moving in the right direction, less/more is naturally the more important conflict to talk about. But post-convergence, any story that reverted to the yes/no conflict was accused of having succumbed to a sense of false balance, and calling out instances of false balance has since become a thing. Now, to the point of my piece: have we finally entered a period wherein calling out instances of false balance has become redundant, wherein awareness of the fallacies of AGW-denial has matured enough for false-balance to have become either deliberate or the result of mindlessness?

    Yes. I think so – that false-balance has finally become self-evidently wrong, and to not acknowledge this is to concede that AGW-denial might still retain some vestiges of potency.

    I was prompted to write this post after I received a pitch for an article to be published on The Wire, about using the conclusions of a recently published report to ascertain that AGW-denial was flawed. In other words: new data, old conclusions. And the pitch gave me the impression that the author may have been taking the threat of AGW-deniers too seriously. Had you been the editor reading this, would you have okayed the piece?

  • We’ve become more ambitious about reaching Alpha Centauri – what changed?

    Yuri Milner’s announcement last night that he’s investing $100 million into figuring out how thousands of chip-sized probes could be sent to the Alpha Centauri star system in 20 years must’ve felt like the future to many. The proposal, titled Starshot, imagines the probes to be fitted with small ‘sails’ a few hundred atoms thick that could be propelled to 60,000 km/s by a powerful array of lasers fired from Earth. And once they get to the Alpha Centauri stars A or B, they could take images with a 2 MP camera and transmit them to Earth through an optical communications channel. The radical R&D developed on the way to achieving these big goals could also be deployed to visit planets within the Solar System in a matter of hours to days, as well as to use the lasers and their optical systems to study asteroids and stars.

    But strip away the radicalness and Starshot begins to resemble pieces of the previous century as well, pieces that make for a tradition in which Milner is only the latest, albeit most prominent, participant, and which provide an expanded frame of reference to examine what had to change for astronomers to dream of literally reaching for the stars. The three most prominent pieces are Orion, Daedalus and Daedalus’s derivative, Longshot. All three were reliant on technology that didn’t exist but soon would. Daedalus and Longshot in particular wanted to send unmanned probes to nearby stars within 100 years. And even more specifically, Longshot is closer to Milner’s idea in its aspiring to:

    • Launch the probes from space, not from the ground
    • Envision and build a more efficient kind of propulsion, and
    • Send unmanned probes to Alpha Centauri

    Commissioned by the British Interplanetary Society and led by an engineer named Alan Bond, Daedalus involved designing an unmanned probe that could be sent to Barnard’s Star, 5.9 lightyears away, in 50 years using contemporary technological ideas. At the time, in 1973-1978, the most efficient such idea was nuclear fusion – it was only two decades before that when another nuclear-pulse-propelled rocket, called Project Orion, had been under consideration by the physicists Theodore B. Taylor and Freeman Dyson. Because the power and thrust produced by fusion were known to be very high, Bond’s Daedalus could be massive as well.

    The size of the (unbuilt) Daedalus starship compared to the Empire State Building, which is 443 metres tall. Credit: Adrian Mann/bisbos.com

    Conventional rockets that burn high-grade fuel in large quantities in a short span of time to get off the ground are limited by the pesky Tsiolkovsky rocket equation. At its simplest, the equation describes how a rocket that wants to carry more payload must also carry more fuel to lift that payload, in turn becoming even heavier, in turn having to carry even more fuel, and so forth. On the other hand, a small nuclear power plant carried onboard Daedalus’s rocket seemed to offer an escape from the Tsiolkovsky problem – compressing small amounts of helium-3 (mined from the Moon or from Jupiter’s atmosphere by hot-air balloons) with electron beams to yield 250 pellet-detonations per second, sustained for 3.8 years. The project’s study concluded that the Daedalus rocket could weigh some 54,000 tonnes (including fuel).
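
    To see the tyranny in numbers, here is a minimal sketch of the Tsiolkovsky equation, Δv = v_e ln(m0/m1), with illustrative exhaust velocities; the figures are assumptions of mine, not numbers from the Daedalus study.

    ```python
    import math

    def log10_mass_ratio(delta_v, v_exhaust):
        """Tsiolkovsky rocket equation, m0/m1 = exp(delta_v/v_e), returned as a
        base-10 exponent so absurdly large ratios don't overflow."""
        return (delta_v / v_exhaust) / math.log(10)

    # Illustrative assumptions (mine, not the Daedalus study's): a good chemical
    # engine exhausts at ~4.5 km/s; a fusion drive's effective exhaust velocity
    # is taken here to be ~10,000 km/s.
    delta_v = 13_500e3   # m/s, the interstellar-scale cruise speed Longshot cites
    for name, v_e in [("chemical", 4.5e3), ("fusion", 10_000e3)]:
        exp10 = log10_mass_ratio(delta_v, v_e)
        print(f"{name}: launch mass / final mass ≈ 10^{exp10:.2f}")

    # chemical: ~10^1303 -- no amount of fuel will do; fusion: ~10^0.59, i.e. ~3.9
    ```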

    In 1986, interest in this idea was revived by a report of the US National Commission on Space. It recommended focused research into human spaceflight, efficient and sustainable fuel options, astrometry and interstellar research, towards developing a “pioneering” American mission in the early 21st century. In particular, it suggested developing a “long-life, high-velocity spacecraft to be sent out of the Solar System on a trajectory to the nearest star”. These ideas were taken up by a team of engineering students and NASA scientists working with the US Naval Academy, which published its own 74-page report in 1987 titled ‘Project Longshot: An Unmanned Probe to Alpha Centauri’. Like Daedalus, Longshot aspired to use inertial confinement fusion – using energetic beams of particles to compress pellets of helium-3 and deuterium until they fuse – but with four key differences.

    First, instead of obtaining helium-3 from the Moon or the clouds of Jupiter, the team suggested using particle accelerators to produce it. Second, the nuclear reactions would be executed at small scales within a “pulsed fusion microexplosion drive”. A magnetic casing surrounding the drive chamber would then channel the stream of charged particles produced in the reaction outward to create thrust. Third, while Daedalus would fly by the Alpha Centauri system’s stars (and release smaller probes), Longshot was designed to get into orbit around the star Alpha Centauri B. Fourth, and most important, the fusion reactor was not to be used to launch Longshot from the ground or away from Earth, but only to propel it through space over 100 years – because of the risk of the reactor going out of control in or close to Earth’s atmosphere. The team recommended sending the spacecraft’s modules to a space station (the ISS would be built a decade later), then assembling and launching it from there.

    Illustrations showing how the inertial fusion reactions inside Longshot’s reactors would power the rocket. Credit: stanford.edu

    The report was published only three years after the sci-fi writer Robert L. Forward described his idea of the Starwisp, a satellite fitted with a sail that would be pushed along by beams of microwave radiation shot from Earth – much like Starshot. However, the Longshot report devotes three pages to discussing why a “laser-pumped light sail” might not be a good idea. The authors write: “The single impulse required to reach the designated system in 100 years was determined to be 13,500 km/sec. The size of a laser with continuous output, to accelerate the payload to 13,500 km/sec in a year, is 3.75 Terra Watts.” The payload mass was taken to be 30,000 kg. The specific impulse (Isp, evaluated in the table below) is measured in seconds – it is the impulse delivered per unit weight of propellant consumed, or equivalently the number of seconds over which a kilogram of propellant can produce a thrust equal to its own weight on Earth.
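
    A rough way to recover the report’s order of magnitude, assuming (my assumptions, not the report’s method) that the sail simply absorbs the beam’s photons, so thrust F = P/c, and that a roughly constant 30,000-kg mass is pushed to 13,500 km/s over one year:

    ```python
    # Back-of-the-envelope check on the Longshot report's ~3.75-TW figure.
    # Assumptions (mine, not the report's): photons are absorbed rather than
    # reflected (thrust F = P/c), the mass stays ~30,000 kg, and thrust is
    # constant over one year.
    C = 299_792_458.0           # speed of light, m/s
    YEAR = 365.25 * 24 * 3600   # seconds in a year

    mass = 30_000.0             # kg, the payload mass quoted in the report
    delta_v = 13_500e3          # m/s, the required single impulse

    thrust = mass * delta_v / YEAR   # N, average force needed
    power = thrust * C               # W, beam power if every photon is absorbed

    print(f"thrust ≈ {thrust:.0f} N, laser power ≈ {power / 1e12:.1f} TW")
    # ≈ 12,834 N and ≈ 3.8 TW -- the same ballpark as the report's 3.75 TW;
    # a perfectly reflective sail would halve the power requirement.
    ```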

    Table considering trade-offs between various propulsion options and their feasibilities. Credit: stanford.edu

    The Starshot team works around this problem by shrinking the spacecraft (StarChip) to a little bigger than a postage stamp – allowing a 100-gigawatt laser to propel it to very large velocities in a few dozen minutes. (A back-of-the-envelope calculation discounting the effects of the atmosphere shows a 50-GW laser working at 100% efficiency will suffice to push a 10-gram StarChip + sail to 60,000 km/s in 30 minutes.) Another boon of the small size is that the power required to operate the onboard instruments is very low, whereas Longshot needed a 300-kW fission reactor to power its payload. So the real innovation on this front is not in the propulsion or even the lasers but in the miniaturisation of electronics.
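
    Spelling out that parenthetical (the assumptions here are mine, chosen to match the text rather than Breakthrough Starshot’s published design): a perfectly reflective sail feels a thrust F = 2P/c, so pushing 10 grams to 60,000 km/s in 30 minutes needs on the order of tens of gigawatts.

    ```python
    # Rough numbers behind the parenthetical above. Assumptions (mine): a 10-g
    # StarChip-plus-sail, a perfectly reflective sail (thrust F = 2P/c), all of
    # the beam hitting the sail, and a constant 30-minute acceleration.
    C = 299_792_458.0   # speed of light, m/s

    mass = 0.010        # kg
    delta_v = 60_000e3  # m/s, the 60,000 km/s quoted for the StarChips
    burn = 30 * 60      # s

    thrust = mass * delta_v / burn   # N, average force needed
    power = thrust * C / 2           # W, beam power for a perfectly reflective sail

    print(f"thrust ≈ {thrust:.0f} N, beam power ≈ {power / 1e9:.0f} GW")
    # ≈ 333 N and ≈ 50 GW -- comfortably under the 100-GW beam the collaboration
    # talks about, before any real-world losses are accounted for.
    ```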

    One of the scientists on the StarChip team is Zachary R. Manchester, who launched a project called KickSat in 2011; it was selected for the NASA CubeSat Launch Initiative in 2015. A press statement accompanying the selection reads: “the Sprite [a single “ChipSat”] is a tiny spacecraft that includes power, sensor and communication systems on a printed circuit board measuring 3.5 by 3.5 centimetres with a thickness of a few millimetres and a mass of a few grams”. Each StarChip is likely to turn out to be something similar.

    A batch of KickSat Sprites lying on a table. Credit: @zacinaction on Twitter
    A preview of KickSat-2 showing its component circuits. Credit: @spacecraftlab on Twitter

    However, the miniaturisation of electronics doesn’t solve the other problems the Longshot team anticipated – problems Milner’s team has chosen not to solve so much as compensate for. The biggest of them is deceleration. Even Longshot’s 100-year transit from a space station in Earth orbit to a star 4.37 lightyears away consists of accelerating for about 71 years followed by 29 years of deceleration. In contrast, the StarChip fleet won’t decelerate at all but will zip past Alpha Centauri, making rapid, well-timed measurements on the way.

    Another problem is whether the StarChip sails will be able to withstand being hit by a 100-GW laser. Recent sail-based experiments like IKAROS and LightSail-1 have demonstrated that light sails are feasible – at least when propelled by photons streaming out from the Sun – while also exposing their engineering limitations. Drawing on lessons from these missions, the Starshot collaboration has proposed that a suitable metamaterial (a composite of various materials) be built to be extremely reflective, absorbing as few of the laser’s photons as possible. According to a calculation on the project’s website, absorbing even one ten-thousandth of the laser’s energy would quickly overheat the sail; the absorption has to be brought down to about a billionth. In fact, as is often overlooked, having endless possibilities also means having endless engineering challenges – and there are enough of them for Breakthrough Starshot to warrant the $100 million from Milner.
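
    To get a feel for why the absorption budget is so brutal, here’s a toy radiative-equilibrium estimate; the beam power, sail area and emissivity are all assumptions of mine, not Starshot’s specifications.

    ```python
    # Toy estimate of how hot the sail gets for a given absorbed fraction,
    # assuming it can only cool by radiating from both faces. Beam power, sail
    # area and emissivity are illustrative assumptions, not Starshot's specs.
    SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4

    beam_power = 100e9   # W, a 100-GW-class beam
    sail_area = 16.0     # m^2, assuming a ~4 m x 4 m sail
    emissivity = 0.5     # assumed

    for fraction in (1e-4, 1e-9):
        absorbed = fraction * beam_power
        # radiative equilibrium over both faces: P = 2 * A * eps * sigma * T^4
        temp = (absorbed / (2 * sail_area * emissivity * SIGMA)) ** 0.25
        print(f"absorb {fraction:.0e} of the beam -> sail at ≈ {temp:.0f} K")

    # One part in 10,000 puts the sail near ~1,800 K, beyond what most thin films
    # survive; one part in a billion keeps it around 100 K.
    ```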

    What makes the project truly exciting is its implicit synecdoche – that none of its challenges is a real deal-breaker even as surmounting all of them together would give birth to a wild new idea in interstellar research. Unlike Orion, Daedalus or Longshot, Starshot steers clear of the controversies and technical limitations attendant to nuclear power, and is largely divorced from political considerations apropos of research funding. Most importantly, in hindsight, Starshot isn’t proposing a bigger-therefore-better idea; rather, it breaks from the past to better leverage our advanced ability to manipulate materials, and shows a way out of Tsiolkovsky’s tyranny (even with a nuclear engine, Daedalus’s conceivers suggested the rocket carry 50,000 tonnes of fuel – and Daedalus represented a more serious design effort than what went into Longshot). As with all human enterprises, Starshot is worth celebrating if only for the drastic leap in efficiency it represents.

    The Wire
    April 13, 2016

  • The INO story

    A longer story about the India-based Neutrino Observatory that I’d been wanting to do since 2012 was finally published today (to be clear, I hit the ‘Publish’ button today) on The Wire. Apart from myself, four people worked on it: two amazing reporters, one crazy copy-editor and one illustrator. I don’t mean to diminish the role of the illustrator, who set the piece’s mood quite well – I only mean that the reporters and the copy-editor did a stupendous job of getting the story from 0 to 1. After all, all I’d had was an idea.

    The INO’s is a great story but stands, unfortunately, to become a depressing parable at the moment – the biggest bug yet in a spider’s web spun of bureaucracy and misinformation. As told on The Wire, the INO is India’s most badass science experiment yet, but its inherent sophistication has become both its strength and its weakness: a strength because it can yield cutting-edge science, a weakness because it makes an ideal target for stubborn activism, unreason and, consequently and understandably, fatigue on the part of the physicists.

    From here on out, it doesn’t look like the INO will get built by 2020, and it doesn’t look like it will be the same thing it started out as when it does get built. Am I disappointed by that? Of course – but that’s a bad question. Am I rooting for the experiment? I’m not sure – and that’s a much better question. In the last few years, as the project’s plans gained momentum, some unreasonable activists were able to cash in on the Department of Atomic Energy’s generally cold-blooded way of dealing with disagreement (the DAE is funding the INO). At the same time, the INO collaboration wasn’t as diligent as it ought to have been with the environmental impact assessment report (getting it compiled by a non-accredited agency). Finally, the DAE itself just stood back and watched as the scientists and activists battled it out.

    Who lost? Take a guess. I hope the next Big Science experiment fares better (I’m probably not referring to LIGO because it has a far stronger global/American impetus while the INO is completely indigenously motivated).