Uncategorized

  • Physicists could have to wait 66,000 yottayears to see an electron decay

    The longest coherently described span of time I’ve encountered is from Hindu cosmology. It concerns the age of Brahma, one of Hinduism’s principal deities, who is described as being 51 years old (with 49 more to go). But these are no simple years. Each day in Brahma’s life lasts for a period called the kalpa: 4.32 billion Earth-years. In his 51 years, he will actually have lived for almost 80 trillion Earth-years. In 100 years, he will have lived 157 trillion Earth-years.

    157,000,000,000,000. That’s stupidly huge. Forget astronomy – I doubt even economic crises have use for such numbers.

    On December 3, scientists announced that we now know of something that will live for even longer: the electron.

    Yup, the same tiny lepton that zips around inside atoms with gay abandon, that’s swimming through the power lines in your home, has been found to be stable for at least 66,000 yottayears – yotta- being the largest available prefix in the decimal system.

    In stupidly huge terms, that’s 66,000,000,000,000,000,000,000,000,000 (66,000 trillion trillion) years. Brahma just slipped to second place among the mortals.
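
    If you want to check that arithmetic, here’s a minimal Python sketch using only the figures quoted above – 66,000 yottayears for the electron’s lower limit and 157 trillion Earth-years for Brahma’s full 100-year lifespan:

    ```python
    # Back-of-the-envelope conversion using only the figures quoted above.

    YOTTA = 1e24                               # the SI prefix yotta-
    electron_lifetime_years = 66_000 * YOTTA   # lower limit on the electron's lifetime

    brahma_lifespan_years = 157e12             # 100 Brahma-years ~ 157 trillion Earth-years

    ratio = electron_lifetime_years / brahma_lifespan_years
    print(f"Electron lifetime: {electron_lifetime_years:.1e} Earth-years")
    print(f"That is roughly {ratio:.1e} of Brahma's full lifespans")   # ~4.2e14, i.e. ~420 trillion
    ```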

    But why were scientists making this measurement in the first place?

    Because they’re desperately trying to disprove a prevailing theory in physics. Called the Standard Model, it describes how fundamental particles interact with each other. Though it was meticulously studied and built over a period of more than 30 years to explain a variety of phenomena, the Standard Model hasn’t been able to answer a few of the more important questions. For example, why is gravity so much weaker than the other three fundamental forces? Or why is there more matter than antimatter in the universe? Or why does the Higgs boson not weigh more than it does? Or what is dark matter?

    Silence.

    The electron belongs to a class of particles called leptons, which in turn is well described by the Standard Model. So if physicists are able to find that the electron is less stable than the model predicts, it’d be a breakthrough. But despite multiple attempts to find such a freak event, physicists haven’t succeeded – not even with the LHC (though hopeful rumours are doing the rounds that that could change soon).

    The measurement of 66,000 yottayears was published in the journal Physical Review Letters on December 3 (a preprint copy is available on the arXiv server, dated November 11). It was made at the Borexino neutrino experiment buried under the Gran Sasso mountain in Italy. The value itself hinges on a simple idea: the conservation of charge.

    If an electron becomes unstable and has to break down, it’ll break down into a photon and a neutrino. There are almost no other options because the electron is the lightest charged particle and whatever it breaks down into has to be even lighter. However, neither the photon nor the neutrino has an electric charge so the breaking-down would violate a fundamental law of nature – and definitely overturn the Standard Model.

    The Borexino experiment is actually a solar neutrino detector, using 300 tonnes of a petroleum-based liquid to detect and study neutrinos streaming in from the Sun. When a neutrino strikes the liquid, it knocks out an electron in a tiny flash of energy. Some 2,210 photomultiplier tubes surrounding the tank amplify this flash for examination. The energy released is about 256 keV (by the mass-energy equivalence, corresponding to about a 4,000th the mass of a proton).
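
    As a quick check of that parenthetical – assuming the standard value of roughly 938,272 keV for the proton’s rest-mass energy, which isn’t a figure from the paper itself:

    ```python
    # The flash energy quoted above, as a fraction of the proton's rest-mass energy.
    flash_energy_keV = 256
    proton_rest_energy_keV = 938_272    # standard value, ~0.94 GeV

    fraction = proton_rest_energy_keV / flash_energy_keV
    print(f"256 keV is about 1/{fraction:,.0f} of the proton's mass-energy")   # ~1/3,700
    ```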

    However, the innards of the mountain where the detector is located also produce photons thanks to the radioactive decay of bismuth and polonium in it. So the team making the measurement used a simulator to calculate how often photons of 256 keV are logged by the detector against the ‘background’ of all the photons striking the detector. Kinda like a filter. They used data logged over 408 days (January 2012 to May 2013).

    The answer: once every 66,000 yotta-years (that’s about 420 trillion of Brahma’s entire lifetimes).

    Physics World reports that if photons from the ‘background’ radiation could be eliminated further, the lower limit on the electron’s lifetime could probably be pushed up a thousand times. But there’s historical precedent that to some extent encourages stronger probes of the humble electron’s properties.

    In 2006, another experiment situated under the Gran Sasso mountain tried to measure the rate at which electrons violated a defining rule in particle physics called Pauli’s exclusion principle. All electrons can be described by four distinct attributes called their quantum numbers, and the principle holds that no two electrons can have the same four numbers at any given time.

    The experiment was called DEAR (DAΦNE Exotic Atom Research). It energised electrons and then measured how much energy was released when the particles returned to a lower-energy state. After three years of data-taking, its team announced in 2009 that the principle was being violated once every 570 trillion trillion measurements (another stupidly large number).

    That’s a violation 0.0000000000000000000000001% of the time – but it’s still something. And it could amount to more when compared to the Borexino measurement of an electron’s stability. In March 2013, the team that worked on DEAR submitted a proposal for building an instrument that would improve the measurement 100-fold, and in May 2015, reported that such an instrument was under construction.

    Here’s hoping they don’t find what they were looking for?

  • New LHC data has more of the same but could something be in the offing?

    Dijet mass (TeV) v. no. of events. Source: ATLAS/CERN

    Looks intimidating, doesn’t it? It’s also very interesting because it contains an important result acquired at the Large Hadron Collider (LHC) this year, a result that could disappoint many physicists.

    The LHC reopened earlier this year after receiving multiple performance-boosting upgrades over the 18 months before. In its new avatar, the particle-smasher explores nature’s fundamental constituents at the highest energies yet, almost twice as high as they were in its first run. By Albert Einstein’s mass-energy equivalence (E = mc²), the proton’s mass corresponds to an energy of almost 1 GeV (giga-electron-volt). The LHC’s beam energy, to compare, was 3,500 GeV in the first run and is now 6,500 GeV.
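
    For a rough sense of those numbers, here’s a small Python sketch that recovers the proton’s rest energy from E = mc² and compares it with the two beam energies (the constants are standard values, not figures from this article):

    ```python
    # The proton's rest energy via E = mc^2, and how many times that energy each LHC beam carries.

    m_proton = 1.672_621_9e-27      # proton mass, kg (standard value)
    c = 299_792_458.0               # speed of light, m/s
    eV = 1.602_176_6e-19            # joules per electron-volt

    rest_energy_GeV = m_proton * c**2 / eV / 1e9
    print(f"Proton rest energy ~ {rest_energy_GeV:.3f} GeV")    # ~0.938 GeV, 'almost 1 GeV'

    for beam_GeV in (3_500, 6_500):  # LHC beam energy: first run vs now
        print(f"{beam_GeV} GeV beam = {beam_GeV / rest_energy_GeV:,.0f} x the proton's rest energy")
    ```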

    At the start of December, it concluded data-taking for 2015. That data is being steadily processed, interpreted and published by the multiple topical collaborations working on the LHC. Two collaborations in particular, ATLAS and CMS, were responsible for plots like the one shown above.

    This is CMS’s plot showing the same result:

    Source: CMS/CERN

    When protons are smashed together at the LHC, a host of particles erupt and fly off in different directions, showing up as streaks in the detectors. These streaks are called jets. The plots above look particularly at pairs of jets produced by particles called quarks, antiquarks and gluons in the proton-proton collisions (they’re in fact the smaller particles that make up protons).

    The sequence of black dots in the ATLAS plot shows the number of dijet events (i.e. pairs of jets) observed at different energies. The red line shows the predicted number of events. They both match, which is good… to some extent.

    One of the biggest, and certainly among the most annoying, problems in particle physics right now is that the prevailing theory that explains it all is unsatisfactory – mostly because it has some really clunky explanations for some things. The theory is called the Standard Model and physicists would like to see it disproved, broken in some way.

    In fact, those physicists will have gone to work today to be proved wrong – and be sad at the end of the day if they weren’t.

    Maintenance work underway at the CMS detector, the largest of the five that straddle the LHC. Credit: CERN

    The annoying problem at its heart

    The LHC chips in by providing two kinds of opportunities: extremely sensitive particle-detectors that can provide precise measurements of fleeting readings, and extremely high collision energies so physicists can explore how some particles behave in thousands of scenarios in search of a surprising result.

    So, the plots above show three things. First, the predicted event-count and the observed event-count are a match, which is disappointing. Second, the biggest deviation from the predicted count is highlighted in the ATLAS plot (look at the red columns at the bottom between the two blue lines). It’s small, corresponding to two standard deviations (symbol: σ) from the normal. Physicists need at least three standard deviations (3σ) from the normal for license to be excited.

    But this is the most important result (an extension to the first): The predicted event-count and the observed event-count are a match all the way up to 6,000 GeV. In other words: physicists are seeing no cause for joy, and all cause for revalidating a section of the Standard Model, across a wide swath of scenarios.

    The section in particular is called quantum chromodynamics (QCD), which deals with how quarks, antiquarks and gluons interact with each other. As theoretical physicist Matt Strassler explains on his blog,

    … from the point of view of the highest energies available [at the LHC], all particles in the Standard Model have almost negligible rest masses. QCD itself is associated with the rest mass scale of the proton, with mass-energy of about 1 GeV, again essentially zero from the TeV point of view. And the structure of the proton is simple and smooth. So QCD’s prediction is this: the physics we are currently probing is essentially scale-invariant.

    Scale-invariance is the idea that two particles will interact the same way no matter how energetic they are. To be sure, the ATLAS/CMS results suggest QCD is scale-invariant in the 0-6,000 GeV range. There’s a long way to go – in terms of energy levels and future opportunities.

    Something in the valley

    The folks analysing the data are helped along by previous results at the LHC as well. For example, with the collision energy having been ramped up, one would expect to see particles of higher energies manifesting in the data. However, the heavier the particle, the wider the bump in the plot and the more focusing that’ll be necessary to really tease out the peak. This is one of the plots that led to the discovery of the Higgs boson:


    Source: ATLAS/CERN

    That bump between 125 and 130 GeV is what was found to be the Higgs, and you can see it’s more of a smear than a spike. For heavier particles, that smear’s going to be wider with longer tails on the sides. So any particle that weighs a lot – a few thousand GeV – and is expected to be found at the LHC would have a tail showing in the lower-energy LHC data. But no such tails have been found, ruling out heavier stuff.

    And because many replacement theories for the Standard Model involve the discovery of new particles, analysts will tend to focus on particles that could weigh less than about 2,000 GeV.

    In fact that’s what’s riveted the particle physics community at the moment: rumours of a possible new particle in the range 1,900-2,000 GeV. A paper uploaded to the arXiv preprint server on December 10 shows a combination of ATLAS and CMS data logged in 2012, and highlights a deviation from the normal that physicists haven’t been able to explain using information they already have. This is the relevant plot:

    Source: arXiv:1512.03371v1


    The ones in the middle and on the right are particularly relevant. They each show the probability of the occurrence of an event (observed as a bump in the data, not shown here) in which something of some heavier mass decays into two different final states: of W and Z bosons (WZ), and of two Z bosons (ZZ). Bosons are a type of fundamental particle and carry forces.

    The middle chart implies that the mysterious event is at least 1,000 times less likely to occur than normal and the one on the right implies the event is at least 10,000 times less likely to occur than normal. And both readings are at more than 3σ significance, so people are excited.

    The authors of the paper write: “Out of all benchmark models considered, the combination favours the hypothesis of a [particle or its excitations] with mass 1.9-2.0 [thousands of GeV] … as long as the resonance does not decay exclusively to WW final states.”

    But as physicist Tommaso Dorigo points out, these blips could also be a fluctuation in the data, which does happen.

    Although the fact that the two experiments see the same effect … is suggestive, that’s no cigar yet. For CMS and ATLAS have studied dozens of different mass distributions, and a bump could have appeared in a thousand places. I believe the bump is just a fluctuation – the best fluctuation we have in CERN data so far, but still a fluke.

    There’s a seminar due to happen today at the LHC Physics Centre at CERN, where data from the upgraded run will be presented. If something really did happen in those ‘valleys’, which were filtered out of a collision energy of 8,000 GeV (basically twice the beam energy, where each beam is a train of protons), then those events would’ve happened in larger quantities during the upgraded run and so been more visible. The results will be presented at 1930 IST. Watch this space.

    Featured image: Inside one of the control centres of the collaborations working on the LHC at CERN. Each collaboration handles an experiment, or detector, stationed around the LHC tunnel. Credit: CERN.

  • Calling 2015

    It might still be too soon to call it but 2015 was a great year, far better than the fiasco 2014 was. Ups and downs and all that, but what ups they have been. I thought I’d list them out just to be able to put a finger on all that I’ve dealt with and been dealt.

    Ups

    1. Launched The Wire (only Siddharth and Vignesh know my struggle at 5 am on May 11 to get the domain mapped properly)
    2. Wrote a lot of articles, and probably the most in a year about the kind of stuff that really interests me (history of science, cosmology, cybersec)
    3. Got my reading habit back (somewhat)
    4. Found two awesome counselors and a psychologist, absolutely wonderful people
    5. … who helped me get a great handle on my depression and almost completely get rid of it
    6. Managed to hold on to a job for more than four months for the first time since early 2014 (one of the two companies that hired me in between is now shut, so not my fault?)
    7. Didn’t lose any of my friends – in fact, made six really good new ones!
    8. Didn’t have to put up with a fourth The Hobbit movie (I’m sure Tauriel’s lines would’ve had Tolkien doing spinarooneys in his grave)

    and others.

    Downs

    1. Acquired an addiction
    2. Didn’t have a Tolkien story releasing on the big screen 10 days before my birthday
    3. Grandpa passed away (though I don’t wish he’d stayed on for longer either – he was in a lot of pain before he died) as did an uncle
    4. Chennai floods totalled my Macbook Pro (and partially damaged my passport)
    5. Stopped sending out the Curious Bends newsletter
    6. My vomit-free streak ended after eight years
    7. Still feel an impostor
    8. Didn’t discover any major fantasy series to read (which sucks because Steven Erikson publishes one book only every two years)

    and others.

    Lots to look forward to in 2016; five things come immediately to mind:

    • Move to Delhi
    • Continue contributing to The Wire
    • Visit a world-renowned particle accelerator lab
    • Await, purchase and devour Erikson’s new book (Fall of Light, book #2 of the Kharkhanas Trilogy)
    • Await new Planck and LHC data (kind of a big deal when you’re able to move away from notions of nerdiness or academic specialisation and toward the idea that the data will provide you – a human – a better idea of the cosmos that surrounds you, that is you)
  • Tracing the origins of Pu-244

    Excerpt:

    The heaviest naturally occurring elements are thought to form not when a star is alive but when it begins to die. Specifically, in the explosion that results when a star weighing 8x to 20x our Sun dies, in a core-collapse supernova (cc-SNe). In this process, the star first implodes to some extent before rebounding outward in a violent throwing-off of its outer layers. The atoms of lighter elements in these layers could capture free neutrons and transmute into atoms of a heavier element, in a sequence called the r-process.

    The rebound occurs because if the star’s core weighs less than about 5x our Sun (our entire Sun!), it doesn’t collapse into a blackhole but an intermediary state called a neutron star – a small and extremely dense ball composed almost entirely of neutrons.

    Anyway, the expelled elements are dispersed through the interstellar medium, the region of space between stars. Therefrom, for example, they could become part of the ingredients of a new nebula or star, get picked up by passing comets or meteors, or eventually settle down on the surface of a distant planet. For example, the isotope of one such element – plutonium (Pu) – is found scattered among the sediments on the floor of Earth’s deepest seas: plutonium-244.

    Based on multiple measurements of the amount of Pu-244 on the seafloor and in the interstellar medium, scientists know how the two amounts correlate over time. And based on astronomical observations, they also know how much Pu-244 each cc-SNe may have produced. But what has caught scientists off guard recently is that the amount of Pu-244 on Earth over time doesn’t match up with the rate at which cc-SNe occur in the Milky Way galaxy. That is, the amount of Pu-244 on Earth is 100 times lower than it would’ve been if all of it had come from cc-SNe.

    So where is the remaining Pu-244?

    Or, a team of astrophysicists from the Hebrew University, Jerusalem, realised, was so much Pu-244 not being produced in the first place?

    Read the full piece here.

  • The downward/laterward style in science writing

    One of the first lessons in journalism 101 is the inverted pyramid, a style of writing where the journalist presents the more important information higher up the piece. This way, the copy sort of tapers down in importance the longer it runs. The idea was that such writing served two purposes:

    1. Allowing editors looking to shorten the copy to fit it in print to make cuts easily – they’d just have to snip whatever they wanted off the bottom, knowing that the meat was at the top.
    2. Readers would get the most important information without having to read too much through the copy – allowing them to decide earlier if they want to read the whole thing or move on to something else.

    As a science writer, I don’t like the inverted pyramid. Agreed, it makes for pithy writing and imposes the kind of restriction on the writer that does a good job of forcing her to preclude her indulgence from the writing process. But if the writer was intent on indulging herself, I think she’d do it inverted pyramid or not. My point is that the threat of self-indulgence shouldn’t disallow other, possibly more engaging, forms of writing.

    To wit: my favourite style is the pyramid. It starts with a slowly building trickle of information at the top with the best stuff coming at the bottom. I like this style because it closely mimics the process of discovery, of the brain receiving new information and then accommodating it within an existing paradigm. To me, it also allows for a more logical, linear construction of the narrative. In fact, I prefer the descriptor ‘downward/laterward’ because, relative to the conventional inverted pyramid style, the pyramid postpones the punchline.

    However, two caveats.

    1. The downward/laterward doesn’t make anything easier for the editors, but that again – like self-indulgence – is to me a separate issue. In the pursuit of constructing wholesome pieces, it’d be an insult to me if I had an editor who wasn’t interested in reading my whole piece and then deciding how to edit it. Similarly, in return for the stylistic choices it affords, the downward/laterward compels the writer to write even better to keep the reader from losing interest.
    2. I usually write explainers (rather, end up having tried to write one). Explainers in the context of my interests typically focus on the science behind an object or an event, and they’re usually about high-energy astronomy/physics. Scientific advancements in these subjects usually require a lot of background, pre-existing information. So the pyramid style affords me the convenience of presenting such information as a build toward the conclusion – which is likely the advancement in question.
      However, I’m sure I’m in the minority. Most writers whose articles I enjoy are also writers gunning to describe the human emotions at play behind significant scientific findings. And their articles are typically about drama. So it might be that the drama builds downward/laterward while the science itself is presented in the inverted-pyramid way (and I just end up noticing the science).

    Looking back, I think most of my recent pieces (2011-onward) have been written in the downward/laterward style. And the only reason I decided to reflect on the process now is because of this fantastic piece in The Atlantic that talks about how astronomers hunt for the oldest stars in the universe. Great stuff.

  • #ChennaiRains – let’s not forget

    Chennai. Poda vennai. Credit: Wikimedia Commons

    It was a friend’s remark in 2012 that alerted me to something off about the way I’ve looked at natural disasters in India’s urban centres – especially Chennai. At that time – as it is today – long strips of land in many parts of the city were occupied by trucks and machinery involved in building the Metro. At the same time, arbitrary overcharging by auto-rickshaws was rampant and almost all buses were overcrowded during peak hours. Visiting the city for a few days, she tweeted: “Get your act together, Chennai.”

    Like all great cities, Chennai has always sported two identities conflated as one: its public infrastructure and its people. There has been as much to experience about Chennai’s physical framework as its anthropological counterpart. For every dabara of filter coffee you had, visit to the Marina beach you paid on a cloudy evening, stroll around Kapaleeshwarar Temple you took during a festival, you could take a sweaty bus-ride at 12 pm, bargain with an auto-rickshaw driver, and get lost on South Usman road. This conflation has invoked the image of a place retaining its small-townish charm while evolving a big-town bustle. And this impression wouldn’t be far off the mark if it weren’t for one problem.

    In the shadow of its wonderful people, Chennai’s public infrastructure has been fraying at the seams.

    The ongoing spell of rains in the city has really brought some of these tears to the fore. Large swaths are flooded with up to two feet of water while Saidapet, Kotturpuram, Eekkattuthangal, Tiruvanmiyur and Tambaram areas have been wrecked. A crowdsourced effort has registered over 2,000 roads as being water-logged. Hundreds of volunteers still ply the city providing what help they can – while a similar number of others have opened up their homes – as thousands desperately await it. The airport has been shut for a week, all trains cancelled and major arterial roads blocked off. The Army, Navy and the NDRF have been deployed for rescue efforts but they’re overstretched. Already, the northern, poorer suburbs are witnessing flash protests amidst a building exodus for want of supplies.

    Nobody saw these rains coming. For over three decades, the annual northeast monsoons have been just about consistently short of expectations. But this year, the weather has seemed intent on correcting that hefty deficit in the span of a few weeks. For example, December 1-2 alone witnessed over 300 mm of rainfall as opposed to a full month’s historic average of 191 mm.

    But as it happens, there’s no credible drainage system. The consequential damage is already an estimated Rs.15,000 crore – which is really just fine because I believe that that number’s smaller than all the bribes that were given and taken by the city’s municipal administrators to let builders build where and how they wished: within once-swamps, in the middle of dried lakebeds, using impervious materials for watertight designs, with little care for surface runoffs and solid waste management, the entire facade constructed to be car- and motorbike-friendly.

    What I think must change now is that we don’t forget – that we don’t let the government surmount the disaster this time with compensation packages, reconciliatory sops and good ol’ flattery, the last by saying the people of Chennai have stood tall, have coped well, and then moving on, just like that. But what about what made the crisis that demanded this fortitude in the first place – a fortitude no greater than what we already display to get on with our lives? It was only drawn out by what has always been a planned but ignored crisis. Even if that resilience is the sole silver lining, focusing on it also distracts us from understanding the real damage we’ve taken.

    An opinion piece that appeared in The Hindu on December 3 provides a convenient springboard to further explain my views. An excerpt:

    Many outsiders who come to the city say it’s hard to make friends here. The people are insular, they say. It’s true, we Chennaites stick to ourselves. There is none of the brash socialising of the Delhiite, the familiar chattiness of the Kolkatan, or the earthy amiability of the Mumbaikar. Your breezy hello will likely get a grunt in return and chirpy conversational overtures will meet austere monosyllables. That’s because we don’t much care for small talk. We can spend entire evenings making few friends and influencing nobody, but give us a crisis and you’ll find that few cities stand up tall the way Chennai does. It is unglamorously practical, calmly efficient, and absolutely rock-solid in its support systems.

    Apropos these words: It’s very important to glorify the people who’ve stood up to adversity but when the adversity was brought on by the government (pointing at AIADMK for its construction-heavy reigns and at the DMK for having no sense of urban planning – exemplified by that fucking flyover on South Usman Road), it’s equally important to call it out as well. Sadly, the author of the piece blames the rain god for it! It’s like I push you in front of a speeding truck, you somehow survive a fatal scenario, then I applaud you and you thank me for the applause. I think that when you’re able to celebrate a life-goes-on narrative without talking about what broke, you’re essentially rooting for the status quo.

    Moreover, thousands of cities have stood tall the way Chennai has. Kalyan Raman had penned a justifiably provocative essay in 2005 where he argued that India’s biggest metros have largely been made (as opposed to being unmade) by daunting crises. I think it’s important in this context to cheer on rescue efforts but not the physical infrastructure itself (whose establishment, to be sure, has a cultural component), which is neither “calmly efficient” nor has any rock-solid quality to it. The infrastructure stinks (a 10-year timeline for building the Metro is another example) and must now earn its own narrative in stories of Chennai instead of piggybacking on the city’s other well-deserved qualities.

    In the same vein, I don’t think different cities’ different struggles are even comparable, so it’s offensive to suggest few cities can stand up tall the way Chennai has. Let’s cheer for having survived, not thump our chests. We made the floods happen, and unless we demand better from our government, we won’t get better governance (for starters, in the form of civic infrastructure reform).

    https://www.facebook.com/gananthakrishnan/posts/10207304813712086

  • Relativity’s kin, the Bose-Einstein condensate, is 90 now

    Excerpt:

    Over November 2015, physicists and commentators alike the world over marked 100 years since the conception of the theory of relativity, which gave us everything from GPS to blackholes, and described the machinations of the universe at the largest scales. Despite many struggles by the greatest scientists of our times, the theory of relativity remains incompatible with quantum mechanics, the rules that describe the universe at its smallest, to this day. Yet it persists as our best description of the grand opera of the cosmos.

    Incidentally, Einstein wasn’t a fan of quantum mechanics because of its occasional tendencies to violate the principles of locality and causality. Such violations resulted in what he called “spooky action at a distance”, where particles behaved as if they could communicate with each other faster than the speed of light would have it. It was weirdness the likes of which his conception of gravitation and space-time didn’t have room for.

    As it happens, 2015 also marks another milestone, also involving Einstein’s work – as well as the work of an Indian scientist: Satyendra Nath Bose. It’s been 20 years since physicists realised the first Bose-Einstein condensate, which has proved to be an exceptional as well as quirky testbed for scientists probing the strange implications of a quantum mechanical reality.

    Its significance today can be understood in terms of three ‘periods’ of research that contributed to it: 1925 onward, 1975 onward, and 1995 onward.

    Read the full piece here.


  • Why this ASTROSAT instrument could be a game-changer for high-energy astrophysics

    On November 17, NASA announced its Swift satellite had recorded its thousandth gamma-ray burst (GRB), an important milestone that indicates how many of these high-energy explosions, sometimes followed by the creation of blackholes, happen in the observable universe and in what ways.

    Some five weeks before the announcement, Swift had observed a less symbolically significant GRB called 151006A. Its physical characteristics as logged and analysed by the satellite were quickly available, too, on a University of Leicester webpage.

    On the same day as this observation, on October 6, the 50-kg CZTI instrument onboard India’s ASTROSAT space telescope had come online. Like Swift, CZTI is tuned to observe and study high-energy phenomena like GRBs. And as with every instrument that has just opened its eyes to the cosmos, ISRO’s scientists were eager to do something with it to check if it worked according to expectations. The Swift-spotted GRB 151006A provided just the opportunity.

    CZTI stands for Cadmium-Zinc-Telluride Imager – a compound of these three elements (the third being tellurium) is a known industrial radiation detector. And nothing releases radiation as explosively as GRBs, which have been known to outshine the light of whole galaxies in the few seconds that they last. The ISRO scientists pointed the CZTI at 151006A and recorded observations that they’d later compare against Swift records to see if they matched up. A good match would be validation and a definite sign that the CZTI was working normally.

    It was working normally, and how.

    NASA has two satellites adept at measuring high-energy radiation coming from different sources in the observable universe – Swift and the Fermi Gamma-ray Space Telescope (FGST). Swift is good at detecting incoming photons that have an energy of up to 150 keV, but not so good at determining the peak energy of hard-spectrum emissions. In astrophysics, spectral hardness is defined as the position of the peak – in power emitted per decade in energy – in the emission spectrum of the GRB. This spectrum is essentially a histogram of the number of photons striking the detector at different energies, so a hard-spectrum emission has a well-defined peak in that histogram. An example:

    The plot of argon dense plasma emission is a type of histogram – where the intensity of photons is binned according to the energies at which they were observed. Credit: Wikimedia Commons
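
    To make the histogram idea concrete, here’s a toy sketch – invented numbers, not any real instrument’s pipeline – that bins a set of photon energies and locates the peak of the resulting spectrum:

    ```python
    import numpy as np

    # Toy illustration of 'spectral hardness': bin detected photon energies into a
    # histogram and locate the peak. The burst below is made up.

    rng = np.random.default_rng(0)
    photon_energies_keV = rng.normal(loc=180, scale=30, size=5_000)   # a fake, 'hard' burst

    counts, edges = np.histogram(photon_energies_keV, bins=50)
    peak_bin = counts.argmax()
    print(f"Peak of the spectrum: ~{(edges[peak_bin] + edges[peak_bin + 1]) / 2:.0f} keV")
    # A 'hard' spectrum shows a well-defined peak like this; a soft one is much flatter.
    ```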

    FGST, on the other hand, is better equipped to detect emissions higher than 150 keV but not so much at quickly figuring out where in the sky the emissions are coming from. The quickness is important because GRBs typically last for a few seconds, while a subcategory of them lasts for a few thousandths of a second, and then fade into a much duller afterglow of X-rays and other lower-energy emissions. So it’s important to find where in the sky GRBs could be when the brighter flash occurs so that other telescopes around the world can better home in on the afterglow.

    This blindspot between Swift and FGST is easily bridged by CZTI, according to ISRO. In fact, per a deceptively innocuous calibration notice put out by the organisation on October 17, CZTI boasts the “best spectral [capabilities] ever” for GRB studies in the 80-250 keV range. This means it can provide better spectral studies of long GRBs (which are usually soft) and better localisation for short, harder GRBs. And together, they make up a strong suite of simultaneous spectral and timing observations of high-energy phenomena for the ASTROSAT.

    There’s more.

    Enter Compton scattering

    The X-rays and gamma rays emanating from a GRB are simply photons that have a very short wavelength (or, equivalently, a very high frequency). Apart from these characteristics, they also have a property called polarisation, which describes the plane along which the electromagnetic waves of the radiation are vibrating. Polarisation is very important when studying directions along long distances in the universe and how the alignment of intervening matter affects the path of the radiation.

    All these properties can be visualised according to the wave nature of radiation.

    But in 1922, the American physicist Arthur Compton found that when high-frequency X-rays collided with free electrons, their frequency dropped by a bit (because some energy was transferred to the electrons). This discovery – celebrated for proving that electromagnetic radiation could behave like particles – also yielded an equation that let physicists calculate the angle at which the radiation was scattered, based on the change in its frequency. As a result, instruments sensitive to Compton scattering are also able to measure polarisation.
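
    In energy terms, Compton’s relation reads 1/E' - 1/E = (1 - cos θ)/(m_e c²), with m_e c² = 511 keV being the electron’s rest-mass energy. Here’s a small Python function – a generic illustration with made-up numbers, not CZTI’s actual analysis code – that recovers the scattering angle from a photon’s energy before and after the collision:

    ```python
    import math

    # Compton's relation: the photon's energy drop fixes the angle at which it scattered
    # off the electron. Energies in keV; 511 keV is the electron's rest-mass energy.

    ELECTRON_REST_keV = 511.0

    def scattering_angle_deg(e_in_keV, e_out_keV):
        """Angle (in degrees) through which a photon scattered, from its energies before and after."""
        cos_theta = 1.0 - ELECTRON_REST_keV * (1.0 / e_out_keV - 1.0 / e_in_keV)
        return math.degrees(math.acos(cos_theta))

    # Hypothetical example: a 200 keV photon leaves the collision with 150 keV.
    print(f"{scattering_angle_deg(200, 150):.1f} degrees")   # ~81.5 degrees
    ```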

    Observed count profile of Compton events during GRB 151006A. Source: IUCAA

    This plot shows the number of Compton scattering events logged by CZTI based on observing GRB 151006A; zero-time is the time at which the GRB triggered the attention of Swift. That CZTI was able to generate this plot was evidence that it could make simultaneous observations of timing, spectra and polarisation of high-energy events (especially in X-rays, up to 250 keV), lessening the burden on ISRO to depend on multiple satellites for different observations at different energies.

    The ISRO note did clarify that no polarisation measurement was made in this case because about 500 Compton events were logged against the 2,000 needed for the calculation.

    But that a GRB had been observed and studied by CZTI was broadcast on the Gamma-ray Coordinates Network:

    V. Bhalerao (IUCAA), D. Bhattacharya (IUCAA), A.R. Rao (TIFR), S. Vadawale (PRL) report on behalf of the AstroSat CZTI collaboration:

    Analysis of AstroSat commissioning data showed the presence of GRB 151006A (Kocevski et al. 2015, GCN 18398) in the Cadmium Zinc Telluride Imager. The source was located 60.7 degrees away from the pointing direction and was detected at energies above 60 keV. Modelling the profile as a fast rise and exponential decay, we measure T90 of 65s, 775s and 50s in 60-80 keV, 80-100 keV and 100-250 keV bands respectively.

    In addition, the GRB is clearly detected in a light curve created from double events satisfying Compton scattering criteria (Vadawale et al, 2015, A&A, 578, 73). This demonstrates the feasibility of measuring polarisation for brighter GRBs with CZTI.

    That CZTI is a top-notch instrument doesn’t come as a big surprise: most of ASTROSAT’s instruments boast unique capabilities and in some contexts are the best of their kind in space. For example, the LAXPC (Large Area X-ray Proportional Counter) instrument as well as NASA’s uniquely designed NuSTAR space telescope both log radiation in the 6-79 keV range coming from around blackholes. While NuSTAR’s spectral abilities are superior, LAXPC’s radiation-collecting area is 10x as much.

    On October 7-8, ISRO also used CZTI to observe the famous Cygnus X-1 X-ray source (believed to be a blackhole) in the constellation Cygnus. The observation was made coincident with NuSTAR’s study of the same object in the same period, allowing ISRO to calibrate CZTI’s functioning in the 0-80 (approx.) keV range and signalling the readiness of four of the six instruments onboard ASTROSAT.

    The two remaining instruments: the Ultraviolet Imaging Telescope will switch on on December 10 and the Soft X-ray Telescope, on December 13. And from late December to September 2016, ISRO will use the satellite to make a series of observations before it becomes available to third-parties, and finally to foreign teams in 2018.

    The Wire
    November 21, 2015

  • A new dawn for particle accelerators in the wake

    During a lecture in 2012, G. Rajasekaran, professor emeritus at the Institute for Mathematical Sciences, Chennai, said that the future of high-energy physics lay with engineers being able to design smaller particle accelerators. The theories of particle physics have for long been exploring energy levels that we might never be able to reach with accelerators built on Earth. At the same time, it will still be on physicists to reach the energies that we can reach but in ways that are cheaper, more efficient, and smaller – because reach them we will have to if our theories must be tested. According to Rajasekaran, the answer is, or will soon be, the tabletop particle accelerator.

    In the last decade, tabletop accelerators have inched closer to commercial viability because of a method called plasma wakefield acceleration. Recently, a peer-reviewed experiment detailing the effects of this method was performed at the University of Maryland (UMD) and the results published in the journal Physical Review Letters. A team-member said in a statement: “We have accelerated high-charge electron beams to more than 10 million electron volts using only millijoules of laser pulse energy. This is the energy consumed by a typical household lightbulb in one-thousandth of a second.” Ten MeV pales in comparison to what the world’s most powerful particle accelerator, the Large Hadron Collider (LHC), achieves – a dozen million MeV – but what the UMD researchers have built doesn’t intend to compete against the LHC but against the room-sized accelerators typically used for medical imaging.

    In a particle accelerator like the LHC or the Stanford linac, a string of radiofrequency (RF) cavities is used to accelerate charged particles. Energy is delivered to the particles using powerful electromagnetic fields via the cavities, which switch polarity at 400 MHz – that’s 400 million times a second. The particles’ arrival at the cavities is timed accordingly. Over the course of 15 minutes, the particle bunches are accelerated from 450 GeV to 4 TeV (the beam energy before the LHC was upgraded over 2014), with the bunches going around the ring 11,000 times per second. As the RF cavities switch faster and are ramped up in energy, the particles swing faster and faster around – until computers bring two such beams into each other’s paths at a designated point inside the ring and BANG.
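
    Putting those figures together gives a feel for how gently the energy is actually delivered per lap – a crude average in Python, ignoring how the ramp really varies over time:

    ```python
    # Average energy gained per trip around the ring, using the figures quoted above.

    e_start_GeV, e_end_GeV = 450, 4_000     # beam energy before and after the 15-minute ramp
    ramp_seconds = 15 * 60
    turns_per_second = 11_000

    turns = ramp_seconds * turns_per_second
    gain_per_turn_keV = (e_end_GeV - e_start_GeV) * 1e6 / turns
    print(f"{turns:,} turns, ~{gain_per_turn_keV:.0f} keV gained per turn on average")
    # -> roughly 360 keV per turn, delivered by the RF cavities
    ```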

    A wakefield accelerator also has an electromagnetic field that delivers the energy, but instead of ramping and switching over time, it delivers the energy in one big tug.

    First, scientists create a plasma, a fluidic state of matter consisting of free-floating ions (positively charged) and electrons (negatively charged). Then, the scientists shoot two bunches of electrons separated by 15-20 micrometers (millionths of a metre). As the leading bunch moves into the plasma, it pushes away the plasma’s electrons and so creates a distinct electric field around itself called the wakefield. The wakefield envelopes the trailing bunch of electrons as well, and exerts two forces on them: one along the direction of the leading bunch, which accelerates the trailing bunch, and one in the transverse direction, which either makes the bunch more or less focused. And as the two bunches shoot through the plasma, the leading bunch transfers its energy to the trailing bunch via the linear component of the wakefield, and the trailing bunch accelerates.
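
    How strong can such a wakefield get? A standard textbook estimate – not something from the UMD paper itself – ties the maximum field a plasma can support to its electron density, via the plasma frequency. The sketch below works it out for a few densities:

    ```python
    import math

    # Textbook 'wave-breaking' estimate of the maximum wakefield a plasma can support:
    # E_max ~ m_e * c * omega_p / e, where omega_p is the plasma frequency set by the
    # electron density. Denser plasma -> stronger possible wakefield.

    e = 1.602_176_6e-19        # electron charge, C
    m_e = 9.109_383_7e-31      # electron mass, kg
    eps0 = 8.854_187_8e-12     # vacuum permittivity, F/m
    c = 299_792_458.0          # speed of light, m/s

    def wave_breaking_field_GV_per_m(n_e_per_cm3):
        n_e = n_e_per_cm3 * 1e6                          # convert cm^-3 to m^-3
        omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma frequency, rad/s
        return m_e * c * omega_p / e / 1e9               # field in GV/m

    for n in (1e17, 1e18, 1e19):
        print(f"n_e = {n:.0e} /cm3  ->  ~{wave_breaking_field_GV_per_m(n):.0f} GV/m")
    # At ~1e18 electrons/cm3 this lands near 100 GV/m, the ballpark gradient quoted below.
    ```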

    A plasma wakefield accelerator scores over a bigger machine in two key ways:

    • The wakefield is a very efficient medium of energy transfer – in effect, a transformer (though not as efficient as some natural media). Experiments at the Stanford Linear Accelerator Centre (SLAC) have recorded 30% efficiency, which is considered high.
    • Wakefield accelerators have been able to push the energy gained per unit distance travelled by the particle to 100 GV/m (an accelerating gradient of 1 GV/m corresponds to an energy gain of 1 GeV for an electron over 1 metre). Assuming a realistic peak accelerating gradient of 100 MV/m, a similar gain (of 100 GeV) at the SLAC would have taken over a kilometre. A quick comparison follows this list.
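
    The comparison promised above, in a few lines of Python – dividing a target energy gain by the accelerating gradient to get the length of machine needed:

    ```python
    # How far an electron must travel to gain 100 GeV at different accelerating gradients.

    target_GeV = 100
    gradients_MV_per_m = {
        "RF cavities (~100 MV/m)": 100,
        "plasma wakefield (~10 GV/m)": 10_000,
        "plasma wakefield (~100 GV/m)": 100_000,
    }

    for label, gradient in gradients_MV_per_m.items():
        length_m = target_GeV * 1e3 / gradient     # GeV -> MeV, then divide by MV/m
        print(f"{label}: {length_m:,.0f} m")
    # -> 1,000 m for RF cavities versus 10 m or 1 m for wakefields
    ```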

    There are many ways to push these limits – but it is historically almost imperative that we do. Could the leap in accelerating gradient by a factor of 100 to 1,000 break the slope of the Livingston plot?

    Could the leap in accelerating gradient from RF cavities to plasma wakefield accelerators break the Livingston plot? Source: AIP

    In the UMD experiment, scientists shot a laser pulse into a hydrogen plasma. The photons in the laser then induced the wakefield that trailing electrons surfed and were accelerated through. To reduce the amount of energy transferred by the laser to generate the same wakefield, they made the plasma denser instead to capitalise on an effect called self-focusing.

    A laser’s electromagnetic field, as it travels through the plasma, makes electrons near it wiggle back and forth as the field’s waves pass through. The more intense waves near the pulse’s centre make the electrons around it wiggle harder. Since Einstein’s theory of relativity requires objects moving faster to weigh more, the harder-wiggling electrons become heavier, slow down and then settle down, creating a focused beam of electrons along the laser pulse. The denser the plasma, the stronger the self-focusing – a principle that can compensate for weaker laser pulses to sustain a wakefield of the same strength if the pulses were stronger but the plasma less dense.

    The UMD team increased the density of the hydrogen gas from which the plasma is made by some 20x and found that electrons could be accelerated by 2-12 MeV using 10-50 millijoule laser pulses. Additionally, the scientists also found that at high densities, the amplitude of the plasma wave propagated by the laser pulse increases to the point where it traps some electrons from the plasma and continuously accelerates them to relativistic energies. This obviates the need for trailing electrons to be injected separately and increases the efficiency of acceleration.

    But as with all accelerators, there are limitations. Two specific to the UMD experiment are:

    • If the plasma density goes beyond a critical threshold (1.19 × 10²⁰ electrons/cm³) and if the laser pulse is too powerful (>50 mJ), the electrons are accelerated more by the direct shot than by the plasma wakefield. These numbers define an upper limit to the advantage of relativistic self-focusing.
    • The accelerated electrons slowly drift apart (in the UMD case, to at most 250 milliradians) and so require separate structures to keep their beam focused – especially if they will be used for biomedical purposes. (In 2014, physicists from the Lawrence Berkeley National Lab resolved this problem by using a 9-cm long capillary waveguide through which the plasma was channelled.)

    There is another way lasers can be used to build an accelerator. In 2013, physicists from Stanford University devised a small glass channel 0.075-0.1 micrometers wide, etched with nanoscale ridges on the floor. When they shined infrared light with a wavelength twice the channel’s height across it, the EM field of the light wiggled the electrons back and forth – but the ridges on the floor were cut such that electrons passing over the crests would accelerate more than they would decelerate when passing over the troughs. Like this, they achieved an energy-gain gradient of 300 MeV/m. This way, the accelerator is only a few millimetres long and devoid of any plasma, which is difficult to handle.

    At the same time, this method shares a shortcoming with the (non-laser driven) plasma wakefield accelerator: both require the electrons to be pre-accelerated before injection, which means room-sized pre-accelerators are still in the picture.

    Physical size is an important aspect of particle accelerators because, the way we’re building them, the higher-energy ones are massive. The LHC currently collides particles at 13 TeV (1 TeV = 1 million MeV) in a 27-km long underground tunnel running beneath the shared borders of France and Switzerland. The planned Circular Electron-Positron Collider in China envisages a 100-TeV accelerator around a 54.7-km long ring (both the LHC and the CEPC involve pre-accelerators that are quite big – but not as much as the final-stage ring). The International Linear Collider will comprise a straight tube, instead of a ring, over 30 km long to achieve collision energies of 500 GeV to 1 TeV. In contrast, Georg Korn suggested in APS Physics in December 2014 that a hundred 10-GeV electron acceleration modules could be lined up facing against a hundred 10-GeV positron acceleration modules to have a collider that can compete with the ILC but from atop a table.

    In all these cases, the net energy gain per distance travelled (by the accelerated particle) was low compared to the gain in wakefield accelerators: 250 MV/m versus 10-100 GV/m. This is the physical difference that translates to a great reduction in cost (from billions of dollars to thousands), which in turn stands to make particle accelerators accessible to a wider range of people. As of 2014, there were at least 30,000 particle accelerators around the world – up from 26,000 in 2010 according to a Physics Today census. More importantly, the latter estimated that almost half the accelerators were being used for medical imaging and research, such as in radiotherapy, while the really high-energy devices (>1 GeV) used for physics research numbered a little over 100.

    These are encouraging numbers for India, which imports 75% of its medical imaging equipment for more than Rs.30,000 crore a year (2015). These are also encouraging numbers for developing nations in general that want to get in on experimental high-energy physics, innovations in which power a variety of applications, ranging from cleaning coal to detecting WMDs, not to mention expand their medical imaging capabilities as well.

    Featured image credit: digital cat/Flickr, CC BY 2.0.

  • Is the universe as we know it stable?

    The anthropic principle has been a cornerstone of fundamental physics, being used by some physicists to console themselves about why the universe is the way it is: tightly sandwiched between two dangerous states. If the laws and equations that define it had slipped during its formation just one way or the other in their properties, humans wouldn’t have existed to be able to observe the universe, and conceive the anthropic principle. At least, this is the weak anthropic principle – that we’re talking about the anthropic principle because the universe allowed humans to exist, or we wouldn’t be here. The strong anthropic principle thinks the universe is duty-bound to conceive life, and if another universe was created along the same lines that ours was, it would conceive intelligent life, too, give or take a few billion years.

    The principle has been repeatedly resorted to because physicists are at that juncture in history where they’re not able to tell why some things are the way they are and – worse – why some things aren’t the way they should be. The latest significant addition to this list, and an illustrative example, is the Higgs boson, whose discovery was announced on July 4, 2012, at the CERN supercollider LHC. The Higgs boson’s existence was predicted by three independently working groups of physicists in 1964. In the intervening decades, from hypothesis to discovery, physicists spent a long time trying to find its mass. The now-shut American particle accelerator Tevatron helped speed up this process, using repeated measurements to steadily narrow down the range of masses in which the boson could lie. It was eventually found at the LHC at 125.6 GeV (a proton weighs about 0.94 GeV).

    It was a great moment, the discovery of a particle that completed the Standard Model group of theories and equations that governs the behaviour of fundamental particles. It was also a problematic moment for some, who had expected the Higgs boson to weigh much, much more. The mass of the Higgs boson is connected to the energy of the universe (because the Higgs field that generates the boson pervades throughout the universe), so by some calculations 125.6 GeV implied that the universe should be the size of a football. Clearly, it isn’t, so physicists got the sense something was missing from the Standard Model that would’ve been able to explain the discrepancy. (In another example, physicists have used the discovery of the Higgs boson to explain why there is more matter than antimatter in the universe though both were created in equal amounts.)

    The energy of the Higgs field also contributes to the scalar potential of the universe. A good analogy lies with the electrons in an atom. Sometimes, an energised electron sees fit to lose some extra energy it has in the form of a photon and jump to a lower-energy state. At others, a lower-energy electron can gain some energy to jump to a higher state, a phenomenon commonly observed in metals (where the higher-energy electrons contribute to conducting electricity). Like the electrons can have different energies, the scalar potential defines a sort of energy that the universe can have. It’s calculated based on the properties of all the fundamental forces of nature: strong nuclear, weak nuclear, electromagnetic, gravitational and Higgs.

    For the last 13.8 billion years, the universe has existed in a particular way that’s been unchanged, so we know that it is at a scalar-potential minimum. The apt image is of a mountain-range of peaks and valleys.


    The point is to figure out if the universe is lying at the deepest point of the potential – the global minimum – or at a point that’s the deepest in a given range but not the deepest overall – the local minimum. This is important for two reasons. First: the universe will always, always try to get to the lowest energy state. Second: quantum mechanics. With the principles of classical mechanics, if the universe were to get to the global minimum from the local minimum, its energy will first have to be increased so it can surmount the intervening peaks. But with the principles of quantum mechanics, the universe can tunnel through the intervening peaks to sink into the global minimum. And such tunnelling could occur if the universe is currently in a local minimum only.
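
    A toy example may help make the local-versus-global picture concrete. The potential below is an arbitrary one-dimensional cartoon – nothing to do with the real scalar potential – but it has exactly the structure described above: two valleys of different depths separated by a peak.

    ```python
    import numpy as np

    # A toy 1-D 'scalar potential' with two valleys of different depths.
    def potential(x):
        return 0.05 * x**4 - 0.5 * x**2 + 0.15 * x

    x = np.linspace(-4, 4, 100_001)
    v = potential(x)

    # Crude search: a grid point is a minimum if it lies lower than both of its neighbours.
    is_min = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
    for xm, vm in sorted(zip(x[1:-1][is_min], v[1:-1][is_min]), key=lambda p: p[1]):
        print(f"valley at x = {xm:+.2f}, potential = {vm:.3f}")

    # The shallower valley is the 'local minimum', the deeper one the 'global minimum'.
    # Classically the universe would stay stuck in the shallower valley; quantum
    # mechanically it could tunnel through the peak into the deeper one.
    ```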

    To find out, physicists try and calculate the shape of the scalar potential in its entirety. This is an intensely complicated mathematical process and takes lots of computing power to tackle, but that’s beside the point. The biggest problem is that we don’t know enough about the fundamental forces, and we don’t know anything about what else could be out there at higher energies. For example, it took an accelerator capable of boosting particles to 3,500 GeV and then smashing them head-on to discover a particle weighing 125 GeV. Discovering anything heavier – i.e. more energetic – would take ever more powerful colliders costing many billions of dollars to build.

    Almost sadistically, theoretical physicists have predicted that there exists an energy level at which the gravitational force unifies with the strong/weak nuclear and electromagnetic forces to become one indistinct force: the Planck scale, 12,200,000,000,000,000,000 GeV. We don’t know the mechanism of this unification, and its rules are among the most sought-after in high-energy physics. Last week, Chinese physicists announced that they were planning to build a supercollider bigger than the LHC, called the Circular Electron-Positron Collider (CEPC), starting 2020. The CEPC is slated to collide particles at 100,000 GeV, more than 7x the energy at which the LHC collides particles now, in a ring 54.7 km long. Given the way we’re building our most powerful particle accelerators, one able to smash particles together at the Planck scale would have to be as large as the Milky Way.

    (Note: 12,200,000,000,000,000,000 GeV is the energy produced when 57.2 litres of gasoline are burnt, which is not a lot of energy at all. The trick is to contain so much energy in a particle as big as the proton, whose diameter is 0.000000000000001 m. That is, the energy density is 10⁶⁴ GeV/m³.)
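
    For the curious, the note’s arithmetic checks out – here’s a quick sketch, assuming a standard figure of about 34 MJ per litre for gasoline (that value isn’t from the article):

    ```python
    import math

    # The Planck energy in everyday units, and its density if packed into a proton-sized volume.

    planck_GeV = 1.22e19
    joules_per_GeV = 1.602_176_6e-10
    gasoline_J_per_litre = 34.2e6            # assumed energy density of gasoline

    planck_J = planck_GeV * joules_per_GeV
    print(f"{planck_J:.2e} J, or about {planck_J / gasoline_J_per_litre:.0f} litres of gasoline")

    proton_diameter_m = 1e-15
    proton_volume_m3 = (4 / 3) * math.pi * (proton_diameter_m / 2) ** 3
    print(f"Energy density: ~{planck_GeV / proton_volume_m3:.0e} GeV/m^3")   # of order 10^64
    ```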

    We also don’t know how the Standard Model scales from the energy levels it currently inhabits unto the Planck scale. If it changes significantly as it scales up, then the forces’ contributions to the scalar potential will change also. Physicists think that if any new bosons, essentially new forces, appear along the way, then the equations defining the scalar potential – our picture of the peaks and valleys – will have to be changed themselves. This is why physicists want to arrive at more precise values of, say, the mass of the Higgs boson.

    Or the mass of the top quark. While force-carrying particles are called bosons, matter-forming particles are called fermions. Quarks are a type of fermion; together with force-carriers called gluons, they make up protons and neutrons. There are six kinds, or flavours, of quarks, and the heaviest is called the top quark. In fact, the top quark is the heaviest known fundamental particle. The top quark’s mass is particularly important. All fundamental particles get their mass from interacting with the Higgs field – the more the level of interaction, the higher the mass generated. So a precise measurement of the top quark’s mass indicates the Higgs field’s strongest level of interaction, or “loudest conversation”, with a fundamental particle, which in turn contributes to the scalar potential.

    On November 9, a group of physicists from Russia published the results of an advanced scalar-potential calculation to find where the universe really lay: in a local minimum or in a stable global minimum. They found that the universe was in a local minimum. The calculations were “advanced” because they used the best estimates available for the properties of the various fundamental forces, as well as of the Higgs boson and the top quark, to arrive at their results, but they’re still not final because the estimates could still vary. Hearteningly enough, the physicists also found that if the real values in the universe shifted by just 1.3 standard deviations from our best estimates of them, our universe would enter the global minimum and become truly stable. In other words, the universe is situated in a shallow valley on one side of a peak of the scalar potential, and right on the other side lies the deepest valley of all that it could sit in for ever.

    If the Russian group’s calculations are right (though there’s no quick way for us to know if they aren’t), then there could be a distant future – in human terms – where the universe tunnels through from the local to the global minimum and enters a new state. If we’ve assumed that the laws and forces of nature haven’t changed in the last 13.8 billion years, then we can also assume that in the fully stable state, these laws and forces could change in ways we can’t predict now. The changes would sweep over from one part of the universe into others at the speed of light, like a shockwave, redefining all the laws that let us exist. One moment we’d be around and gone the next. For all we know, that breadth of 1.3 standard deviations between our measurements of particles’ and forces’ properties and their true values could be the breath of our lives.

    The Wire
    November 11, 2015