Uncategorized

  • Better to have been a mammal in India

    The Copernican
    April 10, 2014

    Studies of fossils and soil samples collected at a site in South India have revealed unique attributes of the ecosystem not found in many parts of the world. In particular, an international team of scientists found that most mammalian species in the region seem to have survived at least 100,000 years in conditions that could have pushed them to extinction in Europe or the Americas.

    Their research, for the first time, reports on dated and stratified deposits of mammalian fauna in the Indian subcontinent over the last 200,000 years. In addition, “one of the most significant findings is that a variety of mammals survived through major fluctuations of climate in the past,” said Michael Petraglia, one of the authors of the study published in Proceedings of the National Academy of Sciences in the week of April 7.

    Apart from climatic changes, the Indian subcontinent was also subjected to the devastating Toba volcanic super-eruption 75,000 years ago as well as increasing numbers of humans in the last 10,000 years. Mammals in the subcontinent, however, survived almost unchanged because they had access to a unique range of ecosystems to inhabit, even as those in many other parts of the world were going extinct in large numbers. The difference lay in the nature of their habitats.

    A mosaic

    The configuration of geographic landmarks and monsoon patterns “has given rise to a network of ecosystems like the coastal mangrove, evergreen upland western and eastern ghats, deciduous forest, semi-arid inland regions, the Thar desert and plains of the Indus and Ganga,” explained Dr. Ravi Korisettar. “Between the Vindhyas in the north and the Nilgiris in the south, the Deccan Plateau has preserved a variety of these ecosystems suitable for habitation since prehistoric times.”

    Dr. Korisettar is the Dr. D.C. Pavate Professor of Art and Archaeology, Karnatak University, Dharwad, and a member of the team that conducted the study.

    It found that the variety of ecological settings available for habitation as well as their interconnected nature were essential for the continued existence of mammals. “The mosaic of habitats allows for the presence of a diverse range of species, whereas the connection between these habitats allows animals to migrate between them as the climate changes,” said Dr. Petraglia, of the Research Laboratory for Archaeology and the History of Art, University of Oxford, UK.

    Their findings are consistent with fossil records from around South Asia, and with parts of tropical Africa.

    Both these scientists were part of a team that studied samples collected from the Billasurgam cave complex in the Kurnool district of Andhra Pradesh, a state in southern India. The site was chosen because investigations in the 1970s had found that many animal fossils and archaeological deposits were present there.

    Describing their findings, Dr. Korisettar said they’d found “11 families and 26 genera of birds, mammals and amphibians. There were antelopes, gazelles, horses, pigs, primates, rhinoceroses, rabbits, and evidence of crocodile amphibians.” The samples were then studied using optically stimulated luminescence to discern their ages and characteristics. With the exception of one primate – identified in the paper as Theropithecus cf gelada – all other mammalian taxa, or population groups, survive in the subcontinent to this day.

    Conservation efforts

    However, even as Dr. Korisettar remarks, “Conserve the habitats, the rest will take care of itself”, the quality of the mammals’ survival these days is deteriorating. One of the suggested causes for extinction of fauna in other parts of the world in the last 1,000 centuries was over-hunting by early humans. An analogous threat has come to modern India after all these years of resilient survival: anthropogenic climate change.

    Is a mutually beneficial coexistence possible once again?

    “Climate change, in combination with the dramatic increases in human populations in the last 10,000 years in India, may be leading to decline of certain animals, restricting them to smaller geographic ranges,” Dr. Petraglia said. For example, the Kaziranga National Park in northeast India hosts two thirds of the world’s population of Great One-horned Rhinoceroses, an animal whose habitat may once have spread farther south.

    But whatever has been pushing them over the brink, mammalian conservation efforts in India may now find it essential to not just preserve habitats but also their inter-connected nature in the subcontinent. Moreover, Dr. Petraglia suggested that the Billasurgam caves also be protected against economic development in the region, especially mining activities. “The caves should be considered for national protection owing to their fascinating history of research and the significance of their deposits for future research,” he said.

  • Elon Musk’s altruism powertrain is just good business

    In 1907, the Serbian-American inventor Nikola Tesla sold all his patents to Westinghouse, Inc., for a heavily discounted $216,000, including one for alternating current. In 1943, he died penniless. In 2014, another Tesla has given away its patents but signs are that this one will be way more successful. Through a blog post on June 12, Elon Musk, the CEO of Tesla Motors, announced that his company would be releasing all the patents it held on the brand of successful electric vehicles (EVs) it manufactures. A line in the post indicates he wants to avoid future patent-infringement lawsuits, but this belies what Musk is set to reap from this ‘altruistic’ gesture.

    Patents cut both ways. They safeguard information and prevent others from utilising it without paying its originators a license fee. On the other hand, patents also explicitly earmark some information as being useful and worth safeguarding over the rest. Even after open-sourcing patents on the Tesla EVs, Musk is still the proprietor of a lot of technical and managerial information – “the rest” – that his competitors are not going to master easily. By releasing his patents, Musk is not levelling the playing field as much as he’s releasing knowledge he thinks is crucial to develop zero-emission vehicles.

    Shared knowledge

    In fact, the battery-swapping station he showcased in 2013 was an idea borrowed from the Israeli entrepreneur Shai Agassi, who was Musk’s biggest competitor until his EV company went bankrupt in 2012. Agassi had conceived battery-swapping a decade earlier to resolve the issue of range anxiety: the apprehension that gripped EV-drivers about how long the batteries in their cars were going to last. Unfortunately, the Israeli flunked while executing his plans. Musk not only installed the stations but also integrated them into his network of 480-volt Superchargers, of which he now has 90 in the USA, 16 in Europe and three in China.

    Nevertheless, after Agassi’s departure, Tesla was king in a kingdom of frozen lakes. As Musk wrote in his post: “electric car programs (or programs for any vehicle that doesn’t burn hydrocarbons) at the major manufacturers are small to non-existent, constituting an average of far less than 1% of their total vehicle sales.” Without competition, Tesla controls the market, but it also has no room for error and sees no competing innovation to help it grow. While the charge capacity and efficiency of present lithium-ion batteries are nowhere close to being as high as the industry requires them to be, the Superchargers and the Panasonic cylindrical battery cells whose use Tesla pioneered are still unique and desirable. Now, Big Cars like GM and Ford could leverage the patents to crawl into the EV market – and hopefully keep it from imploding.

    Setting standards

    Another way for Tesla to reap benefits from Big Cars is to latently guide them to model their products around the mould Musk has perfected in the last seven years. By releasing his patents, Musk has pushed a nascent industry toward one of its understated inflection points: standardization. Hardware standardization modularizes architecture, jumpstarts innovation, sets a benchmark for consumer expectations, and makes for easier adaptation of new technologies. For example, the Joint Center for Energy Storage Research (JCESR) was established by the US government in 2013 with one goal: to compile an ‘electrolyte genome’, a database of electrolytes aimed at EV manufacturers. Minimally changing hardware specifications makes JCESR’s work easier.

    After all, the Tesla Model S costs $70,000-$80,000, and the Tesla Roadster, $110,000. As much as they have sold in the thousands, the only way they can sell in the millions is if they become as accessible as fossil-fuel-powered cars. Musk may be leading the way, but he’s reliant on costly subsidies on battery packs, refuelling and maintenance. To keep them up, he has to make the business of batteries and refuelling profitable. So while his decision to release Tesla’s patents could help keep the EV industry alive, that industry’s existence in turn feeds his Supercharger network and his batteries’ use.

  • What life on Earth tells us about life ‘elsewhere’

    Plumes of water seen erupting from the surface of Saturn’s moon Enceladus. NASA/JPL-Caltech and Space Science Institute

    In 1950, the physicist Enrico Fermi asked a question not many could forget for a long time: “Where is everybody?” He was referring to the notion that, given the age and size of the universe, advanced civilizations ought to have arisen in many parts of it. But if they had, then where are their space probes and radio signals? In the 60-odd years since, we haven’t come any closer to answering Fermi, although many interesting explanations have cropped up. In this time, the search for “Where” has brought with it a search for “What” as well.

    What is life?

    Humankind’s search for extra-terrestrial life is centered on the assumption – rather, the hope – that life can exist in a variety of conditions, and displays a justified humility in acknowledging we really have no idea what those conditions could be or where they might prevail. Based on what we’ve found on Earth, water seems pretty important. As @UrbanAstroNYC tweeted,

    And apart from water, pretty much everything else can vary. Temperatures could drop below the freezing point of water or rise beyond its boiling point, the environment can be doused in ionizing radiation, the amount of light could dip to quasi-absolute darkness, acids and bases can run amok, and the concentration of gases may vary. We have reason to afford such existential glibness: consider this Wikipedia list of extremophiles, the living things that have adapted to extreme environments.

    Nonetheless, we can’t help but wonder if the qualities of life on Earth can tell us something about what life anywhere else needs to take root – even if that means extrapolating based on the assumption that we’re looking for something carbon-based, and dependent on liquid water, some light, and oxygen and nitrogen in the atmosphere. Interestingly, even such a leashed approach can throw open a variety of possibilities.

    “If liquid water and biologically available nitrogen are present, then phosphorus, potassium, sodium, sulfur and calcium might come next on a requirements list, as these are the next most abundant elements in bacteria,” writes Christopher McKay of the NASA Ames Research Center, California, in his new paper ‘Requirements and limits for life in the context of exoplanets’. It was published in Proceedings of the National Academy of Sciences on June 9.

    Stuff of stars

    McKay, an astro-geophysicist, takes a stepped approach to understanding the conditions life needs to exist. He bases his argument on one inescapable fact: that we know little to nothing about how life originated, but a lot about how, once it exists, it can or can’t thrive on Earth. Starting from that, the first step he devotes to understanding the requirements for life. In the second step, he analyzes the various extreme conditions life can then adapt to. Finally, he extrapolates his findings to arrive at some guidelines.

    It’s undeniable that these guidelines will be insular or play a limited role in our search for extraterrestrial life. But such criticism can be partly blunted if you consider Carl Sagan’s famous words from his 1980 book Cosmos: “The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of starstuff.”

    In 1991, RH Koch and RE Davies published a paper (titled ‘All the observed universe has contributed to life’) presenting evidence that “a standard 70 kg human is always making about 7 ³He, 600 ⁴⁰Ca, and 3,000 ¹⁴N nuclei every second by radioactive decay of ³H, ⁴⁰K, and ¹⁴C, respectively”. In other words, we’re not just made of starstuff, we’re also releasing starstuff! So it’s entirely plausible other forms of life out there – if they exist – could boast some if not many similarities to life on Earth.

    To this end, McKay postulates a ‘checklist for habitability’ on an exoplanet based on what we’ve found back home.

    • Temperature and state of water – Between -15° C and 122° C (at pressure greater than 0.01 atm)
    • Water availability – Few days per year of rain, fog or snow, or relative humidity more than 80%
    • Light and chemical energy sources
    • Ionizing radiation – As much as the bacterium Deinococcus radiodurans can withstand (this microbe is the world’s toughest extremophile according to the Guinness Book of World Records)
    • Nitrogen – Enough for fixation
    • Oxygen (as the molecule O2) – Over 0.01 atm needed to support complex life

    McKay calls this list “a reasonable starting point in the search for life”. Taken together, its items make possible environmental conditions that can sustain some forms of chemical bonding – and that conclusion could inform our search for ‘exo-life’. Because we’re pretty clueless about the origins of life, we shouldn’t look for just these items on exoplanets but for the sort of environment that their counterparts could make possible. For example, despite the abundance of life-friendly ecosystems on Earth today, one way life could have originated in the first place is by meteorites seeding the crust with the first microbes. And once seeded, the items on the checklist could have taken care of the rest.
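
    For the programmatically inclined, here’s what McKay’s checklist might look like recast as a set of pass/fail tests – a minimal sketch, with the field names, the assumed Deinococcus radiodurans dose limit and the candidate environment all invented for illustration; only the thresholds quoted in the list above come from the paper.

    ```python
    # A rough sketch of McKay's 'checklist for habitability' as pass/fail tests.
    # Thresholds follow the list above; everything else is an assumption.

    def habitability_checklist(env):
        return {
            "temperature_and_water_state": -15 <= env["temp_c"] <= 122 and env["pressure_atm"] > 0.01,
            "water_availability": env["wet_days_per_year"] >= 1 or env["relative_humidity"] > 0.80,
            "energy_source": env["has_light_or_chemical_energy"],
            "ionizing_radiation": env["dose_gray"] <= 5000,  # assumed D. radiodurans tolerance
            "nitrogen_for_fixation": env["nitrogen_available"],
            "oxygen_for_complex_life": env["p_o2_atm"] > 0.01,
        }

    # An invented cold, damp candidate environment
    candidate = {
        "temp_c": -5, "pressure_atm": 0.9, "wet_days_per_year": 12,
        "relative_humidity": 0.6, "has_light_or_chemical_energy": True,
        "dose_gray": 100, "nitrogen_available": True, "p_o2_atm": 0.001,
    }
    print(habitability_checklist(candidate))
    ```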

    Are you sure water is life?

    Such otherworldly influences present yet more possibilities; all you need is another interstellar smuggler of life to crash into a conducive laboratory. Consider the Saturnian moon Titan. While hydrocarbons – the principal constituents of terrestrial life – are thought to have gassed up and out from Earth’s mantle since its formative years, Titan already boasts entire lakes of methane (CH4), a simple hydrocarbon. A 2004 paper by Steven Benner et al. discusses the implications of this in detail, arguing that liquid methane could actually be a better medium than water for certain simple chemical reactions that are the precursors of life.

    Another Solar System candidate that shows signs of habitability is Titan’s peer Enceladus. In April this year, teams of scientists studying data from the Cassini space probe said there was evidence that Enceladus hosts a giant reservoir of liquid water 10 km deep under an extensive ice shell some 30-40 km thick. Moreover, Cassini flybys since 2005 had shown that the moon had an atmosphere of 91% water vapor, 3-4% each of nitrogen and carbon dioxide, and the rest methane.

    These examples in our Solar System reveal how the conditions necessary for life are possible not just in the Goldilocks zone, because life can occur in a variety of environments as long as some simpler conditions are met. The abstract of the paper by Benner et al. sums this up nicely:

    A review of organic chemistry suggests that life, a chemical system capable of Darwinian evolution, may exist in a wide range of environments. These include non-aqueous solvent systems at low temperatures, or even supercritical dihydrogen–helium mixtures. The only absolute requirements may be a thermodynamic disequilibrium and temperatures consistent with chemical bonding.

    As humans, we enjoy the benefits of some or many of these conditions – although we know what we do only on the basis of what we’ve observed in nature, not because some theory or formula tells us what’s possible or not. Such is the diversity of life on Earth, and that should tell us how far we are from understanding what other forms of life could be out there. In the meantime, as the search for extra-terrestrial life and intelligence goes on, let’s not fixate on the pessimism of Fermi’s words and instead remember the hope in Sagan’s (and keep an eye on McKay’s checklist).

  • After-math of the IPL

    Your betweenness isn’t good enough.

    Yes, we all know the Kolkata Knight Riders (KKR) won the Indian Premier League 2014, but who among all the teams’ many players really did well? And were they rewarded for it? The former is a decidedly subjective question based on what you consider goodness in a cricketer – especially one playing in a new format of the game, Twenty20, that places different emphases on skills than Tests and ODIs, the other formats, do. And a satisfactory answer to the latter question depends on how you answer the former. How do you take this forward?

    Satyam Mukherjee, a postdoctoral fellow at the Kellogg School of Management, Northwestern University, has an answer. He has extended network analysis, a tool conventionally used to analyze social media interactions, to cricket. On a social network like Facebook, people are treated as nodes and the connections between the nodes denote certain things about how the nodes are interacting. On a cricket ground, each team becomes one network in Mukherjee’s notebook, and the competition between them is a competition to be the better network.

    Cricket is played by two teams at a time with 11 players per team. At all points during the game, teamwork is paramount although to varying extents. Even if one player fails to perform, the game could be lost by the team that player belongs to. Conversely, if one player plays too well, then the burden on the rest of the team is lighter.

    In this scenario, network analysis provides a useful way to look not at players’ skills but how they’ve deployed them in situations that required their deployment.

    In the second qualifier in IPL 2014, the Chennai Super Kings (CSK) trounced the Mumbai Indians (MI). Batting first, MI were restricted to 173 on a surface on which defending 190+ would’ve been easier, thanks to economical and incisive bowling from R. Ashwin and Mohit Sharma respectively.

    Nonetheless, the sub-par score did require CSK to score at a stiff 8.7 runs-per-over (rpo) to win – and Suresh Raina became the man to do this, taking them to 176 in 18.4 overs at a rate of 9.4 rpo. Even though he had mowed down a sub-par score on a batting-friendly ground, he was awarded the ‘Man of the Match’ title, not Ashwin or Sharma. Why was that?

    By addressing each game as a meeting of two networks that can interact only in specific ways, network analysis throws up four metrics that represent the quality of the interactions (a rough sketch of how they might be computed follows the list). They are:

    1. PageRank – A proportional measure that describes the importance of a player
    2. In-strength – The sum of the fractions of runs a player has scored in partnership with others players
    3. Betweenness – Denotes the number of partnerships a batsman was involved in
    4. Closeness – Another proportional measure that describes the ability of a player to adapt to different batting positions (higher, middle, lower, etc.)
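
    To make these metrics concrete, here is a rough sketch of how a batting-partnership network might be built and scored with the networkx library. The partnership fractions are invented, and the mapping of the four metrics onto networkx’s built-in functions is an assumption about the general approach, not Mukherjee’s actual code.

    ```python
    import networkx as nx

    # Directed partnership graph: an edge u -> v carries the (invented) fraction of
    # runs v scored while batting in partnership with u.
    G = nx.DiGraph()
    partnerships = [
        ("Gambhir", "Uthappa", 0.42), ("Uthappa", "Gambhir", 0.58),
        ("Uthappa", "Pandey", 0.47), ("Pandey", "Uthappa", 0.53),
        ("Pandey", "ten Doeschate", 0.40), ("ten Doeschate", "Pandey", 0.60),
    ]
    for u, v, frac in partnerships:
        G.add_edge(u, v, weight=frac)

    pagerank = nx.pagerank(G, weight="weight")        # importance of a player
    in_strength = dict(G.in_degree(weight="weight"))  # summed run fractions scored with partners
    betweenness = nx.betweenness_centrality(G)        # how often a batsman bridges partnerships
    closeness = nx.closeness_centrality(G)            # rough proxy for adaptability

    for player in G.nodes:
        print(f"{player:>15}  PR={pagerank[player]:.3f}  in-strength={in_strength[player]:.2f}  "
              f"betweenness={betweenness[player]:.3f}  closeness={closeness[player]:.3f}")
    ```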

    It’s reasonable that whichever player has made the most significant contribution to the values of those metrics deserves the ‘Man of the Match’ title. In the MI v. CSK game, Raina outperforms any other player from CSK, the winning team (from which the MoM is usually chosen).

    Favoring batsmen, just like the game

    Some things are immediately clear. One, PageRank and closeness are like global variables, with scores that can be carried and calculated across games (And while their definition seems arbitrary, their values do abide by well-defined formulas).

    Two, all four metrics are relevant for batsmen and not bowlers or fielders. This is odd because it is the bowling side (same as the fielding side) that requires all the teamwork in the game. Does this mean Mukherjee’s approach is invalid? Not entirely, because it is still useful in assessing how well batsmen have performed against certain oppositions and from certain batting positions.

    For example, in the first qualifier in IPL 2014, KKR put in an all-round great performance to strangle KXIP and proceed to the finals. Ryan ten Doeschate (KKR) hit two sixes in the death overs to revive a sagging run rate and anchored two partnerships on the way. He takes the highest PageRank and betweenness in the game. Piyush Chawla scored the majority of the runs in the two partnerships he was involved in and has the highest in-strength. Finally, Robin Uthappa finished with the highest closeness, having played both as an upper-middle-order and opening batsman in the tournament.

    So much stands to reason, but this is where things get interesting because the ‘Man of the Match’ was Umesh Yadav.

    How did this happen? A common woe among IPL teams is the performance of the ominously named death bowler, i.e. the player who bowls during the last four overs of a Twenty20 game. Over seven editions of the IPL, these so-called death overs have become notorious for the pace at which teams accrue runs in them. A death bowler, therefore, has to be good enough to stem the flow even if an in-form batsman is at the crease.

    During the KKR v. KXIP game, Yadav bowled four overs for 13 runs (3.25 rpo) and took three wickets – Sehwag’s, Maxwell’s and Bailey’s. Moreover, one was a death over in which he conceded the princely sum of 1 run. These are match-winning feats in a Twenty20 game, and Yadav more than deserved to become the ‘Man of the Match’.

    Network visualization of the batting partnerships of KXIP and KKR in the IPL 2014 final.

    For the final game, Mukherjee drew up a network visualization (above) of the batting partnerships of KXIP and KKR. The nodes are colored according to their betweenness centrality. The size of each node is proportional to its PageRank. The colors of the connections are according to the colors of the source nodes. “For example, if we see the connection between Uthappa and Gambhir, Gambhir has a larger share of the runs they scored,” he explained.

    Bowling performances notwithstanding, in IPL 2014, “for a majority of the matches, the Man of the Match compares well with the top three performers as per their centrality measures,” Mukherjee said. He said he hopes such tools will work their way into extant decision-making procedures as a way to eliminate vested interests, biases and “close calls”, as well as to help recruit new players. In the future, Mukherjee plans to work something in to gauge bowlers and fielders, too.

    Earlier, he had similarly analyzed the 2013 Ashes series held in and won by England. Then, the ‘Man of the Match’ awards agreed with his analysis of the games: Joe Root, Michael Clarke and Shane Watson, each of whom had higher in-strength and betweenness centrality than other players. He published his methods and results in Advances in Complex Systems in November 2013 (pre-print).

  • As the ripples in space-time blow through dust…

    The last time a big announcement in science was followed by an at least partly successful furor to invalidate it was when physicists at the Gran Sasso National Laboratory, Italy, claimed to have recorded a few neutrinos travelling faster than the speed of light. In that case, most if not all scientists knew something had to be wrong: that nothing can outpace electromagnetic radiation in a vacuum is set in stone for all practical purposes.


    Although astronomers from Harvard University’s Center for Astrophysics (CfA) made a more plausible claim on March 17 on having found evidence of primordial gravitational waves, they do have something in common with the superluminal-neutrinos announcement: prematurity. Since the announcement, it has emerged that the CfA team didn’t account for some observations that would’ve seriously disputed their claims even though, presumably, they were aware that such observations existed. Something like willful negligence…

    Imagine receiving a tight slap to the right side of your face. If there was good enough contact, the slapper’s fingers should be visible for some time on your right cheek before fading away. Your left cheek should bear mostly no signs of you having just been slapped. The CfA astronomers were trying to look for a similar fingerprint in a sea of energy found throughout the universe. If they found the fingerprint, they’d know the energy was polarized, or ‘slapped’, by primordial gravitational waves more than 13 billion years ago. To be specific, the gravitational waves – which are ripples in space-time – would only have polarized one of the two components the energy contains: the B-mode (‘right cheek’), the other being the E-mode (‘left cheek’).

    The Dark Sector Lab (DSL), located 3/4 of a mile from the Geographic South Pole, houses the BICEP2 telescope (left) and the South Pole Telescope (right). Image: bicepkeck.org

    On March 17, CfA astronomers made the announcement that they’d found evidence of B-mode polarization using a telescope situated at the South Pole called BICEP2, hallelujah! Everyone was excited. Many journalists wrote articles without exercising sufficient caution, including me. Then, just the next day I found an astronomy blog that basically said, “Hold on right there…” The author’s contention was that CfA had looked only at certain parts of the sea of energy to come to their conclusion. The rest of the ‘cheek’ was still unexplored, and the blogger believed that if they checked out those areas, the fingerprints actually might not be there (for the life of me I can’t find the blog right now).

    “Right from the time of the BICEP2 announcement, some important lacunae have been nagging the serious-minded,” N.D. Hari Dass, an adjunct professor at the Chennai Mathematical Institute, told me. From the instrumental side, he said, there was the possibility of cross-talk between measurements of polarization and of temperature, and between measurements on the E-mode and on the B-mode. On the observational front, CfA simply hadn’t studied all parts of the sky – just one patch above the South Pole where B-mode polarization seemed significant. And they had studied that one patch at one specific frequency, not a range of frequencies.

    “The effect should be frequency-independent if it were truly galactic,” Prof. Dass said.

    The Milky Way galaxy’s magnetic fingerprint according to observations by the Planck space telescope. Image: ESA

    But the biggest challenge came from quarters that questioned how CfA could confirm the ‘slappers’ were indeed primordial gravitational waves and not something else. Subir Sarkar, a physicist at Oxford University, and his colleagues were able to show that what BICEP2 saw to be B-mode polarization could actually have been from diffuse radio emissions from the Milky Way and magnetized dust. The pot was stirred further when the Planck space telescope team released a newly composed map of magnetic fields across the sky but blanked out the region where BICEP2 had made its observations.

    There was reasonable doubt, and it persists… More Planck data is expected by the end of the year and that might lay some contentions to rest.

    On June 3, physicist Paul Steinhardt made a provocative claim in Nature: “The inflationary paradigm” – which accounts for B-mode polarization due to gravitational waves – “is fundamentally untestable, and hence scientifically meaningless”. Steinhardt was saying that the theory supposed to back the CfA quest was more like a tautology and that it would be true no matter the outcome. I asked Prof. Dass about this and he agreed.

    A tautology at work.

    “Inflation is a very big saga with various dimensions and consequences. One of Steinhardt’s points is that the multiverse aspect” – which it allows for – “can never be falsified as every conceivable value predicted will manifest,” he explained. “In other words, there are no predictions.” Turns out the Nature claim wasn’t provocative at all, implying CfA did not set itself well-defined goals to overcome these ‘axiomatic’ pitfalls, or that it did but fell prey to sampling bias. At this point, Prof. Dass said, “current debates have reached some startling professional lows with each side blowing their own trumpets.”

    It wasn’t as if BICEP2 was the only telescope making these observations. Even in the week leading up to March 17, in fact, another South Pole telescope named Polarbear announced, in a tweet, that it had found some evidence for B-mode polarization in the sky. The right thing to do now, then, would be to do what we’re starting to find very hard: be patient and be critical.

  • Gerald Guralnik (1936-2014)

    Of the six scientists who came up with the idea of a Higgs boson in the mid-1960s, independently or in collaboration with others, I’ve met all of one. Tom Kibble was at the Institute of Mathematical Sciences, Chennai, in January 2013 for a conference. He was 80 years old then, and looked quite frail. Every time somebody tapped his shoulder before taking a photograph, he would break into a self-effacing smile. It was clear he was surprised by the attention he was receiving. Kibble thought he didn’t deserve it.

    He, Carl Hagen and Gerald Guralnik comprised one of the three teams that conceived the mechanism to explain how some fundamental particles acquired mass in the early universe, over time making possible chemical reactions, stars, life, and many things besides. The other two teams comprised Francois Englert and Robert Brout, and Peter Higgs; Higgs’ name has today become attached to the name of the mechanism. For their work, Higgs and Englert were awarded the 2013 Nobel Prize in physics. Brout couldn’t receive the prize because he had died in 2011. Kibble, Hagen and Guralnik were left out because of limits on how many people the prize could be awarded to at a time.

    Fair share of obstacles

    On April 26, 2014, Gerald Guralnik died of a heart attack in Rhode Island after delivering a lecture at Brown University. He was 77. In those seven decades, he had become one of the world’s leading experts on theoretical particle physics, which, through the 1960s, was entering its boom time as the world would later discover. In this period, he co-scripted one of the most enduring quests in modern physics research.

    Before I started writing this, I visited the Wikipedia page for the Physical Review Letters papers published by the three groups that first called the world’s attention to their findings. In the second line, Peter Higgs is mentioned as having worked with Satyen Bose – undoubtedly the consequence of a grave misapprehension that pervaded India when the 2013 Nobel Prizes were announced. Many believed Satyen Bose had been neglected for his work, but he just hadn’t worked on the Higgs boson, only on the underlying theory that controls the lives and times of all bosons. If such are the facile issues that concern some misguided Indians today, Guralnik tackled more than a fair share in his time.


    For a few years after Kibble, Hagen and Guralnik published their paper, their work wasn’t taken seriously. Guralnik wrote in Huffington Post in August 2012 that, in the summer of 1965, Werner Heisenberg – the originator of the notorious uncertainty principle – thought Guralnik’s ideas were junk. The New York Times wrote that Robert Marshak, a famous theoretical physicist, told Guralnik that if he wished to survive in physics, he “must stop thinking about this sort of problem and move on,” advice that Guralnik “wisely obeyed”. According to Kibble, however, Marshak later admitted that he had been misguided.

    Deference over primacy

    Nevertheless, some other scientists had started working on Guralnik & co.’s theories. By the 1970s, Sheldon Glashow, Abdus Salam and Steven Weinberg had succeeded in ironing out many of their inconsistencies, and won the Nobel Prize for physics in 1979 for their work… even though it would be almost 50 more years before experiments proved the Higgs mechanism was for real. This is because there was no disputing that the implications of the work of Kibble, Hagen, Guralnik, Higgs, Brout and Englert were revolutionary, at least among those who were willing to accept it.

    To this end, the 1979 prizewinners and the ‘Higgs Six’ were aware of and deferential toward the contributions of others to the development of this new theory. In fact, Higgs, who has often wound up being the centre of attention when talk of his eponymous mechanism comes up, has said that he’d rather call it the ABEGHHK’tH mechanism (A denoting Philip Warren Anderson; ‘tH, Gerardus ‘t Hooft).

    But others were less considerate, which didn’t go down well with Guralnik. As Kibble wrote in his obituary in Nature, “Guralnik came to feel that our early paper was often unfairly neglected. He gave talks and wrote papers pointing out our distinctive contribution, of which he was justifiably proud, and in which he was unquestionably the prime mover.” This doesn’t mean he went on to become a sour, old bat, of course, but only that Guralnik seemed to appreciate the gravitas of his work much more than others at the time. When Higgs and Englert shared the 2013 Nobel Prize in physics, Guralnik told the Brown Daily Herald that he was “a little hurt”, but happier for the recognition that his peers – and by extension his work – had received.

    (It is, in fact, hard to say if he is as celebrated as Higgs is today, physicists notwithstanding. Such are the consequences of asymmetric recognition, a sort of ceiling effect that silences avant garde advancements until the world is ready to hear them. This is also a complaint I’ve heard from far too many Indian scientists and whose efforts to remedy it I don’t begrudge them even if it only seems like an infantile squabble over primacy.)

    In fact, after his work in establishing the theoretical foundations of the Higgs mechanism, which itself is a cornerstone of a unified theory that describes both the electromagnetic and weak nuclear forces of nature, Guralnik proceeded to make a lot of other contributions. He worked on computational approaches to quantum field theory, quantum chromodynamics (i.e., the theory of the strong nuclear force), the application of chaos theory to particle physics, and string theory. His was a versatile genius, in part combative and in part pliant. Rest in peace.

  • Hey, is anybody watching Facebook?

    The Boston Marathon bombings in April 2013 kicked off a flurry of social media activity that was equal parts well-meaning and counterproductive. Users on Facebook and Twitter shared reports, updates and photos of victims, spending little time on verifying them before sharing them with thousands of people.

    Others on forums like Reddit and 4chan started to zero in on ‘suspects’ in photos of people seen with backpacks. Despite the amount of distress and disruption these activities caused, social media broadly also served to channel grief and help, and became a notable part of the Boston Marathon bombings story.

    In our daily lives, these platforms serve as news forums. With each person connected to hundreds of others, there is a strong magnification of information, especially once it crosses a threshold. They make it easier for everybody to be news-mongers (not journalists). Add this to the idea that using a social network can just as easily be a social performance, and you realize how the sharing of news can also be part of the performance.

    Consider Facebook: Unlike Twitter, it enables users to share information in a variety of forms – status updates, questions, polls, videos, galleries, pages, groups, etc – allowing whatever news is being shared to retain its multifaceted character, and imposing no character limit on what you have to say about it.

    Facebook v. Twitter

    So you’d think people who want the best updates on breaking news would go to Facebook, and that’s where you might be wrong. ‘Might’ because, on the one hand, Twitter has a lower response time, keeps news very accessible, encourages a more non-personal social media performance, and has a high global reach. These reasons have also made Twitter a favorite among researchers who want to study how information behaves on a social network.

    On the other hand, almost 30% of the American general population gets its news from Facebook, with Twitter and YouTube on par at about 10% each, if a Pew Research Center technical report is to be believed. Other surveys have also shown that there are more people from India on Facebook than on Twitter. And it would seem remiss to ignore a platform with 1.28 billion monthly active users from around the world.

    A screenshot of Facebook Graph Search.

    Since 2013, Facebook has made it easier for users to find news in its pages. In June that year, it introduced the #hashtagging facility to let users track news updates across various conversations. In September, it debuted Graph Search, making it easier for people to locate topics they wanted to know more about. Even though the platform’s allowance for privacy settings stunts the kind of free propagation of information that’s possible on Twitter (only 28% of Facebook users made any of their content publicly available), Facebook’s sheer volume of updates lets its fraction of public updates rise to levels comparable with those of Twitter.

    Ponnurangam Kumaraguru and Prateek Dewan, from the Indraprastha Institute of Information Technology, New Delhi (IIIT-D), leveraged this to investigate how Facebook and Twitter compared when sharing information on real-world events. Kumaraguru explained his motivation: “Facebook is so famous, especially in India. It’s much bigger in terms of the number of users. Also, having seen so many studies on Twitter, we were curious to know if the same outcomes as from work done on Twitter would hold for Facebook.”

    The duo used the social networks’ respective APIs to query for keywords related to 16 events that occurred during 2013. They explain, “Eight out of the 16 events we selected had more than 100,000 posts on both Facebook and Twitter; six of these eight events saw over 1 million tweets.” Their pre-print paper was submitted to arXiv on May 19.

    An upper hand

    In all, they found that an unprecedented event appeared on Facebook just after 11 minutes, while on Twitter, according to a 2014 study from the Association for the Advancement of Artificial Intelligence (AAAI), it took over ten times as long. Specifically, after the Boston Marathon bombings, “the first [relevant] Facebook post occurred just 1 minute 13 seconds after the first blast, which was 2 minutes 44 seconds before the first tweet”.

    However, this order-of-magnitude difference could be restricted to Kumaraguru’s choice of events because the AAAI study claims breaking news was broken fastest during 29 major events on Twitter, although it considered only updates on trending topics (and the first update on Twitter, according to them, appeared after two hours).

    The data-mining technique could also have played a role in offsetting the time taken for an event to be detected because it requires the keywords being searched to be manually keyed. Finally, the Facebook API is known to be more rigorous than Twitter’s, whose ability to return older tweets is restricted. On the downside, the output from the Facebook API is restricted by users’ privacy settings.

    Nevertheless, Kumaraguru’s conclusions paint a picture of Facebook being just as resourceful as Twitter when tracking real-world events – especially in India – leaving news discoverability to take the blame. Three of the 16 chosen events were completely local to India, and they were all accompanied by more activity on Facebook than on Twitter.


    Even after the duo corrected for URLs shared on both social networks simultaneously (through clients like Buffer and HootSuite) – 0.6% of the total – Facebook had the upper hand not just in primacy but also origin. According to Kumaraguru and Dewan, “2.5% of all URLs shared on Twitter belonged to the facebook.com domain, but only 0.8% of all URLs shared on Facebook belonged to the twitter.com domain.”
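
    Counting how many shared links point into the rival network is a simple exercise in URL parsing. Here is a rough sketch of that kind of counting, with placeholder URL lists standing in for the links the duo extracted via the APIs – an illustration of the idea, not their code or data.

    ```python
    from urllib.parse import urlparse

    def fraction_from_domain(urls, domain):
        """Fraction of URLs whose host is `domain` or a subdomain of it."""
        def matches(url):
            host = (urlparse(url).hostname or "").lower()
            return host == domain or host.endswith("." + domain)
        return sum(matches(u) for u in urls) / len(urls) if urls else 0.0

    # Placeholder lists standing in for the URLs extracted from tweets and posts
    tweet_urls = ["https://www.facebook.com/somepage/posts/1", "https://example.com/story"]
    facebook_urls = ["https://news.example.org/report", "https://twitter.com/user/status/2"]

    print(fraction_from_domain(tweet_urls, "facebook.com"))    # share of Twitter URLs pointing into Facebook
    print(fraction_from_domain(facebook_urls, "twitter.com"))  # share of Facebook URLs pointing into Twitter
    ```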

    Facebook also seemed qualitatively better because spam was present in only five events; on Twitter, spam was found in 13. This disparity could be exploited by programs built to filter spam from social media timelines in real time, the sort of service that journalists will find very useful.

    Kumaraguru and Dewan resorted to picking out spam based on differences in sentence styles. This way, they were able to avoid missing spam that was stylistically conventional but irrelevant in terms of content, too. A machine wouldn’t have been able to do this just as well and in real-time unless it was taught – in much the same way you teach your Google Mail inbox to automatically sort email.
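
    The ‘teaching’ alluded to here is standard supervised text classification. As a toy illustration – and not the authors’ method, which relied on manually inspecting sentence styles – a spam filter over post text could be put together with scikit-learn along these lines, with the sample posts and labels invented for the sketch.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # A toy labelled sample; a real filter would be trained on many hand-labelled posts.
    posts = [
        "Police have confirmed two explosions near the finish line",
        "WIN A FREE IPHONE!!! click here now #bostonmarathon",
        "Hospitals are asking for blood donations, here is the verified helpline",
        "Follow and retweet to get 1000 followers instantly #prayforboston",
    ]
    labels = ["genuine", "spam", "genuine", "spam"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(posts, labels)

    print(model.predict(["click this link for free followers after the blast"]))
    ```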

    Digital information forensics

    A screenshot of TweetCred at work. Image: Screenshot of TweetCred Chrome Extension

    Patrick Meier, a self-proclaimed – but reasonably so – pioneer in the emerging field of humanitarian technologies, wrote a blog post on April 28 describing a browser extension called TweetCred, which is just this sort of learning machine. Install it and open Twitter in your browser. Above each tweet, you will now see a credibility rating bar that grades the tweet out of 7 points, with 7 being the most credible.

    If you agree with a rating, you can bolster it with a thumbs-up that appears on hover. If you disagree, you can give the shown TweetCred rating a thumbs-down and mark what you think is correct. Meier makes it clear that, in its first avatar, the app is geared toward rating disaster/crisis tweets. A paper describing the app was submitted to arXiv on May 21, co-authored by Kumaraguru, Meier, Aditi Gupta (IIIT-D) and Carlos Castillo (Qatar Computing Research Institute).

    Between the two papers, a common theme is the origin and development of situational awareness. We stick to Twitter for our breaking news because it’s conceptually similar to Facebook, fast and, importantly, cuts to the chase, so to speak. In parallel, we’re also aware that Facebook is similarly equipped to reconstruct details because of its multimedia options and timeline. Even if Facebook and Twitter the organizations believe that they are designed to accomplish different things, the distinction blurs in the event of a real-world crisis.

    “Both these networks spread situational awareness, and both do it fairly quickly, as we found in our analysis,” Kumaraguru said. “We’d like to explore the credibility of content on Facebook next.” But as far as establishing a mechanism to study the impact of Facebook and Twitter on the flow of information is concerned, the authors have exposed a facet of Facebook that Facebook, Inc., could help leverage.

  • Brazuca over Jabulani for better football, say physicists

    Say hello to Brazuca, the official football of the 2014 FIFA World Cup. Brazuca is a ball designed and produced by sportswear manufacturer Adidas, which also produced the Jabulani used in the 2010 World Cup. Both balls are part of a legacy in which designers keep reducing the number of panels the ball is made of. Until the late 2000s, the conventional football had 32 panels of hexagonal and pentagonal shapes. Jabulani has eight panels and Brazuca, six.

    However, that didn’t do much good for Jabulani, which faced a lot of flak during the 2010 FIFA World Cup, for which it was made, because of its wobbly movement through the air. And according to a study in Scientific Reports published May 29, scientists have figured out why and found the problem fixed in Brazuca.

    Sungchan Hong and Takeshi Asai, both from the University of Tsukuba’s Institute of Health and Sports Science, used a wind-tunnel and a robot to kick balls toward goal-posts 25 m away, to study the spheres’ non-spin aerodynamics. Their results show that Brazuca displayed very little of the irregular fluctuations that Jabulani did, and other improvements besides, because of the shape of its panels and their rough surface.

    Photograph of the wind tunnel test setup. Image: doi:10.1038/srep05068

    When a smooth, spherical ball with seams is kicked up, streams of air moving near the seams exert a different amount of force on the ball than air flowing around elsewhere. This asymmetry gives rise to a wobbly movement of the ball, which Jabulani was especially susceptible to. At the time, it was considered to be one of the reasons the tournament’s first leg had as few goals as it did. Rabindra Mehta, Chief Aerospace Engineer at NASA Ames Research Centre, told Discovery, “You want to see more consistent, rounder balls that are totally water-proof. That’s all good stuff, but perhaps the aerodynamics was not looked at as carefully as [Adidas] should have.”

    Jabulani also fell behind because, according to Hong and Asai, the asymmetry of forces was influenced by how the ball was oriented, too. Specifically, “the amplitude of the unsteady aerodynamic forces acting on soccer balls changes according to the number of panels as well as the directions they are facing,” they write in their paper. This means Jabulani moved differently depending on how it was facing when kicked.

    Amplitude with respect to unsteady aerodynamic forces (blue line: side force, red line: lift force) of soccer balls derived using fast Fourier transform at flow speed of 30 m·s−1. (a, b) Brazuca, (c, d) Cafusa, (e, f) Jabulani, (g, h) Teamgeist 2, and (i, j) conventional ball. From: doi:10.1038/srep05068

    Hong and Asai also observed that there was a marked difference in the asymmetry of forces on Jabulani and Brazuca at higher speeds, such as during a freekick (30 m/s), with Jabulani consistently out-wobbling the others.
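
    The amplitudes Hong and Asai compare come from Fourier-transforming the measured force traces. Here is a rough sketch of that sort of analysis applied to a synthetic side-force signal – every number below is invented for illustration, whereas the paper works with the actual force data recorded in the wind tunnel at 30 m/s.

    ```python
    import numpy as np

    # Synthetic side-force trace: a slow 9 Hz wobble plus noise, sampled for one
    # second at 1 kHz. All numbers are invented for illustration.
    fs = 1000
    t = np.arange(0, 1.0, 1 / fs)
    side_force = 0.8 * np.sin(2 * np.pi * 9 * t) + 0.1 * np.random.randn(t.size)  # newtons

    spectrum = np.fft.rfft(side_force)
    freqs = np.fft.rfftfreq(side_force.size, d=1 / fs)
    amplitude = 2 * np.abs(spectrum) / side_force.size  # single-sided amplitude spectrum

    idx = np.argmax(amplitude[1:]) + 1  # skip the DC bin
    print(f"Dominant wobble: {freqs[idx]:.1f} Hz, amplitude ~{amplitude[idx]:.2f} N")
    ```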

    Brazuca overcomes these issues by boasting a rough, nubby surface. It reduces the asymmetry of air-flow and makes the ball’s motion smoother through the air. Second, the arrangement of Brazuca’s six curvaceous panels ensures the ball moves the same way no matter how it is kicked. With respect to the other balls, Hong and Asai write, “Brazuca and the conventional ball exhibited relatively stable and regular flight trajectories compared to Cafusa, Teamgeist 2, and Jabulani, whose panel shapes varied significantly with the orientation and were characterized by relatively irregular flight trajectories.”

    However, this stamp of approval doesn’t count for much because conditions on a football field in Brazil are going to be very different from a controlled environment in a lab in Japan. There’s going to be wind, more or less humidity, different temperatures, the ball’s behaviour in the rain, etc. – not to mention players’ level of comfort. These are the real deciders, and only time will tell if Brazuca is in every way better than Jabulani.

    But as far as the physics is concerned, Brazuca should make for better football.


    Featured image: Adidas Brazuca, ИЛЬЯ ХОХЛОВ/Wikimedia Commons

  • Even something will come of nothing

    The Hindu
    June 3, 2014

    “In the 3,000 years since the philosophers of ancient Greece first contemplated the mystery of creation, the emergence of something from nothing, the scientific method has revealed truths that they could not have imagined.” Thus writes the British physicist Frank Close in an introductory book on the idea of nothingness he wrote in 2009. It is the ontology of these truths that the book Nothing: From Absolute Zero to Cosmic Oblivion – Amazing Insights Into Nothingness explores so succinctly, drawing upon the communication skills of many of the renowned writers with New Scientist.

    While at first glance the book may appear to be an anthology with no other connection between its various pieces than the narration of what lies at today’s cutting edge of scientific research, there grows a deeper sense of homogeneity toward the end as you, the reader, realize what you’ve read are stories of what drives people: a compulsion toward the known, away from the unknown, in various forms. Because we are a species hardwired to recognize nature in terms of a cause-effect chain, it can be intuited that somewhere between nothing and something lies our origin. And by extrapolating between the two, the pieces’ authors explore how humankind’s curiosity is inseparable from its existence.

    So, as is customary when thinking about such things, the book begins and ends with pieces on cosmology. This is a vantage point that presents sufficient opportunity to think about both the physical and the metaphysical of nothingness, and the pieces by Marcus Chown and Stephen Battersby show that that’s true. Both writers present the intricate circumstances of our conception and ultimate demise in language that is never intimidating, although it could easily have been, and with appreciable lucidity.

    However, the best part of the book is that it dispels the notion that profound unknowns are limited to cosmology. Pieces on the placebo effect (Michael Brooks), vestigial organs (Laura Spinney) and anesthetics (Linda Geddes) reveal how scientists confront these mysteries when dealing with the human body, the diminishing space for its organs, its elusive mind and the switch that throws the bulb ‘on’ inside it. What makes sick people’s malfunctioning bodies heal with nothing? What is the brain doing when people are ‘put under’? We’ve known about these effects since the 19th century. To this day, we’re having trouble getting a logical grip on them. Yet, in the past, today and henceforth, we will take what rough ideas of them pass for knowledge for granted.

    There are other examples, too. Physicist Per Eklund writes a wonderful piece on how long it took for the world’s enterprising to defy Aristotle and discover vacuum because its existence is so far removed from ours. Jonathan Knight shows how animals that sit around and do nothing all day could actually die of starvation if they did anything more. Richard Webb awakens us to the staggering fact that modern electronics is based on the movement of holes, or locations in atoms where electrons are absent. And then, Nigel Henbest’s unraveling of the discourteous blankness of outer space leaves you feeling alone and… perhaps scared.

    But relax. Matters are not so dire if only because nothingness is unique and rare, and insured against by the presence of something. At the same time, it isn’t extinct either even if places for it to exist on Earth are limited to laboratories and opinions, and even if it, unlike anything else, can be conjured out of thin air. A case in point is the titillating Casimir effect. In 1948, the Dutch physicist Hendrik Casimir predicted a “new” force that could act between two metallic plates parallel to each other in a vacuum such that the distance between them was only some tens of nanometers. Pointless though it seems, Casimir was actually working on a tip-off from Niels Bohr, and his calculations showed something.

    He’d found that the plates would move closer, in an effect that has come to be named for him. What could have moved them? They would practically have been surrounded by nothingness. However, as Sherlock Holmes might have induced, Casimir thought the answer lay with the nothingness itself. He explained that the vacuum of space didn’t imply an absolute nothingness but a volume that still contained some energy, called zero-point energy, continuously experiencing fluctuations. In this arena, bring two plates close enough and at some point, the strength of fluctuations between the plates is going to be outweighed by the strength of fluctuations on the outside, pushing the plates together.
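
    For ideal, perfectly conducting parallel plates, the predicted attraction works out to a pressure of π²ħc/240d⁴, which grows as the fourth power of the shrinking gap d. A quick back-of-the-envelope computation – a sketch of the standard formula, not anything from the book – shows why the effect only matters at very small separations.

    ```python
    import math

    hbar = 1.054571817e-34  # reduced Planck constant, J*s
    c = 2.99792458e8        # speed of light, m/s

    def casimir_pressure(d):
        """Attractive pressure (Pa) between ideal parallel plates a distance d (metres) apart."""
        return math.pi ** 2 * hbar * c / (240 * d ** 4)

    for gap_nm in (10, 100, 1000):
        d = gap_nm * 1e-9
        print(f"{gap_nm:>5} nm gap -> {casimir_pressure(d):.2e} Pa")
    # ~1.3e5 Pa (about 1.3 atmospheres) at 10 nm, fading rapidly as the gap widens
    ```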

    Although it wasn’t until 1958 that an experiment to test the Casimir effect was performed, and until 1996 that the attractive force was measured to within 15 per cent of the value predicted by theory, the prediction salvaged the vacuum of space from abject impotency and made it febrile. As counter-intuitive as this seems, such is what quantum mechanics makes possible, in the process setting up a curious but hopefully fruitful stage upon which, in the same vein as Paul Davies writes in the piece The Day Time Began, science and theology can meet and sort out their differences.

    Because, if anything, Nothing from the writers at New Scientist is as much a spiritual exploration as it is a physical one. Each of the pieces has at its center a human who is lost, confused, looking for answers, but doesn’t yet know the questions, a situation we’re becoming increasingly familiar with as we move on from the “How” of things to the “Why”. Even if we’re not in a position to understand what exactly happened before the big bang, the promise of causality that has accompanied everything after says that the answers lie between the nothingness of then and the somethingness of now. And the more somethings we find, the more Nothing will help us understand them.

    Buy the book.

  • Psych of Science: Hello World

    Hello, world. 🙂 I’m filing this post under a new category on Is Nerd called Psych of Science. A dull name but it’ll do. This category will host my personal reflections on the science in the stories I’ve written or read and, more importantly, on the people in those stories.

    I decided to create this category after the Social Psychology replications incident. While it was not a seminal episode, reading and understanding the kind of issues faced by authors of the original paper and the replicators really got me thinking about the psychology of science. It wasn’t an eye-opening incident but I was surprised by how interested I was in how the conversation was going to play out.

    Admittedly, I’m a lousy people person, and that especially comes across in my writing. I’ve always been interested in understanding how things work, not how people work. This is a discrepancy I hope will be fixed during my stint at NYU, which I’m slated to attend this fall (2014). In the meantime, and after, if I get the time, I’ll leave my reflections here, and you’re welcome to add to them, too.