Uncategorized

  • How big is your language?

    This blog post first appeared, as written by me, on The Copernican science blog on December 20, 2012.


    It all starts with Zipf’s law. Ever heard of it? It’s a devious little thing, especially when you apply it to languages.

    Zipf’s law states that the chance of finding a word in all the texts written in a language is inversely proportional to the word’s rank in that language’s frequency table. In other words, the most frequent word turns up about twice as often as the second most frequent word, three times as often as the third most frequent word, and so on.
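    To make this concrete, here is a minimal sketch of my own (not part of the original post) that counts word frequencies in a hypothetical corpus.txt and compares them with what Zipf’s law predicts from the top-ranked word alone:

        from collections import Counter

        # Hypothetical corpus file; any large plain-text file will do.
        words = open("corpus.txt", encoding="utf-8").read().lower().split()
        ranked = Counter(words).most_common()   # [(word, count), ...] sorted by frequency

        top_count = ranked[0][1]
        for rank, (word, count) in enumerate(ranked[:20], start=1):
            predicted = top_count / rank        # Zipf: frequency at rank r is frequency at rank 1, divided by r
            print(f"{rank:>3}  {word:<15} observed={count:<8} zipf-predicted={predicted:.0f}")

    On a real corpus, the observed and predicted columns track each other closely for the first several hundred ranks – which is exactly the regime discussed next.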

    Unfortunately (only because I like how “Zipf” sounds), the law holds only until about the 1,000th most common word; after this point, a log-log plot of frequency against rank stops being linear and starts to curve.

    The importance of this break is that if Zipf’s law fails to hold for a large corpus of words, then the language, at some point, must be making some sort of distinction between common and exotic words, and its need for new words must either be increasing or decreasing. This is because, if the need remained constant, then the distinction would be impossible to define except empirically and never conclusively – going against the behaviour of Zipf’s law.

    Consequently, the chances of finding the 10,000th word won’t be 10,000 times less than the chances of finding the most frequently used word, but a value much smaller or much greater.

    A language’s diktat

    Analysing each possibility, i.e., if the chances of finding the 10,000th-most-used word are NOT 10,000 times less than the chances of finding the most-used word but…

    • Greater (i.e., The Asymptote): The language must have a long tail, also called an asymptote. Think about it. If the rarer words are all used almost as frequently as each other, then they can all be bunched up into one set, and when plotted, they’d form a straight line almost parallel to the x-axis (chance), a sort of tail attached to the rest of the plot.
    • Lesser (i.e., The Cliff): After expanding to include a sufficiently large vocabulary, the language could be thought to “drop off” the edge of a statistical cliff. That is, at some point, there will be words that exist and mean something, but will almost never be used because syntactically simpler synonyms exist. In other words, in comparison to the usage of the first 1,000 words of the language, the (hypothetical) 10,000th word would be used negligibly.

    The former possibility is more likely – that the chances of finding the 10,000th-most-used word would not be as low as 10,000 times less than the chances of encountering the most-used word.

    As a language expands to include more words, it is likely that it issues a diktat to those words: “either be meaningful or go away”. And as the length of the language’s tail grows, as more exotic and infrequently used words accumulate, the need for the words farthest from Zipf’s domain drops off fastest over time.

    Another way to quantify this phenomenon is through semantics (and this is a far shorter route of argument): As the underlying correlations between different words become more networked – for instance, attain greater betweenness – the need for new words is reduced.
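    Since “betweenness” is network-analysis jargon, here is a small illustrative sketch (my addition, with a made-up toy graph of word associations) of what it measures:

        import networkx as nx   # assumes the networkx library is available

        # A made-up toy graph of word associations, purely for illustration.
        G = nx.Graph()
        G.add_edges_from([
            ("light", "wave"), ("light", "particle"), ("wave", "sound"),
            ("particle", "electron"), ("electron", "charge"), ("sound", "music"),
        ])

        # Betweenness centrality: the fraction of shortest paths between other word pairs
        # that pass through a given word. Well-connected "hub" words score high, and a
        # vocabulary rich in such hubs can express new meanings by recombination rather
        # than by coining new words.
        for word, score in sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1]):
            print(f"{word:<10} {score:.2f}")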

    Of course, the counterargument here is that there is no evidence to establish if people are likelier to use existing syntax to encapsulate new meaning than they are to use new syntax. This apparent barrier can be resolved by what is called the principle of least effort.

    Proof and consequence

    While all of this has been theoretically laid out, there had to have been many proofs over the years because the object under observation is a language – a veritable projection of the right to expression as well as a living, changing entity. And in the pursuit of some proof, on December 12, I spotted a paper on arXiv that claims to have used an “unprecedented” corpus (Nature scientific report here).

    Titled “Languages cool as they expand: Allometric scaling and the decreasing need for new words”, it was hard to miss in the midst of papers, for example, being called “Trivial symmetries in a 3D topological torsion model of gravity”.

    The abstract of the paper, by Alexander Petersen from the IMT Lucca Institute for Advanced Studies, et al, has this line: “Using corpora of unprecedented size, we test the allometric scaling of growing languages to demonstrate a decreasing marginal need for new words…” This is what caught my eye.

    While it’s clear that Petersen’s results have been established only empirically, the fact that their corpus includes all the words in books written in the English language between 1800 and 2008 indicates that the set of observables is almost as large as it can get.

    Second: When speaking of corpuses, or corpora, the study has also factored in Heaps’ law (apart from Zipf’s law), and found that there are some words that obey neither Zipf nor Heaps but are distinct enough to constitute a class of their own. This is also why I underlined the word common earlier in this post. (How Petersen, et al, came to identify this is interesting: They observed deviations in the lexicon of individuals diagnosed with schizophrenia!)

    Heaps’ law, also called the Heaps-Herdan law, states that the chances of discovering a new word in one large instance-text, like one article or one book, decrease as the size of the instance-text grows. It’s like a combination of the sunk-cost fallacy and Zipf’s law.

    It’s a really simple law, too, and makes a lot of sense even intuitively, but the ease with which it’s been captured statistically is what makes the Heaps-Herdan law so wondrous.
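    Here is an equally minimal sketch (again mine, not the paper’s) of the quantity Heaps’ law describes – the vocabulary V(n) of a text after n running words, which grows roughly as V(n) ≈ K·n^β with 0 < β < 1:

        def vocabulary_growth(words):
            """Return (n, V(n)) pairs: running word count vs. distinct words seen so far."""
            seen, growth = set(), []
            for n, word in enumerate(words, start=1):
                seen.add(word)
                growth.append((n, len(seen)))
            return growth

        # Heaps' law: V(n) ~ K * n**beta with 0 < beta < 1. The curve keeps rising,
        # but ever more slowly, because each additional word is less likely to be new.
        # Hypothetical usage, reusing the corpus.txt from the earlier sketch:
        # words = open("corpus.txt", encoding="utf-8").read().lower().split()
        # for n, v in vocabulary_growth(words)[::10000]:
        #     print(n, v)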

    The sub-linear Heaps’ law plot: instance-text size on the x-axis; number of individual words on the y-axis.

    Falling empires

    And Petersen and his team establish in the paper that, extending the consequences of Zipf’s and Heaps’ laws to massive corpora, the larger a language is in terms of the number of individual words it contains, the slower it will grow and the less cultural evolution it will engender. In the words of the authors: “… We find a scaling relation that indicates a decreasing ‘marginal need’ for new words which are the manifestations of cultural evolution and the seeds for language growth.”

    However, for the class of “distinguished” words, there seems to exist a power law – one that results in a non-linear graph unlike Zipf’s and Heaps’ laws. This means that as new exotic words are added to a language, the need for them is unpredictable and changes over time for as long as they remain outside Zipf’s law’s domain.

    All in all, languages eventually seem an uncanny mirror of empires: the larger they get, the slower they grow, the more intricate the exchanges within them become, the fewer reasons there are to change – until some fluctuations are injected from the world outside (in the form of new words).

    In fact, the mirroring is not so uncanny considering both empires and languages are strongly associated with cultural evolution. Ironically enough, it is the possibility of cultural evolution that most meaningfully justifies the creation and use of languages – which means that, at some point, a language becomes bloated enough to stop germinating new ideas and instead starts to suffocate them.

    Does this mean the extent to which a culture centered on a language has developed and will develop depends on how much the language itself has developed and will develop? Not conclusively – as there are a host of other factors left to be integrated – but it seems a strong correlation exists between the two.

    So… how big is your language?

  • NPPs in Japan

    In the first general elections held since the phased shutdown of nuclear reactors across Japan, the Liberal Democratic Party (LDP) scored a landslide victory. Incidentally, the LDP was also the party that most vehemently opposed its predecessor, the Democratic Party of Japan (DPJ), when the latter declared the shutdown of nuclear power plants (NPPs) across Japan, increasing the economic powerhouse’s reliance on fossil fuels.

    With Abe, who termed Noda’s actions “irresponsible”, returning to power, the markets were quick to respond: TEPCO’s shares jumped 33 per cent, Kansai Electric Power’s rose 18 per cent, the Nikkei index gained 1 per cent, and the shares of two Australian uranium-mining companies rose at least 5 per cent each.

  • How will nuclear fusion develop in a carbon-free world?

    On December 5, Dr. Stephen P. Obenschain was awarded the 2012 Fusion Power Associates’ (FPA) Leadership Award for his leadership in accelerating the development of fusion. Dr. Obenschain is a branch head at the U.S. Naval Research Laboratory’s Plasma Physics Division.

    Dr. Obenschain’s most significant contributions to the field are concerned with the development and deployment of inertial fusion facilities. Specifically, inertial fusion involves the focusing of high-power lasers into a really small capsule containing deuterium, forcing the atomic nuclei to fuse to produce helium and release large amounts of energy.
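    (An aside of mine, not from the original post: the workhorse reaction these facilities aim to ignite fuses deuterium with tritium, and the tritium is bred from lithium – which is why deuterium and lithium appear as the fuels further down.)

        \mathrm{D} + \mathrm{T} \rightarrow {}^{4}\mathrm{He}\;(3.5\ \mathrm{MeV}) + n\;(14.1\ \mathrm{MeV}), \qquad {}^{6}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + \mathrm{T}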

    There is one other way to induce fusion, called magnetic confinement. This is the more widely adopted technique in global attempts to generate power from fusion reactions. A magnetic confinement system also resides at the heart of the International Thermonuclear Experimental Reactor (ITER) in Cadarache, France, which seeks to produce more power than it consumes while in operation before the decade is out.

    I got in touch with Dr. Obenschain and asked him a few questions, and he was gracious enough to reply. I didn’t do this because I wanted a story but because India stands to become one of the biggest beneficiaries of fusion power if it ever becomes a viable option, and I wanted to know what an engineer at the forefront of fusion deployment thought of such technology’s impact.

    Here we go.

    What are your comments on the role nuclear fusion will play in a carbon-free future?

    Nuclear fusion has the potential to play a major long term role in a clean, carbon free energy portfolio. It provides power without producing greenhouse gases. There is enough readily available fuel (deuterium and lithium) to last thousands of years. Properly designed fusion power plants would produce more readily controllable radioactive waste than conventional fission power plants, and this could alleviate long term waste disposal challenges to nuclear power.

    Inertial confinement has seen less development than its magnetic counterpart, although the NIF is making large strides in this direction. So how far, in your opinion, are we from this technology attaining break-even?

    Successful construction and operation of the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory has demonstrated that a large laser system with the energy and capabilities thought to be required for ignition can be built. NIF is primarily pursuing indirect drive laser fusion where the laser beams are used to produce x rays that drive the capsule implosion.

    The programs at the Naval Research Laboratory (NRL) and the University of Rochester’s Laboratory for Laser Energetics (LLE) are developing an alternate and more efficient approach where the laser beams directly illuminate the pellet and drive the implosions. Technologies have been invented by NRL and LLE to provide the uniform illumination required for direct drive. We believe that direct drive is more likely to achieve the target performance required for the energy application.

    Many of the key physics issues of this approach could be tested on NIF. Following two paths would increase the chances of successful ignition on NIF.

    Both the ITER and NRL/NIF are multi-billion dollar facilities, large and wealthy enough to create and sustain momentum on fusion research and testing. However, because of the outstanding benefits of nuclear fusion, smaller participants in the field are inevitable and, in fact, necessary for rapid innovation. How do you see America’s and the EU’s roles in this technology-transfer scenario panning out?

    The larger facilities take substantial time to build and operate, so they inherently cannot reflect the newest ideas. There needs to be continued support for new ideas and approaches, that typically result in substantial improvements, and that often will come from the smaller programs.

    Most research in fusion is published in the open scientific and technological journals so there is already a free flow of ideas. The main challenge is to maintain funding support for innovative fusion research given the resources required by the large facilities.

    What are the largest technical challenges facing the development of laser-fusion?

    Development of laser fusion as an energy source will require an integrated research effort that addresses the technological and engineering issues as well as developing the laser-target physics. We need efficient and reliable laser drivers that can operate at 5 to 10 pulses per second (versus the few shots per day on NIF). We need to develop technologies for producing low-cost precision targets. We need to develop concepts and advanced materials for the reaction chamber.

    We (NRL laser fusion) have advocated a phased approach which takes advantage of the separable and modular nature of laser fusion. For example the physics of the laser target interaction can be tested on a low repetition rate system like NIF, while the high repetition laser technology is developed elsewhere.

    In the phased plan sub-full scale components would be developed in Phase I, full scale components would be developed in Phase II (e.g. a full-scale laser beamline), and an inertial Fusion Test Facility built and operated in Phase III. The Fusion Test Facility (FTF) would be a small fusion power plant that would allow testing and development of components and systems for the full-scale power plants that would follow.

    Use of NRL’s krypton fluoride (KrF) laser technology would increase the target performance (energy gain) and thereby reduce the size and cost of an FTF. This research effort would take some time, probably 15 to 20 years, but with success we would have laid the path for a major new clean energy source.

    -Ends-

    (This blog post first appeared at The Copernican on December 16, 2012.)

  • The strong CP problem: We’re just lost

    Unsolved problems in particle physics are just mind-boggling. They usually concern nature at either the smallest or the largest scales, and the smaller the particle whose properties you’re trying to decipher, the closer you are to nature’s most fundamental principles, principles that, in their multitudes, father civilisations, galaxies, and all other kinds of things.

    One of the most intriguing such problems is called the ‘strong CP problem’. It has to do with the strong force, one of nature’s four fundamental forces, and what’s called the CP-violation phenomenon.

    The strong force is responsible for most of the mass of the human body, most of the mass of the chair you’re sitting on, even most of the mass of our Sun and the moon.

    Yes, the Higgs mechanism is the mass-giving mechanism, but it gives mass only to the fundamental particles, and if we were to be weighed by that alone, we’d weigh orders of magnitude less. More than 90 per cent of our mass actually comes from the strong nuclear force.

    The relationship between the strong nuclear force and our mass is unclear (this isn’t the problem I’m talking about). It’s the force that holds together quarks, a brand of fundamental particles, to form protons and neutrons. As with all other forces in particle physics, its push-and-pull is understood in terms of a force-carrier particle – a messenger of the force’s will, as it were.

    This messenger is called a gluon, and the behaviour of all gluons is governed by a set of laws that fall under the subject of quantum chromodynamics (QCD).


    Dr. Murray Gell-Mann is an American scientist who contributed significantly to the development of theories of fundamental particles, including QCD

    According to QCD, the farther two quarks get from each other, the stronger the force between them becomes. This is counterintuitive to those of us who’ve grown up working with Newton’s inverse-square laws and the like. An extension of this principle is that gluons can emit gluons, which is also counterintuitive and sort of like the weird Banach-Tarski paradox.
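    (Another aside of mine, not the post’s: a commonly used phenomenological form for the potential between a quark and an antiquark captures this behaviour – the linear term keeps growing without bound as the separation r increases.)

        V(r) \approx -\frac{4}{3}\,\frac{\alpha_s}{r} + \kappa\, r, \qquad \kappa \approx 1\ \mathrm{GeV/fm}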

    Protons and neutrons belong to a category of composite particles called hadrons; hadrons made up of three quarks, like the proton and the neutron, are heavy particles called baryons. When, instead, a quark and an antiquark are held together, another type of hadron called a meson comes into existence. You’d think the particle and its antiparticle would immediately annihilate each other. However, it doesn’t happen so quickly if the quark and antiquark are of different types (also called flavours).

    One kind of meson is the kaon. A kaon comprises one strange quark (or antiquark) and one up antiquark (or quark). Among kaons, there are two kinds, K-short and K-long, whose properties were studied by Oreste Piccioni in 1964. They’re called so because the K-long lasts longer than the K-short before it decays into a shower of lighter particles, as shown:

    Strange antiquark –> up antiquark + W-plus boson (1)

    W-plus boson –> down antiquark + up quark

    Up quark –> gluon + down quark + down antiquark (2)

    The kaon’s other constituent, an up quark, remains an up quark.

    Whenever a decay results in the formation of a W-plus/W-minus/Z boson, the weak force is said to be involved. Whenever a gluon is seen mediating, the strong nuclear force is said to be involved.

    In the decay shown above, there is one weak-decay (1) and one strong-decay (2). And whenever a weak-decay happens, a strange attitude of nature is revealed: bias.


    Handed spin (the up-down arrows indicate the particle’s momentum)

    The universe may not have a top or a bottom, but it definitely has a left and a right. At the smallest level, these directions are characterised by spinning particles. If a particle is spinning one way, then another particle with the same properties but spinning the other way is said to be the original’s mirror-image. This way, a right and a left orientation are chosen.

    As a conglomeration of such spinning particles, some toward the right and some toward the left, comes together to birth stuff, the stuff will also acquire a handedness with respect to the rest of the universe.

    And where the weak-decay is involved, left and right become swapped; parity gets violated.

    Consider the K-long decay depicted above (1). Because of the energy conservation law, there must be a way to account for all the properties going into and coming out of the decay. This means if something went in left-handed, it must come out left-handed, too. However, the strange antiquark emerges as an up antiquark with its spin mirrored.


    Physicists Tsung-Dao Lee and Chen Ning Yang (Image from the University of Chicago archive)

    When Chen Ning Yang and Tsung-Dao Lee investigated this in the 1950s, they found that the weak-decay results in particles whose summed-up properties were exactly the same as those of the decaying particle, but in a universe in which left and right had been swapped! In addition, the weak-decay also forced any intervening quarks to change their flavour.


    In the Feynman diagram shown above, a neutron decays into a proton because a down quark is turned into an up quark (The mediating W-minus decays into an electron and an electron antineutrino).

    This is curious behaviour, especially for a force that is considered fundamental, an innate attribute of nature itself. Whatever happened to symmetry, why couldn’t nature maintain the order of things without putting in a twist? Sure, we’re now able to explain how the weak-interaction swaps orientations, but there’s no clue about why it has to happen like that. I mean… why?!

    And now, we come to the strong CP problem(!): The laws governing the weak-interaction, brought under electroweak theory (EWT), are very, very similar to QCD. Why then doesn’t the strong nuclear force violate parity?
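    (For readers who want the textbook formulation – this is my aside, not part of the original post – QCD’s equations do allow a parity- and CP-violating term, yet measurements of the neutron’s electric dipole moment force its coefficient to be absurdly small:)

        \mathcal{L}_{\theta} = \bar{\theta}\,\frac{g_s^{2}}{32\pi^{2}}\,G^{a}_{\mu\nu}\,\tilde{G}^{a\,\mu\nu}, \qquad |\bar{\theta}| \lesssim 10^{-10}

    Why nature sets this dial so close to zero, when the weak interaction violates the same symmetries freely, is the strong CP problem in one line.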

    This is also fascinating because of the similarities it bears to nature’s increasing degree of prejudices. Why an asymmetric force like the weak-interaction was born in an otherwise symmetric universe, no one knows, and why only the weak-interaction gets to violate parity, no one knows. Pfft.

    More so, even on the road leading up to this problem, we chanced upon three other problems, and altogether this gives a good idea of how lost humans are when it comes to particle physics. It’s evident that we’re only playing catch-up, building simulations and then comparing the results to real-life occurrences to prove ourselves right. And just when you ask “Why?”, we’re lost for words.

    Even the Large Hadron Collider (LHC), a multi-billion dollar particle sledgehammer straddling the France-Switzerland border, is mostly a “How” machine. It smashes together billions of particles and then, using seven detectors positioned along its length, analyses the debris spewed out.


    An indicative diagram of the layout of detectors on the LHC

    Incidentally, one of the detectors, the LHCb, sifts through the particulate mess to find out how exactly the weak-interaction affects particle-decay. Specifically, it studies the properties of the B-meson, a kind of meson that has a bottom quark/antiquark (b-quark) as one of its two constituents.

    The b-quark has a tendency to weak-decay into its antiparticle, the b*-quark, in the process getting its left and right switched. Moreover, it has been observed that the b*-quark is more likely to decay into the b-quark than the b-quark is to decay into the b*-quark. This phenomenon, involved in a process called baryogenesis, was responsible for today’s universe being composed of matter and not antimatter, and the LHCb is tasked with finding out… well, why?

    (This blog post first appeared at The Copernican on December 14, 2012.)

  • One of the hottest planets cold enough for ice

    This article, as written by me, appeared in The Hindu on December 6, 2012.

    Mercury, the innermost planet in the Solar System, is like a small rock orbiting the Sun, continuously assaulted by the star’s heat and radiation. It would seem to be the last place to look for water.

    However, observations of NASA’s MESSENGER spacecraft indicate that Mercury seems to harbour enough water-ice to fill 20 billion Olympic skating rinks.

    On November 29, during a televised press conference, NASA announced that data recorded since March 2011 by MESSENGER’s on-board instruments hinted that large quantities of water ice were stowed in the shadows of craters around the planet’s North Pole.

    Unlike Earth’s, Mercury’s axis of rotation is barely tilted. This means areas around the planet’s poles are never tilted sufficiently toward the Sun and so remain cold for long periods of time.

    This characteristic allows the insides of polar craters to maintain low temperatures for millions of years, making them capable of storing water-ice. But then, where is the water coming from?

    MESSENGER’s infrared laser, fired from orbit into nine craters around the North Pole, identified bright spots. The spots lined up perfectly with a thermal model of ultra-cold spots on the planet that would never be warmer than -170 degrees centigrade.

    These icy spots are surrounded by darker terrain that receives a bit more sunlight and heat. Measurements by the neutron spectrometer aboard MESSENGER suggest that this darker area is a layer of material about 10 cm thick that lies on top of more ice, insulating it.

    Dr. David Paige, a planetary scientist at the University of California, Los Angeles, and lead author of one of three papers that indicate the craters might contain ice, said, “The darker material around the bright spots may be made up of complex hydrocarbons expelled from comet or asteroid impacts.” Such compounds must not be mistaken as signs of life since they can be produced by simple chemical reactions as well.

    The water-ice could also have been derived from crashing comets, the study by Paige and his team concludes.

    Finding water on one of the system’s hottest planets changes the way scientists perceive the Solar System’s formation.

    Indeed, in the mid-1990s, strong radar signals were fired from the US Arecibo radar dish in Puerto Rico, aimed at Mercury’s poles. Bright radar reflections were seen from crater-like regions, which was indicative of water-ice.

    “However, other substances might also reflect radar in a similar manner, like sulfur or cold silicate materials,” says David J. Lawrence, a physicist from the Johns Hopkins University Applied Physics Laboratory and lead author of the neutron spectrometer study.

    Lawrence and his team observed particles called neutrons bouncing and ricocheting off the planet via a spectrometer aboard MESSENGER. As high-energy cosmic rays from outer space slammed into atoms on the planet, they knocked loose a debris of particles, including neutrons.

    However, hydrogen atoms in the path of neutrons can halt the speeding particles almost completely as both weigh about the same. Since water molecules contain two hydrogen atoms each, areas that could contain water-ice will show a suppressed count of neutrons in the space above them.
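    (A quick kinematic aside of mine, not from the article: in an elastic head-on collision, the fraction of a neutron’s energy handed over to a nucleus of mass M is largest when the two masses match, which is why hydrogen, with nearly the neutron’s mass, is such an efficient brake.)

        \frac{\Delta E}{E} = \frac{4\,m_n M}{(m_n + M)^{2}}, \qquad \left.\frac{\Delta E}{E}\right|_{M = m_n} = 1

    For a heavier nucleus such as oxygen (M ≈ 16 m_n), the same expression gives only about 0.22, so most of the neutron’s energy survives the collision.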

    Because scientists have been living with the idea of Mercury containing water for the last couple of decades, the find by MESSENGER is not likely to be revolutionary. However, it bolsters an exciting idea.

    As Lawrence says, “I think this discovery reinforces the reality that water is able to find its way to many places in the Solar System, and this fact should be kept in mind when studying the system and its history.”

  • Reaching for the… sky?

    This article, as written by me, appeared in The Hindu on December 4, 2012.

    The Aakash initiative of the Indian government is an attempt to bolster the academic experience of students in the country by equipping them with purpose-built tablets at subsidised rates.

    The Aakash 2 tablet was unveiled on November 11, 2012. It is the third iteration of a product first unveiled in October, 2011, and is designed and licensed by a British-Canadian-Indian company named DataWind, headed by chief executive Suneet Singh Tuli.

    On November 29, the tablet received an endorsement from the United Nations, where it was presented to Secretary-General Ban Ki-moon by India’s ambassador to the UN, Hardeep Singh Puri, and Tuli.

    DataWind will sell Aakash 2 to the government at Rs. 2,263, which will then be subsidised to students at Rs. 1,130. However, the question is this: is it value for money even at this low price?

    When it first entered the market, Aakash was censured for being underpowered, underperforming, and just generally cheap. Version one was a flop. The upgraded successor, commercially released in April 2012, was then remodelled into the Aakash 2 to suit the government’s subsidised rate. As a result, some critical features were substituted with others whose benefits are either redundant or unnecessary.

    Aakash 2 is more durable and slimmer than Aakash, even though both weigh 350 grams. If Aakash is going to act as a substitute for textbooks, that would be a load off children’s schoolbags.

    But the Ministry of Human Resource Development is yet to reveal if digitised textbooks in local languages or any rich, interactive content have been developed to be served specifically through Aakash 2. The 2 GB of storage space, if not expanded to a possible 32 GB, is likely to restrict the quantity of content further, whereas the quality will be restrained by the low 512 MB of RAM.

    The new look has been achieved by replacing the two USB ports that the first Aakash had with one mini-USB port. This means no internet dongles.

    That is a big drawback, considering Aakash 2 can access only Wi-Fi networks. It does support a tethering capability that lets it act as a local Wi-Fi hotspot. But not being able to access cellular networks like 3G – especially in rural areas, where mobile-phone penetration is miles ahead of internet penetration – will place the onus on local governments to lay internet cables, bring down broadband prices, etc.

    If the device is being envisaged mainly as a device on which students may take notes, then Aakash 2 could pass muster. But even here, the mini-USB port rules out plugging in an external keyboard for ease of typing.

    Next, Aakash 2’s battery life is a meagre 4 hours, which is well short of a full college day, and prevents serious student use. Video-conferencing, with a front-facing low-resolution camera, will only drain the battery faster. Compensatory ancillary infrastructure can only render the experience more cumbersome.

    In terms of software, after the operating system was recently upgraded in Aakash 2, the device is almost twice as fast and multi-tasks without overheating. But DataWind has quoted “insufficient processing power” as the reason the tablet will not have access to Android’s digital marketplace. Perhaps in an attempt to not entirely short-change students, access to the much less prolific GetJar apps directory is being provided.

    Effectively, with limited apps, no 3G, a weak battery and a mini-USB port, the success of the tablet and its contribution to Indian education seems to be hinged solely on its low price.

    As always, a problem of scale could exacerbate Aakash 2’s deficiencies. Consider the South American initiative of the One Laptop Per Child program instituted in 2005. Peru, in particular, distributed 8.5 lakh laptops at a cost of US $225 million in order to enhance its dismal education system.

    No appreciable gains in terms of test scores were recorded, however. Only 13 per cent of twelve-year-olds were at the required level in mathematics and 30 per cent at the required reading level, the country’s education ministry reported in March 2012.

    However, Uruguay, its smaller continent-mate, saw rapid transformations after it equipped every primary-school student in the country with a laptop.

    The difference, as Sandro Marcone, a Peruvian ministry official, conceded, lay in Uruguayan students using laptops to access interactive content from the web to become faster learners than their teachers, and forming closely knit learning communities that then expanded.

    Therefore, what India shouldn’t do is subsidise a tablet that could turn out to be a very costly notebook. Yes, the price is low, but given the goal of ultimately unifying 58.6 lakh students across 25,000 colleges and 400 universities, Aakash 2 could be revised to better leverage existing infrastructure instead of necessitating more.

  • A regulator of the press

    While Cameron is yet to accept the Leveson inquiry’s recommendations, political pressure is going to force his hand no doubt. Which side of the debate do you come down on, though?

    I believe that no regulatory body – external or otherwise – should exist to stem questionable practices by suppressing or inflating the quantum of penalties in cases relating to privacy violations; certainly not a system whose benefits in no way outweigh its hindrances.

    By inflating the solatium awarded to claimants against defendants lying outside the purview of the recommended system, such as The Spectator, no justice is served if the defending party has stayed out of the system purely on grounds of principle.

    And a system that openly permits such inconsistencies serves not justice but only sanctions, especially when its recommendations are based on a blight that, however wide-ranging, is definitely locally emergent. Then, of course, there is also the encouragement of self-policing: when will we ever learn?

  • This is poetry. This is dance.

    Drop everything, cut off all sound/noise, and watch this.

    [vimeo http://www.vimeo.com/53914149 w=398&h=224]

    If you’ve gotten as far as this line, here’s some extra info: this video was shot with these cameras for the sake of this conversation.

    To understand the biology behind such almost-improbable fluidity, this is a good place to start.

  • The post-reporter era II

    When a print-publication decides to go online, it will face a set of problems wholly distinct from the set of problems it will have faced before. Keeping in mind that such an organization functions as a manager of reporters, and that those reporters will (hopefully) have already made the transition from offline-only to online publishing as well, there is bound to be friction between how individuals see their stories and how the organization sees what it can do with those stories.

    The principal cause of this problem – if that – is the nature of property on the world wide web. The newspaper isn’t the only portal on the internet where news is published and consumed, therefore its views on “its news” cannot be monopolistic. A reporter may not be allowed to publish his story with two publications if he works for either of them; this restriction is all the stricter if the two are competitors. On the web, however, who are you competing with?

    On the web, news-dissemination may not be the only agenda of those locations where news is still consumed in large quantities. The Hindu or The Times of India keeping their reporters from pushing their agenda on Facebook or Twitter is just laughable: it could easily and well be considered censorship. At the same time, reporters abstain from a free exchange of ideas about the situation on the ground over the social networks because they’re afraid someone else might snap up their idea. In other words, Facebook and Twitter have become the battleground where the traditional view of information-ownership meets the emerging view.

    The traditional newspaper must divest itself of the belief that news is a matter of money as well as of moral and historical considerations, and start to internalise the idea that, with the advent of information-management models for which the “news” is not the most valuable commodity, news is of any value only for its own sake.

    Where does this leave the reporter? For example, if a print-publication has promulgated an idea to host its reporters’ blogs, who owns the content on the blogs? Does the publication own the content because it has been generated with the publication’s resources? Or does the reporter own the content because it would’ve been created even if not for the publication’s resources? There are some who would say that the answers to these questions depend on what is being said.

    If it’s a matter of opinion, then it may be freely shared. If it’s a news report, then it may not be freely shared. If it’s an analysis, then it may be dealt with on an ad hoc basis. No; this will not work because, simply put, it detracts from the consistency of the reporter’s rights and, by extension, his opinions. It detracts from the consistency of what the publication thinks “its” news is and what “its” news isn’t. Most of all, it defies the purpose of a blog itself – it’s not a commercial venture but an informational one. So… now what?

    Flout ’em and scout ’em, and scout ’em and flout ’em;
    Thought is free.

    Stephano, The Tempest, Act 3: Scene II

    News for news’s sake, that’s what. The deviation of the web from the commoditization of news to the commoditization of what presents that news implies a similar deviation for anyone who wants to be part of an enhanced enterprise. Don’t try to sell the content of the blogs: don’t restrict its supply and hope its value will increase; it won’t. Instead, drive traffic through the blogs themselves – pull your followers from Facebook and Twitter – and set up targeted-advertising on the blogs. Note, however, that this is only the commercial perspective.

    What about things on the other side of the hypothetical paywall? Well, how much, really, has the other side mattered until now?

  • Window for an advanced theory of particles closes further

    A version of this article, as written by me, appeared in The Hindu on November 22, 2012.

    On November 12, on the first day of the Hadron Collider Physics Symposium in Kyoto, Japan, researchers presented a handful of results that constrained the number of hiding places for a new theory of physics long believed to be promising.

    Members of the team from the LHCb detector on the Large Hadron Collider (LHC) experiment located on the border of France and Switzerland provided evidence of a very rare particle-decay. The rate of the decay process was in fair agreement with an older theory of particles’ properties, called the Standard Model (SM), and deviated from the new theory, called Supersymmetry.

    “Theorists have calculated that, in the Standard Model, this decay should occur about 3 times in every billion total decays of the particle,” announced Pierluigi Campana, LHCb spokesperson. “This first measurement gives a value of around 3.2 per billion, which is in very good agreement with the prediction.”

    The result was presented at the 3.5-sigma confidence level, which corresponds to an error rate of 1-in-2,000. While not strong enough to claim discovery, it is valid as evidence.

    The particle, called a Bs meson, comprising a bottom antiquark and a strange quark, decayed into two muons. According to the SM, this is a complex and indirect decay process: the quarks exchange a W boson particle and turn into a top-antitop quark pair, which then decays into a Z boson or a Higgs boson. The boson then decays to two muons.

    This indirect decay is called a quantum loop, and advanced theories like Supersymmetry predict new, short-lived particles to appear in such loops. The LHCb, which detected the decays, reported no such new particles.

    The solid blue line shows post-decay muons from all events, and the red dotted line shows the muon-decay event from the B(s)0 meson. Because of a strong agreement with the SM, SUSY may as well abandon this bastion.

    At the same time, in June 2011, the LHCb had announced that it had spotted hints of supersymmetric particles at 3.9-sigma. Thus, scientists will continue to conduct tests until they can stack 3.5 million-to-1 odds for or against Supersymmetry to close the case.
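    (To connect the sigma values quoted above to odds, here is a small sketch of my own, using the one-sided tail of a normal distribution – the usual convention in particle physics; counting both tails instead roughly halves the odds, which is why quoted figures can differ by a factor of about two.)

        from scipy.stats import norm   # assumes SciPy is installed

        for sigma in (3.5, 3.9, 5.0):
            p = norm.sf(sigma)         # one-sided tail probability of a standard normal
            print(f"{sigma} sigma -> p = {p:.1e} (about 1 in {1/p:,.0f})")

        # 5 sigma, the usual threshold for claiming a discovery, comes out to roughly
        # 1 in 3.5 million, the odds mentioned above.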

    As Prof. Chris Parkes, spokesperson for the UK participation in the LHCb experiment, told BBC News: “Supersymmetry may not be dead but these latest results have certainly put it into hospital.”

    The symposium, which concluded on November 16, also saw the release of the first batch of data generated in search of the Higgs boson since the initial announcement on July 4 this year.

    The LHC can’t observe the Higgs boson directly because it quickly decays into lighter particles. So, physicists count up the lighter particles and try to see if some of those could have come from a momentarily existent Higgs.

    These are still early days, but the data seems consistent with the predicted properties of the elusive particle, giving further strength to the validity of the SM.

    Dr. Rahul Sinha, a physicist at the Institute of Mathematical Sciences, Chennai, said, “So far there is nothing in the Higgs data that indicates that it is not the Higgs of Standard Model, but a conclusive statement cannot be made as yet.”

    The scientific community, however, is disappointed as there are fewer channels for new physics to occur. While the SM is fairly consistent with experimental findings, it is still unable to explain some fundamental problems.

    One, called the hierarchy problem, asks why some particles are much heavier than others. Supersymmetry is theoretically equipped to provide the answer, but experimental findings are only thinning down its chances.

    Commenting on the results, Dr. G. Rajasekaran, scientific adviser to the India-based Neutrino Observatory being built at Theni, asked for patience. “Supersymmetry implies the existence of a whole new world of particles equaling our known world. Remember, we took a hundred years to discover the known particles starting with the electron.”

    With each such tightening of the leash, physicists return to the drawing board and consider new possibilities from scratch. At the same time, they also hope that the initial results are wrong. “We now plan to continue analysing data to improve the accuracy of this measurement and others which could show effects of new physics,” said Campana.

    So, while the area where a chink might be found in the SM armour is getting smaller, there is hope that there is a chink somewhere nonetheless.