Uncategorized

  • Justice in Gotham City

    History

    The history of Gotham city is not unlike that of many American cities during British colonial rule. It was founded in 1635 by a Norwegian mercenary and was later taken over by the British, changing hands various times over the years. According to Alan Moore, the famous comics writer and creator of such titles as Watchmen and V for Vendetta, Gotham city was the site of many mysterious occult rites during the American Revolutionary War (Swamp Thing #53).

    A separate history was provided by Bill Willingham (Shadowpact #5): an evil warlock has slept for 40,000 years under the place where Gotham city is built, with his servant Strega claiming that the “dark and often cursed character” of the city was inherited from the warlock’s nature. Going by either story, the city assumes a mood that is at once post-apocalyptic and Gothic, an ambivalence that invites literary exploitation.

    This mood has since been open for modification by writers, more so after the chain of events set off by the villain Ra’s al Ghul. He introduced a virus called the Clench, which devastated the city. Just as Gotham was recuperating, it was hit by an earthquake measuring 7.6 on the Richter scale, prompting the federal government of the United States to cut the city off from the mainland because it saw no hope of rehabilitating it. However, respite arrived in the form of assistance from the brilliant billionaire Alexander “Lex” Luthor, Superman’s archenemy.

    In this regard, there are many comparisons to be made to Mumbai, which is itself a set of seven islands, is constantly assaulted by terrorists, and often finds support not from the government but from unexpected quarters (but, it must be said, not as unexpected as Luthor). By extension, the residents of Gotham city are also likely to be more resilient and resourceful than the residents of other cities, and possibly quite cynical, too.

    Everything about Gotham city is rooted in its mysticism-ridden history, and the fights fought between the region’s native tribes and evil powers. The first signs of modern civilization arose in the late 18th century when, after the native tribes abandoned the area they claimed was infested by evil powers, Gotham Town was born as a reputable port.

    Around the same period, in 1799, Darius Wayne profited from his labours on the port and started the construction of Wayne Manor, one of the precursors of the city’s cocktail of Gothic, Art Nouveau and Art Deco architectures. The manor itself is what one would call “stately”. It is located toward the northeast of the city, removed from the clamour of urbanism and allowing Batman, or Bruce Wayne – Darius Wayne’s descendant – to plan his adventures in peace.

    Exclusivity v. Justice

    The isolation of the manor parallels the isolation of Wayne’s personality from that of Batman: the former is portrayed as a dilettante indulging in the wealth of his forefathers whereas the latter is portrayed as a vigilante that the city seems to subconsciously need. At the same time, however, it is hard to say what the difference might have been had Wayne Manor been situated inside the city. In this regard, there is a notion of social exclusivity in terms of the spaces occupied within the city.

    A good case in point for this would be the older part of Gotham, which is situated to the north of the city and generally considered a part of the city itself. Old Gotham is where Crime Alley (which includes the Bowery, the worst neighbourhood in all of Gotham), Arkham Asylum (albeit as an island – visible to the east of a forked New Trigate Bridge), and Amusement Mile (the stalking grounds of the Joker) are located. Therefore, the new city, developing on the principles of reformation and citizen-vigilantism, grew southward and away from its traditional centres of trade, finance, and commerce.

    Disregarding the depiction of Gotham’s architecture in the Burton and Nolan movies and the TV series: another of Wayne’s ancestors, Judge Solomon Wayne, was, according to Moore, the inspiration for the city’s unique architecture. Solomon’s intention to reform the city and rebrand it, so to speak, resulted in his commissioning the young architect Cyrus Pinkney to design and construct the city’s financial centre. Moore’s choice of this explanation coincides perfectly with the period of Gothic Revivalism (around the early 1990s).


    Growth v. Justice

    Justice within a city is not administered in a court of law nor does it arise out of the adherence to rules and ethics. It is a product of many of the city’s provisions, their accessibility, and how well they work together to give rise to a sense of social security and provide a livelihood. For instance, Gotham’s common man could be working a nine-to-five day-job at some company in One Gotham Centre, just down the road from Wayne Tower, living in the suburbs around the Knights Dome Sporting Complex, within swimming distance of Cape Carmine off Old Gotham, and supporting a family of three.

    However, this is not social justice. The need for social justice arises when aspirations, income and social liberty don’t coincide: if the nearest amusement park is haunted by a psychopathic serial killer, if a trip to the airport requires a drive through Arkham Asylum, if affordable housing comes at the price of personal security, and, most importantly, if there is the persistent knowledge that, in simple terms, one must rely on a masked vigilante for a measurable sense of appeal against all the odds. It is as if the city was carefully misplanned: the Gotham city everyman is someone forced to live in a dangerous neighbourhood for lack of other housing options.

    In other words, perfect social justice implies a perfect city and, therefore, by definition, can be neither omnipresent nor omnipotent, especially since Gotham city falls under the umbra of laissez-faire economics. As a corollary, to understand social justice within a city, we must understand where the city’s priorities lie. How has the city been developing in the last few years? Is economic equality rising or falling? Who within the city has ease of access to valuable resources and who doesn’t?

    The Metropolis

    The problem with studying Gotham city is that it is a city conceived as a negative space to serve as the battleground where the forces of good and evil meet. It has deliberately been envisioned as a child of the industrial revolution entombed within walls of steel and stone, overwhelming those living within it with the enforcement of a systematic way of life that allows for the exercise of few liberties. This is what effectively paints the picture of Gotham city as a failed one. In fact, this very way of thinking is paralleled in the image of the metropolis in Blade Runner (1982), whose modern-expressionist production design was borrowed inefficiently by Barbara Ling for Joel Schumacher’s Batman Forever (1995) to imply a wildly whimsical side to the city. Anyway, this is how we understand the need for Batman, and how that need has been and is created.

    It begins with the blighting of the police force: the superhero can become a societal fixture only if there is something fundamentally wrong with the one other body that is responsible for keeping crime in check. The Gotham City Police Department (GCPD) was corrupt for a long time, especially under the leadership of Commissioner Gillian Loeb, who had his hands in the pockets of the Falcone, Galante and Maroni crime families amongst others. The social scene inspired by such a network could be compared to the conspiratorial mood in the movie L. A. Confidential (1997).

    By the time Commissioner James Gordon took over after Loeb’s successor Jack Grogan, the GCPD was overrun by lawlessness. Because of this poor tradition, public authorities who should have been present to assuage the suffering of the historically discriminated were instead present to exacerbate, and profit from, the discrimination. Seeing that the GCPD couldn’t be cleaned up from the inside, Gordon enlisted the skills of Batman, a veritable outsider, a deus ex machina.

    Once the cleansing was complete, the city could formally begin on its path of reformation. Here is where the question of economic equality arises: when weeding out criminals, did the police department assume a rehabilitative approach or a retributive one? If the movies and TV series based on the comic may be trusted, then retribution was the order of the day, perhaps born out of an urgent need to do away with everything that had plagued the city and start anew.

    At the same time, retribution also implies that enforcers of the law – and Batman – were willing to show no patience toward how the city itself was creating many criminals. This lack of patience is also reflected in many of the urban development projects undertaken by the city’s planning commission, especially such ill-conceived ones as the Underground Highway, as if the officials decided that desperate measures were necessary. (The ultimately-abandoned Underground Highway later went on to become the hideout of Killer Croc, apart from becoming the home for many of the city’s homeless – an indication that the forces of corruption at work were creating poverty.)

    Conclusion

    It can be deduced from all these threads that Gotham city is not simply a product of its history, which continues to influence the way outsiders think of it, but also its inability to cope with what it is fast becoming: a kennel for superheroes to flourish in. There are many decisions at work in the city that collude to create injustice in many forms, and the most significant ones are geographic exclusivity, a retributive mindset in the ranks of the executive, restriction on the exercising of social liberties based on past mistakes, and the presence of Batman himself.

  • Risky transfers

    This update is six days old, but it hasn’t made any more sense with time. Perhaps it is the way it was written; in my opinion, the stress on the financial benefits of offsetting local plutonium storage with monetary compensation is alarming. That Germany will pay the UK to store this ridiculously dangerous material, that the UK will risk political backlash because the “financial benefits from the title transfer will exceed the long-term costs of the material’s safe storage and management”, that France will then supply processed MOX fuel for use in German reactors, that the UK will then argue that it is glad it has been spared the trouble of shipping plutonium while implying that it is comfortable being the site of nuclear waste storage… are all alarming developments.

    Why? Because, even though I’m pro-nuclear, the backlash that could arise out of this could negate years of progress in developing MOX-processing technologies and installing them at the heart of the energy policies of three countries. One problem is already obviously foreseeable: Germany’s reluctance to continue its reliance on nuclear power is simply short-sighted. If it requires any more power in the future, it will have to purchase it from France, which, given the knee-jerk shutdown of NPPs worldwide after the Fukushima incident, is surprisingly still displaying enough sense to rely on NPPs. By then, I hope monetary advantages will not suffice to mask the reality that Germany would be paying to have France purchase its troubles. Unless, of course, there is some other agreeable form of risk-transfer.

    Just ugly.

  • Signs of a slowdown

    The way ahead for particle physics seems dully lit after CERN’s fourth-of-July firecracker. The Higgs announcement got everyone in the physics community excited – and spurred a frenzied submission of pre-prints all rushing to explain the particle’s properties. However, that excitement quickly died out after ICHEP ’12 was presented with nothing else of significance – nothing even a fraction as significant as the ATLAS/CMS results.

    (L-R) Gianotti, Heuer & Incandela

    Even so, I suppose we must wait at least another three months before a conclusive Higgs-centric theory emerges that completely integrates the Higgs mechanism with the extant Standard Model.

    The spotting of the elusive boson – or an impostor – closes a decades-old chapter in particle physics, but does almost nothing to point the way ahead apart from verifying the process of mass-formation. Even theoretically, the quadratic divergences in the Higgs boson’s mass within the Standard Model remain a resilient barrier to correct. How the Higgs field will be used as a tool in detecting other particles and the properties of other entities is altogether unclear.

    The tricky part lies in working out the intricacies of the hypotheses that promise to point the way ahead. The most dominant amongst them is supersymmetry (SUSY). In fact, hints of the existence of supersymmetric partners were recorded when the LHCb detector at the LHC spotted evidence of CP-violation in muon-decay events (at 3.9σ). At the same time, the physicists I’m in touch with at IMS point out that stringent restrictions have been placed on the discovery of sfermions and bosinos.

    The energies at which these partners could be found lie beyond those achievable by the LHC, to say nothing of the luminosity required. Moreover, any favourable-looking ATLAS/CMS SUSY-results – which are simply interpretations of strange events – are definitely applicable only in narrow and very special scenarios. Such a condition is inadmissible when we’re actually in the hunt for frameworks that could explain grander phenomena. Like the link itself says,

    “The searches leave little room for SUSY inside the reach of the existing data.”

    Despite this bleak outlook, there is still a possibility that SUSY may stand verified in the future. Right now: “Could SUSY be masked behind general gauge mediation, R-parity violation or gauge-mediated SUSY-breaking” is the question (gauge-mediated SUSY-breaking (GMSB) is when some hidden sector breaks SUSY and communicates the products to the SM via messenger fields). Also, ZEUS/DESY results (generated by e-p DIS studies) are currently being interpreted.

    However, everyone knows that between now and a future that contains a verified-SUSY, hundreds of financial appeals stand in the way. 😀 This is a typical time of slowdown – a time we must use for open-minded hypothesizing, discussion, careful verification, and, importantly, honest correction.

  • Making money

    An experimental “democratic” revenue model for news publications new to the internet (that are reluctant to change their delivery styles)

     

  • A dilemma of the auto-didact

    If publishers never imagined that there are people who could teach themselves particle physics, why price preliminary textbooks cheaply and advanced textbooks so ridiculously high? Learning vector physics for classical mechanics costs Rs. 245, while progressing from there to analytical mechanics costs Rs. 4,520. Does the cost barrier exist because the knowledge is more specialized? If this is the case, then such books should have become cheaper over time. They have not: Analytical Mechanics, which a good friend recommended, has stayed in the vicinity of $75 for the last three years (now, it’s $78.67 for the original paperback and $43 for a used one). This is just a handy example. There are a host of textbooks that detail concepts in advanced physics and cost a fortune: all you have to do is look for those that contain “hadron”, “accelerator”, “QCD”, etc., in their titles.

    Getting to a place in time where a student is capable of understanding these subjects is cheap. In other words, the cost of aspirations is low while the price of execution is prohibitive.

    Sure, alternatives exist, such as libraries and university archives. However, that misses the point: it seems the costs of the books are kept high to prevent their ubiquitous consumption. No other reason seems evident, although I am loth to reach this conclusion. If you, the publisher, want me to read such books only in universities, then you are effectively requiring me either to abstain from reading them, irrespective of my interests, if my professional life lies elsewhere, or to depend on universities and university-publisher relationships – not on myself – for my progress in advanced physics. The resulting gap between the layman and the specialist eventually evades spanning, leading to ridiculous results ranging from not understanding the “God” in “God particle” to questioning the necessity of the LHC without quite understanding what it does and how that helps mankind.

  • The Indian Bose in the universal boson

    Read this article.

    Do you think Indians are harping too much about the lack of mention of Satyendra Nath Bose’s name in the media coverage of the CERN announcement last week? The articles in Hindustan Times and Economic Times seemed to be taking things too far with anthropological analyses that have nothing to do with Bose’s work. The boson was so named around 1945 by the great Paul Dirac as a commemoration of Bose’s work with Einstein. Much has happened since; why would we want to celebrate the Bose in the boson again and again?

    Dr. Satyendra Nath Bose

    The stage now belongs to the ATLAS and CMS collaborations, and to Higgs, Kibble, Englert, Brout, Guralnik, and Hagen, and to physics itself as a triumph of worldwide cooperation in the face of many problems. Smarting because an Indian’s mention was forgotten is jejune. Then again, this is mostly the layman and the media, because the physicists I met last week seemed to fully understand Bose’s contribution to the field itself instead of counting the frequency with which his name is mentioned.

    Priyamvada Natarajan, as she writes in the Hindustan Times, is wrong (and the Economic Times article’s heading is just irritating). That Bose is not a household name like Einstein is not because of post-colonialism – the exceptions are abundant enough to warrant inclusion – but because we place too much faith in a name instead of remembering what the man behind the name did for physics.

  • Ramblings on partons

    When matter and anti-matter meet, they annihilate each other in a “flash” of energy. Usually, this release of energy is in the form of high-energy photons, or gamma rays, which are then detected, analysed, and interpreted to understand more of the collision’s other properties. In nature, however, matter/anti-matter collisions are ultra-rare if not altogether non-existent because of the unavailability of anti-matter.

    Such annihilation processes are important not just to supplement our understanding of particle physics but also because they play a central role in the design of hadron colliders. Such colliders use strongly interacting particles (the superficial definition of hadrons), such as protons and neutrons, and smash them into each other. The target particles, depending on experimental necessities, may be stationary – in which case the collider is said to employ a fixed target – or moving. The Large Hadron Collider (LHC) is the world’s largest and most powerful hadron collider, and it uses moving targets, i.e., both the incident and target hadrons are moving toward each other.

    Currently, it is known that a hadronic collision is explicable in terms of the hadrons’ constituent particles, quarks and gluons. Quarks are the snowcloned fundamental building blocks of all matter, and gluons are particles that allow two quarks to “stick” together, behaving like glue. More specifically, gluons mediate the residual strong force (where the strong force itself is one of the four fundamental forces of nature): in other words, quarks interact by exchanging gluons.

    Parton distribution functions

    Earlier, before the quark-gluon model was known, a hadronic collision was broken down in terms of hypothetical particles called partons, an idea suggested by Richard Feynman in 1969. At very high energies – such as the ones at which collisions occur at the LHC – equations governing the parton model, which approximates the hadrons as collections of point-like targets, evolve into parton-distribution functions (PDFs). PDFs, in turn, allow for the prediction of the composition of the debris resulting from the collisions. Theoretical calculations pertaining to different collision environments and outcomes are used to derive different PDFs for each process, which technicians then use to design hadron colliders accordingly.

    (If you can work in FORTRAN, here are some PDFs to work with.)
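
    To give a numerical feel for what a PDF is, here is a minimal toy sketch in Python. The parametrization x·f(x) = A·x^a·(1−x)^b and all the constants are assumptions made up for illustration; real PDF sets, such as the FORTRAN libraries linked above, are fitted to data and also depend on the factorization scale.

    ```python
    # Toy parton distribution functions of the form x*f(x) = A * x^a * (1 - x)^b.
    # All shapes and constants here are illustrative assumptions, not fitted PDFs.
    import numpy as np

    def xf(x, A, a, b):
        """Toy x*f(x): a density-like shape for finding a parton at momentum fraction x."""
        return A * x**a * (1.0 - x)**b

    x_values = np.linspace(0.01, 0.99, 8)
    valence_like = xf(x_values, A=2.0, a=0.5, b=3.0)   # peaks at moderate x, like a valence quark
    sea_like = xf(x_values, A=0.2, a=-0.2, b=7.0)      # rises steeply at small x, like sea quarks/gluons

    for x, v, s in zip(x_values, valence_like, sea_like):
        print(f"x = {x:.2f}   valence-like x*f = {v:.3f}   sea-like x*f = {s:.3f}")
    ```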

    Once the quark-gluon model was in place, there were no significant deviations from the parton model. At the same time, because quarks have a corresponding anti-matter “form”, anti-quarks, a model had to be developed that could study quark/anti-quark collisions during the course of a hadronic collision, especially one that could factor in the production of pairs of leptons during such collisions. Such a model was developed by Sidney Drell and Tung-Mow Yan in 1970, and was called the Drell-Yan (DY) process, and further complemented by a phenomenon called Bjorken scaling (Bsc).

    (In Bsc, when the energy of an incoming lepton is sufficiently high during a collision process, the scattering cross-section stops depending on the momentum-transfer scale separately and depends only on a dimensionless scaling variable. In other words, the lepton, say, an electron, at very high energies interacts with a hadron not as if the latter were a single particle but as if it were composed of point-like targets called partons.)
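
    The scaling variable in question is the Bjorken x; its standard definition (added here for reference, not part of the original parenthesis) is

    \[ x = \frac{Q^2}{2\,P \cdot q}, \]

    where \(q\) is the four-momentum transferred by the lepton, \(Q^2 = -q^2\), and \(P\) is the four-momentum of the struck hadron.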

    In a DY process, a quark from one hadron collides with an anti-quark from another hadron, and the two annihilate each other to produce a virtual photon (γ*). The γ* then decays to form a dilepton pair, which, if we treat it as one entity instead of as a pair, can be said to have a mass M.

    Now, if M is large, then Heisenberg’s uncertainty principle tells us that the time of interaction between the quark/anti-quark pair should have been small, essentially limiting its interaction with any other partons in the colliding hadrons. Similarly, in a timeframe that is long in comparison to the timescale of the annihilation, the other spectator-partons would rearrange themselves into resultant hadrons. However, in most cases, the dilepton is detected and momentum-analysed, not the properties of the outgoing hadrons. The DY process results in the production of dilepton pairs at finite energies, but these energies are very closely spaced, resulting in an energy-band, or continuum, being defined in the ambit of which a dilepton-pair might be produced.
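
    To make the kinematics concrete, here are the standard leading-order relations for the DY dilepton mass (a textbook sketch added for reference, not taken from the original post): if the annihilating quark and anti-quark carry momentum fractions \(x_1\) and \(x_2\) of their parent hadrons, then

    \[ M^2 = (p_{\ell^+} + p_{\ell^-})^2 \approx x_1 x_2 s, \]

    where \(\sqrt{s}\) is the hadron-hadron centre-of-mass energy. Measuring the dilepton’s mass and momentum therefore probes the product of the two momentum fractions, which is one reason the process is so useful for constraining PDFs.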

    In quantum chromodynamics and quark-parton transitions

    Quark/anti-quark annihilation is of special significance in quantum chromodynamics (QCD), which studies the colour-force, the force between gluons and quarks and anti-quarks, inside hadrons. The strong field that gluons mediate is, in quantum mechanical terms, called the colour field. Unlike in QED (quantum electrodynamics) or classical mechanics, QCD allows for two strange kinds of behaviour from quarks and gluons. The first kind, called confinement, holds that the force between two interacting quarks does not diminish as they are separated. This doesn’t mean that quarks are strongly interacting at large distances! No, it means that once two quarks have come together, no amount of energy can take them apart. The second kind, called asymptotic freedom (AF), holds that quarks and gluons interact weakly at high energies.

    (If you think about it, colour-confinement implies that gluons can emit gluons, and as the separation between two quarks increases, so does the rate of gluon emission. Conversely, as the separation decreases – that is, as the relative four-momentum squared increases – the force holding the quarks together decreases monotonically in strength, leading to asymptotic freedom.)
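
    Asymptotic freedom is usually summarized by the running of the strong coupling. The standard one-loop expression (added here for reference; it is not derived in the post) is

    \[ \alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2/\Lambda^2\right)}, \]

    where \(n_f\) is the number of quark flavours and \(\Lambda\) is the QCD scale: the coupling shrinks logarithmically as the momentum transfer \(Q^2\) grows, which is precisely the weak interaction at high energies described above.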

    The definitions for both properties are deeply rooted in experimental ontology: colour-confinement was chosen to explain the consistent failure of free-quark searches, while asymptotic freedom doesn’t yield any phase-transition line between high- and low-energy scales while still describing a property transition between the two scales. Therefore, the DY process seemed well-poised to provide some indirect proof for the experimental validity of QCD if some relation could be found between the collision cross-section and the particles’ colour-charge, and this is just what was done.

    The QCD factorization theorem can be read as:
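
    In one standard form, consistent with the symbols defined below (the equation is reconstructed here rather than quoted verbatim),

    \[ F_i(x, Q^2) \;=\; \sum_a \int_x^1 \frac{d\xi}{\xi}\, f_a(\xi, \mu)\; \hat{\sigma}_i^{\,a}\!\left(\frac{x}{\xi}, \alpha_s(\mu), \frac{Q^2}{\mu^2}\right). \]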

    Here, \(\alpha_s(\mu)\) is the effective chromodynamic (quark-gluon-quark) coupling at a factorization scale \(\mu\). Further, \(f_a(x, \mu)\) defines the probability of finding a parton \(a\) within a nucleon with the Bjorken scaling variable \(x\) at the scale \(\mu\). Also, \(\hat{\sigma}_i^a\) is the hard-scattering cross-section of the electroweak vector boson on the parton. The physical implication is that the nucleonic structure function is derived from the overlap between the function describing the probability of finding a parton inside a nucleon and the sum of all functions describing the probabilities of finding all partons within the nucleon.

    This scaling behaviour enabled by QCD makes possible predictions about future particle phenomenology.

  • A simple overview of particle physics


  • Putting particle physics research to work

    In the whole gamut of comments regarding the Higgs boson, there is a depressingly large number decrying the efforts of the ATLAS and CMS collaborations. Why? Because a lot of people think the Large Hadron Collider (LHC) is a yawning waste of time and money, an investment that serves mankind no practical purpose.

    Well, here and here are some cases in point that demonstrate the practical good that the LHC has made possible in the material sciences. Another big area of application is in medical diagnostics: making the point is one article about hunting for the origin of Alzheimer’s, and another about the very similar technology used in particle accelerators and medical imaging devices, meteorology, VLSI, large-scale networking, cryogenics, and X-ray spectroscopy.

    Moving on to more germane applications: arXiv has reams of papers that discuss the deployment of

    … amongst others.

    The LHC, above all else, is the brainchild of the European Organization for Nuclear Research, popularly known as CERN. These guys invented the World Wide Web, developed the first touch-screen devices, and pioneered the earliest high-energy medical imaging techniques.

    With experiments like those being conducted at the LHC, it’s easy to forget every other development in such laboratories apart from the discovery of much-celebrated particles. All the applications I’ve linked to in this post were conceived by scientists working with the LHC, if only to argue that everyone, from the man whose tax money pays for these giant labs to the man who uses the money to work in the labs, is mindful of practical concerns.

  • Gunning for the goddamned: ATLAS results explained

    Here are some of the photos from the CERN webcast yesterday (July 4, Wednesday), with an adjoining explanation of the data presented in each one and what it signifies.

    This first image shows the data accumulated post-analysis of the diphoton decay mode of the Higgs boson. In simpler terms, physicists first put together all the data they had that resulted from previously known processes. This constituted what’s called the background. Then, they looked for signs of any particle that seemed to decay into two energetic photons, or gamma rays, in a specific energy window; in this case, 100-160 GeV.

    Finally, knowing how the number of events would vary in a scenario without the Higgs boson, a smooth curve was fitted to the data: the number of events at each energy level versus the energy level at which it was tracked. This way, a bump above the curve would mean there was a particle previously unaccounted for that was causing an excess of diphoton decay events at a particular energy.
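
    As a rough illustration of this background-plus-bump logic (a toy sketch only, with made-up numbers; it is not the ATLAS analysis, which uses far more sophisticated fits), here is how one might fit a smooth background to a falling diphoton mass spectrum and flag any bin with an excess:

    ```python
    # Toy bump hunt: falling "background" spectrum plus a small bump near 125 GeV.
    # All numbers are invented for illustration; a real analysis fits signal and
    # background together and accounts for systematic uncertainties.
    import numpy as np

    rng = np.random.default_rng(42)

    # Binned invariant-mass spectrum: 100-160 GeV in 2 GeV bins
    edges = np.arange(100, 162, 2)
    centres = 0.5 * (edges[:-1] + edges[1:])

    # Hypothetical data: exponentially falling background + Gaussian bump at 125 GeV
    expected_background = 2000.0 * np.exp(-(centres - 100.0) / 25.0)
    expected_signal = 150.0 * np.exp(-0.5 * ((centres - 125.0) / 2.0) ** 2)
    observed = rng.poisson(expected_background + expected_signal)

    # Background-only model: fit log(counts) in the sidebands (outside 118-132 GeV)
    # with a low-order polynomial, then interpolate it under the search region.
    sideband = (centres < 118) | (centres > 132)
    coefficients = np.polyfit(centres[sideband], np.log(observed[sideband]), deg=2)
    background_fit = np.exp(np.polyval(coefficients, centres))

    # Per-bin excess significance in the crude sqrt(B) approximation
    z = (observed - background_fit) / np.sqrt(background_fit)
    for m, n_obs, n_bkg, z_bin in zip(centres, observed, background_fit, z):
        flag = "  <-- excess" if z_bin > 3 else ""
        print(f"m = {m:5.1f} GeV   observed = {n_obs:5d}   background = {n_bkg:7.1f}   z = {z_bin:+.1f}{flag}")
    ```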

    This is the plot of the mass of the particle being looked for (x-axis) versus the confidence level with which it has (or has not, depending on how you look at it) been excluded as an event to focus on. The dotted horizontal line, corresponding to μ = 1, marks off a 95% exclusion limit: any events registered above the line can be claimed as having been observed with “more than 95% confidence” (colloquial usage).

    Toward the top-right corner of the image are some numbers. 7 TeV and 8 TeV are the values of the total energy going into each collision before and after March 2012, respectively. The beam energy was driven up to increase the incidence of decay events corresponding to Higgs-boson-like particles, which, given the extremely high energy at which they exist, are viciously short-lived. In experiments that were run between March and July, physicists at CERN reported an increase of almost 25-30% in such events.

    The two other numbers indicate the particle accelerator’s integrated luminosity. In particle physics, luminosity measures the number of particles passing through a unit of area per second – and hence how many collisions can be expected. The integrated luminosity is the same value but measured over a period of time. In the case of the LHC, after the collision energy was ramped up, the luminosity, too, had to be increased: from about 4.7 fb-1 to 5.8 fb-1. You’ll want to Wiki the unit of area called barn. Some lighthearted physics talk there.
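
    To see what an inverse femtobarn buys you, here is a back-of-the-envelope sketch. The cross-section and branching-ratio values are rough, assumed numbers chosen only to illustrate the arithmetic (expected events = cross-section × branching ratio × integrated luminosity), not official ATLAS/CMS figures.

    ```python
    # Expected event count from an integrated luminosity: N = sigma x BR x L.
    # The cross-section and branching ratio below are rough, assumed values
    # used only to illustrate the arithmetic, not official figures.
    sigma_higgs_pb = 19.0     # assumed Higgs production cross-section at 8 TeV, in picobarns
    br_diphoton = 0.0023      # assumed branching ratio for a Higgs decaying to two photons
    lumi_fb_inv = 5.8         # integrated luminosity quoted above, in inverse femtobarns

    lumi_pb_inv = lumi_fb_inv * 1000.0   # 1 fb^-1 = 1000 pb^-1
    n_expected = sigma_higgs_pb * br_diphoton * lumi_pb_inv
    print(f"Roughly {n_expected:.0f} Higgs-to-diphoton decays produced "
          "(before detector acceptance and selection efficiency)")
    ```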

    In this plot, the y-axis on the left shows the chances of error, and the corresponding statistical significance on the right. When the chances of an error stand at 1, the results are not statistically significant at all because every observation is an error! But wait a minute, does that make sense? How can all results be errors? Well, when looking for one particular type of event, any event that is not this event is an error.

    Thus, as we move toward the ~125 GeV mark, the number of statistically significant results shoots up drastically. Looking closer, we see two results registered just beyond the 5-sigma mark, where the chances of error are 1 in 3.5 million. This means that if the physicists created just those conditions that resulted in this >5σ (five-sigma) observation 3.5 million times, only once would a random fluctuation play impostor.

    Also, notice how the gap between successive levels of statistical significance widens as the significance increases? In terms of the odds of an error, the jump from 4σ to 5σ is far bigger than the jump from 3σ to 4σ, which in turn is bigger than the jump from 2σ to 3σ, and so on. This means that the closer physicists get to a discovery, the exponentially more precise they must be!
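
    A quick way to see these numbers for yourself is to convert sigma levels into one-sided tail probabilities of a normal distribution; a small sketch (standard statistics, nothing specific to the ATLAS analysis):

    ```python
    # Convert "n sigma" into the one-sided tail probability of a standard normal
    # distribution, i.e. the chance that background alone fluctuates at least that far.
    from scipy.stats import norm

    for n_sigma in range(1, 6):
        p = norm.sf(n_sigma)   # survival function: P(Z > n_sigma)
        print(f"{n_sigma} sigma  ->  p = {p:.2e}  (about 1 in {1 / p:,.0f})")

    # 5 sigma corresponds to p ~ 2.9e-7, i.e. roughly 1 chance in 3.5 million,
    # and each extra sigma shrinks p by an ever larger factor.
    ```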

    OK, this is a graph showing the mass-distribution for the four-lepton decay mode, referred to as a channel by those working on the ATLAS and CMS collaborations (because there are separate channels of data-taking for each decay-mode). The plotting parameters are the same as in the first plot in this post except for the scale of the x-axis, which goes all the way from 0 to 250 GeV. Now, between 120 GeV and 130 GeV, there is an excess of events (light blue). Physicists know it is an excess and not on par with expectations because theoretical calculations made after discounting a Higgs-boson-like decay event show that, in that 10 GeV window, only around 5.3 events are to be expected, as opposed to the 13 that turned up.
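
    For a crude sense of how unlikely that is, one can treat the window as a single Poisson count and ask for the probability of seeing 13 or more events when 5.3 are expected (a deliberately simplified check that ignores systematic uncertainties and the look-elsewhere effect):

    ```python
    # Probability of observing 13 or more events when the background-only
    # expectation is 5.3, treating the window as one Poisson-distributed count.
    # This deliberately ignores systematics and the look-elsewhere effect.
    from scipy.stats import poisson

    p_excess = poisson.sf(12, mu=5.3)   # P(N >= 13) = P(N > 12)
    print(f"P(N >= 13 | expected 5.3) = {p_excess:.2e}")
    ```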