Uncategorized

  • On meson decay-modes in studying CP violation

In particle physics, CPT symmetry is an attribute of the universe that quantum field theory (QFT) holds as fundamentally true. It states that the laws of physics remain unchanged if a particle is replaced with its antiparticle (C symmetry), left and right are swapped (P symmetry), and all allowed motions are reversed in time (T symmetry).

What this implies is a uniformity of a particle's properties across charge, orientation and the direction of time, effectively rendering these three conjugate perspectives on the same physics.

(T-symmetry, so called for an implied "time reversal", holds that if a process can run one way in time, its reverse, running the other way in time, is equally allowed.)

The more widely studied subset of CPT symmetry is CP symmetry, usually examined on the assumption that T-symmetry is preserved. This is because CP-violation, when it was first observed by James Cronin and Val Fitch, shocked the world of physics, implying that something was off about the universe. Particles that ought to have remained "neutral" in terms of their properties were taking sides! (Note: CP-symmetry is considered a "weaker" symmetry than CPT-symmetry, which is believed to be exact.)

    Val Logsdon Fitch (L) and James Watson Cronin

In 1964, Oreste Piccioni, who had migrated to the USA and was working at the Lawrence Berkeley National Laboratory (LBNL), observed that kaons, mesons each composed of a strange quark (or antiquark) paired with an up or down antiquark (or quark), had a tendency to regenerate in one form when shot as a beam into matter.

The neutral kaon, denoted as K0, has two forms: the short-lived (KS) and the long-lived (KL). Piccioni knew that kaons decay in flight, so a beam of kaons, over a period of time, becomes almost purely KL because the KS component decays away much sooner. When such a beam is shot into matter, the K0 is scattered by protons and neutrons whereas the K0* (i.e., the antikaon) contributes to the formation of a class of particles called hyperons.

Because of this asymmetric interaction, the (quantum) coherence between the two components is lost, and the beam that emerges from the matter once again contains both KS and KL: the KS component has been regenerated simply by firing a K0-beam into matter.
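To get a feel for why an aging kaon beam becomes essentially pure KL in the first place, here is a minimal sketch in Python (using approximate published lifetimes; the numbers are illustrative, not taken from the original experiments) of how differently the two components decay away:

```python
import numpy as np

# Approximate lifetimes: the KS lives ~570 times shorter than the KL,
# so after a nanosecond or so of proper time the beam is essentially all KL.
tau_S = 8.95e-11   # s, short-lived neutral kaon (approximate)
tau_L = 5.12e-8    # s, long-lived neutral kaon (approximate)

for t in (1e-10, 1e-9, 1e-8):   # proper times in seconds
    ks = np.exp(-t / tau_S)     # surviving KS fraction
    kl = np.exp(-t / tau_L)     # surviving KL fraction
    print(f"t = {t:.0e} s: KS fraction = {ks:.2e}, KL fraction = {kl:.3f}")
```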

When the results of Piccioni's experiment were duplicated by Robert Adair in the same year, regeneration as a physical phenomenon opened a new chapter in the study of particle physics. Later that year, Cronin and Fitch set out to study this regeneration. During the decay process, however, they observed a strange phenomenon.

According to a theory formulated in the 1950s by Murray Gell-Mann and Kazuo Nishijima, and then by Gell-Mann and Abraham Pais in 1955-1957, the KS meson was allowed to decay into two pions and the KL meson into three pions, so that certain quantum mechanical quantities (in particular, the CP eigenvalue) would be conserved.

For instance, the KL (a d quark and an s* antiquark) decay happens thus:

1. s* → u* + W+ (weak interaction)
2. W+ → d* + u
3. u → u + d + d* (the u emits a gluon, g, that splits into a d-d* pair; strong interaction)
4. d → d (the spectator quark)
    A Feynman diagram depicting the decay of a KL meson into three pions.

    In 1964, in their landmark experiment, Cronin and Fitch observed, however, that the KL meson was decaying into two pions, albeit at a frequency of 1-in-500 decays. This implied an indirect instance of CP-symmetry violation, and subsequently won the pair the 1980 Nobel Prize for Physics.

An important aspect of the observation of CP-symmetry violation in kaons is that the weak force is involved in the decay process (as seen above in the decay of the KL meson). Even though the kaon is composed of a quark and an antiquark, i.e., held together by the strong force, its decay is driven by the weak force, with the strong interaction entering only in the formation of the final-state pions.

    In all weak interactions, parity is not conserved. The interaction itself acts only on left-handed particles and right-handed anti-particles, and was parametrized in what is called the V-A Lagrangian for weak interactions, developed by Robert Marshak and George Sudarshan in 1957.

    Prof. Robert Marshak

In fact, parity (non-)conservation was at the heart of how the kaons were first understood. The decays of the charged kaons into pions can be depicted thus:

θ+ → π+ + π0
τ+ → π+ + π+ + π−

Here, the "+", "−" and "0" superscripts indicate each particle's electric charge. The two final states, however, have opposite parities: the pion's intrinsic parity is −1, so (with no orbital angular momentum) the two-pion state has parity (−1)² = +1 while the three-pion state has parity (−1)³ = −1.

When kaons were first investigated via their decay modes, these different final parities indicated that two different particles (then called θ and τ) were decaying differently. Over time, however, as increasingly precise measurements indicated that only one kaon (now called K+) was behind both decays, physicists concluded that the weak interaction did not respect parity: it produced one kind of decay some of the time and the other kind the rest of the time.

To elucidate: in particle physics, consider the squares of the amplitudes of two (CP-conjugate) transformations, B → f and B* → f*.

Here,

B = initial state (or particle); f = final state (or particle)
B* = initial antistate (or antiparticle); f* = final antistate (or antiparticle)
P = amplitude of the transformation B → f; Q = amplitude of the transformation B* → f*

Suppose each transformation can proceed along two paths, 1 and 2, with magnitudes A1 and A2. Each path carries a strong phase (S1 or S2), which is the same for particle and antiparticle, and a weak phase (W1 or W2), which flips sign for the antiparticle:

P = A1·exp[i(S1 + W1)] + A2·exp[i(S2 + W2)]
Q = A1·exp[i(S1 − W1)] + A2·exp[i(S2 − W2)]

Subtracting the squares (and applying some trigonometry):

|P|² − |Q|² = −4·A1·A2·sin(S1 − S2)·sin(W1 − W2)

The presence of the term sin(W1 − W2) is a sign that weak interactions, in any transformation that can occur in at least two ways (with different strong and weak phases), will violate CP-symmetry. (It's like having the option of two paths to reach a common destination: #1 is longer and fairly empty; #2 is shorter and congested. If their distances and congestedness are fairly comparable, then facing some congestion becomes inevitable.)
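A quick numerical check of that formula, written as a small Python sketch with made-up magnitudes and phases (not values for any real meson), shows that the rate difference disappears the moment either the strong-phase difference or the weak-phase difference vanishes:

```python
import numpy as np

def rate_difference(A1, A2, S1, S2, W1, W2):
    """|P|^2 - |Q|^2 for two interfering paths; weak phases flip sign for the antiparticle."""
    P = A1 * np.exp(1j * (S1 + W1)) + A2 * np.exp(1j * (S2 + W2))   # B  -> f
    Q = A1 * np.exp(1j * (S1 - W1)) + A2 * np.exp(1j * (S2 - W2))   # B* -> f*
    return abs(P) ** 2 - abs(Q) ** 2

print(rate_difference(1.0, 0.5, 0.3, 1.1, 0.2, 0.9))   # non-zero: CP violated
print(rate_difference(1.0, 0.5, 0.3, 0.3, 0.2, 0.9))   # equal strong phases: zero
print(rate_difference(1.0, 0.5, 0.3, 1.1, 0.2, 0.2))   # equal weak phases: zero

# Closed form: |P|^2 - |Q|^2 = -4*A1*A2*sin(S1 - S2)*sin(W1 - W2)
```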

Electromagnetism, gravitation and, as far as experiments can tell, the strong interaction do not display any features that could give rise to such a distinction between right and left. The case of the strong interaction is especially puzzling because the QCD Lagrangian, the function describing its dynamics, includes terms that could break CP-symmetry, yet no violation is observed. This is called the 'strong CP problem' and is one of the unsolved problems of physics.

    [youtube http://www.youtube.com/watch?v=KDkaMuN0DA0?rel=0]

(The best known resolution – one that doesn't resort to spacetime with two time-dimensions – is the Peccei-Quinn theory put forth by Roberto Peccei and Helen Quinn in 1977. It suggests that the QCD Lagrangian be extended so that its CP-violating parameter is dynamically driven to 0 or very close to 0.

This way, CP-symmetry is conserved during the strong interactions: the CP-symmetry "breakers" in the QCD Lagrangian have their terms cancelled by an emergent, dynamic field whose excitations are light pseudo-Goldstone bosons called axions.)

Now, kaons are a class of mesons whose composition includes a strange quark (or antiquark). Another class of mesons, called B-mesons, are identified by their composition including a bottom antiquark, and are also notable for the role they play in studies of CP-symmetry violations in nature. (Note: A meson composed of a bottom antiquark and a bottom quark is not called a B-meson but a bottomonium.)

    The six quarks, the fundamental (and proverbial) building blocks of matter

According to the Standard Model (SM) of particle physics, there are some particles – such as quarks and leptons – that carry a property called flavor. Mesons, which are composed of quarks and antiquarks, inherit an overall flavor from their composition as a result. The presence of non-zero flavor is significant because the SM permits quarks and leptons of one flavor to transmute into the corresponding quarks and leptons of another flavor, a process called oscillation (or mixing).

And the B-meson is no exception. Herein lies the rub: during oscillations, the B-meson is favored over its antiparticle counterpart. Given the CPT theorem's assurance that particles and antiparticles differ only in charge and handedness, not in mass, etc., the fact that the B*-meson prefers becoming the B-meson more than the B-meson prefers becoming the B*-meson indicates a matter-asymmetry. Put another way, the B-meson decays at a slower rate than the B*-meson. Put yet another way, matter made of B-mesons is more stable than antimatter made of B*-mesons.
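A minimal sketch of what such lopsided oscillations look like, using the standard two-state mixing formulae with toy numbers (the mixing parameter q/p is exaggerated here purely for illustration; it is not a measured value):

```python
import numpy as np

# Writing the mass eigenstates as p|B0> ± q|B0bar>, and neglecting the width
# difference, the probability of finding the "wrong" flavour at proper time t is
#   P(B0 -> B0bar) ~ |q/p|^2 * exp(-G*t) * sin^2(dm*t/2)
#   P(B0bar -> B0) ~ |p/q|^2 * exp(-G*t) * sin^2(dm*t/2)
# so any |q/p| != 1 makes the two oscillation rates unequal.

G, dm = 1.0, 0.77        # width and mass difference in units of 1/lifetime (toy values)
q_over_p = 1.01          # exaggerated for illustration; the real value is extremely close to 1

t = np.linspace(0, 5, 6)                       # proper time in lifetimes
mix = np.exp(-G * t) * np.sin(dm * t / 2) ** 2
print("B0    -> B0bar:", (abs(q_over_p) ** 2) * mix)
print("B0bar -> B0   :", (abs(1 / q_over_p) ** 2) * mix)
```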

Further, if the early universe started off as a perfect symmetry (in every way), then the asymmetric formation of B-mesons would have paved the way for matter to take precedence over anti-matter. This is one of the first instances of the weak interaction possibly interfering with the composition of the universe. How? By never preserving parity, and by mediating flavor-changing oscillations (in the form of the W boson).

    In this composite image of the Crab Nebula, matter and antimatter are propelled nearly to the speed of light by the Crab pulsar. The images came from NASA’s Chandra X-ray Observatory and the Hubble Space Telescope. (Photo by NASA; Caption from Howstuffworks.com)

    The prevalence of matter over antimatter in our universe is credited to a hypothetical process called baryogenesis. In 1967, Andrei Sakharov, a Soviet nuclear physicist, proposed three conditions for asymmetric baryogenesis to have occurred.

    1. Baryon-number violation
    2. Departure from thermal equilibrium
    3. C- and CP-symmetry violation

The baryon-number of a particle is defined as one-third of the difference between the number of quarks and the number of antiquarks that make up the particle. For a B-meson, composed of a bottom antiquark and a quark, the value is 0; for a baryon, composed of three quarks, the value is 1. Baryon-number violation, while theoretically possible, isn't considered in isolation of what is called "B – L" conservation ("L" is the lepton number, and is equal to the number of leptons minus the number of antileptons).

Now, say a proton decays into a pion and a positron. A proton's baryon-number is 1 and L-number is 0; a pion has both baryon- and L-numbers of 0; a positron has baryon-number 0 and L-number -1. Thus, neither the baryon-number nor the lepton-number is conserved, but their difference (1) definitely is. If this hypothetical process were ever to be observed, then baryogenesis would make the transition from hypothesis to reality (and the question of matter-asymmetry would be conclusively answered).
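For the record, here is that arithmetic spelled out as a tiny Python sketch (the particle table contains just the three particles in the example above):

```python
# Baryon number B = (quarks - antiquarks) / 3; lepton number L = leptons - antileptons.
# Check B, L and B - L before and after the hypothetical decay proton -> pion + positron.

particles = {
    # name: (quarks, antiquarks, leptons, antileptons)
    "proton":   (3, 0, 0, 0),
    "pion":     (1, 1, 0, 0),
    "positron": (0, 0, 0, 1),
}

def b_minus_l(names):
    B = sum((particles[n][0] - particles[n][1]) / 3 for n in names)
    L = sum(particles[n][2] - particles[n][3] for n in names)
    return B, L, B - L

print(b_minus_l(["proton"]))             # (1.0, 0, 1.0)
print(b_minus_l(["pion", "positron"]))   # (0.0, -1, 1.0)  -> B - L is conserved
```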

The quark-structure of a proton (notice that the two up-quarks carry different color charges)

Therefore, in recognition of the role of B-mesons – their ability to present direct evidence of CP-symmetry violation through asymmetric B-B* oscillations mediated by the weak force, and their ability to confirm or deny an "SM-approved" baryogenesis in the early universe – the so-called B-factories were built: collider-based machines whose only purpose is to spew out B-mesons so they can be studied in detail by high-precision detectors.

The earliest, and possibly best known, B-factories were constructed in the 1990s and shut down in the 2000s: the BaBar experiment at SLAC, Stanford (2008), and the Belle experiment at the KEKB collider in Japan (2010). In fact, Belle II is under construction and upon completion will boast the world's highest-luminosity collider experiment.

    The Belle detector (L) and the logo for Belle II under construction

    Equations generated thanks to the Daum equations editor.

  • The travails of science communication

    There’s an interesting phenomenon in the world of science communication, at least so far as I’ve noticed. Every once in a while, there comes along a concept that is gaining in research traction worldwide but is quite tricky to explain in simple terms to the layman.

Earlier this year, one such concept was the Higgs mechanism. Between December 13, 2011, when the first hints of the Higgs boson were announced, and July 4, 2012, when the discovery of the piquingly-named "God particle" was confirmed, the use of the phrase "cosmic molasses" was prevalent enough to prompt an annoyed (and struggling-to-make-sense) Daniel Sarewitz to hit back in Nature. While the article had a lot to say, and a lot more waiting there just to be rebutted, it did include this remark:

    If you find the idea of a cosmic molasses that imparts mass to invisible elementary particles more convincing than a sea of milk that imparts immortality to the Hindu gods, then surely it’s not because one image is inherently more credible and more ‘scientific’ than the other. Both images sound a bit ridiculous. But people raised to believe that physicists are more reliable than Hindu priests will prefer molasses to milk. For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

Sarewitz is not wrong to remark on the problem as such, but he is wrong in attempting to use it to argue the case for religion. Anyway: in bridging the gap between advanced physics, which is well-poised to "unlock the future", and public understanding, which is well-poised to fund the future, there is good journalism. But does it have to come with the twisting and turning of complex theory, maintaining only a tenuous relationship between what the metaphor implies and what reality is?

    The notion of a “cosmic molasses” isn’t that bad; it does get close to the original idea of a pervading field of energy whose forces are encapsulated under certain circumstances to impart mass to trespassing particles in the form of the Higgs boson. Even this is a “corruption”, I’m sure. But what I choose to include or leave out makes all the difference.

The significance of experimental physicists having probably found the Higgs boson is best conveyed in terms of what it means to the layman's daily life, more so than by trying continuously to get him interested in the Large Hadron Collider. Common, underlying curiosities will suffice to get one thinking about the nature of God, or the origins of the universe, and where the mass came from that bounced off Sir Isaac's head. Shrouding it in a cloud of unrelated concepts is only bound to make the physicists themselves sound defensive, as if they're struggling to explain something that only they will ever understand.

    In the process, if the communicator has left out things such as electroweak symmetry-breaking and Nambu-Goldstone bosons, it’s OK. They’re not part of what makes the find significant for the layman. If, however, you feel that you need to explain everything, then change the question that your post is answering, or merge it with your original idea, etc. Do not indulge in the subject, and make sure to explain your concepts as a proper fiction-story: Your knowledge of the plot shouldn’t interfere with the reader’s process of discovery.

    Another complex theory that’s doing the rounds these days is that of quantum entanglement. Those publications that cover news in the field regularly, such as R&D mag, don’t even do as much justice as did SciAm to the Higgs mechanism (through the “cosmic molasses” metaphor). Consider, for instance, this explanation from a story that appeared on November 16.

    Electrons have a property called “spin”: Just as a bar magnet can point up or down, so too can the spin of an electron. When electrons become entangled, their spins mirror each other.

The causal link has been omitted! If the story has set out to explain an application of quantum entanglement, which I think it has, then it has done a fairly good job. But what about entanglement-the-concept itself? Yes, it stands to lose a lot because many communicators seem to be divesting it of its intricacies and spending more time explaining why it's increasing in relevance in modern electronics and computation. If relevance is to mean anything, then debate has to exist – even if it seems antithetical to the deployment of the technology, as in the case of nuclear power.

Without understanding what entanglement means, there can be no informed recognition of its wonderful capabilities, and there can be no public dialog as to its optimum use to further public interests. When scientific research stops contributing to the latter, it will definitely face collapse, and that's the function, rather the purpose, that sensible science communication serves.

  • After less than 100 days, Curiosity renews interest in Martian methane

    A version of this story, as written by me, appeared in The Hindu on November 15, 2012.

    In the last week of October, the Mars rover Curiosity announced that there was no methane on Mars. The rover’s conclusion is only a preliminary verdict, although it is already controversial because of the implications of the gas’s discovery (or non-discovery).

The presence of methane is one of the most important prerequisites for life to have existed in the planet's past. Interest in the notion increased when Curiosity found signs, in the form of sedimentary deposits, that water may have flowed in the past through Gale Crater, the immediate neighbourhood of its landing spot.

    The rover’s Tunable Laser Spectrometer (TLS), which analysed a small sample of Martian air to come to the conclusion, had actually detected a few parts per billion of methane. However, recognising that the reading was too low to be significant, it sounded a “No”.

    In an email to this Correspondent, Adam Stevens, a member of the science team of the NOMAD instrument on the ExoMars Trace Gas Orbiter due to be launched in January 2016, stressed: “No orbital or ground-based detections have ever suggested atmospheric levels anywhere above 10-30 parts per billion, so we are not expecting to see anything above this level.”

    At the same time, he also noted that the 10-30 parts per billion (ppb) is not a global average. The previous detections of methane found the gas localised in the Tharsis volcanic plateau, the Syrtis Major volcano, and the polar caps, locations the rover is not going to visit. What continues to keep the scientists hopeful is that methane on Mars seems to get replenished by some geochemical or biological source.

The TLS will also have an important role to play in the future. At some point, the instrument will go into a higher-sensitivity operating mode and make measurements of higher significance by reducing errors.

    It is pertinent to note that scientists still have an incomplete understanding of Mars’s natural history. As Mr. Stevens noted, “While not finding methane would not rule out extinct or extant life, finding it would not necessarily imply that life exists or existed.”

    Apart from methane, there are very few “bulk” signatures of life that the Martian geography and atmosphere have to offer. Scientists are looking for small fossils, complex carbon compounds and other hydrocarbon gases, amino acids, and specific minerals that could be suggestive of biological processes.

    While Curiosity has some fixed long-term objectives, they are constantly adapted according to what the rover finds. Commenting on its plans, Mr. Stevens said, “Curiosity will definitely move up Aeolis Mons, the mountain in the middle of Gale Crater, taking samples and analyses as it goes.”

    Curiosity is not the last chance to look more closely for methane in the near future, however.

    On the other side of the Atlantic, development of the ExoMars Trace Gas Orbiter (TGO), with which Mr. Stevens is working, is underway. A collaboration between the European Space Agency and the Russian Federal Space Agency, the TGO is planned to deploy a stationary Lander that will map the sources of methane and other gases on Mars.

    Its observations will contribute to selecting a landing site for the ExoMars rover due to be launched in 2018.

Even as Curiosity completed 100 days on Mars on November 14, it still has 590 days to go. It has already attracted attention from diverse fields of study, and there is no doubt that over the short trip from the rim of Gale Crater, where it is now, to the peak of Aeolis Mons, Curiosity will change our understanding of the enigmatic red planet.

  • The cost of solutions

    Hereby, a new metric: Solution cost (SC)

    SC = Cost of finding the solution to a particular problem – (Rate at which the applications of the solution become cheaper * t),

    t = No. of years across which SC is being tracked.
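Transcribed directly into code, this is all the metric amounts to (a trivial sketch; the figures below are made up):

```python
# SC = cost of finding the solution
#      - (rate at which its applications become cheaper per year * years tracked)

def solution_cost(discovery_cost, cheapening_rate_per_year, years):
    return discovery_cost - cheapening_rate_per_year * years

# e.g. a $10M solution whose applications get $1.5M/year cheaper, tracked over 4 years
print(solution_cost(10_000_000, 1_500_000, 4))  # -> 4000000
```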

     

  • A latent monadology: An extended revisitation of the mind-body problem

    Image by Genis Carreras

    In an earlier post, I’d spoken about a certain class of mind-body interfacing problems (the way I’d identified it): evolution being a continuous process, can psychological changes effected in a certain class of people identified solely by cultural practices “spill over” as modifications of evolutionary goals? There were some interesting comments on the post, too. You may read them here.

    However, the doubt was only the latest in a series of others like it. My interest in the subject was born with a paper I’d read quite a while ago that discussed two methods either of which humankind could possibly use to recreate the human brain as a machine. The first method, rather complexly laid down, was nothing but the ubiquitous recourse called reverse-engineering. Study the brain, understand what it’s made of, reverse all known cause-effect relationships associated with the organ, then attempt to recreate the cause using the effect in a laboratory with suitable materials to replace the original constituents.

    The second method was much more interesting (this bias could explain the choice of words in the previous paragraph). Essentially, it described the construction of a machine that could perform all the known functions of the brain. Then, this machine would have to be subjected to a learning process, through which it would acquire new skills while it retained and used the skills it’s already been endowed with. After some time, if the learnt skills, so chosen to reflect real human skills, are deployed by the machine to recreate human endeavor, then the machine is the brain.

    Why I like this method better than the reverse-engineered brain is because it takes into account the ability to learn as a function of the brain, resulting in a more dynamic product. The notion of the brain as a static body is definitively meaningless as, axiomatically, conceiving of it as a really powerful processor stops short of such Leibnizian monads as awareness and imagination. While these two “entities” evade comprehension, subtracting the ability to, yes, somehow recreate them doesn’t yield a convincing brain as it is. And this is where I believe the mind-body problem finds solution. For the sake of argument, let’s discuss the issue differentially.

    Spherical waves coming from a point source. The solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the solution for a spherical wave through the use of partial differential equations. (Image by Oleg Alexandrov on Wikimedia, including MATLAB source code.)

    Hold as constant: Awareness
    Hold as variable: Imagination

    The brain is aware, has been aware, must be aware in the future. It is aware of the body, of the universe, of itself. In order to be able to imagine, therefore, it must concurrently trigger, receive, and manipulate different memorial stimuli to construct different situations, analyze them, and arrive at a conclusion about different operational possibilities in each situation. Note: this process is predicated on the inability of the brain to birth entirely original ideas, an extension of the fact that a sleeping person cannot be dreaming of something he has not interacted with in some way.

    Hold as constant: Imagination
    Hold as variable: Awareness

    At this point, I need only prove that the brain can arrive at an awareness of itself, the body, and the universe, through a series of imaginative constructs, in order to hold my axiom as such. So, I’m going to assume that awareness came before imagination did. This leaves open the possibility that with some awareness, the human mind is able to come up with new ways to parse future stimuli, thereby facilitating understanding and increasing the sort of awareness of everything that better suits one’s needs and environment.

    Now, let’s talk about the process of learning and how it sits with awareness, imagination, and consciousness, too. This is where I’d like to introduce the metaphor called Leibniz’s gap. In 1714, Gottfried Leibniz’s ‘Principes de la Nature et de la Grace fondés en raison‘ was published in the Netherlands. In the work, which would form the basis of modern analytic philosophy, the philosopher-mathematician argues that there can be no physical processes that can be recorded or tracked in any way that would point to corresponding changes in psychological processes.

    … supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.

    If any technique was found that could span the distance between these two concepts – the physical and the psychological – then Leibniz says the technique will effectively bridge Leibniz’s gap: the symbolic distance between the mind and the body.

    Now it must be remembered that the German was one of the three greatest, and most fundamentalist, rationalists of the 17th century: the other two were Rene Descartes and Baruch Spinoza (L-D-S). More specifically: All three believed that reality was composed fully of phenomena that could be explained by applying principles of logic to a priori, or fundamental, knowledge, subsequently discarding empirical evidence. If you think about it, this approach is flawless: if the basis of a hypothesis is logical, and if all the processes of development and experimentation on it are founded in logic, then the conclusion must also be logical.

    (L to R) Gottfried Leibniz, Baruch Spinoza, and Rene Descartes

    However, where this model does fall short is in describing an anomalous phenomenon that is demonstrably logical but otherwise inexplicable in terms of the dominant logical framework. This is akin to Thomas Kuhn’s philosophy of science: a revolution is necessitated when enough anomalies accumulate that defy the reign of an existing paradigm, but until then, the paradigm will deny the inclusion of any new relationships between existing bits of data that don’t conform to its principles.

When studying the brain (and when trying to recreate it in a lab), Leibniz's gap, as understood by L-D-S, cannot be applied for various reasons. First: the rationalist approach doesn't work because, while we're seeking logical conclusions that evolve from logical starts, we're liable to disregard the phenomenon called emergence that is prevalent in all systems built of many simple components. In fact, ironically, the L-D-S approach might be more suited for grounding empirical observations in logical formulae, because it is only then that we run no risk of missing emergent paradigms.

    “Some dynamical systems are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions will lead to orbits that converge to this chaotic region.” – Wikipedia

Second: It is important not to disregard that humans do not know much about the brain. As elucidated in the less favored of the two methods I've described above, were we to reverse-engineer the brain, we can still only make the new brain do what we already know the old one does. The L-D-S approach takes complete knowledge of the brain for granted, and works post hoc ergo propter hoc ("after this, therefore because of this") to explain it.

    [youtube http://www.youtube.com/watch?v=MygelNl8fy4?rel=0]

    Therefore, in order to understand the brain outside the ambit of rationalism (but still definitely within the ambit of empiricism), introspection need not be the only way. We don’t always have to scrutinize our thoughts to understand how we assimilated them in the first place, and then move on from there, when we can think of the brain itself as the organ bridging Leibniz’s gap. At this juncture, I’d like to reintroduce the importance of learning as a function of the brain.

    To think of the brain as residing at a nexus, the most helpful logical frameworks are the computational theory of the mind (CTM) and the Copenhagen interpretation of quantum mechanics (QM).

    xkcd #45 (depicting the Copenhagen interpretation)

    In the CTM-framework, the brain is a processor, and the mind is the program that it’s running. Accordingly, the organ works on a set of logical inputs, each of which is necessarily deterministic and non-semantic; the output, by extension, is the consequence of an algorithm, and each step of the algorithm is a mental state. These mental states are thought to be more occurrent than dispositional, i.e., more tractable and measurable than the psychological emergence that they effect. This is the break from Leibniz’s gap that I was looking for.

That the inputs are non-semantic, i.e., interpreted with no regard for what they mean, doesn't mean the brain is incapable of processing meaning or conceiving of it in any other way in the CTM-framework. The solution is a technical notion called formalization, which the Stanford Encyclopedia of Philosophy describes thus:

… formalization shows us how semantic properties of symbols can (sometimes) be encoded in syntactically-based derivation rules, allowing for the possibility of inferences that respect semantic value to be carried out in a fashion that is sensitive only to the syntax, and bypassing the need for the reasoner to employ semantic intuitions. In short, formalization shows us how to tie semantics to syntax.
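Here is a toy illustration of that idea in Python (my own example, not from the encyclopedia entry): a derivation rule that operates purely on the shape of symbol strings, yet whatever meaning we attach to those symbols survives the inference.

```python
# A purely syntactic derivation rule. The rule never "knows" what the symbols
# mean, yet any interpretation that makes the premises true also makes the
# derived conclusions true -- semantics riding on syntax.

def derive(premises):
    """Repeatedly apply modus ponens: from X and (X, '->', Y), derive Y."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(known):
            if isinstance(s, tuple) and len(s) == 3 and s[1] == "->" \
                    and s[0] in known and s[2] not in known:
                known.add(s[2])
                changed = True
    return known

facts = {
    "socrates_is_human",
    ("socrates_is_human", "->", "socrates_is_mortal"),
}
print(derive(facts))  # 'socrates_is_mortal' appears without any appeal to meaning
```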

A corresponding theory of networks that goes with such a philosophy of the brain is connectionism. It was developed by Walter Pitts and Warren McCulloch in 1943, and subsequently popularized by Frank Rosenblatt (in his 1957 conceptualization of the Perceptron, the simplest feedforward neural network), and by James McClelland and David Rumelhart ('Learning the past tenses of English verbs: Implicit rules or parallel distributed processing', in B. MacWhinney (Ed.), Mechanisms of Language Acquisition (pp. 194-248), Mahwah, NJ: Erlbaum) in 1987.

    (L to R) Walter Pitts (L-top), Warren McCulloch (L-bottom), David Rumelhart, and James McClelland
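To make "connectionism in its simplest form" concrete, here is a minimal perceptron-style sketch in Python (purely illustrative; Rosenblatt's 1957 Perceptron was a hardware machine, not these few lines) that learns the logical AND function through error-driven weight updates:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # logical AND

w, b, lr = np.zeros(2), 0.0, 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)   # step activation: fire or don't
        w += lr * (target - pred) * xi      # nudge weights by the error
        b += lr * (target - pred)

print([int(np.dot(w, xi) + b > 0) for xi in X])  # -> [0, 0, 0, 1]
```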

    As described, the L-D-S rationalist contention was that fundamental entities, or monads or entelechies, couldn’t be looked for in terms of physiological changes in brain tissue but in terms of psychological manifestations. The CTM, while it didn’t set out to contest this, does provide a tensor in which the inputs and outputs are correlated consistently through an algorithm with a neural network for an architecture and a Turing/Church machine for an algorithmic process. Moreover, this framework’s insistence on occurrent processes is not the defier of Leibniz: the occurrence is presented as antithetical to the dispositional.

    Jerry Fodor

    The defier of Leibniz is the CTM itself: if all of the brain’s workings can be elucidated in terms of an algorithm, inputs, a formalization module, and outputs, then there is no necessity to suppress any thoughts to the purely-introspectionist level (The domain of CTM, interestingly, ranges all the way from the infraconscious to the set of all modular mental processes; global mental processes, as described by Jerry Fodor in 2000, are excluded, however).

Where does quantum mechanics (QM) come in, then? Good question. The brain is a processor. The mind is a program. The architecture is a neural network. The process is that of a Turing machine. But how is the information received and transmitted? Since we were speaking of QM, more specifically the Copenhagen interpretation of it, I suppose it's obvious that I'm talking about electrons, and about electronic and electrochemical signals being transmitted through sensory, motor, and interneurons. While we're assuming that the brain is definable by a specific processual framework, we still don't know if the interaction between the algorithm and the information is classical or quantum.

While the classical outlook is more favorable because almost all other parts of the body are fully understood in terms of classical biology, there could be quantum mechanical forces at work in the brain because – as I've said before – we're in no position to confirm or deny whether its operation is purely classical or purely non-classical. However, assuming that QM is at work, associated aspects of the mind, such as awareness, consciousness, and imagination, can be described by quantum mechanical notions such as wavefunction-collapse and Heisenberg's uncertainty principle – more specifically, by strong and weak observations on quantum systems.

The wavefunction can be understood as an avatar of the state-function in the context of QM. However, while the state-function can be observed continuously in the classical sense, the wavefunction, when subjected to an observation, collapses. When this happens, what was earlier a superposition of multiple eigenstates, each a metaphor for a different physical reality, becomes resolved, in a manner of speaking, into one. This counter-intuitive principle was best summarized by Erwin Schrodinger in 1935 as a thought experiment titled…

    [youtube http://www.youtube.com/watch?v=IOYyCHGWJq4?rel=0]

This aspect of observation, as is succinctly explained in the video, is what forces nature's hand. Now, we pull in Werner Heisenberg and his notoriously annoying principle of uncertainty: if either of two conjugate parameters of a particle is measured precisely, the value of the other parameter is disturbed. However, when Heisenberg formulated the principle heuristically in 1927, he also thankfully formulated a limit of uncertainty. If a measurement is gentle enough to stay within the minuscule leeway offered by that limit, it extracts only partial information about the system and disturbs it only slightly. Such a measurement is called a "weak" measurement.
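A minimal sketch of the distinction in Python (my own toy model of a single qubit, using a standard two-outcome Kraus-operator description of a weak measurement; it is not meant as a model of anything in the brain):

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit in an equal superposition of the two eigenstates "up" (index 0) and "down" (index 1).
psi = np.array([1.0, 1.0]) / np.sqrt(2)

def strong_measure(psi):
    """Projective ("strong") measurement: the Born rule picks an outcome and
    the superposition collapses fully onto the corresponding eigenstate."""
    outcome = 0 if rng.random() < abs(psi[0]) ** 2 else 1
    collapsed = np.zeros(2)
    collapsed[outcome] = 1.0
    return outcome, collapsed

def weak_measure(psi, eps=0.1):
    """Weak measurement, modelled with Kraus operators M± = sqrt((1 ± eps*Z)/2):
    the outcome carries only a little information and the state is only nudged."""
    M_plus = np.diag(np.sqrt([(1 + eps) / 2, (1 - eps) / 2]))
    M_minus = np.diag(np.sqrt([(1 - eps) / 2, (1 + eps) / 2]))
    p_plus = np.linalg.norm(M_plus @ psi) ** 2
    outcome = +1 if rng.random() < p_plus else -1
    new_psi = (M_plus if outcome == +1 else M_minus) @ psi
    return outcome, new_psi / np.linalg.norm(new_psi)

print(strong_measure(psi))         # jumps to a definite eigenstate
print(weak_measure(psi))           # stays close to the original superposition
print(weak_measure(psi, eps=1.0))  # eps -> 1 recovers the strong, collapsing case
```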

    Now, in the brain, if our ability to imagine could be ascribed – figuratively, at least – to our ability to “weakly” measure the properties of a quantum system via its wavefunction, then our brain would be able to comprehend different information-states and eventually arrive at one to act upon. By extension, I may not be implying that our brain could be capable of force-collapsing a wavefunction into a particular state… but what if I am? After all, the CTM does require inputs to be deterministic.

    How hard is it to freely commit to a causal chain?

    By moving upward from the infraconscious domain of applicability of the CTM to the more complex cognitive functions, we are constantly teaching ourselves how to perform different kinds of tasks. By inculcating a vast and intricately interconnected network of simple memories and meanings, we are engendering the emergence of complexity and complex systems. In this teaching process, we also inculcate the notion of free-will, which is simply a heady combination of traditionalism and rationalism.

    While we could be, with the utmost conviction, dreaming up nonsensical images in our heads, those images could just as easily be the result of parsing different memories and meanings (that we already know), simulating them, “weakly” observing them, forcing successive collapses into reality according to our traditional preferences and current environmental stimuli, and then storing them as more memories accompanied by more semantic connotations.

  • A muffling of the monsoons

New research conducted at the Potsdam Institute for Climate Impact Research suggests that global warming could cause frequent and severe failures of the Indian summer monsoon in the next two centuries.

The study joins a growing body of work, conducted by different research groups over the last five years, that demonstrates a negative relationship between the two phenomena.

The researchers, Jacob Schewe and Anders Levermann, defined failure as a decrease in rainfall of 40 to 70 per cent below normal levels. Their findings, published on November 6 in Environmental Research Letters, show that as we move into the 22nd century, increasing temperatures contribute to a strengthening Pacific Walker circulation that brings higher pressures over eastern India, which weakens the monsoon.

The Walker circulation was first proposed by Sir Gilbert Walker over 70 years ago. It dictates that over regions such as the Indian peninsula, changes in temperature and changes in pressure and rainfall feed back into each other to bring about a cyclic variation in rainfall levels. The result of this is a seasonal high pressure over the western Indian Ocean.

Now, almost once every five years, the eastern Pacific Ocean undergoes a warm phase, accompanied by a see-saw in air pressure across the Pacific. This is called the El Nino Southern Oscillation.

    In years when El Nino occurs, the high pressure over the western Indian Ocean shifts eastward and brings high pressure over land, suppressing the monsoon.

The researchers' simulations showed that as temperatures increase in the future, the Walker circulation brings more high pressure over India on average, even though the strength of El Nino itself isn't shown to increase.

    The researchers described the changes they observed as unprecedented in the Indian Meteorological Department’s data, which dates back to the early-1900s. As Schewe, lead author of the study, commented to Phys Org, “Our study points to the possibility of even more severe changes to monsoon rainfall caused by climatic shifts that may take place later this century and beyond.”

    A study published in 2007 by researchers at the Lawrence Livermore National Laboratory, California, and the International Pacific Research Centre, Hawaii, showed an increase in rainfall levels throughout much of the 21st century followed by a rapid decrease. This is consistent with the findings of Schewe and Levermann.

    Similarly, a study published in April 2012 by the Chinese Academy of Sciences demonstrated the steadily weakening nature of the Indian summer monsoon since 1860 owing to rising global temperatures.

The Indian economy, being predominantly agrarian, depends greatly on the summer monsoon, which lasts from June to September. The country last faced a widespread drought due to insufficient rainfall in the summer of 2009, when it had to import sugar, pushing world prices for the commodity to a 30-year high.

  • Eschatology

The meaning of the blog is changing by the day. While some argue that long-form journalism is in, I think it's about extreme-form journalism. By this, I only mean that long-forms and short-forms are increasingly doing better, while the in-betweens are having to constantly redefine themselves, nebulously poised as they are between one mode that takes seconds to go viral and another mode that engages the intellectual and the creative for periods long enough to prompt protracted introspection on all kinds of things.

Having said this, it is inevitable that this blog, trapped between an erstwhile obsessed blogger and a job that demands most of his time, will eventually cascade into becoming an archive: a repository of links, memories, stories, and a smatter of comments on a variety of subjects – as and when each one caught this blogger's fancy. I understand I'm making a mountain out of a molehill. However, this episode concludes a four-year-old tradition of blogging at least 2,000 words a week, something that avalanched into a habit, and ultimately into a career.

    Thanks for reading. Much more will come, but just not as often as it has.

  • Backfiring biofuels in the EU

    A version of this article as written by me appeared in The Hindu on November 8, 2012.

    The European Union (EU) announced on October 17 that the amount of biofuels that will be required to make up the transportation energy mix by 2020 has been halved from 10 per cent to 5 per cent. The rollback mostly affects first-generation biofuels, which are produced from food crops such as corn, sugarcane, and potato.

    The new policy is in place to mitigate the backfiring of switching from less-clean fossil fuels to first-generation biofuels. An impact assessment study conducted in 2009-2012 by the EU found that greenhouse gas emissions were on the rise because of conversion of agricultural land to land for planting first-generation biofuel crops. In the process, authorities found that large quantities of carbon stock had been released into the atmosphere because of forest clearance and peatland-draining.

Moreover, because food-production has now been shifted to take place in another location, transportation and other logistical fuel costs will have been incurred, and the emissions due to these will also have to be factored in. These numbers fall under the causal ambit of indirect land-use change (ILUC), which also includes the conversion of previously unarable land to fertile land. On October 17, The Guardian published an article that called the EU's proposals "watered down" because it had chosen not to penalize fuel suppliers involved in the ILUC.

This writer believes that's only fair – that the EU not penalize the fuel suppliers – considering the "farming" of first-generation biofuels was enabled, indeed incentivized, by the EU, which would have well known that agricultural processes would be displaced and that agricultural output would drop in certain pockets. The backfiring happened only because the organization had underestimated the extent to which collateral emissions would outweigh the carbon credits saved by biofuel use. (As for not enforcing legal consequences on those who manufacture only first-generation biofuels but go on to claim carbon credits arising from second-generation biofuel use as well: continue reading.)

    Anyway, as a step toward achieving the new goals, the EU will impose an emissions threshold on the amount of carbon stock that can be released when some agricultural land is converted for the sake of biofuel crops. Essentially, this will exclude many biofuels from entering the market.

While this move reduces the acreage that "fuel-farming" eats up, it is not without criticism. As Tracy Carty, a spokeswoman for the poverty action group Oxfam, said over Twitter, "The cap is lower than the current levels of biofuels use and will do nothing to reduce high food prices." Earlier, especially in the US, farmers looking to cash in on a steady (and state-assured) demand, as opposed to the volatility of food prices, had resorted to first-generation biofuels such as biodiesel.

    The October 17 announcement effectively revises the Renewable Energy Directive (RED), 2009, which first required that biofuels constitute 10 per cent of the alternate energy mix by 2020.

The EU is now incentivising second-generation biofuels, mostly in an implied manner; these are manufactured from feedstocks such as crop residues, organic waste, algae, and woody materials, and do not interfere with food-production. The RED also requires that biofuels that replace fossil fuels deliver greenhouse-gas savings of at least 35 per cent. Now, the EU has revised that number to increase to 50 per cent in 2017, and to 60 per cent after 2020. This is a clear sign that first-generation biofuels, which enter the scene with a bagful of emissions, will be phased out while their second-generation counterparts take their places – at least, this ought to happen, considering the profitability of first-generation alternatives is set to go down.

However, the research concerning high-performance biofuels is still nascent. As of now, it has been aimed at extracting the maximum amount of fuel from available stock, not as much at improving the efficiency of the fuel itself. This is especially observed with the extraction of ethanol from wood, high-efficiency microalgae for biodiesel production, the production of butanol from biomass with help from the bacterium Clostridium acetobutylicum, etc. – where more is known about the extraction efficiency and process economics than about the performance of the fuel itself. Perhaps the new proposals will siphon some research out of the biotech community in the near future.

    Like the EU, the USA also has a biofuel-consumption target set for 2022, by when it requires that 36 billion gallons of renewable fuel be mixed with transport fuel, up from the 9 billion gallons mandated by 2008. More specifically, under the Energy Independence and Security Act (EISA) of 2007,

• RFS program to include diesel, in addition to gasoline;
• Establish new categories of renewable fuel, and set separate volume requirements for each one;
• EPA to apply lifecycle greenhouse gas performance threshold standards to ensure that each category of renewable fuel emits fewer greenhouse gases than the petroleum fuel it replaces.

(The last bit – the lifecycle greenhouse-gas thresholds – is what the EU has now included in its policies.)

    However, a US National Research Council report released on October 24 found that if algal biofuels, second-generation fluids whose energy capacity lies between petrol’s and diesel’s, have to constitute as much as 5 percent of the country’s alternate energy mix, “unsustainable demands” would be placed on “energy, water and nutrients”.

Anyway, two major energy blocs – the USA and the EU – are leading the way to phase out first-generation biofuels and replace them completely with their second-generation counterparts. In fact, two other large-scale producers of biofuels, Indonesia and Argentina, from which the EU imports 22 per cent of its biofuels, could also be forced to ramp up investment and research in line with their buyer's interests. As Gunther Oettinger, the EU Energy Commissioner, remarked, "This new proposal will give new incentives for best-performing biofuels." The announcement also affirms that till 2020, no major changes will be effected in the biofuels sector, and that post-2020, only second-generation biofuels will be supported, paving the way for sustained and focused development of high-efficiency, low-emission alternatives to fossil fuels.

    (Note: The next progress report of the European Commission on the environmental impact of the production and consumption of biofuels in the EU is due on December 31, 2014.)

  • A cultured evolution?

Can perceptions arising out of cultural needs override evolutionary goals in the long run? For example, in India, the average marriage-age is now in the late 20s. Here, the (popular) tradition is to frown upon, and even ostracize, those who would engage in premarital sex. So, after 10,000 years, say, are Indians more likely to have the development of their sexual desires postponed to their late 20s (if they are not exposed to any avenues of sexual expression)? This question arose as a consequence of a short discussion with some friends on an article that appeared in SciAm: about whether (heterosexual) men and women could stay "just friends". To paraphrase the principal question in the context of the SciAm-featured "study":

    1. Would you agree that the statistical implications of gender-sensitive studies will vary from region to region simply because the reasons on the basis of which such relationships can be established vary from one socio-political context to another?
    2. Assuming you have agreed to the first question: Would you contend that the underlying biological imperatives can, someday, be overridden altogether in favor of holding up cultural paradigms (or vice versa)?

    Is such a thing even possible? (To be clear: I’m not looking for hypotheses and conjectures; if you can link me to papers that support your point of view, that’d be great.)

  • Plotting a technological history of journalism

    Electric telegraph

    • July 27, 1866 – SS Great Eastern completes laying of Transatlantic telegraphic cables
• By 1852, American telegraphic wire had grown from 40 miles in 1846 to 23,000 miles
• Between 1849 and 1869, telegraphic mileage increased by 108,000 miles

The cost of information transmission fell as the technology became more ubiquitous and global communication became nearly instantaneous.

    • Usefulness of information was preserved through transmission-time, increasing its shelf-life, making production of information a significant task
    • Led to a boost in trade as well

    Advent of war – especially political turmoil in Europe and the American Civil War – pushed rapid developments in its technology.

    These last mentioned events led to establishment of journalism as a recognized profession

    • Because it focused finally on locating and defining local information,
    • Because transmission of information could now be secured through other means,
    • And prompted newspaper establishments to install information-transmission services of their own –
    • Leading to proliferation of competition and an emphasis on increase of the quality of reportage

    The advent of the electric telegraph, a harbinger of the “small world” phenomenon, did not contribute to the refinement of journalistic genres as much as it helped establish them.

    In the same period, rather from 1830 to 1870, significant political events that transpired alongside the evolution of communication, and were revolutionized by it, too, included the rapid urbanization in the USA and Great Britain (as a result of industrialization), the Belgian revolution, the first Opium War, the July revolution, the Don Pacifico affair, and the November uprising.

    Other notable events include the laying of the Raleigh-Gaston railroad in North Carolina and advent of the first steam locomotives in England. Essentially, the world was ready to receive its first specialized story-tellers.

    Photography

    Picture on the web from mousebilenadam

Photography developed from the mid-19th century onward. While it did not have as drastic an impact as the electric telegraph, it has instead been undergoing a slew of changes whose impetus comes from technological advancement. While black-and-white photography was prevalent for quite a while, it was color photography that refocused interest in using the technology to augment story-telling.

    • Using photography to tell a story involves a trade-off between neutrality and subjective opinions
    • A photographer, in capturing his subject, first identifies the subject such that it encapsulates emotions that he is looking for

Photography establishes a relationship with some knowledge of some reality and prevents interpretations from taking any other shape:

    • As such a mode of story-telling, it is a powerful tool only when the right to do so is well-exercised, and there is no given way of determining that absolutely
• Through a lens is a powerful way to capture socio-history, and thus preserve it in a columbarium of other such events, creating, in a manner of speaking, something akin to Asimov's psycho-history
    • What is true in the case of photo-journalism is only partly true in the case of print-based story-telling

Photography led to the establishment of perspectives, of the ability of mankind to preserve events as well as their connotations, imbuing new power into large-scale movements and revolutions. Without the ability to visualize connotations, adversarial journalism – and the establishment of the Fourth Estate as it were – may not have become as powerful as it currently is, much of that power resting on photography's ability to provide often unambiguous evidence for or against arguments.

    • A good birthplace of the discussion on photography’s impact on journalism is Susan Sontag’s 1977 book, On Photography.
    • Photography also furthered interest in the arts, starting with the contributions of William Talbot.

    Television

    Although television sets were introduced in the USA in the 1930s, a good definition of its impact came in the famous Wasteland Speech in 1961 by Newton Minow, speaking at a convention of the National Association of Broadcasters.

    When television is good, nothing — not the theater, not the magazines or newspapers — nothing is better.

    But when television is bad, nothing is worse. I invite each of you to sit down in front of your own television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.

    You will see a procession of game shows, formula comedies about totally unbelievable families, blood and thunder, mayhem, violence, sadism, murder, western bad men, western good men, private eyes, gangsters, more violence, and cartoons. And endlessly commercials — many screaming, cajoling, and offending. And most of all, boredom. True, you’ll see a few things you will enjoy. But they will be very, very few. And if you think I exaggerate, I only ask you to try it.

It was in occupying this space, the "vast wasteland", that journalism and television came together to redefine news-delivery.

    It is a powerful tool for the promotion of socio-political agendas: this was most effectively demonstrated during the Vietnam War during which, as Michael Mandelbaum wrote in 1982,

    … regular exposure to the early realities of battle is thought to have turned the public against the war, forcing the withdrawal of American troops and leaving the way clear for the eventual Communist victory.

    This opinion, as expressed by then-president Lyndon Johnson, was also defended by Mandelbaum as a truism in the same work (Print Culture and Video Culture, vol. 111, no. 4, Daedalus, pp. 157-158).

    In the entertainment versus informative programming debate, an important contribution was made by Neil Postman in his 1985 work Amusing Ourselves to Death, wherein he warned of the decline in humankind’s ability to communicate and share serious ideas and the role television played in this decline because of its ability to only transfer information, not interaction.

    Watch here…

    [youtube http://www.youtube.com/watch?v=FRabb6_Gr2Y?rel=0]

    And continued here…

    [youtube http://www.youtube.com/watch?v=zHd31L6XPEQ?rel=0]


Arguing along similar lines in his landmark 1990 speech at a computer science meeting in Germany, Postman said,

    Everything from telegraphy and photography in the 19th century to the silicon chip in the twentieth has amplified the din of information, until matters have reached such proportions today that, for the average person, information no longer has any relation to the solution of problems.

    In his conclusion, he blamed television for severing the tie between information and action.

    The advent of the television also played a significant role in American feminism.