Month: July 2022

  • 65 years of the BCS theory

    Thanks to an arithmetic mistake, I thought 2022 was the 75th anniversary of the invention (or discovery?) of the BCS theory of superconductivity. It’s really the 65th anniversary, but since I’d worked myself up to write about it, I’m going to. 🤷🏽‍♂️ It also helps that the theory makes sense of a remarkable fact of nature: what is weirdly a macroscopic effect of microscopic causes.

    There are several ways to classify superconductors – materials that conduct electricity with zero resistance under certain conditions. One of them is as conventional or unconventional. A superconductor is conventional if BCS theory can explain its superconductivity. ‘BCS’ are the initials of the theory’s three originators: John Bardeen, Leon Cooper and John Robert Schrieffer. BCS theory explains (conventional) superconductivity by explaining how the electrons in a material enter a collective superfluidic state.

    At room temperature, the valence electrons flow around a material, occasionally scattered by the lattice of atomic nuclei or by impurities. We know this scattering as electrical resistance.

    An illustration of a lattice of sodium and chlorine atoms in a sodium chloride crystal. Credit: Benjah-bmm27, public domain

    The electrons also steer clear of each other because of the repulsion of like charges (Coulomb repulsion).

    When the material is cooled below a critical temperature, however, vibrations in the atomic lattice encourage the electrons to become paired. This may defy what we learnt in high school – that like charges repel – but the picture is a little more complicated, and it might make more sense if we adopt the lens of energy instead.

    A system will favour a state in which it has lower energy over one in which it has more energy. When two carriers of like charge, like two electrons, approach each other, they repel each other more strongly the closer they get. This repulsion increases the system’s energy (specifically, its electrostatic potential energy).
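    Concretely, for two electrons a distance r apart, the electrostatic potential energy is the standard Coulomb term, which grows without bound as r shrinks:

    \[ U(r) = \frac{e^2}{4\pi\varepsilon_0 r} \]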

    In some materials, conditions can arise in which two electrons can pair up – become correlated with each other – across relatively long distances, without coming close to each other, rendering the Coulomb repulsion irrelevant. This correlation happens as a result of the electrons’ effect on their surroundings. As an electron moves through the lattice of positively charged atomic nuclei, it exerts an attractive force on the nuclei, which respond by tending towards the electron. This increases the amount of positive potential near the electron, which attracts another electron nearby to move closer as well. If the two electrons have opposite spins, they become correlated as a Cooper pair, kept that way by the attractive potential imposed by the atomic lattice.

    Leon Cooper explained that neither the source of this potential nor its strength matters – as long as it is attractive, and the other conditions hold, the electrons will team up into Cooper pairs. In terms of the system’s energy, the paired state is said to be energetically favourable, meaning that below the critical temperature the system as a whole has a lower energy than if the electrons were unpaired.

    Keeping the material cooled to below this critical temperature is important: while the paired state is energetically favourable, the state itself arises only below the critical temperature. Above it, the electrons can’t access this state at all because they have too much kinetic energy. (The temperature of a material is a measure of the average kinetic energy of its constituent particles.)
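    (In the classical picture, for instance, the average kinetic energy per particle scales linearly with the absolute temperature – a heuristic here, since electrons in a metal are quantum objects:

    \[ \langle E_k \rangle = \tfrac{3}{2} k_B T \]

    where k_B is the Boltzmann constant.)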

    Cooper’s theory of the electron pairs fit into John Bardeen’s theory, which sought to explain changes in the energy states of a material as it goes from being non-superconducting to superconducting. Cooper had also described the formation of electron pairs one at a time, so to speak, and John Robert Schrieffer’s contribution was to work out a mathematical way to explain the formation of millions of Cooper pairs and their behaviour in the material.

    The trio consequently published its now-famous paper, ‘Microscopic Theory of Superconductivity’, on April 1, 1957.

    (I typo-ed this as 1947 on a calculator, which spit out the number of years since to be 75. 😑 One could have also expected me to remember that this is India’s 75th year of independence and that BCS theory was created a decade after 1947, but the independence hasn’t been registering these days.)

    Anyway, electrons by themselves belong to a particle class called fermions. The other known class is that of the bosons. The difference between fermions and bosons is that the former obey Pauli’s exclusion principle while the latter do not. The exclusion principle forbids two fermions in the same system – like a metal – from simultaneously occupying the same quantum state. This means the electrons in a metal have a hierarchy of energies in normal conditions, filling the available states from the lowest energy up.

    However, a Cooper pair, while composed of two electrons, is a boson, and doesn’t obey Pauli’s exclusion principle. The Cooper pairs of the material can all occupy the same state – i.e. the state with the lowest energy, more popularly called the ground state. This condensate of Cooper pairs behaves like a superfluid: practically flowing around the material, over, under and through the atomic lattice. Even when a Cooper pair is scattered by an atomic nucleus or an impurity in the material, the condensate doesn’t break formation because all the other Cooper pairs continue their flow, and eventually reintegrate the scattered pair. This flow is what we understand as electrical superconductivity.

    “BCS theory was the first microscopic theory of superconductivity,” per Wikipedia. But since its advent, especially since the late 1970s, researchers have identified several superconducting materials, and behaviours, that neither BCS theory nor its extensions have been able to explain.

    When a material transitions into its superconducting state, it exhibits four changes. Observing these changes is how researchers confirm that the material is now superconducting. (In no particular order:) First, the material loses all electrical resistance. Second, any magnetic field inside the material’s bulk is pushed to the surface. Third, the electronic specific heat jumps abruptly at the critical temperature before falling away as the material is cooled further. Fourth, just as the energetically favourable state appears, some other possible states disappear.

    Physicists experimentally observed the fourth change only in January this year – based on the transition of a material called Bi-2212 (bismuth strontium calcium copper oxide, a.k.a. BSCCO, a.k.a. bisko). Bi-2212 is, however, an unconventional superconductor. BCS theory can’t explain its superconducting transition, which, among other things, happens at a higher temperature than is associated with conventional materials.

    In the January 2022 study, physicists also reported that Bi-2212 transitions to its superconducting state in two steps: Cooper pairs form at 120 K – related to the fourth sign of superconductivity – while the first sign appears at around 77 K. To compare, elemental rhenium, a conventional superconductor, becomes superconducting in a single step at 2.4 K.

    A cogent explanation of the nature of high-temperature superconductivity in cuprate superconductors like Bi-2212 is one of the most important open problems in condensed-matter physics today. It is why we still await further updates on the IISc team’s room-temperature superconductivity claim.

  • A ‘bold’ vision

    ‘Support Europe’s bold vision for responsible research assessment’, Nature editorial, July 27, 2022:

    The Agreement on Reforming Research Assessment, announced on 20 July and open for signatures on 28 September, is perhaps the most hopeful sign yet of real change. More than 350 organizations have pooled experience, ideas and evidence to come up with a model agreement to create more-inclusive assessment systems. The initiative, four years in the making, is the work of the European University Association and Science Europe (a network of the continent’s science funders and academies), in concert with predecessor initiatives. It has the blessing of the European Commission, but with an ambition to become global.

    Signatories must commit to using metrics responsibly, for example by stopping what the agreement calls “inappropriate” uses of journal and publication-based metrics such as the journal impact factor and the h-index. They also agree to avoid using rankings of universities and research organizations — and where this is unavoidable, to recognize their statistical and methodological limitations.

    I’m curious if calling this plan “bold” is a way to caution readers that they must proceed cautiously, with considerable scepticism, instead of embracing such ideas with both arms. The plan itself is not bold but – as the editorial itself acknowledges, ironically – in line with what many accomplished research groups and institutes around the world have already expressed a desire for.

    Also relevant here is the fact that the editorial appeared in Nature – a journal that has long played up its impact factor and the significance of the various papers it has published (to their respective fields) to burnish its own prestige. Scientists are seeking to install a new research evaluation process in the first place because of the damage such prestige has wrought on the practice of science, essentially as it has come to substitute for rigour and transparency, so overturning the tyranny of prestige will deal a blow to Nature’s large profit margins.

  • WordPress.com rolls back its botched ‘experiment’

    So, WordPress.com has restored the family of premium plans that it had until April this year, and has done away with the controversial ‘Starter’ and ‘Pro’ plans. The announcement on the WordPress.com blog yesterday has already garnered as many as 65 comments, even though the post itself was brief and contained no indication that WordPress.com had screwed up with the new plans. Excerpt:

    Our philosophy has always been one of experimenting, learning, and adjusting. As we began to roll out our new pricing plans a couple of months back, we took note of the feedback you shared. What we heard is that some of you missed the more granular flexibility of our previous plans. Additionally, the features you needed and pricing of the new plans didn’t always align for you. This led us to a decision that we believe is the right call.

    You might recall that when the new plans were announced in April, my blog post reacting to them became a big deal on the Hacker News forum on that day, and (probably) first drew the attention of Automattic chief Matt Mullenweg and WordPress.com CEO Dave Martin. Since then, WordPress.com has been working to adapt the ‘Starter’ and ‘Pro’ plans for different markets as well as introducing à la carte upgrades to remove ads, add custom CSS and buy more storage space. However, the company continued to receive negative feedback on the changes from the previous plans.

    One vein of criticism that really resonated with me was a rebuttal of WordPress.com’s claim that the older plans were messy whereas the newer ones are clearer. That claim is absolutely not true. But on July 21, they seem to have finally, really listened and changed their minds for the better. (And even then, there are many expressions of confusion among the 65 comments.)

    I also want to point out here that WordPress.com is being disingenuous when it claims its new plans were an “experiment”. That’s bullshit. No experiment rolls out to all users on production, is accompanied by formal announcements of change on the official blog and, in the face of criticism, forces the CEO to apologise for a hamfisted rollout process – all without mentioning the word ‘experiment’ even once. WordPress.com is saying now that its development has followed the path of “experimenting, learning, and adjusting” when all it did was force the change, inform users post facto, solicit feedback on which it then acted (instead of doing so in advance), and finally revert to a previous state.

  • Should ‘geniuses’ be paid extra?

    A newsletter named Ideas Sleep Furiously had an essay propounding a “genius basic income” on May 28. Here are the first two paragraphs that capture a not-insignificant portion of the essay’s message:

    Professor Martin Hairer is one of the world’s most gifted mathematicians. An Austrian-Brit at Imperial College London, he researches stochastic partial differential equations and holds two of maths’ most coveted prizes. In 2014, he became only the second person with a physics PhD to win a Fields Medal, an award granted every four years to mathematicians under 40 and considered to be the equivalent of the Nobel Prize. Hairer also won the 2021 Breakthrough Prize in Mathematics, which comes with a $3 million cheque. When the Guardian covered Hairer’s win, they noted: ‘[his] major work, a 180-page treatise that introduced the world to “reguarity structures”, so stunned his colleagues that one suggested it must have been transmitted to Hairer by a more intelligent alien civilisation.’ The journalist asked Hairer how he’d spend the prize money. His response: “We moved to London somewhat recently, three years ago, and we are still renting. So it might be time to buy a place to live.”

    Most readers of the Guardian that day no doubt understood the absurdity of London house prices. Morning coffee in hand, many will have tut-tutted in dismay at Hairer’s comical remark and mentally filed it under somebody really ought to fix this housing crisis. But how many stopped to consider the greater absurdity? After all, here was a man who, not that long ago, would’ve had a team around him devoted to deflecting such petty problems, to getting others out of his way and allowing him to focus on the thing that only he and a handful of people could understand, let alone do. But the real story wasn’t that a maths genius in modern Britain couldn’t afford a comfortable home close to work. The real story was that it passed without comment.

    Matthew Archer, the essay’s author (who ends the newsletter edition with a request to readers to share it “to spread the gospel of rationality”), contends that people like Hairer ought to be freed of the tedium of figuring out where to live, how to get around the city, groceries, and other “quotidian constraints that plague mere mortals”. Instead, Archer argues, a “genius” like Hairer ought to be paid a “genius basic income” so that he, and his brain “built for advanced mathematics”, can focus on solving hard problems that contribute to human welfare and civilisation.

    Archer’s essay addresses this problem both within and without university settings, but mostly within academic ones. Another important thrust of his essay is the way ‘child geniuses’ are treated at American schools, and how inefficiencies in the country’s school system have the eventual effect of encouraging these children not to develop their special skills but to fit in, leading to an “epidemic of gifted underachievement”. This is quite likely true of the Indian school system as well, but his overall idea is not a good one – especially in India, and probably in the West as well. Archer’s essay is undergirded by a few assumptions, and this is where the problems lie.

    The first is that a country (I’m highly uncertain about the world) can and must reap only one sort of benefit from the “geniuses” at its universities. This is an insular view of which problems are deemed worthy of solving, because it privileges the interests of the “genius” over those of the higher education and research system. If a “genius” is to be paid more, they must also assume more responsibilities than the work they are already doing, because they must also discharge their social responsibilities to their university.

    If a mathematician is considered to be the only one who can solve a very difficult problem, encourage them to do so – but not at the expense of them also taking on the usual number of PhD students, teaching hours and other forms of mentorship. We don’t know what we stand to lose if the mathematical problem goes unsolved, but we’re well aware of what we lose when we prevent aspiring students from pursuing a PhD because a suitable mentor isn’t available, or capable students from receiving the right amount of attention in the classroom.

    The second is that we need “geniuses”. Do we? Instead of a “genius basic income” that translates to a not insubstantial hike for the “geniuses” at a university or a research facility, raise the incomes of all students and researchers by a proportionate fraction so that they can all worry just a little less about “quotidian constraints”.

    There is a growing body of research showing that the best way to eliminate poverty appears to be giving poor people money and letting them spend it as they see fit. There are some exceptions to this view but they are centered entirely on identifying who is really deserving – a problem that goes away both in the academic setting, where direct income comparisons with the cost of living are possible, and in India (see the third point). I sincerely believe the same could be true vis-à-vis inequities within our education and research systems, which are part of a wider environment of existence that has foisted more than mere “quotidian constraints” on its members – and which will almost certainly benefit from relieving all of them a little at a time instead of a select few a lot.

    (Archer quotes David Graeber in his essay to dismiss a counterpoint against his view: “To raise this point risks a tsunami of ‘whataboutery’—what about the average person who can’t afford a home? What about the homeless?! The same people tend to suggest that a highly paid academic doing a job he loves and living in one of the world’s best cities is enough of a reward. In itself this is a sign of a remarkable shift in values. It is also the inheritance of an older belief system, Puritanism, where, in the words of the late anthropologist David Graeber, ‘one is not paid money to do things, however useful or important, that one actually enjoys.’” When Graeber passed away in September 2020, I remember anthropologist Alpa Shah tweeting this: “I often thought of David Graeber as a genius. But of the many things that David taught me, it was that there is in fact a genius in each of us.”)

    In India in particular, the Council of Scientific and Industrial Research doesn’t pay students and researchers enough, and has a terrible reputation for paying them so late that many young researchers are in debt or are leaving for other jobs just to feed their families.

    (Aside: While Hairer suggests that he could think about buying a house in London only after he’d won $3 million with a Breakthrough Prize, the prize itself once again concentrates a lot of money into the hands of a few that have already excelled, and most of whom are men.)

    The third assumption is that school and education reform is impossible and even undesirable. Archer writes in his essay:

    “It was only in October last year that the then Mayor of New York City, Bill de Blasio, announced the city’s gifted programme would be replaced because non-white students were underrepresented. Yet as Professor Ellen Winner noted in her 1996 book, Gifted Children: Myths and Realities, scrapping gifted programmes in the name of diversity, equality, and inclusion, has rather ironic effects. Namely, gifted children embedded within a culture, which might not value high achievement …, have no other children ‘with whom to identify, and they may not feel encouraged to develop their skills.’ The activists, then, practice discrimination in the name of non-discrimination.”

    This argument advances a cynical view of the sort of places we can or should expect our schools to be for our children. Keeping a policy going so that white students can receive help with developing their special skills is an abject form of status-quoism that overlooks the non-white students who are struggling to fit in, and who are apparently also not being selected for the ‘gifted children’ programme. Clearly, the latter is broken. I would much rather advocate school-level reforms in which the institution accommodates everyone and pays more, and/or different, attention to those children who need it, including arranging for activities designed to develop their skills and improve social cohesion.

    The fourth assumption is specific to India and concerns the desirability of the unbalanced improvement of welfare. Providing a few a “genius basic income” will heap privilege on privilege, because those who have already been identified as “geniuses” in India will have had to be privileged in at least two of the following three ways: gender, class and caste.

    Put another way, take a look at the upper management of India’s best academic and research centres, government research bodies and private research facilities, and tell me how many of these people aren’t cis-male Brahmins, rich Brahmins or rich cis-males (‘rich’ here is being used to mean access to wealth before an individual entered academia). If they make up more than 10% of the total population of these individuals, I’ll give you a thousand rupees, even if 10% would also still be abysmal.

    The Indian academic milieu is already highly skewed in favour of Brahmins in particular, and any exercise here that deals with identifying geniuses will identify only Brahmin “geniuses”. This in turn will attach one more casteist module to a system already sorely in need of affirmative action.

    I’m also opposed to the principle underlying contentions of the type “we don’t have enough money for research, so we should spend what we have wisely”. This is a false problem created by the government’s decision to underspend on research, forcing researchers to fight among themselves about whose work should receive a higher allocation, or any allocation at all. I thought that I would have to make an exception for the “genius basic income” – i.e. that researchers do have only a small amount of money and can’t afford such an income for a few people – but then I realised that this is a red herring: even if India invested 1% or even 2% of its GDP in research and development activities (up from the current 0.6%), a “genius basic income” would be a bad idea in principle.

    The fourth assumption allows us to circle back to a general, and especially pernicious, problem, specific to one line from Archer’s essay: “A world in which the profoundly gifted are supported might be a world … with a reverence for the value that gifted people bring.”

    The first two words that popped into my head upon reading this sentence were “Marcy Pogge”. Both Geoffrey Marcy and Thomas Pogge were considered to be “geniuses” in their respective fields – astronomy and philosophy – before a slew of allegations of sexual harassment, many of them from students at their own universities, the University of California and Yale University, revealed an important side of reality: people in charge of student safety and administration at these universities turned away even when they knew of the allegations, because the men brought in a lot of grant money and prestige.

    Chasing women out of science, forcing them to keep their mouths shut if they want to continue being in science (after throwing innumerable barriers in their path to entering science in the first place) – this is the unconscionable price we have paid to revere “genius”. This is because the notion of a “genius” creates a culture of exceptionalism, founded among other things on the view (as in the first assumption) that “geniuses” have something to contribute that others can’t and that this contribution is inherently more valuable than that of others. But “geniuses” are people, and people can be assholes if they’re allowed to operate with impunity.

    Archer may contend that this wasn’t the point of his essay; that may be, but ‘reverence’ implies little else. And if this is the position towards which he believes we must all gravitate, forget everything else – it’s reason enough to dismiss the idea of a “genius basic income”.

  • The 5ftf blunder

    Automattic owner Matt Mullenweg recently made a scene on Twitter when he called out GoDaddy as a “parasitic” organisation for profiting off of WordPress without making a sufficient number of contributions to the WordPress community and for developing a competitor to WooCommerce, which is Automattic’s ‘WordPress but for e-commerce’. (To the uninitiated: Automattic owns WordPress.com and maintains WordPress.org. WordPress.com is where you pay Automattic to host your website for you on its servers; WordPress.org is where you can download the WordPress CMS and use it on your own servers.) At the heart of the issue is Automattic’s ‘Five for the Future’ (5ftf) initiative, in which companies whose profits depend on the WordPress CMS and the community of developers and users pledge to contribute 5% of their resources to developing WordPress.org. There has been a lot of justifiable backlash against Mullenweg’s tweets, which were in poor taste and which have since been deleted. But most of the articles I read on the topic weren’t clear about, or didn’t convey well, their authors’ reasons for disagreeing with Mullenweg. So after some reading around, I thought I’d summarise my takeaways as I see them, in case you might benefit from such a summary as well.

    1. 5ftf appears to mean different things to different people. This has been a recurrent bone of contention: Mullenweg lashed out at GoDaddy because GoDaddy’s contributions were not legitimate, or not legitimate enough, for him. But this is hardly reasonable. Not every entity or individual can contribute in exactly the way Automattic wishes at a given time, nor can Automattic, or Mullenweg, presume to know exactly which contributions can be discarded in favour of others. In fact, I’ve been sticking with WordPress even though WordPress.com has been becoming less friendly to bloggers because a) it presents a diverse set of opportunities for me, vis-à-vis the projects and services I know how to set up because I know how to use WordPress, and b) WordPress has engendered over the decades a view of publishing on the web that is aligned with progressivist ideals for the internet. So in my view I contribute when I recommend WordPress to others, help my fellow journalists and writers set up WordPress websites, provide feedback on WordPress services, build (rudimentary) WordPress plugins and, within my newsroom, promote the use of WordPress towards responsible journalism.

    2. Mullenweg was wrong to abuse GoDaddy in public, in such harsh terms. This was a disagreement that ought to have been settled out of view of public eyes, and certainly not on Twitter. Mullenweg is influential both as an entrepreneur more broadly as well as, more specifically, as someone whose views and policies on digital publishing can potentially affect hundreds of thousands of active websites on the internet. By lashing out in this way, all he’s done is make GoDaddy look bad in a way that it probably didn’t deserve, and certainly in a way that it would find hard to push back against as a company. To continue my first point, GoDaddy has also said that it sponsors WordCamps and other events where WordPress enthusiasts gather to discuss better or new ways to use Automattic products.

    (Aside: In his examples of companies that are doing a better job of giving back to WordPress.org, Mullenweg included Bluehost. Some of you might remember how bad GoDaddy’s customer service was in the previous decade. It was famously, notoriously awful, exacerbated by the fact that for a lot of people, its platform was also the gateway to WordPress. I get the sense that their service has improved now. On the other hand, Bluehost and indeed all hosting companies owned by Newfold Digital have a lousy reputation, among developers and non-developers alike, while Mullenweg is apparently happy with Bluehost’s contributions and it is also listed as one of WordPress.org’s recommended hosts.)

    3. Mullenweg blundered in a surprising way when he indicated in his tweets that he was keeping score. While GoDaddy caught Mullenweg’s attention on this occasion, the fundamental problem is relevant to all of us. You want people to support a cause because they want to, not because someone is keeping track and could be angry with them if they default. Put another way, Mullenweg took the easier-to-implement but harder-to-sustain ‘hardware’ route to instituting a change in the ecosystem rather than the harder-to-implement but easier-to-sustain ‘software’ route. We’ve come across ample examples of this choice through the pandemic. To get people to wear masks in public, many governments introduced mask mandates. A mask mandate is the hardware path: it enforces material changes among people backed by the threat of punishment. The software path, on the other hand, would have entailed creating a culture in which mask-wearing is considered virtuous and desirable, in which no one is afraid of being punished if they don’t wear masks (for reasonable reasons), and in which people trust the government to be looking out for them. The software path is much longer than the hardware one, and governments may have justified their actions saying they didn’t have the time for all this. But while that’s debatable, Automattic doesn’t have such constraints.

    This is why 5ftf should be made aspirational but shouldn’t be enforced, and certainly shouldn’t become an excuse for public disparagement. I and many, many others love WordPress, and a large part of that is because we love the culture and ideas surrounding it. We also understand the problem with for-profit organisations profiting off the work of non-profit organisations. If GoDaddy is really threatening to sink WordPress.org by offering the people hosting their sites on GoDaddy an alternative ecommerce platform or by not giving back nearly as many programming-hours as it effectively consumes, Automattic should either regard GoDaddy as a legitimate competitor and reconsider its own business model or it should pay less attention to its contribution scorecard and more to why and how others contribute the way they do. Finally, if GoDaddy is really selfish in a way that is not compatible with WordPress.org’s future as Automattic sees it, Automattic’s grouse should be divorced cleanly from the 5ftf initiative.

  • A quantum theory of consciousness

    We seldom have occasion to think about science and religion at the same time, but the most interesting experience I have had doing that came in October 2018, when I attended a conference called ‘Science for Monks’* in Gangtok, Sikkim. More precisely, it was one edition of a series of conferences by that name, organised every year between scientists and science communicators from around the world and Tibetan Buddhist monks in the Indian subcontinent. Let me quote from the article I wrote after the conference to illustrate why such engagement could be useful:

    “When most people think about the meditative element of the practice of Buddhism, … they think only about single-point meditation, which is when a practitioner closes their eyes and focuses their mind’s eye on a single object. The less well known second kind is analytical meditation: when two monks engage in debate and question each other about their ideas, confronting them with impossibilities and contradictions in an effort to challenge their beliefs. This is also a louder form of meditation. [One monk] said that sometimes, people walk into his monastery expecting it to be a quiet environment and are surprised when they chance upon an argument. Analytical meditation is considered to be a form of evidence-sharpening and a part of proof-building.”

    As interesting as the concept of the conference is, the 2018 edition was particularly so because the field of science on the table that year was quantum physics. That quantum physics is counter-intuitive is a banal statement; it is chock-full of twists in the tale, interpretations, uncertainties and open questions. Even a conference among scientists was bound to be confusing – imagine the scope of opportunities for confusion in one between scientists and monks. As if in response to this risk, the views of the scientists and the monks were very cleanly divided throughout the event, with neither side wanting to tread on the toes of the other, and this in turn dulled the proceedings. And while this was a sensible thing to do, I was disappointed.

    This said, there were some interesting conversations outside the event halls, in the corridors, over lunch and dinner, and at the hotel where we were put up (where speakers in the common areas played ‘Om Mani Padme Hum’ 24/7). One of them centered on the rare (possibly) legitimate idea in quantum physics in which Buddhist monks, and monks of every denomination for that matter, have considerable interest: the origin of consciousness. While any sort of exposition or conversation involving the science of consciousness has more often than not been replete with bad science, this idea may be an honourable exception.

    Four years later, I only remember that there was a vigorous back-and-forth between two monks and a physicist, not the precise contents of the dialogue or who participated. The subject was the Orch OR hypothesis advanced by the physicist Roger Penrose and quantum-consciousness theorist Stuart Hameroff. According to a 2014 paper authored by the pair, “Orch OR links consciousness to processes in fundamental space-time geometry.” It traces the origin of consciousness to cellular structures inside neurons called microtubules being in a superposition of states, and which then collapse into a single state in a process induced by gravity.

    In the famous Schrödinger’s cat thought-experiment, the cat exists in a superposition of ‘alive’ and ‘dead’ states while the box is closed. When an observer opens the box and observes the cat, its state collapses into either a ‘dead’ or an ‘alive’ state. Few scientists subscribe to the Orch OR view of self-awareness; the vast majority believe that consciousness originates not within neurons but in the interactions between neurons, happening at a large scale.

    ‘Orch OR’ stands for ‘orchestrated objective reduction’, with Penrose being credited with the ‘OR’ part. That is also the part at which mathematicians and physicists have directed much of their criticism.

    It begins with Penrose’s idea of spacetime blisters. According to him, at the Planck scale (around 10⁻³⁵ m), the spacetime continuum is discrete, not continuous, and each quantum superposition occupies a distinct piece of the spacetime fabric. These pieces are called blisters. Penrose postulated that gravity acts on each of these blisters and destabilises them, causing the superposed states to collapse into a single state.

    A quantum computer performs calculations using qubits as the fundamental units of information. The qubits interact with each other in quantum-mechanical processes like superposition and entanglement. At some point, the superposition of these qubits is forced to collapse by making an observation, and the state to which it collapses is recorded as the computer’s result. In 1989, Penrose proposed that there could be a quantum-computer-like mechanism operating in the human brain and that the OR mechanism could be the act of observation that forces it to terminate.
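    To make the collapse step concrete, here is a minimal sketch in Python (my illustration, unrelated to Penrose’s specific proposal): a qubit in the state a|0⟩ + b|1⟩ yields 0 with probability |a|² and 1 with probability |b|² when measured – the Born rule.

    ```python
    import random

    def measure(a: complex, b: complex) -> int:
        """Collapse the superposition a|0> + b|1> into a definite outcome."""
        norm = abs(a) ** 2 + abs(b) ** 2
        p0 = abs(a) ** 2 / norm  # Born rule, normalising for safety
        return 0 if random.random() < p0 else 1

    # An equal superposition collapses to 0 or 1 with equal probability.
    outcomes = [measure(1 / 2 ** 0.5, 1 / 2 ** 0.5) for _ in range(10_000)]
    print(outcomes.count(0), outcomes.count(1))  # roughly 5,000 each
    ```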

    One refinement of the OR hypothesis is the Diósi-Penrose scheme, with contributions from the Hungarian physicist Lajos Diósi. In this scheme, spacetime blisters are unstable and the superposition collapses when the mass of the superposed states exceeds a fixed value. In the course of his calculations, Diósi found that at the moment of collapse, the system must emit some electromagnetic radiation (due to the motion of electrons).
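    Schematically, the scheme’s central relation ties the expected lifetime τ of a superposition to the gravitational self-energy E_G of the difference between the two superposed mass distributions (a textbook statement of the idea, not a quote from either physicist):

    \[ \tau \approx \frac{\hbar}{E_G} \]

    The more massive or more widely separated the superposed states, the larger E_G is – and the sooner the collapse.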

    Hameroff made his contribution by introducing microtubules as a candidate location for qubit-like objects, which could collectively set up a quantum-computer-like system within the brain.

    There have been some experiments in the last two decades that have tested whether Orch OR could manifest in the brain, based on studies of electron activity. But a more recent study suggests that Orch OR may just be infeasible as an explanation for the origin of consciousness.

    Here, a team of researchers – including Lajos Diósi – first looked for the electromagnetic radiation at the instant the superposition collapsed. The researchers didn’t find any, but the parameters of their experiment (including the masses involved) allowed them to set lower limits on the scale at which Orch OR might work. That is, they had a way to figure out how the distance, time and mass might be related in an Orch OR event.

    They set these calculations out in a new paper, published in the journal Physics of Life Reviews on May 17. According to their paper, they fixed the time-scale of the collapse at 0.025 to 0.5 seconds, which is comparable to the amount of time in which our brain recognises conscious experience. They found that at a spatial scale of 10⁻¹⁵ m – which Penrose has expressed a preference for – a superposition that collapses in 0.025 seconds would require 1,000 times as many tubulins as there are in the brain (10²⁰), an impossibility. (Tubulins polymerise to form microtubules.) But at a scale of around 1 nm, the researchers worked out that the brain would need only 10¹² tubulins for their superposition to collapse in around 0.025 seconds. This is still a very large number of tubulins and a daunting task even for the human brain. But it isn’t impossible, as it is with the collapse over 10⁻¹⁵ m. According to the team’s paper,

    The Orch OR based on the DP [Diósi-Penrose] theory is definitively ruled out for the case of [10⁻¹⁵ m] separation, without needing to consider the impact of environmental decoherence; we also showed that the case of partial separation requires the brain to maintain coherent superpositions of tubulin of such mass, duration, and size that vastly exceed any of the coherent superposition states that have been achieved with state-of-the-art optomechanics and macromolecular interference experiments. We conclude that none of the scenarios we discuss … are plausible.

    However, the team hasn’t nearly eliminated Orch OR; instead, they wrote that they intend to refine the Diósi-Penrose scheme to a more “sophisticated” version that, for example, may not entail the release of electromagnetic radiation or provide a more feasible pathway for superposition collapse. So far, in their telling, they have used experimental results to learn where their theory should improve if it is to remain a plausible description of reality.

    If and when the ‘Science for Monks’ conferences, or those like it, resume after the pandemic, it seems we may still be able to put Orch OR on the discussion table.

    * I remember it was called ‘Science for Monks’ in 2018. Its name appears to have been changed since to ‘Science for Monks and Nuns’.

  • Unless the West copies us, we’re irrelevant

    We have become quite good at dismissing the more asinine utterances of our ministers and other learned people in terms of either a susceptibility to pseudoscience or, less commonly, a wilful deference to what we might call pseudoscientific ideas in order to undermine “Western science” and its influence. But when a matter of this sort hits the national headlines, our response seems for the most part to be limited to explaining the incident: once some utterance has been diagnosed, it apparently stops being of interest.

    While this is understandable, an immediate diagnosis can only offer so much insight. An important example is the Vedas. Every time someone claims that the Vedas anticipated, say, the Higgs boson or interplanetary spaceflight, the national news machine – in which reporters, editors, experts, commentators, activists and consumers all participate – publishes the following types of articles, from what I have read: news reports that quote the individual’s statement as is, follow-ups with the individual asking them to explain themselves, opinion articles defending or trashing the individual, an editorial if the statement is particularly pernicious, opinion articles dissecting the statement, and perhaps an interview long after to ask the individual what they were really thinking. (I don’t follow TV news but I assume its content is not very different.)

    All of these articles employ a diagnostic attitude towards the news item: they seek to uncover the purpose of the statement because they begin with the (reasonable) premise that the individual was not a fool to issue it and that the statement had a purpose, irrespective of whether it was fulfilled. Only a few among them – if any – stop to consider the double-edged nature of the diagnosis itself. For example, when a researcher in Antarctica got infected by the novel coronavirus, their diagnosis would have said a lot about humankind – about our ability to be infected even when an individual is highly isolated for long periods of time – as well as about the virus itself.

    Similarly, when a Bharatiya Janata Party bhakt claims that the Vedas anticipated the discovery of the Higgs boson, it says as much about the individual as it does about the individual’s knowledge of the Vedas. Specifically, the biggest losers here, so to speak, are the Vedas, which have been misrepresented to the world’s scientists to sound like an unfalsifiable joke-book. Extrapolate this to all of the idiotic things that our most zealous compatriots have said about airplanes, urban planning, the internet, plastic surgery, nutrition and diets, cows, and mathematics.

    This is misrepresentation en masse of India’s cultural heritage (the cows aren’t complaining but I will never be certain until they can talk), and it is also a window into what these individuals believe to be true about the country itself.

    For example, consider mathematics. One position paper drafted by the Karnataka task force on the National Education Policy, entitled “Knowledge in India”, called the Pythagorean theorem “fake news” simply because the Indian scholar Baudhayana had propounded very similar rules and observations. In an interview to Hindustan Times yesterday, the head of this task force, Madan Gopal, said the position paper doesn’t recommend that the theorem be removed from the syllabus but that an addition be made: that Baudhayana was the originator of the theorem. Baudhayana was not the originator but, equally importantly, Gopal said he had concluded that Baudhayana was being cheated out of credit based on what Gopal had read… on Quora.

    As a result, Gopal has overlooked and rendered invisible the Baudhayana Sulbasutra, as well as admitted his indifference towards the programme of its study and preservation.

    Consider another example involving the same fellow: Gopal also told Hindustan Times, “Manchester University published a paper saying that the theory of Newton is copied from ancient texts from Kerala.” He is in all likelihood referring to the work of G.G. Joseph, who asserted in 2007 that scholars of the Kerala school of mathematics had discovered some of the constitutive elements of calculus in c. 1350 – a few centuries before Isaac Newton or Gottfried Leibniz. However, Gopal is wrong to claim that Newton “copied” from “ancient texts from Kerala”: in continuation of his work, Joseph discovered that while the work of Madhava and Nilakantha at the Kerala school pre-dated that of Newton and Leibniz, there had been no transfer of knowledge from the Kerala school to Europe in the medieval era. That is, Newton and Leibniz had discovered calculus independently.

    Gopal would have been right to state that Madhava and Nilakantha were ahead of the Europeans of the time, but it’s not clear whether Gopal was even aware of these names or the kind of work in which the members of the Kerala school were engaged. He has as a result betrayed his ignorance as well as squandered an important opportunity to address the role of colonialism and imperialism in the history of mathematics. In fact, Gopal seems to say that unless Newton copied from the “ancient texts,” what the texts themselves record is irrelevant. (Also read: ‘We don’t have a problem with the West, we’re just obsessed with it’.)

    Now, Madan Gopal’s ignorance may not amount to much – although the Union education ministry will be using the position papers as guidance to draft the next generation of school curricula. So let us consider, in the same spirit and vein, Narendra Modi’s claim shortly after he became India’s prime minister for the first time that ancient Indians had been capable of performing an impossible level of plastic surgery. In that moment, he lied – and he also admitted that he had no idea what the contents of the Sushruta Samhita or the Charaka Samhita were and that he didn’t care. He admitted that he wouldn’t be investing in the study, preservation and transmission of these texts because that would be tantamount to admitting that only a vanishing minority is aware of their contents. Also, why do these things and risk finding out that the texts say something else entirely?

    Take all of the party supporters’ pseudoscientific statements together – originating from the Madan Gopals and culminating with Modi – and it becomes quite apparent, beyond the momentary diagnoses of each of these statements, that while we already knew that they have no idea what they are talking about, we must admit that they have no care for what the purported sources of their claims actually say. That is, they don’t give a damn about the actual Vedas, the actual Samhitas or the various actual sutras, and they are unlikely to preserve or study these objects of our heritage in their original forms.

    Just as every new Patanjali formulation forgets Ayurveda for the sake of Ayurveda®, every new utterance about Ancient Indian Knowledge forgets the Vedas for the sake of the Vedas®.

    Now, given the statements of this nature from ministers, other members and unquestioning supporters of the BJP, we have reason to believe that they engage in kettle logic. This in turn implies that these individuals may not really believe what they are saying to be true and/or valid, and that they employ their arguments anyway only to ensure the outcome, on which they are fixated. That is, the foolish statements may not implicitly mean that their authors are foolish; on the contrary, they may be smart enough to recognise kettle logic as well as its ability to keep naïve fact-checkers occupied in a new form of the bullshit job. Even so, they must be aware at least that they are actively forgetting the Vedas, the Samhitas and the sutras.

    One way or another, the BJP seems to say, let’s forget.

  • JWST and the sorites paradox

    The team operating NASA’s James Webb Space Telescope (JWST) released its first full-colour image early on July 12, and has promised some more from the same set in the evening. The image is a near-infrared shot of the SMACS 0723 galaxy cluster some 4.6 billion lightyears away. According to a press release accompanying the image’s release, the field of view – which shows scores of galaxies as well as several signs of gravitational lensing (which is evident only when very large distances are involved) – is equivalent to the area occupied by a grain of sand held at arm’s length from the eyes.
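    That comparison is easy to sanity-check. A back-of-envelope sketch in Python, assuming (my numbers, not NASA’s) a 0.5 mm grain held 60 cm from the eye:

    ```python
    import math

    # Small-angle approximation: angular size ≈ diameter / distance.
    # Both figures below are assumptions for illustration.
    grain_diameter_m = 0.5e-3  # a 0.5 mm grain of sand
    arm_length_m = 0.6         # held 60 cm from the eye

    theta_rad = grain_diameter_m / arm_length_m
    theta_arcmin = math.degrees(theta_rad) * 60
    print(f"{theta_arcmin:.1f} arcminutes")  # ~2.9 arcminutes
    ```

    A couple of arcminutes across is indeed the ballpark of such a deep-field image’s footprint on the sky.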

    I’m personally looking forward to the telescope’s shot of the Carina Nebula: the Hubble space telescope’s images of this emission nebula were themselves stunning, so the JWST’s shot should be more so!

    Gazing at the JWST’s first image brought to my mind the sorites paradox. Its underlying thought-experiment might also resonate with you were you to ponder the classical limit of quantum physics or the concept of emergence as Philip Warren Anderson elucidated it. Imagine a small heap of sand before you. You pick up a single grain from the heap and toss it away. Is the sand before you still in a heap? Yes. You put away another grain and check. Still a heap. So you keep going, and a few thousand checks later, you find that you have before you a single grain of sand. Is it still a heap? If your answer is ‘yes’, the follow-up question arises: how can a single grain of sand be a heap? If ‘no’, then when did the heap stop being a heap?

    Another way to conjure the same paradox is to start with one grain of sand, which is evidently not a heap. Then you add one more grain, which is also not a heap, and one more and one more and so forth. Using modus ponens supplies the following line of reasoning: “One mote isn’t a heap. And if one mote isn’t a heap, then two motes don’t make a heap either. And three motes don’t make a heap either. And so on until: if 9,999 motes don’t make a heap, then 10,000 motes don’t make a heap either.” But while straightforward logic has led you to this conclusion, your sense-experience is clear: what lies before you is in fact a heap.
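    Here is a minimal sketch in Python of where the trouble lives (my illustration, not a standard treatment): any computable ‘heap’ predicate must draw a sharp line somewhere, and at that line a single grain flips the verdict – exactly the boundary that the inductive premise denies and that our intuition refuses to supply.

    ```python
    HEAP_CUTOFF = 10_000  # arbitrary: the vague predicate forces us to invent a line

    def is_heap(grains: int) -> bool:
        """A sharp stand-in for the vague predicate 'heap'."""
        return grains >= HEAP_CUTOFF

    # The inductive premise says no such line exists; the code must assert one.
    print(is_heap(9_999), is_heap(10_000))  # False True – one grain flips the verdict
    ```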

    The paradox came to mind because it’s hard not to contemplate the fact that both the photograph and the goings-on in India at the moment – from the vitriolic bigotry that’s constantly being mainstreamed to the arrest and harassment of journalists, activists and other civilians, both by the ruling dispensation – are the product of human endeavour. I’m not interested in banal expressions of the form “we’re all in this together” (we’re not) or “human intelligence and ingenuity can always be put to better use” (this is useless knowledge); instead, I wonder what the spectrum of human actions – which personal experience has indicated repeatedly to be continuous and ultimately ergodic – looks like that encompasses, at two extremes, actions of such beauty and of such ugliness. When does beauty turn to ugliness?

    Or are these terms discernible only in absolutes – that is, that there is no lesser or higher beauty (or ugliness) but only one ultimate form, and that like the qubits of a quantum computer, between ultimate beauty and ultimate ugliness there are some indeterminate combinations of each attribute for which we have no name or understanding?

    I use ‘beauty’ here to mean that which is deemed worthy of preservation and ‘ugliness’, of erasure. The sorites paradox is a paradox because of its vague predicates: ‘heap’, for example, has no quantitative definition. Similarly, I realise I’m setting up vague, as well as subjective, predicates when I set up beauty and preservation in the way that I have, so let me simplify the question: how do I, how do you, how do we reconcile the heap of sand that is the awesome deep-field shot of a distant galaxy cluster with the single grain of sand that is the contemporary political reality of India? Is a reconciliation even possible – that is, is there still a continuous path of thought, aspiration and action that could take a people steeped in hate and violence to a place of peaceability, tolerance and openness? Or have we fundamentally and irredeemably lost a part of ourselves that has turned us non-ergodic, that will keep us now and for ever from experiencing certain forms of beauty?

    Language and the words that we use about ourselves will play a very important part here – the adjectives we save for ourselves versus those for the people or ideas that offend us, the terms in which we conceive of and describe our actions, everything from the order of words in our shortest poems to that of the jargon in our courts’ longest judgments. Our words help us to convince ourselves, and others, that there is beauty in something even if it isn’t readily apparent. A bhakt might find in the annals of OpIndia and The Organiser the same solace and inspiration, and therefore the virtue of preserving what he finds to be beautiful, that a rational progressivist might find in Salvage or Viewpoint. This is among other things because language is how we map meaning to experience – the first point of contact between the material realm and human judgment, an interaction that will forever colour every moral, ethical and justicial conclusion to come after.

    This act of meaning-making is also visible in physics, where there are overlapping names for different parts of the electromagnetic spectrum because the names matter more for the frequencies’ effects on the human body. Similarly, in the book trade, genre definitions can be overlapping – The Three-Body Problem by Cixin Liu is both sci-fi and fantasy, for example – because they matter largely for marketing.

    One way or another, I’m eager, but not yet desperate, for an answer that will keep the door open for some measure of reversibility – and not for the bhakts but for those engaged in pushing back against their ilk. (The bhakts can go to hell.) The cognitive dissonance otherwise – of a world that creates things and ideas worth preserving and of a world that creates things and ideas worth erasing – might just break my ability to be optimistic about the human condition.

    Featured image: The JWST’s image of the SMACS 0723 galaxy cluster. Credit: NASA, ESA, CSA and STScI.

  • The Higgs boson and I

    My first byline as a professional journalist (a.k.a. my first byline ever) was oddly for a tech story – about the advent of IPv6 internet addresses. I started writing it after 7 pm, had to wrap it up by 9 pm and it was published in the paper the next day (I was at The Hindu).

    The first byline that I actually wanted to take credit for appeared around a month later, on July 4, 2012 – ten years ago – on the discovery of the Higgs boson at the Large Hadron Collider (LHC) in Europe. I published a live blog as Fabiola Gianotti, Joe Incandela and Rolf-Dieter Heuer, the spokespersons of the ATLAS and CMS detector collaborations and the director-general of CERN, respectively, announced and discussed the results. I also distinctly remember taking a pee break after telling readers “I have to leave my desk for a minute” and receiving mildly annoyed, but also amused, comments complaining of TMI.

    After the results had been announced, the science editor, R. Prasad, told me that R. Ramachandran (a.k.a. Bajji) was filing the main copy and that I should work around that. So I wrote a ‘what next’ piece describing the work that remained for physicists to do, including the open problems in particle physics that stayed open despite the discovery and the alternative theories, like supersymmetry, required to explain them. (Some jingoism surrounding the lack of acknowledgment for S.N. Bose – wholly justifiable, in my view – also forced me to write this.)

    I also remember placing a bet with someone that the Nobel Prize for physics in 2012 wouldn’t be awarded for the discovery (because I knew, but the other person didn’t, that the nominations for that year’s prizes had closed by then).

    To write about the feats and mysteries of particle physics is why I became a science journalist, so the Higgs boson’s discovery being announced a month after I started working was special – not least because it considerably eased the effort I had to put into pitches to have them accepted (specifically, I didn’t have to spend too much time or effort spelling out why a story was important). It was also a great opportunity for me to learn how breaking news is reported, and it accelerated my induction into the newsroom and its ways.

    But my interest in particle physics has since waned, especially from around 2017, as I began to focus in my role as science editor of The Wire (which I cofounded/joined in May 2015) on other areas of science as well. My heart is still with physics, and I have greatly enjoyed writing the occasional article about topological phases, neutrino astronomy, laser cooling and, recently, the AdS/CFT correspondence.

    A couple of years ago, I realised during a spell of daydreaming that even though I have stuck with physics, my act of ‘dropping’ particle physics as a specialty had left me without an edge as a writer. Just physics was and is too broad – even if there are very few others in India writing on it in the press, giving me lots of room to display my skills (such as they are). I briefly considered and rejected quantum computing and BECCS technologies – the former because its stories were often bursting with hype, especially in my neck of the woods, and the latter because, while it seemed important, it didn’t sit well morally. I was also indifferent towards them because they were centered on technologies, whereas I wanted to write about pure, supposedly boring science.

    In all, penning an article commemorating the tenth anniversary of the announcement of the Higgs boson’s discovery brought back pleasant memories of my early days at The Hindu but also reminded me of this choice that I still need to make, for my sake. I don’t know if there is a clear winner yet, although quantum physics more broadly and condensed-matter physics more specifically are appealing. This said, I’m also looking forward to returning to writing more about physics in general, paralleling the evolution of The Wire Science itself (some announcements coming soon).

    I should also note that I started blogging in 2008, when I was still an undergraduate student of mechanical engineering, in order to clarify my own knowledge of and thoughts on particle physics.

    So in all, today is a special day.

  • 25 years of Maldacena’s bridge

    Twenty-five years ago, in 1997, the Argentine physicist Juan Martin Maldacena published what would become the most highly cited physics paper in history (more than 20,000 citations to date). In the paper, Maldacena described a ‘bridge’ between two kinds of theories that each describe how our world works, but separately, without meeting each other. These are the field theories that describe the behaviour of energy fields (like the electromagnetic field) and subatomic particles, and the theory of general relativity, which deals with gravity and the universe at the largest scales.

    Field theories come in many types, with many properties. One type is the conformal field theory: a field theory that doesn’t change when it undergoes a conformal transformation – i.e. a transformation that preserves angles but not lengths. As such, conformal field theories are said to be “mathematically well-behaved”.
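
    To make “preserves angles but not lengths” concrete: a conformal transformation rescales the metric – the object that encodes lengths – by a position-dependent factor, which then drops out of the formula for angles. A minimal sketch in standard notation (not specific to any one paper):

    ```latex
    % A conformal transformation rescales the metric g_{\mu\nu} by a smooth,
    % positive, position-dependent factor \Omega^2(x):
    g_{\mu\nu}(x) \;\to\; \tilde{g}_{\mu\nu}(x) = \Omega^2(x)\, g_{\mu\nu}(x)
    % Lengths pick up a factor of \Omega(x), but the angle between two
    % vectors u and v is a ratio in which \Omega^2(x) cancels:
    \cos\theta =
      \frac{g_{\mu\nu}\, u^{\mu} v^{\nu}}
           {\sqrt{g_{\alpha\beta}\, u^{\alpha} u^{\beta}}\,
            \sqrt{g_{\gamma\delta}\, v^{\gamma} v^{\delta}}}
    ```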

    In relativity, space and time are unified into the spacetime continuum. This continuum can broadly exist in one of three possible spaces (roughly, universes of certain ‘shapes’): de Sitter space, Minkowski space and anti-de Sitter space. de Sitter space has positive curvature everywhere – like the surface of a sphere (but empty of any matter). Minkowski space has zero curvature everywhere – i.e. a flat surface. Anti-de Sitter space has negative curvature everywhere – like a saddle-shaped hyperbolic surface.

    A sphere, a hyperbolic surface and a flat surface. Credit: NASA

    Because these shapes are related to the way our universe looks and works, cosmologists have their own way to understand these spaces. If the spacetime continuum exists in de Sitter space, the universe is said to have a positive cosmological constant. Similarly, Minkowski space implies a zero cosmological constant and anti-de Sitter space a negative cosmological constant. Studies by various space telescopes have found that our universe has a positive cosmological constant, meaning ‘our’ spacetime continuum occupies a de Sitter space (sort of, since our universe does have matter).
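
    The link between the ‘shape’ of the continuum and the sign of the cosmological constant can be stated compactly. A minimal sketch, using the standard vacuum form of Einstein’s field equations:

    ```latex
    % Vacuum Einstein field equations with a cosmological constant \Lambda:
    R_{\mu\nu} - \tfrac{1}{2}\, R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = 0
    % The maximally symmetric solutions are classified by the sign of \Lambda:
    %   \Lambda > 0 : de Sitter space       (positive curvature everywhere)
    %   \Lambda = 0 : Minkowski space       (zero curvature everywhere)
    %   \Lambda < 0 : anti-de Sitter space  (negative curvature everywhere)
    ```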

    In 1997, Maldacena found that a description of quantum gravity in anti-de Sitter space in N dimensions is the same as a conformal field theory in N – 1 dimensions. This – called the AdS/CFT correspondence – was an unexpected but monumental discovery that connected two kinds of theories that had thus far refused to cooperate. (The Wire Science had a chance to interview Maldacena about his past and current work in 2018, in which he provided more insights on AdS/CFT as well.)
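
    Schematically, the correspondence is often written as an equality of two partition functions – the ‘dictionary’ later spelt out by Gubser, Klebanov and Polyakov, and by Witten. What follows is only a sketch of that textbook statement, not Maldacena’s own notation:

    ```latex
    % The bulk gravitational partition function, evaluated with a field \phi
    % approaching a boundary value \phi_0 at the edge of AdS, equals the CFT
    % generating functional with \phi_0 acting as the source of an operator O:
    Z_{\text{gravity}}\big[\phi \to \phi_0\big]
      = \left\langle \exp\!\int_{\partial \text{AdS}} \phi_0\, \mathcal{O} \right\rangle_{\text{CFT}}
    % Maldacena's best-studied instance: type IIB string theory on
    % AdS_5 x S^5 is dual to N = 4 super-Yang-Mills theory in four dimensions.
    ```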

    In his paper, Maldacena demonstrated his finding by using the example of string theory as a theory of quantum gravity in anti-de Sitter space – so the finding was also hailed as a major victory for string theory. String theory is a leading contender for a theory that can unify quantum mechanics and general relativity. However, we have found no experimental evidence of its many claims. This is why the AdS/CFT correspondence is also called the AdS/CFT conjecture.

    Nonetheless, thanks to the correspondence, (mathematical) physicists have found that some problems that are hard on the ‘AdS’ side are much easier to crack on the ‘CFT’ side, and vice versa – all they had to do was cross Maldacena’s ‘bridge’! This was another sign that the AdS/CFT correspondence wasn’t just a mathematical trick but could be a legitimate description of reality.

    So how could it be real?

    The holographic principle

    In 1997, Maldacena proved that a string theory in five dimensions was the same as a conformal field theory in four dimensions. However, gravity in our universe exists in four dimensions – not five. So the correspondence came close to providing a unified description of gravity and quantum mechanics, but not close enough. Nonetheless, it gave rise to the possibility that an entity that existed in some number of dimensions could be described by another entity that existed in one fewer dimension.

    In fact, the AdS/CFT correspondence didn’t so much give rise to this possibility as prove it, at least mathematically; the possibility itself had been around for years by then, as the holographic principle. The Dutch physicist Gerardus ’t Hooft first proposed it, and the American physicist Leonard Susskind brought it firmly into the realm of string theory in the 1990s. One way to state the holographic principle, in the words of physicist Matthew Headrick, is thus:

    “The universe around us, which we are used to thinking of as being three dimensional, is actually at a more fundamental level two-dimensional and that everything we see that’s going on around us in three dimensions is actually happening in a two-dimensional space.”

    This “two-dimensional space” is the ‘surface’ of the universe, located at an infinite distance from us, where information is encoded that describes everything happening within the universe. It’s a mind-boggling idea. ‘Information’ here refers to physical information, such as, to use one of Headrick’s examples, “the positions and velocities of physical objects”. In beholding this information from the infinitely faraway surface, we apparently behold a three-dimensional reality.

    It bears repeating that this is a mind-boggling idea. We have no proof so far that the holographic principle is a real description of our universe – we only know that it could describe our reality, thanks to the AdS/CFT correspondence. This said, physicists have used the holographic principle to study and understand black holes as well.

    In 1915, Albert Einstein’s general theory of relativity provided a set of complicated equations to understand how mass, the spacetime continuum and the gravitational force are related. Within a few months, the physicists Karl Schwarzschild and Johannes Droste, followed in subsequent years by Georges Lemaître, Subrahmanyan Chandrasekhar, Robert Oppenheimer and David Finkelstein, among others, began to realise that one of the equations’ exact (i.e. non-approximate) solutions indicated the existence of a point mass around which space was wrapped completely, preventing even light from escaping from inside this region. This was the black hole.

    Because black holes corresponded to exact solutions, physicists assumed that they didn’t have any entropy – i.e. that their insides didn’t have any disorder. If there had been such disorder, it should have appeared in Einstein’s equations. It didn’t, so QED. But in the early 1970s, the Israeli-American physicist Jacob Bekenstein noticed a problem: if a system with entropy, like a container of hot gas, is thrown into a black hole, and the black hole doesn’t have entropy, where does the entropy go? It had to go somewhere; otherwise, the black hole would violate the second law of thermodynamics – that the entropy of an isolated system, like our universe, can’t decrease.

    Bekenstein postulated that black holes must also have entropy, and that the amount of entropy is proportional to the black hole’s surface area, i.e. the area of its event horizon. Bekenstein also worked out that there is a limit to the amount of entropy a given volume of space can contain, and that all black holes could be described by just three observable attributes: their mass, electric charge and angular momentum. So if a black hole’s entropy increases because it has swallowed some hot gas, this change ought to manifest as a change in one, some or all of these three attributes.
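
    The proportionality Bekenstein proposed was later made exact by Hawking’s calculation. The standard results, in full units – the Bekenstein–Hawking entropy and Bekenstein’s bound on the entropy in a region of space:

    ```latex
    % Bekenstein-Hawking entropy: proportional to the event horizon's
    % area A, not to the volume it encloses - an early hint of holography.
    S_{\text{BH}} = \frac{k_B\, c^{3}\, A}{4\, G\, \hbar}
    % Bekenstein's bound on the entropy of a region of radius R
    % containing total energy E:
    S \leq \frac{2\pi\, k_B\, R\, E}{\hbar\, c}
    ```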

    Taken together: when some hot gas is tossed into a black hole, the gas falls in through the event horizon, but the information about its entropy might appear to be encoded on the black hole’s surface, from the point of view of an observer located outside and away from the event horizon. Note here that the black hole is a three-dimensional object whereas its surface, the event horizon, is a curved two-dimensional sheet. That is, all the information required to describe a 3D black hole could in fact be encoded on its 2D surface – which evokes the AdS/CFT correspondence!

    However, the idea that the event horizon of a black hole preserves information about objects falling into the black hole gives rise to another problem. Quantum mechanics requires all physical information (like “the positions and velocities of physical objects”, in Headrick’s example) to be conserved. That is, such information can’t ever be destroyed. And there would be no reason to expect it to be destroyed if black holes lived forever – but they don’t.
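
    The ‘conservation’ here is the statement that quantum states evolve unitarily, which makes their evolution reversible in principle. A minimal sketch of that standard requirement:

    ```latex
    % Quantum states evolve by a unitary operator U(t):
    |\psi(t)\rangle = U(t)\, |\psi(0)\rangle, \qquad U^{\dagger} U = \mathbb{1}
    % Unitarity makes evolution reversible, |\psi(0)\rangle = U^{\dagger}(t)\, |\psi(t)\rangle,
    % so the initial state - the 'information' - can always be recovered in principle.
    ```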

    Stephen Hawking found in the 1970s that black holes should slowly evaporate by emitting radiation, called Hawking radiation, and there is nothing in the theories of quantum mechanics to suggest that this radiation will be encoded with the information preserved on the event horizon. This, fundamentally, is the black hole information loss problem: either the black hole must shed the information in some way or quantum mechanics must be wrong about the preservation of physical information. Which one is it? This is a major unsolved problem in physics, and it’s just one part of the wider context that the AdS/CFT correspondence inhabits.
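
    Hawking’s result assigns a black hole of mass M a temperature inversely proportional to M, so it radiates like a hot body and slowly loses mass. The standard formula, for a Schwarzschild (uncharged, non-rotating) black hole:

    ```latex
    % Hawking temperature of a Schwarzschild black hole of mass M:
    T_H = \frac{\hbar\, c^{3}}{8 \pi\, G\, M\, k_B}
    % Smaller black holes are hotter, so evaporation speeds up as the
    % black hole loses mass - until, eventually, it disappears.
    ```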

    For more insights into this discovery, do read The Wire Science’s interview of Maldacena.

    I’m grateful to Nirmalya Kajuri for his feedback on this article.

    Sources: