Scicomm

  • Yes, scientific journals should publish political rebuttals

(The headline is partly click-bait, as I admit below, because some context is required.) From ‘Should scientific journals publish political debunkings?’, Science Fictions by Stuart Ritchie, August 27, 2022:

    Earlier this week, the “news and analysis” section of the journal Science … published … a point-by-point rebuttal of a monologue a few days earlier from the Fox News show Tucker Carlson Tonight, where the eponymous host excoriated Dr. Anthony Fauci, of “seen everywhere during the pandemic” fame. … The Science piece noted that “[a]lmost everything Tucker Carlson said… was misleading or false”. That’s completely correct – so why did I have misgivings about the Science piece? It’s the kind of thing you see all the time on dedicated political fact-checking sites – but I’d never before seen it in a scientific journal. … I feel very conflicted on whether this is a sensible idea. And, instead of actually taking some time to think it through and work out a solid position, in true hand-wringing style I’m going to write down both sides of the argument in the form of a dialogue – with myself.

    There’s one particular exchange between Ritchie and himself in his piece that threw me off the entire point of the article:

    [Ritchie-in-favour-of-Science-doing-this]: Just a second. This wasn’t published in the peer-reviewed section of Science! This isn’t a refereed paper – it’s in the “News and Analysis” section. Wouldn’t you expect an “Analysis” article to, like, analyse things? Including statements made on Fox News?

    [Ritchie-opposed-to-Science-doing-this]: To be honest, sometimes I wonder why scientific journals have a “News and Analysis” section at all – or, I wonder if it’s healthy in the long run. In any case, clearly there’s a big “halo” effect from the peer-reviewed part: people take the News and Analysis more seriously because it’s attached to the very esteemed journal. People are sharing it on social media because it’s “the journal Science debunking Tucker Carlson” – way fewer people would care if it was just published on some random news site. I don’t think you can have it both ways by saying it’s actually nothing to do with Science the peer-reviewed journal.

    [Ritchie-in-favour]: I was just saying they were separate, rather than entirely unrelated, but fair enough.

Excuse me, but not at all fair enough! The essential problem is the tie-ins between what a journal does, why it does it, and the impressions those choices uphold in society.

First, Science‘s ‘news and analysis’ section isn’t distinguished by its association with the peer-reviewed portion of the journal but by its own reportage and analyses, intended for scientists and non-scientists alike. (Mea culpa: the headline of this post answers the question in the headline of Ritchie’s post, while the body makes clear that there’s a distinction between the journal and its ‘news and analysis’ section.) A very recent example was Charles Piller’s investigative report that uncovered evidence of image manipulation in a paper that has had an outsized influence on the direction of Alzheimer’s research since it was published in 2006. When Ritchie writes that the peer-reviewed journal and the ‘news and analysis’ section are separate, he’s right – but when he suggests that the former’s prestige is responsible for the latter’s popularity, he couldn’t be more wrong.

Ritchie is a scientist and his position may reflect that of many other scientists. I recommend that he, and others who agree with him, consider the section from the PoV of a science journalist: they will immediately see, as we do, that it has broken many agenda-setting stories and has published several accomplished journalists and scientists (Derek Lowe’s column being a good example). Another impression that could change with this change of perspective concerns the relevance of peer review itself, and the deceptively deleterious nature of an associated concept Ritchie repeatedly invokes, which could well be the pseudo-problem at the heart of his dilemma: prestige. To quote from a blog post, published in February this year, in which University of Regensburg neurogeneticist Björn Brembs analysed the novelty of results published by so-called ‘prestigious’ journals:

    Taken together, despite the best efforts of the professional editors and best reviewers the planet has to offer, the input material that prestigious journals have to deal with appears to be the dominant factor for any ‘novelty’ signal in the stream of publications coming from these journals. Looking at all articles, the effect of all this expensive editorial and reviewer work amounts to probably not much more than a slightly biased random selection, dominated largely by the input and to probably only a very small degree by the filter properties. In this perspective, editors and reviewers appear helplessly overtaxed, being tasked with a job that is humanly impossible to perform correctly in the antiquated way it is organized now.

    In sum:

    Evidence suggests that the prestige signal in our current journals is noisy, expensive and flags unreliable science. There is a lack of evidence that the supposed filter function of prestigious journals is not just a biased random selection of already self-selected input material. As such, massive improvement along several variables can be expected from a more modern implementation of the prestige signal.

    Take the ‘prestige’ away and one part of Ritchie’s dilemma – the journal Science‘s claim to being an “impartial authority” that stands at risk of being diluted by its ‘news and analysis’ section’s engagement with “grubby political debates” – evaporates. Journals, especially glamour journals like Science, haven’t historically been authorities on ‘good’ science, such as it is, but have served to obfuscate the fact that only scientists can be. But more broadly, the ‘news and analysis’ business has its own expensive economics, and publishers of scientific journals that can afford to set up such platforms should consider doing so, in my view, with a degree and type of separation between these businesses according to their mileage. The simple reasons are:

    1. Reject the false balance: there’s no sensible way publishing a pro-democracy article (calling out cynical and potentially life-threatening untruths) could affect the journal’s ‘prestige’, however it may be defined. But if it does, would the journal be wary of a pro-Republican (and effectively anti-democratic) scientist refusing to publish on its pages? If so, why? The two-part answer is straightforward: because many other scientists as well as journal editors are still concerned with the titles that publish papers instead of the papers themselves, and because of the fundamental incentives of academic publishing – to publish the work of prestigious scientists and sensational work, as opposed to good work per se. In this sense, the knock-back is entirely acceptable in the hopes that it could dismantle the fixation on which journal publishes which paper.

    2. Scientific journals already have access to expertise in various fields of study, as well as an incentive to participate in the creation of a sensible culture of science appreciation and criticism.

    Featured image: Tucker Carlson at an event in West Palm Beach, Florida, December 19, 2020. Credit: Gage Skidmore/Wikimedia Commons, CC BY-SA 2.0.

  • What makes ‘good science journalism’?

    From ‘Your Doppelgänger Is Out There and You Probably Share DNA With Them’, The New York Times, August 23, 2022:

    Dr. Esteller also suggested that there could be links between facial features and behavioral patterns, and that the study’s findings might one day aid forensic science by providing a glimpse of the faces of criminal suspects known only from DNA samples. However, Daphne Martschenko, a postdoctoral researcher at the Stanford Center for Biomedical Ethics who was not involved with the study, urged caution in applying its findings to forensics.

There are two big problems here: 1) Esteller’s comment is at the doorstep of eugenics, and 2) the reporter creates a false balance by reporting both Esteller’s comment and Martschenko’s rebuttal to it, when in fact the right course of action would’ve been to drop this portion entirely, as well as to take a closer look at why Esteller et al. conducted the study in the first place and whether the study paper and other work at the Esteller lab are suspect.

    This said, it’s a bit gratifying (in a bad way) when a high-stature foreign news publication like The New York Times makes a dangerous mistake in a science-related story. Millions of people are misinformed, which sucks, but when independent scientists and other readers publicly address these mistakes, their call-outs create an opportunity for people (though not as many as are misinformed) to understand exactly what is wrong and, more importantly from the PoV of readers in India, that The New York Times also makes mistakes, that it isn’t a standard-bearer of good science journalism and that being good is a constant and diverse process.

    1) “NYT also makes mistakes” is important to know if only to dispel the popular and frustrating perception that “all American news outlets are individually better than all Indian news outlets”. I had to wade through a considerable amount of this when I started at The Hindu a decade ago – at the hands of most readers as well as some colleagues. I still face this in a persistent way in the form of people who believe some article in The Atlantic is much better than an article on the same topic in, say, The Wire Science, for few, if any, reasons beyond the quality of the language. But of course this will always set The Atlantic and The Wire Science and its peers in India apart: English isn’t the first language for many of us – yet it seldom gets in the way of good storytelling. In fact, I’ve often noticed American publications in particular to be prone to oversimplification more often than their counterparts in Europe or, for that matter, in India. In my considered (but also limited) view, the appreciation of science stories is also a skill, and the population that aspires to harbour it in my country is often prone to the Dunning-Kruger effect.

2) “NYT isn’t a standard-bearer of good science journalism” is useful to know because of the less-than-straightforward manner in which publications acquire a reputation for “good science journalism”. Specifically, publications aren’t equally good at covering all branches of scientific study; some are better in some fields and others in other fields. Getting your facts right, speaking to all the relevant stakeholders and using sensitive language will get you 90% of the way, but you can tell the difference between publications by how well they cover the remaining 10%, which comes from beat knowledge, expertise and having the right editors.

3) “Being good is a constant and diverse process” – ‘diverse’ because of the previous point and ‘constant’ because, well, that’s how it is. It’s not that our previous work doesn’t keep us in good standing but that we shouldn’t overestimate how much that standing counts for. This is especially so in this age of short attention spans, short-lived memories and the subtle but pervasive encouragement to be hurtful towards others on the internet. “Good science journalism” is a tag we need to earn by getting every single story right – and in this sense, you, the reader, are better off not doling out lifetime awards to outlets. Instead, understand that no outlet is going to be uniformly excellent at all times and evaluate each story on its own merits. This way, you’ll also create an opportunity for Indian news outlets to be free of the tyranny of unrealistic expectations and even surprise you now and then with excellence of our own.

Finally, none of this is to say that such mistakes are acceptable. They shouldn’t happen and they’re entirely preventable. Rather, it’s a reminder to keep your eyes peeled at all times, and not just when you’re reading an article produced by an Indian outlet.

  • On the record about a source of irritation

    I need to go on the record about a source of mild irritation that seems to resurface in periodic fashion: the recent Current Affairs article about the “dangerous populist science of Yuval Noah Harari”. It’s an excellent article; however, I’m irritated by the fact that it awakened so many more people (at least in my circles) to the superficiality of Harari’s books, especially Homo Deus and Sapiens, than several other articles published many years ago appear to have managed. These books are seven and 11 years old, respectively – sufficient time for these books to become popular as well as for their problems to have become noticeable. I myself have known for at least seven years that Harari’s history books are full of red flags that signal a lack of engagement with the finer but crucial themes of the topics on which he pontificates. Anyone who has been trained in science or has engaged continuously with matters of science (like science journalists) should have been able to pick up on these red flags. Why didn’t they? Yet the Current Affairs article elicited the sort of response from many people that suggested they were glad to have been alerted to his nonsense.

To me, this has all been baffling – and symptomatic of the difficult problem of determining who it is that we can learn about good science from, without such determination devolving into bad gatekeeping. There are many simple solutions to this difficult problem, of course, but their claim to simplicity is in turn undermined by the fact that people at large don’t adopt them. So it is that thousands pick up Homo Deus, believe they’ve been enlightened science-wise and then, years later, marvel at a reality-check. Some of these solutions: familiarise yourself with the ‘index of evidence’; mind the ad verecundiam fallacy: trust experts on the specific topic more than, say, a theoretical physicist writing about mRNA vaccines; attribute all claims openly to their firsthand sources; take even mild conflicts of interest very, very seriously (red-flag #2435: Silicon Valley techbros swooned over Harari’s books; the CoI here is that they’re techno-optimists, subscribers to a technocratic ideology that refuses to admit the precepts of basic sociology and thus focuses on dog-whistles); and always act in good faith.

All such habits of good science, but especially the last one, need to be instilled in all people (and not just scientists and science journalists) over time, so that everyone can communicate good science well. But even then you might not learn that you shouldn’t get your science from Harari or Steven Pinker or others of their ilk, so please remember it now and don’t make this mistake again. And I, in turn, will try to stop making the mistake of assuming that readers’ timely interest in a topic is entirely predictable.

    Featured image: Modified photos of Yuval Noah Harari, March 2017. Credit (original): Daniel Naber/Wikimedia Commons, CC BY-SA 4.0.

  • Dams are bad for rivers. Are skyscrapers bad for winds?

    I was recently in Dubai and often in the shadow of very tall buildings, including the Burj Khalifa and many of its peers on the city’s famed Sheikh Zayed Road. The neighbourhood in which my relatives in the city live has also acquired several new tall apartment buildings in the last decade. My relatives lost their view of the sunrise, sure, but they also lost the wind as and when it blew. And I began to wonder whether, just as dams and gates can kill a river by destroying its natural flow, skyscrapers could distort the wind and consequently the way both people and air pollution are affected by it.

Wind speed is particularly interesting. When architects design tall buildings, they need to account for the structure’s ability to withstand the wind, whose speed increases with altitude and whose effects on the structure diversify as well. For example, when a building causes a wind current to split up to either side as it flows past, the current forms vortices on the other side of the building. This phenomenon is called vortex-shedding. The formation of these vortices causes the wind pressure around the building to undulate in a way that can sway the building from side to side. Depending on the building’s integrity and design, this can lead to anything from cracked window glass to… well, catastrophe.
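To get a feel for the timescales involved, here is a rough back-of-the-envelope sketch – my own, with assumed numbers, not anything from a structural engineer – using the standard Strouhal relation f = St·U/D for the vortex-shedding frequency:

```python
# Rough sketch with assumed numbers: vortex-shedding frequency from the Strouhal relation.
St = 0.2    # typical Strouhal number for a bluff body (assumed)
U = 15.0    # wind speed in m/s at some height up the building (assumed)
D = 40.0    # width of the building face presented to the wind, in m (assumed)

f = St * U / D
print(f"Shedding frequency ~ {f:.3f} Hz, i.e. one vortex roughly every {1/f:.0f} seconds")
```

If that frequency happens to sit near one of the building’s natural frequencies, the side-to-side swaying described above gets amplified – which is why designers tune the structure, or add dampers, to keep the two apart.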

However, it seems such effects – of the wind on buildings – are more widely discussed than the effects of tall buildings on the wind itself. For starters, a building that presents a flat face to oncoming wind can force the wind to scatter across that face (especially if the building is the tallest in the area). So a part of the wind flows upwards along the building, some flows around the sides and some flows downwards. The last has been known to produce downdraughts strong enough to topple standing lorries and move cars.

    The faster the wind, the faster the downdraught. A paper published in December 2019 reported that the average wind speed around the world has been increasing since 2010. The paper was concerned with the effects of this phenomenon on opportunities for wind-based power but it should be interesting to analyse its conclusions vis-à-vis the world’s, including India’s, skyscrapers as well.

    If the streets around the building are too narrow for a sufficient distance, they can further accelerate the downdraught, as well as natural low-altitude winds, turning the paths into deadly wind tunnels. This is due to the Venturi effect. A 1990 study found that trees can help counter it.
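A crude way to see the Venturi effect at work – purely my own illustration, with made-up numbers, assuming roughly incompressible flow through a simple two-dimensional street canyon – is mass conservation, A1·v1 = A2·v2:

```python
# Crude sketch with assumed numbers: continuity (A1*v1 = A2*v2) for wind funnelled
# into a narrower stretch of street between tall buildings.
A1, v1 = 30.0, 5.0   # effective street width (m) and wind speed (m/s) upstream (assumed)
A2 = 10.0            # width of the narrow stretch between the buildings (assumed)

v2 = A1 * v1 / A2
print(f"Wind speed in the narrow stretch ~ {v2:.0f} m/s")   # ~15 m/s from a 5 m/s breeze
```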

    With the exception of Mumbai, most Indian cities don’t yet have the skyscraper density of, say, Singapore, New York or Dubai, but the country is steadily urbanising and its cities’ population densities are on the rise. (Pardon me, I’ve made a rookie mistake: skyscrapers aren’t high density – see this and this. Instead, let me say:) The rich in India are becoming richer, and as cities expand, there’s no reason why more skyscrapers shouldn’t pop up – either as lavish residences for the ultra-wealthy or to accommodate corporate offices. We are all already familiar with an obsession among the powers that be with building increasingly taller structures as a pissing contest.

    A view of a portion of Mumbai’s skyline, March 25, 2023. Credit: আজিজ/Wikimedia Commons, CC BY-SA 4.0

    This possibility is encouraged by the fact that most of India’s cities (if not all of them) are semi-planned at best. City officials also seldom enforce building codes. Experts have written about the effects of the latter on Indians’ exposure to hydro/seismological disasters (remember: buildings kill people), but in future, we should expect there to be an effect due to the buildings’ interaction with the wind as well.

Poorly enforced building codes, especially when helped along by corrupt governments, also have the effect of enabling builders to violate floor-space indices and build structures so tall that they exacerbate water shortage, water pollution, local road traffic, power consumption, etc. The travails of South Usman Road in Chennai, where I lived for many years, come to mind. In fact, it is telling that India’s tallest building, the Palais Royale in Mumbai, has also been beleaguered by litigation over illegalities in its construction. According to a 2012 post on the Structural Engineering Forum of India website, the consulting firm RWDI analysed the effects of winds on the Palais Royale, but the post has nothing to suggest the reciprocal analysis – of the building’s effects on the wind – was also done.

    Remember also that most of India’s cities already have very polluted air (AQI in excess of 200), so we can expect the downdraughts to be foul as well, effectively bringing pollutants down to where the people walk. I’m also similarly concerned about the ability of relatively higher winds to disperse pollutants if they are going to be scattered more often by a higher density of skyscrapers, akin to the concept of a mean free path in physics.

    One thing is for sure: our skyscrapers’ wind problem isn’t just going to blow over.

  • How do you trap an electron?

I’ve always found the concept of two forces on an object cancelling each other out strange. We say they cancel if the changes they exert completely offset each other, leaving the object unaffected. But is the object really unaffected? If the two forces act in absolute opposition and at the exact same time, the object may be unaffected. But practically speaking, this is seldom the case and the object experiences some net force, to which it may not respond in a meaningful timeframe, or to which it responds in an imperceptible or negligible way.

For example, imagine you are standing exactly still and two people standing on either side of you punch you hard on your upper arms, each in an attempt to push you towards the other. The two impulses may cancel each other out but you will still feel the pain in your arms. You might counter-argue that this is true only because the human body has considerable bulk, which means a force applied on one side of the body is transmitted through a series of media before it manifests on the other side, and that en route it loses some of its energy as stress and strain in your muscles. This is true – but the concept of cancellation is imperfect even with microscopic objects.

    Consider the case of the quadrupole trap – a device used to hold charged particles like electrons and ions in place, i.e. at a fixed point in three dimensions. This device was invented because it’s impossible to confine a charged particle in a static electric field. Imagine eight electrons are placed at the vertices of an imaginary cube, and a ninth electron is placed at the centre. You might reason that since like charges repel, the repulsive force exerted by the eight electrons should hold the ninth, central electron in place – but no. They won’t. The central electron will drift away if another force acts on it, instead of getting displaced by a little and then returning to its original position.

    This is because of Earnshaw’s theorem. Thanks to Twitter user @catwbutter for explaining it to me thus:

    You can understand the theorem as saying the following: In a configuration of n charges, you ask if one is in equilibrium. [Imagine the cubic prison of n = 8 electrons at the vertices and one at the centre – this one needs to be at equilibrium.] You displace it from its point a little bit. For there to be equilibrium, the force on it needs to point radially inward at the original point you displaced it from, regardless of where you displaced the charge to. This is only possible if there is a charge at the original point – but there isn’t in the setup.

    Formally, Earnshaw’s theorem states that a collection of charged particles (of the same kind, i.e. only electrons or only protons or only ions, etc.) can’t maintain a stable and stationary equilibrium if the only thing maintaining that equilibrium is the electrostatic forces between them. In this case, the concept of ‘cancelling out’ becomes irrelevant because of the way the electric fields around the charged particles behave. One way to make it relevant is to use an exception to Earnshaw’s theorem: by using moving charges or time-varying forces.
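A quick numerical illustration of this – my own sketch, not part of the original explanation – is to compute the electrostatic potential energy of a like test charge at and near the centre of that cube of eight electrons. The energy rises in some directions and falls in others, so the centre is a saddle point, never a minimum:

```python
# Sketch: the centre of a cube of eight like charges is a saddle point of the potential
# energy of a ninth like charge, not a minimum - a numerical face of Earnshaw's theorem.
import numpy as np
from itertools import product

vertices = np.array(list(product([-1.0, 1.0], repeat=3)))   # the eight cube corners

def energy(r):
    # potential energy of a like test charge at position r, in units where k*q^2 = 1
    return np.sum(1.0 / np.linalg.norm(vertices - r, axis=1))

print(f"at the centre        : {energy(np.zeros(3)):.6f}")
print(f"0.1 towards a face   : {energy(np.array([0.1, 0.0, 0.0])):.6f}")               # energy falls
print(f"0.1 towards a corner : {energy(np.array([0.1, 0.1, 0.1]) / np.sqrt(3)):.6f}")  # energy rises
```

Because the energy falls along the directions towards the cube’s faces, a nudge that way lets the central electron slide out of the ‘prison’ altogether, exactly as the theorem demands.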

Imagine you’re walking along a path when a cat appears in front of you and blocks the way. You step to the cat’s right but it moves and still blocks you. You step to the left and it moves again. You’re stepping right and left because you see a gap there for you to go through, but every time you try, the cat moves quickly to block you. Scientists applied a similar kind of thinking to the quadrupole ion trap. They surround a clump of electrons, or any charged particles, with three objects. One is a hyperbolic cylinder called a ring electrode; it is capped at each end by two hyperbolic electrodes. The ring electrode needs to be exactly halfway between the capping electrodes. The electrons are injected into the centre.

Note that the ring electrode and the sides of the capping electrodes should ideally be inclined at an angle of a little over 53º relative to the z axis. But whatever the angle is, when a voltage is applied to the electrodes, the resulting electric field inside the trap will have four poles – thus the name ‘quadrupole’ – and the field along the poles will be asymptotic to the electrodes.

This electric field has two important properties. The first is that it is inhomogeneous: it is not uniform across space. Instead, it is weakest at the centre and becomes stronger as the field is squeezed between the electrodes. Second, the electric field is periodic, meaning that it constantly switches between two directions – thanks to the alternating current (AC) supplied to the electrodes. (Recall that AC periodically reverses its direction while DC doesn’t.)

The resulting periodic, inhomogeneous electric field exerts a unique influence on the electrons at the centre of the trap. If the field had been periodic but homogeneous, and if something had knocked an electron away from the centre, the electron would have oscillated about its new point, moving back and forth. But because the field is inhomogeneous, one half of the electron’s oscillation will be through an area where the field is stronger and the other half through an area where the field is weaker. And the stronger-field area will exert a stronger force on the electron than the weaker-field area does. The result is that the electron experiences a net force towards the weaker-field area. This is called the ponderomotive force. And because the weakest field lies at the centre – where the electrons are originally confined – the apparatus will move any displaced electrons back there. Thus, it’s a trap.
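For the record – my own addition, quoting the standard textbook expression rather than anything specific to this post – the time-averaged ponderomotive force on a particle of charge q and mass m in an oscillating field of amplitude E0(r) and angular frequency ω is:

```latex
\vec{F}_p \;=\; -\,\frac{q^2}{4 m \omega^2}\,\nabla \bigl|\vec{E}_0(\vec{r})\bigr|^2
```

The minus sign in front of the gradient of the field intensity is the whole story: the force always points from stronger field towards weaker field, i.e. towards the centre of the trap.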

    When Wolfgang Paul, Helmut Steinwedel and others first developed the quadrupole ion trap in the latter half of the 20th century, they found that the motion of the charged particles within the trap could be modelled according to Mathieu’s equation. This is a differential equation that the French mathematician Émile Léonard Mathieu had uncovered in the 19th century itself, when he was studying the vibrating membranes of elliptical drums.
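Here is a minimal sketch – my own, not from the original post – that integrates the Mathieu equation, d²u/dξ² + (a − 2q·cos 2ξ)·u = 0, for two illustrative sets of the dimensionless trap parameters (a, q). One choice lies in a stability region, so the motion stays bounded (trapped); the other does not, and the amplitude blows up (escaped):

```python
# Sketch: bounded vs unbounded solutions of the Mathieu equation, which models a charged
# particle's motion in an ideal quadrupole (Paul) trap. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def mathieu(xi, y, a, q):
    u, du = y
    return [du, -(a - 2.0 * q * np.cos(2.0 * xi)) * u]

for a, q in [(0.0, 0.3), (0.0, 1.0)]:   # q = 0.3 lies inside the first stability region; q = 1.0 does not
    sol = solve_ivp(mathieu, (0.0, 200.0), [1.0, 0.0], args=(a, q), max_step=0.01)
    print(f"a = {a}, q = {q}: max |u| = {np.abs(sol.y[0]).max():.3e}")
```

In the bounded case the solution consists of a fast ‘micromotion’ at the drive frequency riding on a slower, larger oscillation – the secular motion produced by the ponderomotive force described above.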

During the operation of the quadrupole ion trap, the charged particles experience ponderomotive forces in two directions in alternating fashion: an axial force exerted by the capping electrodes and a radial force exerted by the ring electrode (roughly, from the top and bottom, and from the sides). The frequency of the AC applied to the electrodes has to be such that the forces switch sides faster than the electrons can escape. This is the cat analogy from earlier: the cat is the electric field configuration and you are the trapped particle.

    With this device in mind, ask yourself: have the electrons been kept in place because counteracting forces have cancelled themselves out? No – that is a static picture that doesn’t allow for any deviations from the normal. If an electron does get displaced from the cubic prison described earlier, Earnshaw’s theorem ensures that it can just escape altogether.

    The quadrupole ion trap represents a more dynamic picture. Here, electrons are either held in place or coaxed back into place by a series of forces interacting in a sophisticated way, sometimes in opposite directions but never quite simultaneously, such that particles can get displaced, but when they are, they are gently but surely restored to the desired state. In this picture, counteracting forces still leave behind a net force. In this picture, erring is not the end of the world.

    Featured image credit: Martin Adams/Unsplash.

  • The physics of Spain’s controversial air-con decree

    The Government of Spain published a decree earlier this week that prevents air-conditioners from being set at a temperature lower than 27º C in the summer in an effort to lower energy consumption and wean the country off of natural gas pumped from Russia.

A Twitter thread by Euronews compared the measure to one by France, which requires the doors and windows of air-conditioned spaces to be kept closed. However, the two measures are not really comparable: France’s measure is, in a manner of speaking, shallower, because it doesn’t go as far as thermodynamics allows us to. Spain’s move is instead comparable to one that Japan instituted a couple of years ago. Some basic thermodynamics here should be enlightening.

Let us consider two scenarios. In the first: air-conditioners operate at different efficiencies at different temperatures. From about five years ago, I remember the thermodynamic efficiency varying by around 10% across the range of operating temperatures. Also note that most air-conditioners are designed and tested to operate at or near 23º to 25º C – an ambient temperature range that falls within the ideal ranges across most countries and cultures, although it may not account for differences in wind speed, relative humidity and, of course, living conditions.

    So let’s say an air-conditioner operates at 55% efficiency when the temperature setting is at 27º C. It will incur a thermodynamic penalty if it operates at a lower temperature. Let’s say the penalty is 10% at 20º C. (I’ve spelt out the math of this later in this post.) This will be 10% of 55%, which means the thermodynamic efficiency at 20º C will be 55% – 5.5% = 49.5%. Similarly, there could be a thermodynamic efficiency gain when the air-conditioner temperature is set at a higher 32º C instead of 27º C. This gain translates to energy saved. Let’s call this figure ES (for ‘energy saved’).

    In the second scenario: the air-conditioner works by pumping heat out of a closed system – a room, for example – into the ambient environment. The cooler the room needs to be, the more work the air-conditioner has to undertake to pump more heat out of the room. This greater work translates to a greater energy consumption. Let’s call this amount EC.

Now, the question for policymakers is whether ES is greater than EC under the following conditions:

1. When the relative humidity is below a certain value;
2. When the room’s minimum temperature is restricted to 27º C;
3. Given the chances of thermal shock; and
4. Given the strength of the urban heat-island effect.

    Let’s cycle through these conditions.

    1. Relative humidity – The local temperature and the relative humidity together determine the wet-bulb temperature. As I have explained before, exposure to a wet-bulb temperature greater than 32º C can quickly debilitate humans, and after a few hours could even lead to death. But as it happens, if the indoor temperature is 27º C, the wet-bulb temperature can never reach 32º C; even at 99% relative humidity, it reaches a value of 26.92º C.
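To check that figure, here is a minimal sketch – my own, using Stull’s empirical wet-bulb approximation (valid roughly between 5% and 99% relative humidity), not whatever calculator the original figure came from:

```python
# Sketch: Stull's (2011) empirical approximation for the wet-bulb temperature,
# used here only to sanity-check the 26.92 deg C figure quoted above.
import math

def wet_bulb_stull(T, RH):
    """T in deg C, RH in percent; returns the wet-bulb temperature in deg C."""
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH) - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

print(round(wet_bulb_stull(27.0, 99.0), 2))   # ~26.9 deg C, consistent with the figure above
```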

2. 27º C limit – The operating range of the sole air-conditioner in my house is 18º to 32º C when the ambient temperature is 18º to 48º C. In thermodynamic speak, an air-conditioner operates on the reverse Carnot cycle, and for such cycles there is a simple, fixed formula for the maximum coefficient of performance (CoP): the indoor temperature divided by the difference between the ambient and indoor temperatures, with all temperatures in kelvin. The higher the CoP, the higher the machine’s thermodynamic efficiency. (Note that while the proportionality holds, the CoP doesn’t directly translate to efficiency.) Let’s fix the ambient temperature at 35º C. If the indoor temperature is 20º C, the maximum CoP is about 19.5, and if the indoor temperature is 27º C, the maximum CoP is about 37.5 – nearly double. So there is an appreciable thermodynamic gain if we set the air-conditioner’s temperature to a higher value (within the operating range and assuming the ambient temperature is greater than the indoor temperature).
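A minimal sketch of that calculation – my own, using the ideal reverse-Carnot limit with temperatures converted to kelvin:

```python
# Sketch: ideal (reverse-Carnot) coefficient of performance, COP = T_cold / (T_hot - T_cold),
# with temperatures in kelvin. Real air-conditioners achieve only a fraction of this limit.
def carnot_cop(t_indoor_c, t_ambient_c):
    t_cold = t_indoor_c + 273.15
    t_hot = t_ambient_c + 273.15
    return t_cold / (t_hot - t_cold)

for t_set in (20, 27):
    print(f"set-point {t_set} deg C, ambient 35 deg C: max CoP = {carnot_cop(t_set, 35):.1f}")
# set-point 20 deg C -> ~19.5; set-point 27 deg C -> ~37.5
```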

3. Thermal shock – Thermal shock is an underappreciated consequence of navigating two spaces at markedly different temperatures. It arises particularly in the form of the cold-shock response, when the body is suddenly exposed to a low temperature after having habituated itself to a higher one – such as 20º C versus 40º C. The effect is especially pronounced on the heart, which has to work harder to pump blood than it did when the body was in warmer surroundings. In extreme cases, the effects include vasoconstriction and heart failure. The cold-shock response is most relevant in areas where the ambient conditions are hot and arid, such as in Rajasthan, where the outdoors routinely simmer at 40-45º C in the summer while people intuitively respond by setting their air-conditioners to 18º C or even lower.

4. Urban heat islands – When a single air-conditioner is required to extract enough heat from a room to lower the room’s temperature by 15º C instead of by 8º C, it will consume more energy. If its thermal efficiency is (an extremely liberal) 70%, then 30% of the energy it consumes will be released into the surroundings as waste heat, on top of the heat it pumps out of the room. Imagine a medium-sized office building fitted with 25 such air-conditioners, a reasonable estimate. During the day, then, it will be similarly reasonable to conclude that the temperature in the immediate vicinity of the building will increase by 0.5º C or so. If there is a cluster of such buildings, the temperature increase is bound to be on the order of 2º to 3º C, if not more. This can only exacerbate the urban heat-island effect, which adds to our heat stress as well as degrades the local greenery and faunal diversity.

Take all four factors together now and revisit the Spanish government’s decree to limit air-conditioners’ minimum operating temperature to 27º C during summer – and it seems entirely reasonable. However, a similar rule shouldn’t be instituted in India, because Spain is much smaller, has less meteorological and climatological variation, and also has less income inequality, which translates to lower exposure to life-threatening living conditions and better access to healthcare on average.

  • 65 years of the BCS theory

Thanks to an arithmetic mistake, I thought 2022 was the 75th anniversary of the invention (or discovery?) of the BCS theory of superconductivity. It’s really the 65th anniversary, but since I’d worked myself up to write about it, I’m going to. 🤷🏽‍♂️ It also helps that the theory is a remarkable piece of physics that makes sense of what is, weirdly, a macroscopic effect of microscopic causes.

There are several ways to classify superconductors – materials that conduct electricity with zero resistance under certain conditions. One of them is as conventional or unconventional. A superconductor is conventional if BCS theory can explain its superconductivity. ‘BCS’ are the initials of the theory’s three originators: John Bardeen, Leon Cooper and John Robert Schrieffer. BCS theory explains (conventional) superconductivity by describing how the electrons in a material enter a collective superfluidic state.

    At room temperature, the valence electrons flow around a material, being occasionally scattered by the grid of atomic nuclei or impurities. We know this scattering as electrical resistance.

    The electrons also steer clear of each other because of the repulsion of like charges (Coulomb repulsion).

    When the material is cooled below a critical temperature, however, vibrations in the atomic lattice encourage the electrons to become paired. This may defy what we learnt in high school – that like charges repel – but the picture is a little more complicated, and it might make more sense if we adopt the lens of energy instead.

A system will favour a state in which it has lower energy over one in which it has more energy. When two carriers of like charges, like two electrons, approach each other, they repel each other more strongly the closer they get. This repulsion increases the system’s energy (in the form of electrostatic potential energy).

    In some materials, conditions can arise in which two electrons can pair up – become correlated with each other – across relatively long distances, without coming close to each other, rendering the Coulomb repulsion irrelevant. This correlation happens as a result of the electrons’ effect on their surroundings. As an electron moves through the lattice of positively charged atomic nuclei, it exerts an attractive force on the nuclei, which respond by tending towards the electron. This increases the amount of positive potential near the electron, which attracts another electron nearby to move closer as well. If the two electrons have opposite spins, they become correlated as a Cooper pair, kept that way by the attractive potential imposed by the atomic lattice.

    Leon Cooper explained that neither the source of this potential nor its strength matter – as long as it is attractive, and the other conditions hold, the electrons will team up into Cooper pairs. In terms of the system’s energy, the paired state is said to be energetically favourable, meaning that the system as a whole has a lower energy than if the electrons were unpaired below the critical temperature.
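For the record – my own addition, quoting the standard weak-coupling results rather than anything stated in the post – the pairing shows up in expressions like these, where ω_D is the Debye frequency of the lattice vibrations, N(0) the density of electronic states at the Fermi level and V the strength of the attractive interaction:

```latex
\Delta(0) \;\approx\; 2\,\hbar\omega_D\, e^{-1/N(0)V},
\qquad
k_B T_c \;\approx\; 1.13\,\hbar\omega_D\, e^{-1/N(0)V}
```

The exponential is non-zero for any V > 0, which is the mathematical face of Cooper’s point: an arbitrarily weak attraction is enough to produce pairing and a finite critical temperature.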

Keeping the material cooled to below this critical temperature is important: while the paired state is energetically favourable, the state itself arises only below the critical temperature. Above the critical temperature, the electrons can’t access this state at all because they have too much kinetic energy. (The temperature of a material is a measure of the average kinetic energy of its constituent particles.)

    Cooper’s theory of the electron pairs fit into John Bardeen’s theory, which sought to explain changes in the energy states of a material as it goes from being non-superconducting to superconducting. Cooper had also described the formation of electron pairs one at a time, so to speak, and John Robert Schrieffer’s contribution was to work out a mathematical way to explain the formation of millions of Cooper pairs and their behaviour in the material.

    The trio consequently published its now-famous paper, ‘Microscopic Theory of Superconductivity’, on April 1, 1957.

    (I typo-ed this as 1947 on a calculator, which spit out the number of years since to be 75. 😑 One could have also expected me to remember that this is India’s 75th year of independence and that BCS theory was created a decade after 1947, but the independence hasn’t been registering these days.)

    Anyway, electrons by themselves belong to a particle class called fermions. The other known class is that of the bosons. The difference between fermions and bosons is that the former obey Pauli’s exclusion principle while the latter do not. The exclusion principle forbids two fermions in the same system – like a metal – from simultaneously occupying the same quantum state. This means the electrons in a metal have a hierarchy of energies in normal conditions.

    However, a Cooper pair, while composed of two electrons, is a boson, and doesn’t obey Pauli’s exclusion principle. The Cooper pairs of the material can all occupy the same state – i.e. the state with the lowest energy, more popularly called the ground state. This condensate of Cooper pairs behaves like a superfluid: practically flowing around the material, over, under and through the atomic lattice. Even when a Cooper pair is scattered off by an atomic nucleus or an impurity in the material, the condensate doesn’t break formation because all the other Cooper pairs continue their flow, and eventually also reintegrate the scattered Cooper pair. This flow is what we understand as electrical superconductivity.

    “BCS theory was the first microscopic theory of superconductivity,” per Wikipedia. But since its advent, especially since the late 1970s, researchers have identified several superconducting materials, and behaviours, that neither BCS theory nor its extensions have been able to explain.

When a material transitions into its superconducting state, it exhibits four changes. Observing these changes is how researchers confirm that the material is now superconducting. (In no particular order:) First, the material loses all electric resistance. Second, any magnetic field inside the material’s bulk is pushed to the surface. Third, the electronic specific heat jumps abruptly as the material is cooled through the critical temperature, and then falls off rapidly at lower temperatures. Fourth, just as the energetically favourable state appears, some other possible states disappear.

    Physicists experimentally observed the fourth change only in January this year – based on the transition of a material called Bi-2212 (bismuth strontium calcium copper oxide, a.k.a. BSCCO, a.k.a. bisko). Bi-2212 is, however, an unconventional superconductor. BCS theory can’t explain its superconducting transition, which, among other things, happens at a higher temperature than is associated with conventional materials.

    In the January 2022 study, physicists also reported that Bi-2212 transitions to its superconducting state in two steps: Cooper pairs form at 120 K – related to the fourth sign of superconductivity – while the first sign appears at around 77 K. To compare, elemental rhenium, a conventional superconductor, becomes superconducting in a single step at 2.4 K.

    A cogent explanation of the nature of high-temperature superconductivity in cuprate superconductors like Bi-2212 is one of the most important open problems in condensed-matter physics today. It is why we still await further updates on the IISc team’s room-temperature superconductivity claim.

  • A quantum theory of consciousness

    We seldom have occasion to think about science and religion at the same time, but the most interesting experience I have had doing that came in October 2018, when I attended a conference called ‘Science for Monks’* in Gangtok, Sikkim. More precisely, it was one edition of a series of conferences by that name, organised every year between scientists and science communicators from around the world and Tibetan Buddhist monks in the Indian subcontinent. Let me quote from the article I wrote after the conference to illustrate why such engagement could be useful:

    “When most people think about the meditative element of the practice of Buddhism, … they think only about single-point meditation, which is when a practitioner closes their eyes and focuses their mind’s eye on a single object. The less well known second kind is analytical meditation: when two monks engage in debate and question each other about their ideas, confronting them with impossibilities and contradictions in an effort to challenge their beliefs. This is also a louder form of meditation. [One monk] said that sometimes, people walk into his monastery expecting it to be a quiet environment and are surprised when they chance upon an argument. Analytical meditation is considered to be a form of evidence-sharpening and a part of proof-building.”

    As interesting as the concept of the conference is, the 2018 edition was particularly so because the field of science on the table that year was quantum physics. That quantum physics is counter-intuitive is a banal statement; it is chock-full of twists in the tale, interpretations, uncertainties and open questions. Even a conference among scientists was bound to be confusing – imagine the scope of opportunities for confusion in one between scientists and monks. As if in response to this risk, the views of the scientists and the monks were very cleanly divided throughout the event, with neither side wanting to tread on the toes of the other, and this in turn dulled the proceedings. And while this was a sensible thing to do, I was disappointed.

    This said, there were some interesting conversations outside the event halls, in the corridors, over lunch and dinner, and at the hotel where we were put up (where speakers in the common areas played ‘Om Mani Padme Hum’ 24/7). One of them centered on the rare (possibly) legitimate idea in quantum physics in which Buddhist monks, and monks of every denomination for that matter, have considerable interest: the origin of consciousness. While any sort of exposition or conversation involving the science of consciousness has more often than not been replete with bad science, this idea may be an honourable exception.

Four years later, I only remember that there was a vigorous back-and-forth between two monks and a physicist, not the precise contents of the dialogue or who participated. The subject was the Orch OR hypothesis advanced by the physicist Roger Penrose and quantum-consciousness theorist Stuart Hameroff. According to a 2014 paper authored by the pair, “Orch OR links consciousness to processes in fundamental space-time geometry.” It traces the origin of consciousness to cellular structures inside neurons, called microtubules, being in a superposition of states, which then collapses into a single state in a process induced by gravity.

    In the famous Schrödinger’s cat thought-experiment, the cat exists in a superposition of ‘alive’ and ‘dead’ states while the box is closed. When an observer opens the box and observes the cat, its state collapses into either a ‘dead’ or an ‘alive’ state. Few scientists subscribe to the Orch OR view of self-awareness; the vast majority believe that consciousness originates not within neurons but in the interactions between neurons, happening at a large scale.

    ‘Orch OR’ stands for ‘orchestrated objective reduction’, with Penrose being credited with the ‘OR’ part. That is also the part at which mathematicians and physicists have directed much of their criticism.

It begins with Penrose’s idea of spacetime blisters. According to him, at the Planck scale (around 10^-35 m), the spacetime continuum is discrete, not continuous, and each quantum superposition occupies a distinct piece of the spacetime fabric. These pieces are called blisters. Penrose postulated that gravity acts on each of these blisters and destabilises them, causing the superposed states to collapse into a single state.

    A quantum computer performs calculations using qubits as the fundamental units of information. The qubits interact with each other in quantum-mechanical processes like superposition and entanglement. At some point, the superposition of these qubits is forced to collapse by making an observation, and the state to which it collapses is recorded as the computer’s result. In 1989, Penrose proposed that there could be a quantum-computer-like mechanism operating in the human brain and that the OR mechanism could be the act of observation that forces it to terminate.

    One refinement of the OR hypothesis is the Diósi-Penrose scheme, with contributions from Hungarian physicist Lajos Diósi. In this scheme, spacetime blisters are unstable and the superposition collapses when the mass of the superposed states exceeds a fixed value. In the course of his calculations, Diósi found that at the moment of collapse, the system must emit some electromagnetic radiation (due to the motion of electrons).
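For context – my own addition of the standard way the Diósi-Penrose collapse time is usually written, not a quote from the papers discussed here – the scheme predicts that a superposition of two mass distributions collapses on a timescale

```latex
\tau \;\approx\; \frac{\hbar}{E_G}
```

where E_G is the gravitational self-energy of the difference between the two superposed mass distributions: the larger or more widely separated the superposed masses, the shorter-lived the superposition.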

Hameroff made his contribution by introducing microtubules as a candidate location for qubit-like objects, which could collectively set up a quantum-computer-like system within the brain.

    There have been some experiments in the last two decades that have tested whether Orch OR could manifest in the brain, based on studies of electron activity. But a more recent study suggests that Orch OR may just be infeasible as an explanation for the origin of consciousness.

Here, a team of researchers – including Lajos Diósi – first looked for the electromagnetic radiation at the instant the superposition collapsed. The researchers didn’t find any, but the parameters of their experiment (including the masses involved) allowed them to set lower limits on the scale at which Orch OR might work. That is, they had a way to figure out how the distance, time and mass might be related in an Orch OR event.

They set these calculations out in a new paper, published in the journal Physics of Life Reviews on May 17. According to their paper, they fixed the time-scale of the collapse at 0.025 to 0.5 seconds, which is comparable to the amount of time in which our brain recognises conscious experience. They found that at a spatial scale of 10^-15 m – which Penrose has expressed a preference for – a superposition that collapses in 0.025 seconds would require 1,000 times more tubulins than there are in the brain (10^20), an impossibility. (Tubulins polymerise to form microtubules.) But at a scale of around 1 nm, the researchers worked out that the brain would need only 10^12 tubulins for their superposition to collapse in around 0.025 seconds. This is still a very large number of tubulins and a daunting task even for the human brain, but it isn’t impossible in the way the collapse over 10^-15 m is. According to the team’s paper,

The Orch OR based on the DP [Diósi-Penrose] theory is definitively ruled out for the case of [10^-15 m] separation, without needing to consider the impact of environmental decoherence; we also showed that the case of partial separation requires the brain to maintain coherent superpositions of tubulin of such mass, duration, and size that vastly exceed any of the coherent superposition states that have been achieved with state-of-the-art optomechanics and macromolecular interference experiments. We conclude that none of the scenarios we discuss … are plausible.

However, the team hasn’t eliminated Orch OR altogether; instead, they wrote that they intend to refine the Diósi-Penrose scheme into a more “sophisticated” version that, for example, may not entail the release of electromagnetic radiation or may provide a more feasible pathway for superposition collapse. So far, in their telling, they have used experimental results to learn where their theory should improve if it is to remain a plausible description of reality.

    If and when the ‘Science for Monks’ conferences, or those like it, resume after the pandemic, it seems we may still be able to put Orch OR on the discussion table.

    * I remember it was called ‘Science for Monks’ in 2018. Its name appears to have been changed since to ‘Science for Monks and Nuns’.

  • 25 years of Maldacena’s bridge

Twenty-five years ago, in 1997, an Argentine physicist named Juan Martin Maldacena published what would become the most highly cited physics paper in history (more than 20,000 citations to date). In the paper, Maldacena described a ‘bridge’ between two theories that describe how our world works, but separately, without meeting each other. These are the field theories that describe the behaviour of energy fields (like the electromagnetic field) and subatomic particles, and the theory of general relativity, which deals with gravity and the universe at the largest scales.

    Field theories have many types and properties. One of them is a conformal field theory: a field theory that doesn’t change when it undergoes a conformal transformation – i.e. one which preserves angles but not lengths pertaining to the field. As such, conformal field theories are said to be “mathematically well-behaved”.

    In relativity, space and time are unified into the spacetime continuum. This continuum can broadly exist in one of three possible spaces (roughly, universes of certain ‘shapes’): de Sitter space, Minkowski space and anti-de Sitter space. de Sitter space has positive curvature everywhere – like a sphere (but is empty of any matter). Minkowski space has zero curvature everywhere – i.e. a flat surface. Anti-de Sitter space has negative curvature everywhere – like a hyperbola.

    Because these shapes are related to the way our universe looks and works, cosmologists have their own way to understand these spaces. If the spacetime continuum exists in de Sitter space, the universe is said to have a positive cosmological constant. Similarly, Minkowski space implies a zero cosmological constant and anti-de Sitter space a negative cosmological constant. Studies by various space telescopes have found that our universe has a positive cosmological constant, meaning ‘our’ spacetime continuum occupies a de Sitter space (sort of, since our universe does have matter).

    In 1997, Maldacena found that a description of quantum gravity in anti-de Sitter space in N dimensions is the same as a conformal field theory in N – 1 dimensions. This – called the AdS/CFT correspondence – was an unexpected but monumental discovery that connected two kinds of theories that had thus far refused to cooperate. (The Wire Science had a chance to interview Maldacena about his past and current work in 2018, in which he provided more insights on AdS/CFT as well.)

    In his paper, Maldacena demonstrated his finding by using the example of string theory as a theory of quantum gravity in anti-de Sitter space – so the finding was also hailed as a major victory for string theory. String theory is a leading contender for a theory that can unify quantum mechanics and general relativity. However, we have found no experimental evidence of its many claims. This is why the AdS/CFT correspondence is also called the AdS/CFT conjecture.

    Nonetheless, thanks to the correspondence, (mathematical) physicists have found that some problems that are hard on the ‘AdS’ side are much easier to crack on the ‘CFT’ side, and vice versa – all they had to do was cross Maldacena’s ‘bridge’! This was another sign that the AdS/CFT correspondence wasn’t just a mathematical trick but could be a legitimate description of reality.

    So how could it be real?

    The holographic principle

    In 1997, Maldacena proved that a string theory in five dimensions was the same as a conformal field theory in four dimensions. However, gravity in our universe exists in four dimensions – not five. So the correspondence came close to providing a unified description of gravity and quantum mechanics, but not close enough. Nonetheless, it gave rise to the possibility that an entity that existed in some number of dimensions could be described by another entity that existed in one fewer dimensions.

In fact, the AdS/CFT correspondence didn’t so much give rise to this possibility as prove it, at least mathematically; the possibility itself had been around for several years by then, as the holographic principle. The Dutch physicist Gerardus ‘t Hooft first proposed it, and the American physicist Leonard Susskind brought it firmly into the realm of string theory in the 1990s. One way to state the holographic principle, in the words of physicist Matthew Headrick, is thus:

    “The universe around us, which we are used to thinking of as being three dimensional, is actually at a more fundamental level two-dimensional and that everything we see that’s going on around us in three dimensions is actually happening in a two-dimensional space.”

    This “two-dimensional space” is the ‘surface’ of the universe, located at an infinite distance from us, where information is encoded that describes everything happening within the universe. It’s a mind-boggling idea. ‘Information’ here refers to physical information, such as, to use one of Headrick’s examples, “the positions and velocities of physical objects”. In beholding this information from the infinitely faraway surface, we apparently behold a three-dimensional reality.

    It bears repeating that this is a mind-boggling idea. We have no proof so far that the holographic principle is a real description of our universe – we only know that it could describe our reality, thanks to the AdS/CFT correspondence. This said, physicists have used the holographic principle to study and understand black holes as well.

In 1915, Albert Einstein’s general theory of relativity provided a set of complicated equations to understand how mass, the spacetime continuum and the gravitational force are related. Within a few months, physicists Karl Schwarzschild and Johannes Droste, followed in subsequent years by Georges Lemaitre, Subrahmanyan Chandrasekhar, Robert Oppenheimer and David Finkelstein, among others, began to realise that one of the equations’ exact (i.e. non-approximate) solutions indicated the existence of a point mass around which space was wrapped completely, preventing even light from escaping from inside this space to the outside. This was the black hole.

Because black holes were exact solutions, physicists assumed that they didn’t have any entropy – i.e. that their insides didn’t have any disorder. If there had been such disorder, it should have appeared in Einstein’s equations. It didn’t, so QED. But in the early 1970s, the Israeli-American physicist Jacob Bekenstein noticed a problem: if a system with entropy, like a container of hot gas, was thrown into the black hole, and the black hole doesn’t have entropy, where does the entropy go? It had to go somewhere; otherwise, the black hole would violate the second law of thermodynamics – that the entropy of an isolated system, like our universe, can’t decrease.

    Bekenstein postulated that black holes must also have entropy, and that the amount of entropy is proportional to the black hole’s surface area, i.e. the area of the event horizon. He also worked out that there is a limit to the amount of entropy a given volume of space can contain. Physicists had separately established that all black holes can be described by just three observable attributes: their mass, electric charge and angular momentum. So if a black hole’s entropy increases because it has swallowed some hot gas, this change ought to manifest as a change in one, some or all of these three attributes.
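    To put a number on ‘proportional to the surface area’: the standard Bekenstein-Hawking formula – again textbook material I’m adding for reference – reads

    $$ S = \frac{k_B c^3}{4 G \hbar} A $$

    where A is the area of the event horizon, k_B is Boltzmann’s constant, G the gravitational constant, ħ the reduced Planck constant and c the speed of light. The entropy scales with the horizon’s area, not with the volume it encloses – which is exactly the kind of relationship the holographic principle generalises.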

    Taken together: when some hot gas is tossed into a black hole, the gas falls past the event horizon, but to an observer located outside and away from the event horizon, the information about its entropy appears to be encoded on the black hole’s surface. Note here that the black hole, a sphere, is a three-dimensional object whereas its surface is a curved two-dimensional sheet. That is, all the information required to describe a 3D black hole could in fact be encoded on its 2D surface – which evokes the AdS/CFT correspondence!

    However, the fact that the event horizon of a black hole preserves information about objects falling into it gives rise to another problem. Quantum mechanics requires all physical information (like “the positions and velocities of physical objects”, in Headrick’s example) to be conserved: such information can’t ever be destroyed. That wouldn’t be a worry if black holes lived forever – but they don’t.

    Stephen Hawking found in the 1970s that black holes should slowly evaporate by emitting radiation, called Hawking radiation, and there is nothing in the theories of quantum mechanics to suggest that this radiation will be encoded with the information preserved on the event horizon. This, fundamentally, is the black hole information loss problem: either the black hole must shed the information in some way or quantum mechanics must be wrong about the preservation of physical information. Which one is it? This is a major unsolved problem in physics, and it’s just one part of the wider context that the AdS/CFT correspondence inhabits.
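    For the curious, Hawking’s calculation assigns the black hole a temperature that is inversely proportional to its mass – so smaller black holes are hotter and evaporate faster. The standard expression, added here for reference, is

    $$ T_H = \frac{\hbar c^3}{8 \pi G M k_B} $$

    A solar-mass black hole would have a temperature of only about 60 billionths of a kelvin – far colder than the cosmic microwave background – which is why evaporation is absurdly slow for astrophysical black holes, but the principle of the matter is what creates the information loss problem.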

    For more insights into this discovery, do read The Wire Science‘s interview of Maldacena.

    I’m grateful to Nirmalya Kajuri for his feedback on this article.


  • A giant leap closer to the continuous atom laser

    One of the most exotic phases of matter is called the Bose-Einstein condensate. As its name indicates, this type of matter is one whose constituents are bosons – particles whose collective behaviour is dictated by the rules of Bose-Einstein statistics. The elementary bosons are also called force particles. The other kind of particles are matter particles, or fermions, and their behaviour is described by the rules of Fermi-Dirac statistics. Force particles and matter particles together make up the universe as we know it.
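    For readers who like to see the statistics spelt out – these are the textbook occupancy formulas, my addition rather than anything from the study discussed below – the average number of particles in a state of energy ε at temperature T is

    $$ \langle n \rangle_{BE} = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1} \quad \text{(bosons)}, \qquad \langle n \rangle_{FD} = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1} \quad \text{(fermions)} $$

    where μ is the chemical potential and k_B is Boltzmann’s constant. The ‘−1’ in the first denominator lets the occupancy of the lowest-energy state grow without bound as the temperature falls – the bosons pile in – while the ‘+1’ in the second caps the occupancy of any state at one fermion.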

    To be a boson, a particle – which can be anything from an elementary particle (like the photon) to a composite object like an entire atom – needs to have an integer value of the spin quantum number. (All of a particle’s properties can be described by the values of four quantum numbers.) An important difference between fermions and bosons is that Pauli’s exclusion principle doesn’t apply to bosons. The principle states that in a given quantum system, no two particles can have the same set of four quantum numbers at the same time. When two particles have the same four quantum numbers, they are said to occupy the same state. (‘States’ are not like places in a volume; instead, think of them more like a set of properties.) Pauli’s exclusion principle forbids fermions from doing this – but not bosons. So in a given quantum system, all the bosons can occupy the same quantum state if they are forced to – as the sketch below illustrates.
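    Here’s a tiny illustrative sketch – my own, not drawn from any source in this post – of what the exclusion principle does and doesn’t permit when identical particles settle into equally spaced energy levels at zero temperature (ignoring spin degeneracy for simplicity):

    ```python
    # Toy model: fill energy levels one particle at a time. Bosons can all sit
    # in the ground state; fermions must each take the lowest *unoccupied* level.
    def fill_states(n_particles, is_boson):
        occupancy = {}  # energy level -> number of particles in it
        for _ in range(n_particles):
            level = 0
            # Fermions: climb until we find an empty level (one per state here)
            while not is_boson and occupancy.get(level, 0) >= 1:
                level += 1
            occupancy[level] = occupancy.get(level, 0) + 1
        return occupancy

    print(fill_states(5, is_boson=True))   # {0: 5} - all five in the ground state
    print(fill_states(5, is_boson=False))  # {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}
    ```

    The boson case – every particle in the lowest level – is, in caricature, what a Bose-Einstein condensate is.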

    For example, this typically happens when the system is cooled to nearly absolute zero – the lowest temperature possible. (The bosons also need to be confined in a ‘trap’ so that they don’t drift away or combine with each other to form other particles.) Removing more and more energy from the system is equivalent to removing more and more energy from its constituent particles. So as fermions and bosons possess less and less energy, they occupy lower and lower quantum states. But because of the exclusion principle, once all the lowest fermionic states are occupied, fermions have to start occupying the next-lowest states, and so on. Bosons, on the other hand, are all able to occupy the same lowest quantum state. When this happens, they are said to have formed a Bose-Einstein condensate.
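    How cold is ‘nearly absolute zero’? For a uniform gas of non-interacting bosons, the textbook estimate of the condensation temperature – my addition, for a sense of scale – is

    $$ T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3} $$

    where m is the mass of each boson, n is their number density and ζ(3/2) ≈ 2.612. For the dilute atomic gases physicists actually work with, this comes out to somewhere around a microkelvin or below – which is why the experiments described later in this post operate at such temperatures.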

    In this phase, all the bosons in the system move around like a fluid – like the molecules of flowing water. A famous example of this is superconductivity (at least of the conventional variety). When certain materials are cooled to near absolute zero, their electrons – which are fermions – overcome their mutual repulsion and pair up with each other to form composite entities called Cooper pairs. Unlike individual electrons, Cooper pairs are bosons. They go on to form a Bose-Einstein condensate in which the Cooper pairs ‘flow’ through the material. In the material’s non-superconducting state, the electrons would have been scattered by obstacles in their path – like atomic nuclei or vibrations in the lattice. This scattering manifests as electrical resistance. But because the Cooper pairs have all occupied the same quantum state, they are much harder to scatter. They flow through the material as if they don’t experience any resistance. This flow is what we know as superconductivity.

    Bose-Einstein condensates are a big deal in physics because they are a macroscopic effect of microscopic causes. We can’t usually see or otherwise directly sense the effects of most quantum-physical phenomena because they happen on very small scales, and we need the help of sophisticated instruments like electron microscopes and particle accelerators. But when we cool a superconducting material to below its threshold temperature, we can readily sense the presence of a superconductor by passing an electric current through it (or using the Meissner effect). Macroscopic effects are also easier to manipulate and observe, so physicists have used Bose-Einstein condensates as a tool to probe many other quantum phenomena.

    While Albert Einstein predicted the existence of Bose-Einstein condensates – based on work by Satyendra Nath Bose – in 1924, physicists had the requisite technologies and understanding of quantum mechanics to be able to create them in the lab only in the 1990s. These condensates were, and mostly still are, quite fragile and can be created only in carefully controlled conditions. But physicists have also been trying to figure out how to maintain a Bose-Einstein condensate for long periods of time, because durable condensates are expected to provide even more research insights as well as hold potential applications in particle physics, astrophysics, metrology, holography and quantum computing.

    An important reason for these expectations is wave-particle duality, which you might recall from high-school physics. Louis de Broglie postulated in 1924 that every quantum entity could be described both as a particle and as a wave. The Davisson-Germer experiment of 1923-1927 subsequently found that electrons – until then considered to be particles – produced a diffraction pattern, and since interference and diffraction are exhibited by waves, the experiment proved that electrons could be understood as waves as well. Similarly, a Bose-Einstein condensate can be understood both in terms of particle physics and in terms of wave physics. Just as in the Davisson-Germer experiment, when physicists set up an experiment to look for an interference pattern from a Bose-Einstein condensate, they succeeded. They also found that the interference pattern became stronger the more bosons they added to the condensate.
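    De Broglie’s relation itself is compact enough to quote here (standard textbook material, not something from the paper discussed below): a particle with momentum p behaves like a wave of wavelength

    $$ \lambda = \frac{h}{p} $$

    where h is Planck’s constant. Slower – that is, colder – particles have longer wavelengths, which is why cooling a gas of bosons makes their matter waves spread out, overlap and eventually merge into a single condensate.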

    Now, all the bosons in a condensate have a coherent phase. The phase of a wave measures how far the wave has evolved at a given point in time. When two waves are phase-coherent, both of them will have progressed by the same amount in the same span of time. Phase coherence is one of the most important wave-like properties of a Bose-Einstein condensate because it opens the door to a device called an atom laser.
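    In symbols – my shorthand rather than anything from the sources – if two matter waves are written as cos(kx − ωt + φ₁) and cos(kx − ωt + φ₂), they are phase-coherent when

    $$ \phi_1 - \phi_2 = \text{constant over time} $$

    i.e. the crests of one wave keep a fixed relationship with the crests of the other, allowing the waves to add up cleanly instead of washing each other out.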

    ‘Laser’ is an acronym for ‘light amplification by stimulated emission of radiation’. The following video demonstrates its working principle better than I can in words right now:

    The light emitted by an optical laser is coherent: it has a constant frequency, and it comes out in a narrow beam if the coherence is spatial or can be produced in extremely short pulses if the coherence is temporal. An atom laser is a laser composed of propagating atoms instead of photons. As Wolfgang Ketterle – who led one of the first groups to create a Bose-Einstein condensate in the lab and later shared a Nobel Prize for the achievement – put it, “The atom laser emits coherent matter waves whereas the optical laser emits coherent electromagnetic waves.” Because the bosons of a Bose-Einstein condensate are already phase-coherent, condensates make excellent sources for an atom laser.

    The trick, however, lies in achieving a Bose-Einstein condensate of the desired (bosonic) atoms and then extracting a few atoms into the laser while replenishing the condensate with more atoms – all without letting the condensate break down or the phase coherence be lost. Physicists created the first such atom laser in 1996, but it did not emit continuously, nor was it very bright. Researchers have since built better atom lasers based on Bose-Einstein condensates, although they remain far from usable in their putative applications. An important reason for this is that physicists are yet to build a condensate-based atom laser that can operate continuously – that is, one in which, as atoms lase out, the condensate is constantly replenished and the laser keeps running for a long time.

    On June 8, researchers from the University of Amsterdam reported that they had been able to create a long-lived, sort of self-sustaining Bose-Einstein condensate. This brings us a giant step closer to a continuously operating atom laser. Their setup consisted of multiple stages, all inside a vacuum chamber.

    In the first stage, strontium atoms (which are bosons) emerged from an ‘oven’ maintained at 850 K and were progressively laser-cooled as they made their way into a reservoir. (Here is a primer on how laser-cooling works.) The reservoir had a dimple in the middle. In the second stage, the atoms were guided by lasers and gravity into this dimple, where they reached a temperature of approximately 1 µK – one-millionth of a kelvin. As the dimple became more and more crowded, it was important that the atoms there didn’t heat up, which could have happened if stray light had ‘leaked’ into the vacuum chamber.
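    To get a feel for why roughly 1 µK is the right ballpark, here’s a quick back-of-the-envelope check in Python. The choice of strontium-84 and the gas density are my own illustrative assumptions, not numbers taken from the Amsterdam paper; the point is simply that at these temperatures the atoms’ thermal de Broglie wavelength becomes comparable to the spacing between them, which is the condition for condensation.

    ```python
    # Rough estimate: thermal de Broglie wavelength of strontium at ~1 microkelvin
    import math

    h   = 6.62607015e-34   # Planck's constant, J s
    k_B = 1.380649e-23     # Boltzmann's constant, J/K
    u   = 1.66053907e-27   # atomic mass unit, kg

    m_Sr = 84 * u          # assuming strontium-84; any Sr isotope gives a similar answer
    T    = 1e-6            # temperature in the dimple, ~1 microkelvin

    # lambda = h / sqrt(2 * pi * m * k_B * T)
    lam = h / math.sqrt(2 * math.pi * m_Sr * k_B * T)
    print(f"thermal de Broglie wavelength ~ {lam * 1e9:.0f} nm")            # ~190 nm

    # For an illustrative density of 1e14 atoms per cubic centimetre,
    # the typical spacing between atoms, n^(-1/3), is of the same order:
    n = 1e14 * 1e6         # atoms per cubic metre
    print(f"typical interatomic spacing   ~ {n ** (-1 / 3) * 1e9:.0f} nm")  # ~215 nm
    ```

    When the two numbers are comparable, the atoms’ matter waves overlap and a condensate can form – which is the regime the dimple is designed to reach.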

    To prevent this, in the third stage, the physicists used a carefully tuned laser, shined only through the dimple, that had the effect of rendering the strontium atoms mostly ‘transparent’ to light. According to the research team’s paper, without the ‘transparency beam’ the atoms in the dimple had a lifetime of less than 40 ms, whereas with the beam it was more than 1.5 s – a roughly 37x difference. At some point, when a sufficient number of atoms had accumulated in the dimple, a Bose-Einstein condensate formed. In the fourth stage, an effect called Bose stimulation kicked in. Simply put, as more bosons (strontium atoms, in this case) transitioned into the condensate, the rate at which additional bosons joined it also increased. Bose stimulation thus played the role that the gain medium plays in an optical laser. The condensate grew until the rate at which it gained atoms matched the rate at which atoms were lost from the dimple, at which point it reached an equilibrium.
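    This equilibrium can be pictured with a toy rate equation – entirely my own sketch, not the model the Amsterdam group used: the condensate gains atoms through Bose-stimulated transitions, a gain that grows with the condensate’s size, and loses atoms at some fixed per-atom rate.

    ```python
    # Toy model: a condensate that gains atoms via Bose stimulation (faster the
    # bigger it gets) and loses atoms at a fixed per-atom rate settles into a
    # steady state where gain and loss balance.
    def simulate(base_gain=50.0, stim_factor=0.01, loss_rate=1.0,
                 n0=0.0, dt=0.001, steps=20_000):
        """Euler-integrate dN/dt = base_gain*(1 + stim_factor*N) - loss_rate*N."""
        n = n0
        for _ in range(steps):
            gain = base_gain * (1 + stim_factor * n)  # Bose stimulation: a larger
            loss = loss_rate * n                      # condensate pulls atoms in faster
            n += (gain - loss) * dt
        return n

    # Analytic steady state: base_gain*(1 + stim_factor*N) = loss_rate*N
    # => N = base_gain / (loss_rate - base_gain*stim_factor) = 100 for these numbers
    print(round(simulate()))  # prints 100
    ```

    The numbers are arbitrary; the point is that the steady-state size is set by the balance between how strongly Bose stimulation feeds the condensate and how quickly atoms leak away – the balance the researchers engineered with their dimple and transparency beam.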

    And voila! With a steady-state Bose-Einstein condensate, the continuous atom laser was almost ready. The physicists have acknowledged that their setup can be improved in many ways, including by making the laser-cooling effects more uniform, increasing the lifetime of strontium atoms inside the dimple, and reducing losses due to heating and other effects. At the same time, they wrote that “at all times after steady state is reached”, they found a Bose-Einstein condensate existing in their setup.