In May, Total Film reported that the production team of Tenet, led by director Christopher Nolan, found that using a second-hand Boeing 747 was better than recreating a scene involving an exploding plane with miniatures and CGI. I’m not clear how exactly it was better; Total Film only wrote:
“I planned to do it using miniatures and set-piece builds and a combination of visual effects and all the rest,” Nolan tells TF. However, while scouting for locations in Victorville, California, the team discovered a massive array of old planes. “We started to run the numbers… It became apparent that it would actually be more efficient to buy a real plane of the real size, and perform this sequence for real in camera, rather than build miniatures or go the CG route.”
I’m assuming that by ‘numbers’ Nolan means the finances. That is, buying and crashing a life-size airplane was more financially efficient than recreating the scene with other means. This is quite the disappointing prospect, as must be obvious, because this calculation limits itself to a narrow set of concerns, or just one as in this case – more bang for the buck – and consigns everything else to being negative externalities. Foremost on my mind are the carbon emissions from transporting the vehicle, the explosion and the debris. If these costs were factored in, for example in terms of however much the carbon credits would be worth in the region where Nolan et al filmed the explosion, would the numbers have still been just as efficient? (I’m assuming, reasonably I think, that Nolan et al aren’t using carbon-capture technologies.)
However, CGI itself may not be so calorifically virtuous. I’m too lazy in this moment to cast about on the internet for estimates of how much of the American film industry’s emissions CGI accounts for. But I did find this tidbit from 2018 on Columbia University’s Earth Institute blog:
For example, movies with a budget of $50 million dollars—including such flicks as Zoolander 2, Robin Hood: Prince of Thieves, and Ted—typically produce the equivalent of around 4,000 metric tons of CO2. That’s roughly the weight of a giant sequoia tree.
A ‘green production guide’ linked there leads to a page offering an emissions calculator that doesn’t seem to account for CGI specifically; only broadly “electricity, natural gas & fuel oil, vehicle & equipment fuel use, commercial flights, charter flights, hotels & housing”. In any case, I had a close call with bitcoin-mining many years ago that alerted me to how energy-intensive seemingly straightforward computational processes could get, followed by a reminder when I worked at The Hindu – where the two computers used to render videos were located in a small room fitted with its own AC, fixed at 18º C, and even when they were rendering videos without any special effects, the CPUs’ fans would scream.
Today, digital artists create most CGI and special effects using graphics processing units (GPUs) – a notable exception was the black hole in Nolan’s 2014 film Interstellar, created using CPUs – and Nvidia and AMD are two of the more ‘leading’ brands from what I know (I don’t know much). One set of tests, whose results a site called ‘Tom’s Hardware’ reported in May this year, found that an Nvidia GeForce RTX 2080 Ti FE GPU was among the bottom 10% of performers in terms of wattage for a given task – in this case drawing 268.7 W to render fur – among the 42 options the author tested. An AMD Radeon RX 5700 XT GPU consumed nearly 80% as much for the same task, falling in the seventh decile. A bunch of users on this forum say a film like Transformers will need Nvidia Quadro and AMD Firepro GPUs; the former consumed 143 W in one fur-rendering test. (Comparability may be affected by differences in the hardware setup.) Then there’s the cooling cost.
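To get a sense of how such wattages add up over a production, here’s a minimal back-of-the-envelope sketch in Python. Every input below – the number of GPUs, the total render hours and the grid’s carbon intensity – is an assumption I’ve made up purely for illustration, not a figure from any actual film; only the 268.7 W comes from the test above.

```python
# Back-of-the-envelope estimate of render-farm energy use and emissions.
# All inputs except gpu_power_w are illustrative assumptions, not real production figures.

gpu_power_w = 268.7    # W drawn by one GPU in the fur-rendering test mentioned above
num_gpus = 500         # assumed size of a render farm
render_hours = 2000    # assumed total hours of rendering for a VFX-heavy film
grid_intensity = 0.4   # assumed kg of CO2 emitted per kWh of electricity

energy_kwh = (gpu_power_w / 1000) * num_gpus * render_hours
emissions_tonnes = energy_kwh * grid_intensity / 1000

print(f"Energy consumed: {energy_kwh:,.0f} kWh")         # ~270,000 kWh
print(f"Emissions: {emissions_tonnes:,.0f} tonnes CO2")  # ~107 tonnes
```

With these made-up numbers, rendering alone works out to roughly 107 tonnes of CO2 – a fraction of the 4,000 tonnes attributed to a $50-million production above, but that’s before counting cooling, storage, workstations and everything else.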
Again, I don’t know if Nolan considered any of these issues – but I doubt that he did – when he ‘ran the numbers’ to determine what would be better: blowing up a real plane or a make-believe one. Intuition does suggest the former would be a lot more exergonic (although here, again, we’re forced to reckon with the environmental and social cost of obtaining specific metals, typically from middle-income nations, required to manufacture advanced electronics).
Cinema is a very important part of 21st century popular culture and popular culture is a very important part of how we as social, political people (as opposed to biological humans) locate ourselves in the world we’ve constructed – including being good citizens, conscientious protestors, sensitive neighbours. So constraining cinema’s remit or even imposing limits on filmmakers for the climate’s sake is a ridiculous course of action. This said, when there are options (and so many films have taught us there are always options), we have a responsibility to pick the more beneficial one while assuming the fewest externalities.
The last bit is important: the planet is a single unit and all of its objects and occupants are wildly interconnected. So ‘negative externalities’ as such are more often than not trade practices crafted to simplify administrative and/or bureaucratic demands. In the broader ‘One Health’ sense, they vanish.
Scientists have reported that they have found abnormal amounts of a toxic compound called phosphine in Venus’s atmosphere, at 55-80 km altitude. This story is currently all over my Twitter feed because one way to explain this unexpected abundance is that microbes could be producing this gas – as we know them to do on Earth – in oxygen-starved conditions. Nonetheless, we shouldn’t lose sight of the fact that the real proposition here is that there is too much phosphine, not that there is a potential sign of life.
While some scientists have been issuing words of caution along similar lines, others have swung to the other extreme, writing that making sense of this discovery doesn’t require “alien microbes” at all because chemistry offers possibilities that are much more likely to be the case – and verging on the argument that this can’t possibly be aliens. Between them is the option to keep an open mind, so difficult these days – between an Avi Loeb-esque conception of the universe in which the role of creativity is overemphasised to dream up plausible (but improbable) theories and a hyper-conservative reality that refuses to admit new possibilities because we haven’t yet plumbed the depths of what we already know to be true.
Nonetheless, this is where it is best to stand today – considering we simply don’t know enough about the Venusian atmosphere to refute one argument or support the other. At the same time, I would like to make a finer point. In November 2014, I had published a post explaining the contents of a scientific paper published around then, describing how an exotic form of carbon dioxide could host life. As I wrote:
At about 305 kelvin and 73-times Earth’s atmospheric pressure, carbon dioxide becomes supercritical, a form of matter that exhibits the physical properties of both liquids and gases. … As the study’s authors found, some enzymes were more stable in supercritical carbon dioxide because it contains no water. The anhydrous property also enables a “molecular memory” in the enzymes, when they ‘remember’ their acidity from previous reactions to guide the future construction of organic molecules more easily. The easiest way – no matter that it’s still difficult – to check if life could exist in supercritical carbon dioxide naturally is to … investigate shallow depths below the surface of Venus. Carbon dioxide is abundant on Venus and the planet has the hottest surface in the Solar System. Its subsurface pressures could then harbour supercritical carbon dioxide.
When we do muster as much caution as we can when reporting on recently published papers presenting evidence of new mysteries, we evoke the possibility of ‘unknown unknowns’ – things that we don’t know we don’t know, as perfectly illustrated in the case of carbon monoxide on Titan. At the same time, are we aware that ‘unknown unknowns’ also make way for the possibility of alien life-forms with biological foundations we may never conceive of until we encounter a real, live example? I am not saying that there is life on Venus or elsewhere. I am saying that the knowledge-based defences we employ to protect ourselves from hype and reckless speculation in this case could just as easily work against our favour, and close us off to new possibilities. And since such caution is often considered a virtue, it is quite important that we don’t over-indulge it.
There is a wonderful paragraph in a paper from 2004 that I’m reminded of from time to time, when considering the possibility of aliens for a science article or a game of Dungeons & Dragons:
The universe of chemical possibilities is huge. For example, the number of different proteins 100 amino acids long, built from combinations of the natural 20 amino acids, is larger than the number of atoms in the cosmos. Life on Earth certainly did not have time to sample all possible sequences to find the best. What exists in modern Terran life must therefore reflect some contingencies, chance events in history that led to one choice over another, whether or not the choice was optimal.
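The paper’s arithmetic is easy to check on the back of an envelope: with 20 possible amino acids at each of 100 positions,

$$20^{100} = 10^{100\,\log_{10} 20} \approx 10^{130},$$

which dwarfs the commonly cited estimate of roughly $10^{80}$ atoms in the observable universe.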
The amount of communicative effort to describe the fact of a ball being thrown is vanishingly low. It’s as simple as saying, “X threw the ball.” It takes a bit more effort to describe how an internal combustion engine works – especially if you’re writing for readers who have no idea how thermodynamics works. However, if you spend enough time, you can still completely describe it without compromising on any details.
Things start to get more difficult when you try to explain, for example, how webpages are loaded in your browser: because the technology is more complicated and you often need to talk about electric signals and logical computations – entities that you can’t directly see. You really start to max out when you try to describe everything that goes into launching a probe from Earth and landing it on a comet because, among other reasons, it brings together advanced ideas in a large number of fields.
At this point, you feel ambitious and you turn your attention to quantum technologies – only to realise you’ve crossed a threshold into a completely different realm of communication, a realm in which you need to pick between telling the whole story and risk being (wildly) misunderstood OR swallowing some details and making sure you’re entirely understood.
Last year, a friend and I spent dozens of hours writing a 1,800-word article explaining the Aharonov-Bohm quantum interference effect. We struggled so much because understanding this effect – in which electrons are affected by electromagnetic fields that aren’t there – required us to understand the wave-function, a purely mathematical object that describes real-world phenomena, like the behaviour of some subatomic particles, and mathematical-physical processes like non-Abelian transformations. Thankfully my friend was a physicist, a string theorist for good measure; but while this meant that I could understand what was going on, we spent a considerable amount of time negotiating the right combination of metaphors to communicate what we wanted to communicate.
However, I’m even more grateful in hindsight that my friend was a physicist who understood the need to not exhaustively include details. This need manifests in two important ways. The first is the simpler, grammatical way, in which we construct increasingly involved meanings using a combination of subjects, objects, referrers, referents, verbs, adverbs, prepositions, gerunds, etc. The second way is more specific to science communication: in which the communicator actively selects a level of preexisting knowledge on the reader’s part – say, high-school education at an English-medium institution – and simplifies the slightly more complicated stuff while using approximations, metaphors and allusions to reach for the mind-boggling.
Think of it like building an F1 racecar. It’s kinda difficult if you already have the engine, some components to transfer kinetic energy through the car and a can of petrol. It’s just ridiculous if you need to start with mining iron ore, extracting oil and preparing a business case to conduct televisable racing sports. In the second case, you’re better off describing what you’re trying to do to the caveman next to you using science fiction, maybe poetry. The point is that to really help an undergraduate student of mechanical engineering make sense of, say, the Casimir effect, I’d rather say:
According to quantum mechanics, a vacuum isn’t completely empty; rather, it’s filled with quantum fluctuations. For example, if you take two uncharged plates and bring them together in a vacuum, only quantum fluctuations with wavelengths shorter than the distance between the plates can squeeze between them. Outside the plates, however, fluctuations of all wavelengths can fit. The energy outside will be greater than inside, resulting in a net force that pushes the plates together.
I wouldn’t say the following even though it’s much less wrong:
The Casimir effect can be understood by the idea that the presence of conducting metals and dielectrics alters the vacuum expectation value of the energy of the second-quantised electromagnetic field. Since the value of this energy depends on the shapes and positions of the conductors and dielectrics, the Casimir effect manifests itself as a force between such objects.
Put differently, the purpose of communication is to be understood – not learnt. And as I’m learning these days, while helping virologists compose articles on the novel coronavirus and convincing physicists that comparing the Higgs field to molasses isn’t wrong, this difference isn’t common knowledge at all. More importantly, I’m starting to think that my physicist-friend who really got this difference did so because he reads a lot. He’s a veritable devourer of texts. So he knows it’s okay – and crucially why it’s okay – to skip some details.
I’m half-enraged when really smart scientists just don’t get this, and accuse editors (like me) of trying instead to misrepresent their work. (A group that’s slightly less frustrating consists of authors who list their arguments in one paragraph after another, without any thought for the article’s structure and – more broadly – recognising the importance of telling a story. Even if you’re reviewing a book or critiquing a play, it’s important to tell a story about the thing you’re writing about, and not simply enumerate your points.)
To them – which is all of them because those who think they know the difference but really don’t aren’t going to acknowledge the need to bridge the difference, and those who really know the difference are going to continue reading anyway – I say: I acknowledge that imploring people to communicate science more without reading more is fallacious, so read more, especially novels and creative non-fiction, and stories that don’t just tell stories but show you how we make and remember meaning, how we memorialise human agency, how memory works (or doesn’t), and where knowledge ends and wisdom begins.
There’s a similar problem I’ve faced when working with people for whom English isn’t the first language. Recently, a person used to reading and composing articles in the passive voice was livid after I’d changed numerous sentences in the article they’d submitted to the active voice. They really didn’t know why writing, and reading, in the active voice is better because they hadn’t ever had to use English for anything other than writing and reading scientific papers, where the passive voice is par for the course.
I had a bigger falling out with another author because I hadn’t been able to perfectly understand the point they were trying to make, in sentences of broken English, and used what I could infer to patch them up – except I was told I’d got most of them wrong. And they couldn’t implement my suggestions either because they couldn’t understand my broken Hindi.
These are people that I can’t ask to read more. The Wire and The Wire Science publish in English but, despite my (admittedly inflated) view of how good these publications are, I’ve no reason to expect anyone to learn a new language because they wish to communicate their ideas to a large audience. That’s a bigger beast of a problem, with tentacles snaking through colonialism, linguistic chauvinism, regional identities, even ideologies (like mine – to make no attempts to act on instructions, requests, etc. issued in Hindi even if I understand the statement). But at the same time there’s often too much lost in translation – so much so that (speaking from my experience in the last five years) 50% of all submissions written by authors for whom English isn’t the first language don’t go on to get published, even if it was possible for either party to glimpse during the editing process that they had a fascinating idea on their hands.
And to me, this is quite disappointing because one of my goals is to publish a more diverse group of writers, especially from parts of the country underrepresented thus far in the national media landscape. Then again, I acknowledge that this status quo axiomatically charges us to ensure there are independent media outlets with science sections and publishing in as many languages as we need. A monumental task as things currently stand, yes, but nonetheless, we remain charged.
What does the term ‘super-spreader’ mean? According to an article in the MIT Tech Review on June 15, “The word is a generic term for an unusually contagious individual who’s been infected with disease. In the context of the coronavirus, scientists haven’t narrowed down how many infections someone needs to cause to qualify as a superspreader, but generally speaking it far exceeds the two to three individuals researchers initially estimated the average infected patient could infect.”
The label of ‘super-spreader’ seems to foist the responsibility of not infecting others on an individual, whereas a ‘super-spreader’ can arise only by dint of an individual and her environment together. Consider the recent example of two hair-stylists in Springfield, Missouri, who both had COVID-19 (but didn’t know it) even as they attended to 139 clients over more than a week. Later, researchers found that none of the 139 had contracted COVID-19 because they all wore masks, washed hands, etc.
Hair-styling is obviously a high-contact profession but just this fact doesn’t suffice to render a hair-stylist a ‘super-spreader’. In this happy-making example, the two hair-stylists didn’t become super-spreaders because a) they maintained personal hygiene and wore masks, and b) so did the people in their immediate environment.
While I couldn’t find a fixed definition of the term ‘super-spreader’ on the WHO website, a quick search revealed a description from 2003, when the SARS epidemic was underway. Here, the organisation acknowledges that ‘super-spreading’ in itself is “not a recognised medical condition” (the description may have been updated since, though I doubt it), and that it arises as a result of safety protocols breaking down.
“… [in] the early days of the outbreak …, when SARS was just becoming known as a severe new disease, many patients were thought to be suffering from atypical pneumonia having another cause, and were therefore not treated as cases requiring special precautions of isolation and infection control. As a result, stringent infection control measures were not in place. In the absence of protective measures, many health care workers, relatives, and hospital visitors were exposed to the SARS virus and subsequently developed SARS. Since infection control measures have been put in place, the number of new cases of SARS arising from a single SARS source case has been significantly reduced. When investigating current chains of continuing transmission, it is important to look for points in the history of case detection and patient management when procedures for infection control may have broken down.”
This view reaffirms the importance of addressing ‘super-spreads’ not as a consequence of individual action or offence but as the product of a set of circumstances that facilitate the rapid transmission of an infectious disease.
In another example, on July 21, the Indian Express reported that the city of Ahmedabad had tested 17,000 ‘super-spreaders’, of whom 122 tested positive. The article was also headlined ‘Phase 2 of surveillance: 122 super-spreaders test positive in Ahmedabad’.
According to the article’s author, those tested included “staff of hair cutting-salons as well as vendors of vegetables, fruits, grocery, milk and medicines”. The people employed in all these professions in India are typically middle-class (economically) at best, and as such enjoy far fewer social, educational and healthcare protections than the economic upper class, and live in markedly more crowded areas with uneven access to transportation and clean water.
Given these hard-to-escape circumstances, identifying the people who were tested as ‘super-spreaders’ seems not only unjust but also an attempt by the press in this case as well as city officials to force them to take responsibility for their city’s epidemic status and preparedness – which is just ridiculous because it criminalises their profession (assuming, reasonably I’d think, that wilfully endangering the health of others around you during a pandemic is a crime).
The Indian Express also reported that the city was testing people and then issuing them health cards – which presumably note that the card-holder has been tested together with the test result. Although I’m inclined to believe the wrong use of the term ‘super-spreader’ here originated not with the newspaper reporter but with the city administration, it’s also frustratingly ridiculous that the people were designated ‘super-spreaders’ at the time of testing, before the results were known – i.e. super-spreader until proven innocent? Or is this a case of officials and journalists unknowingly using two non-interchangeable terms interchangeably?
Or did this dangerous mix-up arise because most places and governments in India don’t have reason to believe ‘high-contact’ is different from ‘super-spreader’?
But personal and interpersonal hygiene aside, officials’ use of one term instead of the other also allows them to continue to believe there needn’t or shouldn’t be a difference either. And that’s a big problem because even as the economically middle- and lower-classes may not be able to access better living conditions and amenities, thinking there’s no difference between ‘high-contact’ and ‘super-spreader’ allows those in charge to excuse themselves from their responsibilities to effect that difference.
A webinar by The Life of Science on the construct of the ‘scientific genius’ just concluded, with Gita Chadha and Shalini Mahadev, a PhD scholar at HCU, as panellists. It was an hour long and I learnt a lot in this short time, which shouldn’t be surprising because, more broadly, we often don’t stop to question the conduct of science itself, how it’s done, who does it, their privileges and expectations, etc., and limit ourselves to the outcomes of scientific practice alone. The Life of Science is one of my favourite publications for making questions like these part of its core work (and a tiny bit also because it’s run by two good friends).
I imagine the organisers will upload a recording of the conversation at some point (edit: hopefully by Monday, says Nandita Jayaraj); they’ve also offered to collect the answers to many questions that went unanswered, only for lack of time, and publish them as an article. This was a generous offer and I’m quite looking forward to that.
I did have yet another question but I decided against asking it when, towards the end of the session, the organisers made some attempts to get me to answer a question about the media’s role in constructing the scientific genius, and I planned to work my question into what I could say. However, Nandita Jayaraj, one of The Life of Science‘s founders, ended up answering it to save time – and did so better than I could have. This being the case, I figured I’d blog my response.
The question I’d planned to ask, addressed to Gita Chadha, was this: “I’m confused why many Indians think so much of the Nobel Prizes. Do you think the Nobel Prizes in particular have affected the perception of ‘genius’?”
This query should be familiar to any journalist who, come October, is required to cover the Nobel Prize announcements for that year. When I started off at The Hindu in 2012, I’d cover these announcements with glee; I also remember The Hindu would carry the notes of the laureates’ accomplishments, published by the Nobel Foundation, in full on its famous science and tech. page the following day. At first I thought – and was told by some other journalists as well – that these prizes have the audience’s attention, so the announcements are in effect a chance to discuss science with the privilege of an interested audience, which is admittedly quite unusual in India.
However, today, it’s clear to me that the Nobel Prizes are deeply flawed in more ways than one, and if journalists are using them as an opportunity to discuss science – it’s really not worth it. There are many other ways to cover science than on the back of a set of prizes that simply augments – instead of in any way compensating for – a non-ideal scientific enterprise. So when we celebrate the Nobel Prizes, we simply valorise the enterprise and its many structural deformities, not the least of which – in the Indian context – is the fact that it’s dominated by upper-caste men, mostly Brahmins, and riddled with hurdles for scholars from marginalised groups.
Brahmins are so good at science not because they’re particularly gifted but because they’re the only ones who seem to have the opportunity – a fact that Shalini elucidated very clearly when she recounted her experiences as a Dalit woman in science, especially when she said: “My genius is not going to be tested. The sciences have written me off.” The Brahmins’ domination of the scientific workforce has a cascading set of effects that we then render normal simply because we can’t conceive of a different way science can be, including sparing the Brahmin genius of scrutiny, as is the privilege of all geniuses.
(At a seminar last year, some speakers on stage had just discussed the historical roots of India being so bad at experimental physics and had taken a break. Then, I overheard an audience member tell his friend that while it’s well and good to debate what we can and can’t pin on Jawaharlal Nehru, it’s amusing that Brahmin experts will have discussions about Brahmin physicists without either party considering if it isn’t their caste sensibility that prevents them from getting their hands dirty!)
The other way the Nobel Prizes are bad for journalists indicts the norms of journalism itself. As I recently described vis-à-vis ‘journalistic entropy’, there is a sort of default expectation on the editorial side that reporters will cover the Nobel Prize announcements for their implicit newsworthiness instead of thinking about whether they should matter. I find such arguments about chronicling events without participating in them to be bullshit, especially when as a Brahmin I’m already part of Indian journalism’s caste problem.
Instead, I prefer to ask these questions, and answer them honestly in terms of the editorial policies I have the privilege to influence, so that I and others don’t end up advancing the injustices that the Nobel Prizes stand for. This is quite akin to my, and others’, older argument that journalists shouldn’t blindly offer their enterprise up as a platform for majoritarian politicians to hijack and use as their bullshit megaphones. But if journalists don’t recast their role in society accordingly, they – we – will simply continue to celebrate the Nobel laureates, and by proxy the social and political conditions that allowed the laureates in particular to succeed instead of others, and which ultimately feed into the Nobel Prizes’ arbitrarily defined ‘prestige’.
Note that the Nobel Prizes here are the perfect examples, but only examples nonetheless, to illustrate a wider point about the relationship between scientific eminence and journalistic notability. The Wire for example has a notability threshold: we’re a national news site, which means we don’t cover local events and we need to ensure what we do cover is of national relevance. As a corollary, such gatekeeping quietly implies that if we feature the work of a scientist, then that scientist must be a particularly successful one, a nationally relevant one.
And when we keep featuring and quoting upper-caste male scientists, we further the impression that only upper-caste male scientists can be good at science. Nothing says more about the extent to which the mainstream media has allowed this phenomenon to dominate our lives than the fact of The Life of Science‘s existence.
It would be foolish to think that journalistic notability and scientific eminence aren’t linked; as Gita Chadha clarified at the outset, one part of the ‘genius’ construct in Western modernity is the inevitability of eminence. So journalists need to work harder to identify and feature other scientists by redefining their notability thresholds – even as scientists and science administrators need to rejig their sense of the origins and influence of eminence in science’s practice. That Shalini thinks her genius “won’t be tested” is a brutal clarification of the shape and form of the problem.
On June 24, a press release from CERN said that scientists and engineers working on upgrading the Large Hadron Collider (LHC) had “built and operated … the most powerful electrical transmission line … to date”. The transmission line consisted of four cables – two capable of transporting 20 kA of current and two, 7 kA.
The ‘A’ here stands for ‘ampere’, the SI unit of electric current. Twenty kilo-amperes is an extraordinary amount of current, nearly equal to the amount in a single lightning strike.
In the particulate sense: one ampere is the flow of one coulomb per second. One coulomb is equal to around 6.24 quintillion elementary charges, where each elementary charge is the charge of a single proton or electron (with opposite signs). So a cable capable of carrying a current of 20 kA can essentially transport 124.8 sextillion electrons per second.
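For anyone who wants to check that arithmetic:

$$20{,}000\ \tfrac{\text{C}}{\text{s}} \times 6.24 \times 10^{18}\ \tfrac{\text{electrons}}{\text{C}} \approx 1.248 \times 10^{23}\ \tfrac{\text{electrons}}{\text{s}},$$

i.e. 124.8 sextillion electrons every second.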
The line is composed of cables made of magnesium diboride (MgB2), which is a superconductor and therefore presents no resistance to the flow of the current and can transmit much higher intensities than traditional non-superconducting cables. On this occasion, the line transmitted an intensity 25 times greater than could have been achieved with copper cables of a similar diameter. Magnesium diboride has the added benefit that it can be used at 25 kelvins (-248 °C), a higher temperature than is needed for conventional superconductors. This superconductor is more stable and requires less cryogenic power. The superconducting cables that make up the innovative line are inserted into a flexible cryostat, in which helium gas circulates.
The phrase about transmitting “much higher intensities than traditional non-superconducting cables” could have been more explicit and noted that superconductors, including magnesium diboride, can’t carry an arbitrarily higher amount of current than non-superconducting conductors. There is actually a limit, for reasons not unlike those that cap the current-carrying capacity of a normal conductor.
This explanation wouldn’t change the impressiveness of this feat and could even interfere with readers’ impression of the most important details, so I can see why the person who drafted the statement left it out. Instead, I’ll take this matter up here.
An electric current is generated between two points when electrons move from one point to the other. The direction of current is opposite to the direction of the electrons’ movement. A metal that conducts electricity does so because its constituent atoms have one or more valence electrons that can flow throughout the metal. So if a voltage arises between two ends of the metal, the electrons can respond by flowing around, birthing an electric current.
This flow isn’t perfect, however. Sometimes, a valence electron can bump into atomic nuclei, impurities – atoms of other elements in the metallic lattice – or be thrown off course by vibrations in the lattice of atoms, produced by heat. Such disruptions across the metal collectively give rise to the metal’s resistance. And the more resistance there is, the less current the metal can carry.
These disruptions often heat the metal as well. This happens because electrons don’t just flow between the two points across which a voltage is applied. They’re accelerated. So as they’re speeding along and suddenly bump into an impurity, they’re scattered into random directions. Their kinetic energy then no longer contributes to the electric energy of the metal and instead manifests as thermal energy – or heat.
If the electrons bump into nuclei, they could impart some of their kinetic energy to the nuclei, causing the latter to vibrate more, which in turn means they heat up as well.
Copper and silver have high conductivity because they have more valence electrons available to conduct electricity and these electrons are scattered to a lesser extent than in other metals. Therefore, these two also don’t heat up as quickly as other metals might, allowing them to transport a higher current for longer. Copper in particular has a long mean free path: the average distance an electron travels before being scattered.
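The textbook way to make this relationship precise – not something the press release gets into – is the Drude model, in which a metal’s conductivity grows with the density of free electrons and the average time between scattering events:

$$\sigma = \frac{n e^{2} \tau}{m}, \qquad \lambda = v\tau,$$

where $n$ is the density of conduction electrons, $e$ and $m$ their charge and mass, $\tau$ the mean time between collisions, $v$ their typical speed and $\lambda$ the mean free path. More carriers and fewer collisions, i.e. a longer $\tau$ and so a longer $\lambda$, mean higher conductivity and less heating.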
In superconductors, the picture is quite different because quantum physics assumes a more prominent role. There are different types of superconductors according to the theories used to understand how they conduct electricity with zero resistance and how they behave in different external conditions. The electrical behaviour of magnesium diboride, the material used to transport the 20 kA current, is described by Bardeen-Cooper-Schrieffer (BCS) theory.
According to this theory, when certain materials are cooled below a characteristic temperature, the residual vibrations of their atomic lattice encourage their valence electrons to overcome their mutual repulsion and become correlated, especially in terms of their movement. That is, the electrons pair up.
While individual electrons belong to a class of particles called fermions, these electron pairs – a.k.a. Cooper pairs – belong to another class called bosons. One difference between these two classes is that bosons don’t obey Pauli’s exclusion principle: that no two fermions in the same quantum system (like an atom) can have the same set of quantum numbers at the same time.
As a result, all the electron pairs in the material are now free to occupy the same quantum state – which they do once the material is cooled below its critical temperature. At that point, the pairs collectively make up an exotic state of matter called a Bose-Einstein condensate: the electron pairs now flow through the material as if they were one cohesive liquid.
In this state, even if one pair gets scattered by an impurity, the current doesn’t experience resistance because the condensate’s overall flow isn’t affected. In fact, knocking even a single pair out of the condensate costs a minimum amount of energy – the superconducting energy gap – which the small, everyday disturbances in the lattice can’t supply. This feature affords the condensate a measure of robustness.
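In BCS theory this minimum energy is set by the superconducting energy gap $\Delta$, and a standard weak-coupling result ties it to the critical temperature:

$$2\Delta(0) \approx 3.5\,k_B T_c,$$

so materials with higher critical temperatures also have more robustly bound pairs.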
But while current can keep flowing through a BCS superconductor with zero resistance, the superconducting state itself doesn’t have infinite persistence. It can break if it stops being cooled below a specific temperature, called the critical temperature; if the material is too impure, contributing to a sufficient number of collisions to ‘kick’ all the electron pairs out of their condensate reverie; or if the current density crosses a particular threshold.
At the LHC, the magnesium diboride cables will carry current to the electromagnets. When a large current flows through a magnet’s coils, it produces a magnetic field. The LHC uses a circular arrangement of such magnetic fields to bend the beam of protons it accelerates into a circular path. The more powerful the magnetic field, the higher the energy of the protons it can keep on that path. The current operational field strength is 8.36 tesla, about 128,000-times more powerful than Earth’s magnetic field. The cables will be insulated but they will still be exposed to a large magnetic field.
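To get a sense of why the field strength matters so much: a dipole field $B$ can hold a proton of momentum $p$ on a circular path of radius $R$ according to $p = qBR$, which in accelerator-friendly units reads

$$p\ [\text{GeV}/c] \approx 0.3 \times B\ [\text{T}] \times R\ [\text{m}].$$

With the LHC dipoles’ effective bending radius (roughly 2,800 m) and a field of about 8.3 T, this works out to around 7,000 GeV, i.e. the machine’s design beam energy of 7 TeV.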
Type I superconductors completely expel an external magnetic field when they transition to their superconducting state. That is, the magnetic field can’t penetrate the material’s surface and enter the bulk. Type II superconductors are slightly more complicated. Below a critical temperature and below a lower critical field strength, they behave like type I superconductors. At the same temperature but between the lower critical field and a stronger upper critical field, they remain superconducting while allowing the field to penetrate their bulk to a certain extent. This is called the mixed state.
A hand-drawn phase diagram showing the conditions in which a mixed-state type II superconductor exists. Credit: Frederic Bouquet/Wikimedia Commons, CC BY-SA 3.0
Say a uniform magnetic field is applied over a mixed-state superconductor. The field will plunge into the material’s bulk in the form of vortices. All these vortices will carry the same amount of magnetic flux – a measure of the magnetic field passing through a given area – and will repel each other, settling down in a triangular pattern equidistant from each other.
An annotated image of vortices in a type II superconductor. The scale is specified at the bottom right. Source: A set of slides entitled ‘Superconductors and Vortices at Radio Frequency Magnetic Fields’ by Ernst Helmut Brandt, Max Planck Institute for Metals Research, October 2010.
When an electric current passes through this material, the vortices are slightly displaced, and also begin to experience a force proportional to how closely they’re packed together and their pattern of displacement. As a result, to quote from this technical (yet lucid) paper by Praveen Chaddah:
This force on each vortex … will cause the vortices to move. The vortex motion produces an electric field1 parallel to [the direction of the existing current], thus causing a resistance, and this is called the flux-flow resistance. The resistance is much smaller than the normal state resistance, but the material no longer [has] infinite conductivity.
1. According to Maxwell’s equations of electromagnetism, a changing magnetic field produces an electric field.
The vortices’ displacement depends on the current density: the greater the number of electrons being transported, the more flux-flow resistance there is. So the magnesium diboride cables can’t simply carry more and more current. At some point, setting aside other sources of resistance, the flux-flow resistance itself will damage the cable.
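The force Chaddah describes has a compact form. Each vortex carries exactly one quantum of magnetic flux, $\Phi_0 = h/2e \approx 2.07 \times 10^{-15}$ Wb, and a current density $\mathbf{J}$ pushes on it with a force per unit length

$$\mathbf{f} = \mathbf{J} \times \boldsymbol{\Phi}_0,$$

so the push on the vortices, and with it the flux-flow resistance, grows in step with the current being carried.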
There are ways to minimise this resistance. For example, the material can be doped with impurities that will ‘pin’ the vortices to fixed locations and prevent them from moving around. However, optimising these solutions for a given magnetic field and other conditions involves complex calculations that we don’t need to get into.
The point is that superconductors have their limits too. And knowing these limits could improve our appreciation for the feats of physics and engineering that underlie achievements like cables being able to transport 124.8 sextillion electrons per second with zero resistance. In fact, according to the CERN press release,
The [line] that is currently being tested is the forerunner of the final version that will be installed in the accelerator. It is composed of 19 cables that supply the various magnet circuits and could transmit intensities of up to 120 kA!
§
While writing this post, I was frequently tempted to quote from Lisa Randall‘s excellent book-length introduction to the LHC, Knocking on Heaven’s Door (2011). Here’s a short excerpt:
One of the most impressive objects I saw when I visited CERN was a prototype of LHC’s gigantic cylindrical dipole magnets. Even with 1,232 such magnets, each of them is an impressive 15 metres long and weighs 30 tonnes. … Each of these magnets cost EUR 700,000, making the net cost of the LHC magnets alone more than a billion dollars.
The narrow pipes that hold the proton beams extend inside the dipoles, which are strung together end to end so that they wind through the extent of the LHC tunnel’s interior. They produce a magnetic field that can be as strong as 8.3 tesla, about a thousand times the field of the average refrigerator magnet. As the energy of the proton beams increases from 450 GeV to 7 TeV, the magnetic field increases from 0.54 to 8.3 teslas, in order to keep guiding the increasingly energetic protons around.
The field these magnets produce is so enormous that it would displace the magnets themselves if no restraints were in place. This force is alleviated through the geometry of the coils, but the magnets are ultimately kept in place through specially constructed collars made of four-centimetre thick steel.
… Each LHC dipole contains coils of niobium-titanium superconducting cables, each of which contains stranded filaments a mere six microns thick – much smaller than a human hair. The LHC contains 1,200 tonnes of these remarkable filaments. If you unwrapped them, they would be long enough to encircle the orbit of Mars.
When operating, the dipoles need to be extremely cold, since they work only when the temperature is sufficiently low. The superconducting wires are maintained at 1.9 degrees above absolute zero … This temperature is even lower than the 2.7-degree cosmic microwave background radiation in outer space. The LHC tunnel houses the coldest extended region in the universe – at least that we know of. The magnets are known as cryodipoles to take into account their special refrigerated nature.
In addition to the impressive filament technology used for the magnets, the refrigeration (cryogenic) system is also an imposing accomplishment meriting its own superlatives. The system is in fact the world’s largest. Flowing helium maintains the extremely low temperature. A casing of approximately 97 metric tonnes of liquid helium surrounds the magnets to cool the cables. It is not ordinary helium gas, but helium with the necessary pressure to keep it in a superfluid phase. Superfluid helium is not subject to the viscosity of ordinary materials, so it can dissipate any heat produced in the dipole system with great efficiency: 10,000 metric tonnes of liquid nitrogen are first cooled, and this in turn cools the 130 metric tonnes of helium that circulate in the dipoles.
Featured image: A view of the experimental MgB2 transmission line at the LHC. Credit: CERN.
Every July 4, I have occasion to remember two things: the discovery of the Higgs boson, and my first published byline for an article about the discovery of the Higgs boson. I have no trouble believing it’s been eight years since we discovered this particle, using the Large Hadron Collider (LHC) and its ATLAS and CMS detectors, in Geneva. I’ve greatly enjoyed writing about particle physics in this time, principally because closely engaging with new research and the scientists who worked on it allowed me to learn more about a subject that high school and college had let me down on: physics.
In 2020, I haven’t been able to focus much on the physical sciences in my writing, thanks to the pandemic, the lockdown, their combined effects and one other reason. This has been made doubly sad by the fact that the particle physics community at large is at an interesting crossroads.
In 2012, the LHC fulfilled the principal task it had been built for: finding the Higgs boson. After that, physicists imagined the collider would discover other unknown particles, allowing theorists to expand their theories and answer hitherto unanswered questions. However, the LHC has since done the opposite: it has narrowed the possibilities of finding new particles that physicists had argued should exist according to their theories (specifically supersymmetric partners), forcing them to look harder for mistakes they might’ve made in their calculations. But thus far, physicists have neither found mistakes nor made new findings, leaving them stuck in an unsettling knowledge space from which it seems there might be no escape (okay, this is sensationalised, but it’s also kinda true).
Right now, the world’s particle physicists are mulling building a collider larger and more powerful than the LHC, at a cost of billions of dollars, in the hopes that it will find the particles they’re looking for. Not all physicists are agreed, of course. If you’re interested in reading more, I’d recommend articles by Sabine Hossenfelder and Nirmalya Kajuri and spiralling out from there. But notwithstanding the opposition, CERN – which coordinates the LHC’s operations with tens of thousands of personnel from scores of countries – recently updated its strategy vision to recommend the construction of such a machine, with the ability to produce copious amounts of Higgs bosons in collisions between electrons and positrons (a.k.a. ‘Higgs factories’). China has also announced plans of its own to build something similar.
Meanwhile, scientists and engineers are busy upgrading the LHC itself to a ‘high luminosity version’, where luminosity is a measure of the number of collisions the machine can produce, and therefore of the number of interesting events available for further study. This version will operate until 2038. That isn’t a long way away because it took more than a decade to build the LHC; it will definitely take longer to plan for, convince lawmakers, secure the funds for and build something bigger and more complicated.
There have been some other developments, connected to the occasion at hand, that indicate other ways to discover ‘new physics’, the collective name for phenomena that would violate our existing theories’ predictions and show us where we’ve gone wrong in our calculations.
The most recent one I think was the ‘XENON excess’, which refers to a moderately strong signal recorded by the XENON 1T detector in Italy that physicists think could be evidence of a class of particles called axions. I say ‘moderately strong’ because the statistical significance of the signal’s strength is just barely above the threshold used to denote evidence and not anywhere near the threshold that denotes a discovery proper.
It’s evoked a fair bit of excitement because axions count as new physics – but when I asked two physicists (one after the other) to write an article explaining this development, they refused on similar grounds: that the significance makes it seem likely that the signal will be accounted for by some other well-known event. I was disappointed of course but I wasn’t surprised either: in the last eight years, I can count at least four instances in which a seemingly inexplicable particle physics related development turned out to be a dud.
The most prominent one was the ‘750 GeV excess’ at the LHC in December 2015, which seemed to be a sign of a new particle about six-times heavier than a Higgs boson and 800-times heavier than a proton (at rest). But when physicists analysed more data, the signal vanished – a.k.a. it wasn’t there in the first place and what physicists had seen was likely a statistical fluke of some sort. Another popular anomaly that went the same way was the one at Atomki.
But while all of this is so very interesting, today – July 4 – also seems like a good time to admit I don’t feel as invested in the future of particle physics anymore (the ‘other reason’). Some might say, and have said, that I’m abandoning ship just as the field’s central animus is moving away from the physics and more towards sociology and politics, and some might be right. I get enough of the latter subjects when I work on the non-physics topics that interest me, like research misconduct and science policy. Within physics itself, my heart is currently tending towards quantum mechanics and thermodynamics (although not quantum thermodynamics).
One peer had also recommended in between that I familiarise myself with quantum computing while another had suggested climate-change-related mitigation technologies, which only makes me wonder now if I’m delving into those branches of physics that promise to take me farther away from what I’m supposed to do. And truth be told, I’m perfectly okay with that. 🙂 This does speak to my privileges – modest as they are on this particular count – but when it feels like there’s less stuff to be happy about in the world with every new day, it’s time to adopt a new hedonism and find joy where it lies.
I feel a lot of non-science editors just switch off when they read science stuff.
A friend told me this earlier today, during yet another conversation about how many of the editorial issues that assail science and health journalism have become more pronounced during the pandemic (by dint of the pandemic being a science and health ‘event’). Even earlier, editors would switch off whenever they’d read science news, but then the news would usually be about a new study discussing something coffee could or couldn’t do to the heart.
While that’s worrying, the news was seldom immediately harmful, and lethal even more rarely. In a pandemic, on the other hand, bullshit that makes it to print hurts in two distinct ways: by making things harder for good health journalists to get through to readers with the right information and emphases, and of course by encouraging readers to do things that might harm them.
But does this mean editors need to know the ins and outs of the subject on which they’re publishing articles? This might seem like a silly question to ask but it’s often the reality in small newsrooms in India, where one editor is typically in charge of three or four beats at a time. And setting aside the argument that this arrangement is a product of complacency and not taking science news seriously more than resource constraints, it’s not necessarily a bad thing either.
For example, a political editor may not be able to publish incisive articles on, say, developments in the art world, but they could still help by identifying reliable news sources and tapping their network to commission the right reporters. And if the organisation spends a lot more time covering political news, and with more depth, this arrangement is arguably preferable from a business standpoint.
Of course, such a setup is bound to be error-prone, but my contention is that it doesn’t deserve to be written off either, especially this year – when more than a few news publishers suddenly found themselves in the middle of a pandemic even as they couldn’t hire a health editor because their revenues were on the decline.
For their part, then, publishers can help minimise errors by being clear about what editors are expected to do. For example, a newsroom can’t possibly do a great job of covering science developments in the country without a science editor; axiomatically, non-science editors can only be expected to do a superficial job of standing in for a science editor.
This said, the question still stands: What are editors to do specifically, especially those suddenly faced with the need to cover a topic they’re only superficially familiar with? The answer to this question is important not just to help editors but also to maintain accountability. For example, though I’ve seldom covered health stories in the past, I also don’t get to throw my hands up as The Wire‘s science, health and environment editor when I publish a faulty story about, say, COVID-19. It is a bit of a ‘damned if you do, damned if you don’t’ situation, but it’s not entirely unfair either: it’s the pandemic, and The Wire can’t not cover it!
In these circumstances, I’ve found one particular way to mitigate the risk of damnation, so to speak, quite effective. I recently edited an article in which the language of a paragraph seemed off to me because it wasn’t clear what the author was trying to say, and I kept pushing him to clarify. Finally, after 14 emails, we realised he had made a mistake in the calculations, and we dropped that part of the article. More broadly, I’ve found that nine times out of ten, even pushbacks on editorial grounds can help identify and resolve technical issues. If I think the underlying argument has not been explained clearly enough, I send a submission back even if it is scientifically accurate or whatever.
Now, I’m not sure how robust this relationship is in the larger scheme of things. For one, this ‘mechanism’ will obviously fail when clarity of articulation and soundness of argument are not related, such as in the case of authors for whom English is a second language. For another, the omnipresent – and omnipotent – confounding factor known as unknown unknowns could keep me from understanding an argument even when it is well-made, thus putting me at risk of turning down good articles simply because I’m too dense or ignorant.
But to be honest, these risks are quite affordable when the choice is between damnation for an article I can explain and damnation for an article I can’t. I can (and do) improve the filter’s specificity/sensitivity 😄 by reading widely myself, to become less ignorant, and by asking authors to include a brief of 100-150 words in their emails clarifying, among other things, their article’s intended effect on the reader. And fortuitously, when authors are pushed to be clearer about the point they’re making, it seems they also tend to reflect on the parts of their reasoning that lie beyond the language itself.
The Large Hadron Collider (LHC) performs an impressive feat every time it accelerates billions of protons to nearly the speed of light – and not in terms of the energy alone. For example, you release more energy when you clap your palms together once than the energy imparted to a proton accelerated by the LHC. The impressiveness arises from the fact that the energy of your clap is distributed among an astronomical number of atoms while the latter all resides in a single particle. It’s impressive because of the energy density.
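The comparison is easy to check. At the LHC’s operating beam energy of 6.5 TeV, a single proton carries

$$E = 6.5 \times 10^{12}\ \text{eV} \times 1.6 \times 10^{-19}\ \tfrac{\text{J}}{\text{eV}} \approx 10^{-6}\ \text{J},$$

about a microjoule, whereas a clap plausibly releases on the order of a joule (a rough guess on my part): a million times more, but spread across an unimaginably larger number of atoms.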
A proton like this should have a very high kinetic energy. When lots of protons with such amounts of energy come together to form a macroscopic object, the object will have a high temperature. This is the relationship between subatomic particles and the temperature of the object they make up. The outermost layer of a star is so hot because its constituent particles have a very high kinetic energy. Blue hypergiant stars like Eta Carinae, among the hottest stars known, have surface temperatures of around 36,000 K; Eta Carinae’s surface is also about 57,600-times larger than that of the Sun. This is impressive not on the temperature scale alone but also on the energy density scale: Eta Carinae ‘maintains’ a higher temperature over a larger area.
Now, the following headline and variations thereof have been doing the rounds of late, and they piqued me because I’m quite reluctant to believe they’re true: that the International Space Station is home to the “universe’s coolest lab”.
The headline is from Nature News, atop an article about the Bose-Einstein condensate experiment aboard the space station. To be sure, I’m not doubting the veracity of any of the claims. Instead, my dispute is with the “coolest lab” claim and on entirely qualitative grounds.
The feat mentioned in the headline involves physicists using lasers to cool a tightly controlled group of atoms to near-absolute-zero, causing quantum mechanical effects to become visible on the macroscopic scale – the feature that Bose-Einstein condensates are celebrated for. Most, if not all, atomic cooling techniques endeavour in different ways to extract as much of an atom’s kinetic energy as possible. The more energy they remove, the cooler the indicated temperature.
The reason the headline piqued me was that it trumpets a place in the universe called the “universe’s coolest lab”. Be that as it may (though it may not technically be so; the physicist Wolfgang Ketterle has achieved lower temperatures before), lowering the temperature of an object to a remarkable sliver of a kelvin above absolute zero is one thing, but lowering the temperature over a very large area or volume must be quite another. For example, an extremely cold object inside a tight container the size of a shoebox (I presume) must be lacking much less energy than a not-so-extremely cold volume the size of, say, a star.
This is the source of my reluctance to acknowledge that the International Space Station could be the “coolest lab in the universe”.
While we regularly equate heat with temperature without much consequence to our judgment, the latter can be described by a single number pertaining to a single object whereas the former – heat – is energy flowing from a hotter to a colder region of space (or the other way with the help of a heat pump). In essence, the amount of heat is a function of two differing temperatures. In turn it could matter, when looking for the “coolest” place, that we look not just for low temperatures but for lower temperatures within warmer surroundings. This is because it’s harder to maintain a lower temperature in such settings – for the same reason we use thermos flasks to keep liquids hot: if the liquid is exposed to the ambient atmosphere, heat will flow from the liquid to the air until the two achieve a thermal equilibrium.
An object is said to be cold if its temperature is lower than that of its surroundings. Vladivostok in Russia is cold relative to most of the world’s other cities, but if Vladivostok were the sole human settlement, beyond which no one had ever ventured, the human idea of cold would have to be recalibrated from, say, 10º C to -20º C. The temperature required to achieve a Bose-Einstein condensate is the temperature at which thermal, non-quantum-mechanical effects are so stilled that they stop interfering with the much weaker quantum-mechanical effects; it is given by a formula but is typically lower than 1 K.
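The formula in question, for an ideal gas of bosons of mass $m$ at number density $n$, gives the condensation temperature as

$$T_c = \frac{2\pi\hbar^{2}}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},$$

where $\zeta(3/2) \approx 2.612$; for the dilute clouds of atoms used in these experiments, it comes out to well below 1 K, typically in the nanokelvin to microkelvin range.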
The deep nothingness of space itself has a temperature of 2.7 K (-270.45º C); when all the stars in the universe die and there are no more sources of energy, all hot objects – like neutron stars, colliding gas clouds or molten rain over an exoplanet – will eventually have to cool to 2.7 K to achieve equilibrium (notwithstanding other eschatological events).
This brings us, figuratively, to the Boomerang Nebula – in my opinion the real coolest lab in the universe because it maintains a very low temperature across a very large volume, i.e. its coolness density is significantly higher. This is a protoplanetary nebula, which is a phase in the lives of stars within a certain mass range. In this phase, the star sheds some of its mass that expands outwards in the form of a gas cloud, lit by the star’s light. The gas in the Boomerang Nebula, from a dying red giant star changing to a white dwarf at the centre, is expanding outward at a little over 160 km/s (576,000 km/hr), and has been for the last 1,500 years or so. This rapid expansion leaves the nebula with a temperature of 1 K. Astronomers discovered this cold mass in late 1995.
(“When gas expands, the decrease in pressure causes the molecules to slow down. This makes the gas cold”: source.)
The experiment to create a Bose-Einstein condensate in space – or for that matter anywhere on Earth – transpired in a well-insulated container that, apart from the atoms to be cooled, was a vacuum. So as such, to the atoms, the container was their universe, their Vladivostok. They were not at risk of the container’s coldness inviting heat from its surroundings and destroying the condensate. The Boomerang Nebula doesn’t have this luxury: as a nebula, it’s exposed to the vast emptiness, and 2.7 K, of space at all times. So even though the temperature difference between it and space is only 1.7 K, the nebula also has to constantly contend with the equilibrating ‘pressure’ imposed by space.
Further, according to Raghavendra Sahai (as quoted by NASA), one of the nebula’s cold spots’ discoverers, it’s “even colder than most other expanding nebulae because it is losing its mass about 100-times faster than other similar dying stars and 100-billion-times faster than Earth’s Sun.” This implies there is a great mass of gas, and so atoms, whose temperature is around 1 K.
Altogether, the fact that the nebula has maintained a temperature of 1 K for around 1,500 years (plus a 5,000-year offset, to compensate for the distance to the nebula) and over 3.14 trillion km makes it a far cooler “coolest” place, lab, whatever.
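As a rough check of that scale, assuming the outflow has been expanding at a constant 160 km/s for 1,500 years (both figures from above; the real structure is of course messier):

```python
# Rough kinematics of the Boomerang Nebula's outflow, using the figures quoted above.
speed_km_s = 160     # expansion speed of the gas
years = 1500         # how long the expansion has been going on
seconds = years * 365.25 * 24 * 3600

distance_km = speed_km_s * seconds
light_year_km = 9.46e12

print(f"Distance covered: {distance_km:.2e} km")                       # ~7.6e12 km
print(f"That's about {distance_km / light_year_km:.1f} light-years")   # ~0.8
```

So the gas has had time to spread across a radius approaching a light-year, which is what makes a temperature of about 1 K sustained over trillions of kilometres such a remarkable thing.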
The people involved with the RECOVERY clinical trial have announced via statements to the press that they have found very encouraging results about the use of dexamethasone in people with severe COVID-19 who had to receive ventilator support. However, the study’s data isn’t available for independent verification yet. So irrespective of how pumped the trial’s researchers are, wait. Studies in more advanced stages of the publishing process have been sunk before.
Dexamethasone is relatively cheap and widely available. But that doesn’t mean it will continue to remain that way in future. The UK government has already announced it has stockpiled 200,000 doses of the drug, and other countries with access to supply may follow suit. Companies that manufacture the drug may also decide to hike prices, foreseeing rising demand, leading to further issues of availability.
Researchers found in their clinical trial that the drug reduced mortality by around 33% among COVID-19 patients who needed ventilator support, and by about 20% among those who needed oxygen. This describes a very specific use-case, and governments must ensure that if the drug is repurposed for COVID-19, its use is limited to people who meet the specific criteria under which it has shown benefit. As the preliminary report notes, “It is important to recognise that we found no evidence of benefit for patients who did not require oxygen and we did not study patients outside the hospital setting.” In addition, dexamethasone is a steroid, and indiscriminate use is quite likely to lead to adverse side effects with zero benefits.
The novel coronavirus pandemic is not a tragedy in the number of deaths alone. An important long-term effect will be disability, considering the virus has been known to affect multiple parts of the body, including the heart, brain and the kidneys, apart from the lungs themselves, even among patients who have survived. Additionally, dexamethasone cuts mortality only in patients at a later stage of COVID-19. So go easy on words like ‘game-changer’. Dexamethasone isn’t exactly one because game-changers need to allow people to contract the virus but not fear disability or for their lives…
… or in fact not fear contracting the virus at all – like a vaccine or an efficacious prophylactic. This is very important, for example, because of what we have already seen in Italy and New York. Many patients who don’t need ventilator support or oxygen care still need hospital care, and the unavailability of hospital beds and skilled personnel can lead to more deaths than may be due to COVID-19. This ‘effect’, so to speak, is more pronounced in developing nations, many of which have panicked and formulated policies that pay way more or way less attention to COVID-19 than is due. In India, for example, nearly 900 people have died due to the lockdown itself.