Month: June 2022

  • The problem that ‘crypto’ actually solves

    From ‘Cryptocurrency Titan Coinbase Providing “Geo Tracking Data” to ICE’, The Intercept, June 30, 2022:

    Coinbase, the largest cryptocurrency exchange in the United States, is selling Immigration and Customs Enforcement a suite of features used to track and identify cryptocurrency users, according to contract documents shared with The Intercept. … a new contract document obtained by Jack Poulson, director of the watchdog group Tech Inquiry, and shared with The Intercept, shows ICE now has access to a variety of forensic features provided through Coinbase Tracer, the company’s intelligence-gathering tool (formerly known as Coinbase Analytics).

    Coinbase Tracer allows clients, in both government and the private sector, to trace transactions through the blockchain, a distributed ledger of transactions integral to cryptocurrency use. While blockchain ledgers are typically public, the enormous volume of data stored therein can make following the money from spender to recipient beyond difficult, if not impossible, without the aid of software tools. Coinbase markets Tracer for use in both corporate compliance and law enforcement investigations, touting its ability to “investigate illicit activities including money laundering and terrorist financing” and “connect [cryptocurrency] addresses to real world entities.”

    Every “cryptocurrency is broken” story these days has a predictable theme: the real world caught up because the real world never went away. The fundamental impetus for cryptocurrencies is the belief of a bunch of people that they can’t trust their money with governments and banks – imagined as authoritarian entities that have centralised decision-making power over private property, including money – and who thus invented a technological alternative that would execute the same solutions the governments and banks did, but sans centralisation, sans trust.

    Even more fundamentally, cryptocurrencies embody neither the pursuit of ensuring the people’s control of money nor that of liberating art-trading from the clutches of racism. Instead, they symbolise the abdication of the responsibility to reform banking and finance – a far more arduous process that is also more constitutive and equitable. They symbolise the thin line between democracy and majoritarianism: they claim to have placed the tools to validate financial transactions in the hands of the ‘people’ but fail to grasp that these tools will still be used in the same world that apparently created the need for cryptocurrencies. In this context, I highly recommend this essay on the history of the socio-financial forces that inevitably led to the popularity of cryptocurrencies.

    These (pseudo)currencies have often been rightly described as a solution looking for a problem, because the fact remains that the ‘problem’ they do solve is public non-participation in governance. Their proponents just don’t like to admit it. Who would?

    The identity of cryptocurrencies may once have been limited to technological marvels and the play-things of mathematicians and financial analysts, but their foundational premise bears a deeper, more dispiriting implication. As the value of one virtual currency after the next comes crashing down, after cryptocurrency-based trading and financing schemes come a cropper, and after their promises to be untraceable, decentralised and uncontrollable have been successively falsified, the whole idea ought to be revealed for what it is: a cynical social engineering exercise to pump even more money from the ‘bottom’ of the pyramid to the ‘top’. Yet the implication stands: cryptocurrencies will persist because they are vehicles of the libertarian ideologies of their proponents. To attempt to ‘stop’ them is to attempt to stop the ideologues themselves.

  • What the bitcoin price drop reveals about ‘crypto’

    One of the definitive downsides of cryptocurrencies raised its head this week when the nosediving price of bitcoin – brought on by the Luna/Terra crash and its cascading effects – rendered bitcoin mining less profitable. One bitcoin still costs $19,410, so it may be hard to imagine how this state of affairs came to pass – which is why understanding the ‘permissionless’ nature of cryptocurrency blockchains is important.

    Verifying bitcoin transactions requires computing power. Computing power (think of the processing cores in your CPU) costs money. So those bitcoin users who provide this power need to be compensated for this expense or the bitcoin ecosystem will make no financial sense. This is why the bitcoin blockchain generates a token when users provide computing power to verify transactions. This process is called mining: the computing power verifies each batch of transactions by solving a complex math problem whose solution adds the batch to the blockchain, in return for which the blockchain spits out a token (or a fraction of it, averaged over time).
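    To make the ‘complex math problem’ concrete, here is a minimal proof-of-work sketch in Python. It is an illustration of the idea only, not the actual Bitcoin protocol (which double-hashes 80-byte block headers with SHA-256 and compares the result against a network-wide target); the block data and difficulty below are made up.

    ```python
    import hashlib

    def mine(block_data: str, difficulty_bits: int) -> int:
        """Search for a nonce whose hash falls below the difficulty target."""
        target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if int(digest, 16) < target:
                return nonce  # finding this is costly; verifying it is cheap
            nonce += 1

    print(mine("alice pays bob 1 BTC", difficulty_bits=20))
    ```

    The asymmetry is the point: the miner burns computing power searching for the nonce, while anyone else can check the result with a single hash.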

    The idea is that these users should be able to use this token to pay for the computing power they’re providing. Obviously this means these tokens should have real value, like dollar value. And this is why bitcoin’s price dropping below a certain figure is bad news for those providing the computing power – i.e. the miners.

    Bitcoin mining is currently the preserve of a few mining conglomerates, instead of being distributed across thousands of individual miners, because these conglomerates sought to cash in on bitcoin’s dollar value. So if they quit the game or reduce their commitment to mining, the rate of production of new bitcoins will slow, but that’s a highly secondary outcome; the primary outcome will be less computing power being available to verify transactions, which will considerably slow the ability to use bitcoins to do cryptocurrency things.

    Bitcoin’s dropping value also illustrates why so many cryptocurrency investment schemes – including those based on bitcoin – are practically Ponzi schemes. In the real world (beyond blockchains), the cost of computing power will only increase over time. This is because of inflation, because of the rising cost of the carbon footprint and because the blockchain produces tokens less often over time. So to keep the profits from mining from declining, the price of bitcoin has to increase, which implies the need for speculative valuation, which then paves the way for pump-and-dump and Ponzi schemes.
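    The ‘produces tokens less often over time’ bit is the halving schedule: the bitcoin block subsidy is cut in half every 210,000 blocks, roughly every four years. A back-of-the-envelope sketch of what that does to miner economics, assuming for illustration that electricity is the only cost (the cost figure below is hypothetical, and real mining also involves hardware, fees and difficulty changes):

    ```python
    def block_subsidy(height: int) -> float:
        """BTC minted per block; halves every 210,000 blocks."""
        return 50.0 / (2 ** (height // 210_000))

    def breakeven_price(height: int, cost_per_block_usd: float) -> float:
        """Bitcoin price at which mining a block just recovers its (assumed) cost."""
        return cost_per_block_usd / block_subsidy(height)

    print(block_subsidy(740_000))               # 6.25 BTC per block in mid-2022
    print(breakeven_price(740_000, 150_000.0))  # assuming a hypothetical $150,000 spent per block
    ```

    Hold the cost fixed and let the subsidy keep halving, and the break-even price has to keep doubling – which is where the pressure for perpetual price appreciation comes from.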

    A permissioned blockchain, as I have written before, does not provide rewards for contributing computing power because it doesn’t need to constantly incentivise its users to continue using the blockchain and verifying transactions. Specifically, a permissioned blockchain uses a central authority that verifies all transactions, whereas a permissionless blockchain seeks to delegate this responsibility to the users themselves. Think of millions of people exchanging money with each other through a bank – the bank is the authority and the system is a permissioned blockchain; in the case of cryptocurrencies, which are defined by permissionless blockchains, the people exchanging the money also verify each other’s transactions.

    This is what leads to the complexity of cryptocurrencies and, inevitably, together with real-world cynicism, an abundance of opportunities to fail. Or, as Robert Reich put it, “all Ponzi schemes topple eventually”.

    Note: The single quotation marks around ‘crypto’ in the headline are there because I think the term ‘crypto’ belongs to ‘cryptography’, not ‘cryptocurrency’.

  • How do you measure peacefulness?

    The study was conceived by Australian technology entrepreneur Steve Killelea [in 2007], and is endorsed by individuals such as former UN Secretary-General Kofi Annan, the Dalai Lama, archbishop Desmond Tutu, former President of Finland and 2008 Nobel Peace Prize laureate Martti Ahtisaari, Nobel laureate Muhammad Yunus, economist Jeffrey Sachs, former president of Ireland Mary Robinson, former Deputy Secretary-General of the United Nations Jan Eliasson and former United States president Jimmy Carter. The updated index is released each year at events in London, Washington, DC, and at the United Nations Secretariat in New York.

    This is a passage from the Wikipedia article on an entity called the ‘Global Peace Index’, which “measures the relative position of nations’ and regions’ peacefulness”. Indices are flawed but useful. Their most significant flaw – and it’s quite significant – is that they attempt to distill out of the complex interactions of a host of factors a single number that, compared to another of its kind, is supposed to enable value judgments of ‘better’ or ‘worse’.

    For example, an academic freedom report published in 2020 gave India a score of 0.352 and Pakistan a score of 0.554. Does this mean all academic centres in India are less academically free than all of those in Pakistan? No. Does this mean Pakistan has 1.5x more academic freedom than India does? Not at all. Indices are useful in a very narrow context, but within that niche, they can be a force for good. There’s a reason the puffy-chested Indian government gets so worked up when the World Press Freedom Index and the Global Hunger Index are published.

    In particular, indices are most useful when they’re compared to themselves. If India’s press-freedom index value dropped from X in 2020 to Y in 2022 (because the government is going around demolishing the homes of dissenters), it’s a snapshot of a real deterioration – a problem that needs fixing by reversing the trend (less by massaging the data, as our leaders have become wont to do, and more by improving freedom for journalists). But there’s an index on the block whose use, by all counts and even in the self-referential niche, seems dangerous. This is the Global Peace Index. The 2022 edition was published earlier this week; based on it, a Business Insider article lamented that violence was costing India just too much money (Rs 50.36 lakh crore) and that this is why the country had to get a grip on it.

    A crucial thing about understanding peace (in a given place and time), and which lies squarely in the domain of those things that indices don’t record, is how peace was achieved. For example, India’s freedom struggle might have pulled down the country’s score on the Global Peace Index but at the same time it was justified and led to a better kind of peace for the whole region. Peace is not just the absence of violence but the absence of conditions that give rise to violence, now and forever, in sustainable fashion. This is why it’s possible to justify some forms of violence in the pursuit of some constitutionally determined forms of peace.

    Recently, a couple of my friends, who work in the corporate sector and whose shared philosophy is decidedly libertarian, argued with me over the justification of protest actions like rail roko and bandh. They contended that these activities constituted a violence against the many people whose livelihoods required the affected services. However, their philosophy stopped there, refusing to take the next logical step: it’s by disrupting the provision of these services that protestors get and hold the government’s attention. (Plus the Indian government has the Essential Services Maintenance Act 1968 to ensure not all of the affected services become unavailable.) Why, through his Dandi march, M.K. Gandhi sought to encourage people to not pay their taxes to the British government – a form of economic violence.

    To be sure, violence isn’t just physical; it’s also economic, social, cultural, linguistic; it’s gendered, caste-based, class-based and faith-based. The peace index report acknowledges this when it highlights its ‘Positive Peace Index’ – a measure of “the attitudes, institutions and structures that create and sustain peaceful societies”; its value “measures the level of societal resilience of a nation or region”. According to the report’s website, the lower the score, the better.

    But then, China and Saudi Arabia have lower scores than India. This is bizarre. KSA is a monarchy and China is an autocracy; in both countries, personal liberties are highly restricted and there are stringent, and in many cases Kafkaesque, punishments for falling afoul of state policy. The way of life imposed by these socio-political structures also constitutes violence. Yet the scores of these countries are comparable to those of Cuba, Mexico and Namibia. I would rank India better because I can (still, with some privileges) speak out against my government without fear of repercussions. Israel’s score, in fact, is lower than that of Palestine, while Russia has a marginally lower score than does Ukraine. It’s inexplicable.

    The India-specific portions of the peace index’s report also illustrate the report’s problems at the sub-national level. To quote:

    Some of the countries to record the biggest deteriorations [in violent demonstrations since 2008] were India, Colombia, Bangladesh and Brazil. … [India] ranks as the 135th most peaceful nation in the 2022 GPI. The country experienced an improvement of 1.4 per cent in overall peacefulness over the past year, driven by an improvement in the Ongoing Conflict domain. However, India experienced an uptick in the violent crime and perceptions of criminality indicators. … In 2020 and 2021, Indian farmers protested against newly introduced laws that removed some guarantees and subsidies on agricultural products.

    First, the report has obtained the data for the ‘level of violent crime’ indicator from the Economist Intelligence Unit (EIU). The EIU’s scoring question for this indicator is: “Is violent crime likely to pose a significant problem for government and/or business over the next two years?” It’s hard not to wonder if, from the right-wing’s point of view, “violent crime” includes that perpetrated by “urban naxals” when they protested against the Citizenship (Amendment) Act 2019. Uttar Pradesh Chief Minister Yogi Adityanath thought so before he was forced to refund Rs 22 lakh he had collected from the protestors. The Delhi police thought so when its chargesheet for the 2020 riots was composed of people whose houses had been burnt down, whose bones broken and whose temples desecrated – and people who had called on the police to arrest BJP leader Kapil Mishra for instigating the riot. How do you figure “perception of criminality” here?

    Second, the report discusses the protests against the three farm laws in a paragraph about “violent demonstrations”, in the same breath and without any qualification that the protests were peaceful and turned violent only when their participants had to defend themselves – including when the son of a national leader ran some of them over with his vehicle and when their attempt to enter Delhi was met with a water cannon and a lathi charge, among other incidents.

    The farmers were demanding higher minimum support prices and lower input costs – hardly the sort of thing that requires violence to fulfil but did because Prime Minister Narendra Modi had no other way to walk away from his promises to Ambani/Adani. Who perpetrated the real violence here – the national leader who doomed India’s farmers so industrialist tycoons would continue to fund his campaigns of communalism or the farmers who blocked roads and highways demanding that he not? Was the ‘Bharat Bandh’ that disrupted activities in several crucial sectors on March 28, 2022, more violent than the “anti-people policies” of the same national leader that they were protesting?

    A peace index that can’t settle these questions won’t see the difference between a spineless and a spineful people.

  • Tech solutions to household labour are also problems

    Just to be clear, the term ‘family’ in this post refers to a cis-het nuclear family unit.

    Tanvi Deshpande writing for Indiaspend, June 12, 2022:

    The Union government’s ambitious Jal Jeevan Mission (JJM) aims to provide tap water to every household in rural India by 2024. Until now, 50% of households have a tap connection, an improvement from August 2019, when the scheme started and 17% of households had a tap connection. The mission’s dashboard shows that in Take Deogao Gram Panchayat that represents Bardechi Wadi, only 32% of the households have tap connections. Of these, not a single one has been provided to Pardhi’s hamlet.

    This meant, for around five months every summer, women and children would rappel down a 60-foot well and spend hours waiting for water to seep into the bottom. In India, filling water for use at home is largely a woman’s job. Globally, women and girls spend 200 million hours every day collecting water, and in Asia, one round trip to collect water takes 21 minutes, on average, in rural areas.

    The water pipeline has freed up time for Bardechi Wadi’s women and children but patriarchal norms, lack of a high school in the village and of other opportunities for development means that these free hours have just turned into more time for household chores, our reporting found.

    Now these women don’t face the risk of death while fetching water but, as Deshpande has written, the time and trouble that the water pipeline has saved them will now be occupied by new chores and other forms of labour. There may have been a time when the latter might have seemed like the lesser of those two evils, but it is long gone. Today, in the climate crisis era – which often manifests as killer heatwaves in arid regions that are already short on water – the problem is access to leisure, to cooling and to financial safeguards. When women are expected to do more chores because they have the time, they lose access to leisure, which is important at least to cool off, but better yet because it is a right per se (Universal Declaration of Human Rights, article 24).

    This story is reminiscent of the effects of the introduction of home appliances into the commercial market. I read a book about a decade ago that documented, among other things, how the average amount of time women (in the US) spent doing household chores hadn’t changed much between the 1920s and the 2000s, even though this period coincided wholly with the second industrial revolution. This was because – as in the case of the pipeline of Bardechi Wadi – the purchase and use of these devices freed up women’s time for even more chores. We need the appliances as much as we need the pipeline; it’s just that men should also do household chores. However, the appliances have also presented, and continue to present, problems beyond those that pertain to society’s attitudes towards how women should spend their time.

    1. Higher expectations – With the availability of household appliances (like the iron box, refrigerator, washing machine, dishwasher, oven, etc.), the standards for various chores shot up, as did what we considered to be comfortable living – but who we expected to do this work didn’t change. So suddenly the women of the house were also responsible for ensuring that the men’s shirts and pants were all the more crinkle-less, that food was served fresh and hot all the time, etc., as well as for enlivening family life by inventing/recreating food recipes, serving and cleaning up, etc.

    2. Work + chores – The introduction of more, and more diverse, appliances into the market, aspirations and class mobility together paralleled an increase in women’s labour-force participation through the 20th century. But before these women left for their jobs and after they got home, they still had to do the household chores as well – including cooking and packing lunch for themselves and for their husbands and/or children, doing the laundry, shopping for groceries, etc.

    3. Think about the family – The advent of tech appliances also foisted on women two closely related responsibilities: to ensure the devices worked as intended and to ensure they fit with the family-unit’s ideals and aspirations. As Manisha Aggarwal-Schifellite wrote in 2016: “The automatic processes of programming the coffeemaker, unlocking an iPad with a fingerprint, or even turning on the light when you get home are the result of years of marketing that create a household problem (your home is too dark, your family too far-flung, your food insufficiently inventive), solves it with a new product, and leaves women to clean up the mess when the technology fails to deliver on its promises”.

    In effect, through the 20th century, industrialisation happened in two separate ways within the household and without. To use writer Ellen Goodman’s evocative words from a 1983 article: “At the beginning of American history …, most chores of daily life were shared by men and women. To make a meal, men chopped the wood, women cooked the stew. One by one, men’s tasks were industrialized outside the home, while women’s stayed inside. Men stopped chopping wood, but women kept cooking.”

    The diversity of responsibilities imposed by household appliances exacts its own cost. A necessary condition of men’s help around the house is that they – we – must also constantly think about which task to perform and when, instead of expecting to be told what to do every time. This is because, by expecting periodic reminders, we are still forcing women to retain the cognitive burden associated with each chore. If you think you’re still helping by sharing everything except the cognitive burden, you’re wrong. Shifting between tasks affects one’s ability to focus, performance and accuracy and increases forgetfulness. Psychologists call this the switch cost.

    It is less clear to me than it may be to others in what ways the new water pipeline through Bardechi Wadi will change the lives of the women there. But without the men of the village changing how they think about their women and their ‘responsibilities to the house’, we can’t expect anything meaningful. At the same time, the effects of the climate crisis will keep inflating the price these women pay in terms of their psychological, physical and sexual health and agency.

  • A giant leap closer to the continuous atom laser

    One of the most exotic phases of matter is called the Bose-Einstein condensate. As its name indicates, this type of matter is one whose constituents are bosons – which are basically all subatomic particles whose behaviour is dictated by the rules of Bose-Einstein statistics. These particles are also called force particles. The other kind are matter particles, or fermions. Their behaviour is described by the rules of Fermi-Dirac statistics. Force particles and matter particles together make up the universe as we know it.
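    For reference, the two kinds of statistics differ by a single sign in the average number of particles occupying a state of energy E (here μ is the chemical potential, k_B the Boltzmann constant and T the temperature):

    \[
    \langle n \rangle_{\text{BE}} = \frac{1}{e^{(E-\mu)/k_B T} - 1},
    \qquad
    \langle n \rangle_{\text{FD}} = \frac{1}{e^{(E-\mu)/k_B T} + 1}
    \]

    The minus sign lets the Bose-Einstein occupation number grow without bound as E approaches μ, while the plus sign caps the Fermi-Dirac occupation at one particle per state.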

    To be a boson, a particle – which can be anything from quarks (which make up protons and neutrons) to entire atoms – needs to have an integer spin quantum number (fermions, by contrast, have half-integer spin). (All of a particle’s properties can be described by the values of four quantum numbers.) An important difference between fermions and bosons is that Pauli’s exclusion principle doesn’t apply to bosons. The principle states that in a given quantum system, no two particles can have the same set of four quantum numbers at the same time. When two particles have the same four quantum numbers, they are said to occupy the same state. (‘States’ are not like places in a volume; instead, think of them more like a set of properties.) Pauli’s exclusion principle forbids fermions from doing this – but not bosons. So in a given quantum system, all the bosons can occupy the same quantum state if they are forced to.

    For example, this typically happens when the system is cooled to nearly absolute zero – the lowest temperature possible. (The bosons also need to be confined in a ‘trap’ so that they don’t keep moving around or combine with each other to form other particles.) More and more energy being removed from the system is equivalent to more and more energy being removed from the system’s constituent particles. So as fermions and bosons possess less and less energy, they occupy lower and lower quantum states. But once all the lowest fermionic states are occupied, fermions start occupying the next lowest states, and so on. This is because of the principle. Bosons on the other hand are all able to occupy the same lowest quantum state. When this happens, they are said to have formed a Bose-Einstein condensate.
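    For an ideal, uniform gas of bosons this crossover is sharp: below a critical temperature T_c, a macroscopic fraction of the atoms piles into the ground state,

    \[
    \frac{N_0}{N} = 1 - \left(\frac{T}{T_c}\right)^{3/2}, \qquad T < T_c,
    \]

    where N_0 is the number of atoms in the lowest state and N the total. (The exponent depends on the setup – it is 3 for a gas held in a harmonic trap – but the qualitative picture is the same.)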

    In this phase, all the bosons in the system move around like a fluid – like the molecules of flowing water. A famous example of this is superconductivity (at least of the conventional variety). When certain materials are cooled to near absolute zero, their electrons – which are fermions – overcome their mutual repulsion and pair up with each other to form composite pairs called Cooper pairs. Unlike individual electrons, Cooper pairs are bosons. They go on to form a Bose-Einstein condensate in which the Cooper pairs ‘flow’ through the material. In the material’s non-superconducting state, the electrons would have been scattered by objects in their path – like atomic nuclei or vibrations in the lattice. This scattering would have manifested as electrical resistance. But because Cooper pairs have all occupied the same quantum state, they are much harder to scatter. They flow through the material as if they don’t experience any resistance. This flow is what we know as superconductivity.

    Bose-Einstein condensates are a big deal in physics because they are a macroscopic effect of microscopic causes. We can’t usually see or otherwise directly sense the effects of most quantum-physical phenomena because they happen on very small scales, and we need the help of sophisticated instruments like electron microscopes and particle accelerators. But when we cool a superconducting material to below its threshold temperature, we can readily sense the presence of a superconductor by passing an electric current through it (or using the Meissner effect). Macroscopic effects are also easier to manipulate and observe, so physicists have used Bose-Einstein condensates as a tool to probe many other quantum phenomena.

    While Albert Einstein predicted the existence of Bose-Einstein condensates – based on work by Satyendra Nath Bose – in 1924, physicists had the requisite technologies and understanding of quantum mechanics to be able to create them in the lab only in the 1990s. These condensates were, and mostly still are, quite fragile and can be created only in carefully controlled conditions. But physicists have also been trying to figure out how to maintain a Bose-Einstein condensate for long periods of time, because durable condensates are expected to provide even more research insights as well as hold potential applications in particle physics, astrophysics, metrology, holography and quantum computing.

    An important reason for this is wave-particle duality, which you might recall from high-school physics. Louis de Broglie postulated in 1924 that every quantum entity could be described both as a particle and as a wave. The Davisson-Germer experiment of 1923-1927 subsequently found that electrons – which were until then considered to be particles – behaved like waves in a diffraction experiment. Interference and diffraction are exhibited by waves, so the experiment proved that electrons could be understood as waves as well. Similarly, a Bose-Einstein condensate can be understood both in terms of particle physics and in terms of wave physics. Just like in the Davisson-Germer experiment, when physicists set up an experiment to look for an interference pattern from a Bose-Einstein condensate, they succeeded. They also found that the interference pattern became stronger the more bosons they added to the condensate.
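    The wave picture also supplies a rule of thumb for when condensation sets in. Each atom can be assigned a de Broglie wavelength, which for a thermal gas grows as the temperature falls; condensation begins roughly when this wavelength becomes comparable to the spacing between atoms, i.e. when the matter waves of neighbouring atoms start to overlap:

    \[
    \lambda = \frac{h}{p}, \qquad
    \lambda_{\text{dB}}(T) = \frac{h}{\sqrt{2\pi m k_B T}}, \qquad
    n\,\lambda_{\text{dB}}^3 \gtrsim 2.612
    \]

    Here h is Planck’s constant, p the momentum, m the atomic mass, n the number density and 2.612 the value of the Riemann zeta function at 3/2.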

    Now, all the bosons in a condensate have a coherent phase. The phase of a wave measures the extent to which the wave has evolved in a fixed amount of time. When two waves have coherent phase, both of them will have progressed by the same amount in the same span of time. Phase coherence is one of the most important wave-like properties of a Bose-Einstein condensate because of the possibility of a device called an atom laser.

    ‘Laser’ is an acronym for ‘light amplification by stimulated emission of radiation’. The following video demonstrates its working principle better than I can in words right now:

    The light emitted by an optical laser is coherent: it has a constant frequency and comes out in a narrow beam if the coherence is spatial or can be produced in extremely short pulses if the coherence is temporal. An atom laser is a laser composed of propagating atoms instead of photons. As Wolfgang Ketterle, who led one of the first groups to create a Bose-Einstein condensate and later shared a Nobel Prize for it, put it, “The atom laser emits coherent matter waves whereas the optical laser emits coherent electromagnetic waves.” Because the bosons of a Bose-Einstein condensate are already phase-coherent, condensates make excellent sources for an atom laser.

    The trick, however, lies in achieving a Bose-Einstein condensate of the desired (bosonic) atoms and then extracting a few atoms into the laser while replenishing the condensate with more atoms – all without letting the condensate break down or the phase-coherence be lost. Physicists created the first such atom laser in 1996 but it neither emitted continuously nor was it very bright. Researchers have since built better atom lasers based on Bose-Einstein condensates, although they remain far from being usable in their putative applications. An important reason for this is that physicists are yet to build a condensate-based atom laser that can operate continuously – that is, one in which, as atoms lase out of the condensate, the condensate is constantly replenished and the laser keeps operating for a long time.

    On June 8, researchers from the University of Amsterdam reported that they had been able to create a long-lived, sort of self-sustaining Bose-Einstein condensate. This brings us a giant step closer to a continuously operating atom laser. Their setup consisted of multiple stages, all inside a vacuum chamber.

    In the first stage, strontium atoms (which are bosons) started from an ‘oven’ maintained at 850 K and were progressively laser-cooled while they made their way into a reservoir. (Here is a primer on how laser-cooling works.) The reservoir had a dimple in the middle. In the second stage, the atoms were guided by lasers and gravity to descend into this dimple, where they had a temperature of approximately 1 µK, or one-millionth of a kelvin. As the dimple became more and more crowded, it was important for the atoms here to not heat up, which could have happened if some light had ‘leaked’ into the vacuum chamber.

    To prevent this, in the third stage, the physicists used a carefully tuned laser shined only through the dimple that had the effect of rendering the strontium atoms mostly ‘transparent’ to light. According to the research team’s paper, without the ‘transparency beam’, the atoms in the dimple had a lifetime of less than 40 ms, whereas with the beam, it was more than 1.5 s – a 37x difference. At some point, when a sufficient number of atoms had accumulated in the dimple, a Bose-Einstein condensate formed. In the fourth stage, an effect called Bose stimulation kicked in. Simply put, as more bosons (strontium atoms, in this case) transitioned into the condensate, the rate of transition of additional bosons also increased. Bose stimulation thus played the role that the gain medium plays in an optical laser. The condensate grew until the rate at which it gained atoms matched the rate at which atoms were lost from the dimple, reaching an equilibrium.
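    A toy way to see that equilibrium is a minimal rate-equation sketch (the gain and loss rates here are schematic, not numbers from the paper): if atoms enter the condensate at a rate proportional to (N_0 + 1) – the ‘+1’ is the spontaneous seed, the N_0 the Bose stimulation – and leave at a rate proportional to N_0, then

    \[
    \frac{dN_0}{dt} = \Gamma_{\text{in}}\,(N_0 + 1) - \Gamma_{\text{out}}\,N_0
    \quad\Rightarrow\quad
    N_0^{\text{ss}} = \frac{\Gamma_{\text{in}}}{\Gamma_{\text{out}} - \Gamma_{\text{in}}}
    \]

    for Γ_out > Γ_in: the condensate settles at a steady size set by the balance of gain and loss.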

    And voila! With a steady-state Bose-Einstein condensate, the continuous atom laser was almost ready. The physicists have acknowledged that their setup can be improved in many ways, including by making the laser-cooling effects more uniform, increasing the lifetime of strontium atoms inside the dimple, reducing losses due to heating and other effects, etc. At the same time, they wrote that “at all times after steady state is reached”, they found a Bose-Einstein condensate existing in their setup.

  • What arguments against the ‘next LHC’ say about funding Big Physics

    A few days ago, a physicist (and PhD holder) named Thomas Hartsfield published a strange article in Big Think about why building a $100-billion particle physics machine like the Large Hadron Collider (LHC) is a bad idea. The article was so replete with errors that even I – a not-physicist and not-a-PhD-holder – cringed reading them. I also wanted to blog about the piece but theoretical physicist Matthew Strassler beat me to it, with a straightforward post about the many ways in which Hartsfield’s article was just plain wrong, especially coming from a physicist. But I also think there were some things that Strassler either overlooked or left unsaid and which to my mind bear fleshing out – particularly points that have to do with the political economy of building research machines like the LHC. I also visit in the end the thing that really made me want to write this post, in response to a seemingly throwaway line in Strassler’s post. First, the problems that Hartsfield’s piece throws up and which deserve more attention:

    1. One of Hartsfield’s bigger points in his article is that instead of spending $100 billion on one big physics project, we could spend it on 100,000 smaller projects. I agree with this view, sensu lato, that we need to involve more stakeholders than only physicists when contemplating the need for the next big accelerator or collider. However, in making the argument that the money can be redistributed, Hartsfield presumes that a) if a big publicly funded physics project is cancelled, the allocated money that the government doesn’t spend as a result will subsequently be diverted to other physics projects, and b) this is all the money that we have to work with. Strassler provided the most famous example of the fallacy pertinent to (a): the Superconducting Super Collider in the US, whose eventual cancellation ‘freed’ an allocation of $4.4 billion, but the US government didn’t redirect this money back into other physics research grants. (b), on the other hand, is a more pernicious problem: a government allocating $100 billion for one project does not implicitly mean that it can’t spare $10 million for a different project, or projects. Realpolitik is important here. Politicians may contend that after having approved $100 billion for one project, it may not be politically favourable for them to return to Congress or Parliament or wherever with another proposal for $10 million. But on the flip side, both mega-projects and many physics research items are couched in arguments and aspirations to improve bilateral or multilateral ties (without vomiting on other prime ministers), ease geopolitical tensions, score or maintain research leadership, increase research output, generate opportunities for long-term technological spin-offs, spur local industries, etc. Put another way, a Big Science project is not just a science project; depending on the country, it could well be a national undertaking along the lines of the Apollo 11 mission. These arguments matter for political consensus – and axiomatically the research projects that are able to present these incentives are significantly different from those that aren’t, which in turn can help fund both Big Science and ‘Small Science’ projects at the same time. The possibility exists. For example, the Indian government has funded Gaganyaan separately from ISRO’s other activities. $100 billion isn’t all the money that’s available, and we should stop settling for such big numbers when they are presented to us.

    2. These days, big machines like the one Hartsfield has erected as a “straw man” – to use Strassler’s words – aren’t built by individual countries. They are the product of an international collaboration, typically with dozens of governments, hundreds of universities and thousands of researchers participating. The funds allocated are also spent over many years, even decades. In this scenario, when a $100-billion particle collider is cancelled, no one entity in the whole world suddenly has that much money to give away at any given moment. Furthermore, in big collaborations, countries don’t just give money; often they add value by manufacturing various components, leasing existing facilities, sharing both human and material resources, providing loans, etc. The value of each of these contracts is added to the total value of the project. For example, India has been helping the LHC by manufacturing and supplying components related to the machine’s magnetic and cryogenic facilities. Let’s say India’s Departments of Science and Technology and of Atomic Energy had inked contracts with CERN, which hosts and maintains the LHC, worth $10 million to make and transport these components, but then the LHC had been called off just before its construction was to begin. Does this mean India would have had $10 million to give away to other science projects? Not at all! In fact, manufacturers within the country would have been bummed about losing the contracts.

    3. Hartsfield doesn’t seem to acknowledge incremental results, results that improve the precision of prior measurements and results that narrow the range in which we can find a particle. Instead, he counts only singularly positive, and sensational, results – of which the LHC has had only one: the discovery of the Higgs boson in 2012. Take all of them together and the LHC will suddenly seem more productive. Simply put, precision-improving results are important because even a minute difference between the theoretically predicted value and the observed value could be a significant discovery that opens the door to ‘new physics’. We recently saw this with the mass of a subatomic particle called the W boson. Based on the data collected by a detector mounted on the Tevatron particle accelerator in Illinois, physicists found that the mass of the W boson differed from the predicted value by around 0.12%. This was sufficient to set off a tsunami of excitement and speculation in the particle physics community. (Hartsfield also overlooked an important fact, which Strassler caught: that the LHC collects a lot more data than physicists can process in a single year, which means that when the LHC winds down, physicists will still have many years of work left before they are done with the LHC altogether. This is evidently still happening with the Tevatron, which was shut down in 2011, so Hartsfield missing it is quite weird. Another thing that happened to the Tevatron and is still happening with the LHC is that these machines are upgraded over time to produce better results.) Similarly, results that exclude the energy ranges in which a particle can be found are important because they tell us what kind of instruments we should build in future to detect the same particle. We obviously won’t need instruments that sweep the same energy range (nor will we have a guarantee that the particle will be found outside the excluded energy range – that’s a separate problem). There is another point to be made, though it may not apply to CERN as much as to Big Science projects in other countries: one country’s research community building and operating a very large research facility signals to other countries that the researchers know what they’re doing and that they might be more deserving of future investments than other candidates with similar proposals. This is one of the things that India lost with the scuttling of the India-based Neutrino Observatory (the loss itself was deserved, to be sure).

    Finally, the statement in Strassler’s post that piqued me the most:

    My impression, from his writing and from what I can find online, is that most of what he knows about particle physics comes from reading people like Ethan Siegel and Sabine Hossenfelder. I think Dr. Hartsfield would have done better to leave the argument to them.

    Thomas Hartsfield has clearly done a shoddy job in his article in the course of arguing against a Big Physics machine like the LHC in the future, but his screwing up doesn’t mean discussions on the need for the next big collider should be left to physicists. I admit that Strassler’s point here was probably limited to the people whose articles and videos were apparently Hartsfield’s primary sources of information – but it also seemed to imply that instead of helping those who get things wrong do better next time, it’s okay to ask them to not try again and instead leave the communication efforts to their primary sources. That’s Ethan Siegel and Sabine Hossenfelder in this case – both prolific communicators – but in many instances, bad articles are written by writers who bothered to try while their sources weren’t doing more or better to communicate to the people at large. This is also why it bears repeating that when it comes to determining the need for a Big Physics project of the likes of the LHC, physics is only one part of the decision, and not the deciding part, and that – importantly – science communicators also have a vital role to play. Let me quote here from an article by physicist Nirmalya Kajuri, published in The Wire Science in February 2019:

    … the few who communicate science can have a lopsided influence on the public perception of an entire field – even if they’re not from that field. The distinction between a particle physicist and, say, a condensed-matter physicist is not as meaningful to most people reading the New York Times or any other mainstream publication as it is to physicists. There’s no reason among readers to exclude [one physicist] as an expert.

    However, very few physicists engage in science communication. The extreme ‘publish or perish’ culture that prevails in sciences means that spending time in any activity other than research carries a large risk. In some places, in fact, junior scientists spending time popularising science are frowned upon because they’re seen to be spending time on something unproductive.

    All physicists agree that we can’t keep building colliders ad infinitum. They differ on when to quit. Now would be a good time, according to Hossenfelder. Most particle physicists don’t think so. But how will we know when we’ve reached that point? What are the objective parameters here? These are complex questions, and the final call will be made by our ultimate sponsors: the people.

    So it’s a good thing that this debate is playing out before the public eye. In the days to come, physicists and non-physicists must continue this dialogue and find mutually agreeable answers. Extensive, honest science communication will be key.

    So more physicists should join in the fray, as should science journalists, writers, bloggers and communicators in general. Just that they should also do better than Thomas Hartsfield to get the details right.

  • What is a supersolid?

    The names that scientists, especially physicists, give to things have been a source of humour or irritation, depending on your point of view. Despite observatories named the Very Large Telescope, succeeded by the Extremely Large Telescope, and in spite of Murray Gell-Mann naming the quarks after a word that appears only once in James Joyce’s Finnegans Wake, I’m firmly on the irritated side. There is no reason that the colour charge, for example, should be called that considering it has nothing to do with colour. Nor should the theory describing how subatomic particles are affected by the colour charge be called quantum chromodynamics.

    For another example, a superfluid is a quantum phase of matter that flows without any resistance – so the name makes sense. But a supersolid is a quantum phase of matter that has the ordered structure of a solid but can flow like a superfluid! (A ‘quantum phase’ is a phase of matter that exists at extremely low temperatures, in quantum systems – i.e. systems where quantum-mechanical forces dominate.) Supersolids are clearly inappropriately named – but then their properties are just as paradoxical. In the 1960s, scientists worked out the math and concluded that supersolids should exist – but they weren’t able to create them in the lab until the last decade or so.

    A crystal is a solid whose constituent atoms are arranged in a fixed, repeating pattern. The grid of atoms is called the lattice and each point occupied by an atom is called a node. When a node is empty, i.e. when an atom isn’t present, the site is called a vacancy. When you cool a substance to a lower and lower temperature, you take away energy from it until, at absolute zero, it has no energy. But in quantum systems, the material retains some energy even at absolute zero. This is the energy implicit to the system and is called zero-point energy. This energy allows atoms to move from occupied nodes in the lattice to nearby vacancies.
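    The textbook example is the quantum harmonic oscillator, whose allowed energies never reach zero:

    \[
    E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad E_0 = \tfrac{1}{2}\hbar\omega
    \]

    Even in its lowest state (n = 0), the oscillator retains the energy ħω/2 – the zero-point energy – which is why atoms in a quantum crystal keep jiggling, and occasionally hopping into vacancies, at absolute zero.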

    Sometimes there could be a cluster of such vacancies. When this cluster moves as a group through the material, it is equivalent to a group of atoms in the lattice moving in the opposite direction. (If you’re sitting at spot 1 on the couch and move to spot 2, it’s equivalent to the vacancy on the couch moving from spot 2 to spot 1.) When this happens, the cloud of vacancies is called a supersolid: the cluster maintains its fixed structure, defined by the lattice, yet it moves without resistance through the material.

    The first several attempts to create a supersolid succeeded but they were confined to one dimension. This is because many of them used a common method: to assemble a bunch of atoms of a particular element, “turn them into a superfluid and then add a crystalline structure by altering the interactions between the atoms” (source). This technique doesn’t work well to create two-dimensional supersolids because the “add a crystalline structure” step weakens the fragile superfluid state.

    In 2021, a group of physicists from Austria and Germany attempted to overcome this barrier by using magnetic atoms that formed small clumps with each other as well as allowing the clumps to repel each other and arrange themselves in a two-dimensional array. The jump from one dimension to two dimensions is significant because it allows physicists to explore the presence of other features of supersolids in the system. For example, theoretical calculations say that supersolids can have vortices on their surface. A one-dimensional supersolid doesn’t have a surface per se but a two-dimensional one does. Physicists can also study other features depending on the number of atoms involved. This said, the researchers’ method was cumbersome and didn’t produce a supersolid of sufficient quality.

    In a new study, published on May 13, 2022 (preprint paper), some members of the 2021 group reported that they were able to create a supersolid disc. This is also a two-dimensional supersolid but with a different geometry (the 2021 effort had produced a roughly rhombus-shaped supersolid). More importantly, the researchers devised a new method to synthesise it. While the previous method first introduced the superfluidity and then the crystallinity, in the new method, the physicists introduced both together.

    When you sweat in warm weather, water gets on your skin and then evaporates. When it does, it takes away some heat to change from liquid to vapour, thus cooling your skin. This is called evaporative cooling. When you start with a cloud of atoms, proceed to take away their energy and then progressively remove the most energetic atoms at each stage, you also progressively reduce the average energy of the system, and thus the overall temperature. This is evaporative cooling with atoms. In their study, the research team developed a theory to explain how this form of cooling could be used to create a supersolid inside a circular trap. Then they tested the theory to create a roughly hexagonal supersolid of a few tens of thousands of dysprosium atoms. (Dysprosium is highly magnetic, so its atoms can be clustered by modifying the magnetic field.)

    (a) The almost-circular supersolid of dysprosium atoms, in seven clusters (or droplets) in a hexagonal shape with one cluster at the centre. (b) The research team conducted 68 trials of the same experiment and in each case photographed the dysprosium supersolid after it had moved for 36 milliseconds. This is the ‘averaged’ shot. Credit: arXiv:2107.06680v2
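    Here is a minimal sketch of the evaporative-cooling idea described above, in Python: repeatedly discard the most energetic atoms and the average energy of what remains falls. It is an illustration of the principle only, not a model of the strontium experiment; the numbers are made up.

    ```python
    import random

    def thermal_cloud(n_atoms: int, mean_energy: float) -> list[float]:
        """Sample atom energies from an exponential (Boltzmann-like) distribution."""
        return [random.expovariate(1.0 / mean_energy) for _ in range(n_atoms)]

    atoms = thermal_cloud(100_000, mean_energy=1.0)
    for step in range(5):
        atoms.sort()
        atoms = atoms[: int(0.9 * len(atoms))]  # 'evaporate' the hottest 10%
        mean = sum(atoms) / len(atoms)
        # In a real gas, collisions would re-thermalise the survivors to a lower
        # temperature; here we just track the falling mean energy as a proxy.
        print(f"step {step}: {len(atoms):6d} atoms, mean energy {mean:.3f}")
    ```

    The trade-off is visible in the output: every cooling step costs atoms, which is part of why the Amsterdam setup keeps feeding freshly laser-cooled atoms into the reservoir to sustain a steady state.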

    Considering the way in which physicists have achieved this new supersolid, the name seems all the more confusing.

  • Review: ‘Love, Death & Robots’ 3 (2022)

    Spoilers abound.

    Two overarching impressions. 1) The Telegraph wrote that LDR 3 is about the pitfalls of human greed. I came to a different conclusion. Almost all of the episodes in LDR 3 were about humans meeting the ancient, the mysterious or the new and coming away humbled or humiliated, if they came away at all. It’s an important, but not necessarily interesting, choice of theme at this moment: the apparent centrality of exploitation to the human condition. Perhaps more importantly, LDR 3 seems to reflect on the violence that being good demands of us – call it revolution, survival, whatever – in this age of the banality of greed. Be good. It won’t be easy, but be good.

    The second impression is at the end.

    Episode 1: [Exit Strategies] The very end of the ending delivers a punch that is immediately humorous but out of step with the tone and narrative of the rest of the episode. The rest – in the form of a social commentary on humankind’s last days – was informative coming from robots but nothing quite eye-opening or mind-blowing. But it’s short, so it’s easy to enjoy.

    Episode 2: [Bad Travelling] “There’s nothing more terrifying than a man prepared to live by his conviction.” This quote appears in the Netflix series Unabomber, about the manhunt for Ted Kaczynski. This line, or a minor variant of it, apparently originated with Kaczynski and it’s easy to see its imprints on his choice of lifestyle and his radical beliefs about the environment, industrialisation and self-governance. This said, the line has remained with me because it reads like one of the few fundamental truths about the human condition – something you can’t drill further down, something that leads to a fount of insights into what it means to be human. The episode brings this truth to powerful light, and demonstrates how the absolute adherence to doing the right thing – while it may lead to morally desirable outcomes – can appear just as devious as the actions of a chaotic-evil character might.

    Episode 3: [The Very Pulse of the Machine] One thing this episode gets right within the first two minutes is that it resolves a long-standing what-if that two movies have left us with: Prometheus (2012) and The Martian (2015). At the end of Prometheus, Elizabeth Shaw is stranded on an alien moon with the rest of her crew dead and only one spacecraft left to operate, which she intends to use to get to the home-planet of the Engineers. She is determined and focused. Through most of The Martian, Mark Watney has no self-doubt, no anxiety, no panic attack – even though he’s stuck on Mars and needs to find his way back to Earth, and also figure out how to feed himself and keep himself alive. He is instead simply determined and focused. What would it look like to be stranded on an alien world, with the hope of going or returning somewhere, and to have self-doubt as well? This is how this episode of LDR 3 begins – but the problem is that the character quickly consumes some potent drug that makes her high, and the self-doubt is replaced with visual and auditory hallucinations. Ugh.

    Episode 4: [Night of the Mini Dead] This was funny from start to finish, but the laughter didn’t last. It began funny because of the medium – miniatures on a tabletop acting out the deceptively innocuous beginnings of a zombie apocalypse, followed by the apocalypse itself – and stayed funny because the rapid pace of events keeps you from thinking too much and because the apocalypse for once neither seems unpredictable nor offers redemptive value. Even the ending, a climax that lasts for all of a second, carries the tone on and leaves you with a chuckle. But give it a few more seconds and you’re probably wondering what the point was. I did. I didn’t get it. Silly little things can set off non-silly non-big things, and when it’s all said and done, none of it matters? Fuck you.

    Episode 5: [Kill Team Kill] This one was pointless. Really. I mean, LDR 3 like its two predecessors keeps its bodily obscenities focused on the male form, which makes sense only if you’re going to make a larger point about toxic masculinity and such things. But this one’s just adrenaline from start to finish.

    Episode 6: [Swarm] LDR 3 was released earlier this year, unintentionally (maybe) coinciding with a time in which experts and stakeholders around the world were pondering how we should treat genetic information – in line with the biological specimens from which it is obtained or with a separate policy. The man’s ambitions in this episode encode the hubris that we’ve come to expect from the corporate sector vis-à-vis our biological resources – the conviction that the exploiter will triumph by virtue of the destructive tendencies of exploitation. But in the episode, as in the real world, such conviction obscures the assumption underlying it: that we already know everything there is to know. We never do, and exploitation always backfires.

    When scientists working for a large company sequence the DNA of a rare plant using a single leaf plucked from a sacred forest, return it to the forest’s stewards once they’re done, and go on to replicate a compound encoded in the genes that provides a beautiful fragrance eventually bottled as an expensive perfume, should the stewards benefit? Should they have asked the stewards first? Should the stewards have much of a say? To my mind the answers to all these questions are ‘yes’, but few of the reasons are rooted in science. The stewards, and indeed the complex community of organisms of which they are part, often possess an intelligence to which science blinds us. Yet these discussions frequently begin among scientists, involve scientists and repeatedly appeal to scientific principles to claim moral, and eventually political, authority – and this inevitably leads to exploitation. The same thing happens in the episode, which I must say ends on a very gratifying note.

    Episode 7: [Mason’s Rats] No particular thoughts beyond the two impressions.

    Episode 8: [In Vaulted Halls Entombed] One thing that many (but not all) horror productions fail to get is that it’s not the grotesque that really frightens us but that momentary but singularly immense shock of being faced with something that we never expected to face – to have our minds confront something that they can’t possibly conceive. Even more fundamentally, terror erupts when we need to fill in a blank in reality (or in a movie if we’ve suspended disbelief). The brain is a prediction engine, so when it has no reasonable options to choose from, it seems to go haywire, populating the blank with monsters lurking in the dark of our conscience. A production succeeds the moment it creates a suitable blank and forces us to admit that we can’t ignore it. But I’d say there’s one fear that’s even deeper, even more unsettling: the incomprehensible. It’s the blank that’s clearly been filled yet which evades complete comprehension. Hans Giger’s art captured this sensation wonderfully well as did H.P. Lovecraft’s lore of the Old Ones – but I experienced it most profoundly in the latter’s The Outsider. It’s the thing that you know and that you struggle to know, both at once – and it’s the sensation to which this episode builds up. Excellent stuff. Also: “Embrace the suck” – a line worth remembering.

    Episode 9: [Jibaro] The internet suggests this was the most popular episode. I’m going to stick with In Vaulted Halls Entombed but only because of my fascination with the unknown. Without that, the fever-dream that is Jibaro would easily cut ahead. It brings together a “baroque” combination of “film and animation” and “dance and mythology” (source), a melancholy soundtrack and a story so replete with metaphors that it’s hard to come to one conclusion about it – and that apparently was also its maker’s intention. It’s shot through with greed but it doesn’t seem reasonable to stop there, with that conclusion, that Jibaro is a parable about one of the seven deadly sins. Instead, the tale seems to me to be about how perfectly acceptable being our worst selves can look (as its maker told Awn), how familiar the Knight and the Siren seem to us. We’ve seen them before, at least parts of them, in the people around us, in the people we read about.

    Recommended reading: Alberto Mielgo’s Sci-Fi Short ‘Jibaro’ Is Not a Critique of Colonialism. Excerpt:

    I don’t want to fall into the same trap as the readings I am criticizing and try and ‘pin down’ Jibaro into a single parable or message. Mielgo is not deliberately making a comment on Cervantes here. Rather, his short film, like his characters, is meant to ‘dance’. It spins on and through and around a variety of tropes, the central one being that of toxic relationships and the way these are both frightening and alluring. But the textual bed in which the deaf knight and the siren sleep together is less that of Spanish colonialism than that of Spanish mythology. The correspondences to the latter are much more precise.

    The second impression: Many of the episodes seem to bear the echoes of episodes in LDR’s still-the-best season 1. Exit Strategies is obviously related to Three Robots, but the other episodes are connected more subtly. Night of the Mini Dead brings to mind Ice Age; Mason’s Rats brings to mind The Dump; In Vaulted Halls Entombed brings to mind both Shape-Shifters and The Secret War; The Very Pulse of the Machine brings to mind Fish Night; Jibaro brings to mind Good Hunting (and The Witness in style of animation); and Bad Travelling brings to mind Sonnie’s Edge. It’s hard to say if this was intentional (I’m being lazy and not googling), but it’s also hard to explain the raft of similarities. This said, the ultimate effect of LDR 1 was mind-expanding (Beyond the Aquila Rift and Zima Blue remain unmatched); that of LDR 2 was to remind us that LDR can also be bad; and LDR 3 is a contemplation of the costs of being good.

    Featured image: A scene from Jibaro. Source: Netflix.