Month: September 2022

  • Tales of two peppers

    The 7 Pot Barrackpore starts at the same Scoville Heat Unit (SHU) as the regular ones, but its highest level has frequently approached 1.3M SHU, which can easily set your face on fire. How the name Barrackpore came about, though, is quite intriguing.

    This is the third tweet in a medium-sized thread on Twitter by @Paperclip_In whose length belies the scope of the story it narrates. The thread takes off from the pepper in question, the 7 Pot Barrackpore from Trinidad, winds its way through the famous rebellion of 1857, the fortunes of indentured labourers shipped from India to the Caribbean in the late 19th and early 20th centuries, and ends – as it began – with the town of Barrackpore. All of this in a name, the name shared by two towns 15,000 km apart.

    Reading the thread was fascinating for two reasons. One, of course, was @Paperclip_In’s narration itself; the other was that what little I already knew about hot peppers and Scoville heat units is also tethered to another fervent piece of history.

    Last year’s medicine Nobel Prize was awarded to two scientists for their work on the receptors in the body that were involved in our ability to perceive heat and cold. The work of one of the laureates, David Julius, was based on studying the effects of a compound called capsaicin on the body. Capsaicin is technically 8-methyl-N-vanillyl-6-nonenamide.

    As I wrote at the time, “Capsaicin doesn’t actually burn or damage tissue. Its contact with [a receptor expressed by central nervous system cells] simply prompts the brain to react as if the tissue is being burnt.” This feature of the compound obviously stood out to those interested in new forms of inflicting pain on others. Former MP Lagadapati Rajagopal caused a commotion in Parliament in 2014 when he released an emulsified and pressurised capsaicin resin from a canister, to oppose the bifurcation of Andhra Pradesh.

    The SHU of any spicy pepper denotes its capsaicin content. Capsaicin itself has an SHU of 16 million – the upper limit. The SHU of 7 Pot Barrackpore is around 1.3 million. In 2007, the chilli with the world’s highest SHU was ‘Naga chilli’ from Northeast India. In 2010, Associated Press reported that DRDO scientists were developing grenades packed with capsaicin extracted from the Naga chilli.

    In 2016, a committee appointed by the Indian government was still considering these ‘chilli grenades’ as substitutes for pellet guns, but eventually decided to load the things with nonivamide. @Paperclip_In is right to describe the 7 Pot Barrackpore, with an SHU of 1.3 million, as being able to “set your face on fire”. Nonivamide has an SHU of more than 9 million.
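
    Since the Scoville scale tops out at pure capsaicin’s 16 million SHU, these numbers can be read as rough fractions of pure capsaicin’s pungency. A quick back-of-the-envelope sketch in Python, using only the figures quoted above (this is arithmetic on the scale, not a chemical assay):

    ```python
    # Back-of-the-envelope comparison using only the SHU figures quoted above.
    CAPSAICIN_SHU = 16_000_000  # pure capsaicin: the top of the Scoville scale

    samples = {
        "7 Pot Barrackpore": 1_300_000,
        "Nonivamide": 9_000_000,
    }

    for name, shu in samples.items():
        # Read each SHU value as a fraction of pure capsaicin's pungency.
        print(f"{name}: {shu:,} SHU ~ {shu / CAPSAICIN_SHU:.0%} of pure capsaicin")
    ```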

  • The Merge

    Earlier this month, a major event happened in the cryptocurrency space called the ‘Merge’. In this event, the ethereum blockchain changed the way it achieves consensus – from using a proof-of-work mechanism to a proof-of-stake mechanism.

    A blockchain is a spreadsheet that maintains a record of all the transactions between the users of that blockchain. Every user basically possesses an up-to-date copy of that spreadsheet and helps validate others’ transactions on it. The rewards that the blockchain produces for desirable user behaviour are called its tokens. For example, tokens on the ethereum blockchain are called ether and those on the bitcoin blockchain are called… well, bitcoins. These tokens are also what users transact with on the blockchain.
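
    To make the ‘shared spreadsheet’ idea a little more concrete, here is a minimal, hypothetical sketch in Python – not ethereum’s actual data structures – of a ledger in which every block records some transactions plus the hash of the previous block, so anyone holding a copy can re-verify the whole chain and spot tampering:

    ```python
    import hashlib
    import json

    def block_hash(prev_hash: str, transactions: list) -> str:
        # Hash the block's contents, including the previous block's hash.
        payload = json.dumps({"prev_hash": prev_hash, "transactions": transactions}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def add_block(chain: list, transactions: list) -> None:
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        chain.append({"prev_hash": prev_hash, "transactions": transactions,
                      "hash": block_hash(prev_hash, transactions)})

    def verify(chain: list) -> bool:
        # Every user holding a copy can recompute all the hashes and check the links.
        for i, block in enumerate(chain):
            if block["hash"] != block_hash(block["prev_hash"], block["transactions"]):
                return False
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    ledger: list = []
    add_block(ledger, [{"from": "Selvi", "to": "Gokul", "amount": 100}])
    add_block(ledger, [{"from": "Gokul", "to": "Selvi", "amount": 40}])
    print(verify(ledger))                          # True
    ledger[0]["transactions"][0]["amount"] = 200   # tamper with an old record...
    print(verify(ledger))                          # ...and every honest copy flags it: False
    ```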

    (See here for a more thorough yet accessible intro to blockchains and NFTs.)

    As a result of the ‘Merge’, according to the foundation that manages the cryptocurrency, the blockchain’s energy consumption dropped by 99.95%.

    The blockchain on which users transact ethereum tokens, together with the network of computers that maintains it, is called the ethereum mainnet. During the ‘Merge’, the mainnet’s existing proof-of-work consensus mechanism was replaced by that of a parallel proof-of-stake chain called the Beacon Chain, which had been running alongside it.

    Imagine the blockchain to be a bridge that moves traffic across a river. Ahead of the ‘Merge’, operators erected a parallel bridge and allowed traffic over it as well. Then, on September 15, 2022, they merged traffic from the first bridge with the traffic on the new one. Once all the vehicles were off the old bridge, it was destroyed.

    Source: ethereum.org

    Each of the vehicles here was an ethereum transaction. During the ‘Merge’, the operators had to ensure that all the vehicles continued to move, none got hijacked and none of them broke down.

    (Sharding – which is expected to roll out in 2023 – is the act of splitting the blockchain up into multiple pieces that different parts of the network use. This way, each part will require fewer resources to use the blockchain, even as the network as a whole continues to use the whole blockchain. A rough sketch of the idea follows.)
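
    Ethereum’s actual sharding design is more elaborate (and has kept changing), but the underlying partitioning idea can be sketched simply; the shard count and account names below are hypothetical, chosen only for illustration:

    ```python
    import hashlib

    NUM_SHARDS = 4  # hypothetical shard count, for illustration only

    def shard_of(account: str) -> int:
        # Deterministically assign an account to a shard so that every node
        # agrees on which part of the network handles its transactions.
        digest = hashlib.sha256(account.encode()).digest()
        return int.from_bytes(digest[:4], "big") % NUM_SHARDS

    for account in ("selvi.eth", "gokul.eth", "validator-42"):
        print(f"{account} -> shard {shard_of(account)}")
    ```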

    Blockchains like those of bitcoin and ethereum need a ‘proof of x’ because they are decentralised: they have no central authority that decides whether a transaction is legitimate. Instead, the validation mechanisms are baked into the processes by which users mine and exchange the coins. Proof-of-work and proof-of-stake are two flavours of one such mechanism. To understand what it does, let’s consider one of the problems it protects a blockchain against: double-spending.

    Say Selvi wants to send 100 rupees to Gokul. Double-spending is the threat of sending the same 100 rupees to Gokul twice, thus converting 100 rupees to 200 rupees. When Selvi uses a bank: she logs into her netbanking account and transfers the funds or she withdraws some cash from the ATM and gives Gokul the notes. Either way, once she’s withdrawn the money from her account, the bank records it and she can’t withdraw the same funds again.

    When she takes the cryptocurrency route: Selvi transfers some ethereum tokens to Gokul over the blockchain. Here, the blockchain requires some way to verify and record the transaction so that it doesn’t recur. If it used proof-of-work, it would require users on the network to share their computing power to solve a complex mathematical problem. The operation produces a numeric result that uniquely identifies the transaction and appends the transaction’s details to the blockchain. A copy of the updated blockchain is shared with all the users so that they are all on the same page. If Selvi tries to spend the same coins again – to transfer them to someone else, say – she won’t be able to: the blockchain ‘knows’ now that Selvi no longer has the funds in her wallet.

    The demand for computing power to acknowledge a transaction and add it to the blockchain constitutes proof-of-work: when you supply that power, which is used to do work, you have provided that proof. In exchange, the blockchain rewards you with a coin. (If many people provided computing power, they split the coins released by the blockchain.)
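
    Here is a minimal proof-of-work sketch in Python, in the generic hashcash style that bitcoin popularised (pre-Merge ethereum’s scheme, Ethash, differed in its details): keep trying nonces until the hash of the transaction data plus the nonce falls below a difficulty target. Finding a valid nonce takes many guesses; checking one takes a single hash.

    ```python
    import hashlib

    DIFFICULTY_BITS = 16  # illustrative difficulty: ~65,000 guesses on average
    TARGET = 1 << (256 - DIFFICULTY_BITS)

    def proof_of_work(data: str) -> tuple[int, str]:
        # Try nonces until sha256(data + nonce) falls below the target.
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
            if int(digest, 16) < TARGET:
                return nonce, digest
            nonce += 1

    data = "Selvi pays Gokul 100 rupees"
    nonce, digest = proof_of_work(data)
    print(f"nonce={nonce}, hash={digest}")

    # Verification is cheap: one hash, compared against the same target.
    check = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    assert int(check, 16) < TARGET
    ```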

    The reason the Ethereum folks claim their post-Merge blockchain consumes 99.95% less energy is that it doesn’t use proof-of-work to verify transactions. Instead, it uses proof-of-stake: users stake their ethereum tokens for each transaction. Put another way, proof-of-work requires users to prove they have computing power to lose; proof-of-stake requires users to prove they have coins – or wealth – to lose.

    Before each transaction, a validator places some coins as collateral in a ‘smart contract’. This is essentially an algorithm that will not return the coins to the validator if they don’t perform their task properly. Right now, aspiring validators need to deposit 32 ethereum tokens to qualify and join a queue. The network limits the rate at which new validators are added to the network.

    Once a validator is admitted, they are allotted blocks (transactions to be verified) at regular intervals. If a block checks out, the validator casts a vote in favour of that block that is transmitted across the network. Once every 12 seconds, the network randomly chooses a group of validators whose votes are used to make a final determination on whether a block is valid.
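
    A highly simplified sketch of the ‘more stake, more say’ idea. On the real network each validator deposits exactly 32 ether, bigger players simply run more validators, and the randomness comes from the protocol itself – so treat this as an illustration of stake-weighted selection in general, not of ethereum’s actual algorithm:

    ```python
    import random

    # Hypothetical validators and the ether they have staked.
    stakes = {"validator-A": 32, "validator-B": 32, "validator-C": 320}

    def pick_committee(stakes: dict, size: int, seed: int) -> list:
        # Stake-weighted random choice: validator-C, with 10x the stake,
        # is ten times likelier to be picked than A or B in any given slot.
        rng = random.Random(seed)
        names = list(stakes)
        return rng.choices(names, weights=[stakes[n] for n in names], k=size)

    # One committee per 12-second slot, say.
    for slot in range(3):
        print(f"slot {slot}: {pick_committee(stakes, size=2, seed=slot)}")
    ```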

    Proof-of-stake is less energy-intensive than proof-of-work but it keeps the ethereum blockchain tethered to the same requirement: the proof of preexisting wealth. In the new paradigm, the blockchain releases new coins as reward when transactions are verified, and those who have staked more stand to gain more – i.e. the rich get richer.

    Note that when the blockchain used the proof-of-work consensus mechanism, a big problem was that a very small number of users provided a very large fraction of the computing power (contrary to cryptocurrencies’ promise to decentralise finance). Proof-of-stake is expected to increase this centralisation of validatory power because the blockchain now favours validators who have more to stake, and rewards them more. Over time, as richer validators stake more, the cost of validating a transaction will also go up – and the ‘poorer’ validators will be forced to drop out.

    Second, the proof-of-stake system requires problematic transactions to be flagged while the validators still have their ethereum staked. Once they have withdrawn their stakes, they can’t be penalised. This in turn revives the risk of the double-spending problem, as set out in some detail here.

    The energy consumption of cryptocurrency transactions was and remains a major bit of criticism against this newfangled technological solution to a problem that the world doesn’t have – and that’s the point that sticks with me. The ‘Merge’ was laudable to the extent that it reduced the consumption of energy and mining hardware in a time when the wealthy desperately need to reduce all forms of consumption, but while the ‘cons’ column is one row shorter, the ‘pros’ column remains just as empty.

  • Ramanujan, Nash, Turing, Mirzakhani

    From a short review of a new documentary about the life and work of the Iranian mathematician Maryam Mirzakhani, September 9, 2022:

    While there are other movies about real-life mathematicians such as Nash, Ramanujan and Turing, the special abilities of these individuals are often depicted as making them eccentric in their private lives. In contrast, Mirzakhani lived a “normal” life, was married with a child and simply loved math. I want people to know that mathematicians like her also exist.

    The documentary powerfully conveys the attitude that there’s nothing women can’t do simply because they’re women, which makes it well worth watching from the perspective of diversity and gender.

    This is well and good. I haven’t yet watched the documentary but will at the first opportunity. This said, the review raises a curious point about the impression that films, documentaries, etc. have created about John Nash, Srinivasa Ramanujan and Alan Turing. The reviewer, Prof. Yukari Ito of the Kavli Institute in Japan, has written that they have given us the impression that being a great mathematician requires one to be eccentric, or that contributing to mathematics at the highest level demands the sort of transcendental brilliance that a human mind may never fully comprehend. Ramanujan exemplified this sort of work by setting forth a very large number of results in number theory without specifying the steps between the first principles and the final claims. When asked, he said a goddess was working through him. It may well be that Ramanujan’s biggest contribution to the idea of mathematics was his incomprehensible mind. However, the stories of Nash and Turing are significantly different. Unlike Ramanujan, they both had formal training in mathematics that allowed them to think more clearly about their respective domains, and neither man attributed his work to any sort of divine intervention. They were eccentric men, sure, but unlike Prof. Ito, I prefer to think that they were distinguished by an exceptionalism that also attends to Maryam Mirzakhani.

    Specifically, Turing and Nash led normal lives too, in that they had families, they had homes and they had to work with the same quotidian constraints as many others of their generation (presumably minus misogyny, racism, etc. because they were white men). Sure, they were oddities in their respective social milieus, but I don’t believe that lends itself to the impression that mathematics and eccentricity are linked, at least in the cases of Nash and Turing. Nash was ill (he later developed schizophrenia) and Turing was gay well before the UK accepted homosexuality. It applies perfectly in Ramanujan’s case, of course. But by lumping the three men together, I fear that Prof. Ito’s review misidentifies the real nature of Mirzakhani’s achievement: not that she led a ‘normal’ life but that she was a woman, and a woman from Iran. This is also what I meant by the exceptionalism that attends to Mirzakhani. Consider who the subjects of our films and documentaries are. Ramanujan, Nash and Turing had films made about them because they were eccentric – and Mirzakhani doesn’t escape this sampling bias so much as confirm it. There is a documentary about her because she hailed from a country where women don’t have many of the rights that their counterparts in most other parts of the world enjoy, and because she was the first woman to be awarded the Fields Medal. There are several male and female mathematicians, and in fact mathematicians of other genders, who are perfectly brilliant and lead perfectly normal lives (in Prof. Ito’s definition). It’s just that their experiences may not make for a good movie. In fact, it may well be that what most people consider ‘normal’ hasn’t ever been the subject of a movie about a good mathematician.

    The structural issues that Prof. Ito overlooks also include a significant part of what allowed the men she mentioned to be successful – the division of labour in society and within their homes, where as men they were free to focus on their work without having to help their partners run the house or attend to tedious administrative work at their places of employment. This is as much an indictment of patriarchy as of that attitude among prestigious institutes that continues to this day – that brilliant men’s ‘eccentricities’ should be excused so that they can keep bringing in the grants, the citations and the awards. Mirzakhani was not normal. I’m not familiar with her story (I really need to watch the documentary) but I’m certain that she faced more barriers on her way to achieving the level of success that she did. That in turn elevates her achievements in a sad way, and might also inspire others to think that mathematics stands to benefit from more than just mathematical contributions. After all, aren’t we paying attention to Mirzakhani herself because of the Fields Medal committee’s disgraceful dismissal of women’s contributions for eight decades?

  • A physics story of infinities, goats and colours

    When I was writing in August about physicist Sheldon Glashow’s objection to Abdus Salam being awarded a share of the 1979 physics Nobel Prize, I learnt that it was because Salam had derived a theory that Glashow had derived as well, taking a different route, but ultimately the final product was non-renormalisable. A year or so later, Steven Weinberg derived the same theory but this time also ensured that it was renormalisable. Glashow said Salam shouldn’t have won the prize because Salam hadn’t brought anything new to the table, whereas Glashow had derived the initial theory and Weinberg had made it renormalisable.

    His objections aside, the episode brought to my mind the work of Kenneth Wilson, who made important contributions to the renormalisation toolkit. Specifically, using these tools, physicists ensure that the equations that they’re using to model reality don’t get out of hand and predict impossible values. An equation might be useful to solve problems in 99 scenarios but in one, it might predict an infinity (i.e. the value of a physical variable grows without bound), rendering the equation useless. In such cases, physicists use renormalisation techniques to ensure the equation works in the 100th scenario as well, without predicting infinities. (This is a simplistic description that I will finesse throughout this post.)

    In 2013, when Kenneth Wilson died, I wrote about the “Indian idea of infiniteness” – including how scholars in ancient India had contemplated very large numbers and their origins, only for this knowledge to have all but disappeared from the public imagination today because of the country’s failure to preserve it. In both instances, I never quite fully understood what renormalisation really entailed. The following post is an attempt to fix this gap.

    You know electrons. Electrons have mass – but not all of it comes from the same place. Some of it is the mass of the particle itself, sometimes called the shell mass. The electron also has an electric charge and casts a small electromagnetic field around itself. This field has some energy. According to the mass-energy equivalence (E = mc²), this energy should correspond to some mass. This is called the electron’s electromagnetic mass.

    Now, there is an equation to calculate a particle’s electromagnetic mass – and this equation shows that the mass is inversely proportional to the particle’s radius. That is, the smaller the particle, the greater its electromagnetic mass. This is why the electromagnetic mass makes up a smaller fraction of the mass of a proton, which is larger than an electron.

    So far so good – but quickly a problem arises. As the particle becomes smaller, according to the equation, its electromagnetic mass will increase. In technical terms, as the particle radius approaches zero, its mass will approach infinity. If its mass approaches infinity, the particle will be harder to move from rest, or accelerate, because a very large and increasing amount of energy will be required to do so. So the equation predicts that smaller charged particles, like quarks, should be nearly impossible to move around. Yet this is not what we see in experiments, where these particles do move around.
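
    To see the blow-up numerically: one simple estimate of the electromagnetic mass takes the electrostatic energy of a charged shell of radius r, e²/8πε₀r, and divides it by c². The exact numerical prefactor depends on the model (this is the classic ‘4/3 problem’ of the classical electron), but the 1/r dependence – and hence the divergence as the radius shrinks – does not:

    ```python
    from scipy.constants import c, e, electron_mass, epsilon_0, pi

    def electromagnetic_mass(radius_m: float) -> float:
        # Electrostatic energy of a charged shell of radius r, divided by c^2.
        # The prefactor is model-dependent; the 1/r behaviour is not.
        return e**2 / (8 * pi * epsilon_0 * radius_m) / c**2

    for radius in (1e-10, 1e-15, 1e-18, 1e-21):
        ratio = electromagnetic_mass(radius) / electron_mass
        print(f"r = {radius:.0e} m -> electromagnetic mass ~ {ratio:.1e} electron masses")
    ```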

    In the first decade of the 20th century (when the equation existed but quarks had not yet been discovered), Max Abraham and Hendrik Lorentz resolved this problem by assuming that the shell mass of the particle is negative. It was the earliest (recorded) instance of such a tweak – made so that the equations we use to model reality don’t lose touch with that reality – and was called renormalisation. Assuming the shell mass is negative is silly, of course, but it doesn’t affect the final result in a way that breaks the theory. To renormalise, in this context, is to assume that our mathematical knowledge of the event being modelled is not complete enough, or that making it complete would render the majority of other problems intractable.

    There is another route physicists take to make sure equations and reality match, called regularisation. This is arguably more intuitive. Here, the physicist modifies the equation to include a ‘cutoff factor’ that represents what the physicist assumes is their incomplete knowledge of the phenomenon to which the equation is being applied. By applying a modified equation in this way, the physicist argues that some ‘new physics’ will be discovered in future that will complete the theory and the equation to perfectly account for the mass.
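
    A toy example of what such a cutoff factor looks like. The integral of 1/k diverges logarithmically if its upper limit is pushed to infinity; capping it at a cutoff Λ keeps it finite, and a ‘physical’ answer built out of differences of such quantities doesn’t depend on Λ at all. This isn’t any particular QED calculation, just the shape of the manoeuvre:

    ```python
    import math

    def toy_integral(cutoff: float, lower: float) -> float:
        # Integral of dk/k from `lower` to `cutoff` = ln(cutoff/lower):
        # it grows without bound as the cutoff is pushed towards infinity.
        return math.log(cutoff / lower)

    for cutoff in (1e3, 1e6, 1e12, 1e24):
        print(f"cutoff = {cutoff:.0e} -> {toy_integral(cutoff, 1.0):.1f}")

    # A difference of two such quantities no longer depends on the cutoff at all;
    # it drops out of the final answer, much like the extra goat in the tale below.
    print(toy_integral(1e12, 2.0) - toy_integral(1e12, 1.0))   # ln(1/2), whatever the cutoff
    print(toy_integral(1e24, 2.0) - toy_integral(1e24, 1.0))   # same number again
    ```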

    (I personally prefer regularisation because it seems more modest, but this is an aesthetic choice that has nothing to do with the physics itself and is thus moot.)

    It is sometimes the case that once a problem is solved by regularisation, the cutoff factor disappears from the final answer – so effectively it helped with solving the problem in a way that its presence or absence doesn’t affect the answer.

    This brings to mind the famous folk tale of the goat negotiation problem, doesn’t it? A fellow in a village dies and bequeaths his 17 goats to three sons thus: the eldest gets half, the middle gets a third and the youngest gets one-ninth. Obviously the sons get into a fight: the eldest claims nine instead of 8.5 goats, the middle claims six instead of 5.67 and the youngest claims two instead of 1.89. But then a wise old woman turns up and figures it out. She adds one of her own goats to the father’s 17 to make up a total of 18. Now, the eldest son gets nine goats, the middle son gets six goats and the youngest son gets two goats. Problem solved? When the sons tally up the goats they received, they realise that the total is still 17. The old woman’s goat is left over, so she takes it back and goes on her way. The one additional goat was the cutoff factor here: you add it to the problem, solve it, get a solution and move on.
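
    The trick works because the father’s fractions never added up to a whole herd in the first place – a quick check:

    ```python
    from fractions import Fraction

    shares = Fraction(1, 2) + Fraction(1, 3) + Fraction(1, 9)
    print(shares)   # 17/18: the bequest only ever covered 17/18 of the herd

    herd = 18       # the 17 goats plus the old woman's one
    print(herd * Fraction(1, 2), herd * Fraction(1, 3), herd * Fraction(1, 9))
    # 9, 6 and 2 goats, which add up to 17 and leave her goat untouched
    ```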

    The example of the electron was suitable but also convenient: the need to renormalise particle masses originally arose in the context of classical electrodynamics – the first theory developed to study the behaviour of charged particles. Theories that physicists developed later, in each case to account for some phenomena that other theories couldn’t, also required renormalisation in different contexts, but for the same purpose: to keep the equations from predicting infinities. Infinity is a strange number that compromises our ability to make sense of the natural universe because it spreads itself like an omnipresent screen, obstructing our view of the things beyond. To get to them, you must scale an unscaleable barrier.

    While the purpose of renormalisation has stayed the same, it took on new forms in different contexts. For example, quantum electrodynamics (QED) studies the behaviour of charged particles using the rules of quantum physics – as opposed to classical electrodynamics, which is an extension of Newtonian physics. In QED, the charge of an electron actually comes out to be infinite. This is because QED doesn’t have a way to explain why the force exerted by a charged particle decreases as you move away. But in reality electrons and protons have finite charges. How do we fix the discrepancy?

    The path of renormalisation here is as follows: Physicists assume that any empty space is not really empty. There may be no matter there, sure, but at the microscopic scale, the vacuum is said to be teeming with virtual particles. These are pairs of particles that pop in and out of existence over very short time scales. The energy that produces them, and the energy that they release when they annihilate each other and vanish, is what physicists assume to be the energy inherent to space itself.

    Now, say an electron-positron pair, called ‘e’ and ‘p’, pops up near an independent electron, ‘E’. The positron is the antiparticle of the electron and has a positive charge, so it will move closer to E. As a result, the electromagnetic force exerted by E’s electric charge becomes screened at a certain distance away, and the reduced force implies a lower effective charge. As the virtual particle pairs constantly flicker around the electron, QED says that we can observe only the effects of its screened charge.

    By the 1960s, physicists had found several fundamental particles and were trying to group them in a way that made sense – i.e. that said something about why these were the fundamental particles and not others, and whether an incomplete pattern might suggest the presence of particles still to be discovered. Subsequently, in 1964, two physicists working independently – George Zweig and Murray Gell-Mann – proposed that protons and neutrons were not fundamental particles but were made up of smaller particles called quarks and gluons. They also said that there were three kinds of quarks and that the quarks could bind together using the gluons (thus the name). Each of these particles had an electric charge and a spin, just like electrons.

    Within a year, Oscar Greenberg proposed that quarks would also have an additional ‘colour charge’ to explain why they don’t violate Pauli’s exclusion principle. (The term ‘colour’ has nothing to do with colours; it is just the label that unimaginative physicists selected when they were looking for one.) Around the same time, James Bjorken and Sheldon Glashow also proposed that there would have to be a fourth kind of quark, because then the new quark-gluon model could explain three more problems that were unsolved at the time. In 1968, physicists found the first experimental evidence for quarks, proving that Zweig, Gell-Mann, Glashow, Bjorken, Greenberg, etc. were right. But as usual, there was a problem.

    Quantum chromodynamics (QCD) is the study of quarks and gluons. In QED, if an electron and a positron interact at higher energies, their coupling will be stronger. But physicists who designed experiments in which they could observe the presence of quarks found the opposite was true: at higher energies, the quarks in a bound state behaved more and more like individual particles, but at lower energies, the effects of the individual quarks didn’t show, only those of the bound state. Seen another way, if you move an electron and a positron apart, the force between them gradually drops off to zero. But if you move two quarks apart, the force between them will increase for a short distance before falling off to zero. It seemed that QCD would defy QED renormalisation.

    A breakthrough came in 1973. If a quark ‘Q’ is surrounded by virtual quark-antiquark pairs ‘q’ and ‘q*’, then q* would move closer to Q and screen Q’s colour charge. However, the gluons have the dubious distinction of being their own antiparticles. So some of these virtual pairs are also gluon-gluon pairs. And gluons also carry colour charge. When the two quarks are moved apart, the space in between is occupied by gluon-gluon pairs that bring in more and more colour charge, leading to the counterintuitive effect.

    However, QCD has had need of renormalisation in other areas, such as with the quark self-energy. Recall the electron and its electromagnetic mass in classical electrodynamics? This mass was the product of the electromagnetic energy field that the electron cast around itself. This energy is called self-energy. Similarly, quarks bear an electric charge as well as a colour charge and cast a chromo-electric field around themselves. The resulting self-energy, like in the classical electron example, threatens to reach an extremely high value – at odds with reality, where quarks have a relatively lower, certainly finite, self-energy.

    However, the simple addition of virtual particles wouldn’t solve the problem either, because of the counterintuitive effects of the colour charge and the presence of gluons. So physicists are forced to adopt a more convoluted path in which they use both renormalisation and regularisation, as well as ensure that the latter turns out like the goats – where a new factor introduced into the equations doesn’t remain in the ultimate solution. The mathematics of QCD is a lot more complicated than that of QED (it is notoriously hard even for specially trained physicists), so the renormalisation and regularisation process is also correspondingly inaccessible to non-physicists. More than anything, it is steeped in mathematical techniques.

    All this said, renormalisation is obviously quite inelegant. The famous British physicist Paul A.M. Dirac, who pioneered its use in particle physics, called it “ugly”. This attitude changed largely thanks to the work of Kenneth Wilson. (By the way, his PhD supervisor was Gell-Mann.)

    Quarks and gluons together make up protons and neutrons. Protons, neutrons and electrons, plus the forces between them, make up atoms. Atoms make up molecules, molecules make up compounds and many compounds together, in various quantities, make up the objects we see all around us.

    This description encompasses three broad scales: the microscopic, the mesoscopic and the macroscopic. Wilson developed a theory to act like a bridge – between the forces that quarks experience at the microscopic scale and the forces that cause larger objects to undergo phase transitions (i.e. go from solid to liquid or liquid to vapour, etc.). When a quark enters or leaves a bound state or if it is acted on by other particles, its energy changes, which is also what happens in phase transitions: objects gain or lose energy, and reorganise themselves (liquid –> vapour) to hold or shed that energy.

    By establishing this relationship, Wilson could bring insights gleaned at one scale to bear on difficult problems at a different scale, and thus make corrections that were more streamlined and more elegant. This was quite clever because renormalisation is, at bottom, the act of substituting what we are modelling with what we are able to observe – and Wilson improved on it by dropping the direct substitution in favour of something more mathematically robust. After this point in history, physicists adopted renormalisation as a tool more widely across several branches of physics. As physicist Leo Kadanoff wrote in his obituary for Wilson in Nature, “It could … be said that Wilson has provided scientists with the single most relevant tool for understanding the basis of physics.”

    This said, however, the importance of renormalisation – or anything like it that compensates for the shortcomings of observation-based theories – was known earlier as well, so much so that physicists considered a theory that couldn’t be renormalised to be inferior to one that could be. This was responsible for at least a part of Sheldon Glashow’s objection to Abdus Salam winning a share of the physics Nobel Prize.

    Sources:

    1. Introduction to QCD, Michelangelo L. Mangano
    2. Lectures on QED and QCD, Andrey Grozin
    3. Lecture notes – Particle Physics II, Michiel Botje
    4. Lecture 5: QED
    5. Introduction to QCD, P.Z. Skands
    6. Renormalization: Dodging Infinities, John G. Cramer

  • How do you make a mode-locked laser?

    Given

    Mode-locked lasers are lasers that are capable of producing intense ultra-short pulses of light at a very high rate.

    Concepts

    Set 1

    Take a bunch of atoms, excite them and place them in a box covered with mirrors in all directions. Send in one photon, a particle of light, to intercept one of these atoms. Unable to get more excited, the atom will get de-excited by emitting the interceptor photon and another photon identical to it. Because the box is covered with mirrors, these two photons bounce off a wall and intercept two more atoms. The same thing happens, over and over. A hole in the box allows the ‘extra’ photons to escape to the outside. This light is what you would see as laser light. Of course it’s a lot more complicated than that but if you had to pare it down to the barest essentials (and simplify it to a ridiculous degree), that’s what you’d get. The excited atoms that are getting de-excited together make up the laser’s gain medium. The mirror-lined box that contains the atoms, and has a specific design and dimensions, is called the optical cavity.

    Set 2

    Remember wave-particle duality? And remember Young’s double-slit experiment? The photons bouncing back and forth inside the optical cavity are also waves bouncing back and forth. When two waves meet, they interfere – either constructively or destructively. When they interfere destructively, they cancel each other out. When they interfere constructively, they produce a larger wave.

    A view of a simulation of a double-slit experiment with electrons (particles). The destructively interfered waves are ‘visible’ as no-waves whereas the constructively interfered waves are visible as taller waves. Credit: Alexandre Gondran/Wikimedia Commons, CC BY-SA 4.0

    As thousands of waves interfere with each other, only the constructively interfered waves survive inside the optical cavity. These waves are called modes. The frequencies of the modes are together called the laser’s gain bandwidth. Physicists can design lasers with predictable modes and gain bandwidth using simple formulae. They just need to tweak the optical cavity’s design and the composition of the gain medium. For example, a laser with a helium-neon gain medium has a gain bandwidth of 1.5 GHz. A laser with a titanium-doped sapphire gain medium has a gain bandwidth of 128,000 GHz.
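
    To get a feel for the numbers: in a simple linear (two-mirror) cavity of length L, the surviving modes are spaced c/2L apart in frequency, so the gain bandwidth roughly sets how many of them the laser can sustain. A small sketch with an assumed 30-cm cavity (the cavity length is illustrative, not taken from any particular laser):

    ```python
    from scipy.constants import c

    L = 0.30  # assumed cavity length in metres, for illustration only

    mode_spacing_hz = c / (2 * L)   # longitudinal mode spacing of a linear cavity
    print(f"Mode spacing: {mode_spacing_hz / 1e6:.0f} MHz")

    for name, bandwidth_hz in (("Helium-neon", 1.5e9), ("Ti:sapphire", 128e12)):
        n_modes = int(bandwidth_hz / mode_spacing_hz)
        print(f"{name}: roughly {n_modes:,} modes fit inside the gain bandwidth")
    ```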

    Set 3

    Say there are two modes in a laser’s gain medium. Say they’re out of phase. Remember the sine wave? It looks like this: ∿. A wave’s phase denotes the amount of the wave-shape it has completed. The modes are the waves that survive in the laser’s optical cavity. If there are only two modes and they’re out of phase, the laser’s light output is going to be sputtering – very on-and-off. If there are thousands of modes, the output is going to be a lot better: even if they are all out of phase, their sheer number is going to keep the output intensity largely uniform.

    Two sinusoidal waves offset from each other by a phase shift θ. When θ = 0º, the waves will be in phase. Credit: Peppergrower/Wikimedia Commons, CC BY-SA 3.0

    But there’s another scenario in which there are many modes and the modes are all in phase. In this optical cavity, the modes would all constructively interfere with each other and produce a highly amplified wave at periodic intervals. This big wave would appear as a short-duration but intense pulse of light – and the laser producing it would be called a mode-locked laser.
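
    This is easy to check numerically: add up many modes with equally spaced frequencies. With random phases, the total intensity is a noisy, roughly uniform wash; with all the phases locked, the same modes pile up into sharp, periodic spikes whose peak intensity scales with the number of modes. A minimal numpy sketch (the mode count and spacing are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_modes = 50                    # arbitrary number of modes
    spacing = 1.0                   # frequency spacing between modes (arbitrary units)
    t = np.linspace(0, 4, 4000)     # four repetition periods

    def intensity(phases: np.ndarray) -> np.ndarray:
        # Superpose n_modes waves at frequencies k*spacing with the given phases.
        field = sum(np.exp(1j * (2 * np.pi * k * spacing * t + phases[k]))
                    for k in range(n_modes))
        return np.abs(field) ** 2

    noisy = intensity(rng.uniform(0, 2 * np.pi, n_modes))   # random phases
    locked = intensity(np.zeros(n_modes))                   # all phases locked

    print("random phases: peak/average intensity ~", round(noisy.max() / noisy.mean(), 1))
    print("locked phases: peak/average intensity ~", round(locked.max() / locked.mean(), 1))
    # With locked phases the peak is ~n_modes times the average: short, intense pulses.
    ```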

    Like in the previous instance, there are simple formulae to calculate how often a pulse is produced, depending on the optical cavity design and the gain medium’s properties. These formulae also show that the wider the modes’ range of frequencies – i.e. the gain bandwidth – the shorter the duration of the light pulse will be. For example, the helium-neon laser has a lower gain bandwidth, so its lowest pulse duration is around 300 picoseconds. The titanium-doped sapphire laser has a higher gain bandwidth, so its lowest pulse duration is 3.4 femtoseconds. In the former duration, light would have travelled around 9 cm; in the latter, it would have travelled only 1 µm.
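
    Those pulse durations follow from the inverse relationship between bandwidth and pulse width. For a transform-limited Gaussian pulse, Δt ≈ 0.44/Δν – the 0.44 is the Gaussian time-bandwidth product, which I’m assuming is roughly the relation behind the quoted figures:

    ```python
    from scipy.constants import c

    TBP = 0.44  # time-bandwidth product of a Gaussian pulse (assumed)

    for name, bandwidth_hz in (("Helium-neon", 1.5e9), ("Ti:sapphire", 128e12)):
        pulse_s = TBP / bandwidth_hz      # shortest supportable pulse duration
        distance_m = c * pulse_s          # how far light travels in that time
        print(f"{name}: pulse ~ {pulse_s:.2e} s, light travels ~ {distance_m:.2e} m")
    ```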

    Brief interlude

    • An optical cavity of the sort described above is called a Fabry-Pérot cavity. The LIGO detector used to record and study gravitational waves uses a pair of Fabry-Pérot cavities to increase the distance each beam of laser light travels inside the structure, raising the facility’s sensitivity to the level required to detect gravitational waves.
    • Aside from the concepts described above, ensuring a mode-locked laser works as intended requires physicists to adjust many other parts of the device. For example, they need to control the cavity’s dispersion (if waves of different frequencies propagate differently), the laser’s linewidth (the range of frequencies in the output), the shape of the pulse, and the physical attributes of the optical cavity and the gain medium (their temperature, e.g.).

    Method

    How do you ‘lock’ the modes together? The two most common ways are active and passive locking. Active locking is achieved by placing a material or device that exhibits the electro-optic effect inside the optical cavity. Such a material’s optical properties change when an electric field is applied. A popular example is the crystal lithium niobate: in the presence of an electric field, its refractive index increases, meaning light takes longer to pass through it. Remember that the farther a light wave propagates, the more its phase evolves. So a wave’s phase can be ‘adjusted’ (very simplistically speaking) by passing it through the crystal and tuning the applied electric field. What actually happens is more complicated, but by repeatedly modulating the light waves inside the cavity in this manner, the phases of all the waves can be synchronised.

    A lithium niobate wafer. Credit: Smithy71, CC0
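
    To put a number on ‘adjusting the phase’: the extra phase a wave picks up after crossing a crystal of length L whose refractive index has changed by Δn is 2πΔnL/λ. The values below are illustrative, not the parameters of any actual lithium niobate modulator:

    ```python
    import math

    wavelength_m = 1.0e-6     # illustrative optical wavelength (1 micrometre)
    crystal_length_m = 0.01   # illustrative crystal length (1 cm)
    delta_n = 1.0e-4          # illustrative field-induced change in refractive index

    # Extra optical phase accumulated across the crystal: 2*pi * delta_n * L / wavelength
    delta_phi = 2 * math.pi * delta_n * crystal_length_m / wavelength_m
    print(f"phase shift ~ {delta_phi:.2f} rad ({delta_phi / math.pi:.2f} multiples of pi)")
    ```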

    Passive locking dispenses with an external modulator (like the applied electric field); instead, it encourages the light waves to get their phases in sync by repeatedly interacting with a passive object inside the cavity. A common example is a semiconductor saturable absorber, which absorbs light of low intensity and transmits light of high intensity. A related technique is Kerr-lens mode-locking, in which low- and high-intensity waves are focused at different locations inside the cavity and the high intensity waves are allowed to exit. Kerr-lens mode-locking is capable of producing extremely intense pulses of laser light.
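
    A common toy model of a saturable absorber is a loss that shrinks as intensity grows, something like q₀/(1 + I/I_sat): weak light sees the full loss while intense spikes pass almost untouched, which is why the pulsed, in-phase configuration wins out. The numbers here are made up for illustration:

    ```python
    def saturable_loss(intensity: float, q0: float = 0.05, i_sat: float = 1.0) -> float:
        # Fractional loss seen by light of a given intensity: the full loss q0
        # for weak light, approaching zero for very intense light.
        return q0 / (1 + intensity / i_sat)

    for intensity in (0.01, 1.0, 100.0):
        print(f"I = {intensity:6.2f} x I_sat -> loss ~ {saturable_loss(intensity):.4f}")
    ```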

    Conclusion

    Thus, we have a mode-locked laser. Such lasers have several applications. Two that are relatively easier to explain are nuclear fusion and eye surgery. While ‘nuclear fusion’ describes a singular outcome, there are many ways to get there. One is to heat electrons and ions to a high temperature and confine them using magnetic fields, encouraging the nuclei to fuse. This is called magnetic confinement. Another way is to hold a small amount of hydrogen in a very small container (technically, a hohlraum) and then compress it further using ultra-short high-intensity laser pulses. This is the inertial confinement method, and it can make use of mode-locked lasers. In refractive (LASIK) eye surgery, doctors use a series of laser pulses, each only a few femtoseconds long, to cut a portion of the cornea.

    Addendum

    If your priority is the laser’s intensity over the pulse duration or the repetition rate, you could use an alternative technique called giant pulse formation (a.k.a. Q-switching). The fundamental principle is simple – sort of like holding your farts in and letting out a big one later. When the laser is first being set up, the gain medium inside the optical cavity is pumped – that is, its atoms are pushed into their excited states. Ordinarily, once the medium is sufficiently excited, the laser would start operating. In the giant pulse formation technique, however, an attenuator is placed inside the cavity: this device prevents photons from being reflected around. As a result, the laser can’t start even when the gain medium is more than ‘dense’ enough with excited atoms for lasing to begin.

    After a point, the pumping is stopped. Some atoms in the medium might spontaneously emit some energy and become de-excited, but by and large, the optical cavity will contain a (relatively) large amount of energy that also remains stable over time – certainly more energy than if the laser had been allowed to start earlier. Once this steady state is reached, the attenuator is quickly switched to allow photons to move around inside the cavity. Because the laser then begins with a gain medium of higher density, its first light output has very high intensity. The ‘Q’ of ‘Q-switching’ refers to the cavity’s quality factor. On the flip side, in giant pulse formation, the gain medium’s density also drops rapidly, and subsequent pulses are not so intense. This compromises the laser’s repetition rate.

  • The strange NYT article on taming minks

    I’m probably waking up late to this, but the New York Times has published yet another article in which it creates a false balance by first focusing on the problematic side of a story for an inordinately long time, without any of the requisite qualifications and arguments, before jumping, in the last few paragraphs, to one or two rebuttals that reveal, for the first time, that there could in fact be serious problems with all that came before.

    The first instance was about a study on the links between one’s genes and facial features. The second is a profile of a man named Joseph Carter who tames minks. The article is headlined ‘How ‘the Most Vicious, Horrible Animal Alive’ Became a YouTube Star’. You’d think minks are “vicious” and “horrible” because they’re aggressive or something like that, but no – you discover the real reason all the way down in paragraph #12:

    “Pretty much everyone I asked, they told me the same thing — ‘They’re the most vicious, horrible animal alive,’” Mr. Carter said. “‘They’re completely untamable, untrainable, and it doesn’t really matter what you do.’”

    So, in 2003, he decided that he would start taming mink. He quickly succeeded.

    Putting such descriptors as “vicious” and “horrible” in single-quotes in the headline doesn’t help if those terms are being used – by unnamed persons, to boot – to mean minks are hard to tame. That just makes them normal wild animals. But the headline’s choice of words (and, subsequently, the refusal of the first 82% of the piece – by number of paragraphs – to engage with the issue) gives the impression that the newspaper is going to ignore that. A similar kind of dangerous ridiculousness emerges further down the piece, with no sense of irony:

    “You can’t control, you can’t change the genetics of an individual,” he said. “But you can, with the environment, slightly change their view of life.”

    Why do we need to change minks’ view of anything? Right after, the article segues to a group of researchers at a veterinary college in London, whose story appears to supply the only redeeming feature of Carter’s games with minks: the idea that in order to conduct their experiments with minks, the team would have to design more challenging tasks with higher rewards than they were handing out. Other than this, there’s very little to explain why Carter does what he does.

    There’s a flicker of an insight when a canal operator says Carter helps them trap the “muskrats, rats, raccoons and beavers” that erode the canal’s banks. There’s another flicker when the article says Carter buys “many of his animals” from fur farms, where the animals are killed before they’re a year old when in fact they could live to three, as they do with Carter. Towards the very end, we learn, Carter also prays for his minks every night.

    So he’s saving them in the sort of way the US saves other countries?

    It’s hard to say, when he’s also setting out to tame these animals – as the article seems to suggest – simply to see if he can succeed. In fact, the article is so poorly composed and structured that it’s hard to say if the story it narrates is a faithful reflection of Carter’s sensibilities or if it’s just lazy writing. We never find out if Carter has ever considered ‘rescuing’ these animals and releasing them into the wild, or if he has considered joining the experts and activists fighting to have the fur farms shut down. We only have the vacuous claim that is Carter’s belief that he’s giving them a “new life”.

    The last 18% of the article also contains a few quotes that I’d call weak for not being sharp enough to poke more holes in Carter’s views, at least as the New York Times seems to relay them. There is one paragraph citing a 2001 study about what makes mink stressed and another about the welfare of Carter’s minks being better than those that are caged in the farms. But the authors appear to have expended no sincere effort to link them together vis-à-vis Carter’s activities.

    In fact, there is a quote by a scientist introduced to rationalise Carter’s views: “It’s like any thoroughbred horse, or performance animal — or birds of prey who go out hunting. If asked, they probably would prefer to hunt.” Wouldn’t you think that if they were asked, and if they could provide an answer we could understand, they would much rather be free of all constraints than be part of Carter’s circus?

    There is also a dubious presumption here that creates a false choice – between being caged in a farm and being tamed by a human: that the minks ought to be grateful because some humans are choosing to stress them less, instead of not stressing them at all. Whether a mink might be killed by predators or have a harder time finding food in the wild, if it is released, is completely irrelevant.

    Then comes the most infuriating sentence of the lot, following the scientist’s quote: “Mr. Carter has his own theories.” Not ideas or beliefs but theories. As if scientists’ theories – tested as they need to be against our existing body of knowledge, with experiments designed to eliminate even inadvertent bias – were semantically equivalent to Carter’s ‘own theories’, founded on his individual need for self-justification and his saviour complex.

    And then the last paragraph:

    “Animals don’t have ethics,” he said. “They have sensation, they can feel pain, they have the ability to learn, but they don’t have ethics. That’s a human thing.”

    I don’t know how to make sense of it, other than with the suspicion that the authors and/or editors grafted these lines to the bottom because they sounded profound.