We don’t have as much data as I would like. Given the data that we have, I am putting this on the table, and it bothers people to even think about that, just like it bothered the Church in the days of Galileo to even think about the possibility that the Earth moves around the sun. Prejudice is based on experience in the past. The problem is that it prevents you from making discoveries. If you put the probability at zero per cent of an object coming into the solar system, you would never find it!
There’s a bit of Paul Feyerabend at work here. Specifically:
A scientist who wishes to maximise the empirical content of the views he holds and who wants to understand them as clearly as he possibly can must therefore introduce other views; that is, he must adopt a pluralistic methodology. He must compare ideas with other ideas rather than with ‘experience’ and he must try to improve rather than discard the views that have failed in the competition. … Knowledge so conceived is not a series of self-consistent theories that converges towards an ideal view; it is not a gradual approach to the truth. It is rather an ever increasing ocean of mutually incompatible alternatives, each single theory, each fairy-tale, each myth that is part of the collection forcing the others into greater articulation and all of them contributing, via this process of competition, to the development of our consciousness.
pp. 13-14, ch. 2, Against Method, Paul Feyerabend (Verso, 2010).
A member of the Indian Space Research Organisation (ISRO) has confirmed that the organisation has made progress on its plan to test a new class of reusable rockets.
B.N. Suresh, an honorary distinguished professor at ISRO, discussed some details of the ADMIRE project at a meeting of the Indian National Science Academy on December 26. And in doing so, he continues a vaunted tradition of announcing mission updates sans any advertisement and in seemingly random speeches made around the country.
The test-rocket, called ADMIRE – it’s not clear what the acronym stands for – is distinct from the Reusable Launch Vehicle programme also in development. According to Times of India, Suresh told an audience of scientists that ISRO has plans for a test site and a test as well. These are expected to be off of its usual launchpad in Sriharikota, although the dates are unknown.
ADMIRE looks like a two-stage launch vehicle. According to an image first seen in early 2018, it is shorter and narrower than the Polar Satellite Launch Vehicle, and its first stage is fitted with landing legs as well as grid fins.
Instead of going straight up and coming straight back down, ADMIRE might have to steer through the air using the grid fins and shed some speed before firing its thrusters retrograde. Apart from technical reasons, this is also necessary because of geography. The Indian landmass is surrounded by other countries. Depending on where ADMIRE begins its descent, it may first have to fly back closer to India before starting to land. Otherwise, it might drop over a different country.
K. Sivan, ISRO’s current chairman, told The Wire in 2016 that the Andaman and Nicobar Islands could be a suitable landing spot if such a manoeuvre is ever executed.
In fact, some observers have noted that ADMIRE resembles the liquid-fuel-powered L40 strap-on booster used on the Geosynchronous Satellite Launch Vehicle (GSLV) Mk II. This in turn triggered speculation about whether multiple ADMIRE-type rockets, working together in a cluster configuration alongside the Mk II’s first stage, could be used to recover it à la the Falcon 9.
These ideas aren’t entirely unfounded. Sivan also said ISRO itself was “very seriously” thinking about it. Suresh also reportedly said at the meeting that the ADMIRE vehicle will be fitted with “a laser altimeter and a NavIC receiver” – both technologies developed and included in the upcoming Chandrayaan 2 mission. But at the same time, ISRO hasn’t clarified whether ongoing Mk II launches are being modified in any way to provide data for the ADMIRE project.
The GSLV Mk II (left) and the ADMIRE launch vehicle. Source: Reddit
It is even possible that the R&D exercises associated with ADMIRE may not ever result in a functional vehicle.
On the other hand, ISRO does plan to deploy the Reusable Launch Vehicle, or RLV, probably by 2030. The RLV is more along the lines of the retired NASA Space Shuttle and embodies a competing paradigm of reusable rockets. While ADMIRE uses the vertical-takeoff and vertical-landing model, the RLV functions like a spaceplane.
A potpourri of announcements in the past has indicated that the RLV will be powered by five semi-cryogenic engines, letting the vehicle carry at least 10,000 kg to low-Earth orbit. When coming back down, the RLV will reportedly use a scramjet engine and land like an airplane on a special runway many kilometres long.
As part of its development, ISRO conducted a technology demonstration (TD) in May 2016, in which a downscaled prototype of the spaceplane was carried 65 km up by a rocket booster, after which it dropped back down into the Bay of Bengal. In August the same year, ISRO also tested the scramjet engine onboard an RH-560 sounding rocket. A second TD is expected to happen later in 2019, in which another RLV prototype will attempt to land like an airplane after descending from a great height.
Evidently, the RLV programme is farther along than the ADMIRE programme.
In the last few years, international competition to reduce launch costs and lift heavier payloads to orbit has stiffened considerably, and space agencies around the world are racing to capture bigger slices of the market.
This is precisely why ISRO has fast-tracked the development of its Small Satellite Launch Vehicle, a dedicated launcher for small satellites and cubesats with a very short turnaround time. In this context, it’s likely that ISRO will build and use the RLV even though it is also working on the competing ADMIRE design. And ADMIRE itself might yield a proper launch vehicle only over the very long term.
On December 18, Manmohan Singh took a jibe at Narendra Modi for not holding any press conferences in his term as PM. To get in on the action, 15 people at The Wire (including myself) pitched 15 questions we’d like to ask Modi if we ever got the chance. The full list is here to view.
My question, eleventh on the list, has been whittled down. Understandably so, since the original text was 182 words long. If I’d asked it during a presser or wherever, I’d have sounded like one of those gasbags we love to hate: the guy hogging the mic who sounds like he may not actually have a question and is really just bragging about how much he knows.
With apologies for that, this is the question I’d like to ask Modi should I get the chance (as of this moment). I’ve presented it in full.
Many scientists and science academies have protested that lawmakers’ words and actions – including your own – are negating India’s efforts to improve scientific temper in society. Your government is increasing spending for ‘conventional’ science and for research on gaumutra at the same time.
Government-funded research on these projects presents neither accessible evidence for claims nor sources of data, and experiments don’t have any protocols to follow. This is especially dangerous in healthcare (ayurveda, BGR 34, homeopathy, etc.). On the other hand, your government constantly wants scientists to deliver more and win Nobel Prizes while also working towards “national priorities” that you refuse to set in stone.
Your ministers say absolute spending on R&D has been the highest in your term but forget that it’s an abysmal fraction (0.7%) of the GDP. All science departments received more money in the latest Union budget – but while the MST got a 6.1% hike, the AYUSH ministry got a 13% hike, and postdocs around the country have protested at least twice for better stipends.
Jamie Farnes, a theoretical physicist at Oxford University, recently had a paper published that claimed the effects of dark matter and dark energy could be explained by replacing them with a fluid-like substance that was created spontaneously, had negative mass and disobeyed the general theory of relativity. As fantastic as these claims are, Farnes’s paper made the problem worse by failing to explain the basis on which he was postulating the existence of this previously unknown substance.
But that wasn’t the worst of it. Oxford University published a press release suggesting that Farnes’s paper had “solved” the problems of dark matter/energy and stood to revolutionise cosmology. It was reprinted by PhysOrg; Farnes himself wrote about his work for The Conversation. Overall, Farnes, Oxford and the science journalists who popularised the paper failed to situate it in the right scientific context: that, more than anything else, it was a flight of fancy whose coattails his university wanted to ride.
The result was disaster. The paper received a lot of attention in the popular science press and among non-professional astronomers, so much so that the incident had to be dehyped by Ethan Siegel, Sabine Hossenfelder and Wired UK. You’d be hard-pressed to find a better countermeasures team.
The paper’s coverage in the international press. Source: Google News
Of course, the science alone wasn’t the problem: the reason Siegel, Hossenfelder and others had to step in was that the science journalists failed to perform their duties. Those who wrote about the paper didn’t check with independent experts about whether Farnes’s work was legit, choosing instead to quote directly from the press release. It’s been acknowledged in the past – though not sufficiently – that university press officers who draft these releases need to buck up; more importantly, universities need to have better policies about what roles their press releases are supposed to perform.
However, this isn’t to excuse the science journalists but to highlight two things. First: they weren’t the sole points of failure. Second: instead of looking at this episode as a network where the nodes represent different points of failure, it would be useful to examine how failures at some nodes could have increased the odds of a failure at others.
Of course, if the bad science journalists had been replaced by good ones, this problem wouldn’t have happened. But ‘good’ and ‘bad’ are neither black/white nor permanent characterisations. Some journalists – often those pressed for time, who aren’t properly trained or who simply have bad mandates from their superiors in the newsroom – will look for proxies for goodness instead of performing the goodness checks themselves. And when these proxy checks fail, the whole enterprise comes down like a house of cards.
The university’s name is one such proxy, and in this case ‘Oxford University’ is a pretty strong one. Another is that the paper was published in a peer-reviewed journal.
In this post, I want to highlight two others that’ve been overlooked by Siegel, Hossenfelder, etc.
The first is PhysOrg, which has been a problem for a long time, though it’s not entirely to blame. What many people don’t seem to know is that PhysOrg reprints press releases. It undertakes very little science writing, let alone science journalism, of its own. I’ve had many of my writers – scientists and non-scientists alike – submit articles with PhysOrg used here and there as a citation. They assume they’re quoting a publication that knows what it’s doing but what they’re actually doing is straight-up quoting press releases.
The little bit that is PhysOrg’s fault is that it doesn’t state anywhere on its website that most of what it puts out is unoriginal, unchecked, hyped content that may or may not have a scientist’s approval and certainly doesn’t have a journalist’s. So buyer beware.
Science X, which publishes PhysOrg, has a system through which universities can submit their press releases to be published on the site. Source: PhysOrg
The second is The Conversation. Unlike PhysOrg, these guys actually add value to the stories they publish. I’m a big fan of them, too, because they amplify scientists’ voices – an invaluable service in countries like India, where scientists are seldom heard.
The way they add value is that they don’t just let the scientists write whatever they’re thinking; instead, they have an editorial staff composed of people with PhDs in the relevant fields as well as experience in science communication. The staff helps the scientist-contributors shape their articles, and fact-checks and edits them. There have been one or two examples of bad articles slipping through their gates but for the most part, The Conversation has been reliable.
HOWEVER, they certainly screwed up in this case, and in two ways. First, they screwed up from the perspective of those, like me, who know how The Conversation works: they straightforwardly let us down. Something in the editorial process got shorted. (The regular reader will spot another giveaway: The Conversation usually doesn’t use headlines written in the first person.)
Further, Wired also fails to mention something The Conversation itself endeavours to clarify with every article: that Oxford University is one of the institutions that funds the publication. I know from experience that such conflicts of interest haven’t interfered with its editorial judgment in the past, but now it’s something we’ll need to pay more attention to.
Second, The Conversation failed those people who didn’t know how it works by giving them the impression that a journalism outlet saw sense in Farnes’s paper. For example, one scientist quoted in Wired’s dehype article says this:
Farnes also wrote an article for The Conversation – a news outlet publishing stories written by scientists. And here Farnes yet again oversells his theory by a wide margin. “Yeah if @Astro_Jamie had anything to do with the absurd text of that press release, that’s totally on him…,” admits Kinney.
“The evidence is very much that he did,” argues Richard Easther, an astrophysicist at Auckland University. What he means by the evidence is that he was surprised when he realised that the piece in The Conversation had been written by the scientist himself, “and not a journo”.
Easther’s surprise here is unwarranted but it exists because he’s not aware of what The Conversation actually does. And like him, I imagine many journalists and other scientists don’t know what The Conversation‘s editorial model is.
Given all of this, let’s take another look at the proxy-for-reliability checklist. Some of the items on it that we discussed earlier – including the name of the university – still carry points, and with good reason, although none of them by itself should determine how the popular science article is written. That should still follow the principles of good science journalism. However, “article in PhysOrg” has never carried any points, and “article in The Conversation” used to carry some points, but those should now fall to zero.
Beyond the checklist itself, if these two publications want to improve how they are perceived, they should do more to clarify their editorial architectures and why they are what they are. It’s worse to give a false impression of what you do than to score zero points on the checklist. On this count, PhysOrg is guiltier than The Conversation. At the same time, if the impression you were designed to provide is not the impression readers are walking away with, the design can be improved.
If it isn’t, they’ll simply assume more and more responsibility for the mistakes of poorly trained science journalists. (They won’t assume responsibility for the mistakes of ‘evil’ science journalists, though I doubt that group of people exists.)
After its licentious article about Earth having a second moon, I thought National Geographic had published another subpar piece when I saw this headline:
Small Nuclear War Could Reverse Global Warming for Years
The headline is click-bait. The article itself is about how a regional nuclear war, such as one between India and Pakistan, can have global consequences, especially for the climate and agriculture. That it wouldn’t take World War III plus a nuclear winter for the entire world to suffer the consequences of a few – not hundreds of – nuclear explosions. And that we shouldn’t labour under the presumption that detonating a few nuclear bombs would be better than having to set all of them off. So I wouldn’t have used that headline – which seems to suggest we should maybe seed the atmosphere with thousands of tonnes of some material to cool the planet down.
I don’t think it’s silly to come to that conclusion. Scientists at the oh-so-exalted Harvard and Yale Universities are suggesting something similar: injecting the stratosphere with an aerosol to absorb heat and cool Earth’s surface. Suddenly, global warming isn’t our biggest problem, these guys are. Through a paper published in the journal Environmental Research Letters, they say that it would be both feasible and affordable to “cut the rate of global warming in half” (source: CNN) using this method. From their paper:
Total pre-start costs to launch a hypothetical SAI effort 15 years from now are ~$3.5 billion in 2018 US $. A program that would deploy 0.2 Mt of SO2 in year 1 and ramp up linearly thereafter at 0.2 Mt SO2/yr would require average annual operating costs of ~$2.25 billion/yr over 15 years. While these figures include all development and direct operating costs, they do not include any indirect costs such as for monitoring and measuring the impacts of SAI deployment, leading Reynolds et al (2016) to call SAI’s low costs a solar geoengineering ‘trope’ that has ‘overstayed its welcome’. Estimating such numbers is highly speculative. Keith et al (2017), among others, simply takes the entire US Global Change Research Program budget of $3 billion/yr as a rough proxy (Our Changing Planet 2016), more than doubling our average annual deployment estimates.
Whether the annual number is $2.25 or $5.25 billion to cut average projected increases in radiative forcing in half from a particular date onward, these numbers confirm prior low estimates that invoke the ‘incredible economics’ of solar geoengineering (Barrett 2008) and descriptions of its ‘free driver’ properties (Wagner and Weitzman 2012, 2015, Weitzman 2015).
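To get a sense of what the excerpt’s ramp-up implies, here is a back-of-envelope sketch in Python. The tonnage ramp and the average annual cost are taken from the excerpt above; the implied per-tonne figure is my own arithmetic, not a number from the paper.

```python
# Back-of-envelope arithmetic using the figures quoted above.
# Year n deploys 0.2 * n Mt of SO2, so the 15-year total is 0.2 * (1 + 2 + ... + 15).

years = 15
ramp_mt = 0.2                 # Mt of SO2 added to the annual deployment each year
avg_annual_cost_bn = 2.25     # average operating cost, $ billion per year (from the paper)

total_so2_mt = ramp_mt * sum(range(1, years + 1))    # = 24 Mt
total_cost_bn = avg_annual_cost_bn * years           # = $33.75 billion

cost_per_tonne = (total_cost_bn * 1e9) / (total_so2_mt * 1e6)
print(f"SO2 deployed over {years} years: {total_so2_mt:.0f} Mt")
print(f"Operating cost over {years} years: ${total_cost_bn:.2f} billion")
print(f"Implied operating cost per tonne of SO2: ~${cost_per_tonne:,.0f}")
```

That works out to roughly $1,400 per tonne of SO2 lofted, which is the kind of figure behind the ‘incredible economics’ remark quoted in the excerpt.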
My problem isn’t that these guys undertook their study. Scientifically devised methods to engineer the soil and air to slow or disrupt global warming have been around for many decades (including using a “space-based solar shield”). The present study simply evaluated one idea and found that it is eminently possible and could deliver a more than acceptable return per dollar spent (notwithstanding the comment on unreliable speculation and its consequences). Heck, the scientists even add:
Dozens of countries would have both the expertise and the money to launch such a program. Around 50 countries have military budgets greater than $3 billion, with 30 greater than $6 billion.
First: I’m all for blue-sky research – even if this particular analysis may not qualify in that category – and for the idea that knowing something is an end in and of itself; i.e., knowledge cannot be useless because knowing has value. Second: I don’t think any government or organisation is going to be able to implement a regional, leave alone global, SAI programme just because this paper has found that it is a workable idea. Then again, ability is not the same as consideration, and consideration has its consequences as well.
My grouse is with a few lines in the paper’s ‘Conclusion’, where the scientists state that they “make no judgment about the desirability of [stratospheric aerosol injection].” They go on to state that their work is solely from an “engineering perspective” – as if to suggest that should anyone seriously consider implementing SAI, their paper is happy to provide the requisite support.
However, the scientists should have passed judgment on the desirability of SAI instead of copping out. I can’t understand why they copped out; passing that judgment would have been the easiest conclusion in the whole enterprise. No policymaker or lawmaker who thinks anthropogenic global warming (AGW) is real is going to consider this method to deal with the problem (or maybe they will, who knows; the Delhi government thinks it’s responding right by installing giant air filters in public spaces). As David Archer, a geophysicist at the University of Chicago, told CNN:
It will be tempting to continue to procrastinate on cleaning up our energy system, but we’d be leaving the planet on a form of life-support. If a future generation failed to pay their climate bill they would get all of our warming all at once.
By not judging the “desirability of SAI”, the scientists have effectively abdicated their responsibility to properly qualify the nature and value of their work, and situate it in its wider political context. They have left the door open to harmful use of their work as well. Consider the difference between a lawmaker brandishing a journal article that simply lays out the “engineering perspective” and another having to deal with an article that discusses the engineering as well as the desirability vis-à-vis the nature and scope of AGW.
The India-based Neutrino Observatory (INO), a mega science project stranded in the regulatory boondocks since the Centre okayed it in 2012, received a small shot in the arm earlier this week.
On November 2, the National Green Tribunal (NGT) dismissed an appeal by activists against the environment ministry’s clearance for the project.
The activists had alleged that the environment ministry lacked the “competence” to assess the project and that the environmental clearance awarded by the ministry was thus invalid. But the principal bench of the NGT ruled that “it was correct on the part of the EAC and the [ministry] to appraise the project at their level”.
The INO is a Rs-1,500-crore project that aims to build and install a 50,000-tonne detector inside a mountain near Theni, Tamil Nadu, to study natural elementary particles called neutrinos.
The environment ministry issued a clearance in June 2011. But the NGT held it in abeyance in March 2017 and asked the INO project members to apply for a fresh clearance. G. Sundarrajan, the head of an NGO called Poovulagin Nanbargal that has been opposing the INO, also contended that the project was within 5 km of the Mathikettan Shola National Park. So the NGT also directed the INO to get an okay from the National Board for Wildlife.
Poovulagin Nanbargal (Tamil for ‘Friends of Flora’) and other activists have raised doubts about the integrity of the rock surrounding the project site, damage to water channels in the area and even whether nuclear waste will be stored onsite. However, all these concerns have been allayed or debunked by the collaboration and the media. (At one point, former president A.P.J. Abdul Kalam wrote in support of the project.)
Sundarrajan has also been supported by Vaiko, leader of the Marumalarchi Dravida Munnetra Kazhagam party.
In June 2017, INO members approached the Tamil Nadu State Environmental Impact Assessment Authority. After several meetings, it stated that the environment ministry would have to assess the project in the applicable category.
The ministry provided the consequent clearance in March 2018. Activists then alleged that this process was improper and that the ministry’s clearance would have to be rescinded. The NGT has now dismissed this challenge.
As a result, the INO now has all but one clearance – that of the National Board for Wildlife – that it needs before the final step: approaching the Tamil Nadu Pollution Control Board for the last okay. Once that is received, construction of the project can get underway.
Once operational, the INO is expected to tackle multiple science problems. Chief among them is the neutrino mass hierarchy: the relative masses of the three types of neutrinos, an important yet missing detail that holds clues about the formation and distribution of galaxies in the universe.
A group of Danish physicists that suggested last year that two American experiments built to detect gravitational waves may have confused noise for signal has reared its head once more. New Scientist reported earlier this week that the group, from the Niels Bohr Institute in Copenhagen, independently analysed the experimental data and found the results to be an “illusion” instead of the actual thing.
The twin Laser Interferometer Gravitational-wave Observatories (LIGO), located in the American states of Washington and Louisiana, made the world’s first direct detection of gravitational waves in September 2015. The labs behind the observatories announced the results in February 2016, after multiple rounds of checking and rechecking. The discovery won three people instrumental in setting up LIGO the Nobel Prize for physics in 2017.
However, in June that year, Andrew Jackson, the spokesperson for the Copenhagen group, first raised doubts about LIGO’s detection. He claimed that because of the extreme sensitivity of LIGO to noise, and insufficient efforts on scientists’ part to eliminate such noise from their analysis, what the ‘cleaned-up’ data shows as signs of gravitational waves is actually an artefact of the analysis itself.
As David Reitze, LIGO executive director, told Ars Technica, “The analysis done by Jackson et al. looks for residuals after subtracting the best fit waveform from the data. Because the subtracted theoretical waveforms are not a perfect reconstruction of the true signal, … [they] find residuals at a very low level and claim that we have instrumental artefacts that we don’t understand. So therefore he believes that we haven’t detected a gravitational wave.”
Scientists working with LIGO rebutted Jackson’s claims back then. The fulcrum of their argument rested on the fact that LIGO data is very difficult to analyse and that Jackson and co. had made some mistakes in their independent analysis. They also visited the Niels Bohr Institute to work with Jackson and his team, and held extended discussions with him in teleconferences, according to Ars Technica. But Jackson hasn’t backed down.
LIGO detects gravitational waves using a school-level physics concept called interference. When two light waves encounter each other, two things happen. In places where a crest of one wave meets a crest of the other, they combine to form a bigger crest; similarly for troughs. Where a crest of one wave meets the trough of another, they cancel each other. As a result, when the recombined wave hits a surface, the viewer sees a fringe pattern: alternating bands of light and shadow. The light areas denote where one crest met another and the shadow, where one crest met a trough.
Each LIGO detector consists of two corridors, each four kilometres long, connected in an ‘L’ shape. A laser beam is split at the vertex and one half is sent down each corridor. The beams bounce off mirrors at the far ends and return to the vertex, where they interfere with each other. The instrument is tuned such that, in the absence of a gravitational wave, the beams recombine with destructive interference: full shadow.
When a gravitational wave passes through LIGO, distorting space as it does, one arm of LIGO becomes shorter than the other for a fleeting moment. This causes the laser beam in that corridor to return sooner than the other, producing a fringe pattern that alerts scientists to the presence of a gravitational wave. The instrument is so sensitive that it can detect distortions in space as small as one-hundredth the diameter of a proton.
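To make the interference picture concrete, here is a toy numerical sketch. It is my own illustration, not LIGO’s analysis pipeline; the laser wavelength and the arm-length change are assumed values chosen only to show the effect.

```python
import numpy as np

# Two beams tuned for destructive interference produce (almost) no light at the
# output; a tiny arm-length change adds a phase offset and a little light leaks through.

wavelength = 1064e-9                  # m; a typical infrared laser wavelength (assumed)
t = np.linspace(0, 4 * np.pi, 1000)   # dimensionless phase axis

def output_power(delta_L):
    """Relative power at the detector for an arm-length difference delta_L (in metres)."""
    phase_shift = 4 * np.pi * delta_L / wavelength   # the round trip doubles the path difference
    beam_1 = np.sin(t)
    beam_2 = np.sin(t + np.pi + phase_shift)         # the pi offset is the 'full shadow' tuning
    return np.mean((beam_1 + beam_2) ** 2)

print(output_power(0.0))      # ~0: full shadow, no gravitational wave
print(output_power(1e-18))    # small but non-zero: one arm has been stretched ever so slightly
```

Even an arm-length change many orders of magnitude smaller than an atom shifts the output away from full shadow, and that leaked light is what the instrument actually tracks.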
At the same time, because it’s so sensitive, LIGO also picks up all kinds of noise in its vicinity, including trucks passing by a few kilometres away and little birds perching on the detector housing. So analysts regularly have dry-runs with the instrument to understand what noise in the signal looks like. When they do detect a gravitational wave, they subtract the noise from the data to see what the signal looks like.
But this is a horribly oversimplified version. Data analysts – and their supercomputers – take months to clean up, study and characterise the data. The LIGO collaboration also subjects the final results to multiple rechecks to prevent premature or (inadvertent) false announcements. The analysts also lean on an entire field of study, called numerical relativity, to compute the theoretical waveforms the data is compared against.
Since the September 2015 detection, LIGO has made five more gravitational-wave detections, some of them together with other observatories around the world. Such successful combined efforts lend further credence to LIGO’s claims. The prime example was the August 2017 discovery of gravitational waves from a merger of neutron stars in a galaxy 130-140 million lightyears away. Over 70 other observatories and telescopes around the world joined in the effort to study and characterise the merger.
This is why LIGO scientists have asserted that when Jackson claims they’ve made a mistake, their first response is to ask his team to recheck its calculations. And though that response hasn’t changed now that Jackson and co. have hit back a second time, a better framing of the problem has emerged: is LIGO doing enough to help others make sense of its data?
For one, the tone of some of these responses hasn’t gone down well. Peter Coles, a theoretical cosmologist at the Cardiff and Maynooth Universities, wrote on his blog:
I think certain members – though by no means all – of the LIGO team have been uncivil in their reaction to the Danish team, implying that they consider it somehow unreasonable that the LIGO results should be subject to independent scrutiny. I am not convinced that the unexplained features in the data released by LIGO really do cast doubt on the detection, but unexplained features there undoubtedly are. Surely it is the job of science to explain the unexplained?
From LIGO’s perspective, the fundamental issue is that their data – a part of which is in the public domain – isn’t easily understood or processed. And Jackson believes LIGO could be hiding some mistakes behind this curtain of complexity.
His and his group’s opinion, however, remains in the minority. According to the New Scientist report itself, many scientists who have sympathised with Jackson’s concerns don’t think LIGO has messed up, only that it needs to do more to help independent experts understand its data better. Sabine Hossenfelder, a theoretical physicist at the Frankfurt Institute for Advanced Studies, wrote on her blog on November 1:
… the issue for me was that the collaboration didn’t make an effort helping others to reproduce their analysis. They also did not put out an official response, indeed have not done so until today. I thought then – and still think – this is entirely inappropriate of a scientific collaboration. It has not improved my opinion that whenever I raised the issue LIGO folks would tell me they have better things to do.
The LIGO collaboration finally issued a statement on November 1. Excerpt:
The features presented in Creswell et al. arose from misunderstandings of public data products and the ways that the LIGO data need to be treated. The LIGO Scientific Collaboration and Virgo Collaboration (LVC) have full confidence in our published results. We are preparing a paper that will provide more details about LIGO detector noise properties and the data analysis techniques used by the LVC to detect gravitational-wave signals and infer their source properties.
A third LIGO instrument is set to come up by 2022, this one in India. The two American detectors and other gravitational-wave observatories are all located on almost the same plane in the northern hemisphere. This limits the network’s ability to pinpoint the location of sources of gravitational waves in the universe. A detector in India would solve this problem because it would be outside the plane.
Indian scientists have also been a significant part of LIGO’s effort to study gravitational waves. Thirty-seven of them were part of a larger group of physicists awarded the Special Breakthrough Prize for fundamental physics in 2016.
Scientists at the Cern nuclear physics lab near Geneva are investigating whether a bizarre and unexpected new particle popped into existence during experiments at the Large Hadron Collider. Researchers on the machine’s multipurpose Compact Muon Solenoid (CMS) detector have spotted curious bumps in their data that may be the calling card of an unknown particle that has more than twice the mass of a carbon atom.
The prospect of such a mysterious particle has baffled physicists as much as it has excited them. At the moment, none of their favoured theories of reality include the particle, though many theorists are now hard at work on models that do. “I’d say theorists are excited and experimentalists are very sceptical,” said Alexandre Nikitenko, a theorist on the CMS team who worked on the data. “As a physicist I must be very critical, but as the author of this analysis I must have some optimism too.”
Senior scientists at the lab have scheduled a talk this Thursday at which Nikitenko and his colleague Yotam Soreq will discuss the work. They will describe how they spotted the bumps in CMS data while searching for evidence of a lighter cousin of the Higgs boson, the elusive particle that was discovered at the LHC in 2012.
This announcement – of a possible new particle weighing about 28 GeV – is reminiscent of the 750 GeV affair. In late 2015, physicists spotted an anomalous bump in data collected by the LHC that suggested the existence of a previously unknown particle weighing about 67 times as much as a carbon atom. The data wasn’t qualitatively good enough for physicists to claim they had evidence of a new particle, so they decided to get more.
That was in December 2015. By August 2016, before the new data was out, theoretical physicists had written and published over 500 papers on the arXiv preprint server about what the new particle could be and how theoretical models would have to be changed to make room for it. But at the 38th International Conference on High-Energy Physics that month, LHC scientists unveiled the new data and said that the anomalous bump had vanished, and that what physicists had seen earlier was likely a random fluctuation in lower-quality observations.
The new announcement of a 28 GeV particle seems set on a similar course. I’m not pronouncing that no new particle will be found – that’s for physicists to determine – but only writing in defence of those who would cover this event even though it seems relatively minor and like history repeating itself. Anomalies like these are worth writing about because the Standard Model of particle physics has historically been so good at predicting particles’ properties that even small deviations from it are big news.
At the same time, it’s big news in a specific context and with a specific caveat: that we might be chasing an ambulance here. For example, The Guardian only says that the anomalous signal will have to be verified by other experiments, leaving out the part where the signal LHC scientists already have is pretty weak: excesses of 4.2σ and 2.9σ (both local, as opposed to global, significances) in two tests of the 8 TeV data, and deficits of 2.0σ and 1.4σ in the 13 TeV data. It also doesn’t mention the 750 GeV affair even though the two narratives already appear to be congruent.
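For a sense of why those significances read as weak, here is a quick conversion of the quoted local significances into p-values using the usual one-sided Gaussian convention. This is only an illustration of the convention, not the CMS analysis, and local significances ignore the look-elsewhere effect.

```python
from scipy.stats import norm

# p-value: the chance of a noise fluctuation at least this large (one-sided Gaussian).
signals = {
    "8 TeV excess, test 1": 4.2,
    "8 TeV excess, test 2": 2.9,
    "conventional discovery threshold": 5.0,
}

for label, sigma in signals.items():
    p = norm.sf(sigma)
    print(f"{label:>33}: {sigma:.1f} sigma  ->  p ~ {p:.1e}")
```

Even the stronger of the two excesses falls short of the 5σ threshold particle physicists conventionally demand before claiming a discovery.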
If journalists leave such details out, I’ve a feeling they’re going to give their readers the impression that this announcement is more significant than it actually is. (Call me a nitpicker, but I’m sure being accurate will allow engaged readers to set reasonable expectations about the story’s next chapter as well as keep them from becoming desensitised to journalistic hype.)
Those who’ve been following physics news will be aware of the ‘nightmare scenario’ assailing particle physics, and in this context there’s value in writing about what’s keeping particle physicists occupied – especially in their largest, most promising lab.
But thanks to the 750 GeV affair, most recently, we also know that what any scientist or journalist says or does right now is moot until LHC scientists present sounder data + confirmation of a positive/negative result. And journalists writing up these episodes without a caveat that properly contextualises where a new anomaly rests on the arc of a particle’s discovery will be disingenuous if they’re going to justify their coverage based on the argument that the outcome “could be” positive.
The outcome could be negative and we need to ensure the reader remembers that. Including the caveat is also a way to do that without completely obviating the space for a story itself.
Featured image: The CMS detector, one of the large detectors that straddle the LHC and the one that spotted the anomalous signal corresponding to a particle at the 28 GeV mark. Credit: CERN.
The universe is supposed to contain equal quantities of matter and antimatter. But this isn’t the case: there is way more matter than antimatter around us today. Where did all the antimatter go? Physicists trying to find the answer to this question believe that the universe was born with equal amounts of both. However, the laws of nature that subsequently came into effect were – and are – biased against antimatter for some reason.
In the language of physics, this bias is called a CP symmetry violation. CP stands for charge-parity. If a positively charged particle is substituted with its negatively charged antiparticle and if its spin is changed to its mirror image, then – all other properties being equal – any experiments performed with either of these setups should yield the same results. This is what’s called CP symmetry. CPT – charge, parity and time – symmetry is one of the foundational principles of quantum field theory.
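As a cartoon of what the C and P operations do, here is a toy sketch. It is a bookkeeping illustration of my own, not a quantum field theory calculation, and ‘helicity’ here simply stands in for the mirror-imaged spin.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Particle:
    name: str
    charge: int      # in units of the elementary charge
    helicity: int    # +1 or -1, standing in for the 'mirror image' of the spin

def C(p: Particle) -> Particle:
    """Charge conjugation: swap the particle for its antiparticle."""
    return replace(p, name="anti-" + p.name, charge=-p.charge)

def P(p: Particle) -> Particle:
    """Parity: reflect the spatial configuration, flipping the helicity."""
    return replace(p, helicity=-p.helicity)

muon = Particle("muon", charge=-1, helicity=+1)
print(P(C(muon)))   # the CP partner: a positively charged antimuon with mirrored helicity
```

CP symmetry is the statement that experiments run on the original setup and on its CP-transformed counterpart should give the same results.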
Physicists try to explain the antimatter shortage by studying CP symmetry violation because one of the first signs that the universe has a preference for one kind of matter over the other emerged in experiments testing CP symmetry in the mid-20th century. The result of this extensive experimentation is the Standard Model of particle physics, which makes predictions about what kinds of processes will or won’t exhibit CP symmetry violation. Physicists have checked these predictions in experiments and verified them.
However, there are a few processes they’ve been confused by. In one of them, the SM predicts that CP symmetry violation will be observed among particles called neutral B mesons – but it’s off about the extent of violation.
This is odd and vexing because as a theory, the SM is one of the best out there, able to predict hundreds of properties and interactions between the elementary particles accurately. Not getting just one detail right is akin to erecting the perfect building only to find the uniformity of its design undone by a misalignment of a few centimetres. It may be fine for practical purposes but it’s not okay when what you’re doing is building a theory, where the idea is to either get everything right or to find out where you’re going wrong.
But even after years of study, physicists aren’t sure where the SM is proving insufficient. The world’s largest particle physics experiment hasn’t been able to help either.
Mesons and kaons
A pair of neutral B mesons can decay into two positively charged muons or two negatively charged muons. According to the SM, the former is supposed to be produced in lower amounts than the latter. In 2010 and 2011, the Dø experiment at Fermilab, Illinois, found that there were indeed fewer positive dimuons being produced – but also sufficient evidence that the numbers were off from the SM’s prediction by about 1%. Physicists believe that this inexplicable deviation could be the result of hitherto undiscovered physical phenomena interfering with the neutral B meson decay process.
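To illustrate what an asymmetry between the two dimuon counts means in practice, here is a sketch with invented numbers; the counts below are hypothetical, chosen only to show the bookkeeping, and are not Dø’s measurements.

```python
# Hypothetical like-sign dimuon counts (not Dø's data).
n_plus = 990_000       # events with two positively charged muons
n_minus = 1_010_000    # events with two negatively charged muons

asymmetry = (n_plus - n_minus) / (n_plus + n_minus)
print(f"Dimuon charge asymmetry: {asymmetry:+.2%}")   # -1.00% in this made-up example
```

The interesting question is then whether the measured asymmetry matches the small value the SM predicts; Dø’s result suggested it did not.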
This discovery isn’t the only one of its kind. CP violation was first discovered in processes involving particles called kaons in 1964, and has since been found affecting different types of B mesons as well. And just the way some processes violate CP symmetry more than the theory says they should, physicists also know of other processes that don’t violate CP symmetry even though the theory allows them to do so. These are associated with the strong nuclear force and this difficulty is called the strong CP problem – one of the major unsolved problems of physics.
It is important to understand which sectors, i.e. groups of particles and their attendant processes, violate CP symmetry and which don’t, because physicists need to put together all the facts they can get to find patterns in them: the seeds of theories that can explain how the creation of antimatter at par with matter was aborted at the cosmic dawn. This in turn means that we keep investigating all the known sectors in greater detail until we have something that will allow us to look past the SM to a more comprehensive theory of physics.
It is in this context that, in the last few years, another sector has joined this parade: the neutrinos. Neutrinos are extremely hard to trap because they interact with other particles only via the weak nuclear force, which is even feebler than its name suggests. Though trillions of neutrinos pass through your body every second, perhaps only a handful will interact with its atoms over your lifetime. To surmount this limitation, physicists and engineers have built very large detectors to study them as they zoom in from all directions: outer space, from inside Earth, from the Sun, etc.
Neutrinos exhibit another property called oscillations. There are three types or flavours of neutrinos – called electron, muon and tau (note: an electron neutrino is different from an electron). Neutrinos of one flavour can transform into neutrinos of another flavour at a rate predicted by the SM. The T2K experiment in Japan has been putting this to the test. On October 24, it reported via a paper in the journal Physical Review Letters that it had found signs of CP symmetry violation in neutrinos as well.
A new sector
If neutrinos obeyed CP symmetry, then muon neutrinos should be transforming into electron neutrinos – and muon antineutrinos should be transforming into electron antineutrinos – at the rates predicted by the SM. But the transformation rate seems to be off. Physicists from T2K reported last year that they had weak evidence of this happening. According to the October 24 paper, the evidence is stronger this year, with the weakness down by almost half – but still not strong enough to shake up the research community.
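One way to picture the comparison T2K is making is as an appearance asymmetry between neutrinos and antineutrinos. The probabilities below are invented placeholders, not T2K’s measurements; they are only meant to show what ‘the rate seems to be off’ is measuring.

```python
# Hypothetical appearance probabilities (placeholders, not T2K's results).
p_nu = 0.060      # muon neutrino -> electron neutrino
p_nubar = 0.045   # muon antineutrino -> electron antineutrino

asymmetry = (p_nu - p_nubar) / (p_nu + p_nubar)
print(f"Appearance asymmetry: {asymmetry:+.1%}")
# A non-zero asymmetry, beyond what known effects can account for, would be a sign
# of CP violation in the neutrino sector.
```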
While the trend suggests that T2K will indeed find that the neutrinos sector violates CP symmetry as it takes more data, enough experiments in the past have forced physicists to revisit their models after more data punctured this or that anomaly present in a smaller dataset. We should just wait and watch.
But what if neutrinos do violate CP symmetry? There are major implications, and one of them is historical.
When the C, P and T symmetries were formulated, physicists thought each was absolute: that physical processes couldn’t violate any of them. But in 1956, it was found that the weak nuclear force does not obey the C or P symmetries. Physicists were shaken up but not for long; they quickly rallied and put forward an idea in 1957 that C or P symmetry could be broken individually but that the two together constituted a new and absolute symmetry: CP symmetry. Imagine their heartbreak when James Cronin and Val Fitch found evidence for CP symmetry violation only seven years later.
As mentioned earlier, neutrinos interact with other particles only via the weak nuclear force – which means they don’t abide by C or P symmetries. If within the next decade we find sufficient evidence to claim that the neutrinos sector doesn’t abide by CP symmetry either, the world of physics will be shaken up once more, although it’s hard to tell if any more hearts will be broken.
In fact, physicists might just express a newfound interest in mingling with neutrinos because of the essential difference between these particles on the one hand and kaons and B mesons on the other. Neutrinos are fundamental and indivisible whereas both kaons and B mesons are made up of smaller particles called quarks. This is why physicists have been able to explain CP symmetry violations in kaons and B mesons using what is called the quark-mixing model. If processes involving neutrinos are found to violate CP symmetry as well, then physicists will have twice as many sectors as before in which to explore the matter-antimatter problem.
The winners of this year’s Nobel Prizes are being announced this week. The prizes are an opportunity to discover new areas of research, and developments there that scientists consider particularly notable. In this endeavour, it is equally necessary to remember what the Nobel Prizes are not.
For starters, the Nobel Prizes are not lenses through which to view all scientific pursuit. It is important for everyone – scientists and non-scientists alike – to not take the Nobel Prizes too seriously.
The prizes have been awarded to white men from Europe and the US most of the time, across the medicine, physics and chemistry categories. This presents a lopsided view of how scientific research has been undertaken in the world. Many governments take pride in the fact that one of their citizens has been awarded this prize, and often advertise the strength of their research community by boasting of the number of Nobel laureates in their ranks. This way, the prizes have become a marker of eminence.
However, this should not blind us to the fact that there are equally brilliant scientists from other parts of the world who have done, and are doing, great work. Even research institutions engage in such boasting; for example, this is what the Institute for Advanced Study in Princeton, New Jersey, says on its website:
The Institute’s mission and culture have produced an exceptional record of achievement. Among its Faculty and Members are 33 Nobel Laureates, 42 of the 60 Fields Medalists, and 17 of the 19 Abel Prize Laureates, as well as many MacArthur Fellows and Wolf Prize winners.
What the prizes are
Winning a Nobel Prize may be a good thing. But not winning a Nobel Prize is not a bad thing. That is the perspective often lost in conversations about the quality of scientific research. When the Government of India expresses a desire to have an Indian scientist win a Nobel Prize in the next decade, it is a passive admission that it does not consider any other marker of quality to be worth the endorsement. Otherwise, there are numerous ways to make the statement that the quality of Indian research is at par with the rest of the world’s (if not better in some areas).
In this sense, what the Nobel Prizes afford is an easy way out. Consider the following analogy: when scientists are being considered for promotions, evaluators frequently ask whether a scientist in question has published in “prestigious” journals like Nature, Science, Cell, etc. If the scientist has, it is immediately assumed that the scientist is undertaking good research. Notwithstanding the fact that supposedly “prestigious” journals frequently publish bad science, this process of evaluation is unfair to scientists who publish in other peer-reviewed journals and who are doing equally good, if not better, work. Just the way we need to pay less attention to which journals scientists are publishing in and instead start evaluating their research directly, we also need to pay less attention to who is winning Nobel Prizes and instead assess scientists’ work, as well as the communities to which the scientists belong, directly.
Obviously this method of evaluation is more arduous and cumbersome – but it is also the fairer way to do it. Now the question arises: is it more important to be fair or to be quick? On-time assessments and rewards are important, particularly in a country where resource optimisation carries greater benefits as well as where the population of young scientists is higher than in most countries; justice delayed is justice denied, after all. At the same time, instead of settling for one or the other way, why not ask for both methods at once: to be fair and to be quick at the same time? Again, this is a more difficult way of evaluating research than the methods we currently employ, but in the longer run, it will serve all scientists as well as science better in all parts of the world.
Skewed representation of ‘achievers’
Speaking of global representation: this is another area where the Nobel Foundation has faltered. It has ensured that the Nobel Prizes have accrued immense prestige, but it has not simultaneously ensured that the scientists it deems fit to receive that prestige are selected equally from all parts of the world. Apart from favouring white scientists from the US and Europe, the Nobel Prizes have also ignored the contributions of women scientists. Thus far, only two women have won the physics prize (out of 206 laureates), four women the chemistry prize (out of 177) and 12 women the medicine prize (out of 214).
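Put as shares, the figures above work out as follows (my arithmetic, using only the counts quoted in the previous paragraph):

```python
# Shares of women among the science Nobel laureates, from the figures quoted above.
prizes = {"physics": (2, 206), "chemistry": (4, 177), "medicine": (12, 214)}

for prize, (women, total) in prizes.items():
    print(f"{prize:>9}: {women}/{total} = {women / total:.1%}")
```

That is roughly 1%, 2% and 6% respectively.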
One defence that is often advanced to explain this bias is that the Nobel Prizes typically reward scientific and technological achievements that have passed the test of time, achievements that have been repeatedly validated and whose usefulness for the common people has been demonstrated. As a result, the prizes can be understood to be awarded to research done in the past – and in this past, women have not made up a significant portion of the scientific workforce. Perhaps more women will be awarded going ahead.
This argument holds water, but only in a very leaky bucket. Many women have been passed over for the Nobel Prizes when they should not have been, and the Nobel Committee, which finalises each year’s laureates, is in no position to explain why. (Famous omissions include Rosalind Franklin, Vera Rubin and Jocelyn Bell Burnell.) This defence becomes even more meaningless when you ask why so few people from other parts of the world have been awarded the Nobel Prize. This is because the Nobel Prizes are a fundamentally western – even Eurocentric – institution in two important ways.
First, they predominantly acknowledge and recognise scientific and technological developments that the prize-pickers are familiar with, and the prize-pickers are a group made up of previous laureates and a committee of Swedish scientists. This group is only going to acknowledge research that it is already familiar with, done by people its own members have heard of. It is not a democratic organisation. This particular phenomenon has already been documented in the editorial boards of scientific journals, with the effect that scientific research undertaken with local needs in mind often finds dismal representation in those journals.
Second, according to the foundation that awards them, the Nobel Prizes are designated for individuals or groups whose work has conferred the “greatest benefit on mankind”. For the sciences, how do you determine such work? Going one step further, how do we evaluate the legitimacy and reliability of scientific work at all? Answer: we check whether the work has followed certain rules, passed certain checks, received the approval of the author’s peers, etc. All of these are encompassed in the modern scientific publishing process: a scientist describes the work they have done in a paper, submits the paper to a journal, the journal gets the paper reviewed by the scientist’s peers, and once it passes review, the paper is published. It is only when a paper is published that most people consider the research described in it to be worth their attention. And the Nobel Prizes – rather, the people who award them – implicitly trust the modern scientific publishing process even though the foundation itself is not obligated to, essentially as a matter of convenience.
However, what about the knowledge that is not published in such papers? More to the point, what about the knowledge that is not published in the few journals that get a disproportionate amount of attention (a.k.a. the “prestige” titles like Nature, Science and Cell)? Obviously there are a lot of quacks and cranks whose ideas are filtered out in this process, but what about scientists conducting research in resource-poor economies who simply can’t afford the fancy journals?
What about scientists and other academics who are improving previously published research to be more sensitive to the local conditions in which it is applied? What about those specialists who are unearthing new knowledge that could be robust but which is not being considered as such simply because they are not scientists – such as farmers? It is very difficult for these people to be exposed to scholars in other parts of the world and for the knowledge they have helped create/produce to be discovered by other people. The opportunity for such interactions is diminished further when the research conducted is not in English.
In effect, the Nobel Prizes highlight people and research from one small subset of the world. A lot of people, a lot of regions, a lot of languages and a lot of expertise are excluded from this subset. As the prizes are announced one by one, we need to bear these limitations in mind and choose our words carefully, so as not to exalt the prizewinners too much or downplay the contributions of numerous others in the same field as well as in other fields. More importantly, we must not assume that the Nobel Prizes are any kind of crowning achievement.