A computational account of Nobel Prize History

September 29, 2022

Dr John Wettlaufer, A. M. Bateman Professor of Geophysics, Mathematics, and Physics at Yale University, research professor at the Nordic Institute for Theoretical Physics, and a member of the Nobel Committee for Physics, discusses the contributions from the laureates of the 2021 Nobel Prize in Physics, his insights into complex system modeling, and his personal experience serving as a Nobel Committee member

One half of the 2021 Nobel Prize in Physics was awarded to Giorgio Parisi. Which of his many contributions stood out for winning him the Nobel Prize?

The Nobel Prize Committee recognized Parisi’s work on what is called replica symmetry breaking in frustrated spin glasses. Glasses are a class of solid-state materials that, unlike other solid-state materials such as ceramics, lack long-range periodic order, which makes them much harder to model. Historically, efforts to understand the phase behavior of glasses have driven many great advances in research. For instance, the analysis of the model of David Sherrington and Scott Kirkpatrick led the latter to develop the now famous computational method called ‘simulated annealing’, driven by the very slow convergence of solutions in a system with so many states. Many challenges remained, such as the negative entropy and the lack of stability of the model solutions. Although the main problem was traced to the assumption of ‘replica symmetry’, as pointed out by Jairo Rolim Lopes de Almeida and David J. Thouless, an effective method of breaking this symmetry was still unknown. Parisi solved this problem by realizing that there are an infinite number of states within the ordered phase of the spin glass, and by introducing a new order parameter. Notably, the idea that Parisi proposed was an extension of the ‘replica trick’ introduced by Samuel Frederick Edwards and Philip Warren Anderson. Given the timing of his work, it had a substantial impact on how people set out to think about networks, optimization, the concept of ultrametricity, and so forth — the list goes on. Ultimately, Parisi’s proposal of symmetry breaking among replicas was a conceptual advance that resolved major obstacles in the theory of spin glasses.
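As a purely illustrative aside on simulated annealing, the sketch below applies the idea to a toy spin-glass energy with random couplings; the system size, coupling distribution and cooling schedule are assumptions chosen for brevity, not Kirkpatrick’s original implementation.

```python
# Minimal sketch of simulated annealing on a toy spin-glass energy
# (illustrative only; couplings, cooling schedule and sizes are arbitrary
# choices, not the original Kirkpatrick implementation).
import numpy as np

rng = np.random.default_rng(0)

N = 64                                   # number of Ising spins
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
J = np.triu(J, 1)
J = J + J.T                              # symmetric random couplings, zero diagonal

def energy(s):
    """Spin-glass energy E = -1/2 * sum_ij J_ij s_i s_j."""
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=N)          # random initial configuration
T = 2.0                                  # initial temperature
for sweep in range(2000):
    for _ in range(N):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (J[i] @ s)     # energy change if spin i is flipped
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                 # Metropolis acceptance
    T *= 0.995                           # slow geometric cooling

print("final energy per spin:", energy(s) / N)
```

The essential point is the slow cooling: at high temperature the Metropolis moves can escape local minima, and as the temperature falls the configuration settles into a low-energy state of the frustrated couplings.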

Which other contributions made by Parisi are worth highlighting?

Parisi has made many contributions on the computational front. For instance, he developed a lattice gauge theory for fermions that introduced the first numerical method of what we now call ‘bosonization’, which enables the simulation of quasi-particles or pseudo-particles of fermions. He also computed numerical estimates of hadron masses and developed computational schemes for Monte Carlo simulations of such fermionic systems. In the 1980s, he worked with Nicola Cabibbo as a proponent and the first scientific coordinator of a project that designed a family of parallel computers dedicated to calculating the mass spectrum of lattice gauge theories, including quantum chromodynamics. Parisi also developed the idea of simulated tempering, which has evolved into ‘parallel tempering’, one of the state-of-the-art algorithms for simulating discrete glass systems. Finally, in this subset of accomplishments, Parisi introduced the swap algorithm, which recently allowed the simulation of thermalized glassy liquid models beyond experimental time scales. Parisi is a master with paper and pencil, but he also understands that, to make progress, there are computational needs to be addressed. While we recognized that Parisi’s contributions to statistical physics have had a huge impact on condensed-matter physics, these contributions have also had legs in many other areas. I think this is typically how science progresses today, by toggling back and forth between analytical and numerical solutions.
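To make the parallel-tempering idea mentioned above concrete, here is a minimal sketch of replica exchange on the same kind of toy spin-glass energy; the temperature ladder, sweep counts and couplings are illustrative assumptions, and this is not Parisi’s simulated-tempering or swap algorithm itself.

```python
# Minimal sketch of parallel tempering (replica exchange) on a toy spin glass;
# the temperature ladder and sweep counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

N = 64
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
J = np.triu(J, 1)
J = J + J.T                                # symmetric couplings, zero diagonal

def energy(s):
    return -0.5 * s @ J @ s

def metropolis_sweep(s, T):
    for _ in range(N):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (J[i] @ s)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]

temps = np.geomspace(0.2, 2.0, 8)          # temperature ladder, cold to hot
replicas = [rng.choice([-1, 1], size=N) for _ in temps]

for step in range(500):
    for s, T in zip(replicas, temps):
        metropolis_sweep(s, T)
    # attempt to swap configurations between neighbouring temperatures
    for k in range(len(temps) - 1):
        dBeta = 1.0 / temps[k] - 1.0 / temps[k + 1]
        dE = energy(replicas[k]) - energy(replicas[k + 1])
        if rng.random() < np.exp(min(0.0, dBeta * dE)):
            replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]

print("coldest-replica energy per spin:", energy(replicas[0]) / N)
```

The design choice is that hot replicas explore the rough energy landscape freely, while the swaps let well-equilibrated configurations percolate down to the cold temperatures where ordinary Monte Carlo would get stuck.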

The other half of the 2021 Nobel Prize in Physics was jointly awarded to Syukuro Manabe and Klaus Hasselmann. Which of their many contributions stood out for winning them the Nobel Prize?

Manabe’s main contributions were in pioneering climate modeling. To provide some context, when the ground is heated, the air rises, and this convective fluid flow has a controlling influence on the vertical temperature gradient in the atmosphere. At the top of the troposphere, there is incoming radiation of visible light from space, and there is also outgoing longwave (infrared) radiation to space. To describe this process in numerical simulations, the vertical temperature gradient needs to be correctly captured. However, there are many physical processes that cannot be resolved exactly: to make reliable predictions, a good approximation scheme is required. Motivated by observations and the physics of convection, Manabe built one-dimensional radiative–convective models that adjust the temperature profile toward a critical lapse rate wherever convection would set in, and he used them to estimate how the surface temperature responds to changes in atmospheric carbon dioxide.
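As a purely illustrative sketch of the convective-adjustment idea described above (the grid, lapse-rate threshold and initial profile below are assumptions, not Manabe’s actual scheme), one can relax a one-dimensional temperature column back toward a critical lapse rate wherever it is exceeded:

```python
# Minimal sketch of convective adjustment on a 1-D temperature column;
# grid, lapse-rate threshold and initial profile are illustrative assumptions,
# not Manabe's actual radiative-convective scheme.
import numpy as np

dz = 1.0                 # layer thickness (km)
gamma_c = 6.5            # critical lapse rate (K / km)
z = np.arange(0.0, 16.0, dz)

# an unstable initial profile: too steep a temperature drop near the surface
T = 300.0 - 9.8 * z
T[8:] = T[8]             # roughly isothermal aloft

def convective_adjustment(T, dz, gamma_c, max_iter=200):
    """Relax any layer pair whose lapse rate exceeds gamma_c back to gamma_c,
    conserving the mean temperature of the pair (equal layer masses assumed)."""
    T = T.copy()
    for _ in range(max_iter):
        changed = False
        for k in range(len(T) - 1):
            lapse = (T[k] - T[k + 1]) / dz
            if lapse > gamma_c + 1e-9:
                mean = 0.5 * (T[k] + T[k + 1])
                T[k] = mean + 0.5 * gamma_c * dz
                T[k + 1] = mean - 0.5 * gamma_c * dz
                changed = True
        if not changed:
            break
    return T

T_adj = convective_adjustment(T, dz, gamma_c)
print("surface lapse rate before:", (T[0] - T[1]) / dz, "K/km")
print("surface lapse rate after: ", (T_adj[0] - T_adj[1]) / dz, "K/km")
```

In a full radiative-convective model, a step like this alternates with a radiative-transfer calculation until the column reaches equilibrium.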

We are facing many challenges related to climate change. How can mathematical modeling and computational science play a role in overcoming these challenges?

The development of climate models and of the numerical methods themselves is of crucial importance. For example, Hasselmann developed the fingerprint method to wed climate models and observations; the idea is to assess the statistical significance of observed trends by confronting them with model simulations. This is a specific approach to what we nowadays call data assimilation, which is, in all of its facets, the key to using observations. The challenge here is that we only know what the observations are up to right now; as Niels Bohr once said, “prediction is very difficult, especially if it’s about the future!” Then there are the problems with the models themselves: there is a lot of inter-model variability, the models are hugely expensive to run, and it is not obvious which models provide the best solution. There are also still unresolved processes that matter for prediction. For instance, we still cannot resolve turbulent motions, and they are essential: we don’t know which clouds are going to form and where, and we cannot predict the mixing of water masses in the ocean that changes properties such as nutrients, salinity and temperature.
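At its core, the fingerprint approach can be read as a regression problem: estimate how strongly a model-derived response pattern is present in the observations, weighting by an estimate of internal variability. The sketch below is a minimal, synthetic illustration of that generalized-least-squares idea; the pattern, covariance and ‘observations’ are made up, and real detection-and-attribution studies involve considerably more care.

```python
# Minimal sketch of the regression idea behind fingerprinting: estimate how
# strongly a model-derived response pattern is present in observations, using
# a generalized least-squares weighting by internal variability. The pattern,
# covariance and "observations" below are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(2)

n = 50                                    # number of spatial points / indices
fingerprint = np.linspace(-1.0, 2.0, n)   # assumed forced-response pattern F
C = np.diag(0.5 + rng.random(n))          # assumed internal-variability covariance

beta_true = 0.8                           # "true" scaling of the forced signal
noise = rng.multivariate_normal(np.zeros(n), C)
y = beta_true * fingerprint + noise       # synthetic observations

# generalized least squares: beta = (F^T C^-1 F)^-1 F^T C^-1 y
Ci_F = np.linalg.solve(C, fingerprint)
beta_hat = (Ci_F @ y) / (Ci_F @ fingerprint)
var_beta = 1.0 / (Ci_F @ fingerprint)     # GLS variance of the estimate

print(f"estimated scaling factor: {beta_hat:.2f} +/- {np.sqrt(var_beta):.2f}")
```

A scaling factor whose confidence interval excludes zero indicates that the pattern is detectable in the noisy observations, which is the essence of the fingerprint argument.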

So, where should the computational emphasis be placed? The key is data. My contention is that we are not (yet!) using data to its fullest potential. The satellite era began around 1979 and offered very high-resolution, daily observations of all sorts of things. If we go back in time and look at longer timescales, we have a lot of proxy data of various types with reasonably high resolution in space and time; we also have paleoclimate data on very long timescales, which is coarsely resolved in time and space. The question is then: how do we use the confluence of sparse and finely resolved data in a rational manner that is linked to the underlying physics? For example, it is common practice to look at paleoclimate data by just comparing the timing of various wiggles. However, there are very powerful mathematical methods from non-equilibrium statistical physics that we and others use, which I think are really valuable but not yet common in the field. Going forward, we need to inform models with up-to-date data in a robust and self-consistent way, which I think will involve science, software, hardware — an interdisciplinary collaboration. This sounds like a cliché, but it really doesn’t work if people don’t speak each other’s languages.

Recently, we have had more extreme weather events, such as hurricanes. For example, work by Kerry Emanuel shows evidence of an increase in hurricane intensity, but not frequency, that is attributable to greenhouse gases. However, not only do climate models struggle to capture hurricanes, but practically speaking, the havoc they wreak on humanity depends on both their intensity and frequency. Clearly, there is a great need to combine data and models to optimize the utility of both.

How important do you think machine learning and data-driven approaches will be for the field of climate modeling in the future?

At the highest level, they will be extremely important going forward. However, I worry about the way machine learning models are most commonly applied to various problems. Many people are using machine learning as a black box — they can download and use models without fundamental knowledge of how those models actually work. My own opinion is that we really do need data-driven approaches; however, we need to advance the use of data and machine learning by understanding how the latter actually works. In my own research field, there are collaborations working on climate problems that bring together theoretical computer scientists, mathematicians, physicists and, importantly, climate scientists; that kind of interaction will be really important going forward.

How important is multiscale modeling to complex system modeling?

It’s very important! As a matter of fact, the importance of multiscale modeling has been recognized for a very long time across a vast range of systems, and the natural question is: what is the dominant scale? The frameworks that have been developed to answer this question are really at the heart of many disciplines, ranging from materials science and condensed-matter physics to engineering and high-energy physics. A notable example is the renormalization group, whose key idea amounts to asking whether there is a scale that repeats itself and which scales need to be resolved to compute the behavior of a system. Similarly, in applied mathematics, there was a parallel development of asymptotic methods, particularly for differential equations, that exploits the self-similarity of solutions.
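To illustrate the ‘scale that repeats itself’ idea in the simplest possible setting, here is a minimal sketch (an editorial illustration, not part of the interview) of a renormalization-group step for the one-dimensional Ising chain, where summing out every other spin gives an exact recursion for the nearest-neighbour coupling:

```python
# Minimal sketch of the renormalization-group idea: decimating every other
# spin of a 1-D Ising chain gives an exact recursion for the coupling,
# K' = (1/2) * ln(cosh(2K)).
import numpy as np

def decimate(K):
    """One coarse-graining step for the 1-D Ising nearest-neighbour coupling."""
    return 0.5 * np.log(np.cosh(2.0 * K))

K = 2.0                       # start at a strong (low-temperature) coupling
for step in range(12):
    print(f"step {step:2d}: K = {K:.6f}")
    K = decimate(K)
```

Iterating the map drives the coupling toward zero, which is the renormalization-group statement that the one-dimensional chain has no finite-temperature ordered phase; in more complicated systems the same kind of flow identifies which scales and couplings dominate.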

What is it like to be a Nobel Committee member?

Nobel Laureates are nominated in the year that they receive the prize, meaning that there are no holdovers into the next year. The nomination deadline is at the end of January each year, and there is a protocol for nominations, leading to a great deal of work in organizing them into different areas. Each year, the committee also solicits feedback from experts in the community who assess certain aspects of the development of the fields. The exciting thing is that we get the absolute up-to-date understanding of the whole corpus of physics from the world’s experts every year, and we also get the history. As a result, we learn an enormous amount, and from a selfish standpoint, it is absolutely fantastic! On the other hand, time is a zero-sum game: it is a great deal of work to digest and interpret the reports we receive.

Each year, the committee writes a report of the entire effort and the recommendation for that year’s prize. This document is written in Swedish and only available to the physics class of the Royal Swedish Academy of Sciences. After 50 years, the names of the nominees and nominators are publicly available and historians of science can apply to gain access to the archive of materials. As committee members, we use computers for this effort but they are modified so that they cannot be connected to the Internet. Each year on the day of the announcement, the physics class gathers to make a final decision on the recommendation of the committee, and immediately following this the incipient laureates are called.

What is most exciting about being a Nobel Committee member?

The most exciting part is calling the winners: just hearing their responses is a fantastic moment. Think about it: they have been working in their field for many decades, and then someone calls them, telling them they have received the Nobel Prize. I mean, this is just so exciting to hear — to someone’s research career, it is like a first-order phase transition!

From Nature Computational Science, September 26, 2022
