In Wednesday’s Future Perfect newsletter, my colleague Dylan Matthews made the case for skepticism about this year’s Nobel Prize in Economics winners. His argument was that while their theories are interesting, there’s plenty of reason to doubt just how correct those theories are.
For several other Nobels this year, however, my skepticism runs in the opposite direction. The Physics Nobel was awarded this year to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”
The award unquestionably reflects serious, impressive, world-changing research, almost certainly some of the most impactful work out there. The hotly debated question is, well, whether this Nobel Prize in Physics should actually count as physics.
Together, Hopfield and Hinton did much of the foundational work on neural networks, which store new information by changing the weights between neurons. The Nobel committee argues that Hopfield and Hinton’s background in physics provided inspiration for their foundational AI work, and that they reasoned by analogies to molecule interactions and statistical mechanics when developing the early neural networks.
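That weight-based storage can be made concrete with a toy Hopfield-style network. The sketch below is an illustrative simplification, not Hopfield’s exact 1982 formulation: a pattern of +1/−1 neuron states is “memorized” by strengthening pairwise weights, and a corrupted copy is recovered by repeatedly updating each neuron toward agreement with the others.

```python
import numpy as np

def store(patterns):
    """Build a weight matrix from +/-1 patterns via a Hebbian rule:
    neurons that fire together get a stronger connection."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Repeatedly update all neurons; the state settles toward
    a stored pattern (a minimum of the network's 'energy')."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

pattern = np.array([[1, -1, 1, -1, 1, -1]])
w = store(pattern)
noisy = np.array([1, -1, 1, -1, -1, -1])  # one neuron flipped
print(recall(w, noisy))  # recovers the stored pattern
```

The memory here lives entirely in the weight matrix, not in any stored copy of the pattern, which is the core idea the committee credited, and which Hopfield analyzed using the mathematics of spin systems from statistical physics.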
That’s cool, but is it physics?
Some people aren’t buying it. “Initially, I was happy to see them recognised with such a prestigious award, but once I read further and saw it was for Physics, I was a bit confused,” Andrew Lensen, an artificial intelligence researcher, told Cosmos magazine. “I think it is more accurate to say their methods may have been inspired by physics research.”
“I’m speechless. I like ML [machine learning] and ANN [artificial neural networks] as much as the next person, but hard to see that this is a Physics discovery,” tweeted physicist Jonathan Pritchard. “Guess the Nobel got hit by AI hype.”
The resentment over AI stealing the spotlight only intensified when the Chemistry Nobel was announced. It went in part to Google DeepMind founder Demis Hassabis and his colleague John Jumper for AlphaFold 2, a machine-learning protein-structure predictor.
One of the hardest problems in biology is anticipating the many molecular interactions that influence how a protein printed from a given string of amino acids will fold up. Understanding protein structure better will dramatically speed drug development and foundational research.
AlphaFold, which can cut the time needed to understand protein structure by orders of magnitude, is a huge achievement and very encouraging about the eventual ability of AI models to make major contributions in this field. It’s surely Nobel-worthy — if there were a Nobel in biology. (There isn’t, so Chemistry had to do.)
The Chemistry Nobel strikes me as much less of a stretch than the Physics one; inasmuch as it inspired resentful grumbling, I suspect that’s primarily because along with the Physics award, it was starting to look like a trend. “Computer science seemed to be completing its Nobel takeover,” Nature wrote after the Chemistry award was announced.
The Nobels were betting on AI, declaring on one of the world’s most prestigious stages that the accomplishments of machine learning researchers constituted serious, respectable, and world-class contributions to the fields that had loosely inspired them. In a world where AI is an increasingly big deal and where a lot of people find it overhyped and extremely annoying, that’s a fraught statement.
Overhyped is a bad way to think about AI
Is AI overhyped? Yes, absolutely. There is a constant barrage of obnoxious, overstated claims about what AI can do. There are people raising absurd sums of money by tacking “AI” on to business models that don’t have much to do with AI at all. Enthusiasm for “AI-based” solutions often exceeds any understanding of how they actually work.
But all of that can — and, indeed, does — coexist with AI being genuinely a very big deal. The protein-folding achievements of AlphaFold happened in the context of preexisting contests on better protein-folding prediction, because it was well understood that solving that problem really mattered. Whether or not you have any enthusiasm for chatbots and generative art, the same techniques have brought the world cheap, fast, and effective transcription and translation — making all kinds of research and communication tasks much easier.
And we’re still in the very early days of using the machine learning systems that Hinton and Hopfield first laid out the framework for. I do think some people who position themselves as “against the AI hype” are effectively leaning against the wall of an early 20th-century factory saying, “Have you gotten electricity to solve all your problems yet? No? Hmmm, guess it wasn’t such a big deal.”
It was hard in the early 20th century to anticipate where electricity would take us, but it was in fact quite easy to see that the ability to hand off major chunks of human labor to machines would matter a lot.
Similarly, it is not hard to see that AI is going to matter. So while it’s true that there is an obnoxious and enthusiastic gaggle of clueless investors and dishonest fundraisers eager to tag everything with AI, and while it’s true that companies often systematically overstate how cool their latest models are, it’s not “hype” to see AI as an enormously big deal and one of the leading scientific and intellectual contributions of our day. It’s just accurate.
The Nobel Prize committee may or may not have been trying to ride the hype train — they’re just regular people with the same range of motivations as anyone else — but the work they identified really does matter, and we all live in a world that has been enriched by it.