Press "Enter" to skip to content

Nobel Prize in Physics awarded for neural networks

In 1981, at the California Institute of Technology, physicist John Hopfield worked with his colleagues Richard Feynman and Carver Mead to create a new course for students, “The Physics of Computation,” covering some of the most recent technological advances. It was one of the first classes of its kind and a forerunner of the many university programs in computer science that are so familiar to us today. The field was new to Hopfield, who had spent much of his earlier career working on solid-state physics and, later, on biological problems such as DNA synthesis.

Drawing on his experience in biology, Hopfield tied the concept of neural networks in the brain to computational modeling, developing the Hopfield network in 1982. The model is a kind of associative memory made up of a single layer of neurons, where each neuron is connected to every other neuron except itself. The network stores and recreates patterns using mathematics analogous to the behavior of atomic spins in magnetic materials, which in effect turns each node into a tiny magnet. Saved patterns correspond to low-energy states of the network, so when the network is fed a distorted or incomplete image, it settles into the nearest low-energy state, creating a ‘memory’ that fills in the missing data. This development was a forerunner of modern artificial intelligence networks, which are built from a similar structure of interconnected nodes.
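To make the mechanism concrete, here is a minimal sketch of a Hopfield network in Python with NumPy. It is an illustration under simple assumptions rather than the laureates’ code: the eight-neuron pattern and helper names like train and recall are invented for the example. Patterns are stored with a Hebbian rule, and recall works by repeatedly flipping neurons until the network settles into a low-energy state.

    # A minimal Hopfield network sketch (illustrative only).
    import numpy as np

    def train(patterns):
        """Build the weight matrix from +/-1 patterns via the Hebbian rule."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)      # strengthen links between co-active neurons
        np.fill_diagonal(W, 0)       # no neuron connects to itself
        return W / patterns.shape[0]

    def recall(W, state, steps=10):
        """Settle a (possibly corrupted) pattern into a stored one."""
        s = state.copy()
        for _ in range(steps):
            for i in np.random.permutation(len(s)):   # asynchronous updates
                s[i] = 1 if W[i] @ s >= 0 else -1     # flip toward lower energy
        return s

    # Store one tiny 'image' and recover it from a corrupted copy.
    stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
    W = train(stored)
    noisy = stored[0].copy()
    noisy[:2] *= -1                  # flip two 'pixels'
    print(recall(W, noisy))          # settles back to the stored pattern

Flipping two ‘pixels’ and running recall returns the original stored pattern, which is exactly the pattern-completion behavior described above.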

Around the same time, computer science professor Geoffrey Hinton took Hopfield’s work and expanded on it. Hinton developed the Boltzmann machine, a model that uses techniques from statistical mechanics to describe random processes; it takes its name from the Boltzmann distribution at its core. This type of model can learn to recognize characteristic features in data: it is trained by feeding it examples and then tested on how well it can categorize new objects.
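As a rough sketch of that training loop, the example below implements a restricted Boltzmann machine (a simplified variant with no connections within a layer) trained with one step of contrastive divergence, a shortcut Hinton introduced years later. Everything specific here is invented for illustration, including the six-pixel data, the layer sizes, the learning rate, and helper names like cd1_step; it shows the idea, not the original 1980s model.

    # A toy restricted Boltzmann machine trained with contrastive divergence.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample(probs):
        """Draw binary units from their activation probabilities."""
        return (rng.random(probs.shape) < probs).astype(float)

    def cd1_step(W, b_vis, b_hid, v0, lr=0.1):
        """One contrastive-divergence update from a batch of visible vectors v0."""
        h0_p = sigmoid(v0 @ W + b_hid)       # hidden probabilities given the data
        h0 = sample(h0_p)
        v1_p = sigmoid(h0 @ W.T + b_vis)     # the model's own reconstruction
        h1_p = sigmoid(v1_p @ W + b_hid)
        # Move weights toward the data statistics, away from the model's own.
        W += lr * (v0.T @ h0_p - v1_p.T @ h1_p) / len(v0)
        b_vis += lr * (v0 - v1_p).mean(axis=0)
        b_hid += lr * (h0_p - h1_p).mean(axis=0)

    # Learn a single repeated 6-pixel pattern.
    data = np.tile([1.0, 1, 0, 0, 1, 1], (20, 1))
    W = 0.01 * rng.standard_normal((6, 3))
    b_vis, b_hid = np.zeros(6), np.zeros(3)
    for _ in range(500):
        cd1_step(W, b_vis, b_hid, data)
    # Reconstruction probabilities should now resemble the training pattern.
    print(sigmoid(sample(sigmoid(data[:1] @ W + b_hid)) @ W.T + b_vis).round(2))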

More recently, in 2012, Hinton and two of his graduate students at the University of Toronto built a system that could analyze thousands of photos and teach itself to recognize and categorize common objects like flowers and cars. Hinton is often called “the Godfather of AI” for this work, and he and his students later moved to Google to continue it.

Now, both Hopfield and Hinton are sharing the Nobel Prize in Physics for their work on physics-inspired artificial neural networks. The Royal Swedish Academy of Sciences, which chooses the awardees each year, announced earlier this month that the award was being given for the development of powerful methods that laid the foundation for modern machine learning, including the creation of an associative memory that can store data and find properties within that data.

Alongside the incredible accomplishment, the laureates have shared their concerns about their own innovations. Hinton shocked the computing community when, in 2023, he quit his job at Google, where he had worked for more than a decade, in order to speak more freely about the dangers of AI. Hinton said in an interview that he now regrets his life’s work, arguing that it is hard to prevent bad actors from using it to do bad things. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said.

Hopfield also urges caution in the field, saying: “The world doesn’t need unlimited speed in developing AI. Until we understand more about the limitations of the systems you can make — where you stand on this hazard ladder — I worry.”

Some in the physics community have argued that the prizewinning work was computer science rather than physics. Hopfield responded to this, saying: “My definition of physics is that physics is not what you’re working on, but how you’re working on it. If you have the attitude of someone who comes from physics, it’s a physics problem.”

Hopfield, now in his 90s, and Hinton, 76, see themselves as passing the field on to a new generation of researchers. Hopfield also stresses the value of interdisciplinary learning and of uniting different fields. When asked whether he had any advice for PhD students, Hopfield pointed out how important it is to see the intersection points between different areas in order to solve problems: “I’ve always found the interfaces interesting because they contain interesting people with different motivations, and listening to them bicker is quite instructive.”