
Discoveries made by Nobel Prize winners in Physics explained

The Nobel Prize in Physics was awarded Tuesday to two scientists for discoveries that laid the basis for the artificial intelligence used by hugely popular tools like ChatGPT.

British-Canadian Geoffrey Hinton, known as the “Godfather of AI,” and American physicist John Hopfield were awarded the prize for “discoveries and inventions enabling machine learning with artificial neural networks,” the Nobel jury said.

But what are they, and what does it all mean? Here are some answers.

What are neural networks and machine learning?

Mark van der Wilk, an expert in machine learning at the University of Oxford, told AFP that an artificial neural network is a mathematical construct “completely inspired” by the human brain.

Our brain consists of a network of cells called neurons, which respond to external stimuli – such as things our eyes see or ears hear – by sending signals to each other.

When we learn things, some connections between neurons become stronger, while others become weaker.

Unlike traditional computing, which follows instructions like a recipe, artificial neural networks roughly mimic this process.

Biological neurons are replaced with simple computations—sometimes called “nodes”—and the incoming stimuli from which they learn are replaced by training data.

The idea is that this can allow the network to learn over time – hence the term machine learning.
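As a toy illustration (not the laureates' actual models), a single such "node" can be written in a few lines of Python: inputs are multiplied by connection weights and summed, and the weights are nudged whenever the output is wrong.

```python
# A single artificial "node" (a perceptron), trained on the logical AND
# function. A toy sketch only -- real networks use many such nodes.
def node(inputs, weights, bias):
    # Multiply each input by a connection weight, sum, then threshold.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# "Training data": pairs of inputs and the desired output.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # repeated passes over the training data
    for inputs, target in data:
        error = target - node(inputs, weights, bias)
        # Strengthen or weaken connections in proportion to the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([node(x, weights, bias) for x, _ in data])  # [0, 0, 0, 1]
```

After a few passes the connections settle into values that produce the right answer for every input, which is the "learning" in machine learning.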

What did Hopfield discover?

But before machines could learn, another human trait was necessary: memory.

Have you ever had difficulty remembering a word? You might run through similar-sounding words before finally landing on the right one.

“If you’re given a pattern that’s not exactly what you need to remember, you need to fill in the blanks,” van der Wilk said.

“That’s how you recall a specific memory.”

This was the idea behind the “Hopfield network” – also called “associative memory” – which the physicist developed in the early 1980s.

Hopfield’s contribution meant that when an artificial neural network is given something that is slightly wrong, it can cycle through previously stored patterns to find the closest match.

This proved to be a big step for AI.
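A minimal sketch of such an associative memory, using the classic Hebbian storage rule found in textbooks (a simplified illustration, not Hopfield's original formulation):

```python
import numpy as np

# Two patterns to memorise, stored as +1/-1 vectors.
patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [-1, -1, 1, 1, 1, -1]])

# Hebbian storage: connections between units that agree are strengthened.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, sweeps=10):
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            # Flip each unit toward agreement with its neighbours.
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# A corrupted cue: the first pattern with one unit flipped.
cue = np.array([1, -1, 1, -1, -1, -1])
print(recall(cue))  # settles back onto the first stored pattern
```

Given the slightly wrong cue, the network cycles through updates until it lands on the closest stored pattern — exactly the "fill in the blanks" behaviour described above.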

What about Hinton?

In 1985, Hinton revealed his own contribution to the field – or at least one of them – the Boltzmann machine.

Named after 19th century physicist Ludwig Boltzmann, this concept introduced an element of randomness.

This randomness is ultimately the reason why today’s AI-powered image generators can produce endless variations on the same prompt.
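The core trick can be illustrated with a single stochastic unit (a toy sketch, not a full Boltzmann machine): rather than a hard threshold, the unit switches on with a probability that depends on its input, so the same input can give different outputs on different runs.

```python
import math
import random

def stochastic_unit(total_input, temperature=1.0):
    # Instead of a hard threshold, the unit switches on with a probability
    # determined by its weighted input -- a Boltzmann-style stochastic rule.
    p_on = 1 / (1 + math.exp(-total_input / temperature))
    return 1 if random.random() < p_on else 0

random.seed(0)
# The same input yields different outputs across trials; this injected
# randomness is what lets generative models vary their answers.
samples = [stochastic_unit(0.5) for _ in range(10)]
print(samples)
```

Chain many such units together and you get a system that can sample new configurations rather than always reproducing the same one.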

Hinton also showed that the more layers a network has, “the more complex its behavior can become”.

This made it easier to “efficiently learn the desired behavior,” French machine learning researcher Francis Bach told AFP.
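In code terms, a "layer" is just multiplications and additions followed by a simple nonlinearity, and depth means composing such layers (a toy sketch with made-up random weights, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # One layer: multiplications and additions, then a simple nonlinearity.
    return np.maximum(0, W @ x + b)

x = rng.normal(size=4)                                # an input vector
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)  # first layer
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)  # second layer

# Depth means composing layers: the output of one feeds the next.
output = layer(layer(x, W1, b1), W2, b2)
print(output.shape)  # (2,)
```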

What is it used for?

Although these ideas were in place, many scientists lost interest in the field in the 1990s.

Machine learning requires extremely powerful computers capable of handling large amounts of information: such algorithms may need millions of images of dogs before they can reliably tell a dog apart from a cat.

So it wasn’t until the 2010s that a wave of breakthroughs “revolutionized everything related to image processing and natural language processing,” Bach said.

From reading medical scans to directing self-driving cars, from predicting the weather to creating deepfakes, the uses of AI are now so numerous that they have become difficult to count.

But is this really physics?

Hinton had already won the Turing Award, considered the Nobel of computer science.

But many experts said it was a well-deserved Nobel in physics, the science that started AI down its path.

French researcher Damien Querlioz explained that these algorithms were originally “inspired by physics by transferring the concept of energy to the field of computing”.

Van der Wilk said the first Nobel of its kind “acknowledges the contributions of the physics community, as well as the laureates, to the methodological development of AI.”

And while ChatGPT can sometimes make AI seem truly creative, it’s important to remember the “machine” part of machine learning.

Van der Wilk insisted, “There is no magic happening here.”

“After all, everything in AI is multiplication and addition.”

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
