AI Godfather warns

Geoffrey Hinton has once again warned about the rapid growth of AI. This time, he said that at this pace, these models may soon develop a private language of their own.


Geoffrey Hinton (Photo: Getty Images)

In short

  • Geoffrey Hinton warns that AI could develop a private language of its own
  • AI’s internal communication could become incomprehensible to humans, he said
  • Hinton did not prioritise AI safety early in his career

Geoffrey Hinton, whom many call the Godfather of AI, has issued yet another note of caution, and this time it sounds like something straight out of a sci-fi film. Speaking on the One Decision podcast, the Nobel Prize-winning scientist warned that artificial intelligence may soon develop a private language of its own, one that even its human creators will not understand.


“Right now, AI systems do their ‘chain of thought’ reasoning in English, so we can follow what they are doing,” Hinton explained. “But it gets more scary if they develop their own internal languages for talking to each other.”

AI, he says, could be led into unwanted territory. Machines have already demonstrated the ability to produce “terrible” thoughts, and there is no reason to believe those thoughts will always be in a language we can track.

Hinton’s words carry weight. He is, after all, the winner of the 2024 Nobel Prize in Physics, whose early work on neural networks paved the way for today’s deep learning models and large-scale AI systems. Even so, he admits he did not fully appreciate the dangers until much later in his career.

“I should have realised much sooner what the eventual threats were going to be,” he admitted. “I always thought the future was far off, and I wish I had thought about safety sooner.” That delay now fuels his advocacy.

One of Hinton’s biggest apprehensions concerns how AI systems learn. Unlike humans, who must share knowledge laboriously, digital minds can copy and paste what they know in an instant.

“Imagine if 10,000 people learned something and all of them knew it instantly, that is what happens in these systems,” he explained on BBC News.

This collective, networked intelligence means AI can scale its learning at a speed no human can match. Current models such as GPT-4 already surpass humans when it comes to raw general knowledge. For now, reasoning remains our stronghold, but that advantage, Hinton says, is shrinking rapidly.

While he is outspoken, Hinton says others in the industry are far less forthcoming. “Many people in the big companies are downplaying the risk,” he said, noting that their private concerns are not reflected in their public statements. A notable exception, he says, is Google DeepMind CEO Demis Hassabis, whom Hinton credits with showing genuine interest in tackling these risks.

As for his high-profile exit from Google in 2023, Hinton says it was not a protest. “I left Google because I was 75 and could no longer program effectively. But when I left, maybe I could talk more freely about all these risks,” he said.


While governments are taking initiatives such as the White House’s new “AI Action Plan”, Hinton believes regulation alone will not be enough.

The real task, he argues, is to build AI that is “guaranteed benevolent”, a tall order given that these systems may soon think in ways no human can fully follow.

– Ends
