British-Canadian computer scientist Geoffrey Hinton, often referred to as the “godfather” of artificial intelligence (AI), has raised concerns that the technology could lead to human extinction within the next 30 years.
Professor Hinton, who was awarded the Nobel Prize in Physics earlier this year for his work, estimates there is a “10% to 20%” chance that AI could result in human extinction in the next three decades. This is an increase from his previous estimate of a 10% probability.
In an interview with BBC Radio 4’s Today programme, Mr Hinton was asked whether his views on a potential AI apocalypse had changed. He responded, “Not really, 10% to 20%.” Asked if the chances had increased, Hinton said, “If anything. You see, we’ve never had to deal with things more intelligent than us before.”
He continued, “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few such examples. There is a mother and child. Evolution did a lot of work in allowing the child to control the mother, but this is the only example I know of.”
Mr Hinton, who is also an emeritus professor at the University of Toronto, described humans as toddlers compared with advanced AI systems. “I like to think of it this way: imagine yourself and a three-year-old child. We’ll be the three-year-olds,” he said.
His concerns regarding the technology first surfaced publicly when he resigned from his role at Google in 2023 to speak more freely about the dangers of unregulated AI development. He warned that “bad actors” could exploit AI to cause harm.
Reflecting on the rapid progress of AI development, Hinton said, “I didn’t think it would be where we are now. I thought we would get here at some point in the future.”
He expressed concern that experts in the field now predict that AI systems could become smarter than humans within the next 20 years, saying this is “a very scary idea.”
Mr Hinton underlined the need for government regulation, noting that the pace of development was “very, very fast, much faster than I expected”. He warned that relying solely on large companies driven by profit motives will not ensure the safe development of AI. “The only thing that can force those big companies to do more research on safety is government regulation,” he said.