Yann LeCun, Meta’s chief AI scientist, says warnings about the danger of AI are complete nonsense

The man who played a major role in building the artificial intelligence we see today has reportedly said that we exaggerate both the advantages and the dangers of AI. Meta’s chief AI scientist Yann LeCun says AI doesn’t even have the intelligence of our pets, let alone humans.

Meta chief AI scientist Yann LeCun (Image: Meta)

Yann LeCun, a renowned computer scientist who is a professor at New York University and chief AI scientist at Meta, says that we overestimate both the pros and cons of artificial intelligence (AI). In an interview with the Wall Street Journal, LeCun said that we often overestimate how smart AI is, quipping that AI does not even have the intelligence of our pets, let alone humans. At the same time, he believes the harms associated with AI, chiefly the dangers that could come with its development, have also been exaggerated. Of warnings that AI could threaten humanity, he says, “It’s complete BS.”


Yann LeCun has been a key figure in the rise of artificial intelligence, particularly the development of deep learning. He co-created the convolutional neural network (CNN), a breakthrough that powers most of today’s image and speech recognition systems. His work has influenced advances in computer vision, natural language processing, and autonomous systems. As a founding member of Meta’s AI Research Lab (FAIR), LeCun has driven innovations that have helped make AI more accessible and scalable. His contributions have shaped modern AI applications, cementing his position as a key architect of the AI revolution.

Yann LeCun believes that concerns about AI are often exaggerated, especially compared with the more dramatic warnings issued by many experts in the industry. LeCun sees AI as an extremely valuable tool that is fundamental to Meta’s operations. It powers everything from real-time translation to content moderation, helping fuel Meta’s growth and contributing to the company’s $1.5 trillion valuation. His teams, which include the research lab FAIR and a product-focused division called GenAI, continuously advance large language models and other AI technologies, integrating them deeply into Meta’s products.

Still, despite recognizing the importance of AI, LeCun is skeptical of some of the more dire predictions from others in the field. He believes that today’s AI systems, while powerful, are not truly intelligent, and he often criticizes the exaggerated claims of AI startups and leaders such as OpenAI’s Sam Altman, who recently suggested that artificial general intelligence (AGI) could arrive “within a few thousand days”. LeCun argues that such predictions are premature: the world has not yet built a system that approaches even the cognitive abilities of a domestic cat, let alone anything close to human intelligence.

“It seems to me that before ‘urgently figuring out how to control AI systems smarter than us,’ we need to have the beginning of a hint of a design for a system smarter than a domestic cat,” LeCun wrote in a post on X.

LeCun’s perspective contrasts sharply with that of Geoffrey Hinton, who has become a vocal critic of the rapid development of AI. Hinton, who spent more than a decade at Google, was instrumental in developing the neural networks that serve as the backbone for popular AI models like ChatGPT and Bard. However, he has become concerned about the potential risks associated with AI. In 2023, Hinton made headlines when he left Google, warning about the dangers posed by the rise of powerful AI systems. He raised concerns about the spread of misinformation, the potential for AI to disrupt job markets, and the existential risks associated with machines surpassing human intelligence.

In a particularly ominous tone, Hinton highlighted the possibility of AI systems gaining the ability to manipulate human behavior. He suggested that advanced AI could become highly persuasive by leveraging its vast knowledge of literature, history, and political strategies, which could pose a threat to social stability. Hinton’s warnings add fuel to the ongoing debate about the potential dangers of AI, emphasizing the need for caution as the technology develops.

While LeCun acknowledges that AI poses challenges, he remains optimistic about its future and dismisses fears of imminent super-intelligent machines as far-fetched. For him, the focus should remain on innovation and on using AI’s capabilities to solve real-world problems. The differing approaches of LeCun and Hinton underscore a central tension in the AI field: whether to focus on mitigating hypothetical future dangers or on harnessing AI’s transformative potential today.

