AI guru Yann LeCun says ChatGPT is good at math, chess and coding but bad at handling reality
AI pioneer Yann LeCun emphasizes that large language models (LLMs) like ChatGPT may be good in domains like chess and mathematics, but these models cannot understand reality.


We live in a time when artificial intelligence (AI) is advancing rapidly. Tech giants like Google, Meta and OpenAI are racing to improve their respective AI models with the aim of reaching Artificial General Intelligence (AGI) – a state where AI can think and reason like humans. However, Yann LeCun, AI pioneer and former Chief AI Scientist at Meta, emphasizes that while AI models like ChatGPT may be very good at various tasks, they are not even close to understanding reality.
In a recent episode of Scientific Controversy, Yann LeCun was asked whether large language models (LLMs) like ChatGPT could reach AGI in the future or whether their limitations were already visible. LeCun pointed out that while LLMs have achieved notable success in some areas, their performance is saturating rather than broadening. According to him, the genuine improvements are concentrated in mathematics, chess, and code generation.
Why are AI models like ChatGPT good at math, chess, and programming?
LeCun explained that in domains like mathematics and programming, symbol manipulation genuinely produces results, and that this is a process well suited to LLMs, which work by stepping through sequences of symbols.
The AI guru explained, “No, they don’t. And in fact, we see performance saturating, and mathematics and code generation are two domains where manipulation of symbols really gives you something, it drives your thinking to some extent, right? I mean, you run it by intuition, but symbol manipulation actually has meaning.” He continued, “So this type of problem, LLMs can handle actually very well, where the logic actually involves searching through a sequence of symbols. But that’s only for a few problems.”
LeCun also drew a comparison with chess, another domain where a solution can be found by searching through sequences, whether of candidate moves in chess or of derivations in mathematics. He said, “Playing chess is another game. You search through sequences of moves, you know, for a good one, or in mathematics search through sequences of derivations that will produce a particular result, right?”
In short, LLMs do well in domains where searching through sequences of discrete symbols is enough to reach the desired result, as the sketch below illustrates. Real life, however, is quite different.
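To make that idea concrete, here is a minimal sketch, our own illustration rather than anything from the interview, of the kind of discrete symbolic search LeCun describes: a breadth-first search over sequences of toy “derivation rules” until one sequence turns a starting number into a target. The rule names and numbers are hypothetical; the point is only that the entire problem lives in a small, enumerable space of symbols.

```python
# Illustrative sketch (not LeCun's code): breadth-first search over
# sequences of symbolic operations until a derivation reaches a target.
from collections import deque

# Hypothetical toy "derivation rules": each maps one integer state to another.
RULES = {
    "+3": lambda n: n + 3,
    "*2": lambda n: n * 2,
    "-1": lambda n: n - 1,
}

def find_derivation(start: int, target: int, max_depth: int = 10):
    """Return the shortest sequence of rule names turning start into target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        value, path = queue.popleft()
        if value == target:
            return path
        if len(path) >= max_depth:
            continue
        for name, op in RULES.items():
            nxt = op(value)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

print(find_derivation(2, 9))  # ['+3', '*2', '-1']: 2 -> 5 -> 10 -> 9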
Why don’t AIs like ChatGPT perform well in reality?
LeCun argued that these domains represent only a small subset of the problems humans face. When it comes to navigating the physical world or handling tasks that require an understanding of continuous, high-dimensional environments, current models fall short. He explained, “But in the real world, you know, in higher-dimensional continuum things where discovery is concerned, like, how do I move my muscles, you know, grab this, you know, grab this, this glass here.”
Planning such real-world tasks is effortless for humans but far harder for LLMs to represent. He further added, “I’m not going to do it with my left hand, right? I have to change hands with it and then hold it, right? You need to plan and understand what’s possible, what’s not possible.”
Yann LeCun highlighted that language models have made significant progress on symbolic tasks, but their ability to interact with or model the complexities of the real world remains limited. Manipulating symbols and reasoning logically in narrow domains does not easily translate into the physical reasoning and planning that real-world activities require.
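The gap LeCun points to can be made vivid with a rough back-of-the-envelope calculation, ours rather than his: compare the branching factor of chess with even a crudely discretized version of continuous motor control. The joint count and bin count below are illustrative assumptions, not measurements.

```python
# Rough illustrative calculation (our own, not LeCun's): even a coarse
# discretization of continuous control explodes combinatorially next to chess.
CHESS_BRANCHING = 35     # commonly cited average number of legal chess moves
JOINTS = 7               # assumed joint count for a typical robot arm
BINS_PER_JOINT = 10      # assumed coarse 10-level discretization per joint

arm_branching = BINS_PER_JOINT ** JOINTS            # 10^7 candidate actions per step
print(f"chess actions per step:      ~{CHESS_BRANCHING}")
print(f"discretized arm actions:      {arm_branching:,}")        # 10,000,000
print(f"3-step lookahead, chess:     ~{CHESS_BRANCHING ** 3:,}")  # 42,875
print(f"3-step lookahead, arm:        {arm_branching ** 3:.2e}")  # 1.00e+21
```

Even this understates the problem: real motor control is continuous in time as well as in value, so there is no finite list of moves to search through at all.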
LeCun’s comments indicate that further progress in AI will require approaches that go beyond current language models, especially if the goal is AGI or more sophisticated forms of machine intelligence.