AI pioneer Yann LeCun stirred up a lively debate today by advising the next generation of developers to steer clear of working on large language models (LLMs). Speaking at VivaTech in Paris, LeCun argued that LLM development is already dominated by large companies and suggested that newcomers focus on next-generation AI systems instead.
The comments from Meta’s chief AI scientist and NYU professor sparked a flurry of questions and discussion about the limitations of current LLMs. When pressed for details, LeCun hinted that he himself is working on the next generation of AI systems and encouraged others to compete with him in that space.
The response on social media platforms like X was immediate, with developers and AI experts proposing various alternatives to LLMs. Suggestions included boundary-driven AI, multi-tasking and multi-modality, categorical deep learning, and more.
While some agreed with LeCun’s stance on moving beyond LLMs, dissenters argued that the models still hold untapped potential. Some cited Meta’s own extensive work on LLMs as evidence that LeCun might be trying to stifle competition.
LeCun’s critique of LLMs extended to their limitations in achieving human-level intelligence. He highlighted that current LLMs lack an understanding of the physical world, robust reasoning capabilities, and the ability to plan hierarchically. Meta’s recent unveiling of the Video Joint Embedding Predictive Architecture (V-JEPA) was positioned as a step toward advanced machine intelligence.
The debate also reignited old rivalries in the AI community, with references to past disagreements between LeCun and other AI luminaries like Geoffrey Hinton. The fundamental disagreement over the future of AI, particularly in relation to LLMs, is likely to persist for some time.