
AI pioneer LeCun advises next-gen AI builders: ‘Avoid fixating on LLMs’


AI pioneer Yann LeCun stirred up a lively debate today by advising the next generation of developers to steer clear of working on large language models (LLMs). Speaking at VivaTech in Paris, LeCun emphasized that LLMs are already dominated by large companies and suggested focusing on next-gen AI systems instead.

The comments from Meta’s chief AI scientist and NYU professor sparked a flurry of questions and discussions on the limitations of current LLMs. When pressed for more details, LeCun hinted at working on the next generation of AI systems himself and encouraged others to compete in the same space.

The response on social media platforms like X was immediate, with developers and AI experts proposing various alternatives to LLMs. Suggestions included boundary-driven AI, multi-tasking and multi-modality, categorical deep learning, and more.

While some agreed with LeCun’s stance on moving beyond LLMs, dissenters argued that there is still untapped potential in working on these models. Some also pointed to Meta’s own extensive work on LLMs, suggesting LeCun might be trying to stifle competition.

LeCun’s critique of LLMs extended to their limitations in achieving human-level intelligence. He highlighted the lack of understanding of the physical world, reasoning capabilities, and hierarchical planning in current LLMs. Meta’s recent unveiling of the Video Joint Embedding Predictive Architecture (V-JEPA) was positioned as a step towards advanced machine intelligence.

The debate also reignited old rivalries in the AI community, with references to past disagreements between LeCun and other AI luminaries like Geoffrey Hinton. The fundamental disagreement over the future of AI, particularly in relation to LLMs, is likely to persist for some time.
