Some of the world’s best-known names in artificial intelligence descended on the small ski resort town of Davos, Switzerland, this week for the World Economic Forum (WEF) annual meeting.
AI dominated many of the discussions among corporations, government leaders, academics, and nongovernmental groups. Yet a clear contrast emerged over how close current models are to replicating human intelligence and what the likely near-term economic impacts of the technology will be.
The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos.
Demis Hassabis, the Nobel Prize–winning CEO of Google DeepMind and the executive who leads the development of Google’s Gemini models, said today’s AI systems, as impressive as they are, are “nowhere near” human-level artificial general intelligence, or AGI.
Yann LeCun—an AI pioneer who won a Turing Award, computer science’s most prestigious prize, for his work on neural networks—went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve humanlike intelligence and that a completely different approach is needed.
Their views contrast starkly with those of top executives at Google’s leading AI rivals, OpenAI and Anthropic, who assert that their AI models are about to rival human intelligence.
Dario Amodei, the CEO of Anthropic, told an audience at Davos that AI models would replace the work of all software developers within a year and would reach “Nobel-level” scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years.
OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward “superintelligence,” or AI that would be smarter than all humans combined.
Can LLMs lead to artificial general intelligence?
In a joint WEF appearance with Amodei, Hassabis said there was a 50% chance that AGI would be achieved within the decade, though not through models built exactly like today’s AI systems.
In a later, Google-sponsored talk, he elaborated that “maybe we need one or two more breakthroughs before we’ll get to AGI.” He identified several key gaps, including the ability to learn from just a few examples, the ability to learn continuously, better long-term memory, and improved reasoning and planning capabilities.
“My definition of [AGI] is a system that can exhibit all the cognitive capabilities humans can—and I mean all,” he said, including the “highest levels of human creativity that we always celebrate, the scientists and artists we admire.” While advanced AI systems have begun to solve difficult math problems and tackle previously unproved conjectures, he said, AI will need to come up with its own breakthrough conjectures—a “much harder” task—to be considered on par with human intelligence.
LeCun, speaking at the AI House in Davos, was even more pointed in his criticism of the industry’s singular focus on LLMs. “The reason … LLMs have been so successful is because language is easy,” he argued.
He contrasted this with the challenges posed by the physical world. “We have systems that can pass the bar exam, they can write code … but they don’t really deal with the real world. Which is the reason we don’t have domestic robots [and] we don’t have level-five self-driving cars,” he said.
LeCun, who left Meta in November to found Advanced Machine Intelligence (AMI) Labs, argued that the AI industry has become dangerously monolithic. “The AI industry is completely LLM-pilled,” he said.
He said that Meta’s decision to focus exclusively on LLMs and to invest tens of billions of dollars in colossal data centers contributed to his choice to leave the tech giant. LeCun added that his view that LLMs and generative AI were not the path to human-level AI, let alone the “superintelligence” sought by CEO Mark Zuckerberg, made him unpopular at the company.
“In Silicon Valley, everybody is working on the same thing. They’re all digging the same trench,” he said.
The fundamental limitation, according to LeCun, is that current systems cannot build a “world model” that can predict what is most likely to happen next and connect cause and effect. “I cannot imagine that we can build agentic systems without those systems having an ability to predict in advance what the consequences of their actions are going to be,” he said. “The way we act in the world is that we know we can predict the consequences of our actions, and that’s what allows us to plan.”
LeCun’s new venture hopes to develop these world models using video data. But while some video AI models try to predict pixels frame by frame, LeCun’s approach operates at a higher level of abstraction, designed to correspond more closely to objects and concepts.
“This is going to be the next AI revolution,” he said. “We’re never going to get to human-level intelligence by training LLMs or by training on text only. We need the real world.”
What business thinks
Hassabis put the timeline for genuine human-level AGI at “five to 10 years.” Yet the trillions of dollars flowing into AI show the business world isn’t waiting to find out.
The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers.
According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity—if businesses can implement it effectively.
But Kumar told Fortune that most companies had not yet done the hard work of restructuring their operations or reskilling their workforces to take advantage of AI’s potential.
“That $4.5 trillion will generate real value in enterprises if you start to think about reinvention [of existing businesses],” he noted. He said it also required what he called “the integration” of human labor and digital labor conducted by AI.
“Skilling is no longer a side thing,” he argued. “It has to be a part of the infrastructure story for you to pivot people to the future, create higher wages and upward social mobility, and make this an endeavor which creates shared prosperity.”
This story was originally featured on Fortune.com