Big Tech’s AI Models are Lost in Logic

While AI researchers have long studied the subject, there is no sign the logical reasoning gap will be bridged
Some say large language models (LLMs) are a step towards AGI; others see them as merely a cool new tool. Every nook and cranny of the content-generation industry, from newsrooms to scriptwriters, fears being taken over by AI language models. These tools have a credible ability to write everything from a Shakespearean poem to code in several programming languages. Yet while the models can spew well-crafted sentences, they lack a fundamental human faculty: logical reasoning.

Turing awardee Yoshua Bengio said in an interview with AIM that the magnitude of data these systems possess is almost equivalent to a person reading every waking hour of every day, all their life, and then living 1,000 lives. Even so, they fail at reasoning. “LLMs are encyclopaedic thieves,” he said.

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.