Small Language Models Make More Sense for Agentic AI

An SLM should be small and efficient enough to run locally on a laptop, smartphone, or personal GPU, while still being fast and useful for real-world AI agent tasks.
There is a common misconception that the bigger a language model is, the better it performs. Since the emergence of large language models (LLMs) like GPT-4 and Claude, AI labs and researchers have been racing to build ever-larger systems with more parameters, greater computing demands, and higher costs. For instance, OpenAI, SoftBank, and Oracle plan to spend $500 billion on the Stargate Project to build a network of AI data centres and supporting energy infrastructure in Texas and other locations. The goal is to expand the computing capacity required to develop and run advanced AI models, particularly those created by OpenAI, namely ChatGPT. Meanwhile, Meta is on a hiring spree to build superintelligence.

However, a recent position paper from NVIDIA Research presents a provocative counterargument: small language models (SLMs), not ever-larger LLMs, are better suited to agentic AI.
Siddharth Jindal
Siddharth is a media graduate who loves to explore tech through journalism and put forward ideas worth pondering in the era of artificial intelligence.