This Indian AI Startup Proves LLMs No Longer Need Expensive GPUs

“With a 50 billion-parameter model, no GPU will be necessary during fine-tuning or inference.”
The future of running LLMs may no longer depend on expensive infrastructure or GPUs. While India works on developing its own foundational model under the IndiaAI mission, one startup is taking a different approach by exploring how to run LLMs efficiently on CPUs.

Founded on the principle of making AI accessible to all, Ziroh Labs has developed a platform called Kompact AI that enables sophisticated LLMs to run on widely available CPUs, eliminating the need for costly and often scarce GPUs for inference, and soon for fine-tuning models of up to 50 billion parameters.

“With a 50 billion-parameter model, no GPU will be necessary during fine-tuning or inference,” said Hrishikesh Dewan, co-founder of Ziroh Labs, in an exclusive interview with AIM.

Siddharth Jindal
Siddharth is a media graduate who loves to explore tech through journalism, putting forward ideas worth pondering in the era of artificial intelligence.