Groq Aims to Provide At Least Half of the World’s AI Compute, Says CEO

For Groq, it isn’t about overtaking NVIDIA but co-existing with it. 
When a company like NVIDIA becomes the preferred hardware option for most AI labs and rises to the status of the world’s biggest company, does that imply the GPU maker excels in every domain? Probably not.

American AI infrastructure provider Groq once shared an amusing comment in a blog post: “GPUs are cool for training models, but for inference, they’re slowpokes, leading directly to the great-model-that-no-one-uses problem.” The company is well on its way to beating NVIDIA at providing inference – the crucial process in which a pre-trained AI model applies its learnings to generate outputs.

Groq’s language processing unit (LPU) is purpose-built for AI inference and handles it far better than a traditional graphics processing unit (GPU). In a
Supreeth Koundinya
Supreeth is an engineering graduate who is curious about the world of artificial intelligence and loves to write stories on how it is solving problems and shaping the future of humanity.