Mixture of Experts Will Power the Next Generation of Indic LLMs

With Mixture of Experts (MoE), India can blend existing language experts into a single multilingual LLM while keeping training and resource costs low.
The potential of MoE for building Indic LLMs is immense. In a recent podcast with AIM, CognitiveLab founder Aditya Kolavi said the company has been using the MoE architecture to fuse Indian languages and build multilingual LLMs. "We have used the MoE architecture to fuse Hindi, Tamil, and Kannada, and it worked out pretty well," he said.

Similarly, Reliance-backed TWO has released its AI model SUTRA, which uses MoE and supports 50+ languages, including Gujarati, Hindi, Tamil, and more, and which the company claims surpasses GPT-3.5. Ola Krutrim is also leveraging Databricks' Lakehouse Platform to enhance its data analytics and AI capabilities, while hinting at using MoE to power its Indic LLM platform.

Apart from Indic LLMs, GPT-4, Mixtral-8x7B, Grok-1 and DBRX are also built on the MoE architecture.
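To make the idea concrete, here is a minimal sketch of a sparse MoE layer in PyTorch. The per-language experts (Hindi, Tamil, Kannada) and the top-k routing shown here are illustrative assumptions for this article, not the actual implementation used by CognitiveLab, SUTRA, or Krutrim; real systems train the router and experts jointly inside a full transformer.

```python
# A minimal, illustrative sparse Mixture-of-Experts layer (PyTorch).
# Experts and routing strategy are assumptions for demonstration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, d_model: int = 512, d_ff: int = 2048,
                 num_experts: int = 3, top_k: int = 1):
        super().__init__()
        # One feed-forward expert per language in this toy example
        # (e.g. Hindi, Tamil, Kannada).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # The router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> flatten to individual tokens for routing.
        batch, seq_len, d_model = x.shape
        tokens = x.reshape(-1, d_model)

        # Pick the top-k experts per token and normalise their gate weights.
        gate_logits = self.router(tokens)
        weights, indices = torch.topk(gate_logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)

        # Each token is processed only by its selected expert(s),
        # which is what keeps compute costs low at scale.
        out = torch.zeros_like(tokens)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape(batch, seq_len, d_model)


# Toy usage: route a batch of token embeddings through three language experts.
layer = MoELayer(num_experts=3, top_k=1)
dummy = torch.randn(2, 16, 512)
print(layer(dummy).shape)  # torch.Size([2, 16, 512])
```

The key point of the design is that only the routed expert runs for each token, so adding another language expert grows the parameter count without proportionally growing the per-token compute.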