LLMs that Failed Miserably in 2024

Databricks spent $10 million developing DBRX, yet only recorded 23 downloads on Hugging Face last month.
Looks like the race to build large language models is winding down, with only a few clear winners. Among them, DeepSeek V3 has claimed the spotlight in 2024, leading the charge for Chinese open-source models. Competing head-to-head with closed-source giants like GPT-4 and Claude 3.5, DeepSeek V3 notched 45,499 downloads last month, standing tall alongside Meta’s Llama 3.1 (491,629 downloads) and Google’s Gemma 2 (377,651 downloads), according to Hugging Face.

But not all LLMs launched this year could ride the wave of success. Some fell flat, failing to capture interest despite grand promises. Here’s a look at the models that couldn’t make their mark in 2024.

1. Databricks DBRX

Databricks launched DBRX, an open-source LLM with 132 billion parameters, in March 2024.




Siddharth Jindal
Siddharth is a media graduate who loves to explore tech through journalism, putting forward ideas worth pondering in the era of artificial intelligence.