Fractal Releases Fathom-R1-14B Reasoning Model Built on DeepSeek for $499

Fathom-R1-14B is a 14-billion-parameter model derived from DeepSeek-R1-Distill-Qwen-14B.

Fractal, a Mumbai-based AI company, has launched a new open-source large language model, Fathom-R1-14B. The model delivers mathematical reasoning performance that surpasses o1-mini and o3-mini and approaches o4-mini levels, all at a post-training cost of just $499.

The model is available to try on Hugging Face, and the codebase is on GitHub. It is available under the MIT license, along with datasets and training recipes.

Developed as part of a proposed initiative to build India’s first large reasoning model under the IndiaAI Mission, Fathom-R1-14B is derived from DeepSeek-R1-Distill-Qwen-14B.

“We proposed building India’s first large reasoning model as part of the IndiaAI mission. We proposed building three models (a small one, a mid-sized one, and a large one with 70 billion parameters),” said Fractal CEO Srikanth Velamakanni in a LinkedIn post.

He added, “This is just a tiny proof of what’s possible.”

On the olympiad-level exams AIME-25 and HMMT-25, Fathom-R1-14B achieves 52.71% and 35.26% Pass@1 accuracy, respectively. With additional inference-time compute (cons@64), the scores rise to 76.7% and 56.7%.

“It delivers performance rivalling closed-source o4-mini (low) with respect to cons@64, all while staying within a 16K context window,” the company said.
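For readers unfamiliar with the metric, cons@64 (consistency at 64) samples 64 answers per problem and scores the majority-vote answer, rather than a single generation as in Pass@1. A minimal sketch of the voting step, with hypothetical sampled answers:

```python
from collections import Counter

def cons_at_k(sampled_answers):
    """Majority vote over k sampled final answers (the cons@k aggregation step)."""
    counts = Counter(sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical: 64 sampled final answers to one AIME-style problem.
samples = ["42"] * 40 + ["17"] * 14 + ["6"] * 10
print(cons_at_k(samples))  # prints "42"
```

The voted answer is then compared against the ground truth; averaging that correctness over the benchmark gives the cons@64 score.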

The model was post-trained using supervised fine-tuning (SFT), curriculum learning, and model merging. 

“We perform supervised fine-tuning on carefully curated datasets using a specific training approach, followed by model merging,” the company said. 
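The company does not detail its merging method, but the simplest form of model merging is linear weight averaging of checkpoints with identical architectures. A minimal sketch under that assumption (plain floats stand in for tensors; the key name is illustrative):

```python
def merge_weights(state_a, state_b, alpha=0.5):
    """Linearly interpolate two checkpoints' parameters, key by key.

    Assumes both state dicts share the same keys and shapes.
    """
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

# Toy checkpoints with a single (hypothetical) parameter.
ckpt_a = {"layer.weight": 1.0}
ckpt_b = {"layer.weight": 3.0}
print(merge_weights(ckpt_a, ckpt_b))  # prints {'layer.weight': 2.0}
```

In practice, merging is done over full model state dicts (e.g. with tools such as mergekit), and more elaborate schemes than uniform averaging exist; this only illustrates the basic idea.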

Fractal has also introduced a separate variant, Fathom-R1-14B-RS, which achieved similar results using a combination of reinforcement learning and SFT at a cost of $967.

Last year, the company launched Vaidya.ai, a multi-modal AI platform designed to offer free and accessible healthcare assistance. Meanwhile, Sarvam, the startup selected for building India’s foundational LLM under the IndiaAI Mission recently unveiled Sarvam-M, a 24-billion parameter open-weights hybrid language model built on top of Mistral Small.


Siddharth Jindal
Siddharth is a media graduate who loves to explore tech through journalism and putting forward ideas worth pondering about in the era of artificial intelligence.