India’s Industrial AI Moment: Why VCs Need to Partner with Universities and Startups Now https://analyticsindiamag.com/ai-startups/indias-industrial-ai-moment-why-vcs-need-to-partner-with-universities-and-startups-now/ Tue, 30 Sep 2025 08:25:39 +0000 https://analyticsindiamag.com/?p=10178520

Structured collaboration offers a defensible sourcing advantage, providing access to proprietary technologies before they enter the open market.

India’s universities and technology institutes have long produced cutting-edge industrial research. From predictive modelling for polymers to AI for cybersecurity, some of the most ambitious industrial AI innovations are sprouting inside labs.

These hubs of innovation, though, are faced with a glaring question: how to scale these breakthroughs into viable, globally competitive businesses?

Collaborations between academic institutions, startups, and venture capitalists could bridge the stubborn “lab-to-market” gap and aid India in emerging as a hub for industrial AI.

Research to Returns

The partnership between the TCG Centre for Research and Education in Science and Technology (CREST) and Haldia Petrochemicals is an example of research translating into industrial value. The project aimed to address a long-standing challenge in polymer production: predicting the Melt Flow Index (MFI), a critical quality metric, in real-time.

Professor Goutam Mukherjee, director, Institute of Advancing Intelligence at TCG CREST, said, “Approximately 99% of the data records are computed using effective imputation techniques.” The final prediction combined MFI forecasts with predicted error corrections to produce robust outcomes, he explained, adding that the project not only eliminated the four-hour delay in obtaining MFI readings, but also improved profitability and agility.
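
The pattern Mukherjee describes (impute missing records, forecast MFI, then learn an error correction and add it back) is a standard residual-correction setup. The sketch below is a minimal illustration of that general pattern in scikit-learn, using synthetic sensor data and hypothetical features; it is not the CREST-Haldia implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split

# Synthetic stand-in for plant data: rows are time windows, columns are process
# sensors; y is the lab-measured Melt Flow Index (MFI). All values are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
X[rng.random(X.shape) < 0.3] = np.nan                       # records with missing readings
y = 2.0 + 0.5 * np.nan_to_num(X[:, :3]).sum(axis=1) + rng.normal(0, 0.2, 5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: fill missing sensor readings (the article cites ~99% of records
# being completed via imputation techniques).
imputer = SimpleImputer(strategy="median")
X_train_f = imputer.fit_transform(X_train)
X_test_f = imputer.transform(X_test)

# Step 2: a base model forecasts MFI from the imputed features.
base = GradientBoostingRegressor(random_state=0).fit(X_train_f, y_train)

# Step 3: a second model learns the base model's residuals (the error correction).
residuals = y_train - base.predict(X_train_f)
corrector = GradientBoostingRegressor(random_state=0).fit(X_train_f, residuals)

# Final prediction = MFI forecast + predicted error correction.
mfi_pred = base.predict(X_test_f) + corrector.predict(X_test_f)
print("mean absolute error:", np.abs(mfi_pred - y_test).mean())
```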

What’s notable is not just the technical achievement, but the model of collaboration itself. As Mukherjee put it: “Theoretical research is good, but at the same time, we must explore its utility for the society and the business.”

Detect Technologies, incubated from IIT Madras, has developed its flagship product T-Pulse, which provides real-time health monitoring of assets in heavy industries such as oil & gas and steel. It is frequently mentioned in lists of the top AI industrial automation companies in India.

Chakr Innovation, founded by IIT Delhi alumni, developed retrofit emission control devices that reduce diesel generator emissions by up to 90%. The company holds multiple patents and has received policy approvals.

Why Industrial AI Is Harder to Scale

Haldia’s case shows how academic collaboration can yield immediate benefits. But scaling such models across industries requires confronting the structural challenges of industrial AI.

Unlike consumer apps or SaaS tools, industrial AI solutions are deeply contextual. They demand domain knowledge in areas as varied as chemical engineering, power systems, automotive manufacturing, and logistics. They also require significant capital to build prototypes, run pilots, and integrate into real-world plants where downtime is expensive.

This is where venture capitalists often hesitate. Shashank Randev, founder and general partner at 247VC, said at Cypher 2025 that some founders underestimate the life cycle of the sales process. “From a paid proof of concept to actually generating revenue, and then figuring [out] the efficacy of that product at the enterprise level; that cycle is what we are essentially trying to shorten for our portfolio.”

For many startups, that “pilot purgatory” becomes a graveyard. 

The Academic Spinout Opportunity

Potential AI ventures are plentiful at academic institutions but taking them to market remains a challenge.

Aditya Singh Gaur, deputy manager at C3iHub, IIT Kanpur, said that a significant ‘lab-to-market’ gap prevents breakthroughs from reaching venture-backed scale. “The core challenge lies in a scarcity of structured commercialisation pathways that can de-risk early-stage technology,” he added. 

Gaur advocates for dedicated translational research platforms and deep-tech incubators that provide patient capital, shared infrastructure, and industry partnerships. Equally important are university spinout mechanisms with clear, founder-friendly IP policies. Without these, researchers often lack the incentives or legal clarity to convert their work into startups.

Prasanjeet Sinha, incubation manager at C3iHub, IIT Kanpur, added that VCs want more than surface-level engagements like hackathons and demo days, as these are now seen as insufficient for generating high-quality deal flow. “The industry is leaning towards a deeper, more strategic engagement that provides proprietary access to defensible technology and high-potential founding teams,” Sinha said.

From Hackathons to Real Industry Pilots

So what does “deeper engagement” look like? According to Sinha, models that work include university spinouts with clear IP licensing frameworks, cofunded pilot programs in authentic industrial settings and early access to entrepreneurial talent nurtured into founding teams.

The barrier, he notes, lies in the absence of standardised frameworks. Ambiguity around IP, a lack of co-investment pathways, and weak startup readiness programs hinder collaborations from being repeatable rather than ad hoc.

For VCs, the payoff of solving this is huge. Structured collaboration offers a defensible sourcing advantage, providing access to proprietary technologies before they enter the open market. For universities, it creates a culture where research is not only publishable, but also buildable.

Ecosystem Gaps India Must Solve

For India to genuinely advance in industrial AI, it is imperative to address several critical gaps in the ecosystem. 

Gaur highlights four urgent needs: specialised AI talent for sectors like manufacturing, enhanced access to large-scale GPU/TPU clusters for startups and researchers, the establishment of industrial testbeds for real-world experimentation, and global partnerships for collaborative strategies and data access. Addressing these will position India as a leader in industrial AI, he said. 

Without these, India risks falling behind countries where universities, corporations, and investors are already closely aligned in their pursuit of industrial AI, he added. 

Why VCs Should Care

From a VC perspective, the incentive is not just patriotic; it’s financial. Industrial AI startups may take longer to mature, but once entrenched, they become deeply defensible businesses. Enterprise clients are sticky, integration is complex, and switching costs are high.

As Randev highlighted, the challenge is ensuring these startups can scale beyond one or two enterprise customers. He said that while evaluating, the questions VCs face are: whether they can find enterprise customers, will this model work, and will they be able to replicate it for 10–15 others?

For VCs willing to engage early with institutional platforms, the upside is privileged access to startups that can dominate global industrial niches. They need to grow from being financiers to active co-creators in the lab-to-market pipeline. 

Mukherjee reflected on his own journey since joining TCG CREST and said he had realised that, “if you work with a problem which comes up from a business point of view, it gives you more problems for your academic institution.”

In other words, collaboration doesn’t just transfer knowledge outward; it deepens the research itself. For investors, that is perhaps the biggest reason to get involved.

This Indian AI Startup is Using Google Veo 3 to Create Microdramas  https://analyticsindiamag.com/ai-startups/this-indian-ai-startup-is-using-google-veo-3-to-create-microdramas/ Wed, 24 Sep 2025 12:34:21 +0000 https://analyticsindiamag.com/?p=10178095

Dashverse’s Raftaar has already crossed 1 million views on the company’s DashReels app.

India has developed a love for short microdramas. As Instagram and TikTok reels grew in popularity, they shaped a new habit among viewers who now binge dramas that run just one to two minutes per episode. 

According to a report, the global short-drama market, driven by vertical, bite-sized video content, has grown to $7.2 billion from $6.5 billion last year, with projections indicating it could double by 2030.

Dashverse is tapping into this trend with a twist, relying on generative AI to power its stories. 

The company recently launched Raftaar, India’s first AI-generated microdrama, created using the company’s proprietary generative video platform, Frameo.AI. The series has already crossed 1 million views on the company’s DashReels app.

The 90-minute serialised narrative was produced with Frameo.AI, which the company said reduced production time by 50% and costs by 75% compared to traditional video production. The launch has accelerated DashReels’ growth, with the app surpassing 10 million cumulative installs and generating $2 million in revenue in August.

The company plans to scale production by launching 100 additional AI-generated microdramas by the end of 2025. The premiere of Raftaar came shortly after Dashverse closed a $13 million Series A funding round led by Peak XV Partners with participation from Z47 and Stellaris Venture Partners. 

Journey of Dashverse 

CTO Soumyadeep Mukherjee told AIM that before founding Dashverse in 2023, he led robotics and infrastructure teams at Udaan. His early exposure to computer vision and robotics convinced him that generative AI would open new creative possibilities.

“My background has been mostly around deep tech and AI, or actually robotics is what I started with,” he recalled. “When generative AI started showing promise in images, I knew this was the space to build in.”

Alongside two co-founders, including a longtime school friend from Bangalore, Mukherjee launched DashToon, an AI-driven webtoon app. 

Founded in 2023 with Sanidhya Narain and Lalith Gudipati, the team built DashToon to let users both read vertically scrollable comics and create them through DashToon Studio. The goal was to establish a flywheel where creators could produce, publish, and earn from their work.

Dashverse is not alone. OpenAI is backing the production of an AI-animated feature film called Critterz, created largely using AI tools, including GPT-5 and image-generating models. 

Mukherjee said, “The entertainment industry will grow more because there will be more creators, and as the number of creators increases, the market will grow.” He added that the market will see deep AI penetration, and with OpenAI driving much of this, a huge opportunity is in the offing.

Tech Stack

Mukherjee said that Dashverse is not building models to compete with Google Veo 3 or OpenAI Sora, but instead layering its strengths on top. “We’re not competing with them; we actually use them. As they get better, we get better. Our expertise is in understanding creators’ needs and bridging the gap between imagination and output.”

Dashverse is one of the largest users of Veo 3 in India, he added. 

While competitors like Sora and Veo 3 are advancing video models, Dashverse is focusing on solving two unique challenges — consistency and controllability.

Mukherjee said that their technology ensures that a character looks the same across 100 frames, which is essential for long-form storytelling. 

The company has invested heavily in compute infrastructure, including 32 H100 GPUs for training. Inference, however, is handled on the cloud with a custom pipeline. “At least $2 million of our $13 million Series A went into GPUs. Technology development is 70% of our spend,” he revealed.

Team Size and Process 

Excluding creators who are on contract, Dashverse is about 30 people strong, Mukherjee said.

On the production side, nearly 80–90% of the workflow, he said, takes place inside Dashverse’s in-house tool. This includes image generation, animation, video assembly on a timeline, and iterative editing to meet creative taste.

Challenges, Adoption, and the Road Ahead

The company’s current focus is on Tier-2 and Tier-3 Indian audiences, who have long been underserved by mainstream entertainment. “Our content is more a replacement for daily soaps than just short videos,” said Mukherjee.

But challenges remain. “The hardest problem is still the quality of AI-generated content. My holy grail is to make a movie like Inception or Dune with AI. Every day, we push closer by solving consistency, avatars, or orchestration issues. But it’s still a journey.”

Looking ahead, Dashverse wants to open its tools to more creators, expand collaborations with studios, and ultimately dominate the short-drama space. “The intent is to be the largest media house in the world. If we have the creators, we’ll have the consumers.”

Mukherjee acknowledges that the race won’t be easy. “The only fear is not growing fast enough, and someone else beats us to building the first AI micro-drama business.”

Karnataka IT Minister Steers AI Past Bengaluru Into Panchayats and Tier-2 Cities https://analyticsindiamag.com/ai-startups/karnataka-it-minister-steers-ai-past-bengaluru-into-panchayats-and-tier-2-cities/ Wed, 24 Sep 2025 07:30:00 +0000 https://analyticsindiamag.com/?p=10178014

“I'd be a little more cautious and ensure that I push AI across different demographies and socio-economic backgrounds.”

The story of artificial intelligence in Karnataka is usually told through Bengaluru’s glass towers, unicorns, and global capability centres (GCCs). But Priyank Kharge, the state’s minister for electronics, IT, BT, and rural development and panchayat raj, wants to broaden the lens. At Cypher 2025, India’s largest AI expo, he outlined a future where AI is not just about high-tech exports but about solving everyday problems in panchayat offices, flood-hit farms, and cattle sheds.

“Every time there is a flood, the crop losses are huge in rural areas,” Kharge said. “We get a lot of manipulated photos claiming that there has been a huge crop loss. So we are using AI to detect whether the image has been tampered with. That way, we will be able to give the right farmers the right compensation.”

This use case may look simple compared with generative models or self-driving cars, but for farmers awaiting relief, the stakes are immediate. Trust in governance and food security depend on getting such systems right.

The minister also cited Karnataka’s experiment with livestock tracking. “Muzzle printing, the nose of cattle has different prints, just like fingerprints,” he said. “We are using AI and machine learning to see how we can… keep count of livestock.”

By creating a database that recognises unique nose patterns, the state hopes to prevent under-reporting or false claims. The benefits could stretch across subsidies, veterinary care, and even export regulation. Kharge framed it as governance innovation as much as technological advancement.

Rural AI with Guardrails

Kharge is clear that AI adoption cannot be rushed across all contexts. “I would be a little more cautious (on AI) and ensure that I push AI across different demographies and socio-economic backgrounds,” he said. “We don’t want to push it in all directions as of now. But yes, AI is important for all, especially if we’re able to cut down processes in governance.”

The minister further elaborated, stating that Karnataka already has a strong digital foundation to build upon. More than 600 citizen services are available online, and rural internet penetration is over 90%. The state is experimenting with chat-style services in local languages. 

Yet Kharge warned against leaving the masses behind: “We don’t want to do with AI what happened with YouTube, it took 18 years to reach my mother. We must make sure AI reaches the masses faster, but responsibly.”

Startups as Partners in Governance

The minister said that startups can play a central role in making AI useful for governance, provided their work scales. The government’s ‘Grand Challenges’ programme has become one such bridge. Structured problem statements, covering areas such as water, waste, and mobility, are thrown open to innovators.

“We had more than 60 startups that participated,” Kharge said. “We got around 15 startups involved with us, and over seven to eight of them are actually working to solve the problem for the government.”

For young companies, this provides credibility and a chance to pilot solutions with government departments. For the state, it ensures that innovation remains anchored in civic needs rather than abstract technology. “Nothing stops us from adopting private sector solutions if they work,” Kharge said. “Through the Grand Challenges, we are building a bridge where startups can become the government’s first partners, whether it is in healthcare, education, or sustainability.”

Karnataka also funds startups directly. According to Kharge, the state has already supported 1,068 ventures with grants of ₹50 lakh each. This year’s Elevate programme drew 1,700 applications, reflecting strong demand for government-backed support.

Beyond Bengaluru

For Kharge, spreading innovation beyond the capital is both a necessity and a strategy. “Beyond Bengaluru” has become a flagship policy, offering tax incentives and funding only to those setting up outside the city.

“All the subsidies, all the tax incentives, everything… kicks in if you’re investing beyond Bengaluru, whether it’s in Belagavi, Ballari, Manipal, Mysuru, Mandya, anywhere,” he said.

The push is already showing results. Mysuru exported close to ₹4,000-5,000 crore in IT services last year. Hubballi-Dharwad reported around ₹2,000 crore, while Mangaluru generated close to ₹3,000 crore. The state is also rolling out the Deep Local Economic Acceleration Programme to establish more incubators, accelerators, and industrial hubs in tier-2 and tier-3 cities.

“Bengaluru became Bengaluru in three decades,” Kharge said. “We want to set up another Bengaluru in Mysuru or Mangaluru, but it will take time. We are heavily focused on that.”

GCC Powerhouse

While rural AI and Beyond Bengaluru dominate his vision, Kharge is also keen to underline Karnataka’s global strength. “The global capability centres are becoming bigger and bigger. About 47% of all GCCs that are in India are in Karnataka,” he said.

The state is home to more than 850 GCCs, houses over 20,000 registered startups, and hosts 45 unicorns. Its IT exports last year touched ₹4.5 lakh crore. “Out of 110 unicorns, more than 45 are in my state,” Kharge said. “More than 30% of them are into deep tech.”

Karnataka also accounted for more than 45% of the 77 million square feet of office leasing across India last year. In the first half of this year alone, it saw nearly 30 million square feet absorbed, nearly ten times the volume of its closest rival, Maharashtra.

Infrastructure and Compute Challenges

Kharge admitted that infrastructure remains Bengaluru’s biggest pain point. Congested roads and civic strain are pushing talent outward. The state has allocated ₹7,000 crore for city infrastructure this year and is building metro extensions, tunnels, and new corridors. But Kharge said a fairer share of tax revenues from the Centre is essential. “For every ₹100 Karnataka pays, I get back 12 rupees,” he said. “Give me 25 rupees, give me 50 rupees, I will build better roads and create more jobs.”

On the technology side, the state is lobbying for affordable compute. “Technology is not a problem. Technology infrastructure is,” Kharge said, pointing to GPU shortages as a barrier. Karnataka has requested a role in national AI compute initiatives to accelerate access for its startup ecosystem.

Guarding Against AI Misuse

Kharge also addressed the risks of AI misuse, from misinformation to deepfakes. Karnataka is preparing a state-level misinformation bill to curb the dissemination of false content, following reviews by the legal and home departments. “Misinformation, malinformation, disinformation, and fake news… this is something that the government is extremely keen on addressing,” he said.

The state is studying regulatory approaches in the EU and the US to craft its own framework for ethical AI use. “Nobody likes to be regulated,” Kharge said. “But if you don’t regulate certain aspects of AI, it’s going to be counterproductive.”

A Balancing Act

Kharge’s message at Cypher 2025 was consistent: Karnataka’s AI story is not only about scale and exports but also about impact and inclusivity. AI in rural governance may not make global headlines, but it makes the difference between fair compensation and loss for farmers, or between accurate livestock counts and flawed subsidies.

The minister positioned Beyond Bengaluru as the next frontier. Innovation, he argued, must spread to smaller cities, supported by infrastructure, compute, and policies that keep startups anchored to real problems.

“Technology is not a problem. Technology infrastructure is,” Kharge said. “What we need is to apply AI where it delivers immediate value, support startups with policy, and make compute affordable for all.”

H-1B Shockwave: What a $100,000 Visa Fee Means for Indian AI Startups https://analyticsindiamag.com/ai-startups/h-1b-shockwave-what-a-100000-visa-fee-means-for-indian-ai-startups/ Mon, 22 Sep 2025 13:30:00 +0000 https://analyticsindiamag.com/?p=10177914

“The proposed fee could act as a catalyst in strengthening India’s AI talent ecosystem.”

When the Trump administration announced a one-time $100,000 application fee for new H-1B visas, India’s $283 billion IT services industry reeled. Executives scrambled for clarity. AI startups, in particular, began calculating how this policy shift might alter their growth plans.

Large IT firms worry about costs and delivery models. Startups face a different equation.

Resilience in the Face of Visa Barriers

Contrary to the panic in legacy IT corridors, investors believe AI startups will not be as badly hit.

Abhishek Prasad, managing partner at Cornerstone Ventures, said, “Startups usually don’t target H1B quotas for their US efforts and end up either seeking short-term visas for founders or key team members to visit customers or investors. I don’t see a significant impact on startups scaling in the US market.”

Cash-constrained startups prefer agility over bureaucracy. Instead of relocating engineers, they either hire locally in the US or operate remotely. As some mature, they shift headquarters westward, with hiring dominated by American engineers rather than Indian deputations.

“For other startups, wanting to scale abroad, that same return of talent could also unlock quicker sales cycles and faster trust-building with US buyers as leaders who know the market will bring back context and relationships,” said Piyush Kedia, founder and CEO, InCommon.

Still, the new fee raises the threshold of entry. Scaling in the US will now demand stronger partnerships and capital reserves, not cross-border transfers.

For some, the moment validates earlier bets. Sangram Raje, cofounder and CTO of Prodigal, said, “Back in Trump’s first term, I was convinced Indian machine learning talent would eventually move back because US immigration policies would tighten. At the time, it wasn’t obvious, but with the new $100,000 H-1B rule in his second term, that thesis has played out.”

Turning Crisis into Opportunity

Others see the fee as a chance for India to reverse brain drain.

Madhav Krishna, CEO and founder of Vahan.ai, said, “I believe the proposed fee could act as a catalyst in strengthening India’s AI talent ecosystem. One of the biggest challenges India has faced is brain drain, where our best engineers and researchers often move abroad in search of opportunities.”

Krishna believes changes in visa structures could encourage more people to stay back and build in India. Some may even return.

“India possesses a unique combination of abundant data, diverse linguistic and cultural contexts, and a vibrant startup ecosystem. Together, these factors create an exceptional environment for AI innovation,” he said.

If top engineers stay, India could accelerate homegrown solutions in employment, healthcare, and education. These could serve domestic needs and scale globally.

Shantanu Gangal, cofounder and CEO of Prodigal, links the policy to shifting career choices. “The fee on H-1B visas makes the traditional pathway of pursuing an MBA or master’s in the US much harder to justify. But for engineers today, especially in AI, the equation looks very different. In this field, the market rewards skills and what you can build over pedigree.”

“Don’t wait for a visa, start building in AI now, and the market will find you wherever you are,” Gangal said.

The Risk of Losing More than Just Coders

For the US, the outcome is complex. The “America First” agenda aims to limit immigration. But the immediate effect could be chilling for the tech talent pipeline.

Historically, H-1B holders have made outsized contributions to US innovation. They have founded companies, filed patents, and led R&D efforts.

With such high fees, startups and mid-tier firms may avoid sponsoring visas. Top Indian engineers may choose to stay in India or relocate to Europe, the Middle East, or Southeast Asia.

At a time when the US races against China to lead in AI, the risk is not just fewer coders. It could be the loss of an entrepreneurial engine powered by immigrants.

Unlike tariffs in other sectors, the effects of visa fees on innovation are hard to model. AI progress depends on global talent clusters. If the $100,000 fee diverts talent elsewhere, the long-term impact could stretch from startup valuations to national security.

A Global Paradox

The fee may not hurt Indian AI startups immediately. Many already hire locally or remain in India. But the indirect effects are significant. The country could retain more top talent, boosting its AI ecosystem. The US risks slowing its AI momentum by limiting entrepreneurial engineers.

In an effort to protect American jobs, the US may have inadvertently created an opportunity for India to claim a larger share of the AI future.

Assessli’s AI-led Behavioural Model Could Eclipse Language Models https://analyticsindiamag.com/ai-startups/assesslis-ai-led-behavioural-model-could-eclipse-language-models/ Mon, 22 Sep 2025 11:30:00 +0000 https://analyticsindiamag.com/?p=10177879

The Kolkata-based company’s patented methodology is in use across sectors like education, healthcare, and financial services.

As artificial intelligence accelerates toward artificial general intelligence (AGI) and artificial superintelligence (ASI), an Indian startup is charting a different course—one that seeks to deeply understand human behaviour and biology, rather than merely predicting patterns from digital footprints.

Kolkata-based Assessli has developed what it calls the world’s first Large Behavioural Model (LBM), a self-supervised AI system trained on human metadata that integrates genomics, neuropsychology, medical data, physiological signals and real-time behaviour to offer unparalleled personalisation.

“Current AI models don’t understand your adaptive data points, human behaviour. They don’t have a biological context like your medical history, genomics, mood, mind, neuropsychology,” said Suraj Biswas, Assessli’s founder and CEO, who is also a genomics and behavioural science expert. “Without mapping adaptive behaviour, AI remains generic, failing to evolve with individuals in real time.”

Assessli’s LBM is built on proprietary data collected from multiple sources: wearable devices, apps like Apple Health and Google Fit, environmental sensors, and biological inputs such as saliva and swab samples. “Every individual creates data points like this… There is no mechanism to capture this,” the founder explained. The model continuously assesses behaviour—from how users scroll on their phones to how they respond emotionally—to adapt and offer tailored advice.

The company’s patented methodology is already in use across sectors like education, healthcare, and financial services. With more than 150,000 active users and 20 million proprietary data points collected, Assessli is on track to gather over one trillion data points in the next two to three years. These will be used to dynamically benchmark users, continuously adjusting to their evolving biological and behavioural profiles.

Its approach distinguishes Assessli from competitors by integrating a complete human metadata layer, rather than relying solely on physical data or medical histories. While companies like Boston Dynamics work on training robots’ movements, and others like Liro focus only on medical data, Assessli’s model synthesises signals across domains, enabling applications from personalised learning environments to medical interventions.

Personalisation Beyond Digital Footprints

“What is meant by 95% or 99% personalisation?” Biswas asked rhetorically. “Every user has a different root cause for asking the same question because of different experiences. It should be hyper-personalised so every individual gets a unique solution.”

Traditional AI recommendation engines operate on shallow data sets—clicks, searches, and keywords—with personalisation precision around 57%, according to a 2024 meta-analysis published by Springer. Assessli’s LBM claims to erase this gap by integrating a far wider range of behavioural signals, pushing personalisation accuracy to over 99%.

“For example, in education, we are automating tasks like question generation and evaluation based on behavioural profiles,” the founder said. Teachers can group students according to strengths and weaknesses, enabling tailored learning pathways even in low-resource environments.

Real-World Applications and Tangible Results

A scenario in Assessli’s documentation shows how Aria, a 31-year-old professional, transformed her routine with LBM’s micro-interventions. By analysing her sleep patterns, stress signals, and even grocery choices, LBM helped reduce her migraines by 67%, improved savings, and guided career advancements without requiring extensive manual data entry.

In healthcare, LBM can personalise medication schedules based on genetic markers like BRCA1, optimising treatment for cancer patients. For older adults, AI companions powered by LBM provide emotional support by adapting to memory rhythms and mood fluctuations. The technology also enables “Digital Twin OS”—simulated models of individuals that help gig workers and students optimise daily schedules and workload.

Technical Innovation and Privacy-Centric Architecture

Unlike large language models (LLMs) that require thousands of GPUs to process massive image or video datasets, Assessli’s LBM handles data locally using encrypted, edge-computed signals. “We don’t expose raw biological data to GPUs,” Biswas said. By synthesising biological signals into smaller, context-rich datasets, the model operates with just 30–40 GPUs, making it 90% more cost-efficient than conventional AI.

Every 24 hours, LBM refines its global model using federated sketch learning, ensuring user privacy by only transmitting anonymised gradients rather than personal data. “DNA stays local… differential privacy adds noise to population stats… every query is mapped to a consent bit,” the company’s documentation assures.
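
Stripped of the product naming, the mechanism the documentation describes follows the familiar federated-learning pattern: each device computes a model update locally, clips and noises it, and only that anonymised update is sent for aggregation, so raw data never leaves the client. Below is a minimal NumPy sketch of that general pattern under toy assumptions (a linear model, Gaussian noise, synthetic clients); it is not Assessli’s federated sketch learning system.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(w_global, X, y, lr=0.1, clip=2.0, noise_std=0.05):
    """One client's contribution: a clipped, noised gradient step for a linear model.

    The raw data (X, y) never leaves this function; only the noisy update does.
    """
    grad = 2 * X.T @ (X @ w_global - y) / len(y)             # squared-error gradient
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)      # clip to bound sensitivity
    grad = grad + rng.normal(0, noise_std, size=grad.shape)  # differential-privacy-style noise
    return -lr * grad

# Synthetic clients standing in for devices that each hold private data locally.
d = 8
true_w = rng.normal(size=d)
clients = []
for _ in range(5):
    X = rng.normal(size=(200, d))
    y = X @ true_w + rng.normal(0, 0.1, size=200)
    clients.append((X, y))

w = np.zeros(d)                      # shared global model
for _ in range(50):                  # aggregation rounds (the article describes a 24-hour cadence)
    updates = [local_update(w, X, y) for X, y in clients]
    w = w + np.mean(updates, axis=0) # the server only ever sees averaged, anonymised updates

print("distance to the underlying weights:", round(float(np.linalg.norm(w - true_w)), 3))
```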

Building a Foundation Model for the Future

Assessli’s approach is not merely about building an app. It’s about creating a global platform. “Our motto is not to create use cases. Our motto is to integrate with players, collect data, train the model, and then launch it for everyone,” Biswas emphasised. 

The model is already integrated with educational platforms like iNurture and government initiatives in West Bengal and Maharashtra.

Assessli’s GTM strategy centres on forming partnerships with existing B2B players across sectors like education, healthcare, HR tech, and fintech. Rather than creating standalone products, the company integrates its personalisation model into platforms already serving millions of users, beginning with schools and universities, where automated assessments and tailored learning pathways deliver immediate benefits. 

With expansion efforts in the UK and US, Assessli is positioning itself to collect diverse datasets that will enhance its model’s accuracy while gradually preparing for broader B2C applications.

In a competitive analysis, Assessli’s personalisation accuracy was compared with existing AI models: “Traditional AI guesses based on what you did. LBM knows based on who you are,” the company states. It envisions applications spanning healthcare, e-commerce, robotics, education, and more.

Investor Confidence

Assessli’s vision has attracted significant investor support, helping it scale its AI-driven personalisation model across education, healthcare, and other sectors. The company raised ₹10 lakh from the Indian Statistical Institute (ISI) Kolkata, which validated its foundational methodology early on. Building on that momentum, it secured a pre-seed round of ₹2.3 crore from angel investors, including doctors, researchers, architects, and NRIs.

Assessli successfully closed a seed round of $5 million, drawing support from a wide array of investors who believe in its long-term potential to revolutionise personalised AI. Among its funding efforts, the company has also raised $1.5 million specifically to establish its own data centre, reinforcing its commitment to building secure, privacy-conscious infrastructure.

With plans to onboard over a million users by 2027 and scale its LBM into a global foundation model, Assessli’s ambition reflects a profound shift in how AI interfaces with human experience. It’s not about answering questions—it’s about understanding the person asking them.

Why the Next Leap in Speech AI Comes from Kalpa Labs https://analyticsindiamag.com/ai-startups/why-the-next-leap-in-speech-ai-comes-from-kalpa-labs/ Wed, 10 Sep 2025 10:09:33 +0000 https://analyticsindiamag.com/?p=10177417

“The next frontier is not just expressive voices. It’s about building systems that behave like humans in a real call.”

In recent years, large language models (LLMs) have unified a wide range of text-based tasks under a single architecture. One model can code, translate, summarise and generate with remarkable fluidity. This unification, however, has not yet reached the world of speech.

Existing models are either fast but inaccurate or accurate but slow, largely due to high token usage—50 tokens per second—and fixed input padding, which increases computing costs. Speech-focused AI startup Kalpa Labs focuses on creating fast, multilingual, real-time speech-to-text models that resolve the latency and inefficiency issues of systems like Whisper. 

Kalpa Labs aims to reduce audio token rates, eliminate unnecessary padding with configurable “register” tokens and use sparse architectures like mixture-of-experts. This strategy intends to combine the speed of smaller models with the accuracy of larger ones, enhancing real-time transcription and improving user experience across languages.
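
The cost argument is simple arithmetic: the work an attention-based decoder does grows with the number of audio tokens, so lowering the token rate and dropping fixed-window padding shrinks it directly. A rough sketch with assumed figures (a 30-second fixed window at 50 tokens per second versus a lower, unpadded rate); the numbers are illustrative, not Kalpa Labs’ benchmarks.

```python
# Back-of-the-envelope only: token rates and window sizes below are assumptions
# for illustration, not measured figures for any specific model.

def audio_tokens(duration_s, tokens_per_s, fixed_window_s=None):
    """Tokens needed for an utterance, optionally padded out to a fixed window."""
    billed_s = max(duration_s, fixed_window_s) if fixed_window_s else duration_s
    return int(billed_s * tokens_per_s)

utterance_s = 6.0   # a short voice query

padded = audio_tokens(utterance_s, tokens_per_s=50, fixed_window_s=30)   # fixed-window style
compact = audio_tokens(utterance_s, tokens_per_s=12.5)                   # lower rate, no padding

print(f"fixed 30 s window @ 50 tok/s : {padded} tokens")
print(f"actual length   @ 12.5 tok/s : {compact} tokens")
print(f"roughly {padded // compact}x fewer tokens for the decoder to attend over")
```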

“If you look at speech models right now, they’re still very fragmented,” Prashant Shishodia, co-founder and CEO of Kalpa Labs, told AIM in an exclusive interaction. “There are specialised models for transcription, text-to-speech, voice cloning, but nothing that can do all of these seamlessly. Our proposition is to take speech models from the 2019 era into the 2025 era of LLMs.”

Key Differentiator

The ambition is clear: build unified speech models that can handle speech-to-text, text-to-speech, voice cloning and even audio editing within a single framework. “Right now, none of the open-source or private models do audio editing. That’s a key capability we’re aiming to unlock,” he added.

Kalpa Labs is solving for both functionality and human-likeness in interaction. Current speech systems, even the most advanced, lack the ability to converse in a natural, duplex manner.

The startup’s vision is to make AI capable of holding long, truly human-like conversations, ones that are responsive, interruptible and sensitive to nuance. “The next frontier is not just expressive voices. It’s about building systems that behave like humans in a real call,” the team noted.

Kalpa Labs has recently pivoted into this space following earlier explorations. “We recently pivoted to this idea, and we’ll be releasing a beta version in the next few weeks,” the co-founder shared. “We are positioning ourselves at the foundational layer.”

YC Acceptance and Global Network

Kalpa Labs has been accepted into the Y Combinator Fall 2025 batch, one of only a handful of India-based teams to make the cut.

“YC has invested heavily in voice AI companies, and we wanted to tap into that network,” Shishodia said. “Many of our potential customers, companies already using ElevenLabs for text-to-speech or Deepgram for transcription, are part of that ecosystem. Accessing them through YC was invaluable.”

However, the process wasn’t without hurdles. From regulatory complexities in India to visa challenges, the team navigated multiple obstacles just to get in. “By the time we receive the YC cheque, the batch will be over,” he noted with a wry smile. “But despite the challenges, it was worth it.”

Scaling the Technology

From the outset, Kalpa Labs is building for scale. All models will be provided via APIs, but the team is also considering open-sourcing edge-capable models.

“There’s no good speech-to-text model that I can run on my Mac today. We want to change that,” Shishodia said. “For sensitive data, people don’t want to rely on cloud services. We’re building edge models with accuracy on par with the largest Whisper models, but runnable locally.”

On the compute side, the approach is pragmatic. “Right now, we have credits to cover us, but to get to state-of-the-art, we’ll need to train extremely large models. The way forward is to train big, then distil down to smaller, more efficient versions.”

Benchmarks and the Illusion of Progress

Speech AI benchmarks have long been a contested measure of progress. “A few years ago, people claimed speech-to-text had surpassed human accuracy. That wasn’t true,” he insists. “Models were overfitting to benchmarks. In the wild, noisy environments, multilingual code-mixing, low-quality microphones, accuracy drops drastically.”

Kalpa Labs is therefore focused on real-world performance, not benchmark gaming. “The whole field needs a refresh in speech benchmarks,” the team argued. “Otherwise we’re fooling ourselves into thinking we’ve solved the problem.”

Tackling Multilingual Complexity

One of Kalpa Labs’ most significant differentiators is its focus on Indian languages and code-mixed speech.

“During tests, models perform well on formal, textbook-style Hindi, but fail on informal, Romanised chat Hindi that people actually use. To fix this, Kalpa Labs is partnering with companies to license large-scale, in-the-wild Indian datasets,” he explained. 

“Without that, building better Indian models is a lost cause,” Shishodia said. “We believe our work can be a step change for Indian languages, and at the same time, competitive at a global level.”

Technical Core: Transformers and Feedback Loops

At the heart of Kalpa’s models are autoregressive transformers, predicting audio chunks sequentially. “We’re experimenting with predicting larger chunks to make the models faster,” Shishodia shared.
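
The speed-up from larger chunks follows from the decoding loop itself: an autoregressive model pays one sequential forward pass per emitted chunk, so predicting k tokens at a time cuts the number of passes roughly by a factor of k. A toy sketch of that loop (the predict function is a stand-in, not Kalpa’s model):

```python
import numpy as np

rng = np.random.default_rng(7)
forward_passes = 0

def predict_next_chunk(context, chunk_size):
    """Stand-in for one transformer forward pass that emits `chunk_size` audio tokens."""
    global forward_passes
    forward_passes += 1
    return rng.integers(0, 1024, size=chunk_size).tolist()   # fake codec tokens

def decode(total_tokens, chunk_size):
    """Autoregressive decoding: each step conditions on everything emitted so far."""
    global forward_passes
    forward_passes = 0
    tokens = []
    while len(tokens) < total_tokens:
        tokens.extend(predict_next_chunk(tokens, chunk_size))
    return tokens[:total_tokens], forward_passes

for k in (1, 4, 16):
    _, passes = decode(total_tokens=1500, chunk_size=k)      # ~30 s of audio at 50 tok/s
    print(f"chunk size {k:>2}: {passes} sequential forward passes")
```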

The company also emphasises continuous feedback loops. Drawing inspiration from platforms like Midjourney, it wants users’ editing choices to improve the models directly. “Audio editing creates a natural multi-turn feedback cycle. That’s a powerful way to learn what users really want.”

What’s in Store for Kalpa Labs? 

Kalpa Labs is ambitious but realistic about timelines. “AI timelines are short; it’s very hard to predict five years out,” he reflected. “Our immediate goal is to reach state-of-the-art in speech-to-text and text-to-speech within six months, competing with players like ElevenLabs and Deepgram. Beyond that, we want to push the frontier towards truly human-like speech AI.”

Kalpa Labs is chasing a bold vision: to do for speech what GPTs did for text. From unifying fragmented tasks to fixing real-world conversational flaws, from building edge-capable models to solving India’s multilingual complexity, the startup is betting that the next frontier of AI isn’t just about what machines say, but how naturally they say it.

“The next step is not just more expressive voices. It’s making machines talk like humans, in real conversations, in real settings, in real time,” Shishodia summed it up.

Bolna AI’s Bold Bet: Becoming the Distribution Layer for Every Voice Model https://analyticsindiamag.com/ai-startups/bolna-ais-bold-bet-becoming-the-distribution-layer-for-every-voice-model/ Tue, 09 Sep 2025 10:30:00 +0000 https://analyticsindiamag.com/?p=10177325

The voice AI arena is busy. Bolna argues its edge lies in product design and distribution.

In India, phone calls remain the heartbeat of business communication. From customer support lines to recruitment screening, much of this interaction has traditionally depended on humans or clunky interactive voice response (IVR) systems. This tech space, however, is about to evolve, according to Maitreya Wagh, co-founder of Bolna AI, a firm that provides scalable human-like voice AI solutions for B2B enterprises.

“For businesses, around 80% of that will be automated in the next couple of years,” he said in an exclusive conversation with AIM, adding, “that means completely transforming IVR, pre-recorded calls, and even a lot of routine human phone calls. What enterprises really need is an engine or infra layer that fits into their systems and lets them easily build agents that can automate all of these calls.”

Bolna refers to itself as the “distribution layer for every voice model,” highlighting its orchestration platform that integrates multiple speech-to-text and text-to-speech large language models. With an open-source framework, the startup connects telephony providers such as Twilio and Plivo, enabling users to build, test, deploy, and scale conversational voice agents in just minutes.

The platform includes features such as bulk calling, real-time API triggers, human-in-the-loop handoffs, model switching for optimisation, and multilingual support in over 10 Indian and foreign languages, with integrations for tools like Zapier and n8n. 
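
Conceptually, an orchestration layer of this kind reduces to a declarative agent definition that binds a telephony channel to interchangeable speech-to-text, LLM and text-to-speech components, so any one of them can be swapped without rebuilding the agent. The sketch below illustrates that shape; the class, fields and provider strings are hypothetical and do not reflect Bolna’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceAgentConfig:
    """Hypothetical agent definition for a voice-AI orchestration layer (not Bolna's API)."""
    name: str
    telephony: str              # e.g. "twilio" or "plivo"
    stt: str                    # speech-to-text provider/model
    llm: str                    # conversation/reasoning model
    tts: str                    # text-to-speech provider/voice
    languages: list[str] = field(default_factory=lambda: ["en-IN", "hi-IN"])
    human_handoff: bool = True  # hand the call to a human when confidence drops

# A self-serve support agent: each component is just a named, swappable provider.
support_agent = VoiceAgentConfig(
    name="order-status-support",
    telephony="twilio",
    stt="deepgram-nova-2",
    llm="gpt-4o-mini",
    tts="elevenlabs-multilingual",
)

# "Model switching for optimisation": the same agent shape re-pointed at cheaper
# components for a bulk-calling campaign, without rebuilding the pipeline.
bulk_agent = VoiceAgentConfig(
    name="payment-reminder-bulk",
    telephony="plivo",
    stt="whisper-large-v3",
    llm="llama-3.1-8b-instruct",
    tts="elevenlabs-turbo",
    human_handoff=False,
)
```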

Beyond Chatbots: Voice AI’s Turf

Bolna’s founders argue that voice AI isn’t simply the next iteration of chatbots. It has the potential to redefine industries where conversation is central. Recruitment, for instance, is already being reshaped.

“All sorts of screening calls, first-level interviews, outreach, onboarding, and training, we’re seeing these rapidly automated,” said Wagh.

He also pointed to consumer-facing applications. Online form-filling on health tech or fintech platforms can be clumsy, often requiring follow-up conversations to capture details. Voice agents, Bolna believes, could eliminate that friction.

Customer service, of course, remains a vast market in India, and the integration of AI in it has changed the game. “It’s a pretty bad experience right now to wait 30-45 minutes to get your queries solved. We don’t think that will even exist in the future,” he said. Though he avoided naming clients still under wraps, Wagh confirmed that large Indian companies are already working with Bolna to automate customer lines.

Standing Out in a Crowd

The voice AI arena is busy, with big tech firms and startups alike rushing in. Bolna argues its edge lies in product design and distribution.

“From day one, we’ve been working on voice AI calling quality, while a lot of others only recently shifted to it from chatbots. The quality of calls with us is significantly different,” said Wagh.

Another differentiator is Bolna’s product-led approach. “Most players are service-led. You go to them, they spend a month building an agent for you, and then deploy it. We are self-serve. Anyone can come, easily build an agent, and start making calls. It’s a great way to get people on board without long demo waits and sales cycles,” he said.

Unlocking Global Doors

One of the biggest milestones in Bolna’s journey so far came this September: acceptance into Y Combinator’s Fall 2025 (YC F25) batch.

“The network just opens a lot of doors,” Wagh acknowledged. “We’ve had meetings with enterprises this week that we didn’t have earlier, and the only thing that changed was the announcement. This goes beyond India. YC unlocks a global brand that was tougher to achieve before.”

Although the company focuses on India, significant model development takes place in the US. Bolna’s proximity to YC companies developing voice models enables it to serve as a distribution channel. Wagh added that the startup is currently managing distribution for ElevenLabs in India and plans to extend this approach to other emerging models.

Unlike many startups that join YC at the idea or prototype stage, Bolna arrives with revenue already in hand. “We’re joining YC already at around $400K annual recurring revenue. Our goal is to reach $1 million before the batch ends. I don’t see us pivoting. The vision is clear,” Wagh asserted.

Now Is the Inflexion Point

Voice AI has been hyped for years, but adoption has lagged. According to Wagh, the last six months have been a turning point.

“When we started two years ago, quality wasn’t good enough, and costs were too high. Adoption just wasn’t there. But, in the last three to six months, it has exploded. Every enterprise is looking to build or buy voice AI agents,” he said.

Many enterprises that tried building in-house discovered the effort outweighed the payoff. Wagh explained that it makes more sense to use an orchestration play with multiple models and a better conversational experience. “Cheaper at scale, adoption is becoming quicker.”

The consumer response has also shifted. In campaigns with tens of thousands of calls, few asked if the voice on the other end was AI. “That either means they don’t realise it’s AI, or they don’t care. The job is being done,” Wagh said.

Bolna’s Future 

India, Wagh added, remains a huge opportunity. “It’s a very large, voice-dominated market. Indians like to talk.”

But, the company is also realistic about challenges. Some enterprises may choose to build everything in-house. Others will test multiple models before committing. Bolna’s strategy is to grow fast, build trust, and become the default orchestration partner.

Bolna’s story illustrates how India’s startup ecosystem is shifting from producing SaaS companies that serve local needs to building global infrastructure players. By orchestrating the messy ecosystem of voice AI models, Bolna is betting it can become indispensable to enterprises everywhere.

AI Literacy in Classrooms: Why Fact-Checking and Critical Thinking Can’t Be Optional https://analyticsindiamag.com/ai-startups/ai-literacy-in-classrooms-why-fact-checking-and-critical-thinking-cant-be-optional/ Fri, 05 Sep 2025 03:30:04 +0000 https://analyticsindiamag.com/?p=10177172

Ethics and awareness of bias in algorithms need to be the focus of AI literacy curricula.

Once a buzzword for students, AI has become an everyday learning tool. Homework assistance, language help, and summarising lessons have pushed young learners worldwide to rely heavily on AI-powered tools. While this is extremely convenient, there are risks.

The BrightCHAMPS ‘StudentsSpeakAI’ global survey, which interviewed 1,425 students across 29 countries, reveals a worrying trend: while 58% of students use AI for their studies, nearly 29% never cross-check AI-generated answers, and 23% cannot distinguish between real and AI-generated content.

This raises important questions about media literacy, critical thinking, and the role of educators in shaping a future for students in which distinguishing fabrication from fact becomes an essential skill.

The Double-Edged Sword

The survey results indicate both opportunity and risk. While 59% of students see learning AI as vital for future readiness, many lack the skills to critically assess AI outputs. About 20% admitted to believing false AI information before later discovering it was incorrect.

For Sweena Mangal, senior AI educator at BrightCHAMPS, this isn’t just about technology, it’s about the fundamental ability to think. “A child’s ability to think critically can be sharpened and honed perfectly well as long as we teach them how the tech behind AI works,” she explained. “When they understand the ‘how’ behind the ‘what’, they will understand and appreciate the ‘why’”.

AI, she added, makes it easier for students to trust information because of how convincingly it mimics human language. That makes fact-checking not a nice-to-have, but a necessity.

The Teachers’ Role

If students are to thrive in this new environment, teachers need to rethink their approach to education. For Mangal, the answer lies in Socratic dialogue.

“If a teacher can instil in a child the ability to look at a topic from multiple vantage points and arrive at their position on the subject after engaging with information that challenges, even contradicts their way of thinking… they’ve done their job,” she said. 

“Because when a child gets into the habit of looking at something from different angles, they also develop the habit of seeking more information, of being prepared, and, most importantly, not getting so attached to one way of thinking that they are unable to change when needed.”

Recognising Biases

One of the grave risks with AI is hidden bias. Algorithms are trained on datasets that often reflect historical and cultural inequalities. Mangal emphasised that bias detection and awareness should be woven into everyday teaching, rather than being treated as an “extra burden.”

“With so much internet access and platform algorithms optimising for stickiness of content over variety of viewpoints or the veracity of truth, it’s far too easy for children to grow up with a unidimensional understanding of the world,” she argued. “If we, as educators and parents, don’t sensitise them to the fact that historically marginalised communities and regions of the world continue to be drastically under-represented on the internet… Who will?”

The survey also found that 12% of students now use AI as their primary mode of online search. While this highlights AI’s growing centrality in learning, it also reveals a creeping dependency that could erode independent thought.

Mangal warns against the consequences: “Over-relying on any tech or one medium impacts an individual’s ability to engage with the topic or question at hand in totality. We all know adults who think of themselves as experts on a subject after reading two paragraphs on Wikipedia. Are they really any different from students who might be over-relying on AI answers?”

Unless addressed, this reliance risks producing a generation less inclined to research deeply, analyse critically, or innovate meaningfully.

Should AI Literacy Be a Core Subject?

Mangal agrees that AI should be part of school curricula, but stresses that teaching AI literacy must go beyond technical usage.

“It needs to be a combination of learning how the technology works + the flaws/WIP nature of the technology + the ethics of it all,” she said. “If an AI curriculum is not getting updated regularly, given how rapidly the tech is developing, it can’t be a valuable one, in my opinion.”

For her, meaningful AI literacy is not just about coding or algorithms, but about instilling an understanding of ethics, bias, and the global impact of technology.

The Role of EdTech Players

While BrightCHAMPS sheds light on student behaviour, companies like ViewSonic are working to provide tools that could support teachers in addressing these challenges in classrooms. 

Muneer Ahmad, vice president, AV business at ViewSonic India, highlighted their efforts: “We believe students shouldn’t just passively consume AI-generated content, they should learn to question it, compare it, and think critically about it. To enable this, we’ve designed our solution for Indian educators with ViewLessons AI Studio and myViewBoard 3.0, offering tools that foster deeper engagement and meaningful learning experiences.”

These platforms, equipped with curriculum-aligned lessons, interactive annotations, and multilingual support, claim to enable teachers to demonstrate to students how to validate and challenge AI-generated content in real-time.

ViewSonic also recognises that educators are under constant pressure. “We understand how challenging it can be for teachers to keep up with rapidly evolving technology. That’s why teacher training and ongoing support are at the heart of our education strategy. We provide both online and offline training programs that make AI integration more approachable and practical for classrooms,” Ahmad said.

Along with initial training, the company provides ongoing technical and pedagogical support to help educators guide students in questioning and applying AI responsibly, he added. 

Ethical Guardrails and Classroom Safety

Technology companies also face the responsibility of ensuring that students are safe as they learn with AI. Ahmad stressed that child safety and data privacy are top priorities for ViewSonic. Beyond that, he explained, “we focus on designing resources with age-appropriate, bias-free content so that learning outcomes are fair and inclusive for all students.”

Importantly, interactive classroom solutions are also making it easier to teach abstract concepts, such as deepfakes and misinformation. Ahmad pointed out: “When teachers use digital whiteboards to place an authentic piece of information alongside a manipulated version, students can see the differences for themselves in real time… This approach makes the risks of misinformation relatable without being intimidating”.

What’s at Stake?

The survey also makes one thing clear: today’s children are tomorrow’s AI natives. Their ability, or inability, to critically engage with AI will shape not only their careers but the health of entire societies.

If students grow up unable to distinguish between truth and fabrication, the consequences extend beyond the classroom to democracy, civic trust, and even national security. As Mangal put it, “If the goal is to help children question the world around them, we need to start with equipping them with information that empowers them to question us.”

Educators, policymakers, and edtech companies alike must treat AI literacy not as a niche subject but as a core competency for the future of humanity.

The post AI Literacy in Classrooms: Why Fact-Checking and Critical Thinking Can’t Be Optional appeared first on Analytics India Magazine.

Can Data Fragmentation Strengthen Startups Through Federated Learning? https://analyticsindiamag.com/ai-startups/can-data-fragmentation-strengthen-startups-through-federated-learning/ Wed, 03 Sep 2025 10:30:00 +0000 https://analyticsindiamag.com/?p=10177094

Despite challenges like skewed datasets and model convergence, FL presents a promising solution for navigating India's fragmented data landscape.

The post Can Data Fragmentation Strengthen Startups Through Federated Learning? appeared first on Analytics India Magazine.

India’s AI ecosystem is shifting from demos to deployment, and the Digital Personal Data Protection Act of 2023 is prompting startups to explore federated learning (FL). FL enables AI models to be trained across institutions without raw data ever leaving its source, preserving privacy.

In a country where data is fragmented by geography, institutions, and infrastructure, FL is not just a compliance tool. It is an innovation engine that can unlock collaboration in healthcare, banking, agriculture, telecom, sports, and education.
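To make the mechanism concrete, here is a minimal sketch of federated averaging, the basic loop behind most FL systems. It is illustrative only: the simulated clients, the linear model, and the hyperparameters are assumptions for demonstration, not code from any startup mentioned in this article.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """Train a simple linear model on one client; raw data (X, y) never leaves the client."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the least-squares loss
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round of federated averaging: only model weights are shared and combined."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    # Weighted average of client models, proportional to local dataset size
    return np.average(updates, axis=0, weights=sizes)

# Two simulated clients, e.g. hospitals holding private tabular data
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(100, 3)), rng.normal(size=100)) for _ in range(2)]

global_w = np.zeros(3)
for _ in range(10):
    global_w = federated_round(global_w, clients)
```

In a real deployment, each client would train on its own servers and only the weight vectors would travel over the network, which is what allows institutions to collaborate without sharing records.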

Healthcare: Collaborative Diagnostics Without Data Pools

Healthcare is perhaps the most natural fit for FL in India. Patient records, diagnostic scans, and pathology data are deeply sensitive, but the demand for collaborative models is immense.

SigTuple, a Bengaluru-based health-tech startup, has explored federated medical imaging models to improve pathology diagnostics across hospitals. Additionally, Qure.ai, an AI radiology company, uses FL techniques to train diagnostic models on data from multiple clinical partners without centralising raw scans.

Moreover, FL aligns closely with India’s National Cancer Grid and the IndiaAI-CATCH programme, which aim to build large-scale oncology AI models. For startups, the incentive is clear: FL enables them to compete in regulated healthcare environments.

NodeOps cofounder Naman Kabra said, “Another approach is FedProx, a well-known algorithm that modifies the local optimisation problem to keep client models from diverging too far from the global model. This helps with convergence in non-IID (non-independent and identically distributed data) settings.”

He also mentioned that startups can look into multi-task learning within an FL framework, where different clients are essentially working on various but related tasks that are being federated together.

Instead of seeing a hospital’s data on a specific disease as an imbalance, we can view it as a specialised dataset that enhances our overall understanding when combined with others, he said. 

The goal is also to create systems that are resilient to fragmentation, rather than dependent on uniformity, which is crucial for the success of federated learning on a national scale, Kabra emphasised. 
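A rough sketch of the FedProx idea Kabra refers to is shown below. During local training, a proximal penalty keeps each client’s weights from drifting too far from the global model, which helps convergence when client data is non-IID; the linear model and the value of mu here are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def fedprox_local_update(global_w, X, y, mu=0.1, lr=0.1, epochs=5):
    """Local step on the FedProx objective: client loss + (mu/2) * ||w - global_w||^2."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the client's own least-squares loss
        grad += mu * (w - global_w)        # proximal term pulls w back towards the global model
        w -= lr * grad
    return w
```

Setting mu to zero recovers plain federated averaging, while larger values trade local fit for stability of the shared global model.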

More Use Cases

The financial sector relies heavily on sensitive data, and FL offers a way to build shared fraud detection and credit scoring models without pooling that data centrally or compromising privacy.

Research shows that FL enhances fraud detection accuracy. While large-scale deployments in India are still emerging, lending startups could leverage FL to develop credit risk models using data from non-banking financial companies (NBFCs) and microfinance institutions. Payment service providers could also collaborate on fraud detection, which is crucial as digital transaction volumes in India surpass 100 billion annually.

With 800 million smartphone users, India is a key testing ground for edge AI. For instance, Google’s Gboard uses FL to personalise predictive text while keeping users’ data private. Indian telecom operators could employ FL for network optimisation and call drop predictions, utilising data from distributed base stations.

In the agritech industry, India’s diverse datasets across regions and crops stand to benefit from FL. Researchers at IIIT-Allahabad have created a federated crop disease detection system that attained 97.25% accuracy while keeping data local.

Sports and Fitness Tech: Training Without Leaking Secrets

The Indian sports-tech market is booming, from IPL analytics to grassroots athlete development. But teams guard their proprietary data closely. FL allows collaboration without compromise.

Spoda AI, led by CEO Vibhu Pillai, applies FL in sports analytics. Pillai told AIM, “The majority of use cases in sports can have separate data sources that are heterogeneous in nature as long as the central model remains the same and is not impacted by local models. This approach would provide more safety, something we ensure at Spoda for our clients.”

Researchers are exploring FL for injury prediction models co-trained across clubs without exposing player health data; for wearables and fitness apps, where models update on-device and sync securely; and for fan engagement platforms, building recommender systems without centralising personal identities.

Additionally, in Indian education systems, FL offers advantages in terms of privacy, decentralisation, and edge computation. Nonetheless, it also faces challenges such as non-IID data, inconsistent device performance, connectivity issues, and even adversarial threats. Sector-specific academic analysis reinforces that FL holds promise for education, but comes with real-world implementation barriers that are particularly relevant to India’s infrastructure landscape.

“It is particularly important to assess the use case at hand and make sure the data in silos is useful enough for training the central model. If there are extremely contradicting or polarised local AI models, then you run a risk of altering the central AI model,” Pillai added. 

Challenges to Overcome

Despite enthusiasm, FL adoption in India faces hurdles such as skewed datasets (e.g., one hospital has more oncology cases than another) and challenges to model convergence. Techniques such as FedProx and personalised federated learning are being explored to address this.

Kabra reframes India’s fragmented data challenge as an advantage: “In India, data isn’t just skewed; it’s fragmented by geography, language, dialect, and socioeconomic strata. This isn’t a bug, but a feature that forces us to develop more robust and adaptable FL techniques.”

He sees this as India’s potential export strength: not just consuming federated models, but building the “picks and shovels”, the tools, MLOps platforms, and orchestration layers for FL worldwide.

However, rural India’s patchy bandwidth makes frequent synchronisation costly. Startups must weigh the extra infrastructure burden of FL against centralised approaches. Convincing institutions to share model updates, even without raw data, remains a cultural and legal hurdle.

As the academic paper warns, “Federated learning in India’s education sector will not succeed unless challenges of infrastructure, heterogeneity, and robustness are addressed in parallel.”

India’s Federated Future

Federated learning is no silver bullet, but it offers a uniquely Indian opportunity. In a country defined by fragmented datasets, vague privacy rules, and enormous data scale, FL is both a necessity and a differentiator.

“At NodeOps, we see this opportunity clearly. The future isn’t about Indian companies consuming data and dev tools; it’s about them building and exporting these tools to the world. We can develop the next-gen MLOps platforms, data orchestration layers, and decentralised compute infrastructure that are purpose-built for fragmented data environments,” Kabra asserts.

The road is long. However, if India embraces its chaotic, heterogeneous data landscape as a testing ground, it could become a significant player in building federated systems that are resilient to fragmentation.

The post Can Data Fragmentation Strengthen Startups Through Federated Learning? appeared first on Analytics India Magazine.

How Privacy and Sovereignty Could Push the Adoption of Decentralised AI https://analyticsindiamag.com/ai-startups/how-privacy-and-sovereignty-could-push-the-adoption-of-decentralised-ai/ Tue, 02 Sep 2025 10:30:00 +0000 https://analyticsindiamag.com/?p=10177018

Decentralised AI allows India to own the rails of intelligence, not just be a data supplier.

The post How Privacy and Sovereignty Could Push the Adoption of Decentralised AI appeared first on Analytics India Magazine.

A handful of tech giants, with their massive datasets, billion-dollar GPU clusters, and global reach, seemed to control the artificial intelligence space, building the most powerful AI systems and dictating the rules of engagement. That held true only until decentralised AI came into the picture.

A quiet revolution is underway, driven not from Silicon Valley boardrooms, but from decentralised networks spread across hospital servers, local bank branches, and even idle machines in small towns.

Once an academic curiosity, decentralised AI is fast becoming a strategic lever for startups across sectors. By distributing training across devices, nodes, and communities, it promises to democratise AI development, protect sensitive data, and help nations like India assert technological sovereignty.

Why Decentralisation Matters

“Decentralised AI lets sensitive sectors like healthcare or finance keep data where it belongs, in hospitals, banks, or government servers, rather than sending everything to a central cloud. That makes it easier to comply with privacy laws and build public trust,” explains Shashank Sripada, cofounder & COO of Gaia, a company that decentralises AI Inferencing.

Sripada notes that decentralisation allows AI models to be trained on local data under local rules, ensuring sovereignty. “In practice, this is how nations can claim technological sovereignty in AI: you benefit from global innovation but don’t hand the keys to a few foreign companies”.

This message resonates strongly with policymakers and enterprises wary of centralised monopolies. Hitesh Ganjoo, CEO of Iksha Labs, highlights how India’s hospitals and banks are already experimenting with this model. “We implemented [decentralised AI] in a leading private bank, where customer risk models were trained across regional servers without ever pooling raw financial data, and in a multi-city hospital network, where diagnostic models were trained inside hospital firewalls via federated learning. Both reduced regulatory risk while increasing adoption.”

Levelling the Playing Field for Startups

Big Tech has long dominated AI due to its control over data and GPU clusters. Decentralisation could change that.

Ganjoo echoes this, pointing to live pilots: “Regional banks are contributing to fraud detection models without exposing raw transaction data. Edge AI brings lightweight, domain-tuned models directly into hospital IT environments. Blockchain-based AI adds auditability, which could help startups establish credibility in BFSI and healthcare, where trust is everything.”

For legal experts, this levelling effect is just as critical for competition policy. Shivanghi Sukumar, partner at Axiom5 Law Chambers, argues: “Centralised AI models require capital investments in expensive cloud infrastructure, a significant barrier for Indian startups. Decentralised AI counters this by allowing companies to access a distributed network of GPUs and other computational resources. This lowers the financial entry barrier, aligning with the goals of the government’s IndiaAI Mission.”

EdgeUp by Zaryah Angels is an AI-driven ed-tech platform for UPSC coaching, utilising a proprietary small language model trained on local datasets. The founders are reportedly exploring decentralised AI computing to cut infrastructure costs. 

Indrajaal by Grene Robotics is an autonomous drone defence system that employs edge AI and distributed command/control nodes, operating in a decentralised manner.

The Emerging Business Models

If compute and data are the fuel, marketplaces are the new engines. Shibu Paul, VP of international sales at Array Networks, notes that decentralised AI marketplaces will transform collaboration.

“Instead of depending only on hyperscalers, smaller businesses and even individuals with idle capacity will be able to contribute. This mirrors the early days of cloud adoption, but on a global, distributed scale,” Paul said.

The model extends to data. Decentralisation enables secure pooling of specialised datasets, with contributors rewarded via credits or tokens. “Communities of small and medium businesses could train models tailored to their sectors, such as healthcare, agriculture, or transportation. Instead of surrendering control of proprietary data to central authorities, they would retain ownership while collectively benefiting from the trained models”.

Over time, Paul believes these marketplaces will bundle compute, data, and models into industry-specific solutions, creating a transparent and equitable digital economy.

Challenges Ahead

Yet decentralisation is not without its hurdles.

The hard part is stitching all this together at scale, said Sripada, adding, “training across hundreds of nodes is messy; networks fail, updates lag, and models can be biased if each data source is too narrow. Security is another big one: you need to make sure no one is poisoning the model or leaking information through updates.”

Expanding on the pain points, Ganjoo said that most frontier models today are not designed for decentralised deployment. “Optimising them to run ‘small and local’ is still a frontier problem. Enterprises also lack awareness of how decentralised frameworks solve security and compliance problems. Education and ecosystem evangelism are as important as the technology itself.”

There’s also the tricky economics of adoption. While decentralisation reduces dependence on central infra, it increases edge-compute and maintenance overheads. Regulatory compliance adds another layer of complexity, with different frameworks (such as GDPR, India’s DPDP Act, and HIPAA) requiring alignment.

A Hybrid Future?

Despite the promise, none of the experts interviewed expects Big Tech to disappear; rather, the consensus is on coexistence.

“The largest foundation models will still come from the big players because they have the resources. But decentralisation brings balance: startups and communities can fine-tune, localise, and deliver AI in ways Big Tech can’t, closer to the edge, with more trust and sovereignty built in,” said Sripada.

Paul framed it as a relationship of scale and agility. “Big Tech will provide a stable foundation that acts as the backbone of the AI ecosystem. Startups and decentralised platforms will design sector-specific applications and services that make use of that backbone.”

For India, the significance extends beyond startups. Ganjoo called decentralisation an urgent matter of sovereignty: “Decentralised AI allows India to own the rails of intelligence, not just be a data supplier.”

As Sripada analogised, centralised GPTs may be the “Yahoo.com of 1997”: impressive gateways, but only the beginning. An explosion of niche models, agents, and services may follow once decentralisation allows them to bloom.

Startups, then, may no longer be challengers on the sidelines, but equal participants and architects of a democratic, transparent, and sovereign AI future.

The post How Privacy and Sovereignty Could Push the Adoption of Decentralised AI appeared first on Analytics India Magazine.

Lost in Translation: How AI Could Undermine Cultural Nuance https://analyticsindiamag.com/ai-startups/lost-in-translation-how-ai-could-undermine-cultural-nuance/ Thu, 28 Aug 2025 04:19:54 +0000 https://analyticsindiamag.com/?p=10176790

Storytelling is not just about words. It is about cultural, emotional, and sometimes, political weight.

The post Lost in Translation: How AI Could Undermine Cultural Nuance appeared first on Analytics India Magazine.

In recent years, AI has begun to transform the global translation industry, promising unprecedented speed, scale, and affordability. AI translation is gaining attention and investment, positioning it as a solution to language barriers. In India, home to 22 official languages and numerous dialects, the importance of such a solution is amplified.

Beneath the optimism, though, a complex reality unfolds: while AI increases access to translation, it threatens human translators’ jobs, raises authenticity concerns, and risks oversimplifying cultural nuances. As the boom in AI-based translation gathers pace, linguists, translators, and entrepreneurs are locked in a debate about the actual cost of this technological disruption.

The Promise

“AI translation is transforming the landscape of language services by making them faster, cheaper, and more accessible,” says Jaspreet Bindra, co-founder of AI&Beyond. The surge of startups reflects an apparent demand: authors, businesses, and publishers who could never afford traditional translation services can now reach global or regional audiences through algorithms.

Bindra calls this a “powerful force for inclusion,” but also cautions that translation is not just about words; it is about “meaning, identity, and power.” Outsourcing cultural interpretation to machines in fields like literature, politics, or law, he warns, “must be approached with caution”.

Where AI Fails

Despite improvements in large language models (LLMs), AI still struggles with complex, low-resource languages. Deepika Arun, founder of Kadhai Osai, an audiobook platform, shares her first-hand experience: “I have tried working with a lot of LLMs and AIs to translate from English to Tamil, and my experience has been quite bad. These AI tools have not been able to do a good job with English-to-Tamil translation.”

The issue, as linguists point out, is not simply accuracy but authenticity. Chandan Kumar, assistant professor of English & Cultural Studies at Christ University, argues that AI’s dependence on biased, standardised training data creates a “globally flattened language” devoid of local colour. 

“A machine describing an event will filter out the creative and culturally rich uses of language, such as metaphor, satire, or the poetic ability to ascribe beauty to tragedy. These are not merely linguistic tricks; they are epistemological acts that an AI, without corresponding data, cannot replicate,” Kumar said.

In his view, AI risks generating a global lingua franca that is “technically correct but culturally sterile,” stripping translation of the imperfections that make language alive.

Translators at the Crossroads

For human translators, the consequences are already being felt in the job market. Parvathi Pappu, a professional translator at HindiTelugu Translations, describes the pressure: “Clients and translation agencies assume that because AI exists, work can be done faster or cheaper and shouldn’t cost more because it is just ‘light editing’.”

This has led to what she calls a “race to the bottom,” with professional translators forced to accept lower rates. Many, she says, have taken second jobs unrelated to their training just to survive.

Even when translators are asked to work in hybrid “human-in-the-loop” setups, where AI generates a draft and humans edit, Pappu sees fundamental flaws. “The post-editing process often requires substantial effort and most of the time demands a full retranslation, all because the AI doesn’t recognise subtleties, cultural references, or industry-specific terminology. More importantly, it lacks a human connection,” she explains.

She has refused to participate in AI-assisted projects, which points to a deeper concern: that AI undermines not only the economics but also the ethics of translation.

From a cognitive linguistics perspective, Kumar says that AI’s “understanding” is not genuine comprehension but statistical mimicry. “AI manipulates symbols based on learned patterns; it does not grasp the conceptual meaning behind them,” he explains.

This distinction is not academic nitpicking. In literature, politics, or law, misinterpretation carries profound risks. “When we rely on algorithms for cultural meaning-making, we are essentially outsourcing our judgment to a system that favours the dominant story,” Kumar notes. This creates a bias toward the majority voices encoded in training data, at the expense of marginalised languages and perspectives.

The result, as Pappu observes in the domain of storytelling, is distortion. “Storytelling is not just about words. It is about cultural, emotional, and sometimes, political weight. AI often flattens voices, overlooks cultural references, mistranslates idioms, and even eliminates nuances entirely. Without human cultural mediation, AI can reduce rich stories into something bland, flat, and generic”.

Preservation or Homogenisation?

One of the paradoxes is that AI can both preserve and erode linguistic diversity. By digitising languages, creating multilingual dictionaries, and providing tools for low-resource communities, AI offers lifelines that traditional methods could not. “Technology can play a crucial role in constructing, reconstructing, and promoting these languages on a global scale,” Kumar acknowledges.

But this preservation often comes at the cost of homogenisation. By forcing languages into standardised formats suitable for machine training, AI accelerates the very flattening it seeks to resist. As Kumar puts it: “The choice for many minority language communities may not be between a pure version of their language and a homogenised one. The choice may be between a documented, evolving, and technologically-supported language, however mixed it may become, and complete extinction”.

Future For Translators?

Looking ahead, most experts predict not total automation but a stratified human-machine partnership. Routine and technical translations, such as those for medical, legal, or government documents, can be handled almost entirely by machines, with humans performing final quality checks. But in literature, marketing, and creative writing, human translators will remain indispensable.

“The profession will undergo a challenging but transformative shift,” Kumar predicts. The generalist translator will fade, replaced by specialists like transcreators, who adapt marketing campaigns across cultures, or linguists who fine-tune AI systems to reduce bias.

Academia, too, will adapt by teaching AI literacy, prompt engineering, and critical post-editing skills, while still emphasising the irreplaceable human role in cultural mediation.

The rise of AI translation startups is not just about technology. It’s about power, labour, and identity. As Bindra puts it, the challenge is one of balance: “leveraging AI for scale and accessibility, while preserving human judgment where nuance, empathy, and responsibility matter most”.

If startups fail to move towards a more inclusive model, they risk creating a louder, but much flatter world, where stories can travel a lot further but lose their soul along the way.

The post Lost in Translation: How AI Could Undermine Cultural Nuance appeared first on Analytics India Magazine.

Edge Data Centres Take Back Seat as Indian AI Startups Flock to Hyperscale Hubs https://analyticsindiamag.com/ai-startups/edge-data-centres-take-back-seat-as-indian-ai-startups-flock-to-hyperscale-hubs/ Sun, 24 Aug 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10176607

Edge computing is currently more expensive than traditional data computing and relies on the maturation of the entire ecosystem for success.

The post Edge Data Centres Take Back Seat as Indian AI Startups Flock to Hyperscale Hubs appeared first on Analytics India Magazine.

Edge data centres, which three years ago were expected to expand India’s cloud services market into smaller cities, have remained an unrealised ambition, as applications requiring high-speed, low-latency connectivity have not developed as forecast.

Indian AI startups, particularly those involved in training complex AI models, require large-scale computational power and storage capacities. These demands are met more effectively by hyperscale data centres in metropolitan hubs like Mumbai and Chennai, rather than smaller edge data centres in tier 2 and 3 cities. 

Edge Computing in India

Brijesh Patel, founder & CTO of SNDK Corp, an IT solutions and support services company, told AIM that emerging trends in AI, such as Gen AI and reinforcement learning, require substantial computing resources and low-latency processing. In India, this transition is leading to a heightened demand for edge computing infrastructure. Unlike conventional cloud configurations, edge computing positions processing nearer to the data source, facilitating quicker and more effective real-time decision-making.

Experts see reduced latency as the primary benefit of edge computing, especially for gaming, autonomous vehicles and automated traffic management systems. However, edge computing faces challenges due to variability in processing and data retrieval, which can negate its latency advantages. Increasing computing resources to address this variability contradicts the aim of smaller data centres, a source familiar with the matter said.

Moreover, applications must be tailored for edge computing, which adds to the developers’ workload. Currently, edge computing is also more expensive than traditional data computing at scale, and its success requires the entire ecosystem to mature alongside it, the source said. 

India is seeing massive investments in data centre capacity to meet AI demand, which is forecast to reach about $100 billion by 2027, with major expansions in Mumbai, Chennai, and Noida by global and domestic players.

Additionally, AI startups usually operate in metropolitan cities where hyperscale data centres provide reliable and cost-effective infrastructure. Consequently, there is less incentive to invest in edge data centres in smaller cities, which face challenges like higher costs and limited talent.

“For most startups, the simplicity and flexibility of the cloud outweigh the complexity of distributed edge infrastructure. Edge holds promise, but cloud economics and maturity still make it a niche choice in India,” said Vaibhav Poonekar, CTO at Decimal Point Analytics.

Startups also rely on hyperscale providers for large-scale model training, while edge computing is suitable only for latency-critical or niche inference workloads. The demand for burst compute and extensive storage capabilities gives hyperscale elasticity a significant advantage, he explained. 

Digital Connexion’s market analysis highlights the industry shift: India is now pivoting toward hyperscale data centres in response to rising AI workloads and increasingly sophisticated backbone networks. Edge facilities have become less central, as the expanded capacity and improved latency of hyperscale data centres, enabled by network upgrades, are fulfilling demands once expected of the edge.

Edge data centres offer low-latency and localised processing, but they lack the scalability of hyperscale cloud providers like AWS and Azure, which are essential for AI training. However, Patel asserts that edge data centres excel in real-time data processing for localised operations, such as autonomous systems and smart devices.

Scale and Cost Efficiency 

Dheeraj Chaudhary, director of technology at Verge Cloud, told AIM, “The digital edge infrastructure in India isn’t saturated yet; it’s simply delayed by fundamentals.” 

Hyperscale cloud providers’ scale and cost efficiency create challenges for smaller edge data centres, particularly in cost-sensitive Indian industries. AWS and Azure offer more scalable and cost-effective models, delivering better returns on investment, Patel highlighted.

Decentralised AI-driven automation in manufacturing and logistics reduces dependence on distant data centres. “With the push for faster responses and more intelligent processing, edge data centres are becoming a strategic necessity, supporting critical AI applications in industries like retail, healthcare, and smart cities,” he added.

Edge data centres currently generate limited revenue from AI startups. The demand for these facilities is primarily internal, driven by large companies such as Bharti Airtel, rather than coming from independent AI startups. 

Chaudhary said, “Until monetisation drivers such as low-latency SaaS, AI workloads, and immersive applications mature, enterprises will remain reliant on centralised clouds.”

Nonetheless, Poonekar believes that cloud meets immediate needs with elastic growth, while edge remains more tactical than strategic. In India, most startups still view the cloud as the scalable long-term path.

Future of Edge Computing

Chaudhary and Poonekar both noted that high capital costs and uncertain returns hinder the adoption of edge deployments in tier-2 and tier-3 cities, where latency and connectivity are paramount. Conversely, Tier-1 cities require compute-heavy AI workloads that are best served by hyperscale data centres. They emphasise that edge may still find relevance even as hyperscale continues to dominate at scale.

Companies with large data centres have also announced expansions: Princeton Data plans to establish 230MW of net capacity in India by the end of next year. Equinix also revealed a major expansion last year. Yotta’s Greater Noida facility, announced in October 2022, can expand to 250MW by next year. However, there are no specific plans for edge data centre expansion.

Chaudhary also believes that the future lies not in a competition between edge and cloud computing, but in their orchestrated convergence. As latency-sensitive applications in payment processing, over-the-top (OTT) services, and AI inference become more prevalent, the demands for compliance and the need to include underserved areas of India will make ultra-local compute and storage essential. 

“This is where VergeCloud is positioning itself, building smaller city Points of Presence (PoPs) that extend beyond Content Delivery Networks (CDNs) to deliver true edge capabilities,” he said. This approach also aims to bridge connectivity gaps while aligning with India’s Digital Personal Data Protection (DPDP) Act and facilitating the next phase of digital growth.

The limited scale of latency-sensitive AI applications and operational challenges in tier 2 and 3 cities have further weakened the correlation between AI startups and edge data centres in India.

The post Edge Data Centres Take Back Seat as Indian AI Startups Flock to Hyperscale Hubs appeared first on Analytics India Magazine.

How Indian AI Startups are Responding to IT Layoffs  https://analyticsindiamag.com/ai-startups/how-indian-ai-startups-are-responding-to-it-layoffs/ Fri, 22 Aug 2025 07:30:00 +0000 https://analyticsindiamag.com/?p=10176538

The reallocation of talent in Indian AI startups presents potential benefits, although challenges with retention persist.

The post How Indian AI Startups are Responding to IT Layoffs  appeared first on Analytics India Magazine.

An unusual opportunity has emerged for AI startups amid massive layoffs across the IT sector in India. Early signs suggest the adaptability of both the displaced workforce and startups could shape India’s AI landscape in unexpected ways. 

Startups are embracing laid-off IT professionals, seeing an opportunity to bolster their engineering teams, without the immediate costs of hiring fresh graduates and training them. 

The question of retention, however, remains pivotal. While these hires may be a quick fix to address skills shortages, startups need to consider whether they can sustain such hires over the long run. “There may be some integration challenges, laid-off employees might have higher expectations based on previous roles and the company from which they were laid off,” said Aditya Singh Gaur, deputy manager at C3iHub, a technology innovation hub at IIT Kanpur.

Elaborating on challenges for experienced professionals, Gaur said startup environments demand flexibility, a quick decision-making pace, and often offer lower remuneration than that in large corporations.

Anil Agarwal, CEO and cofounder of InCruiter, a talent assessment service, said that laid-off IT professionals end up at AI startups, particularly in roles that demand strong engineering fundamentals or data skills. “These professionals bring with them problem-solving abilities, system knowledge, and process discipline, making them valuable in the short term.”

The Cost-Benefit Trade-Off

The choice between retraining laid-off IT professionals and hiring AI-native freshers is not straightforward. 

Agarwal explains that the trade-off hinges on time-to-productivity and training costs. Retraining IT professionals might require a greater upfront investment, but it offers the benefit of maturity, lower attrition, and domain knowledge that can be critical in project management and system integration.

“Freshers with AI skills can be immediately deployable, but they sometimes lack domain depth and business alignment. Startups balance these options by role-criticality, choosing retraining for core projects needing reliability, and freshers for experimental or high-volume work,” Agarwal said.

The key, according to Gaur, is strategic integration: startups should ensure that these professionals align with team culture and are given clear, challenging roles to avoid frustrations that might arise from mismatched expectations.

Reskilling and Retention

Co-investing in reskilling could benefit everyone involved. Agarwal highlighted that investors could help startups access a strong pool of AI-ready talent, while professionals enjoy career growth and improved retention. Sharing training resources could reduce costs, and by institutionalising reskilling, startups could scale faster despite talent shortages.

This approach could help startups cut redundant costs, as shared training infrastructure would enable them to scale talent acquisition without being hindered by talent shortages or inflated hiring costs. As Agarwal explained, “Reskilling creates a feedback loop where both startups and professionals benefit, leading to better retention and faster scalability.”

“We have observed this often among our client partners… where enterprises are adding a layer of generative AI knowledge to their workforce to make them more efficient and, of course, productive,” Ritesh Malhotra, enterprise head at Great Learning, told AIM. He said that developing talent internally saves time and lowers recruitment costs, improving the return on investment. 

The Efficiency Gap

The ‘efficiency gap’ poses a significant paradox in India’s ambition to become an AI superpower, where high investor valuations conflict with low productivity per capita, Gaur added. 

While a large talent pool and domestic market attract investment, inefficiencies stemming from inconsistent infrastructure, skill gaps, and bureaucracy prevent Indian firms from providing the scalable solutions that justify these valuations. Gaur emphasised that this could lead to perceptions of an overvalued ecosystem, deterring long-term investment.

Additionally, despite producing millions of STEM graduates, many are stuck in lower-margin roles like data annotation instead of driving high-value innovations such as foundational model development.

“This not only underutilises their potential but also risks cementing India’s position as a service provider rather than an AI leader, struggling to match the innovative output of hubs in the US or China,” Gaur said.   

What can Regional Universities and Governments do?

In light of the growing demand for AI talent, regional universities and engineering colleges are expected to play a crucial role in closing the talent gap. 

Gaur said that IIT Kanpur established the Wadhwani School of Advanced Artificial Intelligence and Intelligent Systems in April 2025, dedicated to AI, cybersecurity, robotics, and AI policy.

Currently, at least 11 of the 23 Indian Institutes of Technology (IITs) offer BTech programs in AI-related fields such as Artificial Intelligence and Data Science. IIT Kharagpur and IIT Madras launched their AI-focused BTech programs in 2024, he added. 

“Ultimately, performance in a corporate setting isn’t solely dependent on foundational knowledge and is also driven by adaptability, problem-solving abilities, and diverse perspectives. Success is a highly individual matter, governed more by a person’s willingness to learn and grow than by their alma mater,” Malhotra concluded.

As AI startups in India navigate the complexities of hiring laid-off IT talent, it’s clear that retention will be key to ensuring long-term success. While the short-term benefits of hiring experienced professionals are evident, startups must invest in training, cultural alignment, and growth opportunities to truly leverage this talent pool.

The post How Indian AI Startups are Responding to IT Layoffs  appeared first on Analytics India Magazine.

OpenAI’s ₹399 Wake Up Call for Indian AI Startups https://analyticsindiamag.com/ai-startups/openais-%e2%82%b9399-wake-up-call-for-indian-ai-startups/ Thu, 21 Aug 2025 07:10:46 +0000 https://analyticsindiamag.com/?p=10176376

For years, rivals here have been trying to build ‘India’s ChatGPT’.

The post OpenAI’s ₹399 Wake Up Call for Indian AI Startups appeared first on Analytics India Magazine.

OpenAI has dropped its most aggressive play for India yet with ChatGPT Go, a ₹399 per month plan devised to give users higher limits on messages, image generation, uploads and memory. For the price of a Jio or Airtel 5G pack, or a Netflix mobile plan, users now get unlimited intelligence for the cost of unlimited calls.

The move could prove to be a wake-up call—if not a death knell—for India’s AI startups.

For years, rivals here have been trying to build “India’s ChatGPT”, but none have reached the brand recognition OpenAI currently enjoys. Perplexity, for instance, has tied up with Airtel to bundle its Pro subscription at no cost. But even with such moves, nothing has come close to ChatGPT’s reach.

ChatGPT has already become synonymous with AI in India. And now, with an India-exclusive budget tier, the gap between local builders and OpenAI has just widened.

Take the timing. Barely a day before ChatGPT Go’s launch, YouTube influencer Dhruv Rathee launched AI Fiesta at ₹999 a month, bundling ChatGPT, Gemini, Claude, and more. His pitch was simple: Indians won’t pay $20–30 for each tool, so offer them all in one affordable package.

However, OpenAI might have just killed it with its Jio-style pricing. At ₹399, ChatGPT Go potentially makes bundled services a tough sell. 

But beyond the pricing wars lies a bigger question: why does ChatGPT need India so badly?

OpenAI’s Second Largest Market, But Biggest for Indian AI Startups

India isn’t just OpenAI’s second-largest market. It’s the testbed. Low per-capita spend, massive adoption. If even a fraction of Indians start paying ₹399 a month, that translates to millions of new subscribers. Rathee’s own survey showed that 70% never upgrade. OpenAI is betting that this will change now.

Not everyone is convinced. Ankush Sabharwal, CEO of CoRover, argues that pricing alone won’t drive adoption. “Is India ready to pay subscription fees for such products? Honestly, not yet,” he told AIM.

“In my opinion, it isn’t cheap; it’s still costly. Most B2C AI companies grow on data, and they usually get that data by offering free services. So people should not be surprised if this price drops further, maybe even down to zero, soon,” he added.

And therein lies the paradox. The lower the pricing goes, the larger the impact on AI startups, specifically those aiming to build India’s AI on the idea of “being cheaper”. If building indigenous AI models drags on, someone like OpenAI can overtake them any day.

India’s importance to OpenAI is no secret. The fact that the country is its second-largest market is something Sam Altman has no intention of giving up. In February, he revealed that India’s user base had tripled in a year, but most were on the free tier. That doesn’t move OpenAI’s revenue needle, so the ₹399 plan is the bait. 

Sabharwal, though, believes chasing hype would be the fastest way for AI startups to fail. “The reality is that 70–85% of AI products fail….Competition from global players is only a threat if we chase hype or a ‘me too’ approach. If we build with purpose, rooted in India’s unique challenges across governance, healthcare, education or business, there’s no need to fear foreign companies.”

“To achieve our purpose and solve some real problems, even if we have to partner with the global companies, we should,” he added.

The Harsh Reality

However, this highlights a harsh reality for Indian startups and their survival when OpenAI is selling premium AI at Jio-level costs. As AIM has reported, India simply doesn’t have enough consumers of AI products. Many home-grown AI startups are building in a vacuum, or bypassing consumer products altogether.

In recent months, companies like Krutrim, Fractal, Sarvam and Gnani.ai have started positioning themselves as pioneers of agentic AI. They’ve launched assistants, image generators and voice bots aimed at India’s “mobile-first” population. But most of these launches are still enterprise-first.

Consumer adoption remains a big question mark, which is why most of them stay away. Yet, consumers represent the biggest market for Indian AI startups. Selling here should be the first priority, and chasing that goal is getting increasingly complex.

Ironically, the same risk also hangs over OpenAI.

The market is already crowded with free alternatives: Perplexity comes free with Airtel, Meta AI is embedded into WhatsApp, and Google is handing out free AI Pro subscriptions to students. Against this backdrop, ChatGPT Go sits awkwardly in the middle—not free, not premium. 

However, some see this as an opportunity rather than a threat. Ganesh Gopalan, founder of gnani.ai, believes the move could lift the industry as a whole. “An ₹399 chatbot like ChatGPT Go is more of a boon than a threat for the Indian AI ecosystem, especially startups,” he told AIM, adding that OpenAI’s pricing will also push the government’s IndiaAI Mission forward. 

“While the low pricing could intensify competition, it significantly broadens the market by making AI accessible to millions of price-sensitive users who may not have experimented with such tools before. This rising adoption creates greater awareness and helps normalise AI in everyday workflows,” he added.

Pawan Prabhat, co-founder of Shorthills AI, also sounds a note of caution for OpenAI. “Casual users already have Perplexity free with Airtel and Meta AI in WhatsApp. Professionals will prefer the full ₹2,000 suite. That raises the question: who is ChatGPT Go really for?”

Fail Fast

The answer might be simple. ChatGPT Go isn’t about creating a perfect tier; it’s about converting millions of free users into paying ones, at any price point. And that is where Indian startups will either get crushed or finally see adoption take off.

Competing with free services is already hard. Competing with OpenAI at ₹399, when the brand itself has become a verb, borders on impossible.

For the longest time, Indian AI startups have positioned themselves around Indic use cases, but that focus has limited their ability to scale. As for those not focused on Indic use cases specifically, they are moving abroad for users and enterprise use cases, as even selling to Indian enterprises requires endless proof-of-concepts before a deal is finalised.

Meanwhile, Altman has been blunt about his global ambitions. In his testimony to the US Congress, he said he wants the world to run on the ‘US stack’—US chips, US services and US AI. India, with its scale and diversity, is not just a market. It’s the ultimate test bed. If AI works here, it can work anywhere. 

And that’s exactly why global players keep pouring resources into India, while local startups struggle for visibility. The time has come, then, for India to build its own ‘India stack’ for its AI startups. 

The post OpenAI’s ₹399 Wake Up Call for Indian AI Startups appeared first on Analytics India Magazine.

AI Startups Depend on Costly APIs of Companies Burning Billions https://analyticsindiamag.com/ai-startups/ai-startups-depend-on-costly-apis-of-companies-burning-billions/ Wed, 20 Aug 2025 13:42:38 +0000 https://analyticsindiamag.com/?p=10176327

‘The question is not whether AI is powerful. It is whether it’s profitable’.

The post AI Startups Depend on Costly APIs of Companies Burning Billions appeared first on Analytics India Magazine.

The AI industry is stuck in a paradox. On one hand, it is hailed as the most important technology shift of our time. On the other, most companies calling themselves AI startups are little more than thin wrappers around APIs built by OpenAI, Anthropic, or Google. 

Strip away the hype and the reality looks fragile.

Alex Issakova, CEO of Huckr AI, said it bluntly. “80% of AI startups depend on APIs from companies burning billions. How long can that last?”

That dependency is the core weakness. When OpenAI raises prices, when Anthropic cuts back credits, or when Google shifts its model tiers, entire businesses get shaken. As Gergely Orosz, best known for his newsletter The Pragmatic Engineer, noted, many AI startups are “boasting about ARR milestones” that don’t add up once you factor in compute bills. 

A company may claim $100 million in revenue, but its costs are just as high. Margins vanish the moment the API provider decides to change the math.

OpenAI spent $9 billion in 2024 and lost $5 billion. Anthropic burned through $5.6 billion. Google is sinking more than $10 billion every year into AI, while still making its real profits from advertising. If the giants are struggling to make money, what chance do the startups built on top of their work have?

Wasn’t Being a Wrapper Good Enough?

“The majority of people can start with a wrapper and then, over a period of time, build the complexity of having their own model,” Prayank Swaroop, partner at Accel, had earlier told AIM.

Swaroop emphasised that Accel has no inhibition in investing in wrapper-based AI companies, as long as the startup can prove its ability to find customers by building GPT or AI wrappers on other products. However, he said that for a research-led foundational model, it is crucial to stand out, and simply creating a GPT wrapper does not qualify as a new innovation.

But the API layer is where the illusion of innovation lies. Most companies branded as “AI-first” are reselling access to models with a slick interface or a narrow workflow. This was also visible with YouTuber Dhruv Rathee’s latest launch, which faced severe backlash from the Indian tech community.

Sam Altman, OpenAI’s CEO, has been warning about this for months. On the recent podcast with Nikhil Kamath, Altman said, “Using AI itself does not create a defensible business. You’ve always got to parlay that advantage that comes from using the new technology into a durable business with real value that gets created.”

Altman compared today’s flood of AI wrappers to the early iPhone App Store. At first, people made money selling gimmicks like flashlight apps. Apple eventually absorbed those into the operating system. But Uber, which used the iPhone as an enabler rather than a crutch, became a lasting business. 

The message is clear: if your startup exists because the model doesn’t yet offer the feature, you’re on borrowed time.

Not Enough

OpenAI’s GPT-5 was hyped for two years as the breakthrough that would push AI closer to general reasoning. Instead, users saw a product that was faster and cheaper but not dramatically smarter. Even Altman admitted, “I think we totally screwed up some things on the rollout.”

That stumble matters because of the sheer weight of expectations. Investors, startups, and enterprises all act as if the next model will fix hallucinations and deliver accuracy levels safe for medicine, law, and government. 

Instead, hallucinations are still in the range of 10–20% depending on context. For domains where mistakes can cost lives, that failure rate makes the technology unusable without heavy guardrails.

The obsession with bigger models and shinier demos hides the real problem of profitability. Microsoft has added trillions in market value since its OpenAI partnership, but the money comes from Azure cloud services, not from selling AI subscriptions.

Google still lives off ads. Oracle is chasing cloud margins but barely moving the needle. Beneath them, hundreds of AI startups are living off cheap capital, venture subsidies, and API credits.

Even for Meta, the latest release of Llama 4, to say the least, was underwhelming. Harneet SN, co-founder of Rabbitt AI, earlier told AIM that while Meta’s Llama 4 appears promising on paper with its Mixture-of-Experts architecture and native multimodality, its real-world performance has left some gaps. 

“Its long-context capabilities don’t quite hit the mark they advertise, and image understanding can sometimes be a bit off, leading to unexpected outputs,” he noted. This reaction is echoed across the industry, with some saying, “it’s a model that shouldn’t have been released”.

What Happens Next?

“The question is not whether AI is powerful. It is whether it’s profitable,” argued Issakova.

That is where the bubble risk emerges. If capital tightens, if investors demand returns, if API prices rise even slightly, most AI startups would collapse. Already, churn spikes when subscription prices go up. Consumer demand is not infinite.

Altman himself acknowledged the race against the clock: “You can definitely build an amazing thing with AI, but then you have to go build a real defensible layer around it.” In other words, you need to own the customer, not just the API call. 

Companies like Cursor, which has grown by deeply embedding itself into developer workflows, show that durability is possible. Most others will disappear the moment the foundation models catch up.

Issakova warns that unless AI “builds models that solve real-world problems at scale” and “develops business models not reliant on cheap capital,” the cycle will end in another freeze.

For now, AI is powerful. It is useful. But it is also propped up by billions in losses and fragile startups pretending to be something more than what they are: renters of someone else’s infrastructure. 

Henning Steier, chief marketing and communications officer at Bluespace Ventures, said, “If your startup pitch deck includes ‘we’ll figure out monetisation later’ and your core tech is someone else’s API… congratulations, you’re basically WeWork with GPUs.”

Nik Kotecha, founder of Earl Global, has another analogy: “Most AI startups aren’t building companies, they’re renting margins from Microsoft.”

The longer this continues, the sharper the correction will be.

The post AI Startups Depend on Costly APIs of Companies Burning Billions appeared first on Analytics India Magazine.

Small Successes Mask Fundamental Gaps in India’s AI Ecosystem https://analyticsindiamag.com/ai-startups/small-successes-mask-fundamental-gaps-in-indias-ai-ecosystem/ Wed, 20 Aug 2025 05:44:37 +0000 https://analyticsindiamag.com/?p=10176234

While applications flourish, true innovation remains stunted, risking India’s potential to become a leader in AI.

The post Small Successes Mask Fundamental Gaps in India’s AI Ecosystem appeared first on Analytics India Magazine.

The growth in India’s AI startup ecosystem, powered by waves of capital and a global-first mindset, often makes the headlines. However, these success stories mask persistent deficiencies in infrastructure, R&D investment, and innovation capacity. 

This systemic gap keeps the AI ecosystem heavily reliant on foreign foundational models, short on deep research capacity, and struggling to transition from derivative applications to core technology development.

Sanchit Vir Gogia, CEO and chief analyst of Greyhound Research, noted that familiar challenges hinder India’s goal to develop GPT-scale models. He said that 58% of decision-makers see the lack of affordable GPU clusters as the main obstacle, while 52% cite the absence of properly cleared Indic datasets and 49% point to uncertainties regarding model-release regulations and cross-border data flows.

These choke points force most Indian teams to “stick to fine-tuning and distilling imported models rather than attempting true sovereign pre-training,” he said.

The IndiaAI Mission, a ₹10,372 crore initiative, has shifted some focus toward closing these gaps with a call for proposals for indigenous foundational models, robust compute infrastructure, and open data commons. 

But, Gogia emphasises: “Application wins matter, but they need to be anchored to three shared assets: a national compute commons with multi-tenant scheduling, a fully licensed Indic Data Commons, and a transparent and stable regulatory regime for model release and data transfer.”

Applications Over Deeptech

The Times of India reported an “unprecedented” fundraising spree among Indian-origin AI startups employing a global-first approach. Firms like GigaML and Atomicwork have secured early-stage rounds from marquee investors such as Redpoint Ventures, Khosla Ventures, and Lightspeed India. 

The trend is evident: Businesses originating from India are embracing a global-first strategy for entering the market, the TOI noted, with numerous startups attaining substantial annual recurring revenue or securing high valuations while extending their reach beyond India from the outset. This progress indicates both investor trust and an acknowledgement that India’s tech talent is capable of competing on a global stage.

Success stories, however, represent only one layer of India’s AI narrative, where funders favour applications over deeptech. 

India’s VC landscape loves a quick win: much of the capital chases applied AI and rapid commercialisation, such as chat agents, workflow automation, and analytics tools built on top of Western LLM APIs.

This productisation-first bias delivers fast adoption and visibility, but as Shreshth Bhatt, a senior research associate at Global Market Insights, noted, “This is not innovation, but the application of existing models … value (innovation) is coming from the West. In the long term, western countries can easily build applications using the foundational models (which they themselves have invented), but India could lag as our innovation relies on the Western technology giants.” 

Without a shift in investor appetite, he warned, “India’s role will remain that of an AI customer, not a creator.”

Data support this trend. In 2024, funding for India’s deeptech sector, including AI, increased by 78%, yet most of it was directed towards application-level businesses rather than fundamental model research. Funding for early-stage deeptech decreased by 37%, as late-stage, market-ready solutions led the transactions.

What’s the Roadblock?

India faces challenges that hinder its innovation, particularly in artificial intelligence. A notable shortage of high-performance AI hardware limits the development of advanced models.

While initiatives like hyperscaler credits and domestic clusters exist, the unpredictable nature of access and system tenancy complicates efforts to provide seamless computational support. In addition to hardware constraints, the lack of domain-specific datasets poses a challenge.

Bhatt highlighted that many foundational LLMs in India rely on English-centric datasets, raising concerns about their relevance in a diverse linguistic context.

The current research culture often drives talented individuals abroad, not just for better salaries, but also for improved access to resources, mentorship, and infrastructure, along with fewer bureaucratic hurdles, he said. Domestic labs typically impose restrictive publication rights and limited collaboration, hindering innovation and talent retention.

Venkata Subramaniam, lead executive of IBM Quantum India, noted that India’s main challenge is that students and early-career researchers seldom get to work on real-world problems at scale. Innovation requires experimentation and iteration, but most talent is limited to theoretical or small projects.

“If a student in a rural college knows exactly what challenge their local farmers face, they should have the computing tools and mentorship to build an AI model to solve it. Today, that bridge simply doesn’t exist,” he added. 

Subramaniam believes the gap in AI development stems from frontier-level infrastructure and training being concentrated in a few corporate labs, leaving most universities and rural areas behind. 

To address this, he suggested implementing shared national GPU clusters for student-led projects focused on India-specific use cases, along with “Build-for-your-community” programs that connect local problem owners with AI talent and a curriculum that integrates domain-specific challenges like crop disease detection with real computing access.

Headline Wins to Global AI Leadership

Gogia aptly summarised: “Public funding, industry matching, and mission-style grants for foundational research must rise together, otherwise India will keep doing the applied work on someone else’s core science.”

Additionally, the establishment of open, rights-cleared domain datasets that accurately represent India’s diverse linguistic and sectoral landscape will be essential. 

A stable and transparent regulatory framework is necessary to facilitate model development, deployment, and cross-border collaboration. Furthermore, implementing long-horizon funding models will safeguard deep R&D initiatives, allowing them to thrive beyond the initial phase of commercial productisation.

Subramaniam said the country still hasn’t done enough with productisation to create products that genuinely impact society. In India, where technology penetration remains low, productisation should be a top priority. “What we lack is the hard engineering needed to take that science to market as real, impactful products,” he said. 

India’s next AI milestone shouldn’t be only another unicorn with a global-first go-to-market strategy, but a homegrown breakthrough in AI science, a sovereign model, algorithm, or dataset that becomes foundational for the world.

The post Small Successes Mask Fundamental Gaps in India’s AI Ecosystem appeared first on Analytics India Magazine.

]]>
Why Indian AI Startups Achieve Recognition But Struggle to Scale-up https://analyticsindiamag.com/ai-startups/why-indian-ai-startups-achieve-recognition-but-struggle-to-scale-up/ Sun, 17 Aug 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10176007

Nation’s deeptech ambitions seek a synergy between government and private sector initiatives to cross this threshold.

The post Why Indian AI Startups Achieve Recognition But Struggle to Scale-up appeared first on Analytics India Magazine.

]]>

India’s AI startup ecosystem is thriving with ambition, but the path from recognition to true scale is strewn with hurdles. Being recognised by a government body may serve as a launch pad, yet few ventures transition efficiently to raising institutional funding, maturing products, and driving real revenue.

To answer why successful AI scale-ups remain uncommon in India requires one to assess the influence of government initiatives and analyse the obstacles hindering the nation’s deeptech ambitions. 

Recognition by the Indian Department for Promotion of Industry and Internal Trade (DPIIT) serves as a stamp of approval for AI startups. But, what comes next?

According to Reva Malhotra, a consulting director who partners with early-stage startups, many founders report that after initial recognition, startups “develop well-structured pilot projects, solutions that demonstrate impressive potential in controlled environments, but often face significant hurdles when attempting to scale in real-world conditions.”

She points out that deploying an AI model across multiple geographies “requires access to extensive networks, local language adaptation, and integration with government systems – all of which demand time, resources, and systemic support.”

In practice, while technical teams show capacity for scale, their progress is limited by the readiness of Indian enterprises to adopt emerging AI solutions. There’s a marked reluctance among large organisations to engage with early-stage startups, further delaying mainstream adoption.

Institutional Funding

Institutional funding remains elusive for many. Government programmes and accelerators have tried to bridge this gap, but the difference in outcomes is striking.

CV Farish, regional lead for AI startup programmes at Google, told AIM, “In the last cohort [Google AI First Accelerator], we facilitated 1,000 investor connections and the startups raised over $61 million funding within six months after the program. Overall, across all our cohorts, startups from our accelerator have collectively raised over $4.5 billion since they joined the program ($5.4 billion overall).” 

He cited SpotDraft, Kroop AI, and Merlin AI as alumni who successfully transitioned to securing substantial capital and scaling their platforms shortly after completing the program.

By contrast, DPIIT-recognised startups, particularly those reliant solely on government schemes, find it harder to attract follow-on investment. The enthusiasm triggered by recognition sometimes fails to translate into momentum. “While grants form an important part of the growth equation, they represent only one element in a far more complex framework,” Malhotra said. Without further support, market access, mentorship, and buyer introductions, many startups stall at the MVP or pilot stage.

“While India now has 115+ unicorns, access to growth capital, compute, and deep research remain ongoing priorities. India’s late-stage institutional funding is improving, but the Series A+ transition is still the critical drop-off point, underlining the importance of continued capital, global integration, and R&D incentives,” said Rohan Dani, a senior associate at the investment platform Blacksoil. 

Scalable Products vs Pilot Projects

The pilot-to-scale dilemma persists in the sector. Malhotra said, “Although the concepts are strong, transforming them into viable, widely adopted products remains a formidable task.” Infrastructure and network limitations mean that many projects remain at the demonstrator stage, lacking the connectivity and customer engagement required for nationwide adoption, she added.

Dani highlighted that the gap between seed grants and Series A+ funding indicates challenges within both the startups and the ecosystem. While over $780 million was raised in 2024 (a 40% increase), early-stage investments dropped by 37%, favouring late-stage firms. 

AI startups with strong traction and product-market fit attract funding but often struggle to convert validation into sustainable revenue, needing better IP, market validation, and monetisation strategies. Investors are wary of long gestation periods and limited exits. Domestic funding is increasing, with Indian venture capitalists (VC) and family offices investing $1.4 billion in H1 2025, yet issues like infrastructure and access to computing resources remain critical.

“Flagship successes like Qure.ai’s international deployments and Sarvam AI’s sovereign multilingual LLMs demonstrate potential when government infrastructure, market pilots, and VC capital align. Such cases are exceptions; the policy-to-practice gap narrows, but robust, scalable business models at Series A+ are essential for sustaining India’s AI growth and bridging funding gaps,” Dani added. 

Even for startups participating in government-grant programmes like Startup India Seed Fund Scheme (SISFS) or Credit Guarantee Scheme for Startups (CGSS), progress is hindered by fragmented support. An AI startup specialising in medical diagnostics, for example, may build an MVP funded through SISFS, but unless “healthcare institutions are prepared to trial or procure the solution, progress stalls,” Malhotra added.

Do Government Grants Drive Real-World Traction?

Government schemes like the SISFS and CGSS offer capital at early stages. However, founders argue that funding alone is insufficient if not paired with tangible market access and post-grant facilitation.

Dani said, “The value of government funds is real, but sustainable impact requires timely support for customer acquisition and technology scaling.”

He added that in India, the startup policy ecosystem is now a key driver of AI and deeptech growth beyond the seed stage. Major initiatives, such as the IndiaAI Mission with a budget outlay of ₹10,300 crore over the next five years, a ₹1 trillion innovation fund, and over 10,000 subsidised GPU units, enhance support for startups. 

The AI4Bharat program promotes open-source, local-language models, benefiting over 40 million students with AI-driven digital learning. Startups like Krutrim, Sarvam AI, Qure.ai, and AgNext showcase the effectiveness of these policies in healthcare, agri-tech and SaaS, Dani added. 

Why Indian AI Startups Pivot Globally

Founders in the AI sector report significant gaps despite national initiatives aimed at fostering innovation. Key challenges include the lack of actionable frameworks for data sharing, regulatory testing environments, and compliance with ethical AI standards, which hinder adoption and implementation, particularly evident in sectors like facial recognition and retail security. 

Additionally, many government programmes fail to provide ongoing connections to customers or the mentoring necessary for scaling businesses. 

A remarkable 93% of Indian startups in Google’s AI First Accelerator “are building for international markets, with their primary & secondary markets extending beyond India from day one,” said Farish. 

“Indian AI startups are increasingly setting their sights on international markets from the outset, demonstrating a bold and aspirational drive for global leadership.”

This global pivot isn’t only aspirational; it’s pragmatic. Domestic challenges, including a scarcity of high-quality data, slow enterprise adoption, long sales cycles, funding gaps for deeptech, and market readiness have compelled founders to look outward. “Addressing these issues within the ecosystem is crucial for strengthening the Indian market for AI startups,” Farish added.

According to Google’s accelerator programme, startups like SpotDraft used the opportunity to “reduce costs by 80%, increase accuracy by 30%, and reduce latency by 70%,” a leap only achievable with guidance and infrastructure.

How Can India Produce More AI Scale-Ups?

India’s AI ecosystem is at an inflection point. Government recognition opens doors, but private accelerators and multinational platforms (such as Google) have demonstrated models of intense engagement, global market reach, and integrated mentorship that drive real-world traction, investor interest, and product scale.

Bridging India’s deeptech gap not only calls for better funding, but also end-to-end support and a readiness to address structural barriers. 

The post Why Indian AI Startups Achieve Recognition But Struggle to Scale-up appeared first on Analytics India Magazine.

]]>
Voicing AI Loves India, But Knows Startups Still Struggle Here https://analyticsindiamag.com/ai-startups/voicing-ai-loves-india-but-knows-startups-still-struggle-here/ Sat, 16 Aug 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10175937

LTIMindtree invested $6 million in Voicing AI that brings human-like voice capability across more than 20 languages.

The post Voicing AI Loves India, But Knows Startups Still Struggle Here appeared first on Analytics India Magazine.

]]>

When Abhi Kumar founded Voicing AI in 2024, his experience with markets’ response to emerging technologies came into play, particularly how to deal with the initial resistance.

He had managed Microsoft’s emerging-market strategy globally, co-headed its AI investment fund M12, and deployed over $1.2 billion across 120 companies between 2019 and 2023. 

Back in 2014, when Satya Nadella took over Microsoft, Kumar remembers questioning the company’s leadership: how many of you have sent a WhatsApp or WeChat message in the last 24 hours? Half the hands went up, mostly among employees from India or China. 

“Americans had no idea about WhatsApp or WeChat. They used text messaging,” he said. That insight eventually shaped Voicing AI. It was there he began forming a hypothesis: AI could play the same role in this decade that browsers did in the 1990s — an interface unlocking entirely new business models.

“When the first browsers came out, they didn’t hold value themselves, but connected people to the internet and enabled PayPal, Amazon, Netflix. AI, to me, is the next browser,” said Kumar in an exclusive interaction with AIM.

The Overcrowded Agentic AI Space

Voicing AI builds synthetic agents that can take calls, chat, respond to emails, think through customer issues, and complete tasks — essentially replacing the lowest tiers of call centre work (L0 and L1), while escalating complex cases to humans. 
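
The escalation split described here can be illustrated with a small routing rule. The sketch below is a hypothetical illustration, not Voicing AI’s implementation; the intent names and thresholds are assumptions.

```python
# A minimal sketch of the L0/L1-with-escalation pattern described above:
# the agent keeps routine, confident cases and hands everything else to a human.
# Intent names and thresholds are illustrative assumptions.
ROUTINE_INTENTS = {"order_status", "password_reset", "billing_query"}

def route_call(intent: str, confidence: float, sentiment: float) -> str:
    """Return 'ai_agent' for routine, confident, calm calls; otherwise escalate."""
    if intent in ROUTINE_INTENTS and confidence >= 0.85 and sentiment > -0.5:
        return "ai_agent"      # L0/L1 work stays with the synthetic agent
    return "human_agent"       # complex, low-confidence or heated cases escalate

# A clear billing query stays with the AI; an angry, ambiguous call escalates.
print(route_call("billing_query", 0.93, 0.1))   # -> ai_agent
print(route_call("complaint", 0.60, -0.8))      # -> human_agent
```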

By March 2024, Kumar said, they were passing the Turing test in more than 90% of cases. Still, technology is only half the story; the bigger challenge is figuring out where AI fits into the messy reality of enterprise operations. 

Kumar leans on his venture experience to tackle this: “It’s not the best tech that wins, it’s the tech that’s implemented to deliver business outcomes.”

But even so, the market is getting crowded. More and more agentic AI startups are coming into the space. Players like Gupshup, RevRag.ai, Yellow.ai, Sarvam, and other AI startups have been racing to sign enterprise customers.

This is because most of the agentic AI traction is in the B2B space and not for consumer apps. Startups are racing to stake a claim, but most are building on the same underlying tech stacks, fine-tuning existing LLMs rather than creating defensible IP.  

Read: India has 109 Agentic AI Startups Building in a Vacuum

One of Voicing AI’s competitors, Gupshup has been aggressively expanding its AI offerings, moving beyond messaging APIs to voice bots and customer engagement. Founder-CEO Beerud Sheth had told AIM that he had also been preparing for the agentic AI wave. 

Sheth believes that real value lies in the application layer. That’s where Gupshup’s existing relationships with enterprises become its biggest asset. The same goes for Voicing AI.

In December last year, LTIMindtree invested $6 million in Voicing AI for its proprietary technology, which brings human-like voice capability across more than 20 languages by using open-source tools available in the market and fine-tuning them for specific use cases.

This demonstrates the importance of securing enterprise adoption over having a sophisticated model. It’s why companies from messaging veterans to new AI-native entrants are crowding the same field, often targeting the same set of early enterprise adopters. 

In such an environment, speed of implementation and integration into messy, real-world workflows becomes the real moat.

PoC Purgatory and Meagre Margins

Still, as the market crowds, the pressure to win large contracts — especially in higher-margin geographies — will only grow. Indian enterprise deals might help with credibility, but they won’t drive the kind of revenue needed to outpace global rivals.

Kumar believes the economics favour AI agents. In the West, labour costs are high, attrition in call centres is a constant problem, and multilingual coverage is expensive. AI can work 24/7, deliver consistent quality, and switch between languages instantly. 

In theory, India should be the perfect market: English is widely spoken, the outsourcing industry is massive, and companies are under pressure to cut costs. In practice, it’s the same story that has played out for SaaS, AI, and nearly every other enterprise tech sector — Indian customers love pilots, but rarely pay.

Kumar finds Indian enterprises to be surprisingly “very very AI-first” in mindset, even if their budgets don’t always match their enthusiasm. “I truly believe AI is the era where India has an intrinsic advantage globally,” he said, pointing to the hybrid human-in-the-loop model that many companies will operate under for years to come.

The economics, however, remain challenging. In the US or Europe, replacing or augmenting a $35,000-a-year customer service agent with an AI agent offers plenty of margin to work with. 

In India, where an equivalent role might pay ₹3 lakh, the cost savings are less compelling, and some companies still opt to “just hire five engineers” rather than pay for automation, said Kumar. 

This leads many Indian AI startups to look abroad for their highest-value customers — a trend that SaaS companies experienced a decade ago.

The problem isn’t new. For over a decade, founders have grumbled about the “PoC purgatory” that plagues Indian enterprise sales. The pattern is familiar: a startup spends months building a pilot for a large company, often heavily customised, but gets no contract, no revenue, just a “we’ll get back to you.”

Read: Free PoCs are Killing Indian AI Startups

Kumar advises AI founders to think globally from day one, building for the markets where the unit economics make the most sense.

He hasn’t written off India though. Some of Voicing AI’s largest customers are here, and he sees the market as a testing ground for operational complexity — multiple languages, high call volumes, and customers who expect fast, personalised service. If a system can handle that, it can handle almost anything.

The post Voicing AI Loves India, But Knows Startups Still Struggle Here appeared first on Analytics India Magazine.

]]>
Columbia Grads Create ‘August’ to Bring AI to Midsize Legal Practices https://analyticsindiamag.com/ai-startups/columbia-grads-create-august-to-bring-ai-to-midsize-legal-practices/ Mon, 11 Aug 2025 12:30:22 +0000 https://analyticsindiamag.com/?p=10175677

“August gives midsize law firms the freedom to shape AI around their own playbooks.”

The post Columbia Grads Create ‘August’ to Bring AI to Midsize Legal Practices appeared first on Analytics India Magazine.

]]>

AI tools are changing the way legal work gets done, whether it’s reviewing contracts or handling research across multiple jurisdictions. Yet, so far, most of these technologies have been built for large law firms with deep pockets, thereby leaving midsize firms out in the cold.

That is exactly the gap August, an AI platform for midsize law firms, is trying to close. The company has raised $7 million in a seed round led by NEA and Pear VC, with additional backing from Afore Capital, leading law schools and angel investors including Gokul Rajaram, Ramp CPO Geoff Charles, OpenAI head of engineering David Azose and Bain Capital Ventures partner Kevin Zhang.

The company said the funding will go towards expanding August’s modular AI agent platform and personalised onboarding model, which adapts to each firm’s unique workflows and legal requirements.

“August gives midsize law firms the freedom to shape AI around their own playbooks, whether they’re advising clients in Miami, Sydney or Mumbai,” said co-founder and CEO Rutvik Rau in a statement. 

Founding Journey 

In an exclusive interview with AIM, Sidhant Raghuvanshi, a lawyer and member of the founding team, shared that the funding round valued August at $28 million. 

Raghuvanshi added that in 2023, three friends from Columbia University—Rau, Thomas Bueler-Faudree, and Joseph Parker—noticed midsize law firms were being overlooked in the legal AI space. 

They met at Columbia’s machine learning research lab, bringing experience from midsize law firms, Blackstone’s data science team and tech companies like DoorDash and PayPal.

The three realised that while the largest law firms could afford custom-built software and AI tools, an entire tier of firms still relied on manual, document-heavy processes.

Competitors and USP 

The company is headquartered in New York and is currently expanding its presence in London and India. “We’re onboarding a bunch of law firms here (London), and the same in India,” Raghuvanshi said. The company also has clients in Egypt, Australia, Singapore and the US.

August offers three main functions: analysis of legal and other documents, legal research across jurisdictions, and drafting services. 

Hicksons Lawyers, an Australian firm, reported reviewing 5,000 negligence files 90% faster, while Indian tax firm ELP cut diligence time by 60% using August. A Florida litigation team used the platform to review 40,000 pages in a $100 million dispute, which the company said reduced costs and freed partner time.

“We conducted a process to find the best option, and we chose August because they had the most accurate platform and were the most willing to work with us to solve our specific challenges,” said David Fischl, partner at Hicksons.

Raghuvanshi noted that the company’s main competitors are global players Harvey and Legora. The key difference, he highlighted, is that August focuses on offering customisation for each client rather than a single standardised product. 

“We want to serve mid-sized law firms to help them grow and compete with the bigger firms. That’s the narrative we have within the company,” he said.

He explained that August’s USP is that they offer customisations to every client. “If a law firm’s M&A team and taxation team have different styles of formatting or research, our system adapts to those styles,” Raghuvanshi said.

Hallucinations and Accuracy 

Raghuvanshi revealed that August is model-agnostic. The platform relies on multiple AI models, including Gemini, ChatGPT and Anthropic’s Claude, with custom agents built on top, selecting the best response from different models for each query.

Raghuvanshi said August offers three main capabilities. The first is document analysis, allowing users to query across large sets of files. “You can upload, say, 15,000 legal documents or Excel sheets…and then query over them,” he explained. The second aspect involves legal research, spanning jurisdictions such as India, the UK, Australia and the US. 

The third is drafting, where a lawyer can upload a precedent contract and add new terms. “Our system will be able to give a first draft of the new contract,” he said.

Regarding hallucinations, Raghuvanshi explained that the model always generates responses from the document that has been uploaded. “It goes a few steps beyond RAG because it’s also querying different models at the same time…The models also query against each other to correct themselves,” Raghuvanshi explained. 
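
The cross-checking pattern Raghuvanshi describes can be sketched as a simple orchestration step: two models answer only from the retrieved passages, and a second pass verifies one answer against the other and the source text. This is an illustration, not August’s code; `ask_gemini` and `ask_claude` are hypothetical stand-ins for whichever model clients a platform actually uses.

```python
# A rough sketch of document-grounded answering with cross-model verification.
# The caller supplies two model functions (e.g. thin wrappers around real APIs).
def cross_checked_answer(question, passages, ask_gemini, ask_claude):
    context = "\n\n".join(passages)
    grounded_prompt = (
        "Answer strictly from the passages below and cite the passage you used.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    # Each model drafts an answer independently from the same uploaded material.
    draft_a = ask_gemini(grounded_prompt)
    draft_b = ask_claude(grounded_prompt)

    # Second pass: the drafts are checked against each other and the source text,
    # the "models query against each other" step described above.
    review_prompt = (
        f"Passages:\n{context}\n\nAnswer A: {draft_a}\nAnswer B: {draft_b}\n\n"
        "Which answer is better supported by the passages? "
        "Return the better answer, corrected if needed, with citations."
    )
    return ask_claude(review_prompt)
```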

He added that the main return on investment for clients has been saving time, which in turn allows lawyers to take on more work. He cited a recent example from a US law firm that needed to prepare deposition questions but lacked the time. The team uploaded about 17,000 emails and nearly 10,000 documents to August’s platform, which analysed the material and generated a list of questions three days before the deposition. 

Sam Altman, CEO of OpenAI, recently warned users about sharing sensitive information with ChatGPT, noting that conversations with the AI do not have legal confidentiality protections like those with doctors, lawyers or therapists.

When asked about legal protections for chats held with AI, Raghuvanshi pointed out that regulation is necessary. “This space definitely needs a lot of regulation in place. Different jurisdictions will adopt different policies,” he said, pointing to the EU AI Act as an example.

While electronic communication, such as WhatsApp messages, has been admitted as evidence in court, Raghuvanshi noted it is unclear whether AI chats should be treated the same way. “Personally, I don’t think it should be allowed, but I’m sure regulators will come up with a much better policy,” he said.

The post Columbia Grads Create ‘August’ to Bring AI to Midsize Legal Practices appeared first on Analytics India Magazine.

]]>
Astra Collapse Shows What Indian AI Startups Still Need to Figure Out https://analyticsindiamag.com/ai-startups/astra-collapse-shows-what-indian-ai-startups-still-need-to-figure-out/ Mon, 11 Aug 2025 11:38:43 +0000 https://analyticsindiamag.com/?p=10175663

Astra landed two major clients and faced no direct competitors in its niche. But it never scaled beyond beta.

The post Astra Collapse Shows What Indian AI Startups Still Need to Figure Out appeared first on Analytics India Magazine.

]]>

In late July 2025, Astra — a young AI sales-tech startup backed by Perplexity AI founder Aravind Srinivas — shut down just four months after having raised funds. The closure wasn’t an isolated blip; it captured the growing pains of India’s AI ecosystem, as it moved from hype to hard reality.

Astra’s cofounder and CEO Supreet Hegde was candid in his exit note. Disagreements with cofounder Ranjan Rajagopalan over growth pace, long enterprise sales cycles, and a lack of trust from potential customers weighed the company down, according to Hegde. 

The sudden rise of competing AI agents only added confusion for buyers.

“Working with larger companies meant navigating lengthy sales cycles, especially as an early-stage startup asking clients to trust us with sensitive data from platforms like Salesforce, G-drive, Slack, and CLM,” Hegde wrote. 

“The current surge of interest and confusion surrounding AI agents added yet another layer of complexity, with many clients unsure of whom to trust or how to evaluate these AI agents,” he added.

Founded in 2023, Astra had pitched itself as the “Chief of Staff for every account executive,” promising to automate 80% of AE activities and boost deal execution quality. 

It landed two major clients and faced no direct competitors in its niche. But it never scaled beyond beta.

The Reckoning of 2024–25

Astra’s challenges mirror a broader reckoning in Indian AI. In 2024–25, the industry stopped being a story about boundless promise and became a test of who could build something that worked — and sell it repeatedly.

Many AI founders say endless unpaid proofs of concept are killing early-stage startups. “AI founders finally skipping selling to Indian customers after doing PoCs after PoCs and then being requested for even more ‘free’ PoCs. There is a limit to this… enough is enough,” Vaibhav Domkundwar, CEO of Better Capital, had said earlier.

Read: Free PoCs are Killing Indian AI Startups

Some of the most talked-about names ran into the same wall. Hyderabad-based Subtl.ai, which had beaten OpenAI benchmarks and counted the State Bank of India as a client, shut down in July 2025. Founder Vishnu Ramesh summed it up, saying they had spread themselves too thin.

InsurStaq.ai, which built the specialist InsurGPT model and had the kind of backers most startups dream of, folded when competitive pressure outpaced traction as it tried to scale.

These weren’t isolated failures. They were the predictable result of companies built in a period that rewarded speed and fundraising over sustainable revenue.

Funding Still Flows — But to Fewer Hands

The wider ecosystem also felt the shock. Citing Tracxn data, the Financial Express reported that over 28,000 startups shut down across 2023 and 2024: 15,921 in 2023 and 12,717 in 2024.

In AI, capital still came in, but to a narrower set of winners. Indian AI startups raised $780.5 million in 2024. 

Krutrim AI, founded by Bhavish Aggarwal, became India’s first AI unicorn in January 2024 after a $50 million round for India-first LLMs that understand 22 scheduled languages and generate content in 10.

Kore.ai pulled $150 million. Atlan raised $105 million for data governance. Neysa secured $50 million for AI cloud infrastructure. Sarvam AI raised $41 million for Indian-language LLMs. Nurix AI, led by Mukesh Bansal, raised $27.5 million for enterprise AI agents.

Despite these and a few other large rounds, funding for AI startups remained thin. New startups continue to raise small amounts, but late-stage investment is scarce, and 2025 looks even worse.

How to Change This?

A recent Nasscom report shows India’s AI startup count grew 3.7x in a year, crossing 890 ventures, with a 2.8x jump in new formations and a 1.7x rise in patents. Over 83% of these are application-focused, building vertical AI and SaaS tools for faster commercialisation.

In the first half of 2025, the sector raised $990 million, up 30% year-on-year. But high compute costs have now overtaken talent shortages as the biggest scaling barrier.

Arpit Mittal, founder and CEO of edtech startup SpeakX, told AIM that 2024-25 rules from SEBI now ask angels to prove higher net-worth and go through extra accreditation. 

“Many casual angels don’t want that paperwork, so they have paused investing, while the seasoned folks are simply cutting ticket sizes from ₹1-2 crore to ₹50-75 lakh per deal,” he said.

Also, agentic AI is emerging as the next big frontier. According to an earlier Tracxn report, there are around 109 agentic AI startups in India. But most of them are building products for users that don’t exist.

In India, getting paid for a proof of concept (PoC) is becoming a rare win. Most early-stage startups find themselves in endless sales loops where potential clients demand increasingly elaborate demos, only to ghost when it comes to commercial discussions.

To change this, India still needs better compute infrastructure, regulatory clarity, production-ready talent and, most importantly, the right use cases.

“GenAI startups have the potential to shape the future of AI innovation for emerging markets and beyond,” Rajesh Nambiar, president of Nasscom, said in the report. But this might take more than just building good tech.

The companies weathering the storm aren’t chasing a multipronged strategy. They pick a vertical and pursue it with the right GTM strategy. Enterprise AI, Indic-language models, AI infrastructure, and industry-specific solutions are faring better than broad horizontal plays.

The lesson from setbacks like that of Astra is to start with a customer-facing problem, not a model. Stay in the market long enough to figure out how to make money and build to stay relevant, not just for the next funding round.

The post Astra Collapse Shows What Indian AI Startups Still Need to Figure Out appeared first on Analytics India Magazine.

]]>
Hello, AI speaking: Hiring Goes Vernacular With Voice AI https://analyticsindiamag.com/ai-startups/hello-ai-speaking-hiring-goes-vernacular-with-voice-ai/ Fri, 08 Aug 2025 12:31:13 +0000 https://analyticsindiamag.com/?p=10175495

Catering to the vast semi-skilled workforce, voice agents bridge the skill gap and accelerate hiring.

The post Hello, AI speaking: Hiring Goes Vernacular With Voice AI appeared first on Analytics India Magazine.

]]>

In India’s hinterlands, where English fluency is rare, internet bandwidth often lags and smartphones may still be shared among families, a quiet tech revolution is reshaping the way people find jobs. Voice AI, once a novelty reserved for customer support or virtual assistants, is now becoming a powerful force in recruitment across traditional sectors like manufacturing, BFSI, retail, logistics, and agriculture.

At the forefront of this transformation is a new generation of AI-driven voice agents, built for India’s diverse linguistic and cultural landscape. Their role? To bridge the country’s deep skill gap and growing employment demand by speaking the language of India’s vernacular job seekers, literally and figuratively.

India’s informal and semi-formal workforce is vast, comprising over 80% of all employed people. Most of this population communicates in regional languages, often lacks digital literacy, and relies heavily on basic phones rather than apps or email. 

Traditional HR tech systems weren’t designed for this segment, especially in the informal sector. The result has been a fractured recruitment pipeline filled with inefficiencies, high dropout rates, and costly manual processes.

Voice AI platforms are now trained on fundamental recruiter-candidate interactions across India’s many dialects, capturing nuances that range from hesitation markers to regional slang.

Catering to the Indian Audience 

Companies like Hunar.AI have created contextual training layers over foundational LLMs (like those from OpenAI and Google) to personalise each conversation. The AI model is fine-tuned to understand whether a pause means hesitation or a network lag, and whether an objection is genuine or circumstantial. 

The startup also hired 30 recruiters to conduct recruitment across various industries. These conversations were recorded and were used to build a contextual layer around recruitment to capture different dialects and how they speak, Krishna Khandelwal, cofounder and CEO of Hunar.AI, told AIM

Additionally, “none of the individuals whose voices were recorded was replicated or cloned. The system is not designed to mimic or impersonate specific people. The voice models were created using a combination of licensed datasets and ethically sourced synthetic voice technology,” said Harjoth Sudan, business lead, AI-enabled services at Hunar AI. 

These AI agents might ask if a person has experience selling gold loans, probe into how they generated leads, and assess their willingness to work in the field, all in Hindi, Kannada, or Bhojpuri. The result is a dynamic understanding of candidate readiness.
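
As a rough illustration of what a “contextual layer” over a foundational model can look like, the sketch below wraps a chat-model call in a system prompt carrying language, role, and screening context. It assumes the OpenAI Python client, and the role, questions, and prompt wording are assumptions for illustration, not Hunar.AI’s actual prompts.

```python
# A minimal sketch of a contextual layer: regional language, role context and
# screening goals are injected around each candidate turn before the base model
# is called. The model name and prompt text are placeholders.
from openai import OpenAI

client = OpenAI()

def screening_turn(candidate_utterance: str, language: str = "Hindi") -> str:
    context_layer = (
        f"You are a recruitment voice agent speaking {language}. "
        "You are screening candidates for a gold-loan field sales role. "
        "Ask one question at a time about selling experience, lead generation "
        "and willingness to do field work. Treat long pauses as possible "
        "network lag rather than refusal, and rephrase politely."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": context_layer},
            {"role": "user", "content": candidate_utterance},
        ],
    )
    return resp.choices[0].message.content
```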

Hiring Speed and Scale

Recruitment that once took four to five days now happens in under 24 hours.

A leading Indian quick commerce company, which onboards tens of thousands of delivery workers monthly, drastically reduced conversion time by deploying voice AI. Instead of waiting for human recruiters to connect with each lead manually, the voice agent automatically initiates contact, handles documentation guidance, and even nudges the user to install the app and accept the first order. What used to be a leaky, human-driven funnel is now a scalable and responsive hiring engine.

SquadStack, another player in this space, handles nearly 50,000 voice interactions in the time it takes to hold a single meeting, even in low-connectivity areas. Their models are trained on over five lakh hours of contact centre audio, offering fluency in six languages, including Hindi, Tamil, and Malayalam.

“The idea of voice AI replacing human recruiters often triggers healthy scepticism, but when framed as a collaborative partner, the ecosystem becomes far more receptive,” Sudan added. Many HR leaders and talent acquisition heads view voice AI as a scalable extension of their team, particularly beneficial in frontline or high-turnover positions where speed and consistency are essential. The emphasis is on augmentation rather than replacement, he said. 

Voice AI in Career Counselling

In a society where career counselling is often limited to the urban elite, AI could become the first accessible guide for India’s vernacular workforce.

Hunar.AI has started piloting career discovery features within its voice agents. A delivery executive might be asked if they know Excel, and could be guided to a data entry role. Such interaction opens up mobility in jobs, not just across geography, but across skill levels.

Voice AI agents now also act as post-hire feedback gatherers, calling workers after the fifth or tenth assignment to understand job satisfaction, grievances, or logistical issues, feeding insights to the HR teams.

Bias Reduction and Inclusion

Unlike traditional hiring processes, where recruiters might favour candidates based on name, gender, caste, religion or location, AI agents can potentially treat everyone equally. By objectively assessing responses and screening based on skills or intent, not identity, these systems promote fairer hiring across India’s socio-cultural divides.

Voice AI also supports differently-abled users who may struggle with digital interfaces. For many, speaking into a phone is far easier than navigating a smartphone app or typing in English.

Apurv Agrawal, cofounder and CEO of SquadStack, told AIM that SquadStack’s mass hiring processes typically do not focus on religious backgrounds, which may help minimise biases. Human recruiters might unconsciously let biases influence their decisions based on names or personal interactions, he said. 

However, SquadStack’s calls start with a “Ram Ramji” greeting, which is not religiously neutral. Given India’s rich diversity of cultures and faiths, it is essential for businesses to adopt neutral and inclusive greetings.

Challenges

Voice AI in India faces key challenges, including accent variability due to linguistic diversity, which makes it hard for models to handle regional dialects. Network lag in rural areas can also lead to dropped calls or misinterpretations. 

Hunar’s Voice Activity Detection system helps resolve latency issues in telephony, utilising servers in India for regulatory compliance and language models located in the US, said Khandelwal.

He added, “We created configurations and trained on call recordings to analyse communication patterns across different regions. We assessed round-trip latency and explored the effective use of filler words.”
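
To make the latency point concrete, here is a minimal sketch of how voice activity detection can shorten response time on a telephony call: the bot treats roughly 300 ms of trailing silence as the end of the caller’s turn instead of waiting for a fixed timeout. It assumes the open-source webrtcvad library and 8 kHz telephony audio, and is an illustration rather than Hunar.AI’s system.

```python
# A minimal end-of-utterance detector over 16-bit mono PCM frames.
import webrtcvad

SAMPLE_RATE = 8000            # narrowband telephony audio
FRAME_MS = 30                 # webrtcvad accepts 10, 20 or 30 ms frames
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2   # bytes per frame
SILENCE_FRAMES_TO_STOP = 10   # ~300 ms of silence ends the caller's turn

def end_of_utterance(pcm_frames) -> bool:
    """Return True once the caller has spoken and then gone quiet."""
    vad = webrtcvad.Vad(2)    # aggressiveness 0 (lenient) to 3 (strict)
    silent, heard_speech = 0, False
    for frame in pcm_frames:  # each frame is FRAME_BYTES of raw PCM
        if vad.is_speech(frame, SAMPLE_RATE):
            heard_speech, silent = True, 0
        else:
            silent += 1
        if heard_speech and silent >= SILENCE_FRAMES_TO_STOP:
            return True       # caller stopped talking; the bot can respond now
    return False
```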

By enabling meaningful conversations in local languages, AI agents are digitising recruitment. And for millions of job seekers in India’s traditional sectors, that means the most productive recruiter they’ll ever meet may not be a person, but a voice.

The post Hello, AI speaking: Hiring Goes Vernacular With Voice AI appeared first on Analytics India Magazine.

]]>
Guardian ‘Angels’ Turn Gatekeepers as Indian AI Startups Face Tighter Checks https://analyticsindiamag.com/ai-startups/guardian-angels-turn-gatekeepers-as-indian-ai-startups-face-tighter-checks/ Thu, 07 Aug 2025 08:47:36 +0000 https://analyticsindiamag.com/?p=10175245

Only $94.8 million has been raised from 13 funding rounds as of July 30, the lowest since 2017.

The post Guardian ‘Angels’ Turn Gatekeepers as Indian AI Startups Face Tighter Checks appeared first on Analytics India Magazine.

]]>

India’s AI startup boom once saw angel investors racing to grab early equity in anything involving AI in its pitch deck. However, come 2025, the euphoria seems to be cooling. Funding rounds have thinned, seed-stage cheques are drying up, and angel investors are keeping their powder dry.

While investors such as AngelList, Accel and Venture Catalysts have been active with multiple portfolio companies, the number of active angel investors has decreased compared to previous years. Notably, there are significantly fewer new deals reported in 2025, according to data from Tracxn.

Most investments are concentrated in the Seed and Series A stages, with angel investors largely contributing at the Seed stage. The significant number of Seed investments suggests early-stage AI startups continue to generate interest, yet the overall volume of angel investment remains relatively low.

Funding Momentum Grows, But With Caution

Angel investing in India has just hit a notable $1 billion in commitments, but there are concerning signs that this fledgling asset class might be at risk, the Economic Times reported. The surge in regulatory oversight may play a crucial role in this concern. Over the past year, commitments to angel funds grew by an impressive 44% and investments rose by 33%.

While these statistics reflect positive growth, SEBI appears keen to further refine the conditions under which angel funds operate. The direction may be encouraging, but it’s crucial to exercise caution. 

Speaking about his experience, Arpit Mittal, founder and CEO of edtech startup SpeakX, told AIM, “What we see isn’t a mass exit; it’s more like angels putting their foot on the brake. 2024-25 rules from SEBI now ask angels to prove higher net-worth and go through extra accreditation.” 

“Many casual angels don’t want that paperwork, so they have paused investing, while the seasoned folks are simply cutting ticket sizes from ₹1-2 crore to ₹50-75 lakh per deal,” he added.

As the culture of risk-taking begins to grow beyond established business families, creating fresh funding sources remains crucial. If angel funds were to vanish because investors are reluctant to seek accreditation, many talented entrepreneurs may never even make it to the doorstep of venture capital or private equity.

The contrast with 2021-22 is stark, when the country saw a spike in AI optimism post-COVID-19, fuelled by generative AI breakthroughs and Silicon Valley’s bullishness. Cut to 2025, that optimism has morphed into a cautious wait-and-watch strategy in India.

The number of funding rounds peaked at 53 in 2021. However, a notable decline is expected in 2025, with only $94.8 million raised from 13 rounds as of July 30. Moreover, the count of active angel investors appears to be decreasing compared to previous years, especially with fewer new deals reported for this year.

Ankita Vashistha, managing partner at Arise Ventures, believes that the current landscape seems to be somewhat specific to India. Many Indian startups tend to focus on building applications rather than foundational AI technologies, highlighting a need for deeper innovation in the sector. While there is excitement and investment in AI, there’s also a trend of ‘AI washing’ similar to greenwashing, she added. 

She asserted that while it’s important not to generalise, there’s a clear difference in the maturity of startups focused on core AI technologies. Overall, she anticipates a decline in the volume of deals globally, with the US continuing to attract substantial funding, whereas India may see lower funding levels.

Hard to Evaluate Deep Tech Founders

In a conversation with AIM, Rahul Agarwalla, managing partner at SenseAI Ventures, an early-stage fund investing in AI-first startups, said, “In a world where every pitch deck claims to be ‘AI-powered’, VDAT (variety, data, architecture and team) helps us cut through the noise. It allows us to identify companies that aren’t just adding AI to existing workflows, but rethinking the problem itself through an AI-native lens, companies where the model is inseparable from the product, and the product wouldn’t exist or scale without it.”

The Rise and Dip

Based on Tracxn’s reports, the total funding reached $268.3 million in 2024, the highest in the dataset, with 49 funding rounds, second only to 2021’s 53 rounds. This surge reflects growing investor interest and larger deal sizes, indicating strong confidence in select startups’ growth potential.

Yet, 2025 is already witnessing a significant downturn. If the trend persists, it could mark the lowest investment levels since 2017, highlighting a possible retreat from early-stage funding and suggesting that many startups are struggling to secure necessary capital.

From 2021 to 2023, funding rounds ranged from 43 to 53 annually, but this year’s steep decline suggests reduced confidence in seed and angel-stage investments. Historically, funding spikes in 2018 and 2022 aligned with rapid AI adoption, indicating a tendency for larger bets on fewer startups rather than a proportional increase in funding rounds. This trend may significantly reshape the investment landscape.

Mittal explained that investors are being cautious due to the rising popularity of sectors like climate technology, electric vehicle supply chains and cross-border SaaS, which have drawn attention away from AI. 

Investors now prefer AI startups with clear, paying customers and efficient, smaller teams of around 15 members, rather than larger groups. Valuation fatigue is also a concern. The hype of 2023 inflated seed valuations to Series A levels, prompting angels to wait for prices to stabilise.

Investment is currently concentrated among a select group of investors familiar with AI or focused on deep-tech. Traditional angel investors and early-stage funds that typically support SaaS or consumer technology seem hesitant to engage.

However, Agarwalla sees SaaS as evolving rather than being outdated. While traditional SaaS provided valuable services with long sales cycles and hard-to-measure outcomes, AI-native companies deliver measurable ROI from day one. They boost productivity, enhance decision-making and automate workflows, actively driving efficiencies that impact the bottom line, he added. 

Nonetheless, Mittal pointed out, “Angels haven’t vanished, they’re just pickier, writing smaller cheques, and hunting for startups that show real traction or deep tech. If you’re building something with a true data advantage and can prove to paying users, the doors are still open, just expect more due diligence questions and leaner valuations than the 2023 peak hype.”

The post Guardian ‘Angels’ Turn Gatekeepers as Indian AI Startups Face Tighter Checks appeared first on Analytics India Magazine.

]]>
Indian AI Startups are Obsessed with Open-Source Small Models  https://analyticsindiamag.com/ai-startups/indian-ai-startups-are-obsessed-with-open-source-small-models/ Thu, 24 Jul 2025 11:06:00 +0000 https://analyticsindiamag.com/?p=10174079

“Just having them openly available, without any restrictions, helped a lot in my research journey.”

The post Indian AI Startups are Obsessed with Open-Source Small Models  appeared first on Analytics India Magazine.

]]>

As foundational models become more capable, a parallel trend is emerging in open-source AI: developers and entrepreneurs are building smaller, more specialised models that can be deployed locally. For many developers, the journey started with Meta’s Llama 2. 

This conversation unfolded on an episode of AI Talks by AIM, powered by Meta. Sunil Abraham, public policy director of data economy and emerging tech at Meta India, led a deep dive into how open-source generative AI is driving real-world impact well beyond Silicon Valley.

He was joined by Pratik Desai, founder and CEO of KissanAI, and Adithya S Kolavi, AI researcher at CognitiveLab. Both are practitioners building frontier models for India’s unique requirements.

“Llama 2 was a very revolutionary step in the open source journey, especially in mine,” Kolavi said.

Kolavi said that Llama was more than just a model; it came with an ecosystem of frameworks that made it easier to run and adapt. “Inferencing was hard initially, but tools like vLLM and SGLang came along and integrated with Llama early on, which made things easier,” he said. 

This openness enabled Kolavi to transition from a full-stack developer to a researcher and model developer. “These models can be adapted for different languages and modalities,” he said. “Just having them openly available, without any restrictions, helped a lot in my research journey.”

As the ecosystem matured, fine-tuning support also improved. Hugging Face added integration, and other libraries such as Llama Factory followed suit, making the model even more accessible to developers.
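
To make the tooling point concrete, the sketch below shows the kind of local inference the speakers describe: serving an open-weights Llama-family checkpoint with vLLM. The model ID and prompt are placeholders, not details from the episode.

```python
# A minimal sketch of local inference with vLLM on an open Llama checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")      # placeholder model ID
params = SamplingParams(temperature=0.7, max_tokens=200)

# Generate a completion for a single prompt; vLLM batches and schedules requests.
outputs = llm.generate(
    ["Summarise common pest-control advice for cotton farmers."], params
)
print(outputs[0].outputs[0].text)
```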

Desai, who also launched the agricultural AI platform, Dhenu, echoed the sentiment. He credited Llama 2 for powering one of India’s earliest domain-specific language models in agriculture. “We used Llama 2 in collaboration with Sarvam AI to train the Dhenu model.” 

He added that until that point, most efforts had focused on fine-tuning models for style, but he and his team chose to experiment with enriching the models by injecting more domain-specific knowledge.

Desai, who transitioned from academia to entrepreneurship, said that open source was the only viable route. He explained that as a bootstrap startup, they relied heavily on open-source tools like PyTorch and TensorFlow to get started.

Reflecting on his decision to move away from academia and focus on applied AI, Desai said, “If your work is actually not impactful or it’s not going to be used by folks, then it’s just a waste of time. There are thousands of papers getting published every year in every conference now, and most of them are not even getting read nowadays.” 

On-Device Models and Local Impact

Kolavi pointed out that open models are crucial for enterprise AI and personal privacy. “Most of the time, enterprises want to host models on their own infrastructure,” he said.

He said that with smaller models being released now—less than 2 billion parameters—users can run them locally with good inference speeds. “You can do document processing or chat across your laptop or phone.”

Desai agreed and took it further. He sees the benefit of open and small models for deployment even in regions with poor internet connectivity. He revealed that he ported the KissanAI assistant onto Android phones. 

The goal was to get the entire assistant working locally, without needing an internet connection. “This is useful where bandwidth is limited or expensive. Even embedded devices in the field can use local inference.”

Desai added that on-premise hosting, particularly for India’s digital public infrastructure, is critical. “If you’re building for smallholder farmers, you can fine-tune the model for that. If you’re building for B2B agri applications, you can tune it differently,” Desai explained. “You don’t need extension workers to physically visit villages. You can have small devices with the same knowledge always available.”

Kolavi addressed the concern of bias. “Every model reflects the data it’s trained on. Western data is overrepresented. But if you have data from your own context, like farming in India, you can fine-tune the model to reduce bias,” he said.

Desai explained the importance of local context with an example, noting that a question about corn should refer to Indian corn rather than its US counterpart. He pointed out that terms like ‘whitefly’ and ‘kernel’ vary across regions and a general-purpose model may not grasp these distinctions.
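
One common way to add that local context, sketched below under the assumption of the Hugging Face transformers and peft libraries, is to attach a LoRA adapter to a small open model and fine-tune it on domain-specific data. The base model ID and LoRA settings are illustrative choices, not KissanAI’s pipeline.

```python
# A minimal sketch of preparing a small open model for domain fine-tuning
# with a LoRA adapter, so only a small fraction of weights are trained.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-3.2-1B"        # placeholder small open-weights checkpoint
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()        # confirms only adapter weights will train
# The adapted model can then be trained on region-specific examples
# (e.g. "whitefly" in the Indian cotton context) with a standard training loop.
```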

Where Open Source Goes Next

Both Kolavi and Desai believe open source is here to stay, but not without challenges.

Talking about Chinese models and benchmarks, Kolavi said, “Most benchmarks are taken with a grain of salt because models are trained on them directly. Leaderboards on static datasets don’t scale anymore. Human evaluations like LMSYS’s Arena are better.”

Desai expressed concern over AI models being released under restrictive licences, such as Creative Commons BY-NC, which limit commercial use. “I really do not like to work with those models,” he said, adding that if open weights are being released, they should come with permissive licences that allow repackaging and building startups on top of them.

For Desai, the goal of releasing open weight models should be to “foster an ecosystem of a new startup”.

“We’re working with IndiaAI, Agri Stack, and DPI, helping with open source contributions. We’re now working with Fortune 500 companies across India and the US. And we’re still bootstrapped,” he concluded.

As AI expands into domains like agriculture, the power of open source, from community-driven innovation to local deployments, continues to shape both the market and the mission.

The post Indian AI Startups are Obsessed with Open-Source Small Models  appeared first on Analytics India Magazine.

]]>
India has 109 Agentic AI Startups Building in a Vacuum https://analyticsindiamag.com/ai-startups/india-has-109-agentic-ai-startups-building-in-a-vacuum/ Thu, 17 Jul 2025 10:33:35 +0000 https://analyticsindiamag.com/?p=10173550

While there is a need for coding copilots or workflow agents for autonomous QA testers in startups and enterprises, direct-to-consumer products face almost no demand at all.

The post India has 109 Agentic AI Startups Building in a Vacuum appeared first on Analytics India Magazine.

]]>

In under two years, more than 100 startups have emerged across the country with a singular focus: creating AI systems that not only understand prompts but also take autonomous actions. Yet in a country with over 750 million smartphone users, only a minuscule percentage use AI agents.

While there is demand in startups and enterprises for coding copilots, workflow agents and autonomous QA testers, direct-to-consumer products face almost no demand at all.

Despite this, India’s agentic AI landscape is growing fast. Startups claim there’s a consumer boom, but there’s no reliable data to prove it. Besides, retention and monetisation remain concerns in India. 

There are now 109 active agentic AI companies in India, according to data from Tracxn. These startups are working on tools that can not only generate text or images but also act on behalf of users, completing tasks, automating workflows, and mimicking decision-making. 

On paper, it sounds like the future. In practice, there’s one big missing piece: users. 

India’s Real AI Use Cases Are Still Enterprise

In recent months, companies like Krutrim, Fractal, Sarvam, Puch AI, and Gnani AI have started positioning themselves as pioneers of consumer-facing agentic AI. They’ve launched assistants, image generators, and voice bots aimed at India’s “mobile-first” population.

Krutrim, backed by Ola’s Bhavish Aggarwal, unveiled Kruti, a personal AI agent that can book cabs, order food, generate images, and conduct research. 

Fractal, traditionally an enterprise player, launched tools like Kalaido and Vaidya. Gnani entered the fray with Inya AI, which lets users create plug-and-play voice/chat agents.

Most agentic tools today are proof-of-concept apps masquerading as consumer products. There is little public data on active user numbers, retention, or monetisation. Nearly all platforms remain in beta, offered for free, or targeted at developers and enterprise teams rather than end consumers.

Even Bhashini, the government’s flagship voice translation tool, remains in beta with limited traction, underscoring how even well-funded public efforts have yet to achieve sustained consumer usage.

The consumer-agentic AI story in India remains aspirational, built more on pitch decks than on product-market fit. 

Contrary to the emerging B2C narrative, most agentic AI traction in India is still occurring within enterprises, albeit at a slower-than-expected pace. Companies like Meritto and RevRag are building agents for education and BFSI workflows, not for end-users. 

These agents handle lead qualification, sales automation and call centre support, tasks that seldom appear in consumer apps. Even as these companies talk about eventual B2C relevance, their paying users remain institutions, not individuals.

Even selling B2B comes with challenges. Ashutosh Singh, co-founder and CEO of RevRag, had earlier told AIM that in India, sales cycles are slow and decision-making is layered with bureaucracy. 

However, one of the biggest myths Singh wants to dispel is that Indian clients don’t pay. “It’s not about inferior tech or lack of money. It’s a game of volume and patience,” he said. “You invest first, like Zomato did, and then you start getting money once the volume kicks in.”

A great example of this is Sarvam. The company has developed the Samvaad platform to enable companies to create conversational voice agents in Indic languages for their platforms, which include WhatsApp and on-call features. 

There is demand among enterprises and small businesses, but Sarvam did not launch a consumer app, as scaling to a consumer-sized user base is often better left to the client companies themselves.

Agentic Means Scale

For agentic AI to succeed in India at scale, it requires two key components: infrastructure and interfaces. India lacks widely adopted platforms where agents can plug in. 

To be sure, even the world’s leading startups, such as OpenAI and Anthropic, have not yet successfully launched agents that can perform everyday tasks on a user’s behalf. For example, Perplexity has a shopping agent which can order things for users. However, arguably, it remains easier for people to head to Amazon and order items.

Similarly, the typical Indian consumer juggles a dozen apps, none of which are built to support AI-driven autonomy. Paytm recently announced that it is becoming an AI-first company, with a model that resembles a Superapp. However, even with the Perplexity integration, not much has been achieved in terms of agentic AI transformation.

Furthermore, consumer trust and understanding of autonomous systems remain low. While generative AI tools like ChatGPT and image generators continue to grow in demand, there’s little evidence of persistent usage for AI agents like Kruti, especially outside English-speaking urban clusters.

For example, AIM tested Krutrim’s Kruti app at launch, and while it looks promising, the issue remains that it is a separate app which, as of now, only works with Ola services such as food delivery and cab booking. 

For most users, switching apps to book the same cab makes no sense. And Kruti’s promise of autonomy feels like a detour, not a shortcut.

As AI enthusiasm surges globally, Indian startups are rushing to position themselves as leaders in the agentic wave. But without sustained local adoption, many risk becoming export-oriented tech demos, building for users halfway across the world, or worse, building for a market that doesn’t exist at all.

Until Indian consumers demonstrate a real need for autonomous agents and a willingness to pay, agentic AI in India may remain more fiction than function.

The post India has 109 Agentic AI Startups Building in a Vacuum appeared first on Analytics India Magazine.

Subtl.ai Collapse Exposes Cracks in India’s AI Scene https://analyticsindiamag.com/ai-startups/subtl-ai-collapse-exposes-cracks-in-indias-ai-scene/ Mon, 07 Jul 2025 13:30:37 +0000 https://analyticsindiamag.com/?p=10172993

“Some investors flirt A LOT with founders,” Vishnu Ramesh said. “But it doesn't mean s*** until they give you a term sheet.”

The post Subtl.ai Collapse Exposes Cracks in India’s AI Scene appeared first on Analytics India Magazine.


Building an AI startup in India isn’t exactly a walk in the park. Startup founders often feel frustrated due to a multitude of factors, such as limited funding, a lack of investor expertise, and the high demand for free proof of concepts (POCs). As it turns out, there are far deeper reasons for this.

Last week, Vishnu Ramesh, founder of Subtl.ai, posted a heartbreaking message on LinkedIn, signalling the end of the road for the company. “TL;DR: we have started shutting down Subtl.ai,” he wrote. 

That one-line update confirmed what many in India’s AI startup ecosystem are increasingly confronting—ambitious ideas hitting a wall faster than anyone expects.

Subtl.ai, the Hyderabad-based enterprise GenAI startup, had carved out a niche in retrieval-augmented generation (RAG). With clients like SBI, two airports and a few others, along with defence contracts, the company was trying to solve a hard problem—how to make enterprise data usable through natural language interfaces.

It had solid benchmarks as well. Ramesh said that its auto-tuning pipeline, Subtl V2, built by engineers and researchers at IIIT Hyderabad, outperformed RAG pipelines built on OpenAI and open-source embeddings by 15-20%.

In an interview with AIM earlier, Ramesh revealed that the startup’s ambition was to reduce the dependence on companies like OpenAI.

Furthermore, in a blog post last year, the startup revealed that SBI had successfully implemented Subtl.ai, demonstrating 92% accuracy in information retrieval and saving 56,570 minutes (equivalent to approximately Rs 5 lakh).
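Taken at face value, those numbers imply a rough rupee value for each minute the system saved. The quick back-of-the-envelope check below assumes the Rs 5 lakh figure is meant to price the 56,570 minutes directly, which the blog post does not spell out:

# Back-of-the-envelope check on the reported SBI savings
# (assumes the Rs 5 lakh figure values the recovered minutes directly).
minutes_saved = 56_570
value_inr = 500_000  # Rs 5 lakh

hours_saved = minutes_saved / 60
print(f"Hours saved: {hours_saved:,.0f}")                                 # ~943 hours
print(f"Implied value per hour: Rs {value_inr / hours_saved:,.0f}")       # ~Rs 530
print(f"Implied value per minute: Rs {value_inr / minutes_saved:.2f}")    # ~Rs 8.84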

The Promise was Real. The Traction, Limited

Ramesh has already moved on to building another AI startup. “Going vertical AI this time,” he declared in his LinkedIn bio. Instead of blaming the market, customers or investors for the shutdown, he places the failure squarely on his own decisions, especially a lack of market focus.

Subtl chased use cases across vastly different industries, from banking to insurance to defence. Nothing was repeatable. “I got stuck handling customers from wildly different domains with wildly different use cases… customers gave no s*** about our other portfolio of work we had done,” he explained.

Despite early wins and a product reportedly called ‘Private Perplexity for Enterprise’, the startup was operating on thin fuel. It had raised around ₹1 crore in angel funding—barely enough to build and maintain a product in an increasingly competitive GenAI market. 

There were no follow-up rounds, and no external signals of new revenue deals. Subtl built APIs that could have powered AI agents, citations, and document retrieval, but never invested in making those APIs developer-friendly. 

“All we did was put a message on our website saying ‘yo reach out if you wanna use our APIs’,” he admitted. There were no open-source SDKs, no integrations with tools like LlamaIndex or Portkey, no real documentation, and no developer community.
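For context, “developer-friendly” here would have meant something as simple as a pip-installable client wrapping the raw HTTP endpoints. The sketch below is hypothetical—the base URL, endpoint path, and response fields are illustrative, not Subtl’s actual API—but it shows the minimum developers usually expect before they will build on a retrieval service:

# Hypothetical sketch of a thin Python SDK over a retrieval API.
# The base URL, endpoint and response fields are illustrative, not Subtl's real API.
import requests

class RetrievalClient:
    def __init__(self, api_key: str, base_url: str = "https://api.example.com/v1"):
        self.base_url = base_url
        self.session = requests.Session()
        self.session.headers.update({"Authorization": f"Bearer {api_key}"})

    def query(self, question: str, top_k: int = 5) -> list[dict]:
        """Return the top_k passages, each with text and a source citation."""
        resp = self.session.post(
            f"{self.base_url}/retrieve",
            json={"query": question, "top_k": top_k},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["passages"]

# Usage (hypothetical): RetrievalClient(api_key="...").query("What is the claim settlement process?")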

According to Tracxn data shared with AIM earlier, 706 AI startups have failed over the last five years, 54 of them in India. Several of these were building for Indic use cases, where the audience is simply not large enough to properly test those use cases.

Read: Indic AI is Not Inspiring Enough for Indian Developers

“I’m a better CTO than a CEO,” Ramesh wrote. Lacking domain depth made enterprise sales difficult, and bouncing between industries didn’t help. One of the more painful parts of his reflection was about fundraising. He described how multiple investors had long conversations and seemed interested, but never followed through. 

“Some investors flirt A LOT with founders,” he said. “But it doesn’t mean s*** until they give you a term sheet.” He made hiring and scaling decisions assuming money would come in, but it never did.

Yet, he didn’t paint himself as a victim. “It’s completely on me, I failed my team and investors more than they failed me,” he wrote. The domain is still up, the LinkedIn team profiles still say “Subtl.ai,” but the quiet exit has begun. 

The Sad State of Affairs

Indian AI startups are walking on thin ice almost all the time. Even though demand might appear to be growing in the country, that is not exactly the case. 

Similar to Ramesh, Vaibhav Domkundwar, CEO of Better Capital, earlier highlighted that a wave of frustration was sweeping through India’s AI and SaaS startup ecosystem, sparking what founders call the ‘Skip India Movement’. 

There is a growing sentiment that Indian enterprises are not worth the time, effort, or resources required to sell to them.

Read: Free PoCs are Killing Indian AI Startups

Then there is the talent problem. “India seriously has a big f***ing talent problem,” said Umesh Kumar, co-founder of Runable, an Indian AI startup building a platform where anyone can build AI agents. “We got around 1,000 applications for a backend engineering role in just the last two to three days, and guess how many were actually decent?” Less than five.

Kumar’s startup was hiring backend developers with a no-nonsense offer: ₹50 lakh base pay, relocation, food, and a shot at working with top-tier talent. The hiring process involved a simple coding task, two calls and one paid trial. 

And yet, his hunt continues.

Just a few months before Subtl.ai’s wind-down, Unikon.ai had a more chaotic departure. On March 1, 2025, multiple developers reported being asked to pack up and leave without a warning. Devices were returned, and offices were cleared. But the startup didn’t die.

It had raised $2 million from prominent Indian angels just nine months earlier and was initially pitched as a GenAI-enabled networking platform. But it soon pivoted into building a D2C skincare brand—a sharp detour from its original AI pitch. The result was a ₹2 crore per month burn rate and no follow-on funding. 

The founder, Aakash Anand, announced the shutdown at a town hall meeting.

This is similar to what happened with InsurStaq.ai. In September 2024, the company started shutting down after one year of operations, and is now completely inoperative.

Some startups go to the US for quick monetisation and come back to solve the country’s problems, but AI, arguably, is too early for that. RevRag, a B2B agentic AI startup, decided to undertake this herculean task long ago and is now back in India selling AI to enterprises, even if the payoff takes a bit longer.

“We are not quitting India because we think India is a large market overall, even if it takes time. And we will be at it,” Ashutosh Singh, co-founder and CEO of RevRag, told AIM.

He acknowledged the challenges of selling in India. The sales cycle is slow, decision-making is layered with bureaucracy, and customers can go cold without a warning. “You might get ghosted, and you won’t even know. You’ll keep following up, but the deal might just disappear,” he explained.

The shutdown of Subtl.ai isn’t just a story of one startup’s stumble—it’s a mirror to the broader Indian AI ecosystem. A mix of premature scaling, fractured focus, lack of developer ecosystem thinking, and disillusioned investors is quietly draining momentum from startups that should be thriving. 

While global AI companies ride waves of hype and capital, Indian founders are often left grappling with free POCs, flaky investors, and talent that can’t meet the bar. Some will regroup, like Ramesh, and build again—hopefully with sharper lessons. But for many others, the silence will be the final word.

The post Subtl.ai Collapse Exposes Cracks in India’s AI Scene appeared first on Analytics India Magazine.

Anybody Can Vibe Code a Startup Now https://analyticsindiamag.com/ai-startups/anybody-can-vibe-code-a-startup-now/ Sun, 06 Jul 2025 04:42:33 +0000 https://analyticsindiamag.com/?p=10172916

YC CEO Garry Tan noted that some founders are so new to programming that they’ve never known a world without tools like Cursor.

The post Anybody Can Vibe Code a Startup Now appeared first on Analytics India Magazine.


As vibe coding continues to gain traction, this year has seen a surge in startups built purely on instinct and improvisation. 

In a recent YC podcast, global partner Jared Friedman shared that nearly 25% of founders in the latest batch have said that AI writes more than 95% of their codebase.

Adding to this perspective, CEO Garry Tan noted that some founders are so new to programming that they’ve never known a world without tools like Cursor, which allows them to vibe code using AI.

Citing an example from the current batch, Friedman said the founders have highly technical minds but aren’t classically trained in computer science or programming. Yet, they’re incredibly productive and capable of building impressive products.

“AI is writing almost the entire thing. It reminds me of the discourse around the first digital natives who grew up with the internet. This generation grew up with native AI coding tools, skipped classical software engineering training, and just does it with the vibes.”

These aren’t just one-off stories. YC believes in them. Moreover, others are starting to follow.

Solopreneurs Ship Faster Than Ever

Billy Howell, a self-taught solopreneur, is a poster child for this shift. In a viral LinkedIn post, he shared how AI agents, especially Replit’s, let solo creators ship products at a fraction of traditional costs.

“AI agents (Replit’s) empower solopreneurs and small teams to deliver code at a fraction of the cost it used to. This lowers the barrier to entry for millions of business owners who previously would have to pay an arm and a leg for custom software,” Howell said in a LinkedIn post. 

He even sold an app earlier this year, built on Replit, for $750. It was a KPI tracker for a car technician coaching business. In a YouTube video, he also shared a tutorial on how to build and sell AI apps. 

Howell advised creators not to start with grand startup visions. Instead, he focuses on small, specific pain points that businesses are actually willing to pay to solve. “The easiest thing you can do to set yourself up to develop and sell an app is to find a one-feature problem…It’s so simple,” he said. Whether it’s uploading documents, automating reports, or data entry, these small jobs are easy to scope and ship quickly.

When Howell encountered tasks outside his expertise, he leaned on ChatGPT to get the job done.

“I had clients that needed stuff that I didn’t know how to do, but I said, ‘Sure, I can do that.’ And…just pasting entire documents into ChatGPT, saying ‘Fix this.’”

He started with tools like Airtable and Softr, then moved to coding with AI assistance as his confidence grew. Replit became his go-to platform for fast app development, and tools like V0 helped him generate user interfaces with minimal effort.

Once the prototype worked, Howell sold MVPs for flat fees, often in the $500–750 range, and offered ongoing support and hosting for $100–300 per month.

This is the new pace of software development. What once took dozens of engineers and VC funding now takes one person with a clear idea and some tokens.

The trend is so serious that DocuSign has sent a legal notice to Michael Luo, a developer who built a free alternative to its platform. His version offered a similar set of features and was built using ChatGPT, Cursor, and Lovable.

From Side Projects to Fundable Startups

Brad Lindenberg, co-founder of Quadpay and currently leading the United States operations at an investment firm, recently developed a project called Biography Studio AI, which can create full-length biographies of someone using only voice prompts. 

Lindenberg added that he had never coded before and built the project solo using Replit. “It’s a passion project that took eight weeks to build and $1,500 of tokens, which Replit Agent estimated would have cost $1.7 million and a team of eight to build over eight to 12 months without AI.” 
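Taking the quoted figures at face value (they are Replit Agent’s own estimate, not an audited number), the implied cost gap is roughly three orders of magnitude:

# Quick comparison of the quoted build costs (estimates from the article, not audited figures).
ai_cost = 1_500               # USD spent on tokens
traditional_cost = 1_700_000  # USD, Replit Agent's estimate for a team of eight

print(f"Cost ratio: ~{traditional_cost / ai_cost:,.0f}x cheaper")               # ~1,133x
print(f"Token spend as share of the estimate: {ai_cost / traditional_cost:.2%}") # ~0.09%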

In another instance, Brazilian edtech player Qconcursos generated $3 million in just 48 hours after building their new platform on Lovable, a vibe coding tool. They used just two developers for a task that would have otherwise taken a team of 30.

Lovable co-founder Anton Osika said that he will invest in startups built on Lovable. Within a very short time, he received hundreds of names. Projects ranged from personal tools to full-fledged marketplaces. 

He shared that 2.5M sites were built with Lovable in June. “That’s over 10% of all new sites on the internet that month.”

One of them, Everydesk, is a fully vibe-coded marketplace for underutilised office space, currently onboarding clients in the Netherlands. There’s also MakerThrive, a community for vibe coders itself, boasting over 1,500 members and 400 shipped products, all built with over 2,000 commits on Lovable. 

MyEcho lets businesses collect video testimonials from customers, while Components by Damien offers a futuristic UI library featuring glassmorphism and prismatic lighting for modern apps. Resume Optimizer helps users optimise their resumes based on job descriptions. Another notable example is Wilbe, a platform supporting scientist-led ventures, with a 1,300-member community and a combined portfolio valuation of $680 million. 

These are just a few examples of what’s being built on Lovable. At the same time, many are turning to Replit to build software from scratch. 

Replit is helping Zillow, a major online real estate marketplace, involve non-engineers in the development process. Employees who previously couldn’t code are now contributing to the company’s routing system, which connects thousands of home buyers with property agents.

Jesse, who works in product strategy at Jaguars Football, shared in a post on X that he used Replit to build a workflow and scheduling platform for event-based temp staffing. The tool connects event organisers and staffing agencies, allowing teams to request and assign roles for upcoming events. He was initially quoted $68,000 per month for a contract by another company, but ended up building it himself.

“I’m a PM with no coding background, but with Replit, I was able to build it myself for $226 total. In two weeks, the app went from idea to beta testing. Insane where tech is these days.”

Another Reddit user shared how they used Replit’s vibe coding to build a wedding marketplace for Indian weddings quickly. “I fed that prompt into Replit, and within minutes, I had a mockup that was 70–80% of what I envisioned,” they wrote. The app included vendor listings, a booking dashboard, and even a pastel wedding aesthetic, “almost ready to share with potential collaborators across the internet”.

In another instance, Digvijay Dey, a product manager at Vymo, recently experimented with vibe coding using Replit and was stunned by the outcome.

“I’m still wrapping my head around what just happened,” he said, after watching a simple idea, connecting professionals with interview seekers for mock interviews, turn into a full-blown product.

Replit auto-generated a complete file structure, built out a multi-screen UI from signup to user profiles, integrated his Google Cloud account, and even connected Calendar APIs after prompting him for the necessary keys.

“It just kept asking the right questions, and I kept feeding it what it needed,” he said.
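To give a sense of what “connected Calendar APIs” involves under the hood, here is a minimal sketch using Google’s standard google-api-python-client. The event details, file names and scheduling logic are illustrative assumptions, not the code Replit actually generated for Dey’s project:

# Minimal sketch of booking a slot via the Google Calendar API
# (illustrative only; not the code Replit generated for this project).
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes an OAuth token has already been saved to token.json.
creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/calendar"]
)
service = build("calendar", "v3", credentials=creds)

event = {
    "summary": "Mock interview session",
    "start": {"dateTime": "2025-07-10T10:00:00+05:30"},
    "end": {"dateTime": "2025-07-10T11:00:00+05:30"},
    "attendees": [{"email": "candidate@example.com"}],  # hypothetical attendee
}
created = service.events().insert(calendarId="primary", body=event).execute()
print("Booked:", created.get("htmlLink"))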

Vibe coding is the startup shortcut no one saw coming. Builders are moving from idea to product in days: no engineers, no investors and no gatekeepers. With the right instinct and the right tools, they’re shipping faster, cheaper, and with more creative freedom than ever before.

The vibe isn’t just real. It’s revenue-generating.

The post Anybody Can Vibe Code a Startup Now appeared first on Analytics India Magazine.

Why Cluely Thinks ‘Cheating’ Is the Future of Work https://analyticsindiamag.com/ai-startups/why-cluely-thinks-cheating-is-the-future-of-work/ Sat, 05 Jul 2025 13:31:10 +0000 https://analyticsindiamag.com/?p=10172913

The founder told AIM that Cluely is often branded as a ‘cheating app,’ but the company is choosing to wear the title like a badge.

The post Why Cluely Thinks ‘Cheating’ Is the Future of Work appeared first on Analytics India Magazine.


Some startups build products. Cluely, the AI cheating assistant platform, builds narratives. When the entire ‘Soham Saga’ over moonlighting was unfolding, the startup seized the opportunity, releasing a video claiming Soham Parekh had used its tool to crack multiple interviews, even as Parekh maintained he had not. 

Notably, Cluely is known for building products that help users cheat during high-stakes situations such as job interviews, exams, sales calls, and meetings.

In an exclusive interview with AIM Media House on Monday, Cluely founder Chungin ‘Roy’ Lee revealed that the company’s marketing and distribution strategy is different from that of its counterparts.

The company hires influencers because Lee believes the nature of attention has fundamentally changed. “Marketing and growth and distribution look very different today than they looked 10 years ago,” he said, adding that the way to attract a million viewers these days isn’t by buying a Super Bowl ad, but instead, by hiring an influencer who is currently trending on the algorithm.

For Lee, traditional channels like television, podcasts, and even YouTube are no longer effective. “People don’t watch TV. People don’t watch YouTube. People don’t listen to podcasts. What they do is scroll on Instagram, TikTok, and even Twitter,” he said. “You want influencers who are tapped into these algorithms because this is where all of people’s attention is going.”

In a recent podcast, Lee described his deep understanding of virality and content distribution across TikTok, Instagram, and X (formerly Twitter) as his superpower. He believes that most tech people underestimate how algorithmic virality works outside of LinkedIn and X. The term he uses for this kind of marketing is ‘rizz marketing’. 

What is Cluely?

While at Columbia University, Lee and Neel Shanmugam built Interview Coder, a tool that helped engineers cheat in job interviews by giving AI-powered answers in real-time. Lee was suspended after a post about it went viral on X. That controversy ultimately led to their side project becoming a full-time startup.

The company has raised substantial funding: $5.3 million in seed funding from Abstract Ventures and Susa Ventures, followed by a $15 million round led by Andreessen Horowitz (a16z).

Lee told AIM that Cluely is often branded as a ‘cheating app,’ but the company is choosing to wear the title like a badge.  He explained that the perception is part of the plan.

“It is inevitable that when you see someone use Cluely in an interview, a sales call, a meeting, anything, and they use AI in a way that nobody else knows they’re using AI, someone is going to think that it’s cheating,” Lee said. 

He acknowledged that some people might find it strange to advertise a product as a cheating tool in the coming months. However, he believes that people will eventually recognise that the “cheating tool” label is simply a marketing stunt.

“People who redefine the word cheating. This is a historic change in humanity. And it becomes a lot cooler what we do,” Lee said.

He explained that when they avoid using the word cheating and instead describe Cluely as offering “invisible AI assistance during meetings,” people tend to assume the company is dishonest. “Then people think this company is slimy, they lie, and they’re not honest with the users, which is, like, bad,” he said. 

According to him, what sets Cluely apart is its extreme transparency in the public eye. “We are extremely honest about everything,” he added.

How Cluely Works

 Lee didn’t reveal every technical detail, but he shared the key challenge Cluely solves. “The main work is in context stitching, pulling information from the screen, audio, and prompts and putting them together for the model to understand.”

He revealed that the company uses the ChatGPT model under the hood, but said its real moat lies in how it feeds the model the right context.
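Based on that description alone, “context stitching” can be pictured as assembling on-screen text, a live audio transcript, and the user’s prompt into a single model call. The function below is a loose sketch of that idea using the official OpenAI Python SDK; the prompt structure and model choice are assumptions, not Cluely’s actual pipeline.

# Loose sketch of "context stitching": combine screen text, audio transcript
# and the user's prompt into one request. Not Cluely's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_context(screen_text: str, transcript: str, user_prompt: str) -> str:
    stitched = (
        "You are an invisible meeting assistant.\n\n"
        f"## What is on the user's screen\n{screen_text}\n\n"
        f"## Last 60 seconds of audio transcript\n{transcript}\n\n"
        f"## The user's question\n{user_prompt}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": stitched}],
    )
    return response.choices[0].message.content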

Speaking about the company culture, Lee shared that his employees have more fun than those at any other company in the world. 

Lee believes that most roles in a startup are unnecessary. In his view, all a company really needs are engineers and influencers. Everything else, he said, is either bloat or easily outsourced.

“I don’t think it’s actually real or necessary,” Lee said, referring to the role of product managers. Cluely keeps its team lean and focused, with every employee having a critical function. “The people that don’t have critical functions, all of that gets outsourced,” he added. 

Business Model and Customers

Cluely charges consumers $20 per month or $100 per year. While it has prosumer users, often leveraging the tool in meetings and sales calls, the majority of its revenue comes from large enterprise contracts.

“Most of our revenue comes from enterprise directly, rather than consumers,” Lee said, noting that enterprise deals are typically priced per seat per month and billed annually.

Lee sees no limit to where Cluely can be applied next. The startup is already experimenting with various verticals where real-time AI support can make a difference. “The idea space of real-time AI augmenting humans is literally infinite,” he said, pointing to education, healthcare, customer service, and creative industries as examples. 

“Every single industry in the world where you need information is an industry where AI can help and where specifically Cluely can help.”

The Five-Year Vision

In Lee’s vision, Cluely will soon be the default interface for interacting with AI, one that overtakes conventional tools like ChatGPT.

“Nobody’s going to use ChatGPT.com in five years,” he said. “Everyone is going to be on Cluely. And this is the way people will digest information from AI. And if it’s not directly Cluely, then it will be some other interface that looks like Cluely.”

The company expects to hit $100 million in annual recurring revenue within its first year. Lee believes that consumer adoption will reinforce enterprise usage and vice versa, creating a self-reinforcing flywheel. “We will be the dominant AI provider for both consumer and enterprise,” he said. “And this will all happen in five years.”

Global Expansion and a Bet on Risk

Cluely is already in conversations to expand to the Middle East and plans to scale globally. “We aim to be global. And the sooner we can be global, the better,” Lee said, hinting that an India office could be on the horizon within a year.

Reflecting on his own journey, dropping out of Columbia University and starting Cluely at 21, Lee offered advice to young founders. “You should take more risk. The upside of risk is much, much higher than you think. And the potential downside of risk is way, way smaller than you think.”

For those just starting out or looking to learn AI, his advice is straightforward: build something. “The best way to learn is by building. The best way to improve mentally is by acting physically,” he said.

With inputs from Anshika Mathews

The post Why Cluely Thinks ‘Cheating’ Is the Future of Work appeared first on Analytics India Magazine.

Accel’s Case for the Application Layer in India https://analyticsindiamag.com/ai-startups/accels-case-for-the-application-layer-in-india/ Wed, 02 Jul 2025 13:00:00 +0000 https://analyticsindiamag.com/?p=10172759

“You don’t have to be OpenAI to build a transformative AI product.”

The post Accel’s Case for the Application Layer in India appeared first on Analytics India Magazine.


With the IndiaAI mission gaining traction, investing in AI across different layers, particularly the foundational layer, has come under a tighter spotlight. As the debate around foundational models becoming commoditised intensifies, many contend that the edge shifts to those who can execute quickly, integrate deeply, and build for real-world use cases. 

In a recent conversation with AIM, Prayank Swaroop, a partner at Accel, spoke about the VC fund’s investment thesis in India. He believes that India’s most promising opportunities in AI lie elsewhere. 

“While we’re closely monitoring developments in foundational models, the highest upside right now is clearly in applied AI,” he said.

Recently, IndiaAI Mission announced the selection of three more startups—Soket AI, Gnani.ai, and Gan.AI—to develop indigenous foundation models. This brings the number of startups under the foundation model development initiative to four, including the previously announced Sarvam AI. 

Sarvam’s funding comes from investors like Lightspeed India Partners, Peak XV Partners, Lightspeed Venture Partners, and Khosla Ventures, among others.

The announcement drew a slew of criticism on social media.

Swaroop asserted that one doesn’t need to be OpenAI to create a transformative AI product, noting that many of the most innovative developments are coming from companies that build on existing models like GPT and Claude.

Inside Accel’s Investment Thesis 

According to Swaroop, Accel is increasingly concentrating on AI across three main areas: agentic enterprise platforms, vertical AI products and AI-enabled services. 

“This surge [of AI adoption] is driven by India’s unmatched engineering talent pool, cost efficiency, and access to domain-specific datasets,” Swaroop said.

The firm’s latest $650 million fund is designed to support startups that bring clear use cases. A key part of its strategy is product-led growth, backing companies that can scale through user demand rather than relying heavily on sales teams, which they believe gives these startups a stronger foundation for long-term success.

Why the Application Layer?

However, this raises the question: why is the application layer considered India’s stronghold in AI? Swaroop makes the case for an AI surge in India driven by its engineering talent pool, cost efficiency, and access to domain-specific datasets. 

Swaroop believes that the bottleneck has shifted from engineering resources to product thinking and strategic depth. “AI is making software development easier, but the winners will be those who build durable products that solve high-value pain points,” he said. 

According to him, founders who adopt a “build for India, scale for the world” mindset, stay user-obsessed, and execute with speed are best positioned to create globally relevant solutions.

He argued that the “services as software” model, where India’s traditional BPO strength is transformed through AI automation, is gaining significant traction. Furthermore, India’s unique advantage lies in the application layer as startups are leveraging open-source and accessible foundational models to build verticalised solutions in healthcare, legal operations, and financial services. 

What Everyone’s Missing About AI

“If anything, AI remains underhyped in terms of its true potential,” he said, talking about how the breadth and depth of AI’s impact across sectors are only beginning to be realised.

Swaroop explained that, contrary to common belief, AI’s potential remains underhyped in several key sectors. Agentic AI is just beginning to demonstrate its power with examples like Genspark (coding), Manus (enterprise workflows), and August.ai (preventive healthcare), showing early signs of a broader shift in productivity tools. 

Another underrated opportunity, he believes, is India-first consumer AI: products built for local languages, cultural contexts, and price sensitivity. AI-powered regional entertainment and Bollywood content generation are in their early days but already gaining traction.

Beyond Foundational Models

The recent acquisition of Windsurf by OpenAI and the appointment of a CEO of applications signal a clear convergence between foundational model development and AI applications. 

“You don’t have to be OpenAI to build a transformative AI product, as many successful startups, such as Cursor, RapidClaims, Chronicle and Rocket.new from our portfolio, are building on top of existing models like GPT and Claude,” Swaroop said. 

However, this doesn’t close the door for startups, he argues. The above-mentioned startups are creating meaningful solutions by leveraging existing models, such as GPT or Claude. The competitive edge lies in addressing deep user pain points, maintaining control over the product stack, and optimising for performance and cost.

Swaroop argued that localisation and sensitivity to price and context are essential for capturing the Indian market.

He noted that, for most Indian founders, building on top of open-source or accessible LLMs offers better returns. However, there is growing interest in building India-specific models that account for local regulation, language diversity, and cost constraints. 

The post Accel’s Case for the Application Layer in India appeared first on Analytics India Magazine.

Indian Startup Founder Reviews 1,000 Engineer CVs, Finds Less than 5 Worth Hiring https://analyticsindiamag.com/ai-startups/indian-startup-founder-reviews-1000-engineer-cvs-finds-less-than-5-worth-hiring/ Tue, 01 Jul 2025 12:30:00 +0000 https://analyticsindiamag.com/?p=10172673

The startup was hiring backend developers with ₹50 lakh base pay, relocation, food, and a shot at working with top-tier talent. Yet, the hunt continues.

The post Indian Startup Founder Reviews 1,000 Engineer CVs, Finds Less than 5 Worth Hiring appeared first on Analytics India Magazine.


While Meta, OpenAI, and Google compete fiercely, paying millions and billions of dollars to attract top talent, Indian companies continue to struggle to hire even a single qualified individual. While this has been the case for the longest time, the situation is only worsening with the rise of AI. 

“India seriously has a big f***ing talent problem,” said Umesh Kumar, co-founder of Runable, a platform where anyone can build AI agents. “We got around 1,000 applications for a backend engineering role in just the last two to three days, and guess how many were actually decent? [Less than five].”

Kumar’s startup was hiring backend developers with a no-nonsense offer: ₹50 lakh base pay, relocation, food, and a shot at working with top-tier talent. The hiring process involved a simple coding task, two calls and one paid trial. 

And yet, his hunt continues.

Kumar’s frustration isn’t unique. It’s becoming the new normal. He’s not alone in swimming through a flood of AI-generated junk code and resume lies. The skyrocketing salaries at Silicon Valley startups such as OpenAI and Anthropic, coupled with the so-called GenAI “upskilling”, are pushing Indian software engineers to expect exorbitant salaries. 

However, the truth is that many of them don’t have the technical skills to justify those high salaries. 

In a post last year, author and IT professional Ratnakar Sadasyula narrated the story of candidates demanding extremely high salaries. “Now, that would not be an issue if these people were extraordinarily brilliant, or IIT-NIT passouts.” 

“Most of them are from ordinary engineering colleges. Forget about being extraordinary, they are not even of decent ability (sic),” he said, adding that most did not even possess proper communication skills.

Much of this demand for higher salaries among the new generation stems from the startup boom in 2020, when people were recruited for lofty salaries, at times even without the required skill set or ability. “And now we have an entire generation that acts so entitled, demanding high pay for just about decent skills,” Sadasyula said.

Read: Indian Techies Dream of Big Paychecks, But Face Reality Checks

AI-Washed Resumes, Broken Code

“Code that doesn’t even run,” Kumar said, further pointing out that many engineers can’t even add the libraries needed for the code to work. This sentiment was widely echoed by the community.

Another user shared their ordeal, revealing that they manually reviewed 300 resumes, and out of that large pool, only 15 were remotely decent. They eventually ended up hiring just two of them.

Kumar agrees. His team uses ChatGPT, Claude, and Cursor on a daily basis, but the engineers they hire know where AI ends and logic begins. Others added that the situation is likely to worsen with the use of AI in colleges.

India produces approximately 1.5 million engineers every year. But ask any founder, CTO, or engineering head—and they’ll tell you: hiring good engineers is a painfully complex process. “Been happening since 2002. I remember interviewing dudes who’d remembered every axiom and design pattern off by heart but couldn’t actually code anything,” Mark A, a seasoned entrepreneur said on X.

According to TeamLease, only 5.5% of Indian engineers are employable with basic programming skills. Yet, thousands of job seekers flood listings, fuelled by LinkedIn optimism and ChatGPT hallucinations.

The situation is no better for product-focused companies. Gautam Goenka, VP of engineering at UiPath, earlier told AIM that hiring is a challenge for them. “You don’t get that talent easily.” While entry-level recruitment poses fewer challenges, the difficulty rises significantly at senior levels.

Read: Why India is Running Out of Skilled Engineers

This is echoed across the board, as most of the universities do not teach their students how to code. “I have a cousin who’s in his third year of CS with no internship and always complains at family gatherings about how bad the market is. I then asked him to show me his resume…Let’s just say bro has a calculator as his project,” a Redditor stated. They explained that it is indeed true that a lot of graduates are just not competent enough in coding to secure CS jobs.

Moreover, startups can’t compete with the compensation, brand pull, or comfort of global tech giants. As Sarbojit Mallick, co-founder of Instahyre, put it, “In today’s Indian job market, we’ve observed that many core engineers aspire to move into managerial roles rather quickly. They often see managerial positions within the broader IT sector as more appealing and potentially offering better compensation.”

This isn’t a new issue. In 2017, it was reported that 95% of Indian engineers were unable to code. It’s now 2025, and not much has changed. Only now, the broken C code has been replaced by AI-injected Python wrappers.

India’s AI Pipe Dream, Choked by a Talent Gap

According to Quess, India has 4.16 lakh AI professionals. However, there is an estimated 51% demand-supply gap. Despite producing five to 10 million STEM grads annually, India is running out of skilled engineers. India has fewer than 2,000 senior AI engineers capable of building foundational AI products.

According to staffing solution provider Xpheno, that’s less than 1% of the engineering talent. This is because of brain drain, low domestic pay (₹9–21 lakh for senior AI engineers), and a global tech market that hires the best Indians, only to ship them out.

Most Indian CS graduates don’t learn to code until after graduation—if at all. Instead, they do research for irrelevant papers or build static websites labelled as “portals”.

Read: Most Indian CS Graduates Can’t Code

Professors, many of whom have never worked in production environments, are ill-equipped to teach real-world skills. According to Reddit threads and developer testimonials, AI tools are now used to replace learning altogether.

Startups like Kumar’s encourage the use of tools like Copilot and Claude. But while their engineers know when to put it away and think, most candidates don’t. Kumar will continue sifting through 995 trash applications to find five decent ones.

The post Indian Startup Founder Reviews 1,000 Engineer CVs, Finds Less than 5 Worth Hiring appeared first on Analytics India Magazine.

‘We are Not Quitting India,’ RevRag’s $10 Million Agentic AI Roadmap for Two Years https://analyticsindiamag.com/ai-startups/we-are-not-quitting-india-revrags-10-million-agentic-ai-roadmap-for-two-years/ Mon, 23 Jun 2025 10:23:16 +0000 https://analyticsindiamag.com/?p=10172193

The company is currently working towards its first $1 million in Indian revenue, with plans to scale up to $5 million and $10 million by combining India and US operations.

The post ‘We are Not Quitting India,’ RevRag’s $10 Million Agentic AI Roadmap for Two Years appeared first on Analytics India Magazine.


It is not easy to run an AI startup in India. Companies and enterprises are reluctant to pay for AI services and products without undergoing numerous trials, checks, and proof of concepts (POCs). This has resulted in several startups starting the ‘Skip India Movement,’ in which sales are not dependent on the Indian market.

However, some startups go to the US for quick monetisation and come back to solve the country’s problems, but AI, arguably, is too early for that. RevRag, a B2B agentic AI startup, decided to undertake this herculean task long ago and is now back in India selling AI to enterprises, even if the payoff takes a bit longer.

“We are not quitting India because we think India is a large market overall, even if it takes time. And we will be at it,” Ashutosh Singh, co-founder and CEO of RevRag, told AIM.

RevRag is working with the country’s top lenders, insurance companies, and fintech players. With AI-enabled multi-channel orchestration, the startup aims to bridge the gap between enterprises and their customers, ensuring no high-intent prospect is lost due to inefficient follow-ups.

In August 2024, the company raised $600K in its pre-seed funding round, led by Powerhouse Ventures, Kunal Shah (founder of CRED), Viral Bajaria and Premal Shah (co-founders of 6sense), Deepak Anchala (founder and CEO of a stealth AI startup), Vetri Vellore (founder of Rhythms), and over 15 other investors.

Revenue in India Comes in Waves—Not Sprints

Singh acknowledges the challenges of selling in India. The sales cycle is slow, decision-making is layered with bureaucracy, and customers can go cold without warning. “You might get ghosted, and you won’t even know. You’ll keep following up, but the deal might just disappear,” he explained.

Singh estimates that, while a comparable startup in the US might already be at an annual recurring revenue (ARR) of $500,000 to $1 million, businesses in India tend to take more time to finalise deals. However, they eventually grow significantly because of high sales volumes. “In India, one must approach work with patience and composure. Revenue grows gradually,” he noted.

The company is currently working towards its first $1 million in Indian revenue, with plans to scale up to $5 million and $10 million by combining India and US operations. “My immediate target is a million dollars from India. After that, the $5–10 million revenue will be a mix of India and the US,” Singh said, estimating a two-year timeline to reach those numbers.

RevRag isn’t putting all its eggs in one basket. It is actively expanding into the US, with POCs already underway with major banks. “We’ll start our US GTM in Q4,” Singh said. “But there will be separate GTM teams for India and the US.”

Still, Singh is clear about his priorities. “Right now, India is our primary market. We flipped back from being a US holding company to focusing on India. We believe that the next 1–2 years are critical as enterprises here mature in their AI adoption.”

One of the biggest myths that Singh wants to bust is that Indian clients don’t pay. “It’s not about inferior tech or lack of money. It’s a game of volume and patience,” he said. “You invest first, like Zomato did, and then you start getting money once the volume kicks in.”

Pricing also emerges as a significant barrier. Even if a POC wins on tech merit, converting it into revenue hinges on aligning price points with business priorities. “The problem is cracking the right pricing at the right time. And you have to solve it with patience,” Singh added.

Despite budget pressures, Singh is clear about one thing: they won’t compromise on tech. “Indians need everything—good cost and good tech. And in that, you have to make it here,” he said. “We will not tweak anything just to cut costs. The product should be good for Indians.”

Despite winning multiple POCs on the strength of its technology, RevRag has occasionally fallen short of its quarterly revenue targets, which Singh attributes to companies taking more time over due diligence. 

While many Indian founders are pivoting to service models due to slow product uptake, Singh believes in a blended approach. “It’s not product vs service. You have to mix both to unlock revenue in India.”

10 People, 20 Clients, and AI Everywhere

Despite being a lean 10-member team, RevRag serves 20 enterprise clients. AI underpins almost every process in the company—from solutioning and prompt engineering to automation and coding. 

“We extensively use AI coding tools. First, we do things manually, then we automate. It’s not just AI; it’s about writing better code and improving processes,” Singh said. He is candid in saying that contact centres will see job losses due to AI. But it’s not all gloom. 

“There are new roles like AI solution engineers, AI testers, and prompt consultants coming up. So while some jobs go, others will be created,” he said.

RevRag builds application-layer AI agents—voice, workflow, embedded—that are trained and orchestrated for specific enterprise use cases. These agents are not built on proprietary foundational models; instead, they leverage open-source and closed-source LLMs, fine-tuned and optimised for tasks such as onboarding, loan servicing, and support in the BFSI sector.

While RevRag is open to deploying homegrown LLMs like Sarvam and Krutrim, Singh notes that latency and production readiness continue to pose challenges. 

“We’ve tried Sarvam, but the latency is high. Once it’s sorted, we would love to deploy it because it understands Indian languages better,” he said. As for Krutrim’s Kruti agent, Singh said they haven’t explored it yet as “it’s too early.”

The company has recently launched two AI agents, Sophie and Emma, each for a different use case. Interestingly, RevRag has intentionally dropped the idea of giving its AI agents Indian-sounding names. 

“If we name it Indian, we won’t be able to sell in the US. If we name it American, we won’t be able to sell in India. So we’ve decided to go nameless and just focus on use cases,” Singh said.

What sets RevRag apart from IT giants like Infosys, which claim to be building hundreds of AI agents, is focus and depth of integration. “Nobody in India is doing embedded AI agents the way we are. Our agents can operate in-app and then transfer calls to a human agent, handling the entire workflow,” he added. 
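For readers wondering what an embedded agent with human handoff looks like mechanically, the sketch below shows one plausible shape: an LLM-backed handler that answers routine BFSI queries from provided context and escalates to a human when it is unsure or the customer asks for one. The model choice, escalation rule, and helper names are assumptions for illustration, not RevRag’s implementation.

# Illustrative sketch of an in-app agent with human handoff.
# Model, escalation rule and helper names are assumptions, not RevRag's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a loan-servicing assistant for a bank. Answer only from the "
    "provided policy context. If you are not sure, reply exactly: ESCALATE"
)

def transfer_to_human(user_message: str) -> str:
    # In a real deployment this would create a ticket or bridge a live call.
    return "Connecting you to a support executive now."

def handle_message(user_message: str, policy_context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Context:\n{policy_context}\n\nCustomer: {user_message}"},
        ],
    )
    answer = response.choices[0].message.content.strip()
    if answer == "ESCALATE" or "human" in user_message.lower():
        return transfer_to_human(user_message)
    return answer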

The post ‘We are Not Quitting India,’ RevRag’s $10 Million Agentic AI Roadmap for Two Years appeared first on Analytics India Magazine.
