AI Features - Latest Features & Products in Artificial Intelligence
https://analyticsindiamag.com/ai-features/

Enterprises Beware: Agent-Washing Clouds the Future of AI
https://analyticsindiamag.com/ai-features/enterprises-beware-agent-washing-clouds-the-future-of-ai/ | Sat, 27 Sep 2025

Vendors mislabel copilots as agents, raising regulatory and operational risks for firms chasing the promise of agentic AI.

The post Enterprises Beware: Agent-Washing Clouds the Future of AI appeared first on Analytics India Magazine.


Most vendors are mislabeling their products as “agentic AI,” setting unrealistic expectations around tools that are essentially copilots or intelligent automation with a chat interface, according to new research from HFS.

This “agentic-washing” — the gap between what is marketed and what is actually sold — has become the next big trust issue in enterprise AI. Vendors are rebadging copilots as “agents” to imply autonomy and business impact, according to the research authored by Hansa Iyengar, practice leader (BFS & IT Services) at HFS Research. 

A Research and Markets report on AI agents projected the market to grow from $5.1 billion in 2024 to $47.1 billion by 2030, a CAGR of 44.8% over the period. 
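The two endpoints and the stated growth rate are mutually consistent; as a quick back-of-the-envelope check (a minimal sketch using only the figures quoted above):

```python
# Sanity-check the projection: $5.1B in 2024 growing to $47.1B in 2030
# spans six compounding years, so the implied CAGR is:
start_value, end_value, years = 5.1, 47.1, 6

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 44.8%
```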

The report, which surveyed over 1,300 professionals to “learn about the state of AI agents”, found that 51% of respondents were already using AI agents in production, that 63% of mid-sized companies had deployed agents in production, and that 78% had active plans to integrate AI agents. 

The HFS report said regulators on both sides of the Atlantic are already targeting false claims, setting up a collision between hype and compliance.

Gartner forecasts that 40% of enterprise applications are expected to feature task-specific AI agents by the end of 2026, up from less than 5% in 2025. These agents will evolve from AI assistants, currently embedded in most enterprise apps by end-2025, to autonomous, task-capable systems that enhance productivity, collaboration, and workflow orchestration. 

Gartner predicts that agentic AI could account for around 30% of enterprise application software revenue by 2035, surpassing $450 billion. 

“We are still seeing AI assistants being deployed which are agent washed,” Anushree Verma, senior director analyst, Gartner, told AIM.

She added that the rapid growth in popularity of agentic AI in India is largely driven by hype, while adoption remains very low for now, with low ‘AI agency’ use cases. Early examples, according to her, take the form of virtual assistant software architectures, creating even further confusion. 

“Customer service and knowledge management remain the top use cases which have advanced the level of ‘AI agency’ in these implementations. We do have some other emerging use cases, for example, SOC agents, Agents for SDLC, Simulation, etc,” she said.

Devil is in the Details

HFS clarifies the differences. 

Copilots are assistants confined to a single app or workflow, triggered by a user, with limited memory and no autonomous planning or open tool choice. 

AI agents are individual systems executing specific tasks with policies, telemetry, and rollback.

Agentic AI refers to orchestrated, autonomous systems that coordinate multiple agents, maintain context, and adapt dynamically to achieve broader business outcomes. 

If a vendor’s AI can’t decompose goals, choose tools across systems, remember context, and recover from failure, HFS says they’re not selling agentic AI, but AI-assisted workflows.

The research referred to UiPath’s Autopilot and Automation Anywhere’s Co-Pilot to illustrate the rebadging trend. 

Both products deliver productivity gains through text-to-automation or natural-language prompts, but they operate within bounded stacks, not open-world autonomy. 

ServiceNow positions its AI Agents as skills-based orchestrators across IT and HR workflows, but again, scope is defined by policy guardrails and configured skills.

The three companies did not respond to AIM’s queries.

Verma explained that Agentic AI refers to a class of system developed using various architectures, design patterns and frameworks, encompassing both single AI agent and multiagent designs. These systems are capable of performing unsupervised tasks, making decisions and executing end-to-end processes. 

AI agents, meanwhile, are autonomous or semiautonomous software entities that use AI techniques to perceive, make decisions, take actions and achieve goals in their digital or physical environments.

“It effectively means that Agentic AI practice is used for creating AI agents,” she said. 

Still an Aspiration

Most deployments today remain at Levels 1 and 2 of HFS’ “five levels of agentic maturity.” Copilots handle departmental tasks under human oversight. A smaller group reaches Level 3, where processes are coordinated across bounded systems.

Levels 4 and 5, where multi-agent systems own business outcomes and evolve with minimal human input, remain aspirational. 

Roadmaps such as Intuit’s GenOS describe “done-for-you agentic experiences,” but HFS classifies them as emerging claims pending production-grade evidence.

The risks of overstatement are growing. 

The US Federal Trade Commission launched “Operation AI Comply” in September 2024, warning that deceptive AI marketing falls under consumer-protection laws. 

In parallel, the Council of Europe’s legally binding AI treaty requires lifecycle transparency, impact assessment, and oversight.

Enforcement has already begun. DoNotPay, which marketed itself as the “world’s first robot lawyer,” faces FTC action for deceptive autonomy claims and has been ordered to compensate customers. 

Rytr, an AI writing assistant, enabled mass production of fabricated reviews, failing consumer-protection standards. 

Delphia and Global Predictions, which claimed to be the “first regulated AI financial advisor,” paid $400,000 in penalties after regulators found their claims misleading.

Check Before Subscribing

HFS recommends CIOs use its “two-gate Agentic Reality test” before buying into vendors’ claims: 

Gate one asks whether the system demonstrates agency, goal decomposition, tool use, memory, policy guardrails, and telemetry. 

Gate two tests readiness to scale, requiring multi-agent coordination, API execution, fraud prevention, compliance hooks, and lifecycle support.

If two or more Gate 1 items fail, buyers are looking at an assisted workflow, not an agent.
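HFS frames this as a checklist rather than software, but the Gate 1 rule above reduces to a simple scoring function. A minimal sketch, assuming pass/fail answers per criterion (the criterion names and the two-failure threshold come from the report's description; the function itself is illustrative, not an HFS artefact):

```python
# Illustrative scoring of HFS' Gate 1 criteria. A vendor claim clears
# Gate 1 only if fewer than two criteria fail.
GATE_1_CRITERIA = [
    "demonstrated agency",
    "goal decomposition",
    "tool use",
    "memory",
    "policy guardrails",
    "telemetry",
]

def gate_1_verdict(results: dict[str, bool]) -> str:
    """results maps each criterion to True (pass) or False (fail)."""
    failures = [c for c in GATE_1_CRITERIA if not results.get(c, False)]
    if len(failures) >= 2:
        return "assisted workflow"  # two or more failures: not an agent
    return "candidate agent"        # proceed to Gate 2 (readiness to scale)

# Example: a copilot with no persistent memory and no telemetry fails Gate 1.
copilot = {c: True for c in GATE_1_CRITERIA} | {"memory": False, "telemetry": False}
print(gate_1_verdict(copilot))  # assisted workflow
```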

CIOs should also enforce claims contractually — write “agent” into agreements, demand telemetry, set governance thresholds, define KPIs, require architecture disclosure, and link payments to performance. 

“The bottom line: if a vendor wants a premium for agentic AI, they must earn it with evidence,” HFS said. 

“If a product can’t plan, pick tools across systems, remember context, and recover from failure, it’s a copilot. Label it, limit it, and buy useful assistance at assistant rates.” 

Ashish Kumar, chief data scientist at Indium, said that the tech works, but the skill gap is real. Agentic AI needs more than prompts and APIs. It requires thoughtful design, orchestration, modularity, and people who understand both software and business logic.

How Neysa Stands Out in the IndiaAI GPU Race
https://analyticsindiamag.com/ai-features/how-neysa-stands-out-in-the-indiaai-gpu-race/ | Sat, 27 Sep 2025

Unlike other providers focused on GPU allocation, Neysa claims to deliver an end-to-end AI cloud platform.

The post How Neysa Stands Out in the IndiaAI GPU Race appeared first on Analytics India Magazine.


India’s AI cloud market is crowded with multiple providers vying for the attention of startups, IITs, and enterprises. The IndiaAI Mission has empanelled over 34,000 GPUs, with another 6,000 on the way. 

Around 72% of these GPUs have been allocated to startups building foundational models, providing a boost to the nation’s AI ambitions.

Yotta Data Services, NxtGen, E2E Networks, and others like Jio, CtrlS, Netmagic, Cyfuture, Sify, Vensysco, Locuz, and Ishan Infotech have carved their own slices of this GPU pie. But, Neysa is staking a distinct claim. 

The Mumbai-based AI acceleration cloud provider is focused on the problem most AI teams face: the AI trilemma, as its chief product officer Karan Kirpalani terms it. 

At Cypher 2025, one of India’s largest AI conferences organised by AIM in Bengaluru, Kirpalani defined this trilemma: building a product with the right unit economics, speed to market, and product-market fit, all while scaling trust, which rarely works in practice. 

“You can build a product at the right cost with speed to market but may fail to align with market needs, or any two of the other criteria. It’s the apartment problem. Pick any two, but you can’t have all three,” he said.

Traditional cloud providers — AWS, Google Cloud, Azure — can solve parts of the problem but rarely all three. “AWS will charge you four times what the prevalent market rate is for an H100 GPU. You get speed, yes, but you miss unit economics. You pivot the other way, buy your own GPUs, and now you’re stuck on speed and scale. No one has solved all three,” Kirpalani elaborated.

Enter Velocis

Velocis Cloud aims to tackle the trilemma. Unlike other providers focused on GPU allocation, Neysa delivers an end-to-end AI cloud platform. From Jupyter notebooks and containers to virtual machines and inference endpoints, everything is pre-integrated and accessible with a click on Velocis Cloud. 

Enterprises get flat-fee pricing, granular observability, and dedicated inference endpoints for models like OpenAI’s GPT-OSS, Meta’s Llama, Qwen, and Mistral. Startups get credit programs to avoid “project-killing” hyperscaler bills. 

“Clients appreciate it more than GPUs. Bare metal, virtual machines, containers, Jupyter notebooks, inference endpoints — you can do all of it with a click, and at far better unit economics than hyperscalers,” Kirpalani said during a podcast at Cypher 2025.

Contrast that with Yotta. CEO Sunil Gupta has ordered 8,000 NVIDIA Blackwell GPUs to expand capacity for IndiaAI projects. Yotta already operates 8,000 H100s and 1,000 L40s, supporting Sarvam, Soket, and other large-scale AI models. “Most large-scale AI model development in India today is happening on Yotta’s infrastructure,” Gupta earlier told AIM.

Yotta’s strength is sheer scale, with a platform-as-a-service API layer for enterprise access. At the same time, Yotta also offers similar services, from training on bare metal hardware to deploying custom models and inference on its Shakti AI Cloud platform.

NxtGen takes a long-term, trust-driven approach to AI and cloud. Unlike Neysa, which focuses on end-to-end platform usability and flexibility, NxtGen leverages its legacy as one of India’s first cloud players and government contracts to build enterprise inference and sovereign AI at scale. 

“The first difference is that we have a lot of trust with our customers,” CEO AS Rajgopal told AIM earlier, emphasising that NxtGen is not just providing GPUs but creating an enterprise-grade inference market with open-source, agentic AI platforms. Its philosophy blends early adoption, infrastructure investment, and operational sovereignty.

Standing Out

So where does Neysa fit in this crowded domain? It’s not about who has the most GPUs or the biggest contracts. It’s about usability, predictability, and sovereignty. Kirpalani emphasised India’s need to reduce dependency on foreign models and data centres. 

“For India, investing across the stack and reducing dependency on foreign models, hardware, and data centres is vital,” he said. Neysa’s strategy is to offer variety — supporting multiple open-weights models — and control, ensuring enterprises can fine-tune, self-host, and manage token performance without surprises.

Hardware scale is a consideration, but Neysa is pragmatic. “Seeing a homegrown NVIDIA in five years? Not realistic. Manufacturing silicon is complex. A more realistic approach is to incentivise global manufacturers and ODMs to produce in India,” Kirpalani noted. The focus is on accessible infrastructure and a strong supply chain rather than building chips from scratch.

While Yotta, E2E, NxtGen, and others are racing to deploy GPUs and secure large contracts, Neysa is carving a niche for operational simplicity and sovereign AI. Its Velocis Cloud is designed to let AI teams focus on product development rather than cloud headaches. 

IndiaAI’s GPU push is impressive — 40,000 units and counting — but sheer capacity alone doesn’t solve the trilemma. That’s Neysa’s take.

Two Indian Engineers on a Mission to Automate Home Cooking for the World
https://analyticsindiamag.com/ai-features/two-indian-engineers-on-a-mission-to-automate-home-cooking-for-the-world/ | Fri, 26 Sep 2025

In a live demonstration for AIM, Posha prepared paneer tikka masala in approximately 25 minutes.

The post Two Indian Engineers on a Mission to Automate Home Cooking for the World appeared first on Analytics India Magazine.


Building a robot that performs mechanical cooking actions is straightforward engineering. Creating one that thinks, perceives, and improvises like a human cook presents an entirely different challenge.

Is the tomato purée thick enough? Do the onions need a few more seconds of sautéing?

Posha, a San Francisco-based startup founded by two Indian engineers, Rohin Malhotra and Raghav Gupta, is pursuing this challenge. 

In an interaction with AIM, co-founder and CTO Rohin Malhotra outlined how the company’s appliance transforms raw ingredients into ready meals. 

For users overwhelmed by AI products that just write emails and generate images, Posha represents a different league: AI applied to physically demanding tasks that people may want to avoid. 

The startup offers an early glimpse of AI handling domestic work that requires real-world perception and judgment.

An Attempt to Bridge the Gap

Posha’s hardware features mechanical arms that pour and stir ingredients, as well as dispense spices through multiple pods, along with an induction pan and integrated oil and water tanks. There is also a display through which users can interact with the appliance, view setup instructions, recipes, and more. 

“The first step would be to choose how many people you’re cooking for,” said Malhotra, as that would guide Posha about the quantity of ingredients to be fed in. The appliance can churn out up to 600 pre-programmed recipes for up to four people. 

Thanks to AI models equipped with computer vision hardware, Posha can ‘watch’ food change during cooking and make real-time decisions about when to adjust the heat, add more ingredients, or proceed to the next step. 

Besides following the recipe, Posha can also intelligently accommodate missing ingredients or specific dietary restrictions. 

“This happens a lot when some of our customers don’t have enough time, or the lack of skills to prepare the ingredients the right way — Posha can detect that and ensure the final recipe turns out the same way,” said Malhotra. 

The appliance is priced at a one-time fee of $1,499 and is being sold in the United States in limited quantities. The company recently raised $8 million in Series A funding led by Accel Ventures. 

In a live demonstration for AIM, Posha prepared paneer tikka masala, a traditional Indian main course made with cottage cheese, in approximately 25 minutes. 

Where’s the Training Data?

Essentially, the company needed to train the computer vision models to cook all 600 recipes in its database. “One of the challenges was that there was no data on which these models could have been trained. We had to create our own datasets,” said Malhotra. 

The team had to break cooking into component skills that function like “Lego blocks” for recipes. For example, to teach the system frying, engineers cooked 10 different ingredients from raw to burnt, training the camera to recognise colour states from raw to golden brown to black.

Similarly, when cooking ingredients that shrink in size, such as mushrooms, the camera can calculate the percentage by which ingredients’ size decreases, said Malhotra. 
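Malhotra did not spell out the computation, but the shrinkage signal he describes is simple arithmetic on an ingredient's apparent size between frames. A hypothetical sketch (the function name and pixel counts are illustrative, not Posha's actual pipeline):

```python
def shrinkage_percent(initial_area_px: float, current_area_px: float) -> float:
    """Percentage by which an ingredient's visible area has shrunk.

    In a real system the areas would come from a vision model's
    segmentation masks; here they are plain numbers for illustration.
    """
    return 100.0 * (initial_area_px - current_area_px) / initial_area_px

# Mushrooms segmented at 12,000 px at the start of a saute, 7,800 px now:
print(f"{shrinkage_percent(12_000, 7_800):.0f}% reduction")  # 35% reduction
```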

“We had to get a large number of ingredients initially and then create a substantial amount of synthetic data. We also had to cook them and gather all the necessary data before training our models on that,” he added. 

Each time a user cooks a recipe, Posha also collects camera vision data to enhance the models’ efficiency, Malhotra said. 

To achieve the desired outcome, individuals from various fields of expertise were assembled. The company collaborates with mechanical, electronics, manufacturing, software, and AI engineers — and also, chefs. “A few people who work here were chefs in their previous jobs. And now, for their living, teach robots how to cook.” 

Taste Over Tech

None of the effort that went into development would matter if the food didn't taste good; the polish of the finished appliance and its capabilities would count for little. 

“One of the most important aspects which we discovered was that we need to get people to taste the food,” said Malhotra. “What users are really sceptical about is whether Posha can cook good food.” 

“The easiest way to do that is to do a lot of demos. We invited people to our office, and a lot of them became our customers after seeing a demo and tasting the food,” added Malhotra. 

The company’s design philosophy, he said, focuses on providing the user a clear view of how the appliance is cooking their food. “It helps build a level of trust.”

Furthermore, saving time or effort is not the only goal of Posha, but also promoting a healthy diet. 

“The amount of time people spend cooking food at home [in the US] is extremely low. That is increasingly being replaced by processed food from supermarkets or food delivery apps,” said Malhotra. Posha’s library of recipes also contains a long list of health-based dishes across various cuisines and dietary preferences. 

Having said that, Posha isn’t the only company in the smart cooking or cooking technology sector. Startups such as Tovala, June Oven, Anova Culinary, and Impulse Labs offer a variety of products and appliances in the smart kitchen market. 

While most leading AI-enabled cooking solutions are offered by startups and modern companies, it would be interesting to see how mainstream appliance manufacturers respond to both the opportunity and the challenge.

The question that will drive innovation remains the same: how long will it take for incumbents to develop such solutions at scale, and at a fraction of the cost?

BharatGen and the Pursuit of Sovereign, Scalable AI for India
https://analyticsindiamag.com/ai-features/bharatgen-and-the-pursuit-of-sovereign-scalable-ai-for-india/ | Fri, 26 Sep 2025

“Knowledge-driven components are important because we don't want everything to be just algorithmic innovation.”

The post BharatGen and the Pursuit of Sovereign, Scalable AI for India appeared first on Analytics India Magazine.


Generative AI is evolving beyond the race for larger models, focusing on sovereignty, data ownership, and cultural alignment. For India, where multilingual diversity defines daily life, the challenge lies in building AI that reflects these realities while remaining scalable and cost-efficient.

The answer may lie in BharatGen, a consortium-led effort to create multilingual and multimodal AI that is sovereign, frugal, and rooted in India’s priorities.

At Cypher 2025, Ganesh Ramakrishnan, professor at the department of computer science and engineering, IIT Bombay, said, “India’s AI opportunity, converting the diversity into a strength by leveraging the similarity across languages, getting back our skilled engineers and researchers to work together.”

The project brings together IITs and other institutions under a not-for-profit structure, combining academic research with practical applications. Initially supported by the Department of Science and Technology, BharatGen recently received a significant boost in the form of a ₹900 crore grant under the IndiaAI Mission.

This whole-of-government approach, with the Ministry of Electronics and IT stepping in alongside earlier support, aims to scale the models towards the trillion-parameter range and enable the creation of agentic systems for Bharat.

As Ramakrishnan explained, this is a leapfrogging opportunity to shift India from being a “use case capital” to an IP producer, while reinforcing privacy and cultural preservation.

Models Born from India’s Context

BharatGen has already released models ranging from 500 million to 7 billion parameters. Among them is Param-1, a 2.9 billion-parameter language model pre-trained from scratch with 33% Indian data, including 25% Hindi.

“We also released several domain-specific models in agriculture, legal, finance, and Ayurveda,” Ramakrishnan said, emphasising the localisation strategy.

The consortium has also launched multimodal systems. The Sooktam family powers text-to-speech, Shrutam focuses on automatic speech recognition, and Patram stands as India’s first 7 billion-parameter document vision-language model.

These systems are intended to serve Indian needs rather than mimic global templates. “This is actually the seat of India’s AI ecosystem, having our feet on the ground through applications, while also ensuring that we are building models which are not just aping the Western models,” Ramakrishnan emphasised.

Applications such as Krishisathi, accessible via WhatsApp, demonstrate how these models can reach ordinary users. From speech-to-speech systems capable of conveying emotion to compact diffusion-based voice models that work with minimal data, BharatGen’s experiments point towards a personalised, inclusive future for Indian AI.

Also Read: BharatGen’s ‘Recipe’ for Building a Trillion Parameters Indic Model

Research, Sovereignty, and Scaling Ahead

Research is central to BharatGen’s approach, with over 15 papers published in top-tier venues within a year. The consortium has collected more than 13,000 hours of speech data across Indian regions, embedding fidelity and provenance checks into its data pipelines.

Ramakrishnan described this as a “virtuous cycle” of recipes and indigenous benchmarks, ensuring models evolve from robust foundations.

Training challenges remain formidable, with even mid-sized models requiring hundreds of GPUs over weeks. Yet BharatGen’s frugal philosophy has produced compact multilingual architectures that perform competitively on benchmarks.

The recent government funding promises to accelerate this trajectory. With resources to train much larger models, the project can now aim for trillion-parameter systems, speech agents capable of handling multilingual tasks, and multimodal document models for domains such as governance, healthcare, and finance.

At its core, BharatGen is a strategic exercise in sovereignty. By embedding knowledge-driven components, focusing on explainability, and leveraging linguistic similarities across Indian languages, the initiative seeks to create AI that is not only technically strong but also aligned with India’s cultural and national priorities.

As Ramakrishnan concluded, it is about turning diversity into strength and laying the foundation for India to lead, not follow, in the age of generative AI.

Also Read: How BharatGen Took the Biggest Slice of IndiaAI’s GPU Cake

How Pradhi AI Embeds Emotional Intelligence in Voice AI
https://analyticsindiamag.com/ai-features/how-pradhi-ai-embeds-emotional-intelligence-in-voice-ai/ | Fri, 26 Sep 2025

As businesses recognise the potential of voice-driven tech, Pradhi AI is laying the foundation for an empathetic, responsive AI ecosystem.

The post How Pradhi AI Embeds Emotional Intelligence in Voice AI appeared first on Analytics India Magazine.


Text-to-speech-based generative models have propelled the industry towards faster and more efficient consumer engagement and hiring. Yet, replicating human tone, emotion, and subtlety remains far off. Voice AI holds a unique promise, but complexities as well. 

Pradhi AI Solutions, a Hyderabad-based startup, has been focused on pioneering a voice-driven, emotionally intelligent AI platform. 

From Multigraphs to Market Models

The company’s roots lie in deep tech research. CEO & co-founder Vijayalaksmi Raghavan, in a conversation with AIM, recalled how her team translated abstract mathematical concepts, such as multigraphs, into deployable predictive models. These could monitor real-time metrics such as the melt flow index. 

Their system is built in layers. It extracts over a hundred voice-based measures, refines them through network graph theory and statistical techniques, and delivers actionable insights.

This is more than academic. The model is designed for enterprise deployment, embedding intelligence into everyday customer and sales conversations. By focusing on extracting significance from voice, Pradhi AI allows organisations to see far beyond what traditional text-based interfaces can capture.

Moving Beyond Sentiment Analysis

Voice AI has long been associated with sentiment detection: happy, sad, or angry tones tagged at a superficial level. Raghavan argues this is shallow.

“Speech emotion recognition is an evolving field. Today, semantic analysis only tells you so much. A human can distinguish the difference between a flat ‘okay’ and an enthusiastic ‘okay!’, but a large language model can’t,” she explained.

Pradhi AI, in collaboration with IIT Delhi, is researching prosody — the rhythm, stress, and intonation of speech. These markers can indicate tension, hesitation, or emphasis. When modelled correctly, it can expand AI’s interpretive depth.

The implications are vast. In customer service, for instance, the need may not be to identify anger but to equip a bot with empathetic responses that mirror human interaction. “We’re very far away from that reality,” Raghavan admitted, “but our work is laying the groundwork to get there.”

India’s Multilingual Reality

One of the toughest challenges for voice AI is handling Indic languages. Unlike English, which has benefited from decades of corpus development and tokenisation research, Indian languages lack extensive digital datasets.

Some large models, such as Google’s Gemini stack, perform significantly better with Indic languages than others, Raghavan pointed out. Consequently, Pradhi AI has adopted a hybrid approach that combines augmented models with datasets, leverages heuristic methods for recognising dialects, and introduces a human-in-the-loop mechanism to achieve fine-tuned accuracy.

“Achieving the 98-99% accuracy levels for Indian languages will take time,” she said. “But until we go under the hood, embedding-level improvements, tokenisation, and larger datasets, the gap will persist.”

Privacy as a Cornerstone

If emotion recognition is a research challenge, data privacy is the commercial one. Enterprise customers are often hesitant to transmit voice data outside their controlled environments. Raghavan is acutely aware of this barrier.

She revealed a significant breakthrough. “From a data privacy standpoint, we do not make any API calls. That means the data remains within your environment only. All our models are locally installed. Plus, our data is protected with an advanced cryptography solution. The breakthrough is no API calls.”

Pradhi AI uses elliptic-curve encryption to secure audio both in transit and at rest. Clients can run the stack on-premises, ensuring compliance with strict privacy standards.
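The article does not disclose Pradhi AI's scheme beyond "elliptic-curve encryption", but an ECIES-style construction (ephemeral ECDH key agreement, a KDF, then an AEAD cipher) is the standard way to encrypt blobs such as audio with an elliptic-curve key pair. A hedged sketch using the widely used `cryptography` package; every name, curve, and parameter here is an illustrative assumption, not Pradhi's implementation:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def _derive_key(shared_secret: bytes) -> bytes:
    # Stretch the ECDH shared secret into a 256-bit AES-GCM key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"audio-at-rest").derive(shared_secret)

# Recipient's long-term elliptic-curve key pair (NIST P-256).
recipient_key = ec.generate_private_key(ec.SECP256R1())

# Encrypt: fresh ephemeral key per recording, ECDH against the recipient's
# public key, then AES-GCM over the audio bytes.
ephemeral = ec.generate_private_key(ec.SECP256R1())
key = _derive_key(ephemeral.exchange(ec.ECDH(), recipient_key.public_key()))
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"raw audio frames", None)

# Decrypt: the recipient rederives the same key from the ephemeral public key.
key = _derive_key(recipient_key.exchange(ec.ECDH(), ephemeral.public_key()))
recovered = AESGCM(key).decrypt(nonce, ciphertext, None)
assert recovered == b"raw audio frames"
```

Because the symmetric key never leaves the machine and no ciphertext is sent to an external API, a construction like this is compatible with the on-premises, no-API-call posture described above.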

The Indian AI Ecosystem

Voice-first platforms in India face infrastructure barriers. Many enterprises are still adapting their user interfaces, designed originally for text, to accommodate voice inputs.

Raghavan likens the process to plumbing: “You can’t install a fancy tap until the pipes are laid down. For us, the first work often is just putting that infrastructure in place.”

Yet the enthusiasm remains high. Once businesses see the potential for natural and accurate interactions, they recognise the long-term value.

The funding climate for AI has shifted dramatically since 2023. Back then, a startup could attract investors merely by mentioning “AI” in its pitch deck. Now, says Raghavan, investors demand a clear path to revenue.

Voice AI has received significant investment on the front-end side, making voices sound human, Indian, or empathetic. But voice as data input and voice analytics remain underfunded. That is precisely where Pradhi AI is carving its niche.

“We will look for institutional funding soon,” Raghavan said, “but right now our focus is on traction and building an ARR-driven business model.”

Her career reflects the startup’s layered approach. She worked in the corporate sector, transitioned to the non-profit world, and ultimately embraced entrepreneurship.

“Corporate impact can feel limiting,” she said. “Non-profits deliver scale but often depend on stakeholders and grants. As an entrepreneur, I can shape not just my organisation but also the behaviours of others through the products we create.”

This philosophy underpins Pradhi AI’s dual vision: advancing research in emotion recognition while creating tangible impact for enterprises.

Future of Voice AI with Emotional Intelligence 

Pradhi AI is tackling the gap between human emotion and machine intelligence.

Its layered model stands apart in a domain still in its early stages. 

The aim is not to replace human agents but augment them, providing better assessments of conversations and stronger decision-making for enterprises.

As Raghavan put it: “The trick is not in solving 80% of the use cases, but in knowing when the AI is dealing with the critical 20% it can’t handle yet. That’s why we always keep the human in the loop.”

In doing so, Pradhi AI is shaping not just India’s AI story but also the global debate on how machines can truly learn to listen.

Mangaluru Looks to Build Its Own Tech Identity, Not Replicate Bangalore
https://analyticsindiamag.com/ai-features/mangaluru-looks-to-build-its-own-tech-identity-not-replicate-bangalore/ | Thu, 25 Sep 2025

“The coastal city could showcase tangible results by applying deep tech to areas it already dominates”

The post Mangaluru Looks to Build Its Own Tech Identity, Not Replicate Bangalore appeared first on Analytics India Magazine.


Mangaluru should not aspire to be an extension of Bangalore. Instead, it must define itself as a sustainable, inclusive, culturally rooted, coastal AI-first city. This was the common ground among panelists at Technovanza 2025 as they discussed Mangaluru’s technology identity.

Industry leaders, entrepreneurs, and ecosystem enablers gathered at Technovanza 2025 to discuss the future of tech in Mangaluru, on September 24. 

Not Another Bangalore

Bhaskar Verma, regional director–South, NASSCOM, noted that while the city is seeing new organisations and revenue, branding is still missing. “When we talk about Mangaluru, how many of you are coming forward as brand ambassadors?” he asked.

Verma stressed that development must be inclusive, as he urged the ecosystem to bring more women entrepreneurs into the fold. He also highlighted NASSCOM’s role in connecting startups with investors and mentors, scaling talent through the Future Skills program, and working with the government on GCC and ER&D policies.

“The city has the opportunity to plan, not just react,” said Gurudatta Shenoy, managing partner at Vertex Workspace. He emphasised the importance of sustainable infrastructure, well-managed workspaces, and a deliberate effort to strike a balance between professional opportunities and quality of life. 

Shenoy said these factors could help attract startups and retain skilled workers. Over the last two decades, academia in the region has supplied more than two lakh professionals to the tech industry, he said, adding that 90% of the 2,250 people accommodated in Vertex spaces over the last three years are locally rooted. 

“That attachment to the city ensures they deliver value back to Mangaluru,” he said.

AI-First by Design

Nethaji Rajendran, founder of Creolay, stressed the role of artificial intelligence, saying, “If we can’t even use current AI compute power, then investing in new infrastructure is wasteful.”

His argument was that Mangaluru must be deliberate in its investments — embedding AI into its planning from the start rather than treating it as an add-on. According to him, this approach would allow the city to leapfrog others that are still retrofitting AI into outdated infrastructure.

Banking, Education as Anchors

Mangaluru’s historic strengths in banking and education came up repeatedly. Anand G Pai, president of the Kanara Chamber of Commerce & Industry, said these legacy sectors could serve as anchors for future growth. “Fintech and BFSI are natural growth verticals for us,” he said, pointing to the region’s strong track record in producing banking talent and institutions. Pai also highlighted the need for closer industry-academia collaboration to retain local talent, which usually leaves for metro cities.

The discussion also touched on the need to connect Mangaluru to the global technology ecosystem. 

Vijaykrishna Shetty, CEO of ThoughtGenesis, pointed to the undersea cable project near the city as a critical enabler of digital connectivity. He argued that such infrastructure, combined with global capability centres, could make Mangaluru a serious contender in the global services market. 

On branding, Shetty suggested positioning the city as the “Silicon Beach of India,” in an attempt to capture both its coastal geography and technological ambitions.

Blue Economy as Low-Hanging Fruit

Sunil Padmanabh, industry ecosystem expert, steered the conversation toward the region’s maritime strengths. He described the blue economy, including fisheries, ports, and marine exports, as a natural sector for AI deployment. 

From reducing spoilage in seafood exports to optimising port logistics, he argued, Mangaluru could showcase tangible results by applying deep tech to areas it already dominates. “Start where we already have strength. Prove value here, and the world will take notice,” Padmanabh said.

While each speaker emphasised different aspects of infrastructure, AI, legacy sectors, or branding, a clear consensus emerged. Mangaluru’s future identity lies in fusion: combining traditional strengths with deep tech, balancing sustainability with growth, and ensuring cultural roots remain intact while engaging global markets.


Google’s Gemini Nano Banana and the Cost of Convenience https://analyticsindiamag.com/ai-features/googles-gemini-nano-banana-and-the-cost-of-convenience/ Thu, 25 Sep 2025 06:30:00 +0000 https://analyticsindiamag.com/?p=10178115

The company’s new AI image and photo editor deepens concerns over data use and consent gaps, experts warn.

The post Google’s Gemini Nano Banana and the Cost of Convenience appeared first on Analytics India Magazine.


Google’s new Gemini Nano Banana AI image editor and photo editor has pushed conversations about privacy and security back into the spotlight. 

The tool, which allows users to easily generate or edit images, placing themselves alongside celebrities or altering facial features, has rekindled debates around biometric data, user consent, and surveillance capitalism.

While Google maintains that its models are not trained on personal photos, experts point out that the underlying technology may still be used for behavioural tracking, facial recognition, and metadata analysis.

Eamonn Maguire, director of engineering, AI & ML at Proton, in an interaction with AIM, described Nano Banana as “a troubling expansion of surveillance capitalism into creative expression, raising urgent questions about consent and control of personal data.” 

He noted that the tool’s operation depends on analysing biometric data through facial recognition, tracking editing habits, and gathering metadata such as location and device details.

Maguire highlighted a phenomenon he calls “consent gaps”, where users are “strong armed into agreeing to privacy policies without understanding what they’re agreeing to.” 

With opaque disclosures and no way to “un-train” data once it enters models, deletion remains limited, leaving users with diminished agency.

The implications go beyond the individual. “The feature accelerates the normalisation of big tech surveillance,” Maguire warned. 

Even attempts at reassurance, such as watermarking, are fragile. He said, “Now, Google tries to give lip service to people’s concerns through things like watermarking. But watermarking offers little protection as they can be stripped and there is no standard for cross-platform verification.”

Current laws, he argued, were never designed with AI training in mind, leaving gaps that risk legitimising mass data collection. He warned that tools like Nano Banana could normalise convenience over privacy, impacting future regulations.

Also Read: Canva’s Fight for Relevance in the Age of Google Nano Banana

A Predictable Evolution, With Familiar Risks

Joel Latto, threat advisor at F-Secure, a global cyber security and privacy company, took a more measured stance. “I have not seen indicators that this would pose any new risks,” he said, framing Nano Banana as a natural progression in the competitive race to deliver the next viral AI feature.

Latto observed that most users “hop on to different services and in turn feed all of them with their personal data,” but stressed that basic hygiene practices, such as temporary modes, disabling training, and avoiding sensitive inputs, remain the most practical defences.

He added that while users are entrusting personal data, including potential biometrics, to a major advertising entity, it’s worth noting that companies such as Google generally maintain more robust privacy practices compared to early viral applications like FaceApp.

Deepfake technology forms another layer of concern. “Deepfake generation and detection is an ongoing arms race which F-Secure is invested in as well,” Latto noted. The real shift, he argued, is not necessarily in quality but in access. “Whenever a LLM model/feature goes viral, it lowers the barrier of entry,” meaning more users can produce photorealistic fakes with minimal technical skill.

Latto acknowledged that new releases often lack meaningful restrictions at first. “Just like with the Ghibli case, when these things come out there’s surprisingly little guardrails in place.” 

“After the viral wave hits, new restrictions are put in place.” 

This reactive cycle underscores the fragile nature of safeguards in the generative AI space.

The Old Privacy Rule For New Users

On the one hand, powerful tools like Nano Banana make creative AI widely accessible, pushing the boundaries of image generation. On the other, they embed users more deeply into ecosystems where consent may be opaque and privacy compromised.

As Maguire argued, the moment is pivotal. But as Latto suggested, the risks may not be entirely new, only more widely distributed.

The fundamentals of privacy and security remain the same, even in the post-GenAI world. For users encountering these tools for the first time, it is largely a matter of awareness.


BharatGen’s ‘Recipe’ for Building a Trillion Parameters Indic Model https://analyticsindiamag.com/ai-features/bharatgens-recipe-for-building-a-trillion-parameters-indic-model/ Wed, 24 Sep 2025 10:30:00 +0000 https://analyticsindiamag.com/?p=10178061

The consortium insists sovereignty doesn’t mean shutting the door on global players.

The post BharatGen’s ‘Recipe’ for Building a Trillion Parameters Indic Model appeared first on Analytics India Magazine.


BharatGen, the IIT Bombay-led consortium that bagged the biggest GPU allocation under the IndiaAI Mission, is now laying out the framework to achieve its most ambitious target yet — a trillion-parameter model. 

The task is not just about scaling compute, but building the scaffolding India lacks — data, talent, and what the group calls “recipes” for sovereign AI.

“We picked this ambitious goal because we really want to move the needle on what’s possible to build in India today,” Rishi Bal, head of BharatGen, told AIM. “But this is not just about the models. It’s about the entire ecosystem… it’s a steep ramp.”

The numbers are staggering. BharatGen has secured 13,640 H100 GPUs and close to ₹1,000 crore in funding, the single-largest allocation in the country. 

It already has a series of early releases under its belt — Param-1, a bilingual 2.9-billion-parameter model, Shrutam for speech recognition, and Patram, a vision-language model for document understanding. But scaling to a trillion parameters is a different order of challenge.

The Recipe

BharatGen started with a consortium of seven institutes and is now expanding that base to nine by adding IIT Kharagpur and IIIT Delhi. Bal said that to reach the larger model, the team first needs the talent, and that is what it is currently focusing on. 

The first milestone is to build a robust research ecosystem, and the next 12 months have been set aside for just that, Bal said, adding that the numerous language-specific problems, challenges, and knowledge exchanges require the group to partner very closely.

Data is the other pillar. India is short on the kind of large-scale, high-quality datasets that powered the first wave of LLMs in the West. Several startups took the synthetic data route, generating Indic data from models like Llama and Mistral, but that path does not seem completely scalable.

To fix this, BharatGen has chosen a ground-up approach. Teams have been deployed in Madhya Pradesh to convince publishers and radio stations to contribute. 

“This may not give us trillions of tokens, but it gives us high-quality, human-generated data,” Bal said.

Beyond collection, the group is investing in data provenance — metadata and curation pipelines that are otherwise a “black art” for small players. “We need sovereign recipes, including when to build small models from scratch, when to distill from larger ones, and when to checkpoint. These are crucial for the ecosystem,” IIT Bombay professor Ganesh Ramakrishnan told AIM.

The group also draws a sharp line against aping Western models. “Indian languages are as important as English. You cannot just tokenise them into English-heavy models. This is about building on our own terms,” Ramakrishnan said.

BharatGen is working with NASSCOM AI to fold this into a national AI stack.

On synthetic data, BharatGen is pragmatic. Bal said it cannot be written off. “You have to look at the right mixture. Crawling, OCR, synthetic generation, community contribution — they’re all tools. The question is about balance, and that’s where the research consortium helps.”

A Sovereign Ecosystem, Not Isolation

BharatGen was seeded by the Department of Science and Technology (DST) and is structured as a Section 8 non-profit company under IIT-Bombay as per the Companies Act, 2013. The idea is to make public goods, not returns. 

“If I set this up as a for-profit, there would always be questions,” Bal said. “As a Section 8 [entity], we have institutional credibility, and it unlocks partnerships — academic and private — that would be harder otherwise.”

“It’s like another investor who has some investment expectation of return. And in return, some expectation of control to ensure that its interests are served well. This is no different from raising 100 crores from a private equity fund or a sovereign fund,” Bal added.

In India, the government has provided GPUs and funding, and does take a stake in return, structured through board seats or convertible debentures. Critics worry this could give the state too much control over the models. Bal pushed back. “It’s just another investor with expectations. The legal structures are in place.”

Comparing India with China, whose AI ecosystem is heavily supported by the government, Bal said India needs a similar approach for its own AI ecosystem to thrive.

“Because if you get these large players [like OpenAI, Google, or Meta] very early, there’s no protection for the local players,” Bal said, highlighting the surge of AI models from China that are built by private players.

Earlier, Abhishek Upperwal from Soket AI Labs, another participant in the IndiaAI Mission, said that equity is a good idea. Nikhil Malhotra from Tech Mahindra’s Makers’ Lab, which is now also part of the mission, said that as long as the equity doesn’t interfere with their direction, it’s “a fantastic idea.”

Read: IndiaAI’s Equity in AI Startups is a ‘Fantastic’ but Risky Idea

BharatGen insists sovereignty doesn’t mean shutting the door on global players. The team recently signed an agreement with IBM to collaborate on model technologies and data preparation, including scaling data-prep work for complex, governed pipelines. This follows the team’s continued partnership with NVIDIA.

“We are not collaborating with IBM on foundational models,” Ramakrishnan clarified. “We are building our sovereign AI stack. Partners can add value on top of it.” The foundation will remain Indian.

IBM wrote in a blog post that BharatGen will also integrate with IBM’s growing family of Granite models, and build use case templates for those industries with IBM watsonx and Red Hat OpenShift AI. 

With nearly 14,000 GPUs at its disposal, BharatGen sits at the heart of India’s AI push. But if its leaders are to be believed, the real test would not be in hitting the trillion-parameter milestone alone. It would be whether the effort can seed a sovereign ecosystem robust enough to outlast the compute cycles.


Storytelling is in the Creator’s Control with GenAI https://analyticsindiamag.com/ai-features/storytelling-is-in-the-creators-control-with-genai/ Wed, 24 Sep 2025 08:40:55 +0000 https://analyticsindiamag.com/?p=10178031

“Democracy would not have been possible without storytelling being distributed.”

The post Storytelling is in the Creator’s Control with GenAI appeared first on Analytics India Magazine.


Stories have been the lifeblood of human civilisations. From cave paintings to cinema screens to shorts and reels, each era has found new ways to tell them. With the advent of generative AI, imagination is no longer limited by production barriers when it comes to narrating a story. 

At Cypher 2025, India’s largest AI conference organised by AIM in Bengaluru, Soumyadeep Mukherjee, co-founder and CTO of Dashverse, delved into reshaping narratives with GenAI. 

The age-old practice is being transformed with accessibility and a more immediate output, he said, adding that “the society is built around stories.”   

In his view, storytelling has always been intertwined with technology, from the printing press and radio to the internet, and each disruption has made it more widely available. With GenAI, he argued, this accessibility is reaching an entirely new scale.

Storytelling Through Technology

Mukherjee drew parallels between technological leaps and social transformation. He observed that the printing press made stories available to millions, radio and film allowed societies to share emotions, and the internet put creativity into pockets worldwide. 

“Democracy would not have been possible without storytelling being distributed,” he noted, adding that broader access to narratives gave individuals a voice in shaping societies.

For him, GenAI represents the next iteration of this journey. The barrier to entry in storytelling is collapsing, enabling anyone with an idea to create compelling narratives. 

Imagination is the driving force now, Mukherjee said while showcasing examples of AI-generated films created in just days using his platform. 

This immediacy, he argued, reduces the traditional complexity of filmmaking (scripts, budgets, logistics, production schedules) and allows creators to focus on ideas.

The team at Dashverse takes this a step further by using available GenAI models and fine-tuning them for long-form storytelling.

Mukherjee highlighted that, compared with consumer-facing GenAI tools like Google Veo 3, their product focuses on providing consistent character generation.

Imagination Over Constraints

Traditionally, a director might spend years developing a film, juggling departments, budgets, and editing. 

With Dashverse, “you saw the first version of your video in 10 minutes of effort… and now you can keep editing scene by scene,” Mukherjee said, highlighting the creative control that storytellers can achieve. 

Yet Mukherjee was clear that AI is not the storyteller; humans are. 

For him, GenAI’s role is in production, not authorship. He stated that inspiration has always been an integral part of art, enabling artists to learn from others and serving as a crucial element in the creative process. This, he added, allows for more impactful storytelling and the creation of previously impossible narratives.

In a world saturated with content, Mukherjee suggested that what stands out are not just stories, but connections. Infinite supply, he argued, should not dilute creativity but deepen the niches where audiences resonate most. With GenAI, those connections become quicker to build and easier to visualise.

Mukherjee’s reflections at Cypher 2025 offered a reminder that storytelling is less about tools and more about human imagination. 

GenAI, in his framing, is not the replacement of creativity but its amplifier. “It is the human’s emotion, it’s the human’s journey, it’s our thought that humans connect with,” he concluded. And in that sense, technology is helping us realise our imagination and vision into promising tales that move humans.


India’s Gas-Powered Data Centres at Crossroads: Bridge Fuel or Wrong Turn? https://analyticsindiamag.com/ai-features/indias-gas-powered-data-centres-at-crossroads-bridge-fuel-or-wrong-turn/ Wed, 24 Sep 2025 06:18:26 +0000 https://analyticsindiamag.com/?p=10178008

Natural gas may seem a potential fuel for data centres, but higher costs and insufficient infrastructure pose challenges.

The post India’s Gas-Powered Data Centres at Crossroads: Bridge Fuel or Wrong Turn? appeared first on Analytics India Magazine.


As India’s digital economy evolves, hyperscale cloud providers, AI adoption, and rapid digitalisation are fuelling the growth of data centres. Yet the sector faces a pressing challenge: meeting its electricity needs.

Electricity demand is expected to rise by 60% by 2030, according to the Council on Energy, Environment and Water. To meet this, industry players are exploring natural gas as an option for round-the-clock supply. 

Globally, gas is often called a “bridge fuel,” cleaner than coal or diesel, while still reliable. India is now testing this idea. Gas-powered data centres have been gaining attention as operators look for lower-carbon alternatives. 

The Boom Behind Gas

A report in The Economic Times revealed that policymakers and industry groups are actively considering gas-based power plants for data centres. State utilities and private developers are reportedly assessing whether gas could support upcoming hyperscale facilities in metros like Mumbai, Hyderabad, and Delhi NCR.

Mumbai, which hosts the bulk of India’s existing data centre capacity, seems the most likely candidate to explore this alternative. Proximity to LNG import terminals and industrial gas infrastructure makes it better placed than land-locked cities such as Hyderabad. 

“While there is no clear indication yet, regions with a high concentration of data centres, such as Mumbai, may be more likely to explore gas-based power as an option,” Hanumanth Raju, senior associate at the Center for Study of Science, Technology and Policy (CSTEP), told AIM.

Are Renewables Cheaper?

Cost turns out to be the biggest hurdle in the transition to gas in India. According to the Institute for Energy Economics and Financial Analysis (IEEFA), firm renewable energy (RE) prices in India average ₹4.98-4.99 per kWh, below the median gas tariff of ₹5.4 per kWh.

Rishik Teepireddi, vice-president of business strategy and renewable energy at CtrlS Datacenters, told AIM, “Gas may be a short-term solution to address gaps in the RTC energy supply. India boasts abundant solar, wind, and hydro power, and has already achieved roughly 50% of non-fossil installed power capacity ahead of its 2030 target.”

Nonetheless, the weighted average power cost of gas-based plants is significantly higher than almost all other sources (except diesel), said Raju. “For instance, tariffs have risen from ₹4.72 per unit in FY2016 to ₹7.17 per unit in FY2024. Given that data centres require a stable and reliable supply of affordable power, long-term power purchase agreements from gas plants may not be the most suitable option,” he added.  

This puts India at odds with global narratives where gas is positioned as a competitive bridge. In India, solar, wind, and battery storage are already more affordable.
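The scale of that cost gap can be sketched with a quick back-of-envelope calculation. The 100 MW facility size and 80% load factor below are illustrative assumptions, not figures from the article; the tariffs are the IEEFA numbers cited above.

```python
# Back-of-envelope: extra annual cost of running a data centre on gas power
# versus firm renewables, at the IEEFA tariffs cited above (₹/kWh).
# Facility size and load factor are hypothetical, for illustration only.
RE_TARIFF = 4.99        # firm renewable energy, upper end of ₹4.98-4.99/kWh
GAS_TARIFF = 5.40       # median gas tariff, ₹/kWh
CAPACITY_MW = 100       # assumed hyperscale facility
LOAD_FACTOR = 0.8       # assumed average utilisation
HOURS_PER_YEAR = 8760

annual_kwh = CAPACITY_MW * 1000 * HOURS_PER_YEAR * LOAD_FACTOR
extra_cost_inr = annual_kwh * (GAS_TARIFF - RE_TARIFF)
print(f"Annual consumption: {annual_kwh / 1e6:.1f} million kWh")
print(f"Extra cost on gas:  ₹{extra_cost_inr / 1e7:.1f} crore per year")
```

Even a seemingly small ₹0.41/kWh gap compounds to roughly ₹29 crore a year at this assumed scale, which is why operators treat the tariff differential as decisive.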

Infrastructure Bottlenecks

Even if operators were willing to bear the cost premium, India’s gas pipeline and plant infrastructure is inadequate. Locating gas plants near urban data centre hubs is difficult due to land scarcity and transmission challenges, explained Raju, adding that respective state governments need to actively facilitate land allocation for this purpose. 

“Power evacuation from these plants requires careful transmission planning, and the addition of new substations within dense city limits is a complex, cumbersome task,” he said.

Mumbai might emerge as an early test bed, but scaling gas-powered data centres nationwide would be an uphill task.

“Renewables plus storage and flexible clean generation are more sustainable long-term solutions than investing in substantial gas infrastructure, which faces supply, cost, and policy risk in the Indian subcontinent,” Teepireddi added. 

Use Cases and Emerging Plans

According to ITPro, data centre operators globally are adopting onsite gas generation systems as a way to ensure reliability during grid stress. In India, similar conversations are underway.

Several companies in India are focusing on cleaner energy solutions and infrastructure investments for data centres. Nxtra by Airtel, in collaboration with Bloom Energy, is employing advanced fuel cell technology powered by natural gas, and eventually, hydrogen to provide cleaner energy for its data centres. 

The Indian Oil Corporation Limited has expressed interest in entering the data centre market. Global players like GE Vernova and Siemens Energy are also eyeing the opportunity, with GE Vernova offering gas turbines like the LM2500XPRESS for efficient backup energy, especially for AI workloads. Siemens Energy provides a variety of solutions for power generation and grid stability. 

Furthermore, Ursa Clusters recently announced plans for a 100-MW data centre in Hyderabad, backed by a memorandum of understanding with the Telangana government. 

Blackstone is constructing a 150-MW facility after acquiring a natural gas-fired power plant in Virginia, and Tillman Global Holdings plans a significant 300-MW data centre. 

While no large gas-powered data centre project has broken ground in India yet, these plans reflect serious deliberation.

Policy Vacuum and Climate Trade-Offs

India currently has no policy framework that promotes gas-powered data centres. Instead, the government’s Data Centre Policy 2020 and subsequent drafts emphasise renewable energy integration.

IEEFA argues that adopting gas could be a misstep in India’s net-zero journey. “Natural gas is not a transition fuel for India’s data centres, it’s an expensive, volatile import that will expose operators to risks,” the think tank warned.

Raju echoed this view: “Renewable energy coupled with storage clearly holds an advantage, both environmentally and economically. Data centres can increasingly rely on firm and dispatchable renewable energy (FDRE), which already offers tariffs approaching those of standalone RE projects.”

From a climate lens, the methane leakage associated with gas supply chains further erodes its “clean fuel” reputation. Hyperscalers like AWS and Google, which have pledged 100% renewable energy commitments, may struggle to justify gas usage in their India operations.

The Transition Argument

Still, some argue for pragmatism. India’s data centre capacity is expected to grow from 1.3 GW in 2024 to about 5 GW by 2030, nearly a fourfold increase, fuelled by AI, cloud computing, and data localisation policies. If renewable and storage capacity lags, gas could bridge the gap in the short term.

Raju conceded this: “If demand accelerates steeply in the near term, procuring electricity from existing gas-based plants may serve as a feasible stopgap for meeting peak demand, given their high flexibility and fast ramping capabilities.” 

But investing in new gas plants solely for this purpose would not be an ideal long-term strategy, he added.

This suggests gas may provide a transitional backup, but not a mainstream source of energy.

Gas offers quick ramping and relative emissions benefits over coal and diesel, but is costlier, less sustainable, and infrastructure-constrained. Mumbai may pilot hybrid or backup gas solutions, but a deeper enquiry points toward renewables plus storage as the ultimate answer.

The critical question now is whether Indian data centres will adopt gas and, if so, for how long. If renewables and storage scale fast enough, the window for gas could close before it even opens.


How BharatGen Took the Biggest Slice of IndiaAI’s GPU Cake https://analyticsindiamag.com/ai-features/how-bharatgen-took-the-biggest-slice-of-indiaais-gpu-cake/ Mon, 22 Sep 2025 14:30:00 +0000 https://analyticsindiamag.com/?p=10177923

BharatGen has secured 13,640 H100 GPUs and ₹988.6 crore in funding to pursue India’s first trillion-parameter AI model initiative.

The post How BharatGen Took the Biggest Slice of IndiaAI’s GPU Cake appeared first on Analytics India Magazine.


The second phase of the IndiaAI Mission is all about finally delivering the GPUs that were promised earlier, while also adding new promises to the table. While the first four startups are slowly beginning to receive their hardware, the eight new ones are already queued for allocations. 

The government is now moving into full-scale compute deployment, making this the country’s biggest AI GPU programme. At the centre of it is BharatGen, led by an IIT-Bombay consortium, which has received the single largest GPU allocation in the country to build sovereign large language and multimodal models, surpassing even Sarvam AI, one of the initial selections. 

BharatGen has been allocated 4,096 NVIDIA H100 GPUs for two months, 8,192 H100 GPUs for 10 months, 440 H100 GPUs for speech models over a year and 912 H100 GPUs for vision-language models across two six-month phases. The project is supported with ₹988.6 crore, along with up to 25% additional funding for non-compute costs. The goal is a trillion-parameter model, an attempt no Indian team has made before.

“This is not just about building models but understanding why they behave the way they do. Trust is important, ethics is important, and Indian languages need the right representation,” said professor Ganesh Ramakrishnan of IIT Bombay, who is leading BharatGen, while speaking at AIM’s Cypher 2025. “If we don’t take this seriously, we risk losing not just Indian languages but also Indian content.”

The Bigger GPU Rush

BharatGen is part of a much larger GPU push. In less than a year, IndiaAI has empanelled over 34,000 GPUs, more than three times its original target of 10,000, with another 6,000 in the pipeline. This brings the total close to 40,000, making it one of the largest AI compute programmes outside the US and China.

(Chart based on announcements and estimations.)

Alongside BharatGen, several other companies are drawing from this pool. Fractal Analytics has secured 4,096 H100 GPUs for over nine months with a 40% concession to build India’s first large reasoning model, scaling up to 70 billion parameters. The focus is on structured reasoning and decision-making in healthcare, drug discovery, national security and education.

Alongside it, ZenteiQ.ai (formerly Zentech AI) is building BrahmaAI, a science-driven foundation model for engineering and scientific computing. It has 2,128 H200 GPUs over a year, supported by ₹74.7 crore, to deliver models from eight billion to 80 billion parameters. The project will cover engineering simulations and a multilingual science-education chatbot for non-invasive diagnostics.

In the healthcare space, NeuroDX, under IntelliHealth, is building a 20 billion parameter multimodal EEG model with 368 H200 GPUs over 18 months with ₹12.5 crore in support. The aim is early detection of dementia, personalised treatments for depression and anxiety and future brain-computer interfaces. All outputs from the project will be open-weight and open-source.

Meanwhile, Genloop is pursuing a smaller-scale, India-focused approach with three models—Yukti, Varta, and Kavach—each with around two billion parameters. The company aims to support all 22 scheduled Indian languages. Backed by just 16 H100 GPUs and ₹1.32 crore in funding for 12 months, Genloop is focusing on conversational AI for rural healthcare, inclusive banking and content moderation.

Tech Mahindra’s Makers Lab is working on an eight-billion-parameter model for Hindi dialects and an agentic AI platform, Orion. Its allocation is 32 H100 GPUs over nine months with ₹1.06 crore in support. Orion will extend Project Indus, which began with Hindi large language models (LLMs), into agritech, edtech, rural finance and healthcare.

Avataar.ai and Shodh AI complete the GPU landscape of IndiaAI’s second phase. 

Avataar.ai is taking a multimodal route, building large multimodal models that range from 1.5 billion to 70 billion parameters across image, video and text. Its ‘Avataars’, a suite of domain-specific and distilled AI models, are intended to power key sectors like agriculture, healthcare, education and governance. The project, supported by an allocation of 768 H100 GPUs over six months, will also develop an agentic platform to enable sector-specific applications. 

Shodh AI, meanwhile, is focused on a seven-billion-parameter foundational model for material discovery, compressed from an 80-billion-parameter scientific LLM. The model will power an autonomous system for hypothesis generation and experiment design, creating a materials-science AI assistant across electronics, semiconductors, healthcare and defence. 

To achieve this, Shodh AI is allocated up to 128 H100 GPUs for a period of eight months. 

Among the firms already allocated GPUs under Phase 1, Sarvam AI is building a 120-billion-parameter model for Indic languages. It secured the largest order, 4,096 H100 GPUs over a period of six months, along with nearly ₹99 crore in subsidies, and is expected to release India's first LLM next year. 

The firm initially received 1,536 GPUs from the first tranche. However, Sunil Gupta of Yotta previously confirmed to AIM that the company has now received all of its allocated GPUs.

Similarly, Soket AI Labs is charting a path to a 120-billion-parameter Indic model for healthcare, defence and education. Its journey begins with a seven-billion-parameter model over six months, before it is scaled up to 120 billion parameters. Gupta also confirmed that 1,536 GPUs are reserved from Yotta’s end for the firm, which will be deployed soon.

Gnani.ai has a ₹177 crore contract for 1.3 crore GPU hours, equivalent to 1,536 GPUs across H100 and H200 units from E2E Networks for one year. The firm is building a 14-billion-parameter voice AI model for multilingual real-time speech processing. The timeline for its GPU allotment is yet to be revealed.

As for Gan.ai, while there have been no announcements about GPUs or the roadmap yet, the company aims to create a 70-billion-parameter model to achieve ‘superhuman text-to-speech’ capabilities.

Why BharatGen Stands Out

BharatGen alone accounts for approximately 13,640 H100 GPUs. Taken together, the disclosed allocations so far amount to roughly 22,776 H100 GPUs, 2,496 H200 GPUs and a further 3,072 GPUs split between H100s and H200s, or just over 28,000 GPUs in total, around 71% of the available pool. That leaves some 11,000-12,000 GPUs, around 29%, either unallocated or undisclosed.
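As a quick sanity check on these figures, the tally can be reproduced from the article's own numbers; the total pool of roughly 40,000 GPUs is an assumption based on the Mission's empanelment figures cited later in this piece:

```python
# Illustrative tally of disclosed IndiaAI GPU allocations.
# Figures are from the article; the ~40,000 total pool is an assumption
# (~34,000 empanelled plus ~6,000 in the pipeline).
h100 = 22_776
h200 = 2_496
mixed = 3_072        # mix of H100 and H200 units
total_pool = 40_000

disclosed = h100 + h200 + mixed
print(disclosed)                          # 28344 -> "just over 28,000"
print(round(disclosed / total_pool * 100))  # 71 -> ~71% of the pool
print(total_pool - disclosed)             # 11656 -> within 11,000-12,000
```

The remainder lands at about 11,656 GPUs, consistent with the "11,000-12,000 unallocated or undisclosed" estimate.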

Seeded initially by the central government's department of science and technology (DST), BharatGen is now a nine-member consortium with funding and GPU allocations larger than those of the Phase 1 beneficiaries. It has already released Param-1, a 2.9-billion-parameter bilingual model with 25% Hindi data, as well as several domain-specific models in agriculture, law, finance and Ayurveda.

It has also developed Shrutam, an automatic speech recognition system, and Patram, a seven-billion-parameter vision-language model for document understanding. These smaller-scale projects are the foundation for its trillion-parameter roadmap.

At Cypher 2025, Ramakrishnan explained why sovereign models matter. “The government has taken this very seriously. We would like all this to balloon into a massive creation of Indian language content,” he said. “The models cannot be built by merely tokenising Indian languages into English-heavy models. I don’t buy into such a vision. Indian languages are as important as English. This is about building models on our own terms.”

This philosophy is baked into BharatGen’s open approach. All models will be open-source, open-weight and open recipe, and the IP will be retained by IIT-Bombay and IndiaAI. Use cases range across governance, finance, healthcare, agriculture, education and law. The underlying bet, however, is on sovereignty. The goal is to make India a producer, not just a consumer, of AI technologies.

“This is actually the seat of India’s AI ecosystem, having our feet on the ground through applications, while building models which are not just aping Western models,” Ramakrishnan added.

The post How BharatGen Took the Biggest Slice of IndiaAI’s GPU Cake appeared first on Analytics India Magazine.

]]>
The Generation That Refused to Log Off in Nepal https://analyticsindiamag.com/ai-features/the-generation-that-refused-to-log-off-in-nepal/ Mon, 22 Sep 2025 12:30:00 +0000 https://analyticsindiamag.com/?p=10177900

“The backlash made it evident that Nepali citizens do not tolerate digital authoritarianism disguised as governance.”

The post The Generation That Refused to Log Off in Nepal appeared first on Analytics India Magazine.

]]>

When Nepal’s government abruptly banned 26 major social media platforms in September, it didn’t just cause an internet outage; it sparked a nationwide digital uprising. What started as a protest against online restrictions quickly transformed into a movement led by Gen Z, demanding accountability, transparency and a greater voice in the country’s political and digital future.

“I think what a lot of international coverage misses is that this protest was never just about social media. Yes, the ban on 26 platforms triggered the response, but the real fuel was deeper: corruption that had crossed all limits,” said 26-year-old Prince Shah Chaudhary, CEO of online petition platform SpeakUp Nepal, who was part of the protests that swept across Nepal from September 8-13.

Chaudhary recalled the moment the announcement was made. “The government announced the ban with zero consultation. The message felt clear: ‘You’re not allowed to talk back.’ For young people like me, that was just unacceptable. It wasn’t just about TikTok; it was also Facebook and WhatsApp, platforms our families, small businesses, diaspora communities and even school groups rely on every single day. These are not luxuries; they are part of our daily civic, economic and social lives.”

The protests cut across demographics. “It was creators, students, doctors, engineers, filmmakers and even high school kids in uniforms. People who may never have joined a protest before felt compelled to say, ‘No, this is not okay,’” he explained.

Chaudhary also highlighted the economic stakes for Nepal’s youth. “In the short term, the effects were immediate and brutal. I personally know young people who run their entire business off Facebook and Instagram. Some of them had customers waiting, parcels half-shipped, and suddenly, they couldn’t respond.” 

“Freelancers who work with international clients lost communication overnight because they used WhatsApp or Telegram to coordinate. Students preparing for exams lost their peer groups, study materials and even daily schedule reminders that were all shared online. That’s the reality in Nepal: the internet isn’t a luxury. It is infrastructure,” he added. 

Chaudhary warned that the long-term consequences could be even more severe. “It told our youth: your livelihoods, your learning, your voice, all of that can be taken away if it inconveniences power. It told global investors: Nepal is unstable when it comes to tech governance. It told innovators: don’t build here unless you want your platform blocked overnight.”

The Road to Protests: What Sparked the Digital Uprising

On September 8, protests erupted in response to the Nepal government’s social media regulations, which require platforms to register with the communications ministry and appoint a local grievance officer. While TikTok and Viber complied, 26 others, including Facebook, Instagram, and YouTube, did not.

According to Suvechchha Chapagain, senior programme officer at Accountability Lab Nepal, this raised concerns that the measure was less about tackling misinformation and more about expanding state control over digital platforms. “The backlash made it evident that Nepali citizens do not tolerate digital authoritarianism disguised as governance.” 

According to Pius Fozan, communications manager at the International Currency Association, the government’s approach was standard compared with other countries. “What the government asked platforms to do was fairly standard…TikTok complied with this requirement, as did Viber.”

Fozan points to global precedents where social media giants have faced tough regulation. The European Commission fined Apple over €1.8 billion in 2024 and repeatedly imposed antitrust penalties on Google. More recently, Meta was forced to scale back its political advertising programme due to EU regulations. 

Meanwhile, the Australian News Media Bargaining Code compelled Facebook and Google to negotiate payments to news organisations after some initial resistance. By comparison, Nepal is a young democracy with institutions still in their infancy. Thus, to label its regulatory efforts as outright authoritarian is unfair, he said. 

Yet, Fozan acknowledges that regulation remains essential. “But that does not negate the underlying need for regulation. Social media platforms must be held accountable for how algorithms amplify hate, spread misinformation or facilitate fraud. These are rampant problems widespread in South Asia. We cannot forget how, in Myanmar, an unmoderated Facebook helped fuel anti-Rohingya hate speech and contributed to the 2017 genocide and exodus of Rohingya. That could not happen unchecked in Europe or North American countries.”

“As for the legitimacy of the law itself and the spontaneous anger it triggered, I see the two as separate. The law was a spark, a trigger perhaps, but the deeper frustration stems from decades of political stagnation, where three familiar figures, [Sher Bahadur] Deuba, [Pushpa Kamal Dahal] Prachanda and [KP Sharma] Oli, have rotated power among themselves like a musical chair. Young people wanted a break from that cycle, and you could already sense this mood in the last parliamentary elections,” he added.

Government Shakeup: Political Accountability in Nepal

The protests had immediate political consequences. Oli resigned as prime minister, Parliament was dissolved, and the government underwent a major reshuffle.

“Now, as the government is toppled, the parties that had long been dominated by ageing political figures are agreeing to conditions for a citizen-led government within the constitutional framework,” Chapagain noted.

Chapagain frames the protests as a moment of political accountability. 

From the government’s perspective, the original social media registration requirement may have been intended to improve governance and combat misinformation. But the backlash demonstrated the public’s intolerance for digital authoritarianism and corruption in Nepal, forcing policymakers to rethink their approach.

Implications on Tech Policies 

Meanwhile, public policy researchers see the protests as a wake-up call for youth-inclusive governance. “The Gen Z protests erupted almost spontaneously, mobilised through platforms like Reddit and Discord, while major social media were still blocked. For years, leaders underestimated the public’s awareness of corruption, but digital spaces offered young people a lens into the lavish lives of the politicians and a tool for collective action,” Chapagain said.

Fozan emphasised the dual nature of regulation in a rapidly digitising society. “I would not frame it in binary terms. Regulation of social media and wider tech systems is essential in our times of rapid technological disruption. These humongous platforms are no longer experimental, entrepreneurial, or marginal; they are entrenched infrastructures with influence unmatched by any other parallel system in history.” 

“They have unusually unpredictable power in the way information is created and distributed. They can select, push, ban and remove information the way they like if not checked by law, and they most likely do it all the time, anyway,” he highlighted. 

Long-Term Effects: Gen Z’s Digital Empowerment

Reflecting on the future, Chaudhary underscored the lasting impact of these protests. “We didn’t wait for a party or politician to lead us. We used the very platforms they tried to ban to organise, inform and mobilise. It was decentralised, but not chaotic. Everyone knew the two core demands: lift the ban and investigate the corruption cases fairly.”

The successful mobilisation has sparked mainstream conversations around tech policy, digital rights and governance reform. “In the end, the government had to listen. The ban was reversed. And now, for the first time in years, we’re seeing mainstream conversations about tech policy, digital rights and governance reforms. That didn’t happen because of violence or political pressure. It happened because young people demanded better: persistently and powerfully,” Chaudhary explained.

For Nepal’s Gen Z, the episode is not merely about restoring access to social media. It is about asserting their economic, social and civic stakes in a digital world. “These platforms are infrastructure. They are livelihoods, they are learning, they are connections. And we’ve shown that if you try to cut them off arbitrarily, young people will push back and they will win,” Chaudhary concluded.

The post The Generation That Refused to Log Off in Nepal appeared first on Analytics India Magazine.

]]>
Should India Build Its Own AI Foundational Models? https://analyticsindiamag.com/ai-features/should-india-build-its-own-ai-foundational-models/ Mon, 22 Sep 2025 03:56:25 +0000 https://analyticsindiamag.com/?p=10177838

At Cypher 2025, industry leaders debate whether India should invest in building its own AI foundational models or adapt global ones.

The post Should India Build Its Own AI Foundational Models? appeared first on Analytics India Magazine.

]]>

AI is set to become the cornerstone of digital transformation worldwide. But India faces a pressing question: should it invest in building its own foundational AI models or continue adapting global ones for local use? The debate revolves around sovereignty, cost, innovation, and cultural identity.

The subject was put to the test at Cypher 2025, hosted by AIM in Bengaluru. 

The session brought together Jason Joseph, chief information security officer at mPokket, Ashwini Patil, EVP and Head of Product Design at Lentra, and Manish Kumar Purwar, Global IT Head for Sales & Service Technologies.

With India eyeing developed nation status by 2047, the conversation was not just theoretical; it carried undertones of national strategy. 

The Case Against Immediate Investment

Joseph struck a cautious note, pointing out that building models from scratch demands immense compute power, data, and resilient infrastructure. 

“Do we have the infrastructure that helps us build such models at scale? I would say not yet,” he said.

India, historically, has caught up by improving global technologies rather than reinventing them, he argued. “Let us build on what is existing… and in time make a smooth transition.”

Patil agreed that cost and datasets remain barriers. “One of the numbers said that the Indic dataset that they are using was less than 0.01%. But we cannot rely on those types of datasets.” 

However, she disagreed with Joseph’s description of neighbour countries as potential adversaries. She leaned towards adapting what exists for India’s needs, suggesting to work with global partners: “If they have already done some work, we will reuse some of it and build what we want to build on it.”

The Push for Sovereignty and Innovation

Purwar, on the other hand, underscored the strategic risks of overdependence on foreign models. He said he would be doubtful about the success of the industry “if we don’t build the foundation of AI layers.”

He cited UPI and Aadhaar to stress India's capacity to scale digital systems, adding, “that has put us into the upper quadrant of the world in digital transformation. Why not AI?”

The debate touched upon India’s diversity, with references to Krutrim LLM and Sarvam AI, initiatives that prove local languages and contexts can outperform global models in specific tasks. 

For Purwar, the argument stretched beyond technology to identity. “If our model doesn’t support our cultural diversity, probably after a few years everybody would call us an Indian, not the diversity that we have.”

A Middle Path

A middle path emerged as Patil spoke of “pragmatic nationalism”, a strategy for India to invest in foundational models without ignoring current challenges. 

Taking a measured approach, she asked whether the country is ready to act now or should wait until developing foundational models becomes accessible and economically feasible.

Public-private partnerships surfaced as a potential solution, especially in sensitive sectors like healthcare. 

Joseph noted: “It should definitely be a PPP model… there could be enough oversight, a way to safely build models without abuse.”

Need for Sovereign, But Balanced Approach

The debate reflected the duality of India’s AI journey: an immediate need for innovation balanced against the long-term ambition of sovereignty. 

While the speakers differed on timing, they converged on one point: that India cannot ignore the question of investing in foundational models forever. 

Whether through cautious adoption or bold investment, the country’s AI future will likely demand a mix of global collaboration and home-grown strength.

The post Should India Build Its Own AI Foundational Models? appeared first on Analytics India Magazine.

]]>
Policy Could Pose Bigger Risk Than Technology, says E-Gaming Federation chief to AI community https://analyticsindiamag.com/ai-features/learn-from-rmg-ban-or-face-the-axe-e-gaming-federation-chief-warns-indian-ai-sector/ Fri, 19 Sep 2025 12:07:59 +0000 https://analyticsindiamag.com/?p=10177812

Speaking at Cypher 2025, the E-Gaming Federation CEO called on AI firms to embed trust early by following ethical, legal frameworks.

The post Policy Could Pose Bigger Risk Than Technology, says E-Gaming Federation chief to AI community appeared first on Analytics India Magazine.

]]>

Anuraag Saxena, CEO of E-Gaming Federation, issued a sharp warning to India’s AI community at Cypher 2025. He urged companies to embed trust and engage policymakers early or risk the same fate as the recently banned real money gaming (RMG) sector.

Saxena argued that regulation, not sales or technology, poses the biggest threat to industries. He recalled how sudden policy moves have wiped out entire sectors: the coal block cancellations in 2014, the crypto ban in 2018, and the abrupt shutdown this year of the ₹20,000-crore online money gaming industry, which employed two lakh people.

“Regulation is the biggest risk on your alphas,” he said, warning that firms with five or 5,000 employees could be destroyed overnight if they ignore policy engagement.

The Cost of Ignoring Policy

Saxena noted that the tech narrative usually revolves around investors, products, and consumers. Leaving policy out of the equation, he said, leaves companies vulnerable. He pointed to Meta, which had to hire former UK deputy prime minister Nick Clegg to lead its policy function after facing global backlash. “Repair is always more expensive than prevention,” he said.

Policy and innovation also move at different speeds, Saxena observed. “Companies are on skates; governments are ships.” This mismatch, he explained, means regulation can swing between enabling innovation or destroying it.

He added that restrictive policies often arise from genuine concerns around safety, privacy, social order, or consumer protection. Drawing from gaming’s failures, he said only a few operators enforced strong KYC, geofencing, and age checks. The rest ignored safeguards, resulting in prohibition, lost jobs, and vanished capital.

Trust from Day Zero

Saxena urged the AI sector to embed trust from “day zero” and design systems that operate within ethical and legal frameworks. He called on companies to secure “a seat at the table” as laws are drafted. “Realism and empathy are required to understand policymakers’ constraints, distractions, and knowledge gaps,” he said.

He outlined three hooks for building government partnerships: user safety, economic value, and national pride—“putting India on the map.” Without sustained engagement, he warned, Indian innovators would lack the fuel available to global peers and remain vulnerable to sudden regulatory shocks.

A Collective Responsibility

Saxena pressed the community to act together and rethink its approach. “Trust is the only currency… you need to earn it.” He challenged the audience to build trust frameworks that not only meet regulatory standards but also turn regulation into a driver of innovation.

India, he said, has the scale, speed, and cultural flair to leapfrog global competitors, but wasted opportunities in hardware, software, and social media must not be repeated in AI.

He closed with a clear message: ethics, safety, user education, and government partnerships are non-negotiable. “If engaging with policymakers isn’t your job, it’s nobody’s job. Build trust or risk everything crumbling overnight. Learn from our mistakes in gaming, make trust your ‘day zero’ priority.”

The post Policy Could Pose Bigger Risk Than Technology, says E-Gaming Federation chief to AI community appeared first on Analytics India Magazine.

]]>
AI-First Villages: Taking JAN AI to Rural India https://analyticsindiamag.com/ai-features/ai-first-villages-taking-jan-ai-to-rural-india/ Fri, 19 Sep 2025 09:59:40 +0000 https://analyticsindiamag.com/?p=10177809

JAN AI is on a mission to transform rural India into AI-first villages, ensuring technology empowers farmers, women, and youth at the grassroots.

The post AI-First Villages: Taking JAN AI to Rural India appeared first on Analytics India Magazine.

]]>

Artificial intelligence is often hailed as transformative, but its benefits rarely reach rural India. JAN AI wants to change that by building “AI-first villages” across the country. Its focus is inclusivity—uplifting farmers, artisans, and rural entrepreneurs, not just urban innovators.

The initiative aims to bridge the digital divide. It plans to deliver AI literacy in local languages and help people apply it to practical, everyday use. A farmer diagnosing crop disease with an AI app, or a homemaker selling crafts online, are the kinds of outcomes it envisions. The mission is bold: reach 10,000 villages, train 10 million citizens, and enable 100,000 rural AI entrepreneurs.

The vision was laid out by Madan Padaki, managing trustee of JAN AI and head of the Head Held High Foundation, at Cypher 2025 in Bengaluru. “It’s not just making AI in India. I think this is also about making AI work for India,” Padaki said.

Rethinking AI for Bharat

Padaki questioned why AI should serve only metros and tech hubs. “Why should the internet or a metaverse or AI first work in Koramangala rather than Koppal?” he asked. To test this idea, JAN AI ran literacy pilots in villages in North Karnataka. Students identified everyday problems where AI could make a difference.

The results showed early sparks of innovation. Farmers tried disease detection apps. Women entrepreneurs explored AI tools to expand their businesses. Padaki was clear on the metric that matters: “If you are unable to put a thousand rupees more in their pockets every month, the tech is useless.”

The foundation also launched training with UN Women, turning women in ITIs into AI trainers for peers. Rural youth are learning not just to use AI, but to imagine new careers and enterprises built on it. “In India, we need jobs. We need AI to create more jobs in the long run,” Padaki said.

He added that the foundation is working with initiatives like Bhashini and the IndiaAI Mission to advance its goals.

Building AI-First Villages

JAN AI’s model rests on four pillars: awareness and learning, rural innovation, entrepreneurship, and community ownership of data. The goal is to democratise AI as earlier waves of technology were—PCOs in the telecom era or internet cafés in the early days of the web.

Padaki imagines local AI centres in every village. Trained youth would act as advisors, offering context-specific solutions for crops in Dharwad or crafts in Koppal.

To scale, the foundation is working with universities, government bodies, and global organisations. Partnerships with Google.org, the Asian Development Bank, and state institutions have already trained hundreds in Kalyan Karnataka. Another partnership with the Karnataka Digital Economy Mission is aiming to create 1,000 AI entrepreneurs in a year. The first cohort has begun in Kalburgi.

Padaki calls the model an “A-I-D-E-A-L” village. In this vision, thousands are AI aware, many know how to safeguard against misuse, some run income-generating projects, entrepreneurs offer trusted solutions, and cooperatives ensure shared benefits from community-owned data.

“Can we create 10,000 AI-first villages, 100,000 AI entrepreneurs, and a play store of a thousand proven solutions that truly put money in the hands of our rural brethren?” Padaki asked.

For him, the success of AI in India will not be judged by shiny labs or global rankings. It will be measured in resilient villages that thrive in the digital age.

The post AI-First Villages: Taking JAN AI to Rural India appeared first on Analytics India Magazine.

]]>
Leander Paes: AI and Indian Intelligence Can Power Nation’s Leap to First World https://analyticsindiamag.com/ai-features/leander-paes-ai-and-indian-intelligence-can-power-nations-leap-to-first-world/ Fri, 19 Sep 2025 05:56:42 +0000 https://analyticsindiamag.com/?p=10177796

At Cypher 2025, the tennis icon said India’s rise will depend on fusing global investment with the country’s greatest asset, its people’s intelligence.

The post Leander Paes: AI and Indian Intelligence Can Power Nation’s Leap to First World appeared first on Analytics India Magazine.

]]>

Tennis legend Leander Paes believes India’s rise as a first-world nation will depend on how effectively it combines technology, artificial intelligence, and human intelligence.

Speaking at Cypher 2025, India’s largest AI conference organised by AIM from September 17-19, Paes urged global businesses to invest in India’s talent. “Their money, our intelligence, we design in India. We don’t design outside India. Make in India. Make in Bharat. Make it here.”

Paes stressed that India’s strength lies in its people’s intellect and adaptability. He framed AI as both a global leveller and a uniquely Indian opportunity. “AI does not have emotions. It only processes the data we feed it. What it forces us humans to do is grow our emotional quotient. The Indian human quotient is very high, we must retain and enhance it,” he said.

He is putting this vision into practice through his new Olympic academy in Odisha, which blends AI, sports science, and education to empower young athletes. The project currently impacts 1.2 lakh people and aims to reach 250 million children over 20 years, he said.

AI for Athletics

At the academy’s core is the PACE system, or Physical Athletic Education System, an evolution of the sports science pioneered by Paes’s father. It now uses AI-driven modelling to improve performance. “We’ve created the perfect athlete with AI,” Paes said, explaining how technology can design an ideal performance model.

But he cautioned that true effectiveness lies in customisation. Training must adjust for whether someone is left- or right-brain dominant, their genetics, and even their region—Punjab, Bengal, Gujarat, or Tamil Nadu. Unlike medal-chasing programmes, his focus is on grassroots talent. “The real talent is in grassroots India. If we can marry sports science, education, and AI there, we don’t just make champions, we build livelihoods. That’s job creation at scale,” he said.

Technology as a Career Tool

Paes noted that technology has long shaped his own journey. Rivals once used videography and data to analyse his weaknesses. He responded by mapping his own patterns and changing tactics mid-match. “Tech was mapping me, so I used it to outthink the map. That’s adaptability, and that’s what we Indians are great at,” he said. Partnerships, he added, will be key to scaling this adaptability.

Linking Tech with Life

He also connected India’s tech future with quality of life. Pointing to a corporate culture of nonstop emails and constant productivity, he warned against burnout. “AI should free us to improve life, our sleep, our food, our time, our movement. It should enhance quality, not trap us in a loop,” he said.

At 52, Paes said his mission is no longer medals but nation-building. “The Tiranga runs in my veins. My dream is that young Indians won’t know me as the guy who won 18 Slams, but as the one who gave them an education, a job, and a skill to put food on the table. That’s my gold medal now.”

The post Leander Paes: AI and Indian Intelligence Can Power Nation’s Leap to First World appeared first on Analytics India Magazine.

]]>
Here are the 8 New Tech Firms Scaling Under IndiaAI Mission’s Phase 2 https://analyticsindiamag.com/ai-features/here-are-the-8-new-tech-firms-scaling-under-indiaai-missions-phase-2/ Thu, 18 Sep 2025 15:41:24 +0000 https://analyticsindiamag.com/?p=10177779

With these selections, IndiaAI has now created a 12-company cohort, tasked with building the backbone of India’s sovereign AI ecosystem.

The post Here are the 8 New Tech Firms Scaling Under IndiaAI Mission’s Phase 2 appeared first on Analytics India Magazine.

]]>

IndiaAI Mission has announced the selection of eight firms for the second phase of its foundation model initiative. As AIM exclusively reported on September 12, 2025, the list includes BharatGen, Tech Mahindra, and Fractal, along with Avataar.ai, ZenteiQ.ai, Genloop, NeuroDX (IntelliHealth), and Shodh AI.

The government announced the list at the curtain-raiser event for the IndiaAI Impact Summit, which the Ministry of Electronics & Information Technology (MeitY) has scheduled for February 2026. With this, the total number of firms under the Mission’s foundation model programme has risen to 12, including the four previously selected players: Sarvam AI, Soket AI Labs, gnani.ai, and Gan.AI.

The mission’s goal is to build indigenous large language and multimodal AI models trained on India-specific datasets. Following its call for proposals on January 30, 2025, the mission received 506 proposals in a span of three months. 

The New Startups

Union information and technology minister Ashwini Vaishnaw felicitated the newly selected startups, beginning with Avataar.ai. The company is building a suite of domain-specific models, called Avataars, with up to 70 billion parameters. 

These will be optimised for Indian languages and contexts such as agriculture, healthcare, and governance, while keeping infrastructure costs low and scalable. The models will be shared on AI Coach. 

The IIT Bombay-led BharatGen Consortium, backed by the department of science and technology, has ambitious plans to develop multilingual and multimodal models ranging from 2 billion to 1 trillion parameters, integrating text, speech, and images. 

It also aims to build smaller domain-specific LLMs for agriculture, governance, finance, law, health, and education. BharatGen’s approach remains open—open source, open weights, and open recipes—to create sovereign Indian foundational models.

This adds to BharatGen’s recent milestones. In May 2025, the Consortium launched its first foundational LLM, Param-1, a 2.9 billion parameter bilingual model built entirely from scratch. Param-1 had 25% Indic data—far more than Meta’s Llama, which had only 0.01%. BharatGen’s new proposal expands this ambition with models scaling up to 1 trillion parameters.

Fractal Analytics, the company that is likely to go for an IPO soon, has proposed to build India’s first large reasoning model, with up to 70 billion parameters, focused on structured reasoning, deliberate problem solving, and agentic decision-making. 

The focus is on STEM and medical reasoning, with new benchmarks created for Indian contexts. 

Meanwhile, Tech Mahindra’s Makers Lab, too, joined the list. Its plan involves creating an 8 billion parameter model specifically tuned to Indic language groups, with a focus on Hindi dialects. Alongside, it is building an agentic AI platform ‘Orion’ for real-time intelligence to be deployed in government and beyond. 

ZenteiQ.ai (formerly Zentech AI Tech Innovations) has proposed BrahmAI, pitched as India’s first science-driven foundation AI model for engineering intelligence, scientific computing, and industrial innovation. 

The initiative will deliver multimodal models ranging from 8 billion to 80 billion parameters, with applications powered by a robust data infrastructure. 

Genloop is taking a different path with smaller models. Its project is to build 2 billion parameter language models designed natively for all 22 scheduled Indian languages, prioritising reasoning capabilities over translation-based approaches. The three models—Yukti, Varta, and Kavach—will also have in-built content moderation.

Another startup, NeuroDX (IntelliHealth), is working on a 20 billion parameter foundation model for EEG signal analysis, with the goal of enabling early screening of neurological disorders and building brain-computer interfaces. The plan is to create affordable, non-invasive diagnostic tools and integrate human-AI collaboration in neuroscience using transformer-based architectures. 

And finally, Shodh AI, which is developing a 7 billion parameter foundation model for material discovery, is now part of the Mission. Its automation framework integrates AI into every step of the discovery process, from data gathering to experiment planning and evaluation. 

What’s in the pipeline?

In less than a year, the IndiaAI Mission has turned into one of the largest GPU programmes. More than 34,000 GPUs are already empanelled, well over three times the original 10,000 target. Another 6,000 are in the pipeline, bringing the total to nearly 40,000 GPUs.

Sarvam AI is the biggest beneficiary so far. The Bengaluru startup landed 4,096 NVIDIA H100s through Yotta Data Services and nearly ₹99 crore in subsidies. Sarvam is expected to ship India’s first large language model by early next year, though the launch has been delayed from its original six-month target.

Soket AI Labs is planning a 120-billion-parameter Indic language model. It will start with a 7-billion model in six months, scale to 30 billion, and then 120 billion within a year. It has already released a 1-billion-parameter model earlier this year.

Gnani.ai is another key beneficiary. GPU and Cloud service provider E2E Networks recently secured a ₹177 crore order to supply GPU resources to Gnani.ai. The deal covers 1.3 crore GPU hours over a year, with H100 and H200 units allocated.
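The reported deal figures allow a quick back-of-the-envelope check; the derived rates below are estimates inferred from the numbers above, not disclosed contract terms:

```python
# Sanity-check the reported E2E Networks–Gnani.ai deal figures.
# Derived rates are estimates, not disclosed terms. 1 crore = 10 million.

deal_value_inr = 177e7          # ₹177 crore
gpu_hours = 1.3e7               # 1.3 crore GPU hours
hours_per_year = 365 * 24       # 8,760

blended_rate = deal_value_inr / gpu_hours        # ₹ per GPU-hour
avg_concurrent_gpus = gpu_hours / hours_per_year # sustained fleet size

print(f"Blended rate: ₹{blended_rate:.0f} per GPU-hour")        # ₹136
print(f"Average concurrent GPUs: {avg_concurrent_gpus:.0f}")    # 1484
```

In other words, 1.3 crore GPU hours spread over one year works out to roughly 1,500 GPUs running continuously, at a blended rate of about ₹136 per GPU-hour.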

With these selections, IndiaAI has now created a 12-company cohort, tasked with building the backbone of India’s sovereign AI ecosystem.

The post Here are the 8 New Tech Firms Scaling Under IndiaAI Mission’s Phase 2 appeared first on Analytics India Magazine.

]]>
Intelligent Storage: The Key to a Sustainable Energy Future https://analyticsindiamag.com/ai-features/intelligent-storage-the-key-to-a-sustainable-energy-future/ Thu, 18 Sep 2025 09:30:00 +0000 https://analyticsindiamag.com/?p=10177760

Solar and wind are abundant but unpredictable. Dhanya Rajeswaran of Fluence India talks about solutions to this intermittency.

The post Intelligent Storage: The Key to a Sustainable Energy Future appeared first on Analytics India Magazine.

]]>

As the world shifts to renewable energy, one challenge persists: intermittency. Solar and wind are abundant but unpredictable. Intelligent storage, powered by AI, IoT, and predictive analytics, is emerging as the bridge between renewable generation and reliable supply.

At Cypher 2025, one of India’s largest AI conferences, organised by AIM from September 17-19, Dhanya Rajeswaran, global vice president and country managing director at Fluence India, explained how storage is transforming power systems.

Rajeswaran broke down the fundamentals: “Renewable energy, which is energy generated through solar or wind, is perishable. What you generate that particular day has to go into the grid,” she said.

India has pledged 500 GW of renewable capacity by 2030. Yet, without storage, much of that energy risks being wasted. The International Energy Agency (IEA) estimates that renewables already supply 30% of global electricity, underscoring the need for effective storage solutions.

Fluence’s Approach

Fluence, founded as a Siemens–AES joint venture and now a public company, operates in 48 markets. It has deployed more than 30 GW of storage and digital applications worldwide.

Rajeswaran described Fluence’s offering as a blend of hardware and intelligence. The system houses thousands of components and an operating system layered with AI. This enables market optimisation, predictive maintenance, and life-cycle management.

The company’s software platforms, Nespera and Mosaic, exemplify this. Nespera monitors components to predict failures, maintaining 99% uptime. Mosaic acts as a bidding platform. Rajeswaran compared it to financial markets: “In a stock market, anybody with more information than the other is the king. This software ensures our customers are king.”
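Fluence has not published Nespera’s internals; as a minimal illustration of the kind of component-level monitoring described above, a rolling z-score over telemetry can flag a reading that drifts from its recent baseline before it becomes a fault. The sensor values and threshold below are invented for the sketch:

```python
from collections import deque
from statistics import mean, stdev

def anomaly_flags(readings, window=20, z_threshold=3.0):
    """Flag readings beyond z_threshold standard deviations of a
    rolling window -- a crude stand-in for predictive maintenance."""
    history = deque(maxlen=window)
    flags = []
    for value in readings:
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            flags.append(sigma > 0 and abs(value - mu) / sigma > z_threshold)
        else:
            flags.append(False)  # not enough history yet
        history.append(value)
    return flags

# Stable temperature readings, then a sudden excursion (invented data)
temps = [40.0] * 20 + [40.1, 40.0, 48.0]
print(anomaly_flags(temps)[-1])  # True: the 48.0 reading is flagged
```

A production system would of course use far richer models across thousands of components, but the principle is the same: flag the deviation early enough to act before the failure.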

Bottlenecks in Grid Integration

Storage deployment often faces regulatory delays of 12–24 months. Fluence has built intelligence into its modelling process, cutting this timeline to under six months in many markets.

“Certification calls for a whole ton of compliance because you cannot afford any sort of safety risk. With the intelligence that we have built into the system, we are now able to bring that down to less than six months,” Rajeswaran said.

Faster certification reduces capital tie-ups and makes projects more attractive for investors.

Safety, Reliability, and Scale

Battery safety remains a global concern. Fluence units, which weigh up to 90 tonnes, are designed with predictive safety systems. AI detects risks early and can shut down systems before faults escalate.

“Safety for us is paramount. There’s no way any government is going to allow us to work if it’s not built for absolute safety,” Rajeswaran said. She added that predictive analytics also prevent premature battery degradation.

Reliability engineering is central to Fluence’s model. System-level insights inform design improvements and strengthen supplier ecosystems.

Beyond renewables, storage also supports energy-hungry data centres. The IEA reports that global data centres consumed 460 TWh in 2022, equal to the UK’s total power use. Rajeswaran noted that Fluence’s storage does not generate energy but ensures predictable supply, a critical need for data infrastructure.

India’s Role in Global Storage

Although headquartered in the US, Fluence relies heavily on its Bengaluru innovation hub. Most product design, science, and delivery work originates there.

India’s ambitions are equally bold. The power ministry targets 47 GW of battery storage by 2030, up from 2 GW today. By 2032, total storage demand could reach 74 GW, including pumped hydro. Companies like Fluence are key to this growth.

The Future of Smarter Grids

The integration of intelligent storage is not just about grid stability. It enables decentralised systems, empowers communities, and supports net-zero goals.

Rajeswaran closed with a reminder: “AI is definitely solving very important problems for humanity. If we are not able to leverage the power of AI to make sure that these systems continue to support the evolution of the human race with renewable energy, we really have a huge opportunity to miss.”

The post Intelligent Storage: The Key to a Sustainable Energy Future appeared first on Analytics India Magazine.

]]>
ISRO’s Nitish Kumar: Spacecrafts Need AI That Thinks, Not Just Computes https://analyticsindiamag.com/ai-features/isros-nitish-kumar-spacecrafts-need-ai-that-thinks-not-just-computes/ Thu, 18 Sep 2025 08:52:41 +0000 https://analyticsindiamag.com/?p=10177757

ISRO scientist Nitish Kumar outlined ideas to shape the role of AI in space, highlighting an initiative called Gyaan education agent.

The post ISRO’s Nitish Kumar: Spacecrafts Need AI That Thinks, Not Just Computes appeared first on Analytics India Magazine.

]]>

Artificial intelligence is often seen as a natural partner for space exploration. Missions demand automation, agility, and precision, but the risks of black-box models make adoption difficult. Nitish Kumar, scientist at ISRO and recipient of the Innovative Student Projects Award by the Indian National Academy of Engineering (INAE), explained why intelligibility and explainability are central to AI in space.

Addressing the gathering at Cypher 2025, India’s largest AI conference organised by AIM from September 17-19 in Bengaluru, Kumar said that while AI and space appear aligned, the reality is complex. The sector is “automation hungry,” he noted, but reluctant to trust opaque models. “Spacecrafts demand explainability, agility, and assurance,” he said.

Innovation as Thought Work

He broke down intelligence into perception, cognition, and action, linking AI systems to human self-reflection. His questions—“can we perceive cognition?”—highlighted the philosophical roots of ISRO’s work. 

“One of the bottlenecks [of developing AI] is we don’t understand our own thinking; it is very difficult to understand how AI thinks,” he said. For him, “perception without aberration of thoughts is the higher form of intelligence.”

Solar Cells to Space Doctors

Kumar illustrated how abstract ideas drive practical breakthroughs. Observing bats led him to reimagine solar cell defect detection through “distraction” rather than attention, an approach that achieved near-perfect accuracy. The same approach powered MEND, an AI system that predicts satellite anomalies 10–15 minutes before failure, giving ISRO a vital safety window.

His team has also experimented with generative AI. Gyaan, an educational assistant that started in Bihar, has reached 22,000 schools and is now available to all. 

Gyaan is an online platform that makes education accessible to everyone. Users can ask questions specifically referencing the NCERT syllabus, for subjects like Maths, Science, and English. At the moment, it covers the syllabus for Classes 8, 9, 10, 11, and 12 and supports interactions in Hindi, Bhojpuri, and English.

For Kumar, AI is more than engineering. “If AI is not deterministic, we cannot apply it to space technology,” he said. Comparing it to the industrial revolution, he argued that AI represents a shift “from muscle power to steam power to electric power to nuclear power. This is a totally different game. It is a mental game.”

He closed by urging India to shape AI globally, not just apply it locally. In his view, agility and intelligibility will decide whether AI can be trusted to operate in space, where uncertainty is the only constant.

The post ISRO’s Nitish Kumar: Spacecrafts Need AI That Thinks, Not Just Computes appeared first on Analytics India Magazine.

]]>
Why ARTPARK-IISc Believes the Future of AI Lies in Societal Impact, Not Hype https://analyticsindiamag.com/ai-features/why-artpark-believes-the-future-of-ai-lies-in-societal-impact-not-hype/ Wed, 17 Sep 2025 11:40:34 +0000 https://analyticsindiamag.com/?p=10177740

“Physical AI is a very, very important element of what India needs to build.”

The post Why ARTPARK-IISc Believes the Future of AI Lies in Societal Impact, Not Hype appeared first on Analytics India Magazine.

]]>

India’s innovation model has long been criticised for chasing short-term gains while overlooking long-term needs. From drones to health AI, much of the country’s technology ecosystem still depends on imported systems. But the question that looms large is: what will India really need in five or 10 years?

At Cypher 2025, India’s biggest AI summit and expo in Bengaluru, Raghu Dharmaraju, CEO of ARTPARK-IISc (AI & Robotics Technology Park) at IISc, said this is the foundation of his organisation’s work. With over two decades of experience in building institutions for impact, he leads ARTPARK-IISc, a public-private initiative that connects academia, government, and startups. Its focus is translational and application-driven, taking research from labs into societal deployment.

In the past few years, ARTPARK-IISc has incubated 23 startups, developed seven award-winning social innovations—mostly in health—and built two India-scale data initiatives. Its Bengaluru “garage” spans 75,000 square feet and provides space for drones, robotics experiments, and AI infrastructure.

Innovating for India’s Needs

Dharmaraju argued that India cannot rely on imported technology to address future challenges. He cited drones with four-metre wingspans built from scratch, domestic sensors and chips, and harmonic actuators engineered by ARTPARK-IISc startups. “Physical AI is a very important element of what India needs to build,” he said, noting that actuators alone make up 60% of a robotic system’s cost.

This approach extends to AI for science and engineering. Dharmaraju pointed to Zenteiq, a startup that won the IndiaAI Mission’s foundational model challenge, for developing AI-based thermal analysis. These projects, he said, show how ARTPARK-IISc answers its guiding question: What does India really need?

Health has been one such priority. Reflecting on COVID-19, Dharmaraju said the pandemic was “a war”, but warned that its lessons are fading fast. “If there were one more COVID-19 (pandemic) to happen, how would Bangalore be different? I hear all of you shake your heads. Why not? Because we tend to forget.” For him, preparedness cannot be reactive.

Societal Impact with Inclusive AI

Dharmaraju argued that climate change is increasing zoonotic events, making future pandemics likely. To respond, ARTPARK-IISc created the “One Health and Climate” platform, which integrates data on climate, movement, and health to predict outbreaks like dengue two to four weeks in advance. “An imperfect prediction is okay, as long as it is two to four weeks ahead of time,” he said.

The organisation has also worked on AI screening for oral cancer, tuberculosis detection, and equitable deployment of health algorithms with ICMR and IISc. Another project, Midas, is building datasets and digital public infrastructure to ensure AI is trustworthy and benchmarked for Indian contexts.

Generative AI is being applied to help frontline health workers. In Uttar Pradesh, ARTPARK-IISc designed an AI assistant to respond in local dialects and Hindi-English mix. User research revealed that female workers preferred short text responses to long voice messages during visits. This change reduced escalations from 40% to 20%, with 97% of workers rating the tool satisfactory.

“It is not about AI. It is about solutions based on AI,” Dharmaraju said. For him, true impact comes from workflows, user insights, and systems thinking. That is why ARTPARK-IISc insists there are “no plug-and-play solutions” in health AI.

Towards Equitable Innovation

Dharmaraju said inclusion must be central to India’s AI future. Language, equity, and accessibility are key to building digital public goods that serve all citizens. Whether in drones, robotics, or health AI, ARTPARK-IISc’s vision is to create infrastructure that meets India’s most urgent needs.

“Moving bits is not enough,” he told the Cypher audience. “Physically, atoms need to move if there is going to be a healthcare impact.” In this spirit, ARTPARK-IISc continues to blur the boundaries between research, startups, and public systems. It reminds India’s tech ecosystem that real innovation is measured not in hype cycles, but in healthier lives and stronger communities.

The post Why ARTPARK-IISc Believes the Future of AI Lies in Societal Impact, Not Hype appeared first on Analytics India Magazine.

]]>
Why a Generic LLM Won’t Cure Healthtech’s Biggest Problems https://analyticsindiamag.com/ai-features/why-a-generic-llm-wont-cure-healthtechs-biggest-problems/ Wed, 17 Sep 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10177720

While foundation models may boost experimentation, healthcare demands deterministic, auditable and compliant systems before they can be trusted in production.

The post Why a Generic LLM Won’t Cure Healthtech’s Biggest Problems appeared first on Analytics India Magazine.

]]>

The global healthcare AI market is projected to reach $46.6 billion by 2035. It is set to transform how hospitals, payers and wellness providers manage everything from electronic medical records (EMR) to diagnostics and claims. However, as the industry evolves, generic AI platforms are proving insufficient for healthcare’s unique requirements.

Unlike consumer or enterprise IT, healthcare demands accuracy, compliance and explainability. Errors don’t just create inefficiencies; they can lead to serious, high-risk consequences. Hence, the next wave of healthcare AI is moving away from generic, speed-focused tools and towards verticalised, compliance-first platforms specifically designed for regulated industries.

India, long known for exporting global healthcare IT talent, is now building deep tech products. With the rise of global capability centres (GCCs) and a push for intellectual property creation, Indian firms are developing export-first platforms that meet stringent global compliance and interoperability standards.

Knewron is one such AI-native platform, built exclusively for healthcare by CitiusTech. The platform is designed for end-to-end product development in healthcare, embedding regulatory guardrails, persona-specific workflows, explainability and human-in-the-loop validation into every stage of the process.

“Every process flow includes validation points, audit trails, and compliance gates that cannot be bypassed. Clients can also add extra review steps for sensitive tasks. It’s how we deliver speed without sacrificing safety or accountability,” said Sudhir Kesavan, COO of CitiusTech, in conversation with AIM.

One of the biggest bottlenecks in healthcare IT is the slow, fragmented development cycle. Multi-agent AI architectures promise to accelerate this process, but their outputs often conflict, introducing new risks. Knewron addresses this by orchestrating workflows with compliance guardrails and human oversight, preventing cascading errors.

Instead of retraining models every time regulations shift, the platform enforces the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), the Medical Device Regulation (MDR) and other compliance rules through a policy layer. This design enables healthcare organisations to adapt quickly to evolving regional requirements without stalling engineering teams.
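CitiusTech has not detailed Knewron’s architecture; the general pattern, however, is that compliance rules are evaluated as data rather than baked into model code, so a regional regulatory change becomes a configuration update. The rule names, regions and checks in this sketch are invented for illustration:

```python
# Illustrative policy-layer pattern: compliance rules live as data, so
# regional changes don't require retraining or redeploying models.
# Rule names, regions, and checks below are invented for the sketch.

POLICIES = {
    "US": ["hipaa_phi_minimisation", "audit_trail"],
    "EU": ["gdpr_consent", "mdr_traceability", "audit_trail"],
}

CHECKS = {
    "hipaa_phi_minimisation": lambda req: not req.get("includes_raw_phi", False),
    "gdpr_consent":           lambda req: req.get("consent_on_file", False),
    "mdr_traceability":       lambda req: "device_id" in req,
    "audit_trail":            lambda req: "request_id" in req,
}

def gate(request, region):
    """Return the policy checks the request fails in a given region."""
    return [rule for rule in POLICIES[region] if not CHECKS[rule](request)]

req = {"request_id": "r-1", "consent_on_file": True, "device_id": "d-9"}
print(gate(req, "EU"))                                    # [] -> all gates pass
print(gate({"request_id": "r-2", "includes_raw_phi": True}, "US"))
```

Because the policy tables sit outside the models, adding a new jurisdiction or tightening a rule is a data change reviewed by compliance teams, not an engineering release.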

The timing is significant. Global majors such as AWS and Accenture have already identified cloud, interoperability and AI as the foundation of equitable healthcare. But industry insiders point out that only domain-aware, healthcare-native platforms can deliver on this promise.

India’s Export-First Advantage

India has an edge in this transition. Its workforce combines technical expertise with clinical domain knowledge, while its lower-cost R&D base makes experimentation more viable. Growing digital health initiatives like the Ayushman Bharat Digital Mission are also creating a fertile ground for innovation.

With more than 8,500 specialists and a client base spanning over 140 global healthcare organisations, Indian deep tech firms are proving that they can compete on IP creation and compliance depth, rather than just labour cost.

Meeting the Challenges of Sensitive Data

The urgency is amplified by the healthcare industry’s pivot to value-based care in the US and the European Union, where providers and payers demand cost efficiency and seamless interoperability. AI platforms built natively for healthcare are expected to play a central role in claims automation, population health analytics and wellness applications.

However, the challenges are real. “Healthcare systems deal with sensitive PII, which requires strict protocols,” said Dhruvanandan V, a medical software developer. “We struggled to get LLMs to respond deterministically in a chatbot meant to match patients with doctors.”

“Failure to comply with these protocols carries huge legal fines,” he added. “Domain-specific models, fine-tuned or augmented with tools like RAGs and function calls, have improved reliability, but we’re still not confident deploying them in consumer-facing products.”
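A common mitigation for the determinism problem described above is to never pass free-form model output downstream: constrain the model to a schema and validate with retry, failing closed to a human when validation keeps failing. This is a minimal sketch of that pattern; the schema and the stubbed model are assumptions, not the team’s actual system:

```python
import json

ALLOWED_SPECIALTIES = {"cardiology", "dermatology", "neurology"}

def validate(raw):
    """Accept model output only if it parses and matches the schema."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if data.get("specialty") in ALLOWED_SPECIALTIES and isinstance(data.get("urgency"), int):
        return data
    return None

def match_specialty(call_model, prompt, max_retries=3):
    """Retry until the model returns schema-valid JSON; fail closed otherwise."""
    for _ in range(max_retries):
        result = validate(call_model(prompt))
        if result is not None:
            return result
    return {"specialty": "unknown", "urgency": 0, "escalate_to_human": True}

# Stub model: first reply is free text, second is valid JSON
replies = iter(["Probably a skin issue?", '{"specialty": "dermatology", "urgency": 2}'])
print(match_specialty(lambda p: next(replies), "itchy rash, two weeks"))
```

The key property is that only schema-valid, whitelisted values ever reach the patient-facing flow; anything else escalates to a human rather than guessing.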

While foundation models and generic tools may accelerate experimentation, healthcare demands deterministic, auditable and compliant systems before they can be trusted in production.

Embedding Compliance Into Engineering

Platforms like Knewron are focusing on embedding compliance and auditability directly into engineering workflows. Instead of treating regulations as a bolt-on, these platforms build them into the core architecture, delivering speed without sacrificing accountability.

The approach also reflects advances in AI research itself. Beyond conventional large language models, innovations like Spatio-Temporal Graph Attention Networks (STGATs) by Shunya Labs are enabling more nuanced healthcare applications. 

By introducing a time dimension into model reasoning, STGATs capture causal relationships, such as the sequence of symptoms that can mean the difference between a correct diagnosis and a misdiagnosis.

From Service Hubs to Deep-Tech Exporters

As the healthcare AI market matures, the winners are unlikely to be horizontal, one-size-fits-all tools. The edge will belong to verticalised, compliance-first solutions that deeply understand regulated industries.

For India, this marks a turning point. Companies are no longer confined to the role of outsourced service providers. They are creating deep-tech, export-first platforms that serve some of the most highly regulated and high-value sectors in the world.

The message is clear: speed alone is not enough in healthcare. Trust, compliance and accountability are becoming the defining differentiators. And the companies that can embed these principles from the ground up are the ones most likely to lead the next phase of healthcare AI.

The post Why a Generic LLM Won’t Cure Healthtech’s Biggest Problems appeared first on Analytics India Magazine.

]]>
GenAI Is Killing Old Open Source Rules https://analyticsindiamag.com/ai-features/genai-is-killing-old-open-source-rules/ Tue, 16 Sep 2025 09:01:46 +0000 https://analyticsindiamag.com/?p=10177701

What was once a hub for collaborative innovation in the GenAI era has quickly become a breeding ground for clones.

The post GenAI Is Killing Old Open Source Rules appeared first on Analytics India Magazine.

]]>

Open source has always walked a fine line between community spirit and commercial survival. With the rise of generative AI, that balance is becoming harder to maintain. For many independent developers, permissive licences no longer feel like a safeguard of openness, but a threat to sustainability.

Herman Martinus, founder of Bear Blog, is among those moving away from permissive licensing. His concerns stem from how easily AI-assisted development enables competitors to rebrand and resell open projects, often at the expense of the original creator.

The Rise of “Free-Ride Competition”

Martinus described what he calls “free-ride competition.” He told AIM, “Someone forking the code, rebranding and releasing a hosted version of the same software that competes with Bear Blog.” 

Though these forks had not yet toppled his project, the practice felt “exploitative and not aligned with the reasons I made the code available in the first place.”

The tipping point came when he noticed that one of the clones had been created with AI tools. 

Martinus observed that it was clearly AI-assisted at first glance. Although the underlying code was nearly identical, the text had been revised by a model.

“This was the final straw for me, where it felt like free-ride competition was now too easy and didn’t require much technical skill,” he said.

His experience reflects a broader concern within open source. What was once a hub for collaborative innovation in the GenAI era has quickly become a breeding ground for clones.

What the Broader Landscape Shows

Across the industry, there are signs that developers and companies are making similar moves. 

The Linux Foundation has argued that AI models, unlike traditional software, combine code, training data, weights, and documentation. This creates licensing challenges that existing open-source frameworks are ill-equipped to handle. 

This uncertainty has left many contributors questioning whether their work is sufficiently protected in an era where models can remix and repurpose content with little regard for attribution.

Some high-profile projects have already abandoned permissive licences. Elastic shifted from Apache 2.0 to a more restrictive licence in 2021, citing cloud providers that re-hosted its software as competing services. The company framed the change as a defence of sustainability, though it sparked debate about whether such measures still qualified as “open”. 

Others, like Timescale and Confluent, have adopted “source-available” models that allow transparency without giving competitors a free pass to monetise their work.

Rohit Vyas, director of solutions engineering and customer success for South Asia at Confluent, told AIM, “From a licensing perspective, I would say that Confluent has always maintained a separation between open source and proprietary [license].”

He explained, “If you [talk about] the proprietary services of Confluent, then the licensing regime is built to ensure that we provide the right mix of the software, which is rightly traceable, at the right price point and geographically, globally available, consistent.”

Vyas added that open-source projects already have governance mechanisms that determine who can contribute to and manage the software and its main code branches.

“If somebody has to do something with an open source project under the hood [using] Gen AI, then it is left to them. The law catches up with them sooner or later,” he said. 

Placed against this backdrop, Bear CMS’s decision appears less like an isolated shift and more like part of an industry-wide recalibration. The idea that “open” automatically means permissive licensing is losing ground to a more defensive, sustainability-minded approach.

Restrictive Licences and the Future of Openness

The debate around restrictive licences is not new, but the AI factor has sharpened its urgency. Martinus sees the logic in the approach. “If it’s referring to a cohesive system that is the foundation of a company (and your livelihood), then it absolutely protects sustainability,” he said.

Yet he acknowledged trade-offs. His original motivation for releasing Bear’s code under MIT was transparency rather than adoption.

“The main reason I wanted to make the source available was to make my statements about privacy and security auditable,” he explained. That openness, however, became an unintended invitation for competition. “If I knew when I was starting what I know now, I would have made it source-available from the beginning,” he said. 

Martinus noted that community backlash has been minimal, acknowledging that a small percentage—about 5 percent—of Hacker News users are likely to be upset by any issue. He dismissed the criticism, expressing the view that the decision will ultimately benefit the Bear project and will not significantly impact the open-source community.

Towards a Post-Open Source Era?

The bigger question is whether open source itself needs to evolve. Martinus was sceptical about the need for AI-era licences. 

“We already know that AI companies do not respect licenses,” he said, pointing to controversies over dataset usage by Anthropic and Meta. “If you don’t want your code ingested, and want cloning of your software difficult, don’t release the code publicly.”

In his view, open source was facing strain long before AI. “Since the internet gained more commercial interest, there has been a tug-of-war between open-source development and commercial exploitation,” he said. AI, then, is only accelerating existing tensions.

And what of the future? Martinus remains uncertain. “If OpenAI is to be believed, all code will be written by AI in the future, and we won’t even need open-source software to build with. I remain sceptical, but I can’t predict the ever-more-uncertain future.”

As developers confront the new reality of AI-enabled cloning, the era of default permissive licensing may be giving way to more defensive strategies. Whether this protects innovation or fragments the ecosystem remains unresolved. But what is clear is that the GenAI era is forcing developers to rethink what openness really means.

The post GenAI Is Killing Old Open Source Rules appeared first on Analytics India Magazine.

]]>
GenAI Infrastructure Will Drive Public Cloud Models at the Edge https://analyticsindiamag.com/ai-features/genai-infrastructure-will-drive-public-cloud-models-at-the-edge/ Mon, 15 Sep 2025 10:52:59 +0000 https://analyticsindiamag.com/?p=10177662

Enterprises are now looking beyond centralised data centres, choosing to move workloads closer to where data is created.

The post GenAI Infrastructure Will Drive Public Cloud Models at the Edge appeared first on Analytics India Magazine.

]]>

Generative AI is changing the way enterprises think about technology infrastructure. Global market intelligence firm International Data Corporation’s (IDC) report titled ‘The Edge Evolution: Powering Success from Core to Edge’, developed in collaboration with cloud service provider Akamai, highlights how legacy systems are falling short as AI transitions from pilots to production. Enterprises are now looking beyond centralised data centres, choosing to move workloads closer to where data is created.

The report forecasts that public cloud-based services at the edge will grow at a 17% CAGR in the Asia-Pacific region, excluding Japan, reaching $29 billion by 2028. At the heart of this shift is generative AI’s demand for scalability and performance, which is pushing enterprises to invest in infrastructure that spans from core to edge.
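The CAGR figure can be unpacked with the standard compound-growth formula. Assuming 2024 as the base year (the report’s exact base year is not stated here), a $29 billion 2028 endpoint at 17% CAGR implies a base of roughly $15.5 billion:

```python
# Compound annual growth: future = base * (1 + cagr) ** years.
# Base year 2024 is an assumption; the report states only the endpoint.
cagr, years, future_usd_bn = 0.17, 2028 - 2024, 29.0

implied_base = future_usd_bn / (1 + cagr) ** years
print(f"Implied base-year market: ${implied_base:.1f}B")  # ~ $15.5B
```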

Cloud to Edge: A Distributed Fabric

The report underlines that 96% of enterprises in the region adopting generative AI will rely on public cloud infrastructure-as-a-service (IaaS) for training and inferencing.

Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research, told AIM, “The old arguments about choosing between cloud or edge simply do not hold anymore. Models may still be trained in vast cloud data centres, but once they are put to work, the story changes.”

He explained that for daily use, speed, privacy and sovereignty are crucial, necessitating inference closer to data creation or service delivery. 

Speaking to AIM, Mitesh Jain, regional VP at Akamai India, explained, “Training workloads, which are resource-intensive and require massive computing power, are best suited for public cloud IaaS.”

However, he pointed out that keeping all workloads in the cloud can become costly due to continuous data transfer and storage needs. “This is where the edge becomes critical.” 

Edge deployments, he believes, are ideal for inferencing workloads and real-time GenAI applications like IoT monitoring, fraud detection and personalised customer engagement. This is due to their need for low latency, localised compliance and accelerated decision-making.

Rushikesh Jadhav, CTO of ESDS Software Solution Limited, agreed, noting that organisations must weigh “latency, data sovereignty, scalability and compliance” before deciding workload placement. 

In his view, public cloud will remain the default for compute-intensive training.

“Conversely, inference workload that demands real-time decision-making jobs like financial services’ fraud prevention, manufacturing’s predictive maintenance or video analytics for smart cities are best to be executed at the edge,” he said. “This provides low-latency performance, lower bandwidth cost, and local adherence to data privacy regulation.”

Industry Use Cases Driving Adoption

Examples of AI adoption are already visible across industries. Banks are experimenting with AI-enabled mainframes to facilitate real-time transactions. Factories are embedding intelligence into production lines to identify defects, while hospitals are running assistants on-site to ensure patient data remains private. Gogia described these not as mere trials on the sidelines but as “fundamental shifts in design”.

Jain from Akamai India highlighted that predictive AI is leading adoption across the Asia-Pacific region.

“Enterprises are scaling predictive workloads to power real-time insights, fraud detection and operational optimisation. This momentum is being driven by the need to process data closer to its source, reducing latency, lowering connectivity costs and ensuring compliance,” he said. 

He further pointed out that in India, 38% of enterprises prioritise interpretive AI, highlighting the country’s specific requirement to process large amounts of edge data.

Meanwhile, Jadhav noted predictive AI’s strong traction in manufacturing and utilities, thanks to its tangible returns in reducing downtime and improving efficiency.

Brijesh Patel, founder and CTO of SNDK Corp, reinforced the business case and said to AIM, “Workloads that demand immediate insights, local decision-making or processing of sensitive data should be deployed closer to the edge, where latency is minimised, data privacy is better managed and operational continuity is ensured.”

Lessons and Challenges Ahead

The move to edge AI is not without pitfalls. “We have seen organisations invest heavily in cloud-based inference engines, only to abandon them when response times proved too slow for industrial control. Others discovered that sprinkling GPUs across remote sites created more cost than value when traffic was sparse,” Gogia explained. 

He argued that success depends on calibrating workloads carefully across cloud and edge.

Enterprises must prepare for growing challenges around costs and infrastructure, Jain cautioned. “Rising compute costs, energy demands and hardware availability are among the most pressing concerns,” he said. He further stressed that “to overcome these challenges, enterprises must modernise their digital backbone with edge-optimised architectures, embrace interoperable multicloud strategies and adopt cost management practices that balance performance with scalability”.

Patel noted that high-performance GPUs at the edge can be costly, energy-intensive, and necessitate effective cooling and upkeep. Jadhav echoed the warning, highlighting that managing a distributed fleet of edge devices necessitates a new operational model. Most IT teams, accustomed to centralised cloud management, are currently unprepared for this shift.

Despite the hurdles, momentum is clear. Gogia summed it up, saying that generative AI’s future lies in distributed intelligence, not cloud centralisation. While the cloud remains vital, edge computing will be key to performance and trust.

Enterprises that orchestrate cloud and edge as one fabric, rather than choosing one over the other, are most likely to lead in an AI-first world.

The post GenAI Infrastructure Will Drive Public Cloud Models at the Edge appeared first on Analytics India Magazine.

Why Agora Bets on Golang for Real-Time AI https://analyticsindiamag.com/ai-features/why-agora-bets-on-golang-for-real-time-ai/ Mon, 15 Sep 2025 03:30:00 +0000 https://analyticsindiamag.com/?p=10177575

“Golang delivers high performance without the overhead of complex runtimes”.

The post Why Agora Bets on Golang for Real-Time AI appeared first on Analytics India Magazine.


When developers think of conversational AI, Python usually takes the spotlight. It’s the language of research papers, TensorFlow tutorials, and every quick experiment you’ve ever seen on GitHub. But when the goal shifts from “let’s test this in a notebook” to “let’s run this at global scale in real time,” the story changes. That’s where Golang quietly steps in. 

For Agora, a company that powers real-time voice, video, and AI-driven engagement, the choice of programming language was never about hype or trends, but about the demands of performance at scale. 

Golang stood out as the preferred choice for real-time applications because it delivers high performance without the overhead of complex runtimes, said Rishi Raj Singh Ahluwalia, director of solution architecture & customer success at Agora, in an interaction with AIM.

Golang has also been a choice for companies like INDMoney for real-time data streaming.

From powering live classrooms and telehealth to building multilingual chatbots in India’s tier-2 and tier-3 cities, Golang has emerged as a preferred tool in Agora’s stack.

Why Golang, Not Python (or Node, or Java)

Ahluwalia noted that the language stands apart when compared with others. “Python is excellent for experimentation and definitely has a strong AI ecosystem. But it sometimes struggles with concurrency under heavy load,” he said.

Node.js is strong at asynchronous input-output, but its single-threaded model can sometimes become a bottleneck, he said, adding that Java has been a powerhouse, but its heavier runtime and verbose syntax can slow down iteration cycles.

By contrast, Golang “strikes the perfect balance.” Ahluwalia mentioned that with its goroutines and channel-based concurrency models, Golang simplifies the efficient management of a vast number of simultaneous connections.

It also boasts low memory consumption, rapid compilation, and a garbage collector optimised for minimal latency.
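The goroutine-per-connection pattern Ahluwalia refers to can be sketched in a few lines. This is a toy illustration of Go's concurrency model, not Agora's actual signalling code:

```go
package main

import (
	"fmt"
	"sync"
)

// Message stands in for one signalling event from one connection.
type Message struct {
	ConnID  int
	Payload string
}

// handleConn simulates per-connection work. In a real service each
// goroutine would read from a socket; goroutines are cheap enough that
// one per connection scales to very large numbers of concurrent peers.
func handleConn(id int, out chan<- Message, wg *sync.WaitGroup) {
	defer wg.Done()
	out <- Message{ConnID: id, Payload: "ping"}
}

// handleAll fans out one goroutine per connection and drains their
// messages from a shared channel.
func handleAll(numConns int) int {
	out := make(chan Message, numConns)
	var wg sync.WaitGroup
	for i := 0; i < numConns; i++ {
		wg.Add(1)
		go handleConn(i, out, &wg)
	}
	wg.Wait()
	close(out)
	count := 0
	for range out {
		count++
	}
	return count
}

func main() {
	fmt.Println("handled", handleAll(10000), "connections") // handled 10000 connections
}
```

Spawning ten thousand OS threads would be prohibitive in most runtimes; ten thousand goroutines multiplexed onto a few threads is routine, which is the scaling property the article describes.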

Ahluwalia provided some usage examples for Golang at Agora, highlighting that many of Agora’s backend services, including signalling and networking, are already built on Golang.

When it comes to encouraging developers to use Golang, the company publishes documentation and guides that help the community build low-latency and scalable services.

He said, “For us, Golang isn’t just another language option. It’s the backbone of how Agora builds and scales its real-time infrastructure.”

At the same time, Ahluwalia pointed out that no single language dominates. “We still use React for many of our services, and provide developer guides for all those languages as well.”

“It definitely needs to be a stack of multiple languages rather than a dependency on one,” he said.

Building for Developers, Not Just Applications

What matters just as much, according to Ahluwalia, is how accessible the technology is. “What truly sets Agora’s SDKs and APIs apart is their unparalleled developer-centric design that transforms complexity into simplicity and enabling rapid innovation.”

Their goal is to offer maximum capability with minimum complexity to developers, he added. 

The company also supports integration across web, iOS, Android, and frameworks such as React Native, Flutter, Electron, and Unity. On the infrastructure side, the company runs its own ultra-low latency network across 250 points of presence. 

“Agora guarantees crystal-clear, uninterrupted experiences, even in challenging network conditions, with built-in optimisations like adaptive bitrate and dynamic channel switching,” he said.

For Indian developers, the challenges often revolve around affordability, multilingualism, and scale. He elaborated that in Indian environments, particularly within the telecom sector, operations must function efficiently under low bandwidth conditions, especially in Tier 2 and Tier 3 cities. 

He also observed startups leveraging Agora to develop multilingual conversational bots, AI tutors that adjust to individual student learning paces, and even astrological and mental health platforms where users can interact with AI avatars.

Ahluwalia said that the entry process is designed to be simple as the company provides a console on the Agora platform, where developers can sign up, create multiple projects, and monitor billing – all within the console.

He explained that within the console, a redirection link leads to the documentation website, offering access to SDK downloads and Agora APIs, creating a seamless journey for developers.

On conversational AI, he explained that the real-time audio from Agora channels is sent to ASR engines for transcription. The transcribed text then goes to a large language model, and its output is converted into natural-sounding speech by Text-to-Speech (TTS). This synthesised response is streamed back to the user via the Agora channel, achieving minimal latency, potentially as low as 650 milliseconds in optimal conditions.
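That ASR → LLM → TTS loop maps naturally onto a staged channel pipeline. In the sketch below the three stages are hypothetical stand-ins for the real engines; only the wiring reflects the flow described above:

```go
package main

import "fmt"

// asr, llm and tts are hypothetical stand-ins for real speech-to-text,
// language-model and text-to-speech engines; each stage reads from the
// previous stage's channel and closes its output when done.
func asr(audio <-chan string, text chan<- string) {
	for a := range audio {
		text <- "transcript(" + a + ")"
	}
	close(text)
}

func llm(text <-chan string, reply chan<- string) {
	for t := range text {
		reply <- "reply(" + t + ")"
	}
	close(reply)
}

func tts(reply <-chan string, speech chan<- string) {
	for r := range reply {
		speech <- "speech(" + r + ")"
	}
	close(speech)
}

// pipeline wires the three stages together. Because they run concurrently,
// a new audio frame can be transcribed while an earlier one is still being
// synthesised, which is what keeps end-to-end latency low.
func pipeline(frames []string) []string {
	audio := make(chan string)
	text := make(chan string)
	reply := make(chan string)
	speech := make(chan string)

	go asr(audio, text)
	go llm(text, reply)
	go tts(reply, speech)

	go func() {
		for _, f := range frames {
			audio <- f
		}
		close(audio)
	}()

	var out []string
	for s := range speech {
		out = append(out, s)
	}
	return out
}

func main() {
	fmt.Println(pipeline([]string{"frame-1"})) // [speech(reply(transcript(frame-1)))]
}
```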

The Road Ahead

Looking forward, Ahluwalia sees the sector shifting rapidly. “Real-time engagement is going to transform from utility to intelligence,” he said, adding, “We want developers to focus on experiences, not infrastructure.”

Today, the focus is on making conversations happen; tomorrow’s systems may become context-aware assistants capable of reasoning, translation, and personalisation at scale.

Next.js Has a Middleware Problem https://analyticsindiamag.com/ai-features/next-js-has-a-middleware-problem/ Sun, 14 Sep 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10177573

Developers may be losing patience with Next.js, as broken middleware, AsyncLocalStorage woes and slow fixes raise doubts about the framework’s reliability.

The post Next.js Has a Middleware Problem appeared first on Analytics India Magazine.


Next.js has long been positioned as the flagship React framework, powering countless production apps across the web. However, growing developer frustration is beginning to expose its limitations. 

A recent blog post by developer Dominik Meca, titled ‘Next.js is Infuriating’, struck a chord with the community on Hacker News and Reddit, with many engineers echoing similar pain points. The issues are not isolated bugs; they reflect deeper cracks in how the framework handles middleware, logging and developer feedback.

Meca sets out with what should be a simple task: setting up production-ready logging. Instead, what follows is a spiral of workarounds, broken abstractions and unanswered questions. 

Meca’s verdict is unflinching. “How do you f**k this up so bad? We’ve had middlewares since at least the early 2010s when Express came out,” he wrote. From here, the frustrations snowball as middleware refuses to chain, AsyncLocalStorage contexts mysteriously vanish mid-render and logging across client, server, and middleware becomes a split, fragile process.

The Issues in Spotlight

The core complaint largely centres around middleware, which Next.js’s documentation describes as “particularly useful for implementing custom server-side logic like authentication, logging or handling redirects”.

Yet, in practice, Meca found that “you can pass a grand total of four parameters from your middleware” and nothing beyond headers propagates downstream. His workaround was to stuff request IDs into headers just to pass data to pages.

When even that broke, he tried moving to a custom server, only to find the pattern getting repeated there as well, with AsyncLocalStorage still failing to behave as expected.

This isn’t just one engineer’s rant. Other developers have echoed similar concerns. 

Utkarsh Kanwat, an engineer at ANZ, told AIM that beyond the chaining issues, the AsyncLocalStorage problems are a dealbreaker for many use cases.

“You can’t share context between middleware and your actual application code, which breaks distributed tracing, any sophisticated auth patterns, etc,” he said.

He also argued that the lack of context propagation makes Next.js unsuitable for advanced real-world requirements.

“The fact that you can’t reliably share context between middleware and your application code in 2025 is pretty frustrating, especially when Express and other frameworks solved this years ago,” he added.

A Framework Falling Behind?

Meca’s post draws a sharp contrast between Next.js and SvelteKit, another framework backed by Vercel. 

Where Next.js middleware struggles even with basics, SvelteKit supports chaining, request-scoped data and composability. 

As Meca put it bluntly, “This is what real engineering looks like. SvelteKit is a Vercel product. How is the flagship offering worse than what is essentially a side project?”

Kanwat reinforced this point, arguing that Next.js makes developers work against the framework for anything beyond basic use cases. “It’s frustrating because these should be solved problems by now.”

Vishwa Gaurav, software development engineer at Groww, offered a slightly more measured perspective to AIM. He pointed out that Next.js’ middleware is powerful for straightforward, latency-sensitive tasks as it runs before the route handler and supports edge runtime. However, for richer, composable or complex workflows, its limitations make frameworks like SvelteKit more appealing.

Moreover, highlighting a key difference in approach, he said that unlike dedicated middleware systems, SvelteKit employs hooks to intercept and modify requests and responses, while adding that Vercel should provide “built-in utilities or patterns that facilitate the management of request-scoped data, reducing the need for a custom server.”

The Larger Frustration

Underneath the middleware debate lies a deeper cultural frustration: responsiveness. Developers such as Meca describe the Next.js GitHub issue tracker as a “crown jewel of the dumpster fire,” where “hopes and issues come to die”. 

“The mean response time for a bug report is never,” Meca claimed, citing multiple issues that received silence despite detailed reproductions.

Kanwat agrees. “Honestly, the GitHub response times are concerning, but haven’t stopped me from using Next.js yet,” he said, adding that the worrying part is the actual performance regressions that keep showing up, such as massive build slowdowns in recent versions that take months to get fixed.

“The issue isn’t just slow responses, it’s that many performance problems seem to get introduced and then take forever to resolve,” he added.

For business-critical projects, Kanwat advised greater caution: pin specific versions consistently instead of relying on automatic updates.

The sentiment is not universal. Gaurav found Vercel engineers responsive on social media and positive in addressing requests. Yet, the larger frustration remains that a framework of this scale leaves developers split between hacks, custom servers or entirely different tools.

What’s Next For Next.js?

“Personally, I don’t want to use Next.js anymore,” Meca admitted. While he lacks the leverage to move his entire company away, the experience has eroded his trust in the framework. 

Many developers may continue to rely on Next.js, but often with caution, workarounds or pinned versions to avoid regressions.

What emerges is less a single bug and more a pattern: middleware that doesn’t propagate context, AsyncLocalStorage that fails where it’s needed most, and an issue tracker seen as unresponsive. Together, these point to a framework caught between its ambitions and a faltering developer experience. 

With alternatives like SvelteKit offering greater flexibility, the cracks in Next.js’s dominance are starting to widen.

The AI Race is About Scale. India is Asking if it Should Be https://analyticsindiamag.com/ai-features/the-ai-race-is-about-scale-india-is-asking-if-it-should-be/ Sat, 13 Sep 2025 04:30:00 +0000 https://analyticsindiamag.com/?p=10177565

As Big Tech pours billions of dollars into bigger models, smaller research teams think reasoning and inclusivity matter more than scale.

The post The AI Race is About Scale. India is Asking if it Should Be appeared first on Analytics India Magazine.


The world is in a headlong rush towards AI, and the spotlight remains fixed on tech giants in the US and China. Conversations there often circle around scaling models, expanding compute power and securing vast amounts of data. 

Yet, in India, a quieter current is beginning to flow. Here, researchers and startups are approaching the challenge differently—focusing less on scale and more on efficiency, inclusivity and domain-specific needs. 

For Shunya Labs, the central question isn’t “how big can the model be?” but “how well can it reason?” Sourav Banerjee, co-founder and technical architect, is blunt about the shortcomings of today’s large language models. “They mimic the act of reasoning, but they don’t actually reason,” he said in a conversation with AIM.

That gap inspired the creation of the Spatial-Temporal Graph Attention Network (STGAT). Unlike conventional LLMs that treat words as static relationships, STGAT introduces a time dimension. This matters especially in healthcare, where causality, the sequence of events, can mean the difference between an accurate diagnosis and a dangerous misdiagnosis.

Banerjee explains with an example: “A patient develops a rash a week after attending a pet gathering. A seasoned doctor immediately connects the dots. For AI, unless it understands the timeline, it’s just noise. STGAT builds that temporal understanding into the model.”

The result is a clinical knowledge graph already in use across India and in 200 clinics in Australia. And its scope extends far beyond clinical note-taking. Researchers are now exploring applications in drug discovery and clinical trials, essentially letting it act as an “intelligence layer” that integrates seamlessly with existing healthcare workflows.

Challenging the World on Speech Recognition

If reasoning is one frontier, voice is the other. Here too, the team’s work on Pingala V1, an automated speech recognition (ASR) system, has quietly made global history. The model has achieved a 2.94% word error rate for English and 3.1% for universal speech, outperforming heavyweights like NVIDIA, IBM and OpenAI’s Whisper on open leaderboards.
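For reference, word error rate is the word-level edit distance between a reference transcript and the model's hypothesis, divided by the number of reference words. A minimal implementation (not Shunya Labs' evaluation code) looks like this:

```go
package main

import (
	"fmt"
	"strings"
)

// wer computes word error rate as (substitutions + insertions + deletions)
// divided by the number of reference words, via a standard Levenshtein
// dynamic program over words rather than characters.
func wer(reference, hypothesis string) float64 {
	ref := strings.Fields(reference)
	hyp := strings.Fields(hypothesis)

	prev := make([]int, len(hyp)+1)
	curr := make([]int, len(hyp)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ref); i++ {
		curr[0] = i
		for j := 1; j <= len(hyp); j++ {
			cost := 1
			if ref[i-1] == hyp[j-1] {
				cost = 0
			}
			curr[j] = min(prev[j]+1, min(curr[j-1]+1, prev[j-1]+cost))
		}
		prev, curr = curr, prev
	}
	return float64(prev[len(hyp)]) / float64(len(ref))
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Printf("%.2f\n", wer("the cat sat down", "the cat sat down")) // 0.00
	// One substitution against four reference words -> 25% WER.
	fmt.Printf("%.2f\n", wer("the cat sat down", "the bat sat down")) // 0.25
}
```

A 2.94% WER, in these terms, means roughly three word-level errors per hundred reference words.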

The name itself carries weight. The original inspiration behind the name, Pingala, was an ancient Indian sage, credited with inventing the binary representation of sound more than 2,000 years ago. This makes the model both a nod to Indic intellectual heritage and a declaration of intent. “We wanted to attribute the original creator of coding voice into binary,” Banerjee explained.

What makes Pingala’s achievement even more striking is its efficiency. Trained on just two GPUs in two days, it runs on commercial-grade hardware like NVIDIA’s L40. That means organisations can deploy world-class ASR for a fraction of the usual cost. Latency clocks in at under 100 milliseconds, a critical threshold for real-time applications such as telemedicine or multilingual customer support.

The model has been open-sourced on Hugging Face, where it has already been downloaded more than 2,000 times and licensed under a responsible AI framework that bars misuse. Pingala V1 supports 216 languages worldwide—including 39 Indian ones. The team estimates that it can understand 96% of the world’s population—something few Western players have even attempted.

The Case for Inclusive AI

The founders, many of whom come from small-town India, see firsthand how exclusionary design in AI could worsen social divides.

“If AI doesn’t understand someone speaking Santali in Jharkhand or Bhojpuri in Bihar, we’re designing the future to exclude them,” Banerjee warned. “It’s not about whether AI replaces humans. It’s about AI replacing people who don’t have access to it.”

To counter that risk, Shunya Labs has committed to releasing domain-specific ASR models for medicine and Indic languages. They also lean heavily on synthetic data generation and linguistic structure analysis, bypassing the scarcity of labelled datasets that has historically disadvantaged low-resource languages.

Privacy by Design

Unlike global platforms that centralise user data, Shunya Labs insists on on-premise deployment. Hospitals and enterprises can run their models locally, ensuring compliance with GDPR and India’s privacy norms without sending sensitive data to third parties. This, the team argues, is crucial for healthcare, where the stakes of data misuse are high.

Yet, for all their breakthroughs, the team is candid about the hurdles ahead. Data scarcity remains a constant challenge, as does a cultural scepticism towards Indian foundational research. “In the US, if you say you want to build a new model, the ecosystem rallies behind you. In India, the first question is: Why not use an American one?” Banerjee noted.

To change that, Shunya’s answer is a call for collective effort, from investors willing to back inclusivity, to media outlets highlighting open benchmarks, to government initiatives like Project Vaani that annotate neglected dialects. Equally essential, they stressed, is academic rigour. “It’s not enough to claim success in PR. We need to publish, present at global conferences and invite public scrutiny,” he added.

In Sanskrit, ‘shunya’ means both zero and infinity, a fitting metaphor for what the company is attempting: zero word error rates, infinite possibilities. 

As Ritu Mehrotra, co-founder and CEO of Shunya Labs, puts it: “Small players can come, do real research, and put a model into the open domain, responsibly and for the world.”

Whether Shunya Labs will succeed in rewriting the global AI playbook remains to be seen. But with Pingala V1 already outperforming some of the world’s biggest names, one thing is clear: Shunya Labs is pulling the conversation on AI’s future to India.

Who are the ‘Misfits’ in Indian IT? https://analyticsindiamag.com/ai-features/who-are-the-misfits-in-indian-it/ Fri, 12 Sep 2025 13:30:00 +0000 https://analyticsindiamag.com/?p=10177567

The Indian IT industry’s famous ‘pyramid model’ is collapsing from the middle.

The post Who are the ‘Misfits’ in Indian IT? appeared first on Analytics India Magazine.


A Reddit post alleging that Tata Consultancy Services (TCS) forced a veteran employee into early retirement without severance has drawn attention to the company’s ongoing restructuring. 

This claim comes as TCS prepares to lay off around 12,000 employees globally, roughly 2% of its workforce, in one of the company’s biggest-ever job cuts.

TCS CEO K Krithivasan told Moneycontrol that the decision is linked to skill mismatch and the company’s evolving business needs rather than AI-led productivity gains. 

He also pointed out that in some cases they have not been able to deploy individuals to required roles. “Some people, especially at senior levels, find it difficult to transition to tech-heavy roles,” he said, adding that the company is moving from its legacy “waterfall” delivery model to a more agile, product-centric approach. This transition has reduced the need for conventional project and program managers.

The layoffs, to be implemented gradually through FY26, will mostly impact mid-to-senior level professionals.

Krithivasan said that the waterfall models had multiple leadership layers, which is changing, while describing the decision as “difficult but necessary.” He said that affected employees would be provided severance packages, extended insurance, mental health support and outplacement services. 

Sunil Padmanabh, AI & digital strategy leader, called such positions “misfits,” concentrated in coordination and approval layers. Reflecting on the trend, he told AIM that Wipro has eliminated hundreds of mid-level roles to boost margins and agility, while TCS layoffs are partly linked to AI-driven role realignment. 

According to NASSCOM–EY’s AI Adoption Index 2.0, 87% of Indian enterprises are mid-stage in AI adoption, with “expert” adoption almost doubling since 2022.

“The pattern is clear: Indian IT firms are flattening guardrail roles and reallocating talent toward AI-driven builder functions,” Padmanabh said.

Mid-Senior Levels Under Threat

The industry’s historic tilt toward people-management has created a supply–demand imbalance. Murali Santhanam, CHRO, Ascent HR Technologies, said that the industry has an excess of coordinators, but a shortage of strong contributors in areas like architecture, data, product, SRE, and platform engineering.

Traditionally, Indian IT services companies built pyramid structures around managing people. “Strong technical professionals were often pushed into manager roles instead of being nurtured as technical experts. This model once made sense, as huge batches of freshers needed multiple layers of supervision,” Neeti Sharma, CEO, TeamLease Digital said.

The traditional services pyramid pushed engineers into supervisory tracks—TL, AM, PM—far too early, even when delivery called for technical depth. As the pyramid shifts into a diamond with fewer freshers and more mid-senior professionals, the management-heavy middle layer is increasingly visible.

Santhanam said that many coordination-heavy tasks like status tracking, reporting, ticket allocation, escalation – once central to mid roles – are now automated or absorbed into AI-enabled workflows.

What remains is higher-value work in product, data, and platform engineering – roles that require deep technical expertise. Redeployment is tough because many mid-level managers drifted away from these skills early in their careers. The result is that AI has thinned out the managerial middle, while demand rises for strong individual contributors and technical leaders.

GCCs Exposing Gaps

GCC growth is exposing gaps in traditional mid-level IT roles. As global capability centres (GCCs) in India expand rapidly, they are moving beyond cost-arbitrage and delivery support into mandates around “product development, innovation, and domain-led solutions,” Santhanam said. 

This evolution demands product-thinking, architectural depth, and business context, capabilities many mid-level managers in traditional IT services lack, having advanced through delivery-heavy and coordination-focused tracks. 

Unlike IT services, GCCs emphasise “product innovation, ownership, and advanced technology,” hiring aggressively for roles in product management, cloud architecture and R&D, said Sharma. 

These roles are outside the comfort zone of many mid-level IT managers accustomed to team supervision and project oversight. The result is a sharp contrast: GCCs offer better pay and faster growth for professionals with strong technical and innovation skills, while traditional IT firms struggle to redeploy managers into hands-on roles.

Recent layoffs have disproportionately impacted this mid-level cohort. To remain relevant, professionals must shift their mindset, invest in technical expertise, stay adaptable, and cultivate emotional intelligence as the industry pivots from manpower-driven delivery to value-driven outcomes in an AI-first world.

“AI fluency is critical, as managers must understand how automation reshapes workflows and where human judgment adds value. KPIs should shift from effort and coordination to business outcomes and innovation impact,” said Santhanam.

Orchestrating a Healthcare AI Symphony in India Through Federated Learning  https://analyticsindiamag.com/ai-features/orchestrating-a-healthcare-ai-symphony-in-india-through-federated-learning/ Fri, 12 Sep 2025 11:30:22 +0000 https://analyticsindiamag.com/?p=10177558

Federated learning uses organised medical knowledge and synthetic data to unify diverse datasets, enhancing patient care nationwide.

The post Orchestrating a Healthcare AI Symphony in India Through Federated Learning  appeared first on Analytics India Magazine.


In India’s hospitals, data tells very different stories. A Delhi TB clinic may show endless lung scans, while an oncology hospital in Chennai stores tumour-heavy datasets. For artificial intelligence (AI), this abundance is both a blessing and a curse: plenty of data but little coherence.

Patient data is sensitive and very varied across different kinds of hospitals, says Hima Makonahally Pratap, physician advisory board member at the International Journal of Clinical Research. “Each hospital has its own unique set of patients, and data is stored separately, not just for privacy reasons, but because their very nature differs.”

This is where federated learning (FL) comes into play, a framework that enables AI models to learn collaboratively across hospitals, without transferring the underlying patient data. In other words: collaboration without compromise.

The Challenge of Label Skew

One of the biggest hurdles in Indian healthcare data is what researchers call label skew, when disease distributions vary drastically across hospitals.

“When one hospital sees TB patients and another focuses entirely on oncology, you can immediately see how different their data would look,” Pratap notes.

The National TB Prevalence Survey (NTPS) found a TB prevalence of roughly 0.3% among Indians over 15, with heavy regional variations. This means hospitals in states like Delhi or Tamil Nadu naturally develop specialised datasets. For AI, this causes two main problems:

  • Model divergence: Each hospital’s AI model becomes highly specialised, but when aggregated into a global model, the system is pulled in conflicting directions.
  • Catastrophic forgetting: Knowledge of one disease (like TB) may be overwritten when new data from another speciality (like oncology) is introduced.

The result? Instead of converging to a universal solution, AI struggles to serve anyone well.

Yet label skew doesn’t always manifest equally. Dr Zainul Charbiwala, cofounder and CTO of Tricog, a medtech company, observes less skew in cardiac data.

“We currently have about half of our data from urban and the other half from rural healthcare facilities, and we’re not seeing this divide play a big role. The diversity of conditions is so high and the underlying causes are quite similar. The differences don’t stand out too much. In cardiology, ECG is the go-to test everywhere, so the modality is consistent,” he explains.

This nuance highlights a key insight: some medical domains may lend themselves more naturally to federated learning, while others (like radiology) face tougher integration challenges due to differences in equipment, data resolution, and workflows.

From Chaos to Symphony

Pratap uses music to describe the challenge of integrating diverse datasets: A great guitarist and a great pianist may sound wonderful, but if they play at once without coordination, there will only be noise, no music. “Our goal is to preserve their brilliance while creating a symphony.”

Federated learning, combined with smart strategies, aims to create that symphony.

By embedding structured medical knowledge graphs, such as UMLS and SNOMED CT, federated models don’t just learn patterns; they learn relationships. This ensures respiratory conditions, for instance, are weighted more heavily when TB hospitals contribute to lung-related models.

Techniques like FedProx add “gravity” to local models, gently pulling them toward the global model while allowing for speciality-specific variance.
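That "gravity" in FedProx is a proximal term: the local training objective gains μ/2·‖w − w_global‖², so every local gradient step is also pulled toward the global weights. A toy single step, with illustrative numbers rather than a real model:

```go
package main

import "fmt"

// fedProxStep performs one local SGD step on weights w with plain gradient g,
// plus the FedProx proximal gradient mu*(w - global) that pulls the local
// model toward the global one. All values here are toy numbers.
func fedProxStep(w, global, g []float64, lr, mu float64) []float64 {
	out := make([]float64, len(w))
	for i := range w {
		out[i] = w[i] - lr*(g[i]+mu*(w[i]-global[i]))
	}
	return out
}

func main() {
	w := []float64{1.0, -0.5}
	global := []float64{0.0, 0.0}
	g := []float64{0.2, 0.2}

	// With mu = 0 this is ordinary SGD; raising mu strengthens the
	// "gravity" toward the global model.
	fmt.Println(fedProxStep(w, global, g, 0.1, 0.0))
	fmt.Println(fedProxStep(w, global, g, 0.1, 1.0))
}
```

The second call moves each weight closer to zero (the global value) than plain SGD would, which is exactly how FedProx limits divergence on skewed local data.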

Multilevel aggregation of structured hierarchies in areas such as respiratory, cardiovascular and neurological health ensures that models evolve within coherent medical contexts before being rolled up into a broader framework.

Tackling the Gaps

One of AI’s biggest limitations in healthcare is the lack of data for rare conditions. Here, synthetic data generation offers a lifeline.

Charbiwala highlights the challenge in cardiology: “Rare conditions are underrepresented. We don’t typically use synthetic data generation from scratch, but instead rely on augmentation, sampling rarer conditions more often and subtly modifying signals to add variation. This avoids bias while still giving the model enough examples to learn from.”

Emerging frameworks like Gen-FedSD can generate realistic medical images based on text prompts, filling critical gaps without exposing patient identities.

India’s healthcare infrastructure is far from uniform. Urban centres boast cutting-edge MRI machines, while rural clinics may rely on older X-ray setups. Network connectivity is another hurdle.

Tricog addresses this with cloud-connected ECG machines. “One element of our design has been to ensure that our devices work even in poor network conditions. We used to have problems circa 2015-16, but with widespread 4G/5G availability, there’s no issue at all today,” says Charbiwala.

For other modalities, federated learning employs hybrid gradient compression (HGC), which smartly reduces the size of updates shared across networks while preserving vital diagnostic signals. This allows even bandwidth-limited rural clinics to participate meaningfully.
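HGC combines several compression stages whose exact design is not described here; as a minimal illustration of the sparsification idea behind such schemes (all names hypothetical), a top-k compressor transmits only the largest-magnitude gradient entries as (index, value) pairs:

```python
def topk_compress(grad, k):
    # Keep only the k largest-magnitude entries. For k << len(grad),
    # sending (index, value) pairs cuts the update size roughly by a
    # factor of len(grad) / k while preserving the strongest signals.
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return [(i, grad[i]) for i in sorted(idx)]

def decompress(pairs, dim):
    # Rebuild a dense vector on the server, zero-filling dropped entries.
    full = [0.0] * dim
    for i, v in pairs:
        full[i] = v
    return full

grad = [0.01, -0.9, 0.02, 0.5, -0.03]
sparse = topk_compress(grad, k=2)
restored = decompress(sparse, len(grad))
# Only the two dominant entries survive the round trip.
```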

Privacy, Regulation, and Trust

Incorporating India’s Digital Information Security in Healthcare Act (DISHA) is central to federated learning adoption.

“We never move raw data, ever. Every model update is auditable, every hospital has full control, and patients have granular consent,” stresses Pratap.

This approach addresses concerns about data misuse, ensuring compliance while fostering public trust.

The potential of FL is evident in India’s healthcare landscape. In Delhi and Bihar, hospitals are using local models to enhance tuberculosis screening and improve pneumonia and COPD detection. Specialised cancer centres in Chennai contribute to global models for early tumour detection without sharing raw scans. Tricog’s ECG platform in Karnataka helps rural clinics identify over 140 cardiac conditions, showcasing FL’s effectiveness in low-resource settings while ensuring data privacy.

The experts agree that India has a unique opportunity to lead. “If we get this right, India could become the blueprint for federated healthcare AI globally. We have diversity, scale, and strong regulatory frameworks. That’s exactly the testbed the world needs,” Pratap reflects.

“The key is not to chase glamorous solutions, but to ensure they actually work for the last-mile clinic,” says Charbiwala.

Federated learning is not a silver bullet, but it offers India a pathway to balance privacy, diversity, and innovation in healthcare AI. 

“Each hospital is a brilliant soloist. Federated learning is how we turn them into an orchestra,” concludes Pratap.

The post Orchestrating a Healthcare AI Symphony in India Through Federated Learning  appeared first on Analytics India Magazine.

Why Responsible AI Demands Both Trust and Compute Ownership https://analyticsindiamag.com/ai-features/why-responsible-ai-demands-both-trust-and-compute-ownership/ Fri, 12 Sep 2025 10:38:09 +0000 https://analyticsindiamag.com/?p=10177557

Enterprises need to control the full AI stack and deployment, especially when sensitive information is involved.

The post Why Responsible AI Demands Both Trust and Compute Ownership appeared first on Analytics India Magazine.


Artificial Intelligence now influences decisions across sectors, but not all decisions carry the same weight. A chatbot’s casual error may be forgivable. In finance or healthcare, however, a single wrong prediction can cost billions, or even a life. 

This is why experts argue that regulated industries require responsible AI: systems designed for trust and accountability from the ground up.

Bhaskarjit Sarmah, head of financial services AI research at Domyn, a composite AI platform to design, deploy, and orchestrate AI Agents, explained the stakes in an exclusive interaction with AIM.

“Nobody can make an AI with 100% accuracy… but the question is, how do I know which AI output is correct and which is not when AI is in production,” he said.

Responsible AI, in his view, goes beyond fine-tuning existing large language models. It requires infrastructure, domain-specific training, and an open approach to data ownership.

The Risk of Generic Models

Most mainstream AI systems are trained to be general-purpose. While this approach works for broad tasks, it falls short when precision and trust are non-negotiable. 

“We cannot use ChatGPT for financial services. Sometimes this model hallucinates. Sometimes it is generic and offers biased output,” said Sarmah, adding that ChatGPT will never tell you when to trust or not to trust its output.

This is where domain-specific and open source models come in. By training language models directly on financial or healthcare data, researchers can reduce risks of hallucination and bias. 

But domain-specificity is not enough on its own. Sarmah stresses that enterprises also need to control the full AI stack (data, models, and deployment), especially when sensitive information is involved.

Why Compute Ownership Matters

Training responsible AI requires enormous computing power, which remains a bottleneck for most countries. Sarmah draws a distinction between renting GPU clusters from big providers and owning infrastructure outright.

“At BlackRock, I never had the chance to train large language models from scratch. It requires massive compute investment, which nobody has in India,” he said.

This lack of sovereign compute capacity means many organisations depend on closed providers, often moving sensitive data outside local networks. By contrast, owning compute enables enterprises and governments to train and deploy models within controlled environments, ensuring privacy and accountability.

Domyn, where Sarmah now leads AI research for financial services, offers one example of how this can be done. 

The company has partnered with NVIDIA to build Colosseum, a supercomputer in southern Italy capable of 115 exaFLOPS, or 115 quintillion floating-point operations per second. 

From its India-based team, Domyn is training foundation models from scratch on this infrastructure, something Sarmah notes is not happening elsewhere in the country. 

Across Europe, Asia, and the US, governments are recognising the same need and pouring billions into national AI supercomputers. The message is consistent: responsible AI is not only about software safeguards; it also depends on who owns the hardware.

The EU’s AI Act extends the regulatory scope to include hardware, requiring organisations to identify both the software and hardware components of AI systems and ensure their safety. 

Meanwhile, China is moving toward self-reliance by subsidising domestic AI chip production and aiming for smart computing infrastructure independence by 2027, underlining the strategic role of hardware ownership.

Responsible AI as a Global Imperative

The idea that stands out from Sarmah’s reflections is not about one company’s product, but about the direction AI may need to take. 

Regulated industries require AI systems that are not only accurate but also transparent, explainable, and accountable. This involves building models from scratch, publishing methods to detect bias, and enabling users to understand why outputs can or cannot be trusted.
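One widely used safeguard of this kind, offered here purely as an illustrative example rather than Domyn's actual method, is self-consistency checking: sample the model several times on the same question and trust an answer only when a clear majority of samples agree.

```python
from collections import Counter

def self_consistency(samples, min_agreement=0.6):
    # samples: answers drawn from repeated model calls on one prompt.
    # If a clear majority agree, return that answer with its agreement
    # score; otherwise return None to flag the output as untrusted.
    answer, count = Counter(samples).most_common(1)[0]
    score = count / len(samples)
    return (answer if score >= min_agreement else None), score

# Hypothetical sampled answers to one financial query.
answer, score = self_consistency(["4.2%", "4.2%", "4.2%", "3.9%", "4.2%"])
# 4 of 5 samples agree, so the answer is surfaced with score 0.8.
flagged, low = self_consistency(["A", "B", "C", "A", "D"])
# The best answer wins only 2 of 5 samples, so the output is flagged.
```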

“The point is that we are not using AI blindly, we are using it responsibly,” Sarmah said. His words echo a broader challenge: as AI spreads into sensitive sectors, the conversation must shift from speed and scale to responsibility and trust.

IIT Madras Builds Low-Cost Chip to Slash Antibiotic Testing Time https://analyticsindiamag.com/ai-features/iit-madras-builds-low-cost-chip-to-slash-antibiotic-testing-time/ Thu, 11 Sep 2025 14:30:00 +0000 https://analyticsindiamag.com/?p=10177508

The device can diagnose resistance in three-to-six hours and aims to expand testing from one to eight antibiotics.

The post IIT Madras Builds Low-Cost Chip to Slash Antibiotic Testing Time appeared first on Analytics India Magazine.


Antibiotic resistance — the “silent pandemic” — was linked to nearly five million deaths worldwide in 2019. At IIT Madras, researchers are tackling this crisis with a chip that can test bacterial resistance in hours instead of days.

The device, called ε-µD, can determine whether bacteria resist antibiotics in just three to six hours, compared to the 48–72 hours required by conventional antimicrobial susceptibility testing (AST). That time difference could mean the gap between effective treatment and life-threatening complications.

How the Chip Works

The prototype is about 1.5 cm by 4 cm, built on a glass slide with four carbon electrodes and a soft polymer channel. Patient samples, such as urine, flow into the channel where bacteria attach to the electrodes. After flushing and adding a nutrient medium, bacterial growth begins. 

The system uses Electrochemical Impedance Spectroscopy to detect changes caused by bacterial metabolites. If the antibiotic kills the bacteria, the signal remains steady. If not, bacterial growth shifts the signal.
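That decision rule, a steady signal versus a growing shift, can be sketched as a simple threshold check; the readings, units, and threshold below are hypothetical, not the team's published calibration:

```python
def classify_susceptibility(baseline, readings, drift_threshold=0.10):
    # If the antibiotic kills the bacteria, impedance stays near the
    # baseline; surviving (resistant) bacteria keep producing metabolites
    # that shift it. Flag resistance when the relative drift from the
    # baseline exceeds a threshold at any point in the run.
    max_drift = max(abs(r - baseline) / baseline for r in readings)
    return "resistant" if max_drift > drift_threshold else "susceptible"

# Hypothetical impedance readings (arbitrary units) over several hours.
steady = classify_susceptibility(100.0, [99.5, 100.2, 100.8])
drifting = classify_susceptibility(100.0, [101.0, 106.0, 118.0])
```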

Currently, ε-µD tests one antibiotic at a time. The team is working on multiplexing the design to test up to eight antibiotics simultaneously — providing a complete susceptibility profile from a single sample.

Built for Speed and Affordability

Unlike high-end diagnostic tools that rely on costly fabrication or rare materials, ε-µD uses screen-printed carbon electrodes, making it simple and economical. “Most methods still depend on culture tests that take days. This is a completely different way of looking at the problem,” said lead researcher professor S Pushpavanam.

For accuracy, the team spent months fine-tuning the channel size and nutrient concentration to ensure even small bacterial growth rates show clearly. Clinical validation is underway at IIT Madras’ Institute Hospital and Southern Railway Hospital, both of which handle large numbers of urinary tract infection cases.

Cost has been a key factor. The main expense is the potentiostat (sensing equipment), but once installed, tests could cost as little as ₹500 per antibiotic. Multiplexed tests may cost about ₹1,000 — still cheaper and faster than existing methods.

From Lab to Market

To prepare for adoption, researchers are trialling the chip alongside standard tests in pathology labs. Commercialisation efforts are being supported by Kaappon Analytics India Pvt Ltd, a startup incubated at IITM Research Park.

The current prototype is larger than a SIM card, but miniaturisation is part of the roadmap. The team hopes to make it smaller, portable, and capable of running multiple tests at once. “We are looking for partners to help with miniaturising and multiplexing. That’s the next stage,” Prof. Pushpavanam said.

A Timely Innovation

The work, published in Nature Scientific Reports, comes as the World Health Organisation lists antimicrobial resistance (AMR) among the top ten global health threats. In countries like India, where rural populations face rising resistance and limited lab access, a low-cost rapid test could be transformative.

By cutting diagnostic time from days to hours and scaling from one antibiotic to eight, ε-µD could help doctors prescribe the right treatment faster, reducing misuse of broad-spectrum drugs and slowing the advance of AMR.

“The problem of AMR is pressing, and the three-day lag in current testing works against us. We asked: can we reduce it? That clarity of the question helped us find the answer,” said professor Pushpavanam.


(From left to right: Priyadarshini, Diksha, Dr Richa, Dr Pushpavanam, Himanshu and Dr Saranya)
