SiMa.ai Launches Chip to Run Reasoning-Based LLMs On-Device in Under 10 Watts

The new Modalix SoM and DevKit are now available, with pricing for the 8GB SoM starting at $349 and the 32GB version at $599.

SiMa.ai, the US-based AI chip company, has launched its next-generation platform to accelerate the growth of Physical AI applications. The company introduced the Modalix Machine Learning System on a Chip (MLSoC) along with a new System-on-Module (SoM) and development kits, aiming to meet the growing demand for Physical AI in sectors like robotics, automotive, and healthcare.

Modalix is a second-generation MLSoC designed to deliver high performance without compromising power efficiency, operating at less than 10 watts. “The era of Physical AI is here. With Modalix now in production, we’re accelerating its global adoption,” said Krishna Rangasayee, founder and CEO of SiMa.ai.

Its flexible, Arm-based architecture enables real-time decision-making, natural language interaction, and seamless support for large language models (LLMs), transformers, convolutional neural networks (CNNs), and generative AI (GenAI) workloads.

Seamless Integration 

SiMa.ai also unveiled its new Modalix SoM, which it claims is pin-compatible with leading GPU SoMs. This feature enables easy integration into existing systems, making it an attractive option for developers. 

The platform includes MIPI, memory, and other essential I/O components for rapid scaling of Physical AI systems. The company’s LLiMa software framework, which supports LLM deployment, further simplifies integrating GenAI capabilities into Physical AI applications.

“Modalix showcases the scale of innovation possible with Arm’s flexible, power-efficient compute platform,” said Ami Badani, chief marketing officer of Arm. “SiMa.ai is enabling smarter and more sustainable systems across industries.”

SiMa.ai collaborated with Synopsys, using its AI-powered design tools to speed up development. TSMC’s advanced N6 process ensured Modalix met power, thermal, and reliability standards.

Sajiv Dalal, president of TSMC North America, said, “This collaboration underscores our commitment to driving energy-efficient chip innovations that are redefining the future of AI.”

Global Launch and Commercial Availability

The new Modalix SoM and DevKit are now available, with pricing for the 8GB SoM starting at $349 and the 32GB version at $599. The DevKit is priced at $1,499. SiMa.ai aims to accelerate the global adoption of Physical AI, with strong demand for its products.

In a recent exclusive interview with AIM, Rangasayee also praised the Indian market, saying, “We believe India could be the next market maker for the planet. And not only for local consumption, but for global consumption.”

The company recently raised $85 million in an oversubscribed funding round, taking its total capital raised to $355 million. It had hinted that the funds would be used to expand globally and scale its physical AI platform.

India’s GenAI Startup Boom Faces Funding and Infrastructure Hurdles, Says Nasscom Report

A 2.8X increase in startup formation and a 1.7X rise in patents indicate a surge in innovation

India’s generative AI (GenAI) startup ecosystem is growing rapidly but faces critical challenges around funding, talent, and infrastructure, according to a recent Nasscom report.

The report, which maps the momentum of GenAI startups in India in 2025, claims they have grown 3.7 times over the past year, with the total now exceeding 890 ventures. It noted a 2.8X increase in startup formation and a 1.7X rise in patents, indicating a surge in innovation. Over 83% of these startups are application-focused, building vertical AI and SaaS solutions to fast-track commercialisation.

Despite these gains, India’s GenAI funding still lags global peers. 

In the first half of 2025, the ecosystem raised $990 million, marking a 30% year-on-year increase, but most of the funding remains in early stages. Late-stage investments are constrained by a risk-averse funding culture and limited infrastructure — particularly high compute costs, which have now overtaken talent shortages as the top barrier to scaling.

Arpit Mittal, founder and CEO of edtech startup SpeakX, told AIM: “What we see isn’t a mass exit; it’s more like angels putting their foot on the brake. 2024-25 rules from SEBI now ask angels to prove higher net-worth and go through extra accreditation. Many casual angels don’t want that paperwork, so they have paused investing, while the seasoned folks are simply cutting ticket sizes from ₹1-2 crore to ₹50-75 lakh per deal.”

“GenAI startups have the potential to shape the future of AI innovation for emerging markets and beyond,” said Rajesh Nambiar, president of Nasscom.

Agentic AI is emerging as a key frontier, where startups are building infrastructure, orchestration layers, and automation tools that can reshape enterprise workflows. Large tech companies worldwide are acquiring or building such capabilities, increasing competition and opportunity.

Startups are also betting big on domain-specific models for regulated sectors like BFSI, healthcare, and legal, where compliance and auditability are critical — areas where generic models often fall short. Enterprise demand is shifting from core models to orchestration tools powering agentic workflows, presenting an opportunity for Indian firms to lead in building “agent-as-a-service” stacks.

Meanwhile, a significant untapped opportunity exists for lightweight, multi-indic LLMs and voice-first AI assistants tailored for India’s mobile-first, linguistically diverse population, especially in Tier 2 and Tier 3 cities.

However, the report warns that regulatory hurdles, IP protection issues, and lack of compute-rich infrastructure are stalling the ecosystem’s maturity and slowing partnerships. The lack of production-ready talent also remains a major bottleneck.

MakeMyTrip Launches Myra, Multilingual AI Agent for Travel Bookings

It supports voice and text inputs and will soon expand to more Indian languages once early feedback is incorporated.

MakeMyTrip has launched a new multilingual Trip Planning Assistant designed to simplify travel bookings through human-like conversations in English and Hindi. 

The company said in a press release that the AI assistant, Myra, will support users through every stage of their travel—from discovery to booking, and even post-trip services—while removing language barriers that often hinder access.

The assistant is currently available in beta and marks a major upgrade over MakeMyTrip’s existing AI tool. With Myra, users can ask open-ended and complex questions in Hindi, English, and Hinglish, and receive relevant, real-time suggestions along with the option to make a booking.

It supports voice and text inputs and will soon expand to more Indian languages once early feedback is incorporated.

The technology behind the assistant is built on a framework of specialised AI agents covering flights, hotels, holidays, ground transport, visas, and forex.

“We have always believed that technology is at its best when it solves complex problems behind the scenes, while making the customer interface as intuitive and as delightful as possible,” said Rajesh Magow, co-founder and group CEO of MakeMyTrip. 

“With GenAI, we take that vision further by turning intent into action through natural, human-like conversations. By enabling access initially in Hindi, and expanding to multiple Indian languages soon, this launch has the potential to solve for the Bharat heartland, reaching the deepest corners, and bringing seamless, intelligent travel booking to those who’ve long been underserved by digital platforms,” Magow added.

According to Sanjay Mohan, group CTO of MakeMyTrip, this marks one of the company’s most complex tech builds to date. “Our in-house team has developed custom language models and layered them with planning, scheduling, and verification systems that work in sync and respond in real time,” he said.

MakeMyTrip first adopted generative AI in 2023, becoming one of the earliest travel platforms to embed it directly into its booking experience. 

Since then, it has rolled out tools like Fare Lock, Zero Cancellation, and predictive features for train travel. The new launch continues this momentum by embedding GenAI more deeply into the platform’s infrastructure.

‘GenAI is Potentially Dangerous to the Long-term Growth of Developers’

“If you pass all the thinking to GenAI, then the result is that the developer isn’t doing any thinking.”

Curiosity is often seen as a key trait of great developers. However, in an age of generative AI tools that can write code, generate tests, and even review themselves, the act of asking questions is at risk of being outsourced. While these tools may speed up delivery, some in the developer community warn that they also threaten the growth of a developer.

An MIT study also hinted at a decline in cognitive capabilities when using an LLM. It’s not just about juniors or seniors, fast delivery, or clean code. It’s about what happens when understanding gets replaced by imitation and how that could slowly diminish a developer’s capacity to create truly exceptional software.

The Fragility of Copy-Paste Knowledge

“GenAI is potentially dangerous to the long-term growth of developers. If you pass all the thinking to GenAI, then the result is that the developer isn’t doing any thinking,” said Ben Hosking, a Dynamics 365 solution architect at Kainos, in a conversation with AIM.

Additionally, Hosking notes in his Medium blog that developers who lack an understanding of sound practice logic are missing out on the real benefits. He warns that blindly following principles without understanding their purpose leads to fragile knowledge that can easily break down in different contexts.

Hosking draws a line between clean code and correct code. He notes that developers “might already be wrong, but don’t know it yet,” primarily when they rely on requirements without understanding the underlying logic. 

This was echoed when AIM asked for his opinion, based on his experience, on code that is correct but not right for the job. He notes that if the requirements are wrong, no matter how good the code is, it will still be incorrect.

He believes that the company only finds out during demos or UAT, which causes problems because they’ve built dependent software on top of faulty software.

Chaitanya Choudhary, CEO of Workers IO, echoed this sentiment and told AIM that he once dedicated days to developing a well-designed authentication system only to discover that users were abandoning it due to its complexity. Currently, he emphasises the importance of first validating the problem at hand, often by applying the simplest solution possible.

Choudhary believes the solution-first mindset is being amplified by GenAI. “It can create a mentality where you build because you can, not because you should,” he said. The issue is not a lack of capability, but curiosity. Or rather, the lack of it when machines do the proposing.

Experiment, Don’t Just Execute

Hosking explained that developers learn more through experimenting with various solutions and experiencing failures, rather than just creating solutions. He mentions that this kind of thinking is being gradually replaced by automation. He encourages developers to approach their work as if conducting an experiment, finding a healthy balance between meeting requirements and fostering growth.

Choudhary echoes this experimental mindset, emphasising the importance of being flexible and adaptable. He told AIM, “The best engineers I know approach each feature like a hypothesis to be tested. They ask ‘What if we’re wrong about this?’ and build in ways that make it easy to pivot.” This shared perspective highlights a common theme among innovative developers: the value of iterative testing and learning.

Building on this idea, Choudhary also stresses the importance of striking a balance between investment and resources. He adds that while a rapid prototype may suffice in some cases, others require a resilient infrastructure. This experimental approach enables deliberate trade-offs, thereby preventing the default tendency to over-engineer every solution.

All agree GenAI can play a positive role, but only when used deliberately. 

“The way to use GenAI while learning is to ask GenAI lots of questions and get it to come up with ideas that you then take time to understand and develop,” said Hosking.

However, GenAI isn’t always helpful in that regard. Hosking warns that the weakness with GenAI is the need to review its creations. He adds that it’s too easy to assume it has done it correctly because reviewing code, documents, and unit tests is boring.

Considering this, it’s crucial to adopt a cautious approach when using GenAI. It’s essential to treat your work as an experiment and remain open to refining your solutions. As Alex Dunlop, a senior engineer at Popp AI, said, “It’s vital to see your work as an experiment and avoid becoming too attached to your initial solution, as this can lead to defensiveness.”

Curiosity Is the Long Game

The concern isn’t that GenAI will produce poor developers, but that it will foster complacent ones. Developers who avoid the struggles of debugging, stop questioning why, and place too much trust in the system. 

When asked about the issues developers face in the current era of GenAI, Hosking said, “Without understanding the purpose behind the requirements, development teams have no idea if they are building the right software.”

Dunlop explained that the initial excitement of not knowing everything, and the constant urge to learn, tends to diminish as one becomes a senior engineer, replaced by a sense of duty to have all the answers. However, a recent shift in outlook has rekindled his explorer’s curiosity: he now views everything as an unknown to be uncovered.

For those willing to stay curious, GenAI can be an accelerant rather than a crutch. Choudhary builds “curiosity projects”—small tools that solve real problems—just to keep learning. “I also make it a practice to understand what the AI is doing,” he adds. “Asking ‘why did it choose this approach?’ keeps me learning even when using powerful tools.”

As GenAI improves at delivering code, the best developers may not be the fastest builders, but rather the most profound thinkers who retain their curiosity and critical thinking. 

Gnani.ai Unveils Inya.ai, No-Code Agentic AI Platform for Voice and Chat Agents in 10+ Indic Languages

To encourage early adoption, Inya.ai is offering $10,000 in free credits to the first 1,000 sign-ups.

Gnani.ai has launched Inya.ai, a no-code Agentic AI platform designed to help developers and enterprises deploy intelligent voice and chat agents across customer-facing workflows, without writing a single line of code.

The platform is aimed at revenue-critical functions such as lead qualification, payment nudges, abandoned cart recovery, contextual upselling, and multilingual follow-ups. It supports personalised, emotionally intelligent conversations at scale while maintaining contextual memory across sessions. 

To encourage early adoption, Inya.ai is offering $10,000 in free credits to the first 1,000 sign-ups. 

“Inya’s multi-agent orchestration capability allows businesses to create agents for different teams that can communicate, collaborate, and operate cohesively,” said Ganesh Gopalan, co-founder and CEO of Gnani.ai.

With support for voice, chat, SMS, and WhatsApp, Inya.ai is built on the back of Gnani.ai’s eight years of domain expertise in sectors like BFSI, retail, telecom, automotive, and consumer durables. Its multilingual capabilities and enterprise-friendly integration make it adaptable for diverse business needs.

According to the website, companies like IDFC First Bank, P&G, HDFC Bank, and a few others are already testing and deploying the platform in their work.

“It is open, developer-friendly, voice-first, and built for seamless enterprise integration,” said Ananth Nagaraj, co-founder and CTO of Gnani.ai.

Gnani.ai’s momentum also includes its selection under the IndiaAI Mission, where the company is developing India’s first voice-focused LLM with 14 billion parameters covering over 40 languages, including more than 15 Indian languages.

Subtl.ai Collapse Exposes Cracks in India’s AI Scene

“Some investors flirt A LOT with founders,” Vishnu Ramesh said. “But it doesn't mean s*** until they give you a term sheet.”

Building an AI startup in India isn’t exactly a walk in the park. Startup founders often feel frustrated due to a multitude of factors, such as limited funding, a lack of investor expertise, and the high demand for free proofs of concept (POCs). As it turns out, there are far deeper reasons for this.

Last week, Vishnu Ramesh, founder of Subtl.ai, posted a heartbreaking message on LinkedIn, signalling the end of the road for the company. “TL;DR: we have started shutting down Subtl.ai,” he wrote. 

That one line of update confirmed what many in India’s AI startup ecosystem are increasingly confronting—ambitious ideas hitting a wall faster than anyone expects.

Subtl.ai, the Hyderabad-based enterprise GenAI startup, had carved out a niche in RAG and AI. With clients like SBI, defence contracts, two airports, and a few others, the company was trying to solve a hard problem—how to make enterprise data usable through natural language interfaces.

It had solid benchmarks as well. Ramesh said that their auto-tuning pipeline, Subtl V2, built by engineers and researchers at IIIT Hyderabad, outperformed RAG built on OpenAI and open-source embeddings by 15-20%.

In an interview with AIM earlier, Ramesh revealed that the startup’s ambition was to reduce the dependence on companies like OpenAI.

Furthermore, in a blog post last year, the startup revealed that SBI successfully implemented Subtl.ai, demonstrating 92% accuracy in information retrieval and saving 56,570 minutes (equivalent to approximately Rs 5 lakh).

The Promise was Real. The Traction, Limited

Ramesh is onto building another AI startup. “Going vertical AI this time,” he declared in his LinkedIn bio. Instead of blaming the market, customers, or the investors for the shutdown, he places the failure squarely on his own decisions, especially a lack of market focus.

Subtl chased use cases across vastly different industries, from banking to insurance to defence. Nothing was repeatable. “I got stuck handling customers from wildly different domains with wildly different use cases… customers gave no s*** about our other portfolio of work we had done,” he explained.

Despite early wins and a product reportedly called ‘Private Perplexity for Enterprise’, the startup was operating on thin fuel. It had raised around ₹1 crore in angel funding—barely enough to build and maintain a product in an increasingly competitive GenAI market. 

There were no follow-up rounds, and no external signals of new revenue deals. Subtl built APIs that could have powered AI agents, citations, and document retrieval, but never invested in making those APIs developer-friendly. 

“All we did was put a message on our website saying ‘yo reach out if you wanna use our APIs’,” he admitted. There were no open-source SDKs, no integrations with tools like LlamaIndex or Portkey, no real documentation, and no developer community.

According to Tracxn data shared with AIM earlier, over the last five years, 706 AI startups have failed, of which 54 are from India. This has been the case for several AI startups building for Indic use cases in the country, as there is not enough audience to actually test out use cases.

Read: Indic AI is Not Inspiring Enough for Indian Developers

“I’m a better CTO than a CEO,” Ramesh wrote. Lacking domain depth made enterprise sales difficult, and bouncing between industries didn’t help. One of the more painful parts of his reflection was about fundraising. He described how multiple investors had long conversations and seemed interested, but never followed through. 

“Some investors flirt A LOT with founders,” he said. “But it doesn’t mean s*** until they give you a term sheet.” He made hiring and scaling decisions assuming money would come in, but it never did.

Yet, he didn’t paint himself as a victim. “It’s completely on me, I failed my team and investors more than they failed me,” he wrote. The domain is still up, the LinkedIn team profiles still say “Subtl.ai,” but the quiet exit has begun. 

The Sad State of Affairs

Indian AI startups are walking on thin ice, almost all the time. Even though it might seem like the demand is increasing in the country, it is not exactly the case. 

Similar to Ramesh, Vaibhav Domkundwar, CEO of Better Capital, earlier highlighted that a wave of frustration was sweeping through India’s AI and SaaS startup ecosystem, sparking what founders call the ‘Skip India Movement’. 

There is a growing sentiment that Indian enterprises are not worth the time, effort, or resources required to sell to them.

Read: Free PoCs are Killing Indian AI Startups

Then there is the talent problem. “India seriously has a big f***ing talent problem,” said Umesh Kumar, co-founder of Runable, an Indian AI startup building a platform where anyone can build AI agents. “We got around 1,000 applications for a backend engineering role in just the last two to three days, and guess how many were actually decent?” Less than five.

Kumar’s startup was hiring backend developers with a no-nonsense offer: ₹50 lakh base pay, relocation, food, and a shot at working with top-tier talent. The hiring process involved a simple coding task, two calls and one paid trial. 

And yet, his hunt continues.

Just a few months before Subtl.ai’s wind-down, Unikon.ai had a more chaotic departure. On March 1, 2025, multiple developers reported being asked to pack up and leave without warning. Devices were returned, and offices were cleared. But the startup didn’t die quietly.

It had raised $2 million from prominent Indian angels just nine months earlier and was initially pitched as a GenAI-enabled networking platform. But it soon pivoted into building a D2C skincare brand—a sharp detour from its original AI pitch. The result was a ₹2 crore per month burn rate and no follow-on funding. 

The founder, Aakash Anand, shut down the company in a town hall meeting.

This is similar to what happened with InsurStaq.ai. In September 2024, the company started shutting down after one year of operations, and is now completely inoperative.

Some startups go to the US for quick monetisation and come back to solve the country’s problems, but AI, arguably, is too early for that. RevRag, a B2B agentic AI startup, decided to undertake this herculean task long ago and is now back in India selling AI to enterprises, even if the payoff takes a bit longer.

“We are not quitting India because we think India is a large market overall, even if it takes time. And we will be at it,” Ashutosh Singh, co-founder and CEO of RevRag, told AIM.

He acknowledged the challenges of selling in India. The sales cycle is slow, decision-making is layered with bureaucracy, and customers can go cold without a warning. “You might get ghosted, and you won’t even know. You’ll keep following up, but the deal might just disappear,” he explained.

The shutdown of Subtl.ai isn’t just a story of one startup’s stumble—it’s a mirror to the broader Indian AI ecosystem. A mix of premature scaling, fractured focus, lack of developer ecosystem thinking, and disillusioned investors is quietly draining momentum from startups that should be thriving. 

While global AI companies ride waves of hype and capital, Indian founders are often left grappling with free POCs, flaky investors, and talent that can’t meet the bar. Some will regroup, like Ramesh, and build again—hopefully with sharper lessons. But for many others, the silence will be the final word.

Generative AI Needs Its iTunes Moment for AI Copyright

“An author takes five years to publish a book. ChatGPT takes under one second to read it and another second to start responding to questions.”

The tussle between breaking copyright and enforcing it existed long before the advent of AI. However, it may have become worse without guardrails, as AI services take over the digital world.

There are various privacy and copyright laws worldwide, but none specifically address how AI utilises copyrighted content: whether it should store the data, check for copyright, or not care about copyright at all.

Current frameworks, such as the GDPR or India’s DPDP Act, are useful for safeguarding personal data. But when it comes to intellectual property, Rishi Agarwal, co-founder and CEO of TeamLease Regtech, told AIM, “I don’t think GDPR, DPDP or any one of these laws are primed to deal with issues of this nature.”

From Piracy to Payment

There’s a precedent for industries disrupted by technology to fight back, not by banning innovation but by monetising it. Two decades ago, the music business did exactly that by embracing iTunes. Now, as large language models (LLMs) recreate or consume books, articles, reviews and more in seconds, Agarwal argues that generative AI needs a similar transformation to respect copyright without stifling progress.

Agarwal believes the issue is urgent but solvable. “An author takes five years to publish a book. ChatGPT takes under one second to read it and another second to start responding to questions,” he said. “What is the royalty that the author got as a result of this?”

The music industry once faced a similar existential threat. Peer-to-peer downloads hollowed out revenues until a paid, frictionless model came along. “That’s why iTunes happened,” said Agarwal. 

He believes Steve Jobs built iTunes on the idea of not violating copyright: offer a song for a dollar or so, with the producer receiving a percentage and Apple taking a fair commission.

The same logic, he believes, can apply to AI-generated content. Instead of blocking access, let publishers register content with a platform. When an LLM references it to answer a prompt, a microtransaction gets triggered. Once the user agrees to pay, a deduction is made from their existing credit. For example, six rupees might be allocated to the author, two rupees to the platform, and the user gains access to the material.
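
In code, the flow Agarwal sketches could look something like the minimal Python sketch below. The ledger, prices, and function names are illustrative stand-ins mirroring the worked example above, not any real platform’s API:

```python
# Hypothetical microtransaction flow: a user's credit is debited once per
# referenced work, and the fee is split between author and platform.
# All names and amounts are illustrative, mirroring the article's example.

ledger = {"author": 0.0, "platform": 0.0}

def charge_for_reference(user_credit: float, price: float = 8.0,
                         author_share: float = 6.0) -> float:
    """Deduct one reference fee and split it: Rs 6 to the author,
    Rs 2 to the platform, per the example above."""
    if user_credit < price:
        raise ValueError("insufficient credit; ask the user to top up")
    ledger["author"] += author_share
    ledger["platform"] += price - author_share
    return user_credit - price

credit = charge_for_reference(user_credit=100.0)
print(credit, ledger)  # 92.0 {'author': 6.0, 'platform': 2.0}
```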

In theory, nothing stops OpenAI, Anthropic, or Mistral from offering such a model. As Agarwal noted, one could imagine approaching leading publishers, such as HarperCollins or Random House, to explore the possibility of accessing their catalogues. A payment system could then be implemented, where the publishers are compensated each time the model draws on their material.

This isn’t just about books. The same principle, he argues, can be extended to newspapers, music, graphics, and even poetry. LLMs aren’t stealing intentionally, but by recreating works trained on proprietary data, they bypass existing compensation channels.

Monetisation is More Time-Saving Than Lawsuits

Meanwhile, copyright lawsuits continue to emerge, from The New York Times suing OpenAI to the Delhi High Court probing whether ChatGPT used ANI’s content without permission. Agarwal views these as symptoms of a broken model, rather than a solution. “Lawsuits are expensive,” he warned. “They’re symbolic of a larger problem brewing.”

He believes a subscription-advertisement hybrid, similar to YouTube or Spotify, could offer a way forward. “YouTube has figured out a way… I can become a publisher, and depending on how many views happen to my content, YouTube has a way of paying me back.”

So far, LLM platforms have been focused on scale, not structure. But as compute costs rise and free users expect richer responses, monetisation models will inevitably emerge. “The day is not far when this $20 ChatGPT premium becomes $30 or $50,” said Agarwal. 

“The question is, will OpenAI make the money? Or will it become a platform, which will make money and share a portion of the spoils with the publishers?”

At the same time, Agarwal cautions against over-regulating too soon. “If you try and do it too prematurely, the technology will not get to a critical path… and you’ll end up killing it,” he said. “But over a period of time, this is going to be the natural direction.”

For now, the burden lies with platforms to implement a system that allows publishers to freely make their content available and establish a pricing model that rewards them.

As Agarwal puts it, “Only those models where all the stakeholders get a fair share of the pie will work in the long term.”

Replit Launches New Feature for its Agent, CEO Calls it ‘Deep Research for Coding’

The slew of new features for Replit coders is slated to take their coding to the next level.

Replit has unveiled three new features for its coding assistant, Replit Agent, as part of a broader capability upgrade it calls ‘Dynamic Intelligence’. The update introduces enhanced context awareness, step-by-step reasoning, and autonomous problem-solving, bringing the assistant closer to acting like a full-fledged coding partner.

The additions include Extended Thinking, High Power Model, and Web Search, all designed to help the agent handle more complex software development tasks. According to Replit, these upgrades aim to reduce human intervention while improving solution quality.

The web search capability is turned on by default and lets the agent intelligently query the internet to bridge knowledge gaps. Users can also explicitly instruct it to use Web Search for more relevant answers.

The extended thinking mode prompts the agent to slow down and display parts of its reasoning before presenting final outputs, making it particularly useful for debugging or solving ambiguous tasks.

The high-power model, on the other hand, leverages a more advanced AI to improve accuracy in demanding workflows such as database logic changes, UI overhauls, or API integrations.

According to Replit, users can toggle these features on a per-request basis, offering flexibility depending on task complexity. The announcement underscores Replit’s ongoing push to make its AI assistant more autonomous and developer-friendly, especially as tools like GitHub Copilot and Cursor expand their own intelligent assistants.

Recently, Replit crossed $100 million in annual recurring revenue, growing tenfold since 2021 without raising new funding since its 2023 round at a $1.1 billion valuation. Amjad Masad, CEO of Replit, announced the milestone on X, noting strong adoption from both enterprises and independent developers.

With Dynamic Intelligence, Replit’s agent steps beyond code suggestions and into the realm of real-time, goal-driven programming help. “It’s like deep research but for coding. Super powerful,” Masad said, describing the new features.

Google Just Made AI Available Where AI Companies Still Can’t

Gemini in Classroom aims to help educators and students enhance their productivity.

There’s always talk of AI revolutionising the workplace. However, that conversation rarely extends to chalkboards and school corridors. While OpenAI and Anthropic fine-tune their models for enterprise and consumer applications, Google is quietly embedding Gemini into one of the most potentially impactful sectors: education.

The company recently announced that its AI-powered teaching suite, Gemini in Classroom, is rolling out globally and is doing so free of charge for all users of Google Workspace for Education. That alone gives Google a reach into millions of classrooms in a way no other AI company has yet matched. 

Google’s presence, with its AI offerings, is also pronounced among enterprises, not just classrooms. For instance, AIM, like many others, utilises Google Workspace, where a Gemini Advanced subscription is a key benefit. This seamless integration is what gives Google its edge: it’s not launching another app; it’s enhancing what millions already use.

An AI Teaching Assistant with No Office Hours

Gemini in Classroom brings over 30 new features to educators’ fingertips. Using starter prompts and grade-level inputs, teachers can now generate lesson plans, quizzes, rubrics, and even example-rich explanations. 

As Mariam Fan, a language and robotics teacher at Los Gatos High School, put it, “Gemini in Classroom saves me hours on planning and support, fostering a more inclusive and engaging classroom.”

For students, it’s no longer just about reading assignments. NotebookLM and Gems, two tools previously confined to AI enthusiasts, are now embedded directly into Classroom. 

Teachers can create interactive, chat-based study guides or audio summaries that mimic a podcast. A biology teacher might assign a “Quiz Me” Gem to reinforce core concepts, while others can deploy “Real-world connector” bots to bridge textbook knowledge with everyday applications.

Google is aiming to make Gemini the de facto co-pilot for teachers, not a replacement, while students engage on their own terms.

Mike Amante, a tech educator at New Hartford Central Schools, calls Gemini “the ultimate teaching assistant—always available, always helpful.”

A Reach Rivals Can’t Match Yet

Unlike OpenAI, which remains app-based, or Anthropic, which centres on enterprise-safe models, Google’s strength lies in ubiquity. Through Workspace and tools like Google Forms, Slides, and Docs, it’s repackaging AI into familiar workflows — from auto-generating quizzes from lesson decks to summarising form responses with Gemini in Forms.

There’s also the matter of data privacy, a particularly sensitive issue in schools. Here, Google leans on its security track record. Gemini in Education is built with strict policies, doesn’t use student data to train models, and has earned the Common Sense Media Privacy Seal, a move designed to win over wary administrators.

Meanwhile, the analytics tab in Classroom is evolving into a teacher’s dashboard, surfacing which students are struggling, which assignments are trending late, and how learning outcomes align with national standards.

While the commercial AI race is still about who builds the smartest chatbot, Google appears to be using AI to quietly improve lives without seeking attention.

The only rival that offers a similar scale for integrating AI in education is Microsoft.

Microsoft is also ramping up its support for educators by integrating AI tools like Microsoft 365 Copilot to streamline lesson planning, personalise learning, and build future-ready skills. 

With expanded access to Copilot Chat for students, new training resources, and insights from its 2025 AI in Education Report, the company aims to empower teachers with practical, job-embedded AI support while helping students thrive in an AI-driven world.

Meanwhile, OpenAI does have something catered to teachers and students in ChatGPT Edu, a version of ChatGPT tailored for universities to deploy AI responsibly across campuses, offering advanced capabilities like data analysis, vision reasoning, and custom GPT building, all powered by GPT-4o.

It may not match the scale of Google’s integration with its established platforms.

A Quiet Lead with No Homework Required

Google’s approach stands out not just for its technical prowess but for its institutional reach. Whether in classrooms or businesses, the primary challenge isn’t persuading people to adopt AI, but seamlessly integrating it into their existing environments. 

With Gemini in Workspace, Google has done exactly that. It has a network of businesses using Workspace with Gemini, and with Classroom, it’s doing it again for the next generation.

For teachers, Gemini means fewer hours spent on formatting feedback. For students, it means personalised AI tutors that adapt to their learning pace. And for Google, it’s proof that making AI useful doesn’t always mean making it loud.

As other companies chase high-profile partnerships, Google is quietly inking its name into school curricula.

Along with education, Google is also dashing its way into the developer space by offering open-source CLI tools as a free alternative to Claude Code. It seems Google is bringing out its big guns to match the adoption rate of popular AI tools like ChatGPT, and education seems like the ideal way forward.

‘PostgreSQL Eats the World, But CockroachDB Digests It’

The core difference offered by CockroachDB lies in its horizontal scaling capabilities. 

The database market is undergoing significant changes, driven by increasing demands for scale, resilience, and the burgeoning era of AI agents. 

Speaking exclusively to AIM, CockroachDB CEO Spencer Kimball stated that the shift towards distributed SQL databases built on a solid PostgreSQL foundation is becoming increasingly crucial for businesses of all sizes, not just tech giants.

The core difference offered by CockroachDB lies in its horizontal scaling capabilities. While it strives to maintain a PostgreSQL-like interface, distributed operations require a different approach. 

“Cockroach didn’t reject Postgres. It re-architected it from the ground up to meet the scale, distribution, and the consistency AI demands,” Kimball said.

He further added that scaling 100x on a monolithic architecture is utterly impossible. This, he explained, is where distributed SQL databases like CockroachDB come in, built for “serious scale, like hundreds of terabytes into petabytes” of operational data. “Postgres may be eating the world, but AI needs a database that can digest.”

Kimball said that he is particularly referring to operational databases and not the analytical ones. “It’s about the metadata that tracks the product or service, all the activity, and the high level of concurrent operations that demand strong consistency,” he added.

He explained that both humans and agents would have access to the data. These agents operate at high speed and are continuously active, performing the same tasks multiple times daily or even hourly. They work on behalf of both consumers and businesses, resulting in a steadily increasing volume of traffic.

What’s Next from Cockroach

Kimball sees AI playing a role in observability and support. “AI can move much faster. If you give it the right scenarios and train it, then what could have taken several hours to fix might only take several minutes,” he said.

Vector indexing is another area of focus for CockroachDB. “Customers want nearest-neighbour search in high-dimensional spaces at scale. They want it fast and consistent, even as data changes,” Kimball said.

But he clarified that CockroachDB is not trying to become a general-purpose vector database. Cockroach isn’t trying to compete with OpenSearch, Elastic, or MongoDB on vector search. “If you’re already using CockroachDB for mission-critical relational workloads, you want vector support there. Not everyone needs that, but for our users, it’s essential.”

He further added that they are not trying to win the market for the vector index. “We’re not a vector database. However, it’s a very important modality.”

Moreover, Kimball talked about reducing costs. “Nobody wants to pay 10x more because their workload scales 10x. CockroachDB can improve utilisation with multi-tenancy.” He explained that if a customer has 100 use cases on a large cluster, the peaks and troughs average out, allowing them to move from 10% to 50-60% utilisation.
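
A toy simulation makes the averaging effect concrete. The workload shapes and numbers below are invented for illustration, not CockroachDB measurements:

```python
# Statistical multiplexing: 100 bursty workloads share one cluster, so the
# aggregate peak is much closer to the aggregate mean than any single
# workload's peak is to its own mean. Illustrative numbers only.
import random

random.seed(42)
HOURS, WORKLOADS = 24, 100

def daily_load():
    """Each workload idles at 1 unit and spikes to 10 in three random hours."""
    peak_hours = set(random.sample(range(HOURS), k=3))
    return [10.0 if h in peak_hours else 1.0 for h in range(HOURS)]

loads = [daily_load() for _ in range(WORKLOADS)]

# Provisioned alone, each workload needs capacity 10 for its own peak,
# but averages only ~2.1 units: roughly 20% utilisation.
solo_util = sum(map(sum, loads)) / (WORKLOADS * HOURS) / 10.0

# Pooled, the cluster needs capacity only for the aggregate peak.
aggregate = [sum(load[h] for load in loads) for h in range(HOURS)]
pooled_util = (sum(aggregate) / HOURS) / max(aggregate)

print(f"solo utilisation:   {solo_util:.0%}")   # ~21%
print(f"pooled utilisation: {pooled_util:.0%}") # ~75-80%
```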

The company is also working on exploiting cloud cost efficiencies. Kimball said CockroachDB’s architecture allows the use of spot instances, disaggregated storage, and storage tiering. “We believe we can reduce costs by 10 to 16x in the next few years.”

Moat of Cockroach

Kimball said that CockroachDB’s strength is in geographic scale. “We have customers in the EU, the US, and India. If you want to make your service span all of those places, Cockroach has some really interesting capabilities that are different.”

He provided one example from the US sports betting sector. “Customers use Cockroach nodes in multiple states to comply with data locality laws. Data is processed where bets are placed.”

Moreover, he added that CockroachDB is cloud-agnostic and supports hybrid deployments. “Big banks and tech companies use private data centres and all three major clouds. We let customers run the database wherever their business needs it.”

One key challenge, he pointed out, is integrating AI into database operations. “It’s not easy to run distributed systems. When something goes wrong, you want the root cause before a human even looks at it. AI can help.”

On competing with cloud vendors, he noted, “They’re both competitors and partners. Big clouds don’t want to serve self-hosted enterprise customers, and those customers don’t want to be tied to one cloud. CockroachDB fits well there.”

He added that clouds often refer such customers to CockroachDB. “They say, ‘We can’t run this in your data centre, but CockroachDB can.’ That’s why the partnership works.”

As the era of AI agents increases data scale and complexity, CockroachDB is positioning itself to meet those demands through distributed design, cross-cloud flexibility, and AI-enhanced tooling.

Why Postgres 

Kimball explained how CockroachDB tries to stay close to the Postgres experience but adapts key behaviours to function at scale in distributed environments. “So well, it tries to look as much like Postgres as possible.”

One clear example was ID generation. Traditional Postgres allows for monotonically increasing sequences, such as auto-incrementing IDs for user records. In monolithic systems, this works smoothly, but things break down at a massive scale.

“In a monolithic system… that counter, it’s all just in one place… But once you say, I want to do 10 million of these concurrently… you don’t want them all going to one node that holds a counter.”

CockroachDB distributes the sequence generation process differently, making it scale-friendly but less linear. “It will look the same as a sequence. But… we have a more distributed mechanism to assign IDs… they’re not just counting 1,2,3,4,5.”
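
To see why distributed IDs stop counting 1, 2, 3, here is a minimal sketch of one common scheme in this family (Snowflake-style bit-packing of timestamp, node ID, and a per-node counter). It illustrates the general idea Kimball describes and is an assumption for exposition, not CockroachDB’s actual implementation:

```python
# Snowflake-style ID assignment: each node packs (timestamp, node ID, counter)
# into one integer, so nodes hand out unique IDs with no shared counter.
# A sketch of the general approach, not CockroachDB's exact bit layout.
import threading
import time

class DistributedIDGenerator:
    def __init__(self, node_id: int):
        assert 0 <= node_id < 1024       # 10 bits reserved for the node
        self.node_id = node_id
        self.counter = 0                 # 12-bit per-millisecond sequence
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now_ms = int(time.time() * 1000)
            if now_ms == self.last_ms:
                # Real implementations wait for the next millisecond on overflow.
                self.counter = (self.counter + 1) % 4096
            else:
                self.counter, self.last_ms = 0, now_ms
            # 41 bits of time | 10 bits of node | 12 bits of sequence.
            return (now_ms << 22) | (self.node_id << 12) | self.counter

# Two nodes generate IDs concurrently with no coordination. The results are
# unique and roughly time-ordered, but not consecutive like 1, 2, 3, 4, 5.
node_a, node_b = DistributedIDGenerator(1), DistributedIDGenerator(2)
print(node_a.next_id(), node_b.next_id())
```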

He acknowledged differences between Postgres and MySQL users as well. “Postgres does structured data, too. There’s room for both.”

Kimball said that the bigger challenge lies in how the databases are operated, not how they are used by applications. He said that system administrators and DBAs familiar with one will have a steeper learning curve when switching to the other, due to differences in tools, management styles, and best practices. 

“If you’re very good as a system administrator or like a DBA using Postgres, then it’s a lot more new stuff to learn.”

Kimball said that it often comes down to what teams are already used to operating. “If you’re good at MySQL, moving to distributed MySQL, then TiDB makes sense.” He was referring to TiDB CTO Ed Huang, who said that he believes MySQL will power AI agents.

Journey of the Cockroach

Cockroach Labs was founded in 2015 by ex-Google employees Kimball, Peter Mattis, and Ben Darnell. It draws inspiration from Google’s Bigtable and Spanner databases.

Kimball said that in the early 2000s, systems like Google’s Bigtable avoided SQL not out of dislike, but to keep things simple while focusing on scalability. “It was just easier not to have to do all that stuff and also build something that is elastically scalable and more survivable.”

However, over time, the industry began adding SQL features again. MongoDB added transactions. Google layered SQL on top of Spanner with F1.

“They created a whole new distributed architecture, but they left all of the hard stuff and started adding the hard stuff back on top of it,” said Kimball. 

He added that NoSQL systems, such as Cassandra, offer flexibility and scalability but fall short in terms of consistency and schema management. “If you have 50 people working on a complex, mission-critical product… it just becomes impossible.”

By 2015, the CockroachDB team had a clear understanding of their target users, which included big banks, major tech firms, and other high-stakes organisations.

Instead of building a new SQL dialect, they chose PostgreSQL. “Postgres felt like the cleanest and the most appropriate, and had the most upward velocity momentum.”

OpenAI is Flirting with Danger by Naming China’s Blacklisted Zhipu AI as a Threat

OpenAI’s blog post simply boosts the IPO-bound Zhipu AI’s visibility among both funders and customers, effectively putting the Chinese rival on the global map.

China’s AI ecosystem is going strong. So much so that it is starting to compete internally, and the much-talked-about DeepSeek is no longer at the top.  

Zhipu AI is not a name that typically comes up in casual conversations about AI supremacy. It doesn’t have the fanfare of DeepSeek or the benchmark-breaking headlines of Alibaba’s latest models. 

But this week, OpenAI made it clear that Zhipu is a threat worth watching.

In a blog post that reads more like a geopolitical intelligence memo than a developer update, OpenAI called out the Beijing-backed startup as a significant player in China’s AI playbook. “While we hear the most about new models, just as significant is CCP headway in getting other governments around the world to adopt its AI,” the post warned. 

At the centre of that strategy is Zhipu AI, which appears to be building the scaffolding for China’s global AI infrastructure. 

Despite US sanctions, Zhipu is not starved for capital. According to a Reuters report, it recently raised $69 million in a Series D round led by state-owned Huafa Group. This comes on the heels of two recent funding rounds that the company secured from multiple local government bodies. 

Its current valuation is estimated to be $2.74 billion. The company has reportedly begun preliminary steps toward an IPO. Zhipu has also received support from Chinese tech giants Tencent and Alibaba, further blurring the line between state-backed innovation and private-sector speed. 

More Than Just Another LLM Startup

Founded in 2019, Zhipu AI has been dubbed one of China’s “AI tigers,” a term used by Chinese state media to describe the handful of LLM unicorns spearheading Beijing’s push to reduce dependence on Western technology. 

Unlike DeepSeek, which has become the poster child of Chinese AI ambition with its R1 model, Zhipu is playing a different game — one that involves partnerships with foreign governments, stealthy international expansion, and direct support from the Chinese Communist Party.

State media reports say the startup has secured over $1.4 billion in state-backed investment and frequently engages with top-level Chinese officials, including Premier Li Qiang. 

The company also reportedly has working relationships with the Chinese military — a detail that led to its inclusion on the US Commerce Department’s Entity List in January 2025, effectively blacklisting it from buying American components.

But the ban hasn’t slowed it down.

Zhipu AI has established offices in Singapore, Malaysia, the United Kingdom, and the Middle East. It’s also running joint “innovation centers” in Southeast Asia — including in Indonesia and Vietnam — as part of what OpenAI describes as China’s strategy to embed its AI stack globally before Western alternatives can take hold. 

What is up with OpenAI?

OpenAI’s blog post seems less concerned with model performance and more alarmed about the architecture of influence. By offering “AI infrastructure solutions to governments around the world,” Zhipu is laying the groundwork for a global Chinese AI ecosystem, one that could prove sticky for decades.

“The goal is to lock Chinese systems and standards into emerging markets before US or European rivals can, while showcasing a ‘responsible, transparent and audit-ready’ Chinese AI alternative,” OpenAI wrote. 

The language is telling. This is not about tech supremacy in the traditional sense. It’s about building trust, negotiating procurement deals, and establishing data and infrastructure dependencies.

Just this week, Zhipu AI launched AutoGLM Rumination, a new AI agent that can carry out deep research, draft comprehensive reports, and help users plan complex tasks. Powered by the company’s proprietary models, it’s available for free.

According to Zhipu, these models are not only as capable as DeepSeek R1, but also run eight times faster while consuming just one-thirtieth the compute resources. That’s a remarkable claim — one that, if accurate, could give Zhipu serious leverage in low-resource markets.

The release comes shortly after another Chinese player, Manus AI, launched a general-purpose agent that outperformed OpenAI’s deep research tools on the GAIA benchmark. Manus AI is now a commercial product with monthly plans, suggesting that Chinese companies are not just building fast but are also monetising fast.

VC Bill Gurley questioned OpenAI’s motives in spotlighting Zhipu AI. “Very odd decision for OAI to openly promote this Chinese AI company ‘Zhipu AI’ in a blog post,” he wrote on X. In his view, the move simply boosts the competitor’s visibility among both funders and customers, effectively putting it on the map.

Gurley’s criticism echoes a broader sentiment in Silicon Valley that Zhipu, once obscure outside China, now finds itself in the global spotlight not because of a model release but because OpenAI chose to write about it.

“The opposite of love is indifference,” Gurley added, implying that attention, even critical, is a kind of endorsement.

OpenAI’s unease isn’t just about competition. It’s about losing narrative control. While the US has been aggressively promoting its AI stack globally through Project Stargate and government deals, trade missions, and strategic partnerships, China’s quiet, infrastructure-first approach may prove more enduring.

While the US builds walls, Zhipu, unlike DeepSeek with its focus on performance and visibility, may be the one digging the trenches for now.

Context Engineering is the New Vibe Coding

"Context engineering is 10x better than prompt engineering and 100x better than vibe coding."

Two years ago, Python developers were getting replaced by prompt engineers, at least in the tech Twitter space. The past year was all about vibe coding with tools like Cursor, Windsurf, Replit, and others. Now, the AI community has a new-found obsession: context engineering — the art and science of structuring everything an LLM needs to complete a task successfully. 

If prompt engineering was about the clever, (mostly) one-liner instructions, context engineering is about writing the full screenplay. Andrej Karpathy, co-founder of OpenAI, also the person who called English the hottest programming language and made “tab tab tab” the default, is now all in favour of context engineering.

“+1 for ‘context engineering’ over ‘prompt engineering’,” Karpathy said in a post on X. He added that he does not want to coin a new term for it now that it has already caught on with developers.

“In every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step,” said Karpathy. 

From Prompts to Context

Prompt engineering gave us the early magic of ChatGPT — coaxing the model into doing our bidding with clever phrasings. But as applications get more complex, that approach hits a wall. 

“People associate prompts with short task descriptions,” Karpathy explained. “But apps build contexts — meticulously — for LLMs to solve their custom tasks.”

Unlike prompt engineering, which focuses on how to phrase a task, context engineering is about ensuring the task is possible to solve in the first place. That might mean retrieving relevant documents using RAG, summarising a long conversation to preserve state, injecting structured knowledge, or supplying tools that let the model take action in the world.
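
To make this concrete, here is a minimal Python sketch of what “filling the context window” can look like in practice. The helpers and field names below are illustrative stand-ins, not any particular framework’s API:

```python
# Illustrative sketch only: assembling retrieved documents, compressed
# conversation state and tool schemas into a single context block for an LLM.
def summarise(history: list[str]) -> str:
    # Stand-in: a production system would use an LLM or a rolling summary here.
    return " | ".join(history[-3:])

def build_context(question: str, history: list[str], docs: list[str],
                  tools: list[dict]) -> str:
    tool_specs = "\n".join(f"{t['name']}: {t['description']}" for t in tools)
    return (
        f"Conversation so far: {summarise(history)}\n\n"
        "Reference material:\n" + "\n".join(docs) + "\n\n"
        "Available tools:\n" + tool_specs + "\n\n"
        f"Task: {question}"
    )

prompt = build_context(
    question="Summarise the customer's open tickets",
    history=["Hi", "I can't log in", "Reset link sent"],
    docs=["Ticket #1204: login failure, status open"],  # e.g. from a RAG step
    tools=[{"name": "search_tickets", "description": "query the ticket DB"}],
)
```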

Many developers are realising that when LLMs fail, it’s not because the model is broken — it’s because the system around it didn’t set it up for success. The context was insufficient, disorganised, or simply wrong. And like humans, LLMs respond differently depending on how you talk to them. 

A poorly structured JSON blob might confuse a model where a crisp natural language instruction would succeed. 

The shift to context engineering is not just semantic. It’s structural. Where prompt engineering ends at crafting a sentence, context engineering begins with designing full systems, ones that bring in memory, history, retrieval, tools, and clean data — all optimised for an AI model that isn’t psychic.

“Context engineering is 10x better than prompt engineering and 100x better than vibe coding.” That’s how Austen Allred, founder of BloomTech, summed up the shift in how developers are thinking about building with LLMs.

Context is King

Sebastian Raschka, founder of RAIR Lab, captured the division well: prompt engineering is often user-facing, while context engineering is developer-facing. It requires building pipelines that bring in context from user history, prior interactions, tool calls, and internal databases — all in a format that’s easily digestible by a Transformer-based system. 

Context engineering doesn’t just mean “adding more stuff” to your prompt. It means curating, compressing, and sequencing the right inputs at the right time. It’s a system, not a sentence.

Harrison Chase, CEO and co-founder of LangChain, said that this is precisely why frameworks like LangGraph are gaining traction. Designed to give developers fine-grained control over what goes into the model, what steps run beforehand, and where outputs are stored, LangGraph embraces the philosophy that context engineering is central to any serious agent framework. 

While older abstractions often hide this complexity in the name of ease of use, LangGraph puts context back into the developer’s hands, where it belongs.
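
For a flavour of what that control looks like, below is a short sketch assuming LangGraph’s StateGraph API; the retrieval and generation steps are stubs invented for illustration, not code from LangChain’s documentation:

```python
# A hedged sketch assuming LangGraph's StateGraph API; node internals are
# stubs and would be real retrieval and model calls in practice.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    documents: list
    answer: str

def retrieve(state: State) -> dict:
    # The developer decides exactly what enters the context window.
    return {"documents": [f"note about {state['question']}"]}  # stub retriever

def generate(state: State) -> dict:
    context = "\n".join(state["documents"])
    return {"answer": f"answer grounded in: {context}"}  # stub model call

g = StateGraph(State)
g.add_node("retrieve", retrieve)
g.add_node("generate", generate)
g.add_edge(START, "retrieve")
g.add_edge("retrieve", "generate")
g.add_edge("generate", END)
app = g.compile()
print(app.invoke({"question": "context engineering"})["answer"])
```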

Context engineering has organisational implications beyond the technical realm. Ethan Mollick, associate professor at The Wharton School, noted that it’s not just about crafting a useful LLM input — it’s about encoding how your company works. 

That includes the structure of reports, the tone of communication, and the internal processes that define your business logic. In that sense, context engineering is as much about culture as it is about code.

Karpathy made a broader point that often gets lost in the discussion: context engineering is just one piece of a growing software stack built around LLMs. It coexists with problem decomposition, memory management, UI/UX flows, verification steps, and orchestrating multiple LLM calls. 

Calling all of that a “ChatGPT wrapper,” he said, is “really, really wrong.” It’s not a wrapper — it’s a new paradigm of software altogether.

And perhaps that’s why the term “vibe coding” is now being used tongue-in-cheek. In the early days of LLM experimentation, developers often relied on intuition and repetition, tweaking wording endlessly until they got something that felt right. 

But intuition doesn’t scale; structure does. What works in a playground doesn’t hold up in production. Tobi Lütke, CEO of Shopify, said it best: “It describes the core skill better — the art of providing all the context for the task to be plausibly solvable by the LLM.”

That word — plausibly — carries weight. AI models don’t have intent or judgment. They’re not reasoning from first principles. They’re guessing the next word based on everything you’ve told them so far. And if you haven’t told them the right things, or told them in the wrong format, your clever prompt won’t save you.

Context engineering is not just the new vibe; it’s the new software architecture.

The post Context Engineering is the New Vibe Coding appeared first on Analytics India Magazine.

]]>
L&T Finance Hopes Project Cyclops Would Be 90% Agentic One Day https://analyticsindiamag.com/ai-features/lt-finance-hopes-project-cyclops-would-be-90-agentic-one-day/ Mon, 16 Jun 2025 09:32:32 +0000 https://analyticsindiamag.com/?p=10171794

“I’ve seen demos where agents can do EDA, generate train-test sets, build models, and write documentation—all with very little human intervention,” Debarag Banerjee said. “That future, where something like Project Cyclops is 90% agent-driven, would be wonderful.”

The post L&T Finance Hopes Project Cyclops Would Be 90% Agentic One Day appeared first on Analytics India Magazine.

]]>

In the world of finance, the use of AI is tricky owing to privacy concerns, the dreaded risk of hallucinations, and the need for guardrails. But as modern systems become increasingly robust, there is a huge opportunity to change the industry with agents—often with just a little fine-tuning.

Debarag Banerjee, chief AI and data officer at L&T Finance, spoke with AIM about how the firm is steadily moving away from rules-based automation towards a future led by agents.

The most important and real transformation happening under the hood is with Project Cyclops, L&T Finance’s proprietary AI stack for real-time, high-accuracy credit decisioning. 

“We launched this with our two-wheeler portfolio last year. It now handles 100% of those loans,” Banerjee said. “This year, we’ve extended it to our tractor business and are preparing to roll it out for small business loans.”

Project Cyclops pulls together various “trust signals”—from customer profiles to repayment behaviour—into an ensemble model that can instantly separate delinquent-risk borrowers from credit-worthy ones. “You upload your information and, just like that, you get a decision,” Banerjee said.
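
AIM has no visibility into Cyclops’ internals, but a weighted ensemble over signal-level scores is one plausible shape for such a system. The signal names, weights, and threshold below are invented purely for illustration:

```python
# Hypothetical sketch of an ensemble over "trust signals"; every name and
# number here is made up for illustration and is not Project Cyclops' logic.
def credit_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    # Each sub-model emits a probability of repayment for one signal family.
    return sum(weights[k] * signals[k] for k in weights) / sum(weights.values())

signals = {"profile": 0.82, "repayment_history": 0.91, "device": 0.74}
weights = {"profile": 1.0, "repayment_history": 2.0, "device": 0.5}

decision = "approve" if credit_score(signals, weights) >= 0.8 else "review"
print(decision)  # -> approve
```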

How to Build This in India?

In terms of data residency and compliance, L&T Finance is already future-proofing things. “Even for closed-source LLMs, we insist on endpoints hosted in India to prepare for laws like the Digital Personal Data Protection Act (DPDPA),” Banerjee added.

Open-source models naturally provide better control. “Since we host and manage the stack ourselves, the data is more secure. We ensure contractually that our EII data isn’t used for model retraining.” Much of the data used to fine-tune open-source LLMs is proprietary. Even when it’s not proprietary, the formulation under which the team trains them makes the contextual usage proprietary.

L&T Finance has also explored Indian LLMs built for Indic languages. “They’re a good start. But the number of parameters still matters.”

Meanwhile, Project Cyclops, the firm’s proprietary ML stack, continues to scale. It combines models across various trust signals — from customer data to repayment behaviour — and re-ensembles them to deliver real-time credit decisions.

The company deliberately took a multi-LLM route from day one. The goal is flexibility, not being locked into any single provider or model.

“Instead of being tied to LLMs from any one company, our stack can call any model our developer thinks is right for that task,” said Banerjee. This includes Google’s Gemini (multiple versions), OpenAI models through Azure, and several open-source LLMs hosted on GPU-as-a-service platforms.
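
A model-agnostic stack of that kind typically sits behind a routing layer. The sketch below guesses at the general pattern; the stubbed clients, model names, and task labels are assumptions for illustration, not L&T Finance’s code:

```python
# Illustrative model router: each entry would wrap a real API client in
# production. Registry contents and task routes are invented for this sketch.
from typing import Callable

MODEL_REGISTRY: dict[str, Callable[[str], str]] = {
    "gemini-pro":   lambda prompt: f"[gemini] {prompt[:40]}",    # stub client
    "azure-gpt-4o": lambda prompt: f"[gpt-4o] {prompt[:40]}",    # stub client
    "llama-3.1-ft": lambda prompt: f"[llama-ft] {prompt[:40]}",  # stub client
}

ROUTES = {  # the developer picks the right model for each task
    "document_summary":  "gemini-pro",
    "loan_query":        "llama-3.1-ft",  # fine-tuned, cheaper inference
    "complex_reasoning": "azure-gpt-4o",
}

def route(task: str, prompt: str) -> str:
    return MODEL_REGISTRY[ROUTES[task]](prompt)

print(route("loan_query", "Explain the EMI schedule for a two-wheeler loan"))
```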

They’ve also tested Meta’s Llama family (3.1, 3.2) and successfully fine-tuned them for performance comparable to larger models like Gemini, but with lower inference costs.

“In one of our other applications, we found medium-sized fine-tuned Llamas performing nearly as well as some of the premium models,” he noted. “We’re agnostic to geography or company, as long as data privacy is maintained and we retain full control.”

Tackling Bias and Ethics in AI Lending

When asked about ethical concerns in using AI for credit decisions, especially cases where background visuals or personal environment might influence model behaviour, Banerjee stressed two things: statistical validation and consent.

“Any trust signal we use has to hold up against statistically significant past data. If it’s frivolous, it gets discarded,” he said. “We are also very careful about consent. All data usage is fully transparent to the customer.”

He acknowledged the risk of adversarial behaviour, like customers gaming the system with artificial backgrounds. “But these kinds of patterns are caught through quality checks and operational safeguards,” Banerjee noted. “It’s a team effort — credit, risk, field ops, everyone must align for the system to work at scale.”

Traditional software relied on endless rules. With agent frameworks powered by LLMs and task-specific tools, Banerjee said that the effort to build systems has reduced drastically. “You can create something with a minimalist approach, deploy it, and let it improve over time. These are not just tools — they’re self-improving systems,” Banerjee said.

The future of agentic AI is both exciting and inevitable for Banerjee. “Agentic AI seems to have finally caught that right gap,” he observed. “We can already see it proving its mettle—not only in decision-making but in emulating human functions.”

He envisions agents becoming self-improving and minimalist in design. “Instead of writing rule after rule, you get to a working solution quickly, test it, improve it, and even reinforce it.”

Though we’re not fully there yet, the progress is palpable. “I’ve seen demos where agents can do EDA, generate train-test sets, build models, and write documentation—all with very little human intervention,” he said. “That future, where something like Project Cyclops is 90% agent-driven, would be wonderful.”

Regulation, Black Swans & the Human Touch

Will agents eventually monitor finance and trading platforms on their own? “There are still regulatory needs—maker, checker, monitor—which will stay,” he said. “And while you can create agents for predictable failures, black swans are by nature unpredictable.”

He also believes AI will create jobs. “One area where AI may generate jobs is in humans playing both white hat and black hat—looking for ways AI can fail or be misused and figuring out how to recover.” For L&T Finance, agentic AI is not just about tech; it’s about solving for India’s underserved.

“I was there when India got connected. Then came digital payments. Now, the next big inflection is digital access to credit for the bottom and middle of the pyramid,” he said. “India has the opportunity to leapfrog old credit systems because its consumers are digitally connected.”

The post L&T Finance Hopes Project Cyclops Would Be 90% Agentic One Day appeared first on Analytics India Magazine.

]]>
GEO is Eating SEO—It’s a Whole New World https://analyticsindiamag.com/ai-features/geo-is-eating-seo-its-a-whole-new-world/ Fri, 06 Jun 2025 05:47:20 +0000 https://analyticsindiamag.com/?p=10171421

The way content creators or marketers tune their content is influenced by generative AI.

The post GEO is Eating SEO—It’s a Whole New World appeared first on Analytics India Magazine.

]]>

Search engine optimisation (SEO) is the process of tuning content or a web page so that it can be easily discovered by search engines like Google and Bing. 

However, web search, as we know it, has drastically changed, thanks to ChatGPT, Perplexity, and other similar AI tools that can search the web. So, naturally, the way content creators or marketers tune their content will also be influenced by it. Enter Generative Engine Optimisation (GEO)—a new strategy designed for the age of AI-driven search.

Generative Engine Optimisation, The New Game?

In an interaction with AIM about AI startups, Paul Ravindranath G, program manager, developer relations of startup and expert programmes at Google India, said, “I was at an event where somebody showed me a very interesting approach. Traditionally, we do SEO, search engine optimisation. And this person I was talking to told me, ‘I’m doing GEO generative engine optimisation.’”

He observed that AI is fundamentally changing the landscape of marketing, noting that the changes underway are unfolding in ways few could have anticipated. 

Traditionally, SEO was all about links and structured hierarchies of authority and relevance, calculated by crawlers and ranking algorithms. As per a blog post by a16z, GEO is about language. The content that shows up in a model’s answer isn’t pulled from a top-ten list; it’s generated from what the model has retained, understood, and deemed useful. Visibility means being referenced, not just ranked.

LLMs like GPT-4o, Claude, and Gemini don’t crawl websites; they engage in conversations with users. Moreover, they synthesise across sources, remember user intent, and respond with multi-layered reasoning. In that world, content optimisation requires a shift. Precision and repetition take a back seat. Instead, the focus turns to clarity, semantic richness, and context awareness.

“Phrases like ‘in summary’ or bullet-point formatting help LLMs extract and reproduce content effectively,” a16z wrote in the blog. GEO requires content to be not just relevant, but legible to machines.
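
As a toy example of that machine legibility, the snippet below renders a section with bullet points and an explicit summary cue. The helper is invented for illustration; the structure it produces, not the code, is the point:

```python
# Illustrative only: formatting content so an LLM can lift it cleanly.
def make_llm_legible(title: str, points: list[str], summary: str) -> str:
    lines = [f"## {title}", ""]
    lines += [f"- {p}" for p in points]      # bullets are easy to extract
    lines += ["", f"In summary: {summary}"]  # explicit summary cue
    return "\n".join(lines)

print(make_llm_legible(
    "Why GEO matters",
    ["LLMs cite content they can parse", "Clear structure beats keyword stuffing"],
    "structure pages for models, not just crawlers.",
))
```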

While the SEO ad market ran on clicks, GEO runs on references. That changes business incentives entirely. Since many LLMs are subscription-based and not ad-driven, there’s less motivation to surface third-party content unless it enhances the user experience. Yet, outbound traffic isn’t dead. ChatGPT, for example, still drives referrals to thousands of domains. The key difference is that it’s not about who shouts the loudest, but who the model remembers first.

Content Adapting to Generative AI to Minimise Impact

As per a recent report by Search Engine Land, AI Overviews, Google’s new generative summaries in Search, are dramatically altering how content is discovered and consumed. By delivering fast, conversational answers sourced from multiple websites, these summaries reduce the need for users to click through to the original content. 

This shift has serious consequences for SEO, including declining traffic, attribution, and click-through rates, as traditional organic listings are pushed down. Even though inclusion in an Overview might boost brand visibility or authority, it rarely results in measurable traffic or leads.

While Google’s AI-powered features in Search are indeed damaging organic traffic, new opportunities are emerging with the rise of AI-driven search engines like Perplexity.

According to some marketers, users’ habit of going to Google for search is shifting, with people increasingly relying on platforms like Perplexity and ChatGPT.

Bandan Singh, head of product at Riverty, wrote in a blog post, “I started noting that perplexity wasn’t blabbering, but also giving me links to the source of information. Selectively, I will dive into links, but 99% of the time, I would be satisfied with the answer perplexity gave me.”

Whatever the case may be, it is clear that AI is at the forefront of the new search experience for netizens. And with that shift in the landscape, GEO is becoming more relevant for marketers, content creators, and webmasters.

What Does It Look Like for GEO?

Legacy SEO firms are racing to keep up. Semrush now offers an AI toolkit for GEO, while Ahrefs’ Brand Radar helps companies monitor how they’re framed in AI Overviews. These tools offer more than metrics; they reveal how a brand is encoded in the generative layer of the internet.

GEO is a new layer of the internet. Hemant Mohapatra, partner at Lightspeed India, pointed out on X, “Search TAM isn’t getting fragmented; it’s actually expanding with AI.” 

He noted that while traditional search experience may decline, especially among Gen Z and Gen Alpha, the appetite for discovery through AI is rising.

The post GEO is Eating SEO—It’s a Whole New World appeared first on Analytics India Magazine.

]]>
HCLTech Shows Confidence in Generative AI with 12 Exclusive Deals in the Quarter https://analyticsindiamag.com/it-services/hcltech-shows-confidence-in-generative-ai-with-12-exclusive-deals-in-the-quarter/ Tue, 22 Apr 2025 13:56:56 +0000 https://analyticsindiamag.com/?p=10168437

CEO C Vijayakumar didn’t disclose any specific vertical where generative AI deals were awarded, but said that it is part of almost all deals.

The post HCLTech Shows Confidence in Generative AI with 12 Exclusive Deals in the Quarter appeared first on Analytics India Magazine.

]]>

Much like other Indian IT firms that reported subdued Q4 FY25 results in the last fortnight, HCLTech, too, posted modest earnings while showing confidence in generative AI.

In Q4 FY25, revenue in rupee terms rose 1.2% sequentially to ₹30,246 crore from ₹29,890 crore. In USD terms, the company posted revenue of $3.4 billion, down 1.0% QoQ.

For the full financial year, revenue stood at ₹117,055 crore, up 6.5%. In USD terms, revenue was $13.8 billion, a 4.3% increase over the previous year. 

Operating performance weakened, with EBIT dropping to ₹5,442 crore from ₹5,821 crore. The EBIT margin contracted to 18.1% from 19.5%. Meanwhile, net profit also declined 6.3% to ₹4,300 crore from ₹4,591 crore for the fourth quarter.

In Q4, HCLTech focused on exclusive AI and generative AI deals, securing 12 new agreements, including those involving agentic AI and automation processes. This announcement differs from other IT firms that shied away from mentioning AI-specific deals. However, all the tech companies acknowledged that AI is part of every deal conversation. 

HCLTech CEO C Vijayakumar stated that Q4 FY25 saw the highest number of deals since the September 2023 quarter. This quarter, HCLTech closed $3 billion in net new bookings, bringing the year-to-date total to $9.26 billion, a 5% decline compared to last year.

“Our engineering and R&D services business led the charter with a record-high 75% growth in bookings in FY25,” Vijayakumar said, attributing this performance to the successful execution of HCLTech’s integrated go-to-market strategy.

He also emphasised that generative AI will remain a core focus for enterprises across all industries, despite broader macroeconomic uncertainties and pressure on discretionary spending. “Their focus on using generative AI to drive high efficiency in every aspect of their business is becoming central to all conversations,” Vijayakumar said.

On the topic of pricing pressure in the age of AI, he acknowledged that efficiencies delivered through AI would naturally lead to some level of pricing deflation.

“When you’re able to deliver at a much more efficient operating level, we do share some of those benefits with our customers,” he explained. “But in every renewal and client conversation, we’re also able to ask for a higher wallet share as we proactively provide generative AI-driven benefits.”

For FY26, HCLTech expects overall revenue growth in the range of 2–5% in constant currency (CC) terms. Services revenue is also projected to grow between 2–5% in CC terms. The company has guided for an EBIT margin of 18–19% for the fiscal.

“In FY25, we clocked the consolidated revenue of $13.84 billion, an increase of 4.7% attributed to both our services business and the software business,” said Vijayakumar. 

He added that the pipeline spans IT and business services, engineering services, and HCLSoftware, and that AI and generative AI are now integral components of almost every deal. “Both Americas and Europe showed considerable pipeline growth during the quarter,” he noted. “We’ve made significant strides in AI and GenAI, impacting both client-facing solutions and internal operations.”

The company’s four flagship AI offerings—AI Force, AI Foundry, AI Labs, and AI Engineering—have seen substantial adoption and scaling during FY25. Notably, AI Labs alone has delivered 500 GenAI engagements for 400 clients.

The company’s attrition rate increased to 13%, up from 12.4% in Q4 of last year. The overall headcount stood at 223,420, with the tech firm adding 7,829 freshers in the fiscal year. 

Speaking further about AI, Vijayakumar said that both customers and the firm itself are running several AI use cases internally and building solutions around them. 

“The approach is: invest in skills development, invest in labs, build use cases, and on the back of it, create POCs that we can take to customers. That’s been our approach, and it’s working well so far,” he said. “Don’t worry about the investments now—look at the ROIs we can get on the back of it. That has been our approach. It takes time, but we’re seeing success.”

Focus on Generative AI 

Speaking at an industry event in Mumbai in February, Vijayakumar emphasised that AI’s disruption in IT services is unlike previous technological shifts such as cloud computing and digital transformation. “The changes AI is assuring are very different, and we need to be more proactive to categorise our revenues to create completely new businesses,” he said.

Generative AI is expected to accelerate software development by automating coding and reducing project timelines. Vijayakumar pointed to a financial services firm where AI-driven efficiencies reduced the timeline of a $1 billion technology transformation program from five years to three-and-a-half years.

He emphasised that, with declining costs of AI training, India must invest in economically viable ways to develop its own models. “I strongly believe that the business model is ripe for disruption. What we saw in the last 30 years was a fairly linear scaling of revenues and people. I think time is already up for that (business model),” he added.

Meanwhile, in its Q3 FY25 results in January, HCLTech secured $2.1 billion in total contract value (TCV), with many deals embedding AI solutions. The CEO said that it is advancing its generative AI strategy and aims to integrate AI services into 100 clients by FY26.

“Generative AI is getting more and more real. The cost of using an LLM or conversational model has dropped by over 85% since early 2023, making more use cases viable,” Vijayakumar noted in the Q3 earnings call. Compared to Q1 and Q2, the company’s Q3 results demonstrate a growing integration of AI into its business operations.

The post HCLTech Shows Confidence in Generative AI with 12 Exclusive Deals in the Quarter appeared first on Analytics India Magazine.

]]>
Sakana’s AI CUDA Engineer Delivers Up to 100x Speed Gains Over PyTorch https://analyticsindiamag.com/ai-news-updates/sakanas-ai-cuda-engineer-delivers-up-to-100x-speed-gains-over-pytorch/ Thu, 20 Feb 2025 05:54:24 +0000 https://analyticsindiamag.com/?p=10164184

The AI CUDA Engineer has successfully translated more than 230 out of 250 evaluated PyTorch operations.

The post Sakana’s AI CUDA Engineer Delivers Up to 100x Speed Gains Over PyTorch appeared first on Analytics India Magazine.

]]>

Japanese AI startup Sakana AI has introduced The AI CUDA Engineer, an agentic framework that automates the discovery and optimisation of CUDA kernels for improved GPU performance. 

The company claims the framework can generate CUDA kernels with speedups ranging from 10 to 100 times over common PyTorch operations and up to five times faster than existing CUDA kernels used in production.

CUDA is NVIDIA’s parallel computing platform and low-level programming interface, enabling direct access to its GPUs for parallel computation. Optimising CUDA kernels manually requires significant expertise in GPU architecture. Sakana AI’s new system uses LLMs and evolutionary optimisation techniques to automate this process, making high-performance CUDA kernel development more accessible.

“The coolest autonomous coding agent I’ve seen recently: use AI to write better CUDA kernels to accelerate AI. AutoML is so back!” said Jim Fan, senior research manager and lead of embodied AI at NVIDIA. He added that the most impactful way to utilise compute resources is by enhancing the future productivity of that very same compute.

According to Sakana AI, The AI CUDA Engineer converts standard PyTorch code into optimised CUDA kernels through a multi-stage pipeline. Initially, it translates PyTorch operations into CUDA kernels, often improving runtime without explicit tuning. The system then applies evolutionary optimisation, using strategies such as ‘crossover’ operations and an ‘innovation archive’ to refine performance.
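
To see why fused kernels matter, consider a chain of pointwise operations. The snippet below is not Sakana’s pipeline; it simply uses torch.compile as a stand-in fuser to show the class of optimisation the system automates:

```python
# Illustration only, not Sakana's system: in eager PyTorch each pointwise op
# below can launch its own CUDA kernel; a fuser collapses them into one.
import torch

def gelu_scale_residual(x, y):
    return torch.nn.functional.gelu(x) * 0.5 + y  # three ops in eager mode

fused = torch.compile(gelu_scale_residual)  # stand-in fuser (PyTorch 2.x)

if torch.cuda.is_available():
    x = torch.randn(4096, 4096, device="cuda")
    y = torch.randn(4096, 4096, device="cuda")
    fused(x, y)  # first call compiles; subsequent calls run the fused kernel
```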

“Our approach is capable of efficiently fusing various kernel operations and can outperform several existing accelerated operations,” the company said. The framework builds on the company’s earlier research with The AI Scientist, which explored automating AI research. The AI CUDA Engineer extends this concept to kernel optimisation, using AI to enhance AI performance.

Sakana AI reported that The AI CUDA Engineer has successfully translated more than 230 out of 250 evaluated PyTorch operations. It has also generated over 30,000 CUDA kernels, of which over 17,000 were verified for correctness. Approximately 50% of these kernels outperform native PyTorch implementations.

The company has made the dataset available under a CC-BY-4.0 licence on Hugging Face. It includes reference implementations, profiling data, and performance comparisons against native PyTorch runtimes.

Sakana AI has also launched an interactive website where users can explore the dataset and leaderboard rankings of optimised kernels. The platform provides access to kernel code, performance metrics, and related optimisation experiments.

The post Sakana’s AI CUDA Engineer Delivers Up to 100x Speed Gains Over PyTorch appeared first on Analytics India Magazine.

]]>
ECI Mandates Labelling of AI Generated Content in Political Campaigns https://analyticsindiamag.com/ai-news-updates/eci-mandates-labelling-of-ai-generated-content-in-political-campaigns/ Thu, 16 Jan 2025 07:22:33 +0000 https://analyticsindiamag.com/?p=10161532

Disclaimers must accompany campaign advertisements or promotional materials utilising synthetic content.

The post ECI Mandates Labelling of AI Generated Content in Political Campaigns appeared first on Analytics India Magazine.

]]>

Ahead of the Delhi assembly elections, the Election Commission of India (ECI) has reinforced its directive for political parties to label and disclose AI-generated and synthetic content used in election campaigns. 

The advisory mandates that parties explicitly label images, videos, audio, or other materials significantly altered by AI technologies with notations such as “AI-Generated,” “Digitally Enhanced,” or “Synthetic Content.”

Additionally, disclaimers must accompany campaign advertisements or promotional materials utilising synthetic content.

Chief Election Commissioner Rajiv Kumar has consistently warned about the dangers of AI and deepfakes exacerbating misinformation. In a statement, he emphasised that such technologies have the potential to undermine public trust in electoral processes. 

In a letter to the presidents, general secretaries, and chairpersons of all national and state-recognised political parties, ECI Joint Director Anuj Chandak highlighted how advancements in AI have enabled the creation of highly realistic synthetic content, including images, videos, and audio.

Acknowledging the growing impact of AI-generated and synthetic content on public opinion, the Election Commission has urged political parties, their leaders, candidates, and star campaigners to prominently label such content when shared on social media or other platforms during campaigns.

Last year, during the Lok Sabha elections, the ECI issued guidelines for the ethical and responsible use of social media platforms, further demonstrating its commitment to maintaining transparency and fairness in campaigns.

The latest advisory aligns with the ECI’s broader efforts to ensure a level playing field in elections, particularly through responsible use of AI and digital platforms. During the Global Election Leaders Summit (GELS) 2024, the Commission reiterated the importance of ethical practices in leveraging technology for electoral campaigns.

The post ECI Mandates Labelling of AI Generated Content in Political Campaigns appeared first on Analytics India Magazine.

]]>
‘India Has Missed the GenAI Bus and No Amount of Funds Can Cover it’ https://analyticsindiamag.com/ai-features/india-has-quietly-lost-the-genai-bus-no-amount-of-funds-can-cover-it/ Thu, 16 Jan 2025 05:00:00 +0000 https://analyticsindiamag.com/?p=10161498

In 2024, the Indian tech landscape raised around $11.3 billion from investors, which is negligible compared to the West’s $184 billion.

The post ‘India Has Missed the GenAI Bus and No Amount of Funds Can Cover it’ appeared first on Analytics India Magazine.

]]>

With each passing week in the global AI landscape, the goalpost for building generative AI and competing with players like Google and OpenAI seems to be ever-changing. A few years ago, Google introduced the Transformer architecture, followed by OpenAI’s ChatGPT. This year, the conversation is around agentic AI.

Despite pouring in billions of dollars, India seems to be quietly losing the race, because those billions are simply not enough. In 2024, the Indian tech landscape raised around $11.3 billion from investors, which is negligible compared to the West’s $184 billion.

The only brighter side is that building a product in India is far cheaper than in the West, along with the availability of a vast and affordable talent pool.

HCLTech also revealed that it is aiming to integrate AI services for 100 clients by FY26. “Generative AI is getting…real. The cost of using an LLM or conversational model has dropped by over 85% since early 2023, making more use cases viable,” said CEO and MD Vijayakumar C. 

Despite this, nothing revolutionary has come out of India. “The pace of AI progress is so rapid that we simply cannot catch up by relying on engineers and researchers alone. Without big corporations or government backing, India’s GenAI dreams will remain just that – dreams,” said a researcher in a Reddit discussion titled ‘India has quietly lost the Gen-AI bus also, and no amount of investment will cover it now’.

Cost Drops for Services, Not for Building Products

Generative AI and quantum computing require billions in funding. While US giants like Google, OpenAI, Anthropic, and Microsoft lead the charge, and China uses open-source strategies to scale, India’s resources and interest in research are inadequate.

Many believe India has already missed the bus. A recurring theme in discussions with AIM is India’s lack of fundamental research in areas like Transformer architectures and their hardware execution. While countries like the US and China are making significant strides, India’s contribution remains negligible, which, to some extent, can be attributed to the lack of funding.

Developing generative AI models demands immense capital, yet India’s private sector remains reluctant to invest in long-term research. This is clearly highlighted in the earnings calls of the country’s big IT firms.

Vedant Maheshwari, CEO of quso.ai, believes foundational AI requires significant capital and patience, which is harder to secure in India. “While funding here is substantial, it’s mostly application-focused rather than foundational,” he explained. 

A student from a premier Indian institute observed, “The research output from China in just the past two years has placed them decades – if not a century – ahead of us.” 

The volume and quality of papers emerging from the US and Chinese institutions reflect a culture that prioritises innovation over mere service delivery. While only 1.4% of papers from India contributed to top AI research conferences, the US and China accounted for 30.4% and 22.8%, respectively.

The Indian government, possibly constrained by limited budgets, struggles to fill the gap.

This sentiment is echoed across the board. For instance, a quantum computing researcher shared how a company offered them just ₹20,000 to conduct advanced research.

So What’s the Point?

While speaking with AIM, several industry leaders agreed that there was no point in competing to build the largest LLM. To put this in perspective, TCS chief K Krithivasan recently said that there is no huge advantage for India in building its own LLMs, since there are already so many available.

This aligns with Infosys co-founder Nandan Nilekani’s idea of making India the AI use case capital of the world.

The reason is simple – lack of capital. “Who will give $200 million to a startup in India to build an LLM?” Mohandas Pai, head of Aarin Capital and former CFO of Infosys, told AIM when asked about the lack of innovation from Indian IT.

“Why is nothing like Mistral coming from India?” he asked rhetorically. “There is nobody…Creating an LLM or a big AI model requires large capital, time, a huge computing facility, and a market. All of which India does not have.”

Though India has startups like Sarvam, TWO, and Krutrim building products, the impact that they have created when compared to something like ChatGPT is minuscule, simply due to the vast difference in investments.

Despite this, there are predictions that India will have around 100 AI unicorns over the next decade.

To put things into perspective, Anthropic is looking to raise $2 billion in a funding round, raising its valuation to $60 billion. In comparison, Krutrim raised a $50 million round, and Sarvam AI raised $41 million. 

While speaking at Cypher 2024, Pai called on the Indian government to significantly increase its investment in AI. He pointed out that although the central government spends ₹90 lakh crore annually, only ₹3,000 to ₹4,000 crore is allocated for innovation – a sum he referred to as “peanuts”. “The government of India should invest ₹50,000 crore in AI.” If that happens, the Indian tech ecosystem probably won’t struggle for funds. 

Focus on Short Term

India’s tech sector continues to prioritise short-term gains from outsourced IT services rather than investing in creating globally competitive products. Indian startups, too, are busy building API wrappers for SaaS instead of pushing the boundaries of core research, largely because of funding constraints.

Amit Sheth, the chair and founding director of the Artificial Intelligence Institute of South Carolina (AIISC), earlier told AIM that only a handful of Indian universities are able to publish research at top conferences.

“In the USA, all the projects they get to work on involve advancing the state-of-the-art (research),” Sheth added. He also highlighted the issue of a publication racket prevalent in India and several other developing countries, with only a handful of researchers from select universities standing out as exceptions.

India’s elite institutions, such as the Indian Institute of Science (IISc), are also hamstrung by limited budgets. Notably, the institute’s entire budget is around ₹1,000 crore, which is barely enough to compete with global AI research.

India’s academic framework, especially in engineering and technology, is increasingly criticised for emphasising quantity over quality. Students are often required to publish multiple research papers, many of which lack originality. 

Despite the gloomy outlook, some believe there’s hope for the future if immediate corrective measures are taken. India needs a paradigm shift in its approach to education, funding, and research. The global race in generative AI is a high-stakes game, and India appears to be losing.

The post ‘India Has Missed the GenAI Bus and No Amount of Funds Can Cover it’ appeared first on Analytics India Magazine.

]]>
Accenture Hits Record $4.2 Billion in Generative AI Sales https://analyticsindiamag.com/ai-news-updates/accenture-hits-record-4-2-billion-in-generative-ai-sales/ Fri, 20 Dec 2024 04:39:28 +0000 https://analyticsindiamag.com/?p=10144020

In its last earnings call in September 2024, Accenture announced $1 billion revenue from generative AI, which was a jump from $900 million in the previous quarter.

The post Accenture Hits Record $4.2 Billion in Generative AI Sales appeared first on Analytics India Magazine.

]]>

Accenture, in its latest earnings call for Q1 FY25, reported record generative AI bookings of $1.2 billion, bringing the total to $4.2 billion since September 2023. This marks the highest quarterly bookings in the segment, reflecting growing client investments in generative AI.

In its previous earnings call in September 2024, Accenture announced $1 billion in generative AI sales, a jump from $900 million in the quarter before. The company was the first in its industry to disclose generative AI deal values; in June 2023, it reported $100 million in pure-play generative AI projects for the quarter. Quarterly generative AI bookings have thus grown roughly twelvefold since that first disclosure.

“It’s not really different from the kinds of productivity that we’ve been experiencing. And here, of course, there’s an added wrinkle in that generative AI. In order for us to use it with our clients, they have to allow us to use it and they have to prioritise,” said Julie Sweet, Accenture’s CEO, during the company’s post-earnings call.

She emphasised that organisations must first invest in building robust data foundations before scaling AI initiatives. “We do not currently see an improvement in overall spending by our clients, particularly on smaller deals. When those market conditions improve, we will be well positioned to capitalise on them as we continue to meet the demand for the critical programmes our clients are prioritising.”

In the August 2024 quarter, Accenture secured $1 billion in generative AI orders, bringing its yearly total to $3 billion. The segment accounted for 6.4% of its overall $18.7 billion bookings for the quarter.

Accenture has also added 24,000 employees in the quarter, bringing its total workforce to 799,000, with a significant portion of hiring concentrated in India. Sweet added that the company has steadily increased its data and AI workforce, reaching approximately 57,000 practitioners.

Despite its strong performance, Accenture maintained a cautious outlook on the global economy. The company’s revenue for the first quarter stood at $17.69 billion, a 7.8% sequential increase, leading to a revision in full-year growth projections to 4-7%, up from 3-6%.

Indian IT firms, however, are yet to disclose such revenues from generative AI.

In October, Accenture partnered with NVIDIA to launch Accenture NVIDIA Business Group, aimed at helping enterprises scale AI adoption. This initiative includes training for 30,000 professionals globally to assist clients in reinventing processes and expanding the use of enterprise AI systems.

The post Accenture Hits Record $4.2 Billion in Generative AI Sales appeared first on Analytics India Magazine.

]]>
Composable Architectures Are Non-Negotiable https://analyticsindiamag.com/ai-highlights/composable-architectures-are-non-negotiable/ Mon, 16 Dec 2024 09:50:53 +0000 https://analyticsindiamag.com/?p=10143637

The modularity of composable architectures enables a low-code or no-code approach, opening AI development to a wider audience and accelerating the adoption of generative AI across industries.

The post Composable Architectures Are Non-Negotiable appeared first on Analytics India Magazine.

]]>

In the rapidly evolving world of generative AI applications, composable architecture is emerging as a framework for building scalable AI-powered applications. Integrating independent, modular components via APIs enables developers to create customised solutions quickly. 

“When we talk about composable architecture, we really mean building larger systems by assembling smaller, independent modules that can be easily swapped in and out,” explained Jaspinder Singh, principal consultant at Fractal, during an interaction with AIM

Designing for Scale with Composable Architecture 

This modular approach enables developers to build applications by combining several smaller modules, providing flexibility, scalability, and enhanced control over the deployment of AI solutions. 

“Not every project needs this approach,” Singh pointed out. “If you’re building something small and straightforward, you might be overthinking it if you try to make everything modular. But if you are planning to scale up, if lots of people will use your application, that’s when composable architecture really shines.” 

Scalability is one of the primary benefits of composable architectures. Singh emphasised that individual modules can be scaled independently based on demand, optimising resources for generative AI applications.

For example, a data processing module might require more frequent scaling than a front-end user interface. This selective scaling manages costs by avoiding unnecessary resource allocation.

The composable architecture is designed to allow for rapid experimentation and granular control, which are especially useful in the fast-paced world of generative AI. With new AI models appearing frequently, the ability to integrate and test them with minimal disruption to the overall system ensures that applications stay relevant and current.

The composable paradigm also allows a balance between custom development and leveraging off-the-shelf modules. Using modular APIs and established components for routine tasks allows developers to focus on refining specific business logic, reducing time to market and enabling faster iterations. 

“Companies can’t afford to spend months building everything from the ground up anymore,” according to Singh. “These modular components let you move quickly and stay competitive, especially in fast-paced tech-powered industries.”

Integrating Foundation Models in Generative AI Systems

Foundation models serve as fundamental building blocks in composable generative AI systems. These models, which serve as a base layer, can be fine-tuned or augmented for specific tasks, providing a versatile starting point within modular applications. 

A content creation system exemplifies this flexibility: organisations can integrate GPT-4 for text generation alongside image generation models like Flux-pro, resulting in a seamless workflow. This modular approach enables strategic combinations of best-fitting AI capabilities.

According to Singh, the output from each model can be routed to specialised modules for further processing, such as plagiarism detection, grammar correction, or style enhancement. This results in a robust but flexible workflow in which each component performs its specialised function while maintaining system cohesion. 
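
A minimal sketch of that routing, with every stage stubbed out, might look like the following; the module names and internals are invented for illustration:

```python
# Illustrative composable pipeline: each stage is an independent module
# behind a uniform interface, so any one can be swapped without touching
# the others. Stage internals here are stubs, not real model calls.
from typing import Callable

Stage = Callable[[str], str]

def text_generator(brief: str) -> str:   # a GPT-4 API call in practice
    return f"Draft article about {brief}"

def grammar_checker(text: str) -> str:   # a dedicated grammar model
    return text.replace("  ", " ")

def style_enhancer(text: str) -> str:    # a fine-tuned style model
    return text + " (house style applied)"

PIPELINE: list[Stage] = [text_generator, grammar_checker, style_enhancer]

def run(brief: str) -> str:
    out = brief
    for stage in PIPELINE:  # replace any stage without rebuilding the system
        out = stage(out)
    return out

print(run("composable GenAI architectures"))
```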

The architecture excels in adaptability, and organisations can improve or replace individual components as technology evolves, ensuring that their AI systems remain current without requiring complete rebuilds. 

Architecting Better Prompt Management

Prompt engineering is critical in generative AI applications, but managing and optimising prompts at scale poses significant challenges. Composable architecture addresses this issue by treating prompt management as a separate module within the overall system.

“We have seen organisations struggle with prompt consistency and version control,” Singh pointed out. “By incorporating a centralised prompt library into the composable architecture, teams can standardise their approaches while remaining flexible. This is especially useful when combined with experimentation features like A/B testing prompts, models, and data variations.”

The composable architecture enables this structured approach to prompt management, monitoring, and model evaluation by allowing developers to manage each of these activities within their own modules.
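
As a rough sketch of such a module, consider a versioned prompt library with weighted A/B selection. The schema and names below are assumptions for illustration, not Fractal’s implementation:

```python
# Hypothetical centralised prompt library with versioning and A/B selection.
import random

PROMPT_LIBRARY = {
    ("support_reply", "v1"): "Answer the customer politely: {query}",
    ("support_reply", "v2"): "You are a support agent. Resolve: {query}",
}

AB_SPLIT = {"support_reply": [("v1", 0.5), ("v2", 0.5)]}  # experiment weights

def get_prompt(name: str, **kwargs) -> tuple[str, str]:
    versions, weights = zip(*AB_SPLIT[name])
    version = random.choices(versions, weights=weights)[0]
    return version, PROMPT_LIBRARY[(name, version)].format(**kwargs)

version, prompt = get_prompt("support_reply", query="refund status")
# Log `version` alongside downstream quality metrics to score the experiment.
```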

Security and Compliance in Composable Generative AI Systems

While composable architectures provide increased flexibility, they also present unique security and compliance challenges. The distributed nature of modular generative AI systems requires data security management, as sensitive data may flow through multiple modules. 

Compliance with data protection laws is critical, especially when data needs to move beyond an organisation’s infrastructure. In such cases, only necessary data should be transferred, with all confidential information handled securely in-house.

Moreover, generative AI models may be vulnerable to adversarial attacks, in which malicious inputs attempt to manipulate model behaviour. Singh recommends that input and output vetting should be a regular part of the composable AI pipeline, along with secure communication channels and access control mechanisms. A strong data governance framework, as well as regular security audits, help to ensure the security of application environments.
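
A bare-bones version of that vetting layer might look like the following; the patterns and redaction policy are crude illustrative assumptions, not a production guardrail:

```python
# Illustrative input/output vetting for a composable GenAI pipeline.
import re

BLOCKED_INPUT = [r"ignore (all|previous) instructions", r"reveal.*system prompt"]
PII_PATTERN = re.compile(r"\b\d{12}\b|\b[\w.]+@[\w.]+\b")  # crude ID/email match

def vet_input(user_text: str) -> str:
    for pattern in BLOCKED_INPUT:
        if re.search(pattern, user_text, re.IGNORECASE):
            raise ValueError("potential prompt-injection attempt blocked")
    return user_text

def vet_output(model_text: str) -> str:
    # Redact anything that looks like PII before it leaves the module.
    return PII_PATTERN.sub("[REDACTED]", model_text)
```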

Composing the Way Forward

The flexibility of composable architectures offers a promising path forward for generative AI applications. As standardised interfaces evolve, Singh highlights that organisations can avoid vendor lock-in and experiment with competing AI solutions to find those best suited to their needs. 

The modularity of composable architectures facilitates a low-code or no-code approach, making AI development more accessible and accelerating the adoption of generative AI across industries.

However, implementing composable architectures can be challenging. Integrating multiple modules and transitioning from experimental to production environments presents challenges, especially as AI tools and technologies advance rapidly. Data privacy, intellectual property rights, and model reliability remain key areas of focus, demanding ongoing attention as organisations scale their generative AI applications.

Singh recommends comprehensive monitoring throughout the AI application lifecycle, from ideation to deployment, to ensure that modular generative AI systems operate seamlessly. Observability frameworks and GenAIOps practices can track metrics such as model accuracy, application performance, and cost efficiency. This would provide a comprehensive view of the system’s health and aid in the development of generative AI solutions that are both reliable and effective.

By embracing composable architectures, organisations can position themselves to adapt swiftly to AI’s evolving landscape, benefiting from the enhanced flexibility, scalability, and security that modular systems provide.

The post Composable Architectures Are Non-Negotiable appeared first on Analytics India Magazine.

]]>
How AI Dragons Set GenAI on Fire This Year https://analyticsindiamag.com/deep-tech/how-ai-dragons-set-genai-on-fire-this-year/ Wed, 27 Nov 2024 09:30:00 +0000 https://analyticsindiamag.com/?p=10141761

Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries.

The post How AI Dragons Set GenAI on Fire This Year appeared first on Analytics India Magazine.

]]>

If you thought the buzz around AI would die down in 2024, think again. Persistent progress in hardware and software is unlocking possibilities for GenAI, proving that 2023 was just the beginning.

2024 — the Year of the Dragon — marks an important shift as GenAI becomes deeply woven into the fabric of industries worldwide. Businesses no longer view GenAI as just an innovative tool. Instead, it is being welcomed as a fundamental element of their operational playbooks. CEOs and industry leaders, who recognise its potential, are now focused on seamlessly integrating these technologies into their key processes.

This year, the landscape evolved rapidly and generative AI became increasingly indispensable, progressing from an emerging trend to a fundamental business practice.

Scale and Diversity

An important aspect is the growing understanding of how GenAI enables both increased volume and variety of applications, ideas and content. 

The overwhelming surge in AI-generated content is leading to consequences we are just starting to uncover. According to reports, over 15 billion images were generated by AI in one year alone – a volume that once took humans 150 years to achieve. This highlights the need for the internet post-2023 to be viewed through an entirely new lens.

The rise of generative AI is reshaping expectations across industries, setting a new benchmark for innovation and efficiency. This moment represents a turning point where ignoring the technology is not just a lost opportunity, but could also mean falling behind competitors.

“The top open source models are Chinese, and they are ahead because they focus on building, not debating AI risks,” said Daniel Jeffries, chief technology evangelist at Pachyderm. 

China’s success is underpinned by its focus on efficiency and resource optimisation. With limited access to advanced GPUs due to export restrictions, Chinese researchers have innovated ways to reduce computational demands and prioritise resource allocation. 

“When we only have 2,000 GPUs, the team figures out how to use it,” said Kai-Fu Lee, AI expert and CEO of 01.AI. “Necessity is the mother of innovation.” 

He further highlighted how his company transformed computational bottlenecks into memory-driven tasks, achieving inference costs as low as 10 cents per million tokens. “Our inference cost is one-thirtieth of what comparable models charge,” Lee further said. 

The rise of Chinese AI extends beyond its borders, with companies like MiniMax, ByteDance, Tencent, Alibaba, and Huawei targeting global markets. 

MiniMax’s Talkie AI app, for instance, has 11 million active users, half of whom are based in the US. 

At the Wuzhen Summit 2024, analysts noted that as many as 103 Chinese AI companies were expanding internationally, focusing on Southeast Asia, the Middle East, and Africa, where the barriers to entry were lower than the Western markets. 

ByteDance has launched consumer-focused AI tools like Gauth for education and Coze for interactive bot platforms, while Huawei’s Galaxy AI initiative supports digital transformation in North Africa. 

AI Video Models 

Models like Kling and Hailuo have outpaced Western competitors like Runway in speed and sophistication, which represents a shift in leadership in this emerging domain. This is reflected in advancements in multimodal AI, where models like LLaVA-o1 rival OpenAI’s vision-language models by using structured reasoning techniques that break down tasks into manageable stages.

The Rugged Boundary

In 2023, it became clear that generative AI is not just elevating industry standards, but also improving employee performance. According to a YouGov survey, 90% of workers agreed that AI boosts their productivity. Additionally, one in four respondents use AI daily, with 73% using it at least once a week.

Another study revealed that, when properly trained, employees completed 12% more tasks 25% faster with the assistance of generative AI, while the overall quality of their work improved by 40%. The greatest improvements were seen among low-skilled workers. However, for tasks beyond AI’s capabilities, employees were 19% less likely to produce accurate solutions.

This dual nature has led to what experts call the ‘jagged frontier’ of AI capabilities.

On one side, AI now performs impressive abilities and tasks with remarkable accuracy and efficiency that were once deemed beyond machines’ reach. On the other hand, however, it struggles with tasks that require human intuition. These areas, defined by nuance, context, and complex decision-making, are where the binary logic of machines currently falls short.

Cheaper AI

As enterprises begin to explore the frontier of generative AI, we might see more AI projects take shape and become standard practice. This shift is driven by the decreasing cost of training LLMs, which, thanks to advancements in silicon optimisation, is expected to halve every two years. Even amid growing demand and global shortages, the AI chip market is set to become more affordable in 2024, with new alternatives to industry leaders like NVIDIA emerging.

Moreover, new fine-tuning techniques such as self-play fine-tuning are making it possible to strengthen LLMs without relying on additional human-defined data. These methods use synthetic data to develop better AI with fewer human interventions.

Unveiling the ‘Modelverse’

The decreasing cost is enabling more companies to develop their own LLMs and highlighting a clear trend towards accelerating innovation in LLM-based applications in the next few years.

By 2025, we will likely see the emergence of locally executed AI instead of cloud-based models. This shift is driven by hardware advances like Apple Silicon and the untapped potential of mobile device CPUs.

In the business sector, SLMs will likely find greater adoption by large and mid-sized enterprises because of their ability to address niche requirements. As implied by their name, SLMs are more lightweight than LLMs. This makes them perfect for real-time applications and easy integration across various platforms.

While LLMs are trained on massive, diverse datasets, SLMs concentrate on domain-specific data. In such cases, the data is often from within the enterprise. This makes SLMs tailored to industries or use cases, thereby ensuring both relevance and privacy. 

As AI technologies expand, so do concerns about cybersecurity and ethics. The rise of unsanctioned and unmanaged AI applications within organisations, also referred to as ‘Shadow AI’, poses challenges for security leaders in safeguarding against potential vulnerabilities.

Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries. This shift is expected to bring significant operational benefits, including improved risk assessment and enhanced decision-making capabilities.

Organisations are encouraged to view AI as a collaborative partner rather than just a tool. By effectively training ‘AI dragons’ to understand their capabilities and integrating them into workflows, businesses can unlock new levels of productivity and innovation.

The rise of AI dragons in 2024 represents a significant evolution in how AI is perceived and utilised. As organisations embrace these technologies, they must balance innovation with ethical considerations, ensuring that AI serves as a force for good.

The post How AI Dragons Set GenAI on Fire This Year appeared first on Analytics India Magazine.

]]>
From POC to Product: Measuring the ROI of Generative AI for Enterprise https://analyticsindiamag.com/ai-highlights/from-poc-to-product-measuring-the-roi-of-generative-ai-for-enterprise/ Wed, 13 Nov 2024 06:30:00 +0000 https://analyticsindiamag.com/?p=10140838

Measuring the ROI of GenAI investments is not as straightforward as calculating the savings from a new software tool.

The post From POC to Product: Measuring the ROI of Generative AI for Enterprise appeared first on Analytics India Magazine.

]]>

The years 2023 and 2024 have been game-changers in the world of AI. What initially started as a subtle shift towards automation has now turned into a full-blown revolution, disrupting traditional ways of doing business. Generative AI is no longer seen as just an extension of AI but as a distinct technology with diverse applications. 

Vijay Raaghavan, the head of enterprise innovation at Fractal, highlights this transformation, particularly focusing on how organisations are now moving beyond experimentation to actively invest in generative AI solutions and maximise their value.

Consumers Lead, Enterprises Follow

Interestingly, Raaghavan noted that the early traction for GenAI wasn’t driven by businesses but by consumers. The virality of tools like ChatGPT caught the attention of millions, compelling enterprises to take notice. Since it took ChatGPT only about two months to reach 100 million users, the business world couldn’t ignore the potential of generative AI and, more specifically, LLMs.

Soon, enterprises began experimenting with LLMs, and some eventually started building generative AI solutions. Two years on from ChatGPT’s debut, generative AI is no longer an experiment but a reality.

“Leaders in the boardrooms began to ask whether their organisations should start building GenAI products,” Raaghavan said.

POCs to Real-World Applications

By late 2023 and into 2024, the GenAI landscape experienced yet another shift. By now, what began as exploratory proof-of-concept (POC) projects with conversational AI tools and chatbots had turned into serious discussions about investment. If 2023 was a breakout year, 2024 turned out to be the build-out year.

At present, the conversation has veered away from experimentation to figuring out whether GenAI can be turned into a product for the company or if it’s just plug-and-play. This transition from POCs to real-world applications has presented new challenges, particularly when it comes to measuring value, which is still the toughest part of driving investment in generative AI.

This is the “moment of truth” for enterprises. “During the experimentation phase, companies asked if it made sense for their organisations. Now that they’ve moved past that, the question is about return on investment (ROI),” Raaghavan pointed out.

Possibly, 2025 and beyond will be about scaling these investments and realising their full potential. “We’ve moved from POCs to full-scale deployment. The next step is value maximisation,” he said. This is also visible among several Indian enterprises and IT giants as they increasingly push POCs to products for their clients.

Quantifying ROI: From FTEs to Conversion Rates

Measuring the ROI of GenAI investments is not as straightforward as calculating the savings from a new software tool. It involves a blend of quantitative and qualitative factors, from time saved to human value unlocked. “Whenever you talk about any investment, the CEO conversation is all about value and ROI.”

As the world speaks of replacing workers with generative AI, the most basic form of value measurement is productivity gains, typically measured in hours saved or full-time employees (FTEs) freed up. Few leaders discuss replacing people outright because it is a sensitive topic, but some are reallocating roles. 

For example, many content writers are becoming content reviewers because GPT models can generate drafts which humans just need to review and refine.

This shift is what Raaghavan describes as “human value unlocking”. GenAI allows organisations to elevate employees from mundane tasks to higher-order roles, which can lead to a more engaged workforce. While AI performs the redundant tasks, humans move up to more meaningful work.

While some aspects of GenAI’s value are difficult to quantify, tangible metrics are emerging, particularly around FTE savings. Some organisations are measuring how many FTEs have been saved by introducing GenAI. For example, if a task previously required 10 full-time employees, introducing GenAI might save two, freeing them up for other projects.

In addition to FTE savings, companies also measure digital engagement and conversion rates, especially in sectors like e-commerce. Organisations use metrics like percentage engagement and conversion to measure the impact of GenAI. For instance, a consumer might use GenAI to make a more informed purchase decision faster, which improves conversion rates.
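
As a rough illustration of how these quantitative pieces combine, the sketch below computes an annual ROI figure from FTE savings and conversion-lift revenue. Every figure and helper name here is an assumption for illustration, not Fractal’s methodology.

```python
# Illustrative GenAI ROI estimate (all figures are assumptions).

def genai_roi(fte_saved, loaded_cost_per_fte, conversion_lift_revenue, annual_genai_cost):
    """Annual ROI = (value unlocked - cost) / cost."""
    value = fte_saved * loaded_cost_per_fte + conversion_lift_revenue
    return (value - annual_genai_cost) / annual_genai_cost

# A task that needed 10 FTEs now frees up 2, at a $90k loaded cost each,
# plus an assumed $150k in revenue from improved conversion rates.
roi = genai_roi(fte_saved=2, loaded_cost_per_fte=90_000,
                conversion_lift_revenue=150_000, annual_genai_cost=200_000)
print(f"Estimated annual ROI: {roi:.0%}")  # -> Estimated annual ROI: 65%
```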

With so many companies adopting GenAI, staying competitive requires strategic investment. Raaghavan outlines a multi-layered approach: “Generative AI is not a plug-and-play solution. It requires the right data, hyperscale strategy, and long-term commitment.”

The post From POC to Product: Measuring the ROI of Generative AI for Enterprise appeared first on Analytics India Magazine.

]]>
Rabbitt AI Announces Strategic Applications of Generative AI in Defense https://analyticsindiamag.com/ai-news-updates/rabbitt-ai-announces-strategic-applications-of-generative-ai-in-defense/ Mon, 11 Nov 2024 12:57:59 +0000 https://analyticsindiamag.com/?p=10140786

Rabbitt AI’s technology integrates deep learning models with diverse sensor inputs—from infrared and radar to audio and video—to detect unauthorised intrusions, environmental anomalies, and abnormal activities without human intervention. 

The post Rabbitt AI Announces Strategic Applications of Generative AI in Defense appeared first on Analytics India Magazine.

]]>

Indian AI startup Rabbitt AI has launched a suite of GenAI tools to reshape military operations by minimising human involvement in high-risk zones. 

The core idea centres on reducing human exposure to danger. GenAI-powered drones, autonomous vehicles, and surveillance systems enable real-time threat detection and response, offering a safer, AI-driven alternative to traditional security methods. 

By incorporating diverse sensor data—from infrared and radar to audio and visual feeds—Rabbitt’s models detect unauthorised movements, environmental anomalies, and abnormal activities without human intervention.

“We are not very far from a future where AI with limbs can dominate battlefields,” said Harneet Singh, Rabbitt AI’s chief, who was previously an AI consultant to the South Korean Navy.

“One of our key missions is to protect lives at the borders by creating situationally aware, autonomous AI systems that can respond to threats by observing and analyzing sensor data in real-time,” he added. 

Singh highlighted the technology’s autonomy, saying, “With AI-powered systems, we can now provide uninterrupted, unbiased monitoring that ensures both coverage and efficiency, all while reducing operational costs.”

In addition to reducing personnel risks, Rabbitt’s GenAI tools also help streamline resources by automating many surveillance functions. The technology minimises reliance on human labour, which Singh says “not only reduces costs but increases accuracy, freeing military personnel to focus on strategic tasks.” The system’s AI-driven detection capabilities also lower the need for costly corrective actions.

Rabbitt AI is also advancing “human-machine teaming” by pairing GenAI with unmanned drones and ground vehicles to increase adaptability in hard-to-reach terrains. According to Singh, “This tech enables real-time situational awareness, allowing command centres to get immediate insights without the delay of human reporting, even in complex environments like urban areas or mountainous regions.”

Singh, an IIT Delhi alumnus recognised by DRDO and Indian military officials with an honorary medal, emphasised Rabbitt AI’s broader vision for defence. “Our work goes beyond developing AI models,” he said. “We are building a defence ecosystem where AI serves as a force multiplier, enhancing every soldier’s capabilities while increasing situational awareness and reducing decision-making time.”

Founded by Singh, Rabbitt.ai focuses on generative AI solutions, including custom LLM development, RAG fine-tuning, and MLOps integration. The company recently raised $2.1 million from TC Group of Companies and investors connected to NVIDIA and Meta. 

The company recently appointed Asem Rostom as its Global Managing Director to lead expansion across the MENA and Europe regions. Before this role, Rostom served as the managing director at Simplilearn.

The company has also launched Rabbitt Learning, a new division focused on transforming education access and workforce readiness in the MENA region. As a part of its expansion, Rabbitt AI has opened a new office in Riyadh, Saudi Arabia, to meet the growing demand for Gen AI skilling courses and digital transformation projects in the Gulf countries.

The post Rabbitt AI Announces Strategic Applications of Generative AI in Defense appeared first on Analytics India Magazine.

]]>
Can Google Beat AI Rivals and Keep the Ad Cash Rolling? https://analyticsindiamag.com/ai-features/can-google-beat-ai-rivals-and-keep-the-ad-cash-rolling/ Tue, 05 Nov 2024 13:00:00 +0000 https://analyticsindiamag.com/?p=10140251

The rise of AI-driven search engines, driven by ChatGPT and Perplexity, poses a significant threat to Google’s search and ad revenue dominance.

The post Can Google Beat AI Rivals and Keep the Ad Cash Rolling? appeared first on Analytics India Magazine.

]]>

Tech giant Google’s advertising business continues to drive its financial performance, with the company posting its highest-ever ad revenue of $65.85 billion for Q3 FY24. This constitutes nearly 75% of its total revenue. 

Amid the company’s expansion drive through Google Cloud and AI innovations and investments, advertising revenue remains the core of its operations. Ad revenue increased 10% year-on-year, indicating the company’s dominance in search-based and video ads. 

Search revenue alone contributed $49.39 billion to the total, a 12% increase from last year. YouTube also performed well, earning $8.92 billion for the quarter. 

Sundar Pichai, CEO of Alphabet, which owns Google, said, “The momentum across the company is extraordinary. Our commitment to innovation, as well as our long-term focus and investment in AI, are paying off with consumers and partners benefiting from our AI tools.”

Quarter  | Ad Revenue (in billions) | Year-over-Year % Change
Q3 2024  | $65.8                    | 10.41%
Q3 2023  | $59.6                    | 9.48%
Q3 2022  | $54.4                    | 2.54%
Q3 2021  | $53.1                    | N/A

The company’s reliance on ad revenues is nothing new. Over the past few years, its ad revenues have been on an upward trajectory, with Q3 marking the highest-ever earnings. Google has successfully leveraged its search engine, user data, and AI features to make its ads more relevant and drive call-to-action behaviour. 

The company believes the ads on AI Overviews, a feature that summarises content relevant to the search query and displays it under the search box, have allowed users to quickly connect with relevant businesses and services, thereby making ads more relevant. 

Rising Competition in the Ad Space 

However, its heavy reliance on ads makes its business challenging in a competitive landscape. The rise of AI-driven search engines, led by ChatGPT and Perplexity, poses a significant threat to Google’s search and ad revenue dominance. 

Both ChatGPT and Perplexity are releasing Chrome extensions for their search engines. Meanwhile, Meta, Google’s main advertising rival, is also reportedly entering the search engine space. Meta has shown consistent progress over the last two years, with its ad revenue for Q3 FY24 touching $39.9 billion, an 18.7% jump year-on-year. 

Perplexity co-founder and CEO Aravind Srinivas posted on X about Google’s approach to raking in ad revenue, even as his own company enters the search space. 

With these developments, Google can afford neither to lose focus on ad revenue nor to ignore the emerging competitors. 

Meta is developing an AI search engine to answer queries on Meta AI; currently, it relies on Google and Microsoft’s Bing for this. It is obvious that dominant players are trying to build their own ecosystems to ensure their customers stay on their portals with minimal dependency on competitors. 

Meta has the advantage of a large user base and data from Facebook and Instagram platforms, so training the AI search platform might not be problematic. Meta’s web crawler is already scraping data for AI training. The company has even partnered with publications such as Reuters to bring news-related answers. 

AI Powers Ads

Even as the company’s Q3 FY24 results surpassed analyst expectations on both the top and bottom lines, with consolidated revenues at $88.3 billion and Google Cloud revenue up 35% to $11.4 billion, advertising income remains a key growth driver. 

Google Cloud’s growth was led by accelerated momentum in Google Cloud Platform (GCP) across AI infrastructure, generative AI solutions, and core GCP products.

Pichai credited their long-term focus and investment in AI as key drivers of success for the company and its customers, even highlighting the Gemini API’s 14x growth over the past six months.

Google claims that both customers and advertisers have found that AI features improve the user experience across its products and services. Advertisers have been using Gemini to build and test ad creatives at scale. 

Google’s latest text-to-image model, Imagen 3, was updated in Google Ads. The model was tuned with ad-performance data across industries to provide customers with high-quality images for their campaigns. 

It’s interesting to note that integrating AI-powered features into Search has also been economical. Pichai mentioned that when the company first began testing AI Overviews, it had lowered machine costs per query ‘significantly’, and in the 18 months since, costs have been reduced by more than 90%. 

“AI is expanding our ability to understand the intent and connect it to our advertisers. This allows us to connect highly relevant users with the most helpful ad, and deliver business impact to our customers,” said Philipp Schindler, SVP and CBO at Google, on the earnings call. 

The post Can Google Beat AI Rivals and Keep the Ad Cash Rolling? appeared first on Analytics India Magazine.

]]>
The Transformative Impact of Generative AI on IT Services, BPO, Software, and Healthcare https://analyticsindiamag.com/ai-highlights/the-transformative-impact-of-generative-ai-on-it-services-bpo-software-and-healthcare/ Tue, 22 Oct 2024 07:51:29 +0000 https://analyticsindiamag.com/?p=10139061

“As many as 91% of the respondents believe that GenAI will significantly boost employee productivity, and 82% see enhanced customer experiences through GenAI integration,” said the Technology Holdings panel while speaking at Cypher 2024, India’s biggest AI conference organised by AIM Media House.

The post The Transformative Impact of Generative AI on IT Services, BPO, Software, and Healthcare appeared first on Analytics India Magazine.

]]>

Technology Holdings, an award-winning global boutique investment banking firm dedicated to delivering M&A and capital-raising advisory services to technology services, software, consulting, healthcare life sciences, and business process management companies globally, recently launched its report titled “What Does GenAI REALLY Mean for IT Services, BPO, and Software Companies: A US $549 Billion Opportunity or Threat?”

“As many as 91% of the respondents believe that GenAI will significantly boost employee productivity, and 82% see enhanced customer experiences through GenAI integration,” said Venkatesh Mahale, Senior Research Manager at Technology Holdings, while speaking at Cypher 2024. He added that in the BPO sector, GenAI is expected to have the biggest impact, particularly in areas such as automation and advanced analytics.

Speaking about the impact of generative AI in the IT sector, Sriharsha KV, Associate Director at Technology Holdings, said, “IT services today generate approximately one-and-a-half trillion dollars in revenue, a figure expected to double in the next eight to ten years.”

He added that Accenture, the number one IT services company in the world, has started disclosing GenAI revenues, and their pipeline is already at a half-billion run rate for the year. “The pipeline has scaled from a few hundred million last year to, I would say, 300 to 400%. That makes us strongly believe that GenAI is real.”

He noted that data centre and chip companies are part of the upstream sectors, as they are responsible for creating the generative AI infrastructure. In contrast, IT services companies are downstream but are gaining momentum in automating building processes using GenAI.

Sriharsha stated that generative AI has a notable impact on testing, debugging, DevOps, MLOps, and DataOps.

The panel at Cypher further discussed the growing trends in mergers and acquisitions (M&A) driven by GenAI. “2023 was a blockbuster year for funding in GenAI, with $20 to $25 billion infused into the sector,” Sriharsha said. This surge in investment has also translated into increased M&A activity, particularly in the IT services and BPO sectors. “We’ve seen numerous acquisitions focused on integrating GenAI capabilities into industry-specific operations,” he added.

Sriharsha explained that in the BPO sector, GenAI is particularly disrupting contact centres. “By automating up to 70% of calls through a combination of chat, email, and voice interactions, companies can operate with fewer agents while maintaining service quality,” he said. This efficiency allows organisations to redirect resources to higher-value tasks, reshaping the way BPOs operate.

Enhancing Healthcare with GenAI


“India has a population of around 1.4 billion, but there is still a dearth of doctors and nurses,” said Anant Kharad, Vice President at TH Healthcare & Life Sciences. He added that generative AI has several use cases in the healthcare industry that can help solve these problems.

“GenAI will analyse my medical records and try to identify the issues I faced in the past and what I’m experiencing now. It will create a summary of all that and then provide it to the nurse for review, who will handle the initial treatment for the outpatient department. The doctor can then take it from there instead of nurses going through tons of paperwork,” he explained.

He said that this not only enhances patient care but also optimises healthcare workflows, allowing medical staff to focus on more complex cases. Moreover, he added that GenAI is playing a vital role in drug discovery and patient care strategies. “It is working with companies that reverse Type 2 diabetes,” Kharad shared. “It has used machine learning to analyse data from thousands of patients, creating effective treatment curricula that can be rolled out globally,” he said.

The Long-Term Implications of Generative AI

As companies navigate the potential disruptions brought on by generative AI, the long-term impacts on business models and service offerings cannot be overlooked. According to Kharad, the need for traditional models, like manual contact centres, is already being questioned in the BPO sector.

“Testing and debugging in IT services are also being challenged,” he said, suggesting that companies must evolve or risk obsolescence. The healthcare sector, however, appears poised for positive disruption through the application of generative AI. Kharad shared specific examples of how AI can enhance efficiency, especially in diagnostics.

“For instance, instead of a radiologist reading 20 reports a day, AI could enable them to process 100 reports,” he explained. This not only increases operational efficiency but also optimises resource allocation in a sector often constrained by staff shortages.

Furthermore, Kharad pointed out that major players like Amazon are already using generative AI to automate prescription orders based on data inputs. “If AI can handle 90% of the workload, it will reduce costs and provide faster service for patients,” he said.

Kharad further elaborated on the healthcare sector’s response to M&A trends, noting that biotech and health-tech companies are at the forefront. “Pharmaceutical companies in India are partnering with start-ups to drive innovation in drug discovery,” he said. 

For those interested in exploring the implications of generative AI further, Technology Holdings has launched a comprehensive report on its impact on IT services, BPOs, and software companies. The report can be accessed here.

The post The Transformative Impact of Generative AI on IT Services, BPO, Software, and Healthcare appeared first on Analytics India Magazine.

]]>
Adobe Launches Content Authenticity Web App to Protect Creators’ Work from Generative AI Misuse https://analyticsindiamag.com/ai-news-updates/adobe-launches-content-authenticity-web-app-to-protect-creators-work-from-generative-ai-misuse/ Tue, 08 Oct 2024 13:00:00 +0000 https://analyticsindiamag.com/?p=10137790

Adobe’s web app includes a feature that allows creators to signal whether they want their work to be used by AI models.

The post Adobe Launches Content Authenticity Web App to Protect Creators’ Work from Generative AI Misuse appeared first on Analytics India Magazine.

]]>

Adobe has unveiled the Adobe Content Authenticity web app, a free tool designed to protect creators’ work and ensure proper attribution. This new app enables users to easily apply Content Credentials—metadata that serves as a “nutrition label” for digital content—ensuring their creations are safeguarded from unauthorised use. 

Supported by popular Adobe Creative Cloud apps such as Photoshop, Lightroom, and Firefly, Content Credentials provide key information about how digital works are created and edited, offering creators ways to claim ownership and protect their creations.

The company launched its Content Authenticity Initiative in 2019. With over 3,700 members backing this industry standard, the initiative aims to combat misinformation and AI-generated deepfakes. Adobe’s new web app builds on this legacy, offering a centralised platform where creators can apply, manage, and customise their Content Credentials across multiple files, from images to audio and video.

Enhancing Creator Control 

A recent Adobe study revealed that 91% of creators want a reliable method to attach attribution to their work, with over half expressing concerns about their content being used to train AI models without their consent. In response, Adobe’s web app includes a feature that allows creators to signal whether they want their work used by AI models, ensuring their rights are respected.

“Adobe is committed to responsible innovation centered on the needs and interests of creators,” said Scott Belsky, chief strategy officer at Adobe. “By offering a simple, free way to attach Content Credentials, we are helping creators preserve the integrity of their work, while enabling a new era of transparency and trust online.”

The app also offers features such as batch credential application and the ability to inspect content for associated credentials through a Chrome extension. This ensures that the information remains visible, even if platforms or websites fail to retain it.

With this new tool, Adobe is not only empowering creators to protect their work but is also driving a broader push for transparency across the digital ecosystem. The company has gone all in on generative AI. Last month, it introduced new features in Adobe Experience Cloud, including Adobe Content Analytics and real-time experimentation tools. These will help personalise, test, and evaluate AI-generated content across various channels while offering actionable insights to improve marketing performance and boost customer engagement.

The post Adobe Launches Content Authenticity Web App to Protect Creators’ Work from Generative AI Misuse appeared first on Analytics India Magazine.

]]>
Generative AI Cost Optimisation Strategies https://analyticsindiamag.com/ai-highlights/generative-ai-cost-optimisation-strategies/ Thu, 03 Oct 2024 07:21:49 +0000 https://analyticsindiamag.com/?p=10137322

As an executive exploring generative AI’s potential for your organisation, you’re likely concerned about costs. Implementing AI isn’t just about picking a model and letting it run. It’s a complex ecosystem of decisions, each affecting the final price tag. This article will guide you to optimise costs throughout the AI life cycle, from model selection […]

The post Generative AI Cost Optimisation Strategies appeared first on Analytics India Magazine.

]]>

As an executive exploring generative AI’s potential for your organisation, you’re likely concerned about costs. Implementing AI isn’t just about picking a model and letting it run. It’s a complex ecosystem of decisions, each affecting the final price tag. This article will guide you to optimise costs throughout the AI life cycle, from model selection and fine-tuning to data management and operations.

Model Selection

Wouldn’t it be great to have a lightning-fast, highly accurate AI model that costs pennies to run? Since this ideal scenario does not exist (yet), you must find the optimal model for each use case by balancing performance, accuracy, and cost.

Start by clearly defining your use case and its requirements. These questions will guide your model selection:

  • Who is the user?
  • What is the task?
  • What level of accuracy do you need?
  • How critical is rapid response time to the user?
  • What input types will your model need to handle, and what output types are expected?

Next, experiment with different model sizes and types. Smaller, more specialised models may lack the broad knowledge base of their larger counterparts, but they can be highly effective—and more economical—for specific tasks.

Consider a multi-model approach for complex use cases. Not all tasks in a use case may require the same level of model complexity. Use different models for different steps to improve performance while reducing costs.
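
A minimal sketch of that multi-model idea appears below: route each step to the cheapest model that can handle it. The model names and the `call_model` stub are placeholders, not any specific provider’s API.

```python
# Multi-model routing sketch (model names and call_model are placeholders).

SMALL_MODEL = "small-specialised-model"   # cheap: classification, extraction, summaries
LARGE_MODEL = "large-general-model"       # expensive: open-ended reasoning

def call_model(model: str, prompt: str) -> str:
    """Stub for your provider's completion API."""
    raise NotImplementedError

def handle_step(step_type: str, prompt: str) -> str:
    # Route each step in the use case to the cheapest model that can handle it.
    cheap_steps = {"classify", "extract", "summarise"}
    model = SMALL_MODEL if step_type in cheap_steps else LARGE_MODEL
    return call_model(model, prompt)
```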

Fine-Tuning and Model Customisation

Pretrained foundation models (FMs) are publicly available and can be used by any company, including your competitors. While powerful, they lack the specific knowledge and context of your business.

To gain a competitive advantage, you need to infuse these generic models with your organisation’s unique knowledge and data. Doing so transforms an FM into a powerful, customised tool that understands your industry, speaks your company’s language, and leverages your proprietary information. Your choice to use retrieval-augmented generation (RAG), fine-tuning, or prompt engineering for this customisation will affect your costs.

Retrieval-Augmented Generation

RAG pulls data from your organisation’s data sources to enrich user prompts so they deliver more relevant and accurate responses. Imagine your AI being able to instantly reference your product catalogue or company policies as it generates responses. RAG improves accuracy and relevance without extensive model retraining, balancing performance and cost efficiency.
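
A minimal, self-contained sketch of the RAG pattern is shown below. The in-memory keyword retrieval stands in for a real vector store, and `call_model` is a stub for whatever completion API you use; the documents are invented examples.

```python
# RAG sketch: retrieve relevant company documents, then enrich the prompt with them.

DOCUMENTS = [
    "Policy: orders over $50 ship free.",
    "Catalogue: the X100 camera supports 4K output and weighs 300g.",
    "Policy: returns are accepted within 30 days of delivery.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Naive keyword overlap; a production system would use embeddings and a vector store.
    words = set(question.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def call_model(prompt: str) -> str:
    raise NotImplementedError  # swap in your provider's completion API

def answer_with_rag(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_model(prompt)  # no model retraining required
```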

Fine-Tuning

Fine-tuning means training an FM on additional, specialised data from your organisation. It requires significant computational resources, machine learning expertise, and carefully prepared data, making it more expensive to implement and maintain than RAG.

Fine-tuning excels when you need the model to perform exceptionally well on specific tasks, consistently produce outputs in a particular format, or perform complex operations beyond simple information retrieval.

We recommend a phased approach. Start with less resource-intensive methods such as RAG and consider fine-tuning only when these methods fail to meet your needs. Set clear performance benchmarks and regularly evaluate the gains versus the resources invested.

Prompt Engineering

Prompts are the instructions given to AI applications. AI users, such as designers, marketers, or software developers, enter prompts to generate the desired output, such as pictures, text summaries or source code. Prompt engineering is the practice of crafting and refining these instructions to get the best possible results. Think of it as asking the right questions to get the best answers.

Good prompts can significantly reduce costs. Clear, specific instructions reduce the need for multiple back-and-forth interactions that can quickly add up in pay-per-query pricing models. They also lead to more accurate responses, reducing the need for costly, time-consuming human review. With prompts that provide more context and guidance, you can often use smaller, more cost-effective AI models.
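
To illustrate, compare a vague prompt with an engineered one; the example is generic and hypothetical, but the pattern of adding role, format, and constraints is what cuts down costly retries.

```python
# A vague prompt invites back-and-forth; a specific one often succeeds in a single call.

vague_prompt = "Summarise this contract."

engineered_prompt = (
    "You are a legal assistant. Summarise the contract below for a sales team:\n"
    "- at most five bullet points\n"
    "- flag payment terms and termination clauses\n"
    "- plain English, no legal jargon\n\n"
    "Contract:\n{contract_text}"
)
```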

Data Management

The data you use to customise generic FMs is also a significant cost driver. Many organisations fall into the trap of thinking that more data always leads to better AI performance. In reality, a smaller dataset of high-quality, relevant data often outperforms larger, noisier datasets.

Investing in robust data cleansing and curation processes can reduce the complexity and cost of customising and maintaining AI models. Clean, well-organised data allows for more efficient fine-tuning and produces more accurate results from techniques like RAG. It lets you streamline the customisation process, improve model performance, and ultimately lower the ongoing costs of your AI implementations.

Strong data governance practices can help increase the accuracy and cost performance of your customised FM. Governance should include proper data organisation, versioning, and lineage tracking. On the other hand, inconsistently labelled, outdated, or duplicate data can cause your AI to produce inaccurate or inconsistent results, slowing performance and increasing operational costs. Good governance also helps ensure regulatory compliance, preventing costly legal issues down the road.

Operations

Controlling AI costs isn’t just about technology and data—it’s about how your organisation operates.

Organisational Culture and Practices

Foster a culture of cost-consciousness and frugality around AI, and train your employees in cost-optimisation techniques. Share case studies of successful cost-saving initiatives and reward innovative ideas that lead to significant cost savings. Most importantly, encourage a prove-the-value approach for AI initiatives. Regularly communicate the financial impact of AI to stakeholders.

Continuous learning about AI developments helps your team identify new cost-saving opportunities. Encourage your team to test various AI models or data preprocessing techniques to find the most cost-effective solutions.

FinOps for AI

FinOps, short for financial operations, is a practice that brings financial accountability to the variable spend model of cloud computing. It can help your organisation efficiently use and manage resources for training, customising, fine-tuning, and running your AI models. (Resources include cloud computing power, data storage, API calls, and specialised hardware like GPUs). FinOps helps you forecast costs more accurately, make data-driven decisions about AI spending, and optimise resource usage across the AI life cycle.

FinOps balances a centralised organisational and technical platform that applies the core FinOps principles of visibility, optimisation, and governance with responsible and capable decentralised teams. Each team should “own” its AI costs—making informed decisions about model selection, continuously optimising AI processes for cost efficiency, and justifying AI spending based on business value.

A centralised AI platform team supports these decentralised efforts with a set of FinOps tools and practices that includes dashboards for real-time cost tracking and allocation, enabling teams to closely monitor their AI spending. Anomaly detection allows you to quickly identify and address unexpected cost spikes. Benchmarking tools facilitate efficiency comparisons across teams and use cases, encouraging healthy competition and knowledge sharing.
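
The anomaly-detection idea can be sketched in a few lines: flag any day whose AI spend far exceeds the trailing average. The threshold, window, and spend figures below are illustrative assumptions, not a recommended configuration.

```python
# Flag days whose AI spend exceeds the trailing 7-day average by 50% (illustrative).

daily_spend = [120.0, 118.5, 130.2, 125.0, 119.8, 122.4, 121.0, 310.7]  # USD per day

def spend_anomalies(spend: list[float], window: int = 7, factor: float = 1.5) -> list[int]:
    anomalies = []
    for i in range(window, len(spend)):
        baseline = sum(spend[i - window:i]) / window  # trailing average
        if spend[i] > factor * baseline:
            anomalies.append(i)
    return anomalies

print(spend_anomalies(daily_spend))  # -> [7]: the $310.70 spike needs investigation
```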

Conclusion

As more use cases emerge and AI becomes ubiquitous across business functions, organisations will be challenged to scale their AI initiatives cost-effectively. They can lay the groundwork for long-term success by establishing robust cost optimisation techniques that allow them to innovate freely while ensuring sustainable growth. After all, success depends on perfecting the delicate balance between experimentation, performance, accuracy, and cost.

The post Generative AI Cost Optimisation Strategies appeared first on Analytics India Magazine.

]]>
Accenture and NVIDIA Partner to Train 30,000 Professionals to Scale Agentic AI for Enterprises https://analyticsindiamag.com/ai-news-updates/accenture-and-nvidia-partner-to-train-30000-professionals-to-scale-agentic-ai-for-enterprises/ Wed, 02 Oct 2024 13:32:56 +0000 https://analyticsindiamag.com/?p=10137269

Accenture AI Refinery platform will help companies commence their custom agentic AI journeys using the full NVIDIA AI stack.

The post Accenture and NVIDIA Partner to Train 30,000 Professionals to Scale Agentic AI for Enterprises appeared first on Analytics India Magazine.

]]>

Accenture and NVIDIA have expanded their partnership with the launch of a new Accenture NVIDIA Business Group, aimed at helping enterprises scale AI adoption. This initiative includes training for 30,000 professionals globally to assist clients in reinventing processes and expanding the use of enterprise AI systems. 

The new business group will leverage Accenture’s AI Refinery platform, which uses NVIDIA’s AI stack, to help companies accelerate their AI journeys. The AI Refinery will be available across public and private cloud platforms and aims to streamline AI-powered simulation, process reinvention, and sovereign AI.

Scaling Agentic AI Systems

Accenture’s AI Refinery is set to scale the next frontier of AI: agentic AI. “We are breaking significant new ground with our partnership with NVIDIA and enabling our clients to be at the forefront of using generative AI as a catalyst for reinvention,” said Julie Sweet, chair and CEO of Accenture. 

To support this initiative, Accenture is introducing a global network of AI Refinery Engineering Hubs in key regions, including Singapore, Tokyo, Malaga, and London. These hubs will focus on the large-scale development of AI models and operations. 

Jensen Huang, founder and CEO of NVIDIA, added, “AI will supercharge enterprises to scale innovation at greater speed.” This collaboration has already seen successful use cases, such as Indosat Group in Indonesia using agentic AI to develop industry-specific solutions in financial services.

Additionally, Accenture is debuting the NVIDIA NIM Agent Blueprint for virtual factory simulations, integrating NVIDIA Omniverse and Isaac software. Accenture’s marketing division has also begun using the AI Refinery platform with autonomous agents to streamline campaigns, achieving a 25-55% increase in speed to market.

Accenture has been on a roll adopting generative AI across its platforms, providing upskilling opportunities for its employees. 

Agentic AI has been a hot topic of discussion across major tech providers over the last few weeks. From Oracle to Salesforce, major SaaS players have unveiled a number of agentic AI products across their wide suites. There has also been a steady increase in autonomous database offerings for customers. 

The post Accenture and NVIDIA Partner to Train 30,000 Professionals to Scale Agentic AI for Enterprises appeared first on Analytics India Magazine.

]]>
Embracing the Future: How Agentic Systems are Revolutionising Enterprises https://analyticsindiamag.com/ai-features/embracing-the-future-how-agentic-systems-are-revolutionising-enterprises/ Wed, 02 Oct 2024 05:30:00 +0000 https://analyticsindiamag.com/?p=10137207

Sriram Gudimella from Tredence shared with AIM some valuable insights into the potential of these advanced systems that are poised to change how enterprises function.

The post Embracing the Future: How Agentic Systems are Revolutionising Enterprises appeared first on Analytics India Magazine.

]]>

Automation had already begun transforming industries before generative AI came into the picture. Now, the next frontier of innovation is marked by the rise of agentic systems, which are autonomous systems capable of dynamic decision-making, learning from feedback, and executing complex tasks with minimal human intervention.

Sriram Gudimella from Tredence shared with AIM some valuable insights into the potential of these advanced systems that are poised to change how enterprises function.

The distinction between traditional automation and agentic systems is profound. “Traditional automation is efficient at performing repetitive tasks but lacks the flexibility and learning capability of agentic systems,” Gudimella explained.

He emphasised that traditional automation systems require human intervention for updates or iterations, often leading to delays in incorporating feedback. In contrast, agentic systems operate autonomously, continuously learning from real-time data and user feedback, enabling ongoing improvements without the need for human oversight.

To simplify this concept, Gudimella likened traditional automation to a chess piece that can only move as instructed, while agentic systems act more like a chess master, strategically assessing the entire board and autonomously planning optimal moves. “A chess piece follows orders, but a chess master anticipates, adapts, and ensures the most valuable outcomes,” he added.

This analogy captures the essence of how agentic systems surpass traditional automation by leveraging autonomy and adaptability.

Real-World Applications of Agentic Systems

Agentic systems are already making an impact across various industries, from commodity trading to gaming, healthcare, logistics, and even agriculture.

Gudimella shared a compelling example from the commodity trading sector, where Tredence is helping a client develop an agentic system to make autonomous decisions based on factors like inventory levels, competitor information, and procurement rates.

“The goal is to create a system that can scale without human or subject-matter expert dependence, enabling seamless decision-making across multiple geographies,” he explained.

Meanwhile, Tredence is also assisting a gaming company in enhancing its decision-making processes. The agentic system analyses data on game performance across different geographies, determining which games and promotions are successful and why. This provides valuable insights that can be applied to future business strategies.

Agentic systems are also beginning to show promise in agriculture. “In advanced use cases, these systems are improving productivity and value by streamlining processes and optimising resources,” said Gudimella. The versatility of agentic systems allows them to be adapted for diverse applications, showcasing their potential to transform multiple industries.

Accelerating Digital Transformation

One of the most exciting aspects of agentic systems is their ability to accelerate digital transformation. “Tasks that used to take weeks can now be completed at the press of a button,” Gudimella noted. These systems break down complex tasks into smaller segments, assigning agents to handle specific elements while orchestrating the entire process.

This level of automation not only saves time but also ensures that resources are optimised, enhancing decision-making capabilities within organisations.

Agentic systems provide real-time insights by analysing vast amounts of data without the bias that often accompanies human decision-making. “They simulate different scenarios, testing various agents and tools to find the best solution in real-time,” Gudimella explained. This ability to quickly integrate diverse datasets and offer a holistic view allows businesses to make more informed, data-driven decisions.
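
The decomposition-and-orchestration pattern Gudimella describes might look like the sketch below, using the commodity-trading example. The agents are stubbed and the steps are illustrative assumptions, not Tredence’s actual implementation.

```python
# Orchestration sketch: break a task into steps, hand each to a specialised agent.
# Each agent is a stub; in practice it would call a model or an external tool.

def inventory_agent(task: str) -> str:
    return f"inventory levels checked for: {task}"

def pricing_agent(task: str) -> str:
    return f"procurement rates compared for: {task}"

def decision_agent(task: str) -> str:
    return f"recommended action drafted for: {task}"

PIPELINE = [inventory_agent, pricing_agent, decision_agent]

def orchestrate(task: str) -> list[str]:
    # The orchestrator sequences the agents; a real system could branch, retry,
    # or run agents in parallel and feed outcomes back as learning signals.
    return [agent(task) for agent in PIPELINE]

print(orchestrate("replenish copper stock across regions"))
```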

However, despite their promise, the implementation of agentic systems is not without challenges. Gudimella highlighted the need for skilled professionals capable of designing, implementing, and managing these systems. “Not every organisation has the requisite skill sets to handle such complex technology,” he said.

Additionally, the cost of deploying agentic systems can be significant, particularly for organisations without experience in managing these advanced solutions.

Gudimella also stressed the importance of guardrails and governance to ensure the reliability and accuracy of these systems. While they are autonomous, businesses must establish mechanisms to prevent errors or misuse. “Guardrails are crucial to prevent the system from delivering irrelevant responses or losing the users’ confidence,” he emphasised.

Furthermore, ethical concerns surrounding agentic systems must be addressed, particularly regarding data privacy and accountability. When using LLMs in agentic systems, businesses need to ensure that the data used to train these models is ethically sourced and free from biases.

“Accountability is a major concern,” Gudimella noted, questioning who would be responsible for the decisions made by autonomous systems.

The Future of Agentic Systems

Agentic systems are set to play a pivotal role in shaping the future of enterprise automation. Gudimella believes that the rise of small, specialised companies leveraging AI and agentic systems will transform industries. “We are already seeing solopreneurs and small teams achieving phenomenal results with AI, and I believe this trend will continue to grow,” he said.

Soon, companies will increasingly rely on AI agents to handle complex tasks, with fewer employees needed to manage these systems. “It’s all about building an ecosystem where each company provides specialised solutions, integrating with others to deliver comprehensive services,” Gudimella concluded.

The post Embracing the Future: How Agentic Systems are Revolutionising Enterprises appeared first on Analytics India Magazine.

]]>
Code Review Should Be Completely Taken Over by AI https://analyticsindiamag.com/ai-features/code-review-should-be-completely-taken-over-by-ai/ Mon, 30 Sep 2024 11:09:55 +0000 https://analyticsindiamag.com/?p=10136925

AI is just simply better at reviewing code than humans. Or is it?

The post Code Review Should Be Completely Taken Over by AI appeared first on Analytics India Magazine.

]]>

Writing code was not enough; the AI world has now decided to create tools that can monitor, edit, and even review code. It may well turn out that AI is better at reviewing code than humans, and computers are all you need. Brilliant examples of this are tools like CodeAnt, CodeRabbit, and SonarQube, which take the task of reviewing code into their own hands.

“Code reviews are dumb, and I can’t wait for AI to take over completely,” said Santiago Valdarrama, who believes we are not far from a point where the reviewing process might be completely automated. But the take does not come without contention. 

“My colleagues approve my PRs without even looking at it,” he noted, highlighting a common issue in code reviews. For him, an automated solution would be welcome. “When you review code, most of the time, you have no idea what you are even reading.”

While speaking with AIM, Amartya Jha, the co-founder and CEO of CodeAnt AI, said that developers spend 20 to 30% of their time just reviewing someone else’s code. “Most of the time, they simply say, ‘It looks good, just merge it,’ without delving deeper,” Jha explained. This leads to bugs and security vulnerabilities making their way into production.

Still, he said the quality of code generated by AI remains far from what humans produce. But when it comes to reviewing code, maybe AI could take over. Saurabh Kumar, another developer, argues, “I will let AI review my code when it can write better code than me—boilerplating doesn’t count.”

For better or worse, code reviews are part of the job

One of the key advantages of AI in code review is its ability to process vast amounts of data quickly, freeing up human developers to focus on higher-level tasks. As Mathieu Trachino pointed out, many code reviewers don’t actually dive deep into the code they’re supposed to evaluate. 

The debate ultimately boils down to whether AI can reach a level of understanding and context that is currently unique to human developers. Valdarrama pointed out that reviewing code is actually easier than writing it, implying that AI might be better suited to code review than code generation. However, some remain skeptical. 

Many developers like Trachino envision a future where AI can conduct code reviews more effectively than their human counterparts. Petri Kuittinen echoes this sentiment, noting that traditional line-by-line reviews are no longer cost-effective. 

While there’s optimism about AI taking over code reviews, many developers argue that a complete handover could overlook key human elements. Sebastian Castillo said, “Code review also serves to share knowledge between team members and as a way for more people to be familiar with the wider context of the product implementation,” highlighting the importance of a human touch in code review. 

AI can’t replace the collaborative learning and communication that occur during human-led code reviews, a view that recognises the benefits of AI while cautioning against eliminating human interaction entirely. 

Can AI Fully Replace Human Code Reviews?

Drawing a parallel between AI decision-making and the Boeing 737 Max incidents, a user argued that AI can enhance the process but should not replace human code reviews: “Boeing 737 Max programmers thought the same as well.”

In essence, AI lacks the capacity to understand the long-term strategic goals of a project. “For anything even remotely critical, my opinion is that AI code reviews are a terrible idea,” said a developer in a Reddit discussion. 

But this is also something that modern platforms like CodeAnt have addressed. One of CodeAnt AI’s standout features is its ability to allow enterprises to input their own data and create custom policies.
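
The general pattern behind such tools, deterministic custom-policy checks layered under a model’s judgement, can be sketched as follows. The policy rules and the `llm` stub are illustrative assumptions, not CodeAnt’s actual API or rule format.

```python
# Review-pass sketch: run enterprise policy checks first, then let a model comment.

import re

# Custom policy rules an enterprise might define (illustrative examples).
POLICIES = {
    r"print\(": "Use the project logger instead of print().",
    r"except\s*:\s*$": "Bare except clauses hide real errors.",
}

def policy_findings(diff: str) -> list[str]:
    """Deterministic checks that run before any model is consulted."""
    return [msg for pattern, msg in POLICIES.items()
            if re.search(pattern, diff, flags=re.MULTILINE)]

def review(diff: str, llm=None) -> list[str]:
    findings = policy_findings(diff)
    if llm is not None:
        findings.append(llm(diff))  # model adds context-aware comments on top
    return findings

print(review("+ print('debug')\n+ except:\n"))  # both policy rules fire
```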

In complex systems where safety and security are critical, human oversight remains essential, though companies are working to close that gap. While AI can already flag bugs, enforce style guides, and detect inefficiencies, the final call on whether code aligns with the larger system architecture and product vision should remain with humans—at least for the time being.

The post Code Review Should Be Completely Taken Over by AI appeared first on Analytics India Magazine.

]]>