Data Science News, Stories and Latest Updates 2025 https://analyticsindiamag.com/news/data-science/ News and Insights on AI, GCC, IT, and Tech

Oracle Deploys OpenAI GPT-5 Across Database and Cloud Applications https://analyticsindiamag.com/ai-news-updates/oracle-deploys-openai-gpt-5-across-database-and-cloud-applications/ Mon, 18 Aug 2025 16:32:52 +0000 https://analyticsindiamag.com/?p=10176125

GPT-5 is available in three sizes through the API and in ChatGPT Enterprise.

Oracle on August 18 announced that it has deployed OpenAI GPT-5 across its database portfolio and suite of SaaS applications, including Oracle Fusion Cloud Applications, Oracle NetSuite, and Oracle Industry Applications such as Oracle Health. The move allows customers to integrate business data with AI capabilities for coding, reasoning, and process automation.

“Oracle AI Vector and Select AI together with GPT-5 enable easier and more effective data search and analysis,” said Kris Rice, senior vice president, Database Software Development, Oracle. “Oracle’s SQLcl MCP Server enables GPT-5 to easily access data in Oracle Database. These capabilities enable users to search across all their data, run secure AI-powered operations, and use generative AI directly from SQL—helping to unlock the full potential of AI on enterprise data.”

GPT-5 is available in three sizes through the API and in ChatGPT Enterprise. The model is built to assist with code generation, editing, debugging, multi-step reasoning, and agent-driven automation across enterprise workflows.

“GPT-5 will bring our Fusion Applications customers OpenAI’s sophisticated reasoning and deep-thinking capabilities,” said Meeten Bhavsar, senior vice president, Applications Development, Oracle. “The newest model from OpenAI will be able to power more complex AI agent-driven processes with capabilities that enable advanced automation, higher productivity, and faster decision making.”

Oracle said the integration of GPT-5 with Oracle Database 23ai, Fusion Applications, and other tools is intended to help enterprises improve business insights, accelerate coding tasks, and orchestrate processes more efficiently. The company emphasises security, scalability, and adaptability in its deployment.

OpenAI and Oracle have a major partnership centred around the Stargate project, a $500 billion AI infrastructure initiative to build large-scale data centers across the U.S. Oracle is supplying OpenAI with immense data center capacity, including a recent deal to provide 4.5 gigawatts of power capacity, supporting millions of AI chips. This partnership makes Oracle a primary cloud infrastructure provider for OpenAI, expanding beyond Microsoft Azure to meet OpenAI’s growing AI compute needs.

Oracle recently expanded its partnership with Google Cloud to give Oracle customers access to Google’s advanced AI models, starting with Gemini 2.5, through the Oracle Cloud Infrastructure (OCI) Generative AI service. The models can be used to build AI agents for tasks including multimodal understanding, coding and software development, workflow automation, and research and knowledge retrieval.

Oracle plans to make Google’s full range of Gemini models available through OCI Generative AI service via new integrations with Vertex AI. This will include models for video, image, speech, and music generation, as well as specialised industry models such as MedLM.

In the future, Oracle will work with Google Cloud to offer Gemini models through Vertex AI as an option within Oracle Fusion Cloud Applications. This will allow customers to enhance workflows in areas including finance, HR, supply chain, sales, service, and marketing.

CtrlS Opens First Rated-4 Datacenter in Kolkata, Plans ₹2,200 Crore Campus https://analyticsindiamag.com/ai-news-updates/ctrls-opens-first-rated-4-datacenter-in-kolkata-plans-%e2%82%b92200-crore-campus/ Wed, 13 Aug 2025 11:28:15 +0000 https://analyticsindiamag.com/?p=10175880

The site is designed to serve enterprises, government agencies, and large tech companies, including those working on AI.

CtrlS has opened its first datacenter in Kolkata, marking the start of a ₹2,200 crore plan to build eastern India's largest datacenter ecosystem. The new facility, located in Bengal Silicon Valley, New Town, is the first in the region to meet the Rated-4 certification standard.

The first phase offers 16 MW of IT load capacity, with plans to expand to over 60 MW in four stages. The site is designed to serve enterprises, government agencies, and large tech companies, including those working on AI.

“This launch is a major milestone in our mission to democratise world-class datacenter infrastructure across India,” said Sridhar Pinnapureddy, founder & CEO, CtrlS Datacenters.

The facility includes high-density rack support, advanced cooling, nine layers of security, dual power sources, and connections to multiple internet providers. It is built to withstand natural disasters and aims for top-level green building certification.

“Kolkata’s location is a natural gateway to eastern and northeastern India, as well as neighbouring countries,” said Kallol Sen, EVP & regional CEO (Kolkata), CtrlS Datacenters. “With a deep talent pool and strong government support, it is the perfect base for high-performance, AI-ready datacenter infrastructure.”

The datacenter connects to four fibre entry points and several internet exchanges. Future links, including the Digha subsea cable landing station set to open in 2026, are expected to improve international connectivity.

MongoDB Launches New Voyage AI Embedding and Reranking Models with MCP Server https://analyticsindiamag.com/ai-news-updates/mongodb-launches-new-voyage-ai-embedding-and-reranking-models-with-mcp-server/ Tue, 12 Aug 2025 06:17:39 +0000 https://analyticsindiamag.com/?p=10175722

The MCP Server enables direct connections between MongoDB and tools such as GitHub Copilot, Anthropic’s Claude, Cursor, and Windsurf.

MongoDB today announced product updates and an expanded partner ecosystem at Ai4, an annual AI-focused conference, aimed at helping customers build AI applications at scale.

The company introduced new Voyage AI embedding and reranking models, launched the MongoDB Model Context Protocol (MCP) Server in public preview, and added new partners to its AI ecosystem.

The latest Voyage AI models include voyage-context-3, which processes full document context for more relevant retrieval results, and voyage-3.5 and voyage-3.5-lite, which aim to improve retrieval quality and price-performance. 

MongoDB also launched rerank-2.5 and rerank-2.5-lite, allowing developers to guide the reranking process using instructions.
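The announcement itself contains no code, but the snippet below sketches how embedding and reranking models of this kind are typically called from Python. It assumes the Voyage AI Python client's embed and rerank methods; the model names are taken from the announcement, and exact signatures should be checked against the current client documentation.

```python
# Hedged sketch: calling embedding and reranking models via the Voyage AI Python client.
# Model names come from the announcement; verify method signatures against current docs.
import voyageai

vo = voyageai.Client()  # reads the VOYAGE_API_KEY environment variable

docs = [
    "MongoDB Atlas Vector Search stores and queries embeddings alongside documents.",
    "Reranking reorders retrieved candidates by relevance to the user's query.",
]

# Embed documents with one of the newly announced general-purpose models.
embeddings = vo.embed(docs, model="voyage-3.5", input_type="document").embeddings

# Rerank candidates for a query with the new reranker (which, per the announcement,
# can also be guided with instructions).
reranked = vo.rerank(
    query="How do I store embeddings in MongoDB?",
    documents=docs,
    model="rerank-2.5",
    top_k=2,
)
for item in reranked.results:
    print(round(item.relevance_score, 3), item.document)
```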

“Databases are more central than ever to the technology stack in the age of AI,” said Andrew Davidson, SVP of products at MongoDB. “By consolidating the AI data stack and building a cutting-edge AI ecosystem, we’re giving developers the tools they need to build and deploy trustworthy AI solutions faster.”

The MCP Server enables direct connections between MongoDB and tools such as GitHub Copilot, Anthropic’s Claude, Cursor, and Windsurf, allowing developers to manage database operations using natural language. 

Since its preview launch, thousands of developers have adopted the MCP Server, with growing enterprise interest in agentic application stacks.

“Many organisations struggle to scale AI because the models themselves aren’t up to the task,” Fred Roma, SVP of engineering at MongoDB, said. “The quality of your embedding and reranking models is often the difference between a promising prototype and an AI application that delivers meaningful results in production.”

MongoDB has expanded its AI partner ecosystem with three new additions. 

Galileo provides AI reliability and observability through continuous evaluations and monitoring of applications. Temporal enables the orchestration of resilient AI workflows with durable execution and horizontal scaling. LangChain offers integrations such as GraphRAG with MongoDB Atlas and natural language querying capabilities.

MongoDB reported that in the last 18 months, enterprise adopters such as Vonage, LGU+, and The Financial Times, along with approximately 8,000 startups, including Laurel and Mercor, have used its platform for AI projects. Over 200,000 new developers register for MongoDB Atlas monthly.

AI Exposes the Flaws You’ve Been Ignoring. A Wake Up Call for Architects! https://analyticsindiamag.com/ai-features/ai-exposes-the-flaws-youve-been-ignoring-a-wake-up-call-for-architects/ Tue, 12 Aug 2025 04:56:51 +0000 https://analyticsindiamag.com/?p=10175706

What happens when legacy systems clash with AI’s rapid rise? Businesses face an architectural crisis as they chase intelligence without the foundation to support it.

As artificial intelligence evolves, so should the architecture behind the systems that use it. It is not enough to bolt AI on; it must be integrated into data, infrastructure, applications, and user experience.

Traditional architecture is no longer fit for the pace and demands of AI evolution, said Bala Prasad Peddigari, chief innovation officer (Technology Software and Services Business Group) at TCS, cutting to the heart of a pressing industry dilemma during his address at the NIIT StackRoute Digital Architect Conclave 2025.

In today’s world, he said, traditional architectures are showing cracks. “We’re facing performance bottlenecks, scalability limitations, and integration complexities.” Many pilots in AI remain unscalable because they lack foundational AI-ready architectures.

Peddigari questioned the audience: “Are architects drafting blueprints, or are they navigating AI battlefields?” The answer, he believes, is clearly the latter. 

Most architects are reacting to the speed of change, rather than leading it. He cited a CTO who admitted buying AI tools without preparing the underlying architecture to support them, something he called the “architect’s dilemma”.

Tracing the journey from rudimentary client-server models to today’s AI-native imperatives, Peddigari set the tone with an anecdote, an encounter with a manufacturing firm plagued by 100 system shutdowns a day. 

The issue wasn't just technological. The questions asked, he recalled, were: "Is it an architectural issue, a philosophy issue, or is it a code issue?" The answer, as it turned out, lay in the firm's mounting technical debt and an inability to adapt.

Peddigari and his team replatformed the legacy-heavy operation by integrating IoT sensors and AI, reducing shutdowns to zero. The assembly line ran uninterrupted for 90 days. The transformation, he said, was driven not merely by technical fixes, but by deep understanding of business domains, architecture, and the discipline of marrying tech trends to real-world problems. 

He compared this approach to Aadhaar's simplification, from 20 fields to six, arguing that "removing the unnecessary" is what makes systems more intelligent, leaner, and functional. The crux of his talk was about architecting AI-native platforms, a term he views as both a mindset and a capability.

AI-native platforms need this

Peddigari presented a layered framework that breaks down the building blocks of AI-native platforms. The central thesis, according to him, is that intelligence must be embedded from the ground up. 

This means designing systems that are capable of data feedback loops, real-time monitoring, and “observability” — all vital to keeping AI systems aligned and operational. Data fabric, Peddigari stressed, is the core. Without intelligent pipelines, robust catalogs, annotation and privacy measures, models cannot thrive. 

From there, he pointed to MLOps as essential for governing the lifecycle of models, from experimentation to drift detection. He also emphasised the importance of the experience layer, where AI becomes meaningful to users, and the trust layer, where fairness, safety, and explainability must be designed in, not added later.

The need for autonomous agents, orchestration layers, and LLM-powered cognition also surfaced in his breakdown of a modern digital intelligence framework. He advocated for a platform mindset: solutions must be scalable, not siloed. This includes planning for extensibility, low-code/no-code participation, and vibe coding acceleration, while recognising their current limitations at enterprise scale.

The three levels of maturity

He mapped out three levels of organisational AI maturity: siloed experimentation, sandboxed development, and true enterprise AI platforms. The last, he said, is where the vision lies: not only democratising AI, but institutionalising it. He called for cross-functional teams to collaborate, breaking silos between decision-makers, developers, data scientists, and architects.

Peddigari touched on the importance of GenAI Ops, governance, observability, and evaluation mechanisms tailored for generative models. He also explained how prompt engineering, plug-ins, and hierarchical agents must work in concert with authentication and orchestration to drive real business applications.

In their LinkedIn posts, Mahmoud Abufadda, senior consultant (Enterprise architecture, digital transformation, AI) at Moro Hub, and John Chavner, senior principal at Strategic Technology Advisory Services, both warn that AI is compelling enterprises to confront long-ignored architectural flaws.

Abufadda stressed that business architecture must evolve in step with AI’s rapid advances, embedding it into strategy, processes, and IT assets to avoid fragmented, unscalable efforts.

“Organisations must therefore incorporate AI into their strategic planning and enterprise architecture roadmaps, ensuring they build the capabilities today that will be standard in their industry tomorrow,” he said.

Chavner pointed out that legacy systems, designed for stability and monolithic operation, are fundamentally misaligned with AI’s requirements for seamless data access, real-time processing, and modular integration. Both caution that without foundational modernisation, AI initiatives risk stalling and eroding competitiveness.

“The cost of modernisation is substantial, but the cost of inaction is existential. Companies that delay AI implementation while competitors advance their capabilities face a future where their market positions become increasingly untenable,” said Chavner. 

Peddigari’s concluding advice to architects was to start small, build iteratively, and embed AI into every workflow with intention. The goal, he said, is to move from interfaces to conversational interactions, from blueprints to real-time navigation. 

Peddigari ended on a note of certainty. AI-native architecture is not optional. “From silicon to software systems, massive investments are being made. No industry will be left out,” he said, pointing to trends like GenAI and Quantum AI as the next frontiers.

Skyflow Introduces Data Protection Layer for Safer MCP-Based AI Integrations https://analyticsindiamag.com/ai-news-updates/skyflow-introduces-data-protection-layer-for-safer-mcp-based-ai-integrations/ Fri, 08 Aug 2025 10:25:00 +0000 https://analyticsindiamag.com/?p=10175473

The new offering aims to make the MCP experience secure for enterprises and SaaS platforms.

Skyflow, a privacy-focused data protection platform, has introduced a new data protection layer tailored for the Model Context Protocol (MCP), addressing a rising concern for enterprises and SaaS platforms adopting agentic AI. The new platform aims to secure sensitive information as AI agents increasingly interface with real-world tools via MCP.

Originally developed by Anthropic and now backed by OpenAI, AWS, and Google, MCP simplifies how AI agents connect to databases, SaaS platforms and internal systems. However, this standard also opens up fresh risks, particularly around exposing personally identifiable information (PII), health data and financial records.

Skyflow’s solution introduces a polymorphic data protection engine that applies masking, tokenisation and contextual rehydration dynamically, based on usage policies and permissions. This ensures data privacy without disrupting AI performance.
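Skyflow has not published the engine's internals, so the sketch below is purely illustrative of the general idea of policy-driven masking and tokenisation; the policy format, function names and token scheme are assumptions for the example, not Skyflow's API.

```python
# Illustrative only: apply masking or tokenisation to fields before an AI agent sees them,
# driven by a per-role policy. None of this is Skyflow's actual API.
import hashlib

POLICY = {
    "support_agent": {"email": "mask", "pan": "tokenize", "name": "keep"},
    "analytics_bot": {"email": "tokenize", "pan": "tokenize", "name": "mask"},
}

def tokenize(value: str) -> str:
    # Deterministic token: the same value always maps to the same placeholder,
    # preserving entity links for downstream reasoning without revealing the value.
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:12]

def mask(value: str) -> str:
    return value[0] + "*" * max(len(value) - 2, 1) + value[-1]

def protect(record: dict, role: str) -> dict:
    rules = POLICY[role]
    transforms = {"keep": lambda v: v, "mask": mask, "tokenize": tokenize}
    # Default to tokenisation for any field the policy does not mention.
    return {k: transforms[rules.get(k, "tokenize")](v) for k, v in record.items()}

record = {"name": "Asha Rao", "email": "asha@example.com", "pan": "ABCDE1234F"}
print(protect(record, "support_agent"))
```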

Two deployment models are available: the Skyflow MCP Gateway, which acts as a privacy-enforcing proxy, and the Skyflow MCP Server SDK, for direct integration into MCP implementations. Key features include audit trails for regulatory compliance, entity-preserving transformations to support AI reasoning, and secure memory handling.

“As AI agents start connecting to more real-world data through MCP, companies need privacy infrastructure that can keep up,” Anshu Sharma, CEO of Skyflow, said.

“Skyflow helps developers and SaaS platforms protect sensitive data without slowing down AI workflows, making secure, compliant AI deployment possible at scale.”

The launch extends Skyflow’s existing AI-focused privacy offerings, including its GPT Privacy Vault (2023) and Agentic AI Security and Privacy Layer (2024). The company says the new MCP Data Protection Layer is suited for sectors such as finance, healthcare, travel and retail, where regulatory compliance and data sensitivity are paramount.

Unlocking 20% Growth: How India’s Data Centre Surge is Reshaping Tech Services https://analyticsindiamag.com/ai-features/unlocking-20-growth-how-indias-data-centre-surge-is-reshaping-tech-services/ Thu, 24 Jul 2025 13:33:10 +0000 https://analyticsindiamag.com/?p=10174088

Fueled by cloud, AI, and 5G, this surge promises a 20% revenue growth in the tech services sector, job creation, and a new era of digital infrastructure.

India's data centre industry is experiencing unprecedented growth that is likely to reshape the technology services sector across the country.

In just six to seven years, data centre capacity in the top seven cities has increased more than fourfold, reaching 1,263 megawatts as of April. 

This surge is driven by rapid digitalisation, the widespread rollout of 5G networks, increased adoption of cloud and AI technologies, strengthened subsea cable infrastructure, and favourable state-level policies, according to a survey by Nasscom Insights.

Despite accounting for around 20% of the world’s data, India currently holds only 3% of the global data centre capacity, indicating significant room for expansion. While Mumbai leads the sector with 41% of the country’s capacity, Chennai follows at 23%, the survey points out. 

Emerging Tier II and III cities such as Vijayawada, Mohali and Jaipur are also becoming important hubs. The rapid expansion of data centre infrastructure is creating a significant opportunity for technology services, with all leading data centre operators surveyed expecting a notable increase in demand over the next three to five years, the report revealed.  

This growth is expected to boost tech services revenue by approximately 20%, with the majority of respondents predicting a stronger effect on the domestic market.

Tech Services Poised for Growth

Cybersecurity services top the growth prediction, with 17% of respondents citing it as the area most likely to see increased demand due to data centre expansion, followed closely by cloud services (16%) and cooling, power and physical infrastructure services (15%). 

Together, these three areas account for nearly half the expected growth, highlighting the critical role of secure, scalable and efficient data centre operations. 

In addition, IT services and systems integration (14%), data analytics and AI (14%), networking and connectivity services (12%), and software/application development (9%) also feature prominently, indicating a holistic need for advanced skills and solutions as data centres proliferate. 

Jobs & Skills in Demand

The demand for skilled professionals is also rising, particularly for cybersecurity experts, cloud infrastructure engineers, AI and machine learning engineers, data centre hardware specialists and experts in sustainable technology and edge computing.

According to the report, each additional megawatt of data centre capacity can create up to 2,000 jobs, underscoring the sector’s employment impact. 

Critical skills needed over the next few years include expertise in AI-driven automation, real-time data processing, virtualisation and containerisation technologies like Kubernetes and Docker, hybrid and multi-cloud architectures, AI-based threat detection, and advanced cooling and renewable energy integration.

Collaborations, Sustainability, Recommendations

Strategically, technology service providers and data centre operators are increasingly collaborating through AI-ready alliances, managed service partnerships, joint go-to-market solutions, and build-to-suit or sell-to-business models, according to the report. These partnerships allow them to focus on infrastructure while specialised providers deliver cloud, security, and software services.

Sustainability is a growing focus in the industry, with about 25% of data centre capacity in big cities now green certified, according to the report. Operators are actively investing in energy-efficient technologies and adopting renewable energy. Edge data centres, designed to support IoT applications and latency-sensitive services, are also expanding rapidly, with over 200 new edge facilities planned in the next five to six years.

Responding to a question from AIM during a webinar on how data centres tackle energy consumption sustainably, Kuhu Singh, analyst at Nasscom Insights, said that it can be done by adopting renewable energy and green infrastructure.

“Solar, wind, hybrid models and edge computing all contribute to sustainable energy use,” she said. 

Listing out the recommendations from the report, she said that for the government, creating a partnership-friendly environment is crucial. It includes improving the ease of doing business and encouraging state-level data centre policies that integrate allied tech services.

For operators, Singh said, it’s important to identify critical requirements, invest in co-developing solutions with tech providers, and hire talent with emerging skills.

Tech service providers should focus on building end-to-end infrastructure solutions, position themselves around hybrid cloud and cloud migration offerings and pursue a collaboration-first approach.

For the future workforce, she said, “We recommend pursuing academic pathways aligned with in-demand job roles and building skills in AI automation, data processing, and other frontier tech areas.”

The Luigi Warning: Can Indian Insurance Escape the AI Trap? https://analyticsindiamag.com/ai-features/the-luigi-warning-can-indian-insurance-escape-the-ai-trap/ Wed, 16 Jul 2025 06:54:53 +0000 https://analyticsindiamag.com/?p=10173458

Indian tech companies are rapidly integrating AI into insurance operations, raising a tough question: How can we ensure our systems remain fair, transparent, and humane before it's too late?

Luigi Mangione, a 26-year-old Ivy League graduate, created a national storm when he allegedly shot UnitedHealthcare CEO Brian Thompson. While some view him as a troubled individual, others see him as a symbol of rage against a system they believe is broken.

According to an NYPD intelligence report obtained by CNN, Mangione’s actions were allegedly fueled by deep resentment toward the health insurance industry and its perceived prioritisation of corporate profit over human care.

The tragedy has renewed scrutiny on lawsuits alleging that UnitedHealthcare uses AI tools like nH Predict to deny Medicare Advantage claims—supposedly overriding physicians’ judgments and rejecting patient care due to flawed algorithms, which are reported to have error rates as high as 90%.

This moment forces a hard question: Could India face a similar crisis as AI becomes deeply embedded in its health insurance and IT systems? Transparency, accountability, and ethical oversight in algorithmic decision-making are no longer optional; they’re necessary.

Indian IT companies are increasingly shaping the global insurance industry with the use of AI. They are automating claims and underwriting, enhancing fraud detection, and improving customer experience. Companies like Infosys, HCLTech, TCS, and Wipro are integrating sophisticated AI technologies into their insurance processes.

Infosys, for instance, is leading the digital transformation of the Life Insurance Corporation of India (LIC) under its DIVE initiative, delivering end-to-end AI-enabled integration and DevOps services.

HCLTech has expanded its partnership with The Standard, a prominent US insurer, to co-develop AI-led financial protection solutions.

This trend is part of a broader shift in the industry. India’s generative AI solutions in insurance are expected to grow at a CAGR of 38.28% from FY2024 to FY2032, reflecting rapid adoption.

According to Wipro’s “The AI Advantage” report, based on insights from 100 US-based insurers, 81% of companies plan to increase their AI budgets, with underwriting emerging as a key focus area for enhancing efficiency and accuracy. Nearly all surveyed leaders believe AI is vital to customer experience and personalisation.

However, a Genpact-AWS study reveals that only 36% of US customers feel their digital experience has improved, despite 69% of insurers deploying AI, highlighting a need for better scaling and alignment with user expectations.

That said, concerns about fairness and transparency persist. Examples from around the world, such as the Optum healthcare algorithm that reportedly underestimated Black patients’ health risks, demonstrate how biased data can distort AI results.

As AI plays a larger role in decision-making, questions arise about the transparency of these solutions and whether they ensure fairness and equity.

In India, Star Health and Allied Insurance came under the scrutiny of insurance watchdog IRDAI after the regulator found "serious lapses in the claim settlement practices" at the company, according to reports.

The insurance regulator has not announced any documented cases specifically attributing claim denials solely to AI. However, last year, Star Health introduced a new AI-driven tool called Star Health Face Scan. The company claimed it can remotely gauge 18 parameters such as blood pressure, pulse rate, heart rate, haemoglobin levels and stress levels.

“The company will continue expanding its technological capabilities to provide even more advanced and user-friendly solutions for its customers,” Anand Roy, MD & CEO of Star Health and Allied Insurance, had said in a press statement.

Lack of Guidelines

To avoid such risks creeping into Indian insurance services, Dr KS Uplabdh Gopal, associate fellow (health initiative) at the Observer Research Foundation, told AIM that insurers and regulators need to mandate bias audits and regular checks to assess how well AI models perform across various social groups.

He insisted that the country needs to test algorithms against discriminatory results based on gender, income, geography and caste, even if they are not explicitly used. 
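A bias audit of the kind Gopal describes can start with something as simple as comparing approval rates across groups. The sketch below is a generic illustration with made-up data and an assumed 80% disparity threshold, not a regulator-prescribed methodology.

```python
# Generic bias-audit illustration: compare claim approval rates across groups and flag
# large disparities. Data, column names and the 0.8 threshold are assumptions.
import pandas as pd

claims = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   0,   1,   0,   0],
})

rates = claims.groupby("group")["approved"].mean()
print(rates)

# Disparate-impact style check: flag any group whose approval rate is below
# 80% of the best-performing group's rate.
ratio = rates / rates.max()
flagged = ratio[ratio < 0.8]
if not flagged.empty:
    print("Potential disparity for:", list(flagged.index))
```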

Gopal emphasised the need for explainable AI (XAI) models that not only make decisions but also explain why. If a claim is denied or classified as high-risk by an AI tool, the individual involved must be given a reason and the right to appeal. 

He expressed regret that India is only beginning to engage in this conversation.

“While the Digital Personal Data Protection Act, 2023 provides some safeguards around consent and data use, we currently have no required guidelines for AI explainability, fairness, or redressal in insurance,” Gopal remarked. He added that the regulatory sandbox of IRDAI permits innovation, but compliance has to keep pace. This is to ensure that AI advances inclusion rather than perpetuating current imbalances.

The Wipro report mentioned earlier also emphasises that a primary challenge in AI adoption within insurance companies involves both external and internal risks. It acknowledged that while AI enables faster and more accurate decisions, it also introduces risks of bias and reputational damage.

HCLTech and TCS, meanwhile, did not respond to the queries sent by AIM. 

However, responding to a user’s query on LinkedIn on how agentic AI is transforming the insurance sector, Sukriti Jalali, principal consultant at TCS, stated that they are enabling all roles in the claims process, from policyholders to investigators, with both Generative AI and traditional machine learning.

Indian companies’ way

Saurabh Arora, co-founder and CTO of Plum, an insurtech startup, told AIM that his company does not feed sensitive fields (race, religion or gender) in its claims or pricing models, with age and geography appearing only where regulation explicitly requires it.

The company conducts weekly lightweight audits, keeping potential bias visible and easy to correct, while tailoring explanations for claimants, employers, and regulators. Arora acknowledged they do not claim perfect bias elimination but aim for transparency and simplicity. 

Plum said its document deficiency AI system, ClaimLens, ingests every bill, discharge summary, and lab report, uses GPT-4o-based OCR plus medical-language models to pull out structured fields (patient ID, ICD-10 code, procedure, line-item costs), and assigns a confidence score to each extraction. It also uses Anthropic's Claude for summarising long policy documents and Sarvam's model for local-language voice bots.
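Plum has not shared ClaimLens code, so the following is only a rough sketch of the pattern described: LLM-based extraction of structured fields plus an overall confidence score. The prompt, JSON shape, model choice and review threshold are assumptions, and the OpenAI client is used simply because the article mentions GPT-4o.

```python
# Rough sketch of LLM-based field extraction with a confidence score, in the spirit of
# the pipeline described above. Prompt, schema and threshold are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY

def extract_fields(document_text: str) -> dict:
    prompt = (
        "Extract patient_id, icd10_code, procedure and line_item_costs from the "
        "hospital document below. Respond as a JSON object and include a "
        "'confidence' value between 0 and 1 for the overall extraction.\n\n"
        + document_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

fields = extract_fields("Patient ID 4821 | ICD-10 E11.9 | HbA1c test | Rs 1,200 ...")
if fields.get("confidence", 0) < 0.7:  # assumed cut-off for routing to a human reviewer
    print("Low-confidence extraction, flag for manual review:", fields)
```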

Meanwhile, Mphasis recently entered into a strategic partnership with Sixfold, an AI underwriting company that aims to redefine how insurers assess risk.

As an implementation partner, Mphasis will integrate Sixfold’s AI platform to help insurers accelerate their underwriting process, speeding up submission intake and equipping underwriters with the contextual risk insights they need to make faster, more confident decisions.

In an email response to AIM, the mid-tier company acknowledged that maintaining ethical integrity in AI-driven underwriting is non-negotiable, especially in a regulated industry like insurance. 

“We have implemented bias insulation protocols that include regularly reviewing and updating our models to guard against unintended discrimination and to preserve fairness,” it said. 

It also claimed to monitor model drift or hallucinations, which can occur as they evolve over time. "We also ensure human oversight remains central to the process, with underwriters equipped to review, question, and override AI-generated recommendations whenever necessary," the company said.

Insurers racing to adopt AI must not lose sight of what matters most: trust. Mphasis rightly emphasises that "AI is only as valuable as it is trustworthy."

Workday Doubles Down on India with Local Data Centre, Bigger Sales Push https://analyticsindiamag.com/ai-news-updates/workday-doubles-down-on-india-with-local-data-centre-bigger-sales-push/ Thu, 03 Jul 2025 10:16:46 +0000 https://analyticsindiamag.com/?p=10172812

The company said it will begin offering services from a data centre located in India for the first time, with operations expected to begin in the first half of 2026.

Enterprise cloud leader Workday is sharpening its focus on the Indian market with a bold expansion that includes setting up a local data centre, strengthening its on-ground sales operations, and scaling its partner ecosystem. 

According to an official statement, these investments will enhance Workday’s ability to serve both its global customers with operations in India and the increasing number of local organisations seeking to modernise their business operations.

The company said that it will begin offering services from a data centre located in India for the first time, with operations expected to begin in the first half of 2026. 

As a result, Workday customers in India will have the option to deploy Workday Human Capital Management (HCM), Workday Financial Management and Workday Adaptive Planning on Amazon Web Services (AWS) located in India. 

The domestic hosting will provide highly available services while preserving the security, scalability, and performance inherent in the Workday platform, the company shared.

Notably, it said, the move will also enable customers to incrementally adopt services that adhere to data sovereignty regulations specific to the Indian market, providing critical assurance and compliance for businesses operating within India’s evolving regulatory landscape.

“India plays a crucial role in Workday’s global strategy, and these strategic investments, coupled with the upcoming appointment of a dedicated country leader, signify our deepening commitment to the region,” Simon Tate, president of Asia Pacific at Workday, said in the statement. 

“Deploying Workday on a local AWS data centre will provide critical data residency capabilities for our Indian customers, while our expanded sales presence will allow us to better serve the growing market. We are building on our strong foundation in India to accelerate adoption and foster deeper relationships.”

The company shared that its significant and rapidly expanding presence in India includes a strong foundation of almost 700 talented employees, with business operations, technology, and services support teams in Pune and Mumbai.

Workday said it recently announced plans to establish a global capability centre (GCC) in Chennai, which will serve as a vital hub for product and technology development, supporting Workday's global operations and driving innovation in next-generation AI.

Manish Dubey, global head of enterprise technology at Sagility, said, "Workday has played an important role in helping us transform our global HR and finance operations. It has provided us with the agility and insights needed for our extensive Indian operations, where we have our largest resource pool, to thrive in the fast-paced global environment."

What Enterprises Get Wrong About Data Complexity and How to Fix It https://analyticsindiamag.com/ai-highlights/what-enterprises-get-wrong-about-data-complexity-and-how-to-fix-it/ Wed, 02 Jul 2025 12:14:38 +0000 https://analyticsindiamag.com/?p=10172752

Starburst unifies siloed data, simplifies AI workflows with Trino, and boosts LLM accuracy using RAG, governance, and open architecture.

In today’s rapidly evolving world of data, the challenge for enterprises is no longer just managing complexity—it’s unlocking the untapped potential hidden within it. 

At the recent DES 2025 summit, Shivani Bennur, onboarding engineer at Starburst, stepped into the spotlight to share a vision of how breaking free from traditional limitations can pave the way for a unified, intelligent data future.

Data Drill 

Bennur described the all-too-familiar “data drill” that plagues most organisations. “We all have the enterprise data siloed and fragmented across different data sources, making it extremely difficult to access this data.”

The traditional solution, she explained, has been to use extract, transform, load (ETL) processes to churn through and centralise data in warehouses. But as Bennur pointed out, this creates its own set of problems.

“It creates a single source of truth, but with this, the data is often siloed, outdated, and fragmented. So we are not even ready for…the growing needs of data science and AI, which actually need to process a lot of unstructured data,” she said. 

The result is multiple copies of data, added complexity, inconsistencies, and a governance maze that few can navigate.

Unlocking the Potential

Bennur declared that the future lies in an open, hybrid data lakehouse—one that combines the performance of a warehouse without the high cost and gives users the flexibility to access data wherever it lives without the headaches of data movement. 

“What we are aiming at is not just having a lakehouse, which you already have, but actually having an open hybrid lakehouse,” she explained. 

The vision is based on Trino, an open-source, parallel, distributed SQL query engine originally developed at Facebook. 

“Trino is known for querying data across multiple different sources—that is called query federation—eliminating the need to move complex data around and write scripts,” Bennur said, highlighting its ability to separate compute from storage for independent scaling and cost efficiency.
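Query federation of this kind can be sketched with the open-source Trino Python client: a single SQL statement joins tables that live in different systems. The catalog, schema and table names below are hypothetical, and connection details depend on the actual cluster.

```python
# Minimal sketch of Trino query federation: one SQL statement joins data from two
# different catalogs (e.g. an operational Postgres and an Iceberg data lake).
# Host, user, catalogs and tables are hypothetical.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="analyst",
)
cur = conn.cursor()

cur.execute("""
    SELECT c.region, sum(o.amount) AS revenue
    FROM postgresql.public.orders AS o
    JOIN iceberg.analytics.customers AS c
      ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY revenue DESC
""")
for region, revenue in cur.fetchall():
    print(region, revenue)
```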

She noted that Trino's open architecture is why leading lakehouse table formats like Iceberg were designed to work with it; Starburst calls the combination of the two an Icehouse architecture. The Icehouse is a modern, open data lakehouse architecture that pairs Trino (an open-source distributed SQL engine) with Apache Iceberg (an open table format) to deliver a full data warehouse experience on data lakes, supporting transactional queries (insert/update/delete), scalable analytics, and AI workloads without vendor lock-in.

‘Starburst Speaks Your Language’

“This brings us to Starburst,” Bennur explained. “The first thing I want you to know is that Starburst is a Trino company co-founded by the creators of Trino. We bring the goodness of Trino to the enterprise.”

Starburst is essentially an analytics engine that fits into any environment without requiring data movement or centralisation, combining Trino’s power with enterprise-grade features. 

These include data products for querying and governance, fault-tolerant execution for ETL workflows, and advanced query optimisation. 

"Starburst autoscales from gigabytes to petabytes in no time, so that you can save on your resources," she said, adding that smart indexing and caching can significantly improve query times. Metrics and logging features provide a comprehensive view of both data and platform performance.

As a central hub, Starburst allows organisations to plug in and query any data source, creating what Bennur called a “data mesh”. 

This breaks down silos and delivers a trustworthy single point of access. Integration with popular business intelligence (BI) tools is seamless, requiring no changes to existing workflows.

For data engineers, Bennur painted a compelling picture. “Imagine a place where you don’t have to write heavy ETL scripts and move the data in and out, and have to spend hours just waiting for the jobs to complete. What if you could do all of that in a single place and get direct access to your data? That’s what Starburst does—Starburst speaks your language.” 

The platform supports open table formats like Apache Hudi, Iceberg, and Delta Lake, allowing users to choose the best tool for the job. Its fault-tolerant execution mode means queries can resume from failures without losing progress or wasting resources. 

Moreover, for Python users, PyStarburst enables pipeline development within Starburst using familiar APIs, she noted. Autoscaling allows rapid analysis of data volumes from gigabytes to petabytes, and integration with tools like DBT, Airflow, Jupyter, and Spark is effortless. 

She stressed that whether customers want a fully managed cloud solution or an on-premises deployment, Starburst offers both, with robust security and governance features built in. 

Challenges Exist

Yet, as Bennur acknowledged, connecting the right data to LLMs is the next big challenge for the data industry. Even the most advanced models face new hurdles around data access in the AI era. According to her, general-purpose AI often hallucinates, giving generic answers because the enterprise data it needs is limited, outdated, and siloed, and it cannot access or learn from it. Every organisation adopting AI now faces a data access challenge.

She pointed out that data collaboration is another major hurdle. AI initiatives often stall because IT and business teams struggle to architect and govern data together, leading to models that lack relevance, trust, and real-world applicability.

"Data governance is fractured because AI use cases involve moving sensitive data across different technologies like vector embeddings, LLMs, and vector DBs, resulting in increased security and governance risks," she said. Inconsistent policies across data sources only add to the security gaps and operational overhead.

Starburst to the Rescue

Starburst supports both batch and streaming data processing. For effective data collaboration, Starburst offers intuitive data discovery tools, simple SQL access, and the ability to curate governed data products for sharing across teams. 

“Data governance is a really critical part in today’s hybrid and multi-cloud environments. Starburst provides robust data features, including fine-grained access control, AI-powered data generation, AI-powered data classification, data and network security, ensuring that users’ data is secure and compliant and only accessible for those in need of it,” she said.

Enhancing AI Outcomes

Bennur explained how Starburst boosts AI performance using retrieval augmented generation (RAG), which she called a “superpower memory boost”. Unlike traditional LLMs that rely only on training data, RAG dynamically pulls up-to-date enterprise information to generate context-rich responses.

She contrasted RAG with prompt stuffing, another method where enterprise data is embedded directly into the prompt, pointing out that both help LLMs produce more relevant, timely answers.

RAG works by retrieving current data, such as research papers or support tickets, combining it with the user query, and passing it to the LLM. This ensures the output is accurate and aligned with the latest information.
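Stripped to its essentials, the retrieve-then-generate loop described above looks roughly like the sketch below. The search function is a placeholder for whatever vector or full-text index the platform exposes, and the OpenAI client is just one possible model backend.

```python
# Minimal RAG sketch: retrieve relevant chunks, prepend them to the question, call an LLM.
# `search_enterprise_data` is a placeholder for a real vector/full-text search.
from openai import OpenAI

client = OpenAI()

def search_enterprise_data(query: str, k: int = 3) -> list[str]:
    # Placeholder: a real implementation would query a vector index or SQL engine
    # and return the k most relevant chunks (support tickets, research papers, etc.).
    return [
        "Ticket 1042: login fails after the SSO migration ...",
        "KB article: re-upload SAML metadata when IdP certificates rotate ...",
    ]

def answer(question: str) -> str:
    context = "\n\n".join(search_enterprise_data(question))
    prompt = (
        "Answer the question using only the context below, and cite the ticket "
        f"or article you relied on.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("Why can't users log in since the SSO migration?"))
```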

Bennur also showcased how Starburst simplifies complex AI workflows. Traditionally requiring coordination with engineers and multiple tools, tasks like joining Oracle clinical trial data with Delta Lake sources can now be done using simple SQL queries. LLM functions such as classify, extract, and summarise can be invoked directly.

Starburst supports integration with multiple LLM providers, including OpenAI and AWS Bedrock. In one example, it used vector and full-text search to analyse support tickets and then passed summaries to the LLM for insights.

RAG workflows in Starburst involve gathering data, chunking, embedding it into Iceberg tables, and performing AI search—all governed and compliant. Bennur concluded by saying Starburst unifies data access, enhances AI outcomes, and ensures security and control, helping organisations fully realise the potential of AI.

Databricks Introduces Agent Bricks to Build Production-Ready AI Agents on Enterprise Data https://analyticsindiamag.com/ai-news-updates/databricks-introduces-agent-bricks-to-build-production-ready-ai-agents-on-enterprise-data/ Thu, 12 Jun 2025 10:35:00 +0000 https://analyticsindiamag.com/?p=10171675

Databricks also launched MLflow 3.0, a redesigned version of its AI lifecycle management platform.

At the Data + AI Summit, Databricks announced the launch of Agent Bricks, a new offering that allows businesses to build and deploy AI agents using their own data, without the need for manual tuning or complex tooling. 

Available in beta starting today, Agent Bricks is positioned as an automated system that transforms a high-level task description and enterprise data into a production-grade agent.

Ali Ghodsi, CEO and co-founder of Databricks, described it as “a whole new way of building and deploying AI agents that can reason on your data.” He added, “For the first time, businesses can go from idea to production-grade AI on their own data with speed and confidence, with control over quality and cost tradeoffs.”

Agent Bricks automates the entire process of AI agent development. It uses research developed by Mosaic AI to generate synthetic data tailored to a customer’s domain and builds task-specific benchmarks to evaluate agent performance. 

The system then runs a series of optimisations, allowing users to choose the version that best balances accuracy and cost. The result is a deployable agent that operates with consistency and domain awareness.
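Databricks has not published Agent Bricks' interface here, but the accuracy-versus-cost selection step can be illustrated conceptually: score candidate configurations on a task benchmark and keep only those that are not beaten on both metrics. The sketch below is purely illustrative, with invented numbers, and is not the product's API.

```python
# Conceptual illustration only (not Agent Bricks' API): keep candidate agent configs
# that are not dominated on both accuracy and cost, then let a human pick the tradeoff.
candidates = [
    {"name": "small model + retrieval",   "accuracy": 0.84, "cost_per_1k": 0.9},
    {"name": "large model + retrieval",   "accuracy": 0.91, "cost_per_1k": 4.2},
    {"name": "fine-tuned small model",    "accuracy": 0.89, "cost_per_1k": 1.5},
    {"name": "large model, no retrieval", "accuracy": 0.82, "cost_per_1k": 3.8},
]

def frontier(options):
    # A config is dominated if another one is at least as accurate and strictly cheaper.
    keep = []
    for a in options:
        dominated = any(
            b["accuracy"] >= a["accuracy"] and b["cost_per_1k"] < a["cost_per_1k"]
            for b in options
        )
        if not dominated:
            keep.append(a)
    return sorted(keep, key=lambda o: o["cost_per_1k"])

for opt in frontier(candidates):
    print(f'{opt["name"]}: accuracy={opt["accuracy"]}, cost per 1k queries=${opt["cost_per_1k"]}')
```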

The platform supports a range of use cases across industries. An Information Extraction Agent can convert unstructured content like PDFs and emails into structured fields such as names and prices. A Knowledge Assistant Agent provides accurate, data-grounded answers to user queries, reducing the kind of errors often seen in traditional chatbots. 

The Multi-Agent Supervisor allows coordination between multiple agents and tools like MCP to manage workflows, including compliance checks and document retrieval. 

Meanwhile, a Custom LLM Agent handles specific text transformation tasks, such as generating marketing content that aligns with an organisation’s brand voice.

Databricks said the product addresses a key issue in the AI agent space, which is that most experiments fail to reach production due to a lack of evaluation standards, inconsistent performance, and high costs.

According to the company, Agent Bricks resolves these challenges by offering domain-specific, repeatable, and objective evaluations, all within a workflow that requires no stitching together of multiple tools.

Early adopters are seeing results across sectors. AstraZeneca used Agent Bricks to extract structured data from over 400,000 clinical trial documents without writing any code. Joseph Roemer, head of data & AI at the company, said they had “a working agent in just under 60 minutes.”

At Flo Health, the tool helped improve the medical accuracy of AI systems while meeting internal standards for safety and privacy. “By leveraging Flo’s specialised health expertise and data, Agent Bricks uses synthetic data generation and custom evaluation techniques to deliver higher-quality results at a significantly lower cost,” said Roman Bugaev, the company’s CTO.

The announcement was accompanied by the release of two additional tools. Databricks now offers support for serverless GPUs, giving teams access to high-performance compute without the operational burden of managing GPU infrastructure. This enables users to fine-tune models and run AI workloads on demand.

Databricks also launched MLflow 3.0, a redesigned version of its AI lifecycle management platform. Tailored for generative AI, MLflow 3.0 includes prompt management, human feedback loops, LLM-based evaluation and integration with existing data lakehouses. The new version allows teams to monitor and debug AI agents across any platform and is downloaded over 30 million times a month.
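The announcement does not include code; for context, the sketch below uses MLflow's long-standing tracking API to log a generative-AI experiment run. The 3.0-specific prompt management, feedback loops and LLM-based evaluation mentioned above are not shown, and the metric names are invented for the example.

```python
# Sketch using MLflow's core tracking API to record a GenAI experiment run.
# Metric and parameter names are invented; MLflow 3.0's newer GenAI features
# (prompt management, feedback loops, LLM-based evaluation) are not shown here.
import mlflow

mlflow.set_experiment("support-assistant-prompts")

with mlflow.start_run(run_name="prompt-v2-gpt-4o-mini"):
    mlflow.log_param("model", "gpt-4o-mini")
    mlflow.log_param("retrieval_top_k", 3)
    mlflow.log_param("prompt_template", "Answer using the retrieved context: {context}\n{question}")

    # In practice these scores would come from an evaluation set judged by humans or an LLM.
    mlflow.log_metric("answer_relevance", 0.87)
    mlflow.log_metric("hallucination_rate", 0.04)
    mlflow.log_metric("avg_latency_ms", 820)
```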

According to Databricks, the combination of Agent Bricks, serverless GPU support, and MLflow 3.0 makes its platform the most complete environment for building, tuning and deploying enterprise-grade generative AI systems.

Cisco, NVIDIA Launch Next gen AI Data Center Tools https://analyticsindiamag.com/ai-news-updates/cisco-nvidia-launch-next-gen-ai-data-center-tools/ Wed, 11 Jun 2025 15:59:18 +0000 https://analyticsindiamag.com/?p=10171647

Cisco focuses on high-performance, low-latency networking to meet the growing demands of AI workloads and deliver secure, scalable infrastructure solutions.

Cisco, in partnership with NVIDIA, is accelerating AI adoption by unveiling new innovations designed to simplify, secure, and future-proof data centres.

With a focus on high-performance, low-latency networking, Cisco aims to meet the growing demands of AI workloads and deliver secure, scalable infrastructure solutions.

“The world is moving from chatbots intelligently answering our questions to agents conducting tasks and jobs fully autonomously. This is the agentic era of AI,” said Jeetu Patel, President and Chief Product Officer, Cisco. 

The company announced the expansion of AI PODs, the integration of NVIDIA Spectrum-X Ethernet networking based on Cisco Silicon One, and the introduction of the Unified Nexus Dashboard, which centralises management across multiple environments. It noted that these innovations enable enterprises to deploy and manage AI workloads with enhanced operational efficiency. 

Commenting on data centres, Patel noted that as billions of AI agents start functioning on behalf of users, the need for high-bandwidth, low-latency, and power-efficient networking in data centres will significantly increase. He emphasised that Cisco is leading the way by providing advanced and secure networking technology essential for the AI-ready data centres of the future. 

“Cisco is at the forefront, delivering advanced, secure networking technology that’s foundational to the AI-ready data centres of the future,” Patel said. 

Cisco’s strategic partnerships with neocloud providers such as HUMAIN, G42, and Stargate UAE further solidify its leadership in next-gen AI infrastructure. At the same time, its Agile Services Networking architecture helps service providers modernise their networks to support new AI-driven services.

In Q3 FY2025, Cisco surpassed its annual target of $1 billion in AI infrastructure orders from hyperscalers a year ahead of schedule.

Matt Kimball of Moor Insights & Strategy said Cisco's ability to deliver AI-ready infrastructure to its customers, along with its investment in AI-enabled operations, differentiates the company. "AI reaching its full potential depends on a resilient network on which partners build and deliver solutions and services," Kimball said.

Cisco is also introducing a new “Unified Fabric Experience” using its Nexus product line. This is designed to help organisations simplify network operations and improve efficiency across various environments.

Other innovations, Cisco AI Defence and Cisco Hypershield, provide visibility, validation, and runtime protection across the end-to-end enterprise AI workflow and are now included in the NVIDIA Enterprise AI Factory validated design.

Meanwhile, the company also announced new innovations to help enterprises securely integrate AI into their operations, addressing the increasing risks of AI-driven cyberattacks. 

EPAM Thinks You Should Rethink Your Data Stack for AI https://analyticsindiamag.com/ai-highlights/epam-thinks-you-should-rethink-your-data-stack-for-ai/ Tue, 10 Jun 2025 06:47:05 +0000 https://analyticsindiamag.com/?p=10171506

"AI is no longer just an application layer," Kattuboina said. "We must now look at data platforms themselves as intelligent systems that integrate AI at their core."

EPAM is challenging the status quo of legacy data infrastructure, highlighting how traditional warehouses and batch-driven pipelines are becoming roadblocks to realising the full potential of generative AI. 

To move from experimentation to enterprise-scale impact, organisations must rethink their data foundations for agility, real-time intelligence and AI-native design. In this shift, the company is pushing for intelligent data platforms and touchless engineering to power real-time AI agents.

At DES 2025, Srinivasa Rao Kattuboina, senior director and head of the Data and Analytics Practice at EPAM Systems Inc. (EPAM), delivered a compelling session arguing that the era of agentic AI demands a radical revamp of how data platforms are architected, moving from traditional batch processing toward real-time, intelligent, and open infrastructures.

“AI is no longer just an application layer,” Kattuboina said. “We must now look at data platforms themselves as intelligent systems that integrate AI at their core.”

Why Existing Platforms are Missing the Mark 

Kattuboina noted that most current enterprise data platforms, built over decades through data warehouses, data lakes and lakehouses, are crumbling under the demands of generative AI and agentic systems. 

These legacy systems, heavily reliant on batch processing, are unable to support real-time decision-making or autonomous agents that depend on fresh, clean, and reliable data to function effectively.

He described this as a transition from traditional platforms to what he calls intelligent data platforms. These systems are designed not just to store and manage data but also to automate insights, deliver real-time recommendations, and align closely with a company’s AI goals.

One of the standout points Kattuboina emphasised was “dark data”, which refers to enterprise data that has been collected but remains unused.

“Every time we build a model, we only look at a portion of our data,” he said. “Terabytes are sitting in lakes and warehouses, untouched. With agentic systems, even SQL queries can now explore that dark data.”

He argued that the advent of AI assistants and agent-based architectures means organisations can finally start tapping into this hidden potential. But to do that, the data must be real-time, accessible and intelligently integrated across the pipeline.

Rethinking the Stack: From Batch to Real-Time

The shift to agentic AI brings with it new technological imperatives. Kattuboina explained that traditional data engineering practices, like automated pipelines and metadata-driven orchestration, are no longer sufficient. 

Instead, he proposed reconfiguring the data architecture, highlighting the need for real-time processing, open architectures, minimal layering and embedded intelligence across the pipeline. Technologies like Apache Iceberg, Flink and Kafka are increasingly becoming the backbone of this transformation. 

“With the pace at which Iceberg is evolving, you may not even need Snowflake or Databricks in the future. Open formats and compute frameworks can do much of the heavy lifting,” he added. Such platforms could dramatically reduce AI implementation timelines—from months or years to just weeks.

The intelligent data platform, as envisioned by Kattuboina, automates not just data ingestion but also transformation, feature engineering, and MLOps workflows. “You connect your source, and your data is processed to the golden layer without manual intervention,” he said. 

“You don’t need to insert metadata or orchestrate flows manually. That’s the level of intelligence we’re aiming for,” he added.

Eliminating Redundant Layers with AI Assistants

Kattuboina also explored enterprise use cases driving this shift. A typical scenario is the desire to replace thousands of static reports with a single AI assistant capable of querying real-time data. 

However, such a vision is only feasible if the underlying platform is intelligent, nimble, and built for AI from the ground up. “Everyone wants to eliminate the reporting layer with an assistant. But that requires the right data representation and infrastructure,” he emphasised.

Too often, organisations treat AI initiatives as separate threads, duplicating data into new stores rather than upgrading existing platforms. The key, he argued, is to bring intelligence into existing infrastructure so that it can serve both traditional analytics and emerging AI use cases.

To illustrate what’s possible, Kattuboina described a project implemented on AWS using Snowflake, Kafka, Flink, and Iceberg. The architecture enabled “touchless” data engineering, where engineers only had to configure table names and layer targets. 

The system automatically took care of ingestion, transformation, and orchestration. “You just configure what you want to process,” he said. “The entire pipeline—using Flink, Kafka, and Iceberg—runs without human touch. That’s the future,” he added.
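
What such a configuration-driven pipeline might look like is sketched below, using Flink SQL through PyFlink to wire a Kafka topic into an Apache Iceberg table. The topic, table and catalog names are hypothetical, the connector options assume the standard Flink Kafka and Iceberg connectors are on the classpath, and the sketch only illustrates the "configure, don't code" idea rather than reproducing the AWS project described above.

```python
# A hypothetical sketch of a configuration-driven streaming pipeline:
# a Flink SQL job that streams a Kafka topic into an Apache Iceberg table,
# parameterised only by topic and table names. Not EPAM's platform.
from pyflink.table import EnvironmentSettings, TableEnvironment


def build_pipeline(source_topic: str, target_table: str, bootstrap: str):
    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # Raw layer: Kafka source (assumes the Flink Kafka connector is available).
    t_env.execute_sql(f"""
        CREATE TABLE raw_events (
            event_id STRING,
            payload STRING,
            event_time TIMESTAMP(3)
        ) WITH (
            'connector' = 'kafka',
            'topic' = '{source_topic}',
            'properties.bootstrap.servers' = '{bootstrap}',
            'scan.startup.mode' = 'earliest-offset',
            'format' = 'json'
        )
    """)

    # Golden layer: Iceberg sink (assumes the Flink Iceberg connector).
    t_env.execute_sql(f"""
        CREATE TABLE {target_table} (
            event_id STRING,
            payload STRING,
            event_time TIMESTAMP(3)
        ) WITH (
            'connector' = 'iceberg',
            'catalog-name' = 'lake',
            'catalog-type' = 'hadoop',
            'warehouse' = 's3://lake/warehouse'
        )
    """)

    # The "touchless" step: ingestion-to-golden as a single declarative insert.
    return t_env.execute_sql(
        f"INSERT INTO {target_table} "
        f"SELECT event_id, payload, event_time FROM raw_events"
    )


# Engineers supply only configuration, not pipeline code, for example:
# build_pipeline("orders", "golden_orders", "kafka:9092")
```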

Kattuboina concluded with a call for nimble, simple, and real-time platforms that can integrate across multiple AI protocols and cloud ecosystems, from AWS to Azure and GCP. “We’re seeing tremendous pressure to deliver AI fast. The question is, do you need six months to deploy a model, or can you do it in a few weeks?”

For more information on EPAM’s data & analytics capabilities, visit https://welcome.epam.in/

The post EPAM Thinks You Should Rethink Your Data Stack for AI appeared first on Analytics India Magazine.

]]>
Perplexity’s SEC Data Integration Makes It the Investor’s AI Sidekick https://analyticsindiamag.com/ai-news-updates/perplexitys-sec-data-integration-makes-it-the-investors-ai-sidekick/ Fri, 06 Jun 2025 09:00:52 +0000 https://analyticsindiamag.com/?p=10171435

“Everyone deserves access to the same financial information that drives professional investment decisions.”

The post Perplexity’s SEC Data Integration Makes It the Investor’s AI Sidekick appeared first on Analytics India Magazine.

]]>

AI search engine startup Perplexity AI on Thursday rolled out access to SEC filings across its platform, aiming to make technical financial data easily understandable for all types of investors, from students to advisors to day traders.

The new SEC/EDGAR integration allows users to query financial documents directly within Perplexity’s Search, Research, and Labs interfaces. Answers are backed with citations and references, helping users trace insights back to original source documents. The company says the feature is designed to simplify complex reports that typically require domain expertise or expensive tools to interpret.

“Everyone deserves access to the same financial information that drives professional investment decisions,” the blog post states, highlighting a contrast with traditional financial data platforms that often gatekeep clarity behind paywalls and complexity.

With this launch, users can ask questions about earnings, risks, or strategy and receive natural-language answers grounded in regulatory filings. It is available to all Perplexity users. However, for Perplexity Enterprise Pro customers, the integration also works alongside datasets from FactSet, Crunchbase, and internal company files, offering deeper comparative research.
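
For developers, a similar filing-grounded question could in principle be posed programmatically. The sketch below is hypothetical: it assumes Perplexity's OpenAI-compatible chat completions API and a model name of "sonar", and since the article describes the SEC grounding only for the Search, Research, and Labs interfaces, the API behaviour shown here should be treated as an assumption rather than a documented feature.

```python
# Hypothetical sketch only: assumes Perplexity's OpenAI-compatible chat
# completions endpoint and an assumed model name; whether API responses use
# the same SEC/EDGAR grounding as the consumer interfaces is an assumption.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"

payload = {
    "model": "sonar",  # assumed model name
    "messages": [
        {"role": "system",
         "content": "Answer using information from SEC filings and cite sources."},
        {"role": "user",
         "content": "What risk factors did Apple highlight in its most recent 10-K?"},
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```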

This move positions Perplexity as a potential disruptor in the financial intelligence space, especially for retail investors seeking affordable, context-rich answers with a minimal learning curve.

Recently, Perplexity AI launched ‘Labs’, a new feature available to Pro users on the web, iOS, and Android that turns prompts into complete projects like reports, dashboards, spreadsheets, and simple web apps. 

It is positioned as an evolution of the platform’s existing ‘Deep Research’ tool—now renamed ‘Research’. Labs supports more complex and extended workflows. Unlike earlier modes that focused on information retrieval, Labs acts as a virtual project team, generating structured outputs using web browsing, code execution, and asset creation tools. Users can access generated content and interactive elements via dedicated ‘Assets’ and ‘App’ tabs.

The post Perplexity’s SEC Data Integration Makes It the Investor’s AI Sidekick appeared first on Analytics India Magazine.

]]>
Google Releases Data Science Agent in Colab https://analyticsindiamag.com/ai-news-updates/google-releases-data-science-agent-in-colab/ Tue, 04 Mar 2025 05:49:04 +0000 https://analyticsindiamag.com/?p=10165045

The agent achieves goals set by the user by orchestrating a composite flow which mimics the workflow of a typical data scientist.

The post Google Releases Data Science Agent in Colab appeared first on Analytics India Magazine.

]]>

Google released a Data Science Agent on the Colab platform on Monday, powered by its Gemini 2.0 AI model. The Data Science Agent is capable of autonomously generating the required analysis of the data file uploaded by the user. It is also capable of creating fully functional notebooks, and not just code snippets. 

Google said the agent “removes tedious setup tasks like importing libraries, loading data, and writing boilerplate code”. The agent achieves goals set by the user by “orchestrating a composite flow” which mimics the workflow of a typical data scientist. Users can employ the agent to clean data, perform exploratory data analysis, run statistical analyses, build predictive models, and handle other such tasks.
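
To picture what that automation replaces, here is a generic, illustrative sketch of the boilerplate a data scientist would otherwise write by hand in a notebook. The file name and column names are hypothetical, and this is not output from the Data Science Agent itself.

```python
# A generic illustration of the boilerplate the agent is said to automate:
# loading an uploaded file, minimal cleaning, quick EDA and a simple model.
# The file name and the "target" column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("uploaded_data.csv")   # data loading
df = df.dropna()                        # minimal cleaning

print(df.describe())                    # exploratory / statistical summary
print(df.corr(numeric_only=True))

X = df.drop(columns=["target"])         # simple predictive modelling
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```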

The generated code can be customised and extended to meet users’ needs. Moreover, results can also be shared with other developers on Colab. Google also said that the agent ranked fourth on DABStep (Data Agent Benchmark) on Hugging Face, ahead of GPT-4o, DeepSeek-V3, Llama 3.3 70B and more. 

The Data Science Agent was launched for trusted testers last December, but is now available on Google Colab. Colab is a free, cloud-based environment where Python code can be written and run within the web browser. It also provides free access to Google Cloud GPUs and TPUs. 

“We want to simplify and automate common data science tasks like predictive modelling, data preprocessing, and visualisation,” Google said.

Recently, Google also announced the public preview of Gemini Code Assist, a free AI-powered coding assistant for individuals. The tool is globally available and supports all programming languages in the public domain.

It is available in Visual Studio (VS) Code and JetBrains IDEs, as well as in Firebase and Android Studio. Google also said the AI coding assistant offers “practically unlimited capacity with up to 1,80,000 code completions per month”.

The post Google Releases Data Science Agent in Colab appeared first on Analytics India Magazine.

]]>
40 Under 40 Data Scientists Awards 2025 – Meet the Winners https://analyticsindiamag.com/ai-highlights/40-under-40-data-scientists-awards-2025-meet-the-winners/ Fri, 07 Feb 2025 10:17:24 +0000 https://analyticsindiamag.com/?p=10163015

This award recognises India’s top data scientists and their achievements in the machine learning and analytics industry.

The post 40 Under 40 Data Scientists Awards 2025 – Meet the Winners appeared first on Analytics India Magazine.

]]>

Amidst the three days of AI and ML workshops, conferences, presentations, and tech talks at the Machine Learning Developers Summit (MLDS) 2025, about 40 dynamic data scientists were presented with the 40 Under 40 Data Scientists Award on Thursday. 

This award recognises India’s top data scientists and their achievements in the machine learning and analytics industry.

This year’s winners are driving real impact at some of the world’s most influential companies, including Razorpay, HSBC, Genpact, PepsiCo, Bloomberg, Ford Motors, Paytm, Tata, Wells Fargo, Accenture, and more. 

They are creating AI solutions that improve efficiency and developing data models that prioritise privacy. More than just driving innovation, they are also fostering a culture of learning and growth.

The Winners of the 40 Under 40 Data Scientists Awards 2025

Abhinav Vajpayee, Senior Manager, Analytics at Razorpay Software Private Limited

A hands-on innovator in data-driven business strategy, Abhinav takes part in optimising payments, ads, and content acquisition. His solutions at Razorpay, Swiggy, and Vuclip boosted retention, ad revenue, and cost efficiency, earning industry recognition.

Abhishek Kumar, VP, Analytics Lead at HSBC

Abhishek is a data science leader known for high-impact analytics solutions across banking, FMCG, and retail. His work spans forecasting, pricing models, and customer insights, earning multiple awards for innovation and business impact.

Akhil Makol, Principal Engineer at NatWest Group

Akhil leads the data strategy and cloud architecture for commercial & institutional domains. He drives AI and analytics adoption by aligning data products with banking standards and leveraging AWS Data Lake.

Akshay Jain, AGM – Lead Digital Downstream at Hindalco Industries Ltd

Akshay Jain is a data scientist transforming aluminium manufacturing with AI, predictive analytics, and Industry 4.0. He leads a team that optimises operations and drives AI adoption on the shop floor, focusing on people-centric implementation.

Amresh Kumar, General Manager at Niva Bupa Health Insurance

Amresh, a data scientist with more than 15 years of experience, has worked across insurance, banking, and digital marketing. He has successfully implemented renewal, planning, and reinsurance models, with certifications in advanced insurance and Google Analytics.

Ankit Sati, Senior Manager at Genpact

Ankit is a vital member of Genpact’s AI/ML practices. He specialises in computer vision and GenAI solutions. With more than seven years of experience across industries, he is also an active Kaggle competitor and hackathon enthusiast.

Anup Kumaar Goenka, Deputy Director of Data Science at PepsiCo.

Anup is a data science innovator known for AI-driven solutions in forecasting and automation. He developed an award-winning AI meeting summarisation tool and led predictive analytics projects optimising supply chains and financial planning.

Anupam Tiwari, Data Science Manager at GoTo Company

Anupam contributed to GenAI for Southeast Asian languages, developing Sahabat AI, a suite of LLMs for Indonesian dialects. His models, openly available on Hugging Face, support AI innovation and adoption in Indonesia.

Arjit Jain, Co-Founder and CTO at TurboML

Arjit is an ML researcher specialising in real-time machine learning for fraud detection and personalisation. A former Google researcher and IIT Bombay graduate, he has published award-winning papers with more than 200 citations.

Avinash Kanumuru, Senior Manager – Data Science & Engineering at Niyo

Avinash is a data science contributor who publishes acclaimed articles on platforms like ‘Towards Data Science’. He has also developed open-source Python libraries (ml-utils, pyspark-utils), simplifying ML workflows and enhancing industry best practices.

Debanjan Mahata, Senior ML Research Engineer at Bloomberg

Debanjan Mahata is a leading researcher in NLP, machine learning, and Document AI, with publications in top conferences and patented innovations in document analysis. His recent work focuses on multimodal Retrieval-Augmented Generation (RAG) for DocVQA, enhancing financial and ESG data extraction.

Dr. Shital Patil, Solution Architect at Robert Bosch

Dr. Shital, a Prime Minister’s Fellowship recipient, specialises in AI-driven machinery condition monitoring and predictive maintenance. With four patents, she excels in fault diagnosis, XAI research, and solution architecture across India and the Middle East.

Dr. Vikram Singh, Senior Vice President of AI & Digital at EightBit AI Private Limited

Dr. Vikram is a researcher specialising in image super-resolution, deblurring, and deep learning. His work includes high-frequency refinement techniques and advanced neural networks for sharper image and video processing.

Gaurav Mhatre, Director at Tiger Analytics

Gaurav, a data science leader with 13+ years of experience, drives AI innovations across CPG, healthcare, telecom, and eCommerce. At Tiger Analytics, he has led route-to-market analytics, pet food optimisation, and 5G network routing using cutting-edge AI algorithms.

Gopinath Chidambaram, Global Technical Director, AI/ML & Cloud at Ford Motors

Gopinath, an AI/ML professional, holds five patents in autonomous vehicle perception and has filed two more in AI-driven monitoring systems. He is also co-authoring Gen AI Untrained, an upcoming book on AI concepts and applications.

Kantesh Malviya, Associate Vice President – Analytics at Paytm

Kantesh is a data science leader known for mentorship and thought leadership. He has spoken at IIT Bombay and developed innovative analytics solutions, driving user engagement and revenue growth across industries.

Kulbhooshan Patil, Head of Data Science and Analytics at TATA AIG General Insurance Company

Kulbhooshan is an award-winning AI leader recognised for innovation in risk management and user experience in insurance. His AI-driven solutions have earned multiple industry accolades, including the Best AI Technology Implementation of the Year and Outstanding AI & ML Solution Provider.

Mahima Bansod, Data Science and Analytics Leader at LogicMonitor

Mahima is a Data and AI leader with a decade of experience driving digital transformation at companies like Salesforce and Siemens. She has implemented ML models for customer retention, achieving a 94% renewal rate and 40% growth in product adoption.

Mahish Ramanujam, Associate Director – Analytics at Games24X7

Mahish led an award-winning project, developing a game-wise adaptive user-engagement model inspired by cricket analytics. The model boosted D30 LTV by 20% while reducing spending by 15%.

Mehuli Mukherjee, Vice President at Wells Fargo

Mehuli, VP at Wells Fargo, is an analytics leader specialising in GenAI, LLMs, and NLP. A gold medallist and PhD researcher, she is developing an Indian Sign Language recognition system while mentoring in AI and advocating for social impact.

Namit Chopra, VP-2 at EXL Service

Namit developed EXL Property Insights Solution (patent pending) and applies NLP/GenAI to insurance claims. His work includes LLM-based claim summarisation, fraud detection, and cause-of-loss identification.

Namita Khurana, Data Scientist Associate Director at Accenture

Namita is a leader in Revenue Growth Management (RGM), specialising in pricing, promotion, and assortment analytics across global markets. She has developed patented solutions, including AI-driven conversational tools for strategy optimisation and decision-making.

Nandita Saini, Manager – AI & Cognitive Solutions at e& enterprise

Nandita Saini led the productisation of e& Enterprise’s GenAI-based Utilities Copilot. She successfully turned the concept into a launched product, driving innovation.

Nishant Ranjan, Head of Analytics at Godrej Consumer Products

Nishant developed innovative pricing, forecasting, and AI-powered analytics models, including a first-of-its-kind promotion attribution model. He also pioneered MLOps best practices, enabling scalable machine learning deployment.

Pankaj Goel, Associate Vice President – Innovations at BA Continuum India Pvt Ltd (Bank of America subsidiary)

Pankaj Goel pioneered demand prediction in the CPG industry, analysing country-level demand impact on product lines, earning recognition from Procter & Gamble. He recently completed a proof of concept on digital transformation using LLM/GenAI.  

Pavak Biswal, Senior Manager at Merkle  

Pavak, a data science leader with 13+ years of experience, has driven business impact through AI and analytics innovations. He designed GenAI-powered chatbots, optimised pricing models, and led multimillion-dollar analytics projects across industries.  

Pawan Kumar Rajpoot, Lead Data Scientist at TIFIN 

Pawan has more than 10 years of experience in NLP research and development and has been the winner of 10-plus international competitions. His previous work experience includes companies like Tact.ai, Rakuten India and Huawei.

Puspanjali Sarma, Senior Manager – AI at ServiceNow

Puspanjali is an AI and data science expert specialising in AI product management, NLP, and predictive analytics. Her work at ServiceNow and beyond has driven innovative, AI-driven solutions with measurable business impact.  

Rajaram Kalaimani, Senior Principal Data Scientist at Mindsprint

Rajaram is the architect behind Mindverse, a GenAI platform, and precision agriculture solutions for the agri supply chain. His innovations are now hosted on Google, expanding AI capabilities at Mindsprint.  

Ritwik Chattaraj, Data Science Manager/Senior Data Scientist at Commonwealth Bank of Australia  

Ritwik Chattaraj is a data science leader with expertise in Generative AI, LLMs, and robotics. He has led AI/ML innovations at major banks, published extensively, and received multiple excellence awards.  

Sachin Kumar Tiwari, Deputy Vice President at Canara HSBC Life Insurance Company  

Sachin Kumar has led key AI and analytics projects, including GenAI chatbots, customer genomics, and sales governance models. His work spans predictive modelling, sentiment analysis, and geospatial analytics to drive business decisions.  

Sairam Mushyam, Head of Data and AI (SVP) at Zupee  

Sairam has revolutionised Real Money Gaming in India through AI-driven innovations, regulatory frameworks, and data infrastructure. His work spans blockchain-based fairness validation, user integrity systems, and GenAI-powered gaming experiences, driving $30M+ in annual revenue growth.  

Shravan Kumar Koninti, Associate Director – Data Science at Novartis  

Shravan is an AI researcher and innovator who is developing self-service AI tools and large-scale AI projects. He has won multiple hackathons, pioneered Generative AI applications, and contributed to healthcare AI advancements through collaborative research.  

Sumeet Pundlik, Delivery Unit Head at TheMathCompany (MathCo)  

Sumeet is an AI and data science expert with a patented service location optimisation system for a global CPG brand. He also explores Edge analytics, pushing the boundaries of predictive maintenance.  

Swapnil Ashok Jadhav, Senior Director – Machine Learning & Engineering at Angel One  

Swapnil developed Yubi’s first open-source repository and India’s first Fintech language model, YubiBERT, earning recognition from Meta and media coverage. At Unacademy, he created EdOCR, an OCR tailored for EdTech, and presented it at NVIDIA GTC 2021.

[Image: Google Cloud MLDS 2024 stage]

Meet the Winners of Previous Years

2024 | 2023 | 2022 | 2021 | 2020 | 2019

The post 40 Under 40 Data Scientists Awards 2025 – Meet the Winners appeared first on Analytics India Magazine.

]]>
8 Best Certified Companies for Data Professionals to Work For https://analyticsindiamag.com/ai-highlights/8-best-certified-companies-for-data-professionals-to-work-for/ Tue, 31 Dec 2024 08:28:27 +0000 https://analyticsindiamag.com/?p=10160596

These firms not only nurture talent but also empower data scientists to excel in their roles.

The post 8 Best Certified Companies for Data Professionals to Work For appeared first on Analytics India Magazine.

]]>

The best firms for data scientists and data engineers in 2024, as recognised by AIM, exemplify what it means to create workplaces where data professionals thrive. 

These companies prioritise continuous learning through robust upskilling and mentorship programs while fostering diversity with significant representation in leadership and inclusive team cultures. 

By offering flexible work arrangements, strong recognition systems, and comprehensive well-being initiatives, they address the holistic needs of their employees. 

These firms not only nurture talent but also empower data scientists to excel in their roles, making them standout choices for those seeking both career growth and a supportive work environment.

In alphabetical order, here are eight of the best-certified companies for data scientists to work for:

Aays

Aays, founded in 2018, is an enterprise analytics solutions provider headquartered in Gurugram. The company specialises in data and AI solutions in manufacturing, consumer packaged goods, retail, and automotive industries. 

The company has also developed a decision intelligence platform called AaDi. The platform serves as a copilot for finance functions and provides agentic capabilities and conversational features that help perform root cause analysis, flux reporting and variance or bridge analysis. Its multi-agent system orchestrates several tasks involving information retrieval, natural language understanding, and data extraction, summarisation and visualisation. 

“I am incredibly proud of the team we’ve built – talented, experienced professionals who bring genuine passion and purpose to their work every day. The success we are seeing wouldn’t be possible without their dedication, and I am committed to making sure Aays is a place where they can keep growing, pushing boundaries, and thriving while creating impactful solutions for our clients,” said Dwarika Patro, founder and COO at Aays.

The company earned the award for the second time in 2024. 

Carrier

Carrier Global, the company that specialises in heating, ventilation, air conditioning and refrigeration, established its first digital hub in Hyderabad in 2017.

Carrier’s India Hub is the company’s largest digital transformation centre, making up nearly 40% of the company’s digital talent from its two locations in Hyderabad and Bengaluru.

The India hub significantly contributes to Carrier’s flagship products: Abound, a cloud-native platform for healthy and efficient buildings, and Lynx, a cold chain platform that uses advanced data analytics, the Internet of Things (IoT), and machine learning. The company is also set to establish an AI centre of excellence in India with over 100 employees.

“Carrier Digital Hub India plays a pivotal role in our global strategy, accelerating our digital transformation journey. By leveraging India’s robust talent pool and strategic capabilities in cybersecurity, cloud operations, customer experience, IoT, and data science, we enhance operational agility and drive innovation across our enterprise,” said Bobby George, senior vice president and CDO of Carrier, in a recent interview with AIM.

Diggibyte

Diggibyte Technologies Private Limited is a technology consulting and services company that specialises in automating IT operations with artificial intelligence and business intelligence tools. Their services also include IoT streaming, and the company processes a trillion messages yearly from devices. 

The company is also said to have developed over 1,000 data pipelines, which process 10 petabytes of data yearly and successfully migrated over 500 legacy data warehouse objects to modern warehouses. 

“We are delighted to share the exciting news that Diggibyte Technologies Pvt Ltd has been honoured as the best firm for data engineers. This prestigious recognition is a reflection of our exceptional team and the outstanding culture we’ve fostered,” said Lawrance Amburose, co-founder at Diggibyte. 

In 2024, Diggibyte Technologies was certified as the best firm for data engineers by AIM.

General Mills

General Mills’ foundation dates back to 1866, on the banks of the Mississippi River in Minneapolis, US. The company houses some of the world’s iconic brands like Cheerios, Pillsbury, Nature Valley and Häagen-Dazs. 

In 1996, it established the General Mills India Centre (GIC) in Mumbai, which currently has 2,000 employees. The centre delivers value across supply chain, sales strategy and intelligence, and consumer and market intelligence, among other areas. 

Earlier this year, the company introduced MillsChat, a private generative AI tool for employees in non-plant locations across the US, Canada, the UK, and India.

It is built atop Google Gemini and provides a secure platform for writing, summarising, and brainstorming assistance. The company is also examining how AI can drive value, competitive advantage, and growth with its commercial, marketing, and supply chain teams utilising various AI and ML products.

“We have cultivated a culture of innovation through our focus on continuous learning and inclusion. Our team thrives on collaboration and is dedicated to pushing boundaries,” said Ashish Mishra, head of digital and technology at General Mills India Centre.

Hansa Cequity

Hansa Cequity, founded in 2008, is headquartered in Mumbai. It helps brands and organisations harness the power of AI and data analytics for marketing, customer strategy, campaign management, and customer relationship centre services. 

Hansa Cequity also offers a suite of ten AI-enabled tools, all designed to transform customer interactions and enhance decision-making and operational efficiency. Tools like Varta enable AI-powered call analytics, while Cequity SMART offers a real-time recommendation engine for e-commerce. Other tools include a market potential analysis platform, an image analytics tool, and a data deduplication tool to ensure accuracy. 

“Hansa Cequity excels at merging data rigor with marketing deployment and customer centricity, creating impactful solutions for clients. This commitment not only drives good results but also empowers our analytics team members to thrive in their careers. The environment fosters growth and innovation, paving the way for professional excellence,” said Prasad Kothari, head of data science and AI at Hansa Cequity.

The company has been certified as the best firm for data scientists thrice, including in 2024.

Intuit

Intuit was founded in California in 1983, and the company established a centre in Bengaluru in 2005. Intuit India focuses on developing and improving the company’s flagship products, namely QuickBooks, TurboTax, Credit Karma, and Mailchimp. 

Intuit’s India development centre employs around 1,900 people. In India, the company designed the Intuit Enterprise Suite, which enables rapid scaling and adoption of the company’s financial management tools. 

“At Intuit AI, I am inspired every day by the exceptional team of ML scientists and engineers I work with. We are dedicated to pushing the boundaries of machine learning and AI to deliver outstanding benefits to our customers,” said Anusha Mujumdar, senior manager in data science and AI at Intuit.

“We are committed to nurturing our teams, fueled by mentorship, access to the latest tech stack, and a vibrant AI culture,” she added. 

MiQ

MiQ is a global programmatic media partner that offers data-driven marketing solutions for agencies and brands. Founded in the United Kingdom in 2019, the company opened its first office in India in 2020. 

MiQ’s services include predictive analytics, campaign management, and audience targeting to deliver high-performance marketing campaigns. MiQ’s Performance Engine enables marketing professionals to deploy “intelligent optimisation strategies” and use necessary custom algorithms to achieve their KPIs. 

The platform crunches over 2.6 petabytes per day from over 170 data feeds to enable over 500 custom analytics setups. The company also mentions that it completes nearly 10,000 campaign optimisations every week. 

“The impact that data has and will have continues to grow every day. Our focus has been on solving major business challenges that go beyond media campaigns. Thanks to our entire team for making our DS practice one of the best in the industry,” said Ramya Parashar, chief operating officer at MiQ.

AIM has awarded MiQ the certification for the third time. 

Rakuten

Bengaluru-based Rakuten India houses more than 1,700 employees. The company provides businesses with a wealth of knowledge in multiple sectors of technology, such as mobile and web development, web analytics, platform development, backend engineering, data science, artificial intelligence (AI), machine learning (ML) and more. 

The company also features dedicated centres of excellence for data analytics, DevOps (development operations), information security, engineering, and mobile applications development.

“With a culture fueled by innovation, usage of cutting-edge technology, collaboration and strong business communication, we’re proud to be the premier destination where AI talent thrives, and revolutions begin,” said Anirban Nandi, head of AI products and analytics (Vice President) at Rakuten India.

Rakuten was certified as the best firm for data scientists for the second time in 2024. 

The post 8 Best Certified Companies for Data Professionals to Work For appeared first on Analytics India Magazine.

]]>
Palantir’s New Cohort to Drive Manufacturing Innovations at ‘Warp Speed’ https://analyticsindiamag.com/ai-news-updates/palantirs-new-cohort-to-drive-manufacturing-innovations-at-warp-speed/ Thu, 12 Dec 2024 07:53:21 +0000 https://analyticsindiamag.com/?p=10143405

‘At the dawn of WW2, we didn’t have a Defense Industrial Base; we had an American Industrial Base. This is also what our future must look like.’

The post Palantir’s New Cohort to Drive Manufacturing Innovations at ‘Warp Speed’ appeared first on Analytics India Magazine.

]]>

Palantir, the US-based data analytics firm, has announced a new cohort of companies that will use its tool Warp Speed to reindustrialise the United States’ manufacturing and production capabilities. Companies including Anduril Industries, L3Harris, Panasonic Energy of North America (PENA), and Shield AI are part of the cohort. 

Warp Speed is Palantir’s manufacturing operating system, which gives companies a unified platform for accessing multiple production tools. It is designed to adapt to a company’s production and business processes rather than forcing those processes to adapt to it. 

The platform combines several tools like enterprise resource planning (ERP), manufacturing execution systems (MES), product lifecycle management (PLM), and programmable logic controllers (PLC). This includes products from notable companies such as Siemens, SAP, Oracle and SolidWorks.

“The inaugural cohort is already using the software to gain an advantage in dynamic production scheduling, engineering change management, automated visual inspection for quality, and more,” read the announcement. 

One of the member companies in the cohort, Anduril, observed a 200-fold efficiency gain in dealing with supply shortages. A few days ago, Anduril announced a partnership with OpenAI to bring AI technologies to U.S. national defence and security interests. 

Leaders from other companies echoed the sentiment. “Warp Speed is enabling us to rapidly transform our manufacturing operation in Nevada and accelerate the ramp-up of our new factory in De Soto, Kansas,” said Allan Swan, President of PENA. 

In another instance, Shield AI says it is able to handle ‘record demand’ for V-BAT, its flagship unmanned aerial system, through Warp Speed OS. “Warp Speed will help our different functions identify chokepoints and stay in lock step,” said Ryan Tseng, CEO and founder of Shield AI.

Last month, Palantir also announced a partnership with AWS and Anthropic to provide US intelligence and defence agencies with Claude 3 and 3.5 models. 

“Our partnership with Anthropic and AWS provides US defence and intelligence communities the toolchain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions,” said Shyam Sankar, chief technology officer at Palantir.  

The post Palantir’s New Cohort to Drive Manufacturing Innovations at ‘Warp Speed’ appeared first on Analytics India Magazine.

]]>
Lingaro CEO Thinks the GenAI Enterprise Revolution is Slower Than it Looks https://analyticsindiamag.com/ai-features/lingaro-ceo-thinks-the-genai-enterprise-revolution-is-slower-than-it-looks/ Sat, 23 Nov 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10141498

India is a critical hub for Lingaro, as CEO Mantle emphasised the country's "tremendous wealth of data talent" and its reputation as a global leader in the tech services ecosystem.

The post Lingaro CEO Thinks the GenAI Enterprise Revolution is Slower Than it Looks appeared first on Analytics India Magazine.

]]>

When Lingaro Group CEO Sam Mantle visited Bengaluru last week, the city’s traffic may have overwhelmed him, but the comfort of one of its luxurious hotels provided him with a much-needed respite. We at AIM had the opportunity to catch up with him in this elegant setting, delving into his thoughts and vision for Lingaro’s future in India.

The Polish IT firm is focused on data, and data alone, an emphasis Mantle returned to repeatedly during the conversation.

The company first set foot in India a few years ago. Now, it is here again with ambitious plans to increase its revenue by 30% year over year and double its workforce to 400 employees. 

A realist at heart, Mantle is pragmatic about industry trends and has strong opinions on generative AI. 

“There’s been a lot of hype around generative AI, but the promise hasn’t been realised. It’s going to take a lot longer than people think,” he said. Mantle isn’t the only one with this sentiment, and there’s more to the story. 

Barriers to the AI Promise

For one, Mantle mentioned that the legacy workforce is the biggest barrier to adopting a disruptive technology like AI. 

“Most people have not grown up with the capabilities that are available today. So, we have to rewire the way we think and the way we’re organised,” he explained. 

Moreover, Mantle also pointed out that, unlike individuals, it isn’t going to be easy for enterprises to adopt generative AI quickly, like the flip of a switch. 

Despite the difficulties, companies have been actively deploying AI services and products in their workflows. In the recent Microsoft Ignite 2024 event, Microsoft said that nearly 70% of the Fortune 500 companies now use Microsoft 365 Copilot. Similarly, LangChain’s recent survey revealed that 51% of companies have already implemented AI agents in their tech stack. 

India is not far behind, either. It was recently reported that over 18,000 developers at Infosys have written code using AI and that the service provider giant is fully embracing generative AI.

That said, Lingaro is a proponent of AI, and the progressive adoption of AI tools aligns seamlessly with its vision. Whatever specific AI use case a company is implementing, Lingaro says it provides the building blocks, in keeping with its core ethos of delivering data services. 

Lingaro sees a big opportunity to assist companies in adopting generative AI. Mantle said that most companies don’t have the right level of data ownership and governance. If companies do not know what data they are using for their AI algorithms, they might not be able to realise the best outcomes. 

Mantle believes that regardless of where companies choose to prioritise, the backend, the engine, the accelerators, and the data assets must all be orchestrated seamlessly so they can quickly gain an advantage.

This is also what the likes of Snowflake and Databricks are doing, albeit as packaged products. Mantle revealed that Lingaro isn’t just offering a product of its own but is also partnering with platforms like Snowflake, Databricks, and other data-focused software on the market. Moreover, Lingaro has established close relationships with ‘hyperscalers’ like Microsoft, Google, and AWS, indicating its role across a broader section of the ecosystem. 

“Increasingly, the big enterprises are moving more towards hybrid environments. For example, you have to combine Azure with GCP. You have to combine GCP with AWS. Nobody wants to be all in with one—that’s beautiful for us because that’s the complexity that we need to help them navigate,” Mantle further said. 

That said, its big ambitions in India come up against even bigger competitors in the world of AI. 

Battle With the Great Indian IT

India is a critical hub for Lingaro, as Mantle emphasised the country’s “tremendous wealth of data talent” and its reputation as a global leader in the tech services ecosystem.

With just over 2,000 successful projects, Lingaro’s portfolio is rather humble in comparison to industry giants like HCL, Infosys, or TCS. These companies are also exponentially expanding their project portfolios with time. 

For example, in the Q1 2024-25 earnings call, TCS chief K Krithivasan said, “We are currently executing about 270 AI projects across TCS. Our AI pipeline has doubled in a quarter to $1.5 billion. Our investments in research and innovation continue. In Q1, we applied for 154 patents and were granted 277 patents.” 

At the FY25 Q2 earnings call, the company announced that over 600 AI and generative AI engagements had been deployed successfully. 

Even HCL recently onboarded 25 new clients, owing to the success of its AI suite, ‘GenAI Force’. The suite includes market-leading AI products such as Anthropic’s Claude and GitHub Copilot. So, if Lingaro is to make a mark, it has strong competition to overcome. 

Lingaro differentiates itself by avoiding the ‘doing everything for everyone’ approach. Mantle said Lingaro focuses on data-specific services for its clients instead of the whole solution. “We’re only delivering data-related services, so we’re not distracted by all of the other things that are going on in the industry,” he added. 

Mantle also highlighted Lingaro’s priority of understanding its clients’ technology and business needs before laying out its expertise in data. This ability to combine technology and domain-level data understanding, he believes, is what sets Lingaro apart from most other, more generic services. 

Notably, we are living in a time when AI agents are poised to make it big, possibly threatening service providers. This is especially true due to the ease with which companies can build and deploy these agents. Owing to this, several companies in India’s IT sector may face difficulties.

Mantle, however, believes that deploying application layers across every part of a company’s IT estate is not the whole answer. The onus, he argues, must be on the data component that sits inside these applications and the role it plays. 

“I’m interested in who owns the data component that may sit in that application. Because if we really want to streamline things, somebody has to be responsible for that data, no matter where it flows within the organisation,” he concluded.

The post Lingaro CEO Thinks the GenAI Enterprise Revolution is Slower Than it Looks appeared first on Analytics India Magazine.

]]>
UST Expands India Presence with a Second Office in Bengaluru https://analyticsindiamag.com/ai-news-updates/ust-expands-india-presence-with-a-second-office-in-bengaluru/ Tue, 05 Nov 2024 09:38:10 +0000 https://analyticsindiamag.com/?p=10140203

UST’s growth in India has been significant, with plans for additional facilities, including a second campus in Kochi, Kerala, aimed at creating 3,000 jobs over the next five years.

The post UST Expands India Presence with a Second Office in Bengaluru appeared first on Analytics India Magazine.

]]>

UST, a digital transformation solutions company, has opened its second delivery center in Bengaluru, Karnataka, as part of its ongoing expansion in India. The new facility, located in Helios Business Park, Kadabeesanahalli, covers over 17,000 square feet and accommodates more than 300 workstations, featuring a Design Experience Center and other modern amenities.

Founded in 1999 and headquartered in California, UST operates multiple offices across India and employs over 20,000 individuals in the country. The company is marking its 25th anniversary this year, reflecting on its growth and commitment to innovation in the digital transformation space.

Bengaluru is now UST’s second-largest global delivery center, housing over 6,000 employees. The company first established operations in the city in 2012. UST’s growth in India has been significant, with plans for additional facilities, including a second campus in Kochi, Kerala, aimed at creating 3,000 jobs over the next five years.

The inauguration of the new office was attended by UST executives, including Alexander Varghese, Chief Operating Officer, and local leadership, who highlighted the importance of Bengaluru as a hub for IT and technology talent. The expansion is expected to enhance UST’s capabilities in delivering innovative solutions across various sectors, including healthcare, logistics, and retail.

The post UST Expands India Presence with a Second Office in Bengaluru appeared first on Analytics India Magazine.

]]>
Bangalore Leads the Way in Sourcing Talent for Frontend, Backend, DevOps, and Data Science Roles in India https://analyticsindiamag.com/ai-news-updates/bangalore-leads-the-way-in-sourcing-talent-for-frontend-backend-devops-and-data-science-roles-in-india/ Tue, 08 Oct 2024 12:02:55 +0000 https://analyticsindiamag.com/?p=10137857

Bangalore’s ability to consistently produce top-tier talent is the reason behind India being on the global stage, not just in terms of technological output but also as a source of talent.

The post Bangalore Leads the Way in Sourcing Talent for Frontend, Backend, DevOps, and Data Science Roles in India appeared first on Analytics India Magazine.

]]>

A recently released Instahyre report states that Bangalore has emerged as the most sought-after sourcing location for multiple BFSI tech skills (Frontend, Backend, DevOps and Data Science), followed by Pune and Hyderabad. The report further revealed that Bangalore also tops as a source of talent for DevOps roles, providing more than 30% of professionals skilled in Docker, Kubernetes, Jenkins and AWS. 

Over the years, Bangalore has become a key location for the disruptive startup ecosystem, tech MNCs, and a steady influx of technology innovations. The city is now a hotbed for companies looking to tap into top-tier talent.

Despite facing uncertainties and layoff announcements, tech hiring in the Indian IT industry remains resilient, showing a positive growth outlook. 

Data indicates that major IT companies are planning to expand their workforce to meet the growing demand for IT services in India. On average, these tech giants are expected to add between 40,000 and 50,000 new employees across various tech roles in the near future.

Bullish on the industry’s growth, Sarbojit Mallick, co-founder of Instahyre, said, “Bangalore, the ‘Silicon Valley of India’, has solidified its position as the leading hub for tech talent in the country. It consistently outshines other Indian cities, making it a crucial player in both the national and global tech landscape.” Mallick further added that along with Hyderabad, Bangalore was the cradle of tech services right from the outsourcing days. “Building on that legacy and the startup boom, it has cemented its leading position year-on-year, as our report’s data shows,” he noted. 

For data science roles, Bangalore again plays host to more than 30% of the talent for Machine Learning, Computer Vision, NLP and Data Visualization, followed by Pune and Hyderabad. The city also leads the pack in sourcing security professionals proficient in vital skills like Info Security, Security Testing, App Security, and Network Security. 

When it comes to talent sourcing, the Instahyre report reveals that Bangalore holds the lion’s share, supplying experts skilled in Data Analysis, Warehousing, Data Collection, and Data Extraction.

In addition, the city also leads tech hiring within the Networking space, providing skilled talent from Network Analysis, Network Testing, Network Admin and Troubleshooting functions and QA + Risk skills including Quality Assurance, Quality Control, Risk Assessment, and Risk Management. 

Given its cutting-edge technology ecosystem, Bangalore continues to cement its position as the preferred destination for companies seeking skilled BFSI tech talent across various functions. Without doubt, the city’s ability to consistently produce top-tier talent is the reason behind India being on the global stage, not just in terms of technological output, but also as a source of talent.

The post Bangalore Leads the Way in Sourcing Talent for Frontend, Backend, DevOps, and Data Science Roles in India appeared first on Analytics India Magazine.

]]>
Data Science Hiring and Interview Process at SAP Labs India https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-sap-labs-india/ Mon, 29 Jul 2024 10:13:34 +0000 https://analyticsindiamag.com/?p=10110557

As a data scientist at SAP Labs, you will analyse large datasets, implement best practices to enhance ML infrastructure, and support engineers and product managers in integrating ML into products.

The post Data Science Hiring and Interview Process at SAP Labs India appeared first on Analytics India Magazine.

]]>

German tech conglomerate SAP Labs has been one of the major players in the generative AI race on the enterprise side. The company recently introduced Joule, a natural-language generative AI assistant that allows access to the company’s extensive cloud enterprise suite across different apps and programs. It will provide real-time data insights and action recommendations.


With a global presence in 19 countries, SAP Labs is responsible for driving SAP’s product strategy, developing and localising its core solutions, and contributing to the SAP Business Technology Platform. 

SAP was founded in 1972 by five former IBM employees: Dietmar Hopp, Hasso Plattner, Claus Wellenreuther, Klaus Tschira, and Hans-Werner Hector. SAP Labs is the R&D arm of SAP, with its second-largest office space in Bengaluru. 

AIM got in touch with Shweta Mohanty, vice president and head of human resources at SAP India, and Dharani Karthikeyan, vice president and head of engineering for analytics at SAP Labs India, to understand the company’s AI and analytics play, customer stories, hiring process for data scientists, work culture and more. 

AI & Analytics Play

“We have fully embraced generative AI in our business AI concept, aiming to provide AI that is responsible, reliable, and relevant. The goal is to infuse AI into business applications, with a focus on trust and outcomes,” Karthikeyan told AIM.

SAP has a portfolio of over 350 applications spanning various use cases, from cash management to document scanning. The company is enhancing its Business Technology Platform (BTP) with a generative AI layer.  The team aims to improve business processes while maintaining human control over decisions. They have collaborated with Microsoft for Human Capital Management tools, combating biases in recruiting, and introduced a Business Analytics tool for faster insights. 

SAP is also partnering with Google Cloud to launch a holistic data cloud, addressing data access challenges. Additionally, they have invested in generative AI players Anthropic, Cohere, and Aleph Alpha, diversifying their capabilities.

Interview Process

The hiring process for tech roles involves five to six steps starting with profile screening, focusing on the candidate’s development background and programming language proficiency. As described by Mohanty, this is followed by an online assessment to test programming skills, lasting 60 to 90 minutes. Technical interviews include case studies to assess proficiency and hands-on experience. 

For senior roles, there’s a discussion with a senior leader to gauge cultural alignment. The final step is an HR discussion focusing on cultural fit and interest in the organisation. For college recruitment, the process includes live business solutions assessments. The process concludes with a rigorous background verification.

When it comes to finding the right fit for SAP Labs, “the ideal candidate should have a comprehensive understanding of ML algorithms and be able to build and maintain scalable solutions in production,” added Karthikeyan, highlighting that this involves the use of statistical modelling procedures, data modelling, and evaluation strategies to find patterns and predict unseen instances. 

The roles involve applying computer science fundamentals such as data structures, algorithms, computability, complexity, and computer architecture. Collaborating with data engineers is also essential for building data and model pipelines, as well as for managing the infrastructure needed to take code into production. 

As a data scientist at SAP Labs India, you will also analyse large, complex datasets, research and implement best practices to enhance the existing ML infrastructure, and support engineers and product managers in integrating ML into products.
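
In practice, “evaluation strategies to find patterns and predict unseen instances” usually boils down to holding data back from training and scoring the model on it. The sketch below is a generic illustration of that practice using scikit-learn on a public dataset; it reflects a common industry workflow, not SAP’s internal tooling.

```python
# A minimal, generic sketch of evaluating a model on unseen instances:
# hold out data the model never sees during training and score on it.
# This illustrates the general practice only, not SAP's internal stack.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold-out split: the test set stands in for unseen instances.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("Hold-out accuracy:", model.score(X_test, y_test))

# Cross-validation gives a more stable estimate of generalisation.
cv_scores = cross_val_score(LogisticRegression(max_iter=5000), X_train, y_train, cv=5)
print("5-fold CV accuracy:", cv_scores.mean())
```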

Work Culture in SAP

SAP’s work culture is characterised by abundant learning opportunities and hands-on experiences, where employees have the chance to shadow leading data scientists, participate in fellowship projects for stretch assignments, and explore different parts of the business. This hands-on approach extends to customer interactions and pre-sales experiences. 

“These opportunities, along with the focus on learning and customer engagement, give SAP an edge over other organisations hiring in data science and machine learning,” Mohanty commented.

SAP prioritises its employees’ well-being through a comprehensive set of benefits and rewards. The company recognises diverse needs beyond healthcare and retirement plans, offering global and local options for work-life balance, health and well-being, and financial health. 

Embracing a highly inclusive and flexible culture, the company promotes a hybrid working model allowing employees to balance office and remote work. Employee Network Groups foster a sense of community, and inclusive benefits include competitive parental leave and disability support. 

The ERP software giant also aims to foster personal and professional growth, providing learning opportunities, career development resources, and a leadership culture focused on doing what’s right for future generations. It values fair pay, employee recognition, generous time-off policies, variable pay plans, total well-being support, and stock ownership opportunities for all employees.

Why Should You Join SAP Labs?

SAP Labs offers a sense of purpose and involvement in transformative technology phases. At SAP, candidates dive into cutting-edge technologies, explore diverse industries, and embrace continuous learning and innovation. 

Mohanty explained how the team values adaptability, emphasising fungible skills and a proactive mindset, especially in areas like AI and generative AI. 

“We seek individuals ready to tackle new challenges and solve complex problems, fostering a dynamic and impactful work environment,” she explained. 

Adding to Mohanty’s point, Karthikeyan said, “The work at SAP involves mission-critical applications, like supporting cell phone towers or vaccine manufacturing, so the integration of generative AI into these applications offers a unique combination of purpose and technological advancement, providing developers with a high sense of purpose in seeing their software run essential business and retail operations. This phase of technological transformation at SAP is especially significant for new joiners.” 

Check out the job openings here.

The post Data Science Hiring and Interview Process at SAP Labs India appeared first on Analytics India Magazine.

]]>
Data Science Hiring and Interview Process at ServiceNow https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-servicenow/ Mon, 29 Jul 2024 10:13:21 +0000 https://analyticsindiamag.com/?p=10111339

The company has eight open positions for applied research scientists and ML engineers.

The post Data Science Hiring and Interview Process at ServiceNow appeared first on Analytics India Magazine.

]]>

California-based ServiceNow, one of the leading cloud companies delivering software as a service (SaaS), has brought purpose-built AI to its highly intelligent NOW platform. Last September, the company expanded this using a domain-specific ServiceNow language model designed for enterprise use.


The NOW platform converts machine intelligence into practical actions, aiming to enhance process efficiency, reduce risks, optimise workforce productivity, and facilitate automated workflows with the help of purpose-built AI, providing users with self-solving capabilities through augmented intelligence.

“With this (NOW Platform), we are enabling enterprises to increase process efficiency, minimise risk by avoiding human mistakes, optimise workforce productivity to focus on higher value tasks​, leverage automated workflows to drive standardisation and empower users to self-solve with augmented intelligence,” Sumeet Mathur, vice president and managing director of ServiceNow’s India Technology and Business Center, told AIM. 

The company has eight open positions in data science.

Applied research scientists in the Core LLM Team focus on developing generative AI solutions, collaborating with diverse teams to create AI-powered products and work experiences. Simultaneously, they conduct research, experiment, and mitigate risks associated with AI technologies to unlock novel work experiences. 

On the other hand, as a machine learning engineer, you’ll craft user-friendly AI/ML solutions to enhance enterprise services’ efficiency, emphasising accessibility for users with varying technical knowledge. 

Inside ServiceNow’s AI & Analytics Lab

The Now platform aims to create proactive and intelligent IT processes. The platform is built around big data and advanced analytics, incorporating real-time and stored data to enhance accessibility and support various use cases, such as self-service, incident detection, pattern discovery, knowledge base optimisation, workflow automation, and user empowerment. 

ServiceNow’s self-service has evolved with augmented AI and automation, using intelligent virtual agents to understand customer intent and resolve complex issues. Augmented agent support focuses on improving human capabilities through recommendation engines, automated workflows, and increased productivity, aligning with specific business objectives for measurable value.

Tapping into Generative AI

Last September, the company expanded its Now Platform using a domain-specific ServiceNow language model designed for enterprise use, prioritising accuracy and data privacy. The Now LLM incorporates top-notch foundational models, including a pre-trained model called StarCoder, developed in collaboration with Hugging Face and a partnership with NVIDIA, along with other open-source models. 

The initial release of Now LLM introduces features such as interactive Q&A, summarisation capabilities for incidents/cases and chats, and assistive code generation for developers. The development of this model involved significant efforts from engineering, research, product, QE, and design teams, as well as data centre operation teams managing the GPU infrastructure. 

Clients like Mondelez, Delta, Standard Chartered, Coca-Cola, LTIMindtree, and various other companies across industries have used the platform for AI applications in areas like improving healthcare workflows, providing financial auditors with quick insights, and transforming supply chain management in manufacturing. 

“We believe that the most constructive and value-creating strategies for generative AI are grounded in embedding human experience and expertise into its core capabilities,” added Mathur. 

The company therefore adopts a humans-in-the-loop model for generative AI, integrating human expertise into its core capabilities. The NOW platform’s generative AI is applied in diverse use cases, including case summarisation, content generation, conversational exchanges, and code generation. 

Interview Process

“Our hiring process for data science roles follows a structured approach aimed at attracting a diverse pool of qualified candidates. We publish job openings on various platforms, including our career site, job boards, social media, and professional networks,” added Mathur. The process involves careful evaluation through interviews to ensure the selection of the right candidate. 

The interview process consists of three technical rounds, each focusing on key competencies such as programming proficiency and experience with core ML and LLM. This assessment is followed by an interview with the hiring manager and, for certain roles, an additional round with the senior leadership. 

Mathur also shared the common mistakes candidates make during the data science interview process. These include inadequate technical readiness, a limited understanding of the company's objectives and the role, failure to ask insightful questions, overlooking the latest AI/ML trends, and neglecting to demonstrate effective problem-solving skills. 

Expectations

Upon joining the data science team at the Advanced Technology Group (ATG) of ServiceNow, candidates can expect to work within a customer-focused innovation group. The team builds intelligent software and smart user experiences using advanced technologies to deliver industry-leading work experiences for customers. 

The ATG comprises researchers, applied scientists, engineers, and product managers with a dual mission: building and evolving the AI platform and collaborating with other teams to create AI-powered products and work experiences. The company expects that team members will contribute to laying the foundations, conducting research, experimenting, and de-risking AI technologies for future work experiences.

Work Culture

“Our company fosters a purpose-driven work culture where employees have the opportunity to be part of something big. We make work better for everyone—including our own. We know that your best work happens when you live your best life and share your unique talents, so we do everything we can to make that possible for our employees,” Mathur added.

Some of the key perks include a hybrid working model, paid time off, well-being days, employee belonging groups, DEI learnings, internal opportunities, and paid volunteering.

According to him, joining ServiceNow means becoming part of an inclusive and diverse community with resources for well-being, mental health, and family planning, among others. Prioritising value and trust, the SaaS giant provides ongoing support for learning and development, growth pathways, and action-oriented feedback aligned with clear expectations. These programs cater to individuals at all career stages. 

“We’re committed to creating a positive impact on the world, building innovative technology in service of people – with a core set of values and a deep responsibility to each other, our customers and our global communities,” he concluded.

Check out the careers page now.

The post Data Science Hiring and Interview Process at ServiceNow appeared first on Analytics India Magazine.

Data Science Hiring and Interview Process at Happiest Minds Tech https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-happiest-minds-2/ Mon, 29 Jul 2024 10:08:16 +0000 https://analyticsindiamag.com/?p=10102220

Happiest Minds is currently on the lookout for a specialist in marketing analytics with over 8 years of relevant experience. 

The post Data Science Hiring and Interview Process at Happiest Minds Tech appeared first on Analytics India Magazine.


Founded in 2011 by Ashok Soota, a serial entrepreneur and Indian IT veteran, Happiest Minds boasts a robust data science team comprising over 300 members, including data engineers, intelligence specialists, and data science experts.



Based in Bengaluru, the Silicon Valley of India, and operating across the US, the UK, Canada, Australia, and the Middle East, the company blends augmented intelligence with natural language understanding, image and video analysis, and technologies such as augmented and virtual reality.

This combination helps enterprises craft customer interactions that stand out from rivals and set new industry standards.

Happiest Minds distinguishes itself from traditional IT companies by avoiding legacy systems like SAP and ERP, believing that staying entrenched in these technologies limits growth and innovation. “Instead, we have chosen to focus on digital technologies like AI, which is the future of IT,” said Sundar Ramaswamy, SVP, Head of Analytics CoE, in an exclusive interview with AIM.

The team conducts regular market scans to identify the latest technologies and ensure it stays at the forefront of innovation. This approach allows it to co-create and innovate with clients while building new solutions.

Now Hiring

Happiest Minds is currently on the lookout for a specialist in marketing analytics. The ideal candidate should possess a Master’s or Bachelor’s degree in Computer Science, STEM, or an MBA, demonstrating strong problem-solving skills. They should also have over eight years of experience in the analytics industry, particularly in marketing. 

This experience should include a track record of using AI to enhance the customer journey, encompassing areas such as customer acquisition, nurturing, retention, and improving the overall experience.

The technical skills required include proficiency in statistical techniques, ML, text analytics, NLP, and reporting tools. Experience with programming languages such as R, Python, HIVE, SQL, and the ability to handle and summarise large datasets using SQL, Hive-SQL, or Spark are essential.

Additionally, knowledge of open-source technologies and experience with the Azure or AWS stack are desirable.

AI & Analytics Play

This team collaborates closely with domain teams across diverse industry verticals. Their analytics process follows eight key steps. They integrate data from multiple sources, use BI tools for descriptive analytics, perform ad hoc analysis, build data pipelines and auto ML pipelines, retrain models regularly, focus on customer understanding, optimise cloud usage, and ensure data governance.
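
To make the pipeline-and-retraining steps concrete, here is a minimal scikit-learn sketch; the toy data, features, and retraining hook are invented for illustration and are not Happiest Minds' implementation.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Toy data standing in for integrated multi-source customer data.
X = np.array([[25, 1200.0], [40, 300.0], [33, 870.0], [51, 150.0]])
y = np.array([1, 0, 1, 0])

# A single Pipeline object bundles preprocessing and the model, so the whole
# thing can be re-fitted on a schedule as fresh data arrives.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])

def retrain(X_new, y_new):
    # In production this would be triggered by an orchestrator or scheduler.
    model.fit(X_new, y_new)
    return model

retrain(X, y)
print(model.predict_proba(X[:1]))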

Their key industry verticals are CPG retail, healthcare (bioinformatics), FSI, media entertainment, and Edtech, with growing interest in manufacturing. The team works with classical analytics, deep learning, computer vision, NLP, and generative AI. This includes advanced applications like language translation and content generation from 2D to 3D images using generative AI.

Recognising the growing importance of generative AI, they have formed a dedicated task force of approximately 50 to 60 members, drawn from diverse domains, under the leadership of their CTO, with the primary objective of leveraging generative AI to address industry-specific challenges.

To achieve this, they’ve identified and categorised 100 to 250 distinct use cases across ten different domains, tailored to the specific requirements of each domain. The team is diligently working on creating demos and proof of concepts (POCs) that are domain-specific. 

Some team members come from analytics backgrounds, contributing their technical expertise, while others from domain areas contribute to shaping ideas and ensuring results align with the industry’s needs. This undertaking is substantial for the organisation, considering they have around 5,500 employees, with 100-160 dedicated solely to generative AI. 

In addition to building demos, the company is also focusing on educating its entire workforce about LLMs and their applications to equip all team members with a basic understanding of generative AI’s capabilities and potential applications.

To bring generative AI into action, the company is working with Microsoft’s suite of products. “We are a Microsoft select partner and are also experimenting with different language models,” he added.

The team initially experimented with Google’s BERT and now employs models like GPT-2. They have a strategic inclination towards refining existing models to suit specific applications, rather than developing entirely new foundational models. For example, they collaborate with a healthcare company to craft adaptive translation models with reinforcement learning.

Interview Process

“Data science is not just about technical skills; it also involves an element of art. Candidates are assessed on their ability to communicate their results effectively and their capacity to approach problems with creativity,” said Ramaswamy.

The interview process for data science candidates at Happiest Minds typically involves three to five levels of interviews. The first level is a screening by the HR team based on the job description. This is followed by a written test to assess the candidate’s proficiency in relevant languages and skills. For example, if the position is for a data engineer, the test might evaluate their ability to work with SQL and other database-related tasks. 

Technical interviews are conducted using case studies to evaluate the candidate’s problem-solving ability and approach. The interview process concludes with a leadership interview, especially if the position is a senior one.

In addition to understanding the interview process, candidates often wonder about the common mistakes they should avoid. According to Ramaswamy, there are two main pitfalls that candidates often fall into. First, many candidates focus excessively on specific tools or techniques and become fixated on mastering them.

“While technical proficiency is essential, it’s equally important to explain the problem being solved, the reasons for approaching it a certain way, and considering alternative solutions,” he added.

The second common mistake is becoming too narrowly focused on the solution without understanding the broader context. It’s crucial to see the big picture, why the problem is being solved for the client, and to ask relevant questions about the projects they’ve worked on. 

In terms of skills, the company looks for both technical and non-technical abilities. The specific skills depend on the role of the position, such as data engineering, business intelligence, or data science. 

However, primary technical skills include proficiency in relevant tools and technologies, certifications, and problem-solving abilities. Non-technical skills are communication and presentation skills, problem-solving skills, and the ability to coach and mentor, as collaboration and teamwork are essential for senior positions.

Work Culture

“As the company's name suggests, we aim to cultivate a distinctive work culture based on four fundamental pillars,” Ramaswamy commented. Certified as a Great Place to Work, the company prioritises the well-being of its employees, believing that “a content workforce leads to happy customers”. They monitor and maintain employee happiness closely, offering support to those facing personal or professional challenges.

Collaboration is another key element of their culture, as they encourage a unified approach within and across different units and locations. “As a company born in the digital age, Happiest Minds thrives on agility, adapting swiftly to meet the ever-changing needs of customers and the digital industry,” he added.

Transparency is the fourth pillar, as they openly share key performance indicators and objectives with their employees, investors, and stakeholders. This culture of transparency and goal-oriented approach ensures that their efforts are always aligned with clear objectives and tracked diligently.

If you think you fit the role, check out their careers page now. 

Read more: Data Science Hiring Process at PayPal

The post Data Science Hiring and Interview Process at Happiest Minds Tech appeared first on Analytics India Magazine.

Data Science Hiring and Interview Process at Wipro https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-wipro/ Mon, 29 Jul 2024 10:03:31 +0000 https://analyticsindiamag.com/?p=10103532

With over 30,000 AI and analytics professionals, the team is building its own LLMs. 

The post Data Science Hiring and Interview Process at Wipro appeared first on Analytics India Magazine.


Wipro, which began as a family-operated vegetable oil manufacturer in the small town of Amalner, India, in 1945, is now one of the largest IT companies globally, operating in more than 167 countries.



The company has been a key player in driving generative AI. With over 30,000 AI and analytics professionals, the team is building their own LLMs. 

AIM got in touch with Sriram Narasimhan, global head of data, analytics and insights at Wipro, to understand their data science applications, hiring for these roles, work culture, and more. 

Inside Wipro’s Data Science Team

“Data serves as the bedrock for every AI and ML initiative, laying the groundwork for success. The pivotal factor lies in guaranteeing precise data of optimal quality, an indispensable catalyst for these processes that yield the desired results,” Narasimhan told AIM.

Profound insights emerge from the ability to scrutinise, profile, and decipher patterns within the data landscape, identifying outliers to extract meaningful conclusions. At the heart of any data science endeavour is the adeptness to construct automations through AI and ML algorithms, elevating and refining the data and insight ecosystem.

This transformative process enhances operational efficiencies, underscoring the fundamental role of data science and engineering as the critical inaugural stride in the pursuit of quality outcomes in AI/ML implementations.

Wipro’s AI and analytics team is substantial, with over 30,000 practitioners globally. The company boasts 500+ AI/ML patents, 20 innovation centres, and over 15 partnerships with a strong presence in various industries. 

Recognised as a leader by agencies like Everest Group and IDC, Wipro specialises in industry-specific solutions and horizontal offerings like ML Ops and Legacy modernisation.

“The team co-builds solutions, leveraging tools like the Wipro Data Intelligence Suite (WDIS), prebuilt Industry Business Applications, and the Wipro Enterprise Generative AI (WeGA) Framework,” he added. These tools accelerate customer implementations, supporting the modernisation journey and enabling responsible AI with safety and security guardrails.

Riding the Generative AI Wave

Wipro has been actively involved in generative AI initiatives for over two years, collaborating with research institutes like the AI Institute at the University of South Carolina and IIT Patna. The company is committed to training its sizable workforce of 250,000 in generative AI. It has developed its own LLMs, enhancing versatility and future-proofing, and has established a unique partnership with Google Cloud to integrate its generative AI products and services.

The company’s generative AI applications cover diverse themes, including cognitive chatbots, content creation and optimisation for marketing, media, automation in code generation, and synthetic data generation. The company’s internal initiative, Wipro ai360, focuses on incorporating AI across all platforms. Notable client projects include assisting a chocolate manufacturer in enhancing product descriptions and collaborating with a European telecom company to extract value from data.

Wipro is heavily invested in the generative AI landscape, with two-thirds of its strategic investments directed towards AI. The company plans to further support cutting-edge startups through Wipro Ventures and launch a GenAI Seed Accelerator program to train the top 10 generative AI startups.

Acknowledging the challenges associated with generative AI, the Bengaluru-based tech giant has implemented a control framework emphasising responsible usage. Initiatives include dedicated environments for developing generative AI solutions, GDPR-compliant training, and efforts to detect AI-generated misinformation. They have also established an AI Council to set development and usage standards, emphasising ethical guidelines, fairness, and privacy.

The team is attuned to evolving regulatory frameworks and is adapting strategies accordingly. The company envisions widespread benefits to the IT industry, with generative AI influencing code generation and call centres. The team anticipates a wave of AI services emerging in the next five years, facilitating enterprises in harnessing AI's full potential. In the long term, they foresee AI disrupting every industry, with specific verticals like precision medicine, precision agriculture, hyper-personalised marketing, and AI-led capabilities in smart buildings and homes gaining prominence.

Interview Process

When hiring for data science roles, Wipro seeks candidates with practical experiences, strong programming and statistical skills, analytical abilities, domain knowledge, and effective presentation skills. 

“The hiring process involves a comprehensive evaluation based on real-world use cases, emphasising not only technical proficiency but also the candidate’s understanding of problem statements and the application of statistical methodologies to solve complex issues,” he added.

“Joining our data science team promises exposure to cutting-edge, real-life AI/ML problems across various industries as we encourage a democratic approach to AI, allowing teams the independence to build solutions while adhering to organisational processes,” Narasimhan commented.

The company offers a diverse range of competencies, including data engineering, data science, conversational AI, ethical AI, and generative AI, enabling associates to work on projects aligned with their capabilities and aspirations.

In interviews, Wipro emphasises the importance of showcasing real-life use cases rather than being overly theoretical. Candidates are encouraged to highlight their practical experiences, demonstrating how they understand, consider options, and provide solutions to problems in the realm of data science, AI, and ML.

Work Culture

Wipro fosters a work culture rooted in values and integrity for its global workforce of 250,000+. Guided by the ‘Spirit of Wipro’ and ‘Five Habits’ principles, it emphasises respect, responsiveness, communication, ownership, and trust. With a 36.4% gender diversity goal, the company supports inclusion through programs like Women of Wipro (WOW), addressing various aspects of diversity such as disability, LGBTQ+, race, ethnicity, and generational diversity.

For talent management, they use tech solutions like the MyWipro app and WiLearn. These tools facilitate goal documentation, feedback, skill-building, and awareness of biases. The company conducts biannual performance reviews, offers training, mentoring, and leadership programs, including global executive leadership initiatives.

Employee benefits encompass a comprehensive package, including 401k, pension, health, vision, dental insurance, competitive pay, bonuses, paid time off, health savings, flexible spending accounts, disability coverage, family medical leave, life insurance, and more. Additional perks involve retirement benefits, stock purchase plans, paid holidays, legal plans, insurance for home, auto, and pets, employee discounts, adoption reimbursement, tuition reimbursement, and well-being programs.

The post Data Science Hiring and Interview Process at Wipro appeared first on Analytics India Magazine.

Data Science Hiring and Interview Process at Pegasystems https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-pegasystems/ Mon, 29 Jul 2024 10:03:15 +0000 https://analyticsindiamag.com/?p=10103957

For data science roles, Pega focuses on the candidate's ability to learn and adapt rather than specific tech skills. 

The post Data Science Hiring and Interview Process at Pegasystems appeared first on Analytics India Magazine.


Pegasystems, commonly known as Pega, is a global software company founded in 1983, focusing on customer engagement and operational excellence solutions. The Cambridge-based company has become a leader in business process management and customer relationship management.



The primary offering, Pega Infinity, acts as a comprehensive platform for businesses to create, implement, and improve applications, aiming to enhance customer experiences and streamline operational processes.

The company utilises AI and data science throughout its platform to improve decision-making, automate processes, and provide personalised customer interactions. Cisco, HSBC, and Siemens are a few of its primary customers. 

The latest iteration, Pega Infinity 23, introduces over 20 new features, including generative AI-powered boosters to enhance efficiency. The Connect Generative AI feature enables organisations to quickly utilise generative AI with a plug-and-play structure for low-code development.

AIM caught up with Deepak Visweswaraiah, vice president, platform engineering and site managing director, and Smriti Mathur, senior director and head of people, Pegasystems, India, to understand their generative AI play, hiring process and more.

Pega has open positions for solutions engineers and senior software quality test engineers in Hyderabad and Bengaluru.

Decoding Pega’s AI Ventures

In their core platform, Pega Infinity, the organisation relies heavily on data science, which plays a critical role in analytics, insights generation, natural language processing (NLP), generative AI, and various other applications that drive functionalities such as real-time decision-making and personalised customer communications based on attributes.

Data science also contributes significantly to the development of generative AI models, enhancing the overall intelligence of the platform. Its impact extends beyond the core platform to applications like customer service, one-to-one engagement, decision-making, sales automation, and strategic smart apps for diverse industries.

Pega GenAI provides insights into AI decision-making and streamlines processes, such as automating loan processing. “The benefits of generative AI extend to developers and end-users, improving productivity through query-based interactions, automatic summarisation, and streamlined case lifecycle generation,” Visweswaraiah told AIM.

End-users also benefit from realistic training scenarios using simulated customer interactions.

Regarding proprietary foundational models, the organisation’s product architecture prioritises openness and flexibility. They support various language models, including those from OpenAI and Google. 

“In upcoming product versions, we are actively working to support and ship local language models to meet specific use case demands, focusing on accuracy, productivity, and performance in response to customer preferences for diverse capabilities,” he added. 

Interview Process

The company follows a global hybrid working model, encouraging collaboration in the office while providing flexibility, with about 60% of the workforce attending the office around three days a week. This approach aims to attract talent globally, fostering a vibrant culture and hybrid working environment.

In upskilling employees, technical competencies are crucial, and the company emphasises learning through its Pega Academy, offering online self-study training, live instructor-led courses, and online mentoring. Skill gaps are regularly assessed during performance reviews, providing learning opportunities through gateways and supporting external courses with an educational reimbursement policy.

“For data science roles, we focus on the candidate’s ability to learn rather than specific data science skills,” Mathur told AIM. The company looks for individuals capable of extracting insights from data, making informed decisions, and building models for application in various use cases.

Mathur further shared that the company emphasises the importance of understanding its problem-solving approach and creating deterministic models that consistently provide performant and real-world solutions. It encourages candidates to think from the customer’s perspective and avoid getting lost in vast amounts of data, highlighting the significance of models producing consistent and reliable answers.

Work Culture

The company emphasises diversity and inclusivity, fostering a culture centred on innovation and collaboration. It has been ranked as the best workplace for women by Avtar for five consecutive years. Pega values individuals who think independently, challenge norms, and question the status quo to seek better solutions.

The company encourages leadership and curiosity in approaching tasks, promoting an environment where employees are empowered to innovate. Compared to competitors, Pega’s work culture stands out due to the unique problems it addresses and its distinctive approach.

Understanding the product architecture is crucial for employees, given the nature of the challenges they tackle. Pega’s ability to integrate technology into the platform is a significant differentiator, enhancing its capability to address complex issues. 

“With a focus on adapting to market changes, our mantra of being ‘built for change’ reflects our commitment to staying dynamic and responsive to evolving needs,” concluded Mathur.

So, if you want to join the dynamic community of Pega, check out the careers page here. 

The post Data Science Hiring and Interview Process at Pegasystems appeared first on Analytics India Magazine.

Data Science Hiring and Interview Process at WNS https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-wns/ Mon, 29 Jul 2024 10:03:07 +0000 https://analyticsindiamag.com/?p=10104280

Consisting of over 6,500 AI experts, WNS Triange serves as a partner for 200 global clients in more than 10 industries

The post Data Science Hiring and Interview Process at WNS appeared first on Analytics India Magazine.


Headquartered in Mumbai, India, WNS is a prominent global Business Process Management (BPM) and IT consulting company with 67 delivery centers and over 59,000 employees worldwide. 



Combining extensive industry knowledge with technology, analytics, and process expertise, the company collaborates with clients across 10 industries to co-create digital-led transformational solutions. WNS is renowned for its strategic partnerships, delivering innovative practices and industry-specific technology and analytics-enabled solutions. The company’s services cover diverse sectors, characterised by a structured yet flexible approach, deep industry expertise, and a client-centric partnership model.

WNS Triange, the AI, analytics, data and research business unit, has successfully harnessed the power of data science to develop robust solutions that effectively address a myriad of business challenges faced by its clients. 

Among these solutions are sophisticated applications such as an advanced claims processing system, a finely tuned inventory optimisation mechanism, and the implementation of a retail hyper-personalisation strategy.

Consisting of over 6,500 experts, WNS Triange serves as a partner for 200 global clients in more than 10 industries. 

“The team is organised into three pillars: Triange Consult focuses on consulting and co-creating strategies for data, analytics, and AI; Triange NxT adopts an AI-led platform approach for scalable business value; and Triange CoE executes industry-specific analytics programs, transforming the value chain through domain expertise and strategic engagement models,”  Akhilesh Ayer, EVP & Global Business Unit Head – WNS Triange, told AIM in an exclusive interaction last week. 

WNS’s AI & Analytics Play

The data science workflow at WNS Triange follows a meticulously structured process that guides the team through various stages, including problem outlining, data collection, Exploratory Data Analysis (EDA), cleaning, pre-processing, feature engineering, model selection, training, evaluation, deployment, and continuous improvement. A pivotal element of this methodology is the proprietary AI-led platform, Triange NxT, equipped with Gen AI capabilities. This platform serves as a hub for domain and industry-specific models, expediting the delivery of impactful insights for clients.

“When it comes to claims processing, we deploy predictive analytics to conduct a thorough examination of data sourced from the First Notice of Loss (FNOL) and handler notes,” said Ayer. This approach allows for the evaluation of total loss probability, early settlement possibilities, and subrogation/recovery potential. 

Simultaneously, its Marketing Mix Modeling (MMM) is employed to optimise resource allocation by quantifying the impact of marketing efforts on key performance indicators. Furthermore, the application of advanced analytics techniques aids in the detection of suspicious patterns in insurance claims for risk and fraud detection. 
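
Marketing mix modelling is, at its core, a regression of a KPI on marketing drivers. Real MMMs add adstock and saturation transforms and many more variables; the sketch below, with invented weekly numbers, only illustrates the basic idea and is not WNS Triange's model.

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy weekly data: spend on two channels (in lakh) and the resulting KPI.
X = np.array([[10.0, 5.0], [12.0, 4.0], [8.0, 6.0], [15.0, 3.0], [11.0, 7.0]])
y = np.array([120.0, 130.0, 115.0, 150.0, 135.0])

model = LinearRegression().fit(X, y)

# The coefficients approximate the incremental KPI per unit of spend on each
# channel, which is the quantity an MMM uses to reallocate budget.
print(dict(zip(["search", "social"], model.coef_.round(2))))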

Ayer shared that the team also actively leverages generative AI across diverse sectors. In the insurance domain, it is employed to streamline claims subrogation by efficiently processing unstructured data, minimising bias, and expediting insights for recovery. 

Similarly, in healthcare, it empowers Medical Science Liaisons (MSLs) by summarising documents and integrating engagement data for more impactful sales pitches. Generative AI’s versatility is further demonstrated in customer service interactions, where it adeptly handles natural language queries, ensuring quicker responses and retrieval efficiency.

The combination of LLM foundation models from hyperscalers like AWS with WNS Triange’s proprietary ML models enables the delivery of tailored solutions that cater to various functional domains and industries. Where necessary, WNS Triange employs its AI, ML and domain capability to fine-tune existing foundation models for specific results, ensuring a nuanced and effective approach to problem-solving.

Tech Stack

In its AI model development, the team utilises vector databases and deep learning libraries such as Keras, PyTorch, and TensorFlow. Knowledge graphs are integrated, and MLOps and XAI frameworks are implemented for enterprise-grade solutions. 

“Our tech stack includes Python, R, Spark, Azure, machine learning libraries, AWS, GCP, and GIT, reflecting our commitment to using diverse tools and platforms based on solution requirements and client preferences,” said Ayer. 

On the transformer front, the team has used language models like Google's BERT for tasks such as sentiment analytics and entity extraction, and its current approach involves a variety of language models, including GPT variants (davinci-003, davinci-codex, text-embedding-ada-002), T5, BART, LLaMA, and Stable Diffusion. 

“We adopt a hybrid model approach, integrating Large Language Models (LLMs) from major hyperscalers like OpenAI, Titan, PaLM2, and LLaMA2, enhancing both operational efficiency and functionality,” he commented. 
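
As a rough illustration of the BERT-style sentiment analytics and entity extraction mentioned above, the snippet below uses Hugging Face pipelines with their public default checkpoints; these are not WNS Triange's production models, and the sample text is invented.

from transformers import pipeline

# Default public checkpoints (a DistilBERT sentiment model and a BERT NER model),
# used here only to illustrate the pattern.
sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

note = "The adjuster at Acme Insurance confirmed the Mumbai claim will be settled early."

print(sentiment(note))  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
print(ner(note))        # grouped entities such as ORG 'Acme Insurance' and LOC 'Mumbai'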

Hiring Process

WNS Triange recruits data science talent from leading engineering colleges, initiating the process with a written test evaluating applied mathematics, statistics, logical reasoning, and programming skills. Subsequent stages include a coding assessment, a data science case study, and final interviews with key stakeholders.

“Joining our data science team offers candidates a dynamic and challenging environment with ample opportunities for skill development. And while engaging in diverse projects across various industries, individuals can expect exposure to both structured and unstructured data,” said Ayer. 

The company fosters a collaborative atmosphere, allowing professionals to work alongside colleagues with diverse backgrounds and expertise. Emphasis is placed on leveraging cutting-edge technologies and providing hands-on experience with state-of-the-art tools and frameworks in data science. 

WNS Triange values participation in impactful projects contributing to the company’s success, offering access to mentorship programs and support from experienced team members, ensuring a positive and productive work experience.

Mistakes to Avoid

Candidates are encouraged to not only showcase technical prowess but also articulate the business impact of their work, demonstrating its real-world relevance and contribution to business goals.

Ayer emphasised, “Successful data scientists must not only be technically adept but also skilled storytellers to present their findings in a compelling manner, as overlooking this aspect can lead to less engaging presentations of their work.”

He added that candidates sometimes focus solely on technical details without articulating the business impact of their work, missing the opportunity to demonstrate how their analyses and models solve real-world problems and contribute to business goals.

Work Culture

Recognised by TIME magazine as one of the best companies to work for, WNS has built a work culture centred on co-creation, innovation, and a people-centric approach. It emphasises diversity, equity, and inclusivity, prioritises a respectful workplace culture, and extends its commitment to community care through targeted programs run by the WNS Cares Foundation. 

“Our focus on ethics, integrity, and compliance ensures a safe ecosystem for all stakeholders, delivering value to clients through comprehensive business transformation,” said Ayer. 

In terms of employee perks, it offers various services and benefits, including transportation, cafeterias, medical and recreational facilities, flexibility in work hours, health insurance, and parental leave. 

“Differentiating ourselves in the data science space, we cultivate a work ecosystem that fosters innovation, continuous learning, and belongingness for the data science team. Our initiatives include engagement tools, industry-specific training programs, customised technology-driven solutions, and a learning experience platform hosting a wealth of content for self-paced learning,” he added. 

Why Should You Join WNS?

“At WNS, we believe in the transformative power of data, where individuals play a key role in shaping our organisation by directly influencing business strategy and decision-making. Recognising the significant impact of data science, we invite individuals to join our collaborative and diverse team that encourages creativity and values innovative ideas. In this dynamic environment, we prioritise knowledge sharing, continuous learning, and professional growth,” concluded Ayer. 

Find more information about job opportunities at WNS here.

The post Data Science Hiring and Interview Process at WNS appeared first on Analytics India Magazine.

Data Science Hiring and Interview Process at Marlabs https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-marlabs/ Mon, 29 Jul 2024 10:03:00 +0000 https://analyticsindiamag.com/?p=10104968

Marlabs is currently hiring for 10 data science roles, including ML Architect, ML Engineer, and Statistical Modeling positions.

The post Data Science Hiring and Interview Process at Marlabs appeared first on Analytics India Magazine.


Founded in 2011, New York-based IT services and consulting firm Marlabs helps companies of various sizes to undergo AI-powered digital transformation. It provides a wide range of services, including strategic planning, creating rapid prototypes in specialised labs, and applying agile engineering techniques to develop and expand digital solutions, cloud-based applications and AI-driven platforms.



Marlabs’s data science team addresses a range of industry challenges, emphasising tasks like extracting insights from extensive datasets and employing pattern recognition, prediction, forecasting, recommendation, optimisation, and classification.

Exploring Generative AI at Marlabs

“In operationalising AI/ML, we have tackled diverse projects, such as demand forecasting, inventory optimisation, point of sale data linkage, admissions candidate evaluation, real-time anomaly detection, and clinical trial report anomaly detection,” Sriraman Raghunathan, digital innovation and strategy principal, Marlabs, told AIM in an exclusive interaction. 

The team is also exploring generative AI applications, particularly in knowledge base extraction and summarisation across domains like IT service desk ticket resolution, sustainability finance, medical devices service management, and rare disease education.

However, it is not developing foundational models as of now due to substantial capital requirements. “Instead, we are focussing on the value chain beyond foundational models, offering tools and practices for deploying such models within organisation boundaries, tailored for specific domains,” he added. 

Marlabs employs a variety of tools and frameworks depending on project specifics, utilising R and Python for development, Tableau, Power BI, QlikView for data exploration and visualisation, and PyTorch, TensorFlow, Cloud-Native tools/platforms, and Jupyter Notebooks for AI/ML model development.

The team leverages transformer models like GPT-3, especially in NLP use cases, implementing them in TensorFlow and PyTorch and using pre-trained models from the Hugging Face Transformers library. For generative AI, its toolkit includes LangChain and LlamaIndex; OpenAI, Cohere, PaLM 2, and Dolly; and Chroma and Atlas.
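
As a minimal sketch of the retrieval step behind the knowledge-base extraction and summarisation use cases described earlier, the snippet below uses Chroma from the toolkit just listed; the collection name, documents, and query are invented, and a full pipeline would pass the retrieved text to an LLM for summarisation.

import chromadb

# In-memory Chroma client; the knowledge-base articles below are placeholders.
client = chromadb.Client()
collection = client.create_collection(name="service_desk_kb")

collection.add(
    documents=[
        "To reset a VPN token, open the self-service portal and choose 'Reset MFA'.",
        "Printer driver issues on Windows 11 are resolved by installing package KB-1234.",
    ],
    ids=["kb-001", "kb-002"],
)

# Retrieve the most relevant article for an incoming ticket.
results = collection.query(query_texts=["user cannot log in to VPN"], n_results=1)
print(results["documents"][0][0])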

Hiring Process

The hiring process for data science roles at the organisation emphasises a blend of technical knowledge, practical application, and relevant experience. The initial steps involve a clear definition of the role and its requirements, followed by the creation of a detailed job description. 

The interview process comprises technical assessments, video interviews with AI/ML experts, and HR interviews. Technical assessments evaluate coding skills, data analysis, and problem-solving abilities. 

Video interviews focus on the candidate’s depth of knowledge, practical application, and communication skills, often including a discussion of a relevant case study or project. HR interviews center around cultural fit, interpersonal skills, collaboration, and the candidate’s approach to handling challenges. 

Expectations

“Upon joining the data science team, candidates can anticipate a thorough onboarding process tailored to their specific team, providing access to essential tools, resources, and training for a smooth transition,” commented Raghunathan. 

The company’s AI/ML projects involve cutting-edge technologies, exposing candidates to dynamic customer use cases spanning natural language processing, computer vision, recommendation systems, and predictive analytics. The work environment is agile and fast-paced. The company places a strong emphasis on team collaboration and effective communication, given the collaborative nature of data science and AI/ML projects. 

In this rapidly evolving field, the company expects new hires to demonstrate continuous learning, tackle complex technical and functional challenges, operate with high levels of abstraction, and exhibit creative and innovative thinking.

Mistakes to Avoid

“The most prevalent error observed in candidates during data science role interviews is a lack of clear communication,” he added.

The ability to effectively communicate insights to non-technical stakeholders is crucial in the AI/ML space, and this skill is frequently overlooked. 

Another common mistake is a failure to comprehend and articulate the business context and domain knowledge of the problem, which is essential in AI/ML applications with significant business impact.

Work Culture

“We are recognised for our value-based culture focused on outcomes, emphasising a flat organisational structure to spur innovation and personal growth. Key values such as respect, transparency, trust, and a commitment to continuous learning are central to our ethos, all aimed at exceeding customer expectations,” he said.

The company’s robust learning and development program has prepared over 150 young managers for leadership roles, with a strong emphasis on AI and technology for organisational insights and sentiment analysis.

The company offers a comprehensive benefits package, including versatile insurance plans, performance incentives, and access to extensive learning resources like Courseware and Udemy, supporting a hybrid work model. Additionally, they provide mental health support and reward long-term employees based on tenure. 

Raghunathan further explained that Marlabs' data science team stands out for its innovative and collaborative environment, which encourages creativity and continuous learning. “This distinctive culture and investment in employee growth make us a leader in data science, differentiating us from competitors in the tech industry,” he added. 

Why Should You Join Marlabs?

“Join Marlabs for a dynamic opportunity to work with a passionate team, using data to drive meaningful change. In this collaborative setting, data scientists work with brilliant colleagues across various industries, including healthcare, finance, and retail. You’ll tackle complex issues, contributing to significant business transformations. Marlabs supports your career with essential tools, resources, training, competitive compensation, benefits, and opportunities for professional growth and development,” concluded Raghunathan.  

The post Data Science Hiring and Interview Process at Marlabs appeared first on Analytics India Magazine.

Data Science Hiring and Interview Process at Global Fintech Company Fiserv https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-fiserv/ Mon, 29 Jul 2024 10:00:43 +0000 https://analyticsindiamag.com/?p=10095785

Fiserv prioritises up-skilling employees to help them excel in their roles and adapt to new technologies and client needs

The post Data Science Hiring and Interview Process at Global Fintech Company Fiserv appeared first on Analytics India Magazine.


Headquartered in Brookfield, Wisconsin, Fiserv is a global leader in payments and financial technology, with operations across 100 countries. Aspiring to move money and information in a way that moves the world, Fiserv helps its clients achieve best-in-class results through a commitment to innovation and excellence. Boasting a global workforce exceeding 41,000 professionals, Fiserv’s data science team plays a pivotal role in supporting various business domains, driving innovation, and cultivating engineering excellence to enhance client experiences.



Founded 39 years ago through the merger of First Data Processing and Sunshine State Systems, Fiserv has grown rapidly by strategically acquiring companies such as CheckFree Corporation, M-Com, CashEdge, and PCLender. In 2019, Fiserv merged with First Data, creating a globally recognised leader in payment solutions and financial technology with the capabilities to deliver value to financial institutions, corporate clients, small enterprises, and consumers alike.

Analytics India Magazine got in touch with Manisha Banthia, vice president, data and analytics – global services, Fiserv, to understand the importance of AI and analytics for the company and how they hire the finest tech talents. 

Inside the Data Science Team of Fiserv

As a major player in the fintech industry, Fiserv relies on its 75-member data science team to tackle complex challenges across banking, cards, payments, digital solutions, and merchant acquisition. Besides creating embedded analytics solutions that help financial institutions (FIs) with their decision-making, the team offers advisory services throughout the lifecycle of FIs and merchants, covering areas such as acquisition, growth, cross-selling, and retention. The team also builds solutions to optimise internal processes and operations across different businesses.

For a major US retailer, they leveraged ML to identify prepaid cardholders who would benefit from targeted marketing strategies, resulting in increased engagement and reduced attrition. In another initiative, Fiserv aimed to expand its merchant user base for cash advance loans and achieved this by developing a risk model and an AI algorithm that enabled the sales team to target the right merchants, leading to portfolio growth, reduced marketing expenses, and cost optimisation.

Furthermore, the data science team developed an advanced ML-based solution to address fraud detection and prevention for financial institutions, replacing rule-based engines. “Our data science team follows a pod structure consisting of data scientists, domain experts, ML engineers, visualisation experts, and data engineers who constantly add value to our organisation,” said Banthia. 

Data scientists apply advanced techniques and provide recommendations. Domain experts offer business context, translate problems, and validate results. ML engineers deploy ML models for performance and reliability. Visualisation experts represent data insights visually. Last but not least, data engineers collect, process, and maintain data quality.

The team actively works with Python, PySpark, Azure, Watson, Snowflake, Adobe Analytics, and Alteryx. 
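
As an illustration of the kind of PySpark work this stack supports, such as building cardholder-level features for the targeting and attrition models described above, here is a minimal sketch; the column names and rows are invented and are not Fiserv data.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cardholder-features").getOrCreate()

# Toy transaction data; a real pipeline would read from Snowflake or a data lake.
txns = spark.createDataFrame(
    [("c1", 120.0, "2024-01-03"), ("c1", 35.5, "2024-01-19"), ("c2", 18.0, "2024-01-07")],
    ["cardholder_id", "amount", "txn_date"],
)

# Simple behavioural features that could feed a targeting or attrition model.
features = txns.groupBy("cardholder_id").agg(
    F.count("*").alias("txn_count"),
    F.sum("amount").alias("total_spend"),
    F.max("txn_date").alias("last_txn_date"),
)
features.show()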

Interview Process

The interview process consists of a thorough evaluation by both technical and managerial interviewers. Ideal candidates have strong programming skills, statistical knowledge, and problem-solving capabilities, assessed through case studies and an in-depth domain knowledge assessment. This is followed by an HR round to gauge interpersonal skills and cultural fit.

“A successful data scientist should prioritise a client-centric approach, seeking feedback, adapting to specific needs, and aligning analytical solutions with objectives,” said Banthia. 

Technical skills like solving unstructured problems, exploring AI and ML techniques, conceptualising solutions, and simplifying findings for stakeholders are valued. Fiserv also looks for strong leadership, business acumen, and functional expertise in executive hires. When interviewing, prospective candidates should showcase a balanced combination of technical, business, and leadership skills. They should effectively communicate their proficiency without excessive technical jargon and demonstrate the ability to lead teams and collaborate effectively.

Work Culture

Certified by Great Place To Work® in 2023, Fiserv aims to foster a fast-paced and dynamic work environment. Adaptability and the ability to iterate quickly and respond to market needs are highly valued. The company prioritises up-skilling employees to help them excel in their roles and adapt to new technologies and client needs.

Besides providing an inclusive culture and professional growth opportunities, the fintech giant offers learning programs, wellness plans, and engagement initiatives. “We are committed to being an equal opportunity employer with an inclusive workplace culture and clear communication through an open office concept,” she concluded. 

Check out their careers page now. 

Read more: Data Science Hiring Process at MediBuddy

The post Data Science Hiring and Interview Process at Global Fintech Company Fiserv appeared first on Analytics India Magazine.

Data Science Hiring and Interview Process at Lendingkart https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-lendingkart/ Mon, 29 Jul 2024 09:57:24 +0000 https://analyticsindiamag.com/?p=10106527

Founded in 2014 by Harshvardhan Lunia, Indian digital fintech lender Lendingkart utilises a data-powered credit analysis system to facilitate online loans, aiming to improve accessibility in small business lending. The company's proprietary underwriting mechanism uses big data and analytics to evaluate the creditworthiness of borrowers. The company has so far disbursed over $1 billion […]

The post Data Science Hiring and Interview Process at Lendingkart appeared first on Analytics India Magazine.


Founded in 2014 by Harshvardhan Lunia, Indian digital fintech lender Lendingkart utilises a data-powered credit analysis system to facilitate online loans, aiming to improve accessibility in small business lending. The company's proprietary underwriting mechanism uses big data and analytics to evaluate the creditworthiness of borrowers.



The company has so far disbursed over $1 billion in loans across more than 1,300 cities in the country, especially tier 2 and tier 3 cities. Lendingkart, which recently reported its first-ever profit of Rs 118 crore on total revenue of Rs 850 crore in FY23, specialises in providing unsecured business loans to micro, small, and medium-sized enterprises (MSMEs). 

The fintech company is backed by Bertelsmann India Investments, Darrin Capital Management, Mayfield India, Saama Capital, India Quotient and more. 

“Data science has always been at the heart and center of our operations. The AI/ML-based underwriting that this team has developed has been used to underwrite over one million MSMEs,” said Dhanesh Padmanabhan, chief data scientist, Lendingkart, in an exclusive interaction with AIM.

The 35-member data science team of the Ahmedabad-headquartered firm is organised into three main groups: analytics, underwriting modelling, and ML engineering. The analytics team, with approximately 15 members, is further divided into three sub-teams focusing on revenue, portfolio (credit and risk), and collections.

“One of the key challenges addressed by our team at Lendingkart is credit risk management where we employ a combination of analytics and AI/ML models at different stages of the underwriting and collections processes to assess eligibility, determine loan amounts and interest rates, and ensure timely customer payments or settlements,” he added.

This underwriting modelling team consists of about five members dedicated to developing underwriting models, while the 10-member ML engineering team focuses on MLOps, feature store development, and AI applications.

Additionally, there are individual contributors, such as an architect and a technical program manager, along with a two-member team specialising in setting up the underwriting stack for the newly established personal loan portfolio.

The company has open positions for senior data scientist and associate director in Bengaluru.

Inside Lendingkart’s AI & Analytics Team

The team leverages AI and ML across various functions, for example, in outbound marketing to target existing customers and historical leads through pre-approved programs. Additionally, a lead prioritisation framework helps loan specialists focus on leads for calling and digital engagement.

The company also employs an intelligent routing system to direct loan applications to credit analysts, and a terms gamification framework aids negotiation analysts in negotiating interest rates with borrowers. Its fraud identification framework flags potentially manipulated bank statements for further review, and a speech analytics solution is deployed to extract insights from recorded calls for monitoring operational quality.

On the other hand, collections models prioritise collections based on a customer's likelihood of entering different delinquency levels, and computer vision models are used for KYC verification.

“We are also exploring the use of generative AI for marketing communication, chatbots, and data-to-insights applications,” said Padmanabhan. Moreover, there are plans to build transformer-based foundational models using call records and structured data sources like credit histories and bank statements for speech analytics, customer profiling, and underwriting purposes.

The tech stack comprises SQL running on Trino, Airflow, and Python. For ML tasks, the team leverages scikit-learn, statsmodels, and SciPy, along with PyTorch and TensorFlow. Natural language processing and computer vision applications involve the use of transformers and CNNs.

The API stack is powered by FastAPI services deployed on Kubernetes (K8s). For ML engineering, the team prefers Kafka and MongoDB. Additionally, there are applications built on Flask and Django, and the team is currently developing interactive visualisations using the MERN stack.
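
A minimal sketch of how a scikit-learn underwriting model might be served through FastAPI, in line with the stack above; the feature names, toy training data, and endpoint path are purely illustrative and not Lendingkart's actual service.

import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Stand-in model trained on toy data; a real service would load a persisted model.
model = LogisticRegression().fit(
    np.array([[0.2, 12], [0.8, 3], [0.5, 24], [0.9, 6]]),  # [utilisation, months_on_book]
    np.array([0, 1, 0, 1]),                                 # 1 = defaulted
)

class Application(BaseModel):
    utilisation: float
    months_on_book: int

@app.post("/score")
def score(application: Application):
    # Return the model's default probability for the submitted application.
    proba = model.predict_proba([[application.utilisation, application.months_on_book]])[0][1]
    return {"default_probability": round(float(proba), 4)}

In practice, such a service would run behind a server like uvicorn and be packaged into the Kubernetes deployment mentioned above.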

Interview Process

Lendingkart’s data science hiring process includes four to five interview rounds, evaluating candidates with strong backgrounds in analytics, modelling, or ML engineering. In leadership roles such as team leads and managers, the company places emphasis not only on technical proficiency but also on crucial skills in team and stakeholder management.

During the interview process, non-managerial candidates undergo initial technical assessments in SQL, Python, or ML. Subsequent rounds explore general problem-solving and soft skills, with assessments conducted by peers, managers, and HR.

Expectations

Upon joining the team, candidates can expect to participate in a diverse range of projects encompassing revenue, risk, collections, and the development of tech and AI stacks for these applications. Collaboration with various stakeholders remains a significant aspect of the role. For example, the development of a new underwriting algorithm involves comprehensive reviews with risk and revenue teams to align with business objectives, followed by collaboration with product and ML engineering teams for successful implementation.

However, Padmanabhan notes that there is a common mistake which candidates make – they overlook the importance of thoroughly understanding the business context of the given problems.

“While they may possess knowledge of various algorithms used in different domains, they may struggle to articulate solutions or approaches when those algorithms are applied within a financial process context,” he added, highlighting the importance of connecting technical expertise with a deep understanding of the specific business challenges at hand.

Work Culture

“Our work culture is fast-paced and dynamic, characterised by group problem-solving focused on specific business goals with competitive ESOP packages and industry-standard insurance,” said Padmanabhan.

The data science team operates hands-on at all levels, adopting best practices like agile and MLOps. The "hub and spoke" approach involves data scientists taking responsibility for the entire process, from conceptualisation to implementation, distinguishing the work culture from competitors in the space.

At Lendingkart, you’ll collaborate closely with stakeholders on projects like developing underwriting algorithms. The company maintains a well-established agile practice led by the technical program manager and team leads, focusing on efficient planning, best practices, and clear communication to create a productive work environment. So if you think you are fit for this role, apply here. 

The post Data Science Hiring and Interview Process at Lendingkart appeared first on Analytics India Magazine.

Data Science Hiring and Interview Process at Verizon https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-verizon/ Mon, 29 Jul 2024 09:54:17 +0000 https://analyticsindiamag.com/?p=10112646

The company plans to expand its footprint in India by hiring around 70 professionals this year for data science, data engineering, ML engineering, and cloud roles across Chennai, Hyderabad, and Bengaluru.

The post Data Science Hiring and Interview Process at Verizon appeared first on Analytics India Magazine.


New Jersey-based telecommunication operator Verizon is at the forefront of leveraging AI and analytics to transform its network, services, and customer interactions.



“When it comes to AI, especially generative AI, I think at the edge of the network it will be very important to have AI to make quick decisions very close to the end user,” Hans Vestberg, chief executive officer at Verizon, said on a panel discussion at WEF 2024, highlighting the significance of AI in powering future business growth.

“We focus on the core areas, which include customer experience and loyalty driven by personalisation when it comes to our customers. We also use AI to drive operational efficiency in areas like workforce demand prediction and audience targeting in marketing,” said Ebenezer Bardhan, senior director of data science at Verizon India, in an exclusive interaction with AIM.

Currently, the company has over 400 models deployed in production across various lines of business within sales, service and support.

Verizon plans to expand its footprint in India by hiring around 70 professionals this year for data science, data engineering, ML engineering, and cloud roles across Chennai, Hyderabad, and Bengaluru. 

Inside Verizon’s Data Science Lab

The AI and analytics team at Verizon consists of two divisions: Data and Analytics (D&A), which focuses on enterprise analytics, and AI & Data (AI&D), which comprises data engineers, AI engineers, and data scientists; the team size in India is around 350.

Recent use cases from the company’s AI initiatives include personalisation and churn reduction to improve customer retention and add business value. From an AI services standpoint, it has a custom explainability and interpretability framework that data scientists in the team have adopted to diagnose models.
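Verizon’s framework is internal, but the kind of model diagnosis described above can be illustrated with a minimal sketch that uses the open-source shap library on a generic churn-style classifier. The data, model, and library choice here are assumptions for illustration only, not details of Verizon’s setup.

# Hypothetical sketch, not Verizon's framework: diagnosing a churn-style
# classifier with SHAP feature attributions. Data and model are synthetic.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for churn data: 1,000 customers, 10 features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)          # explainer for tree ensembles
shap_values = explainer.shap_values(X[:100])   # per-feature attributions for 100 predictions
print(np.abs(shap_values).mean(axis=0))        # mean |SHAP| per feature, a global diagnostic

In practice, a team would wrap this kind of check into a reusable service so that every production model can be interrogated the same way.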

“We are also exploring use cases in generative AI to improve information access for frontline users,” Ganesh Siddhamalli, director of AI and ML engineering, told AIM.

Internally, the company is developing a charter as part of its Responsible AI initiative to establish appropriate guardrails, facilitated by LLMOps services. It also has a Generative AI Center of Excellence (CoE) to provide a unified perspective on all potential use cases, allowing for evaluation and shared learning among teams throughout the journey.

“We primarily use frameworks like TensorFlow, PyTorch, scikit-learn, Domino Data Lab, the Vertex suite, Seldon, Ray, Great Expectations, and Pydantic across AWS, GCP, and on-prem environments for AI and analytics,” Bharathi Ravindran, director of data science at Verizon India, told AIM.

Transformer-based models have been employed since 2018, particularly for real-time intervention in call and chat transcripts, aligning with generative AI use cases.
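As a rough illustration of what real-time transcript intervention can look like, the sketch below scores chat turns with an off-the-shelf sentiment model from the Hugging Face transformers library. The model name, threshold, and escalation rule are illustrative assumptions, not Verizon’s production pipeline.

# Hypothetical sketch: flagging negative chat turns with a pretrained transformer.
# The model, threshold, and escalation rule are illustrative assumptions only.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

transcript = [
    "My internet has been down for two days.",
    "This is the third time I'm contacting support about the same issue.",
]

for turn in transcript:
    result = classifier(turn)[0]               # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"Escalate for intervention: {turn!r}")

In a production setting, this sort of scoring would run on streaming transcripts under latency budgets and guardrails, which is where the MLOps and LLMOps services mentioned above come in.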

Interview Process

The team’s hiring process is well-defined and thorough, beginning with an internal posting for eight days to encourage applications from within the company. Subsequently, the job openings are publicised on various external platforms, including job boards, social media, and other professional networks, to attract a diverse and qualified pool of candidates.

“The meticulous and structured hiring process aims to thoroughly assess candidates’ decision-making and leadership capabilities, ensuring the identification of the right diverse talent to contribute to our success,” Samir Singh, director of talent acquisition, Verizon India, told AIM.

He further explained that for data science roles, the interview process consists of three comprehensive rounds. The initial stages focus on technical competencies, such as programming proficiency in Python and experience with personalisation, forecasting, generative AI, churn models, responsible AI, and networking.

A techno-managerial round follows this, assessing the candidate’s ability to blend technical expertise with managerial skills, and finally, an interview with the leadership team.

On the other hand, data engineering roles involve two rounds that evaluate technical competencies in areas like Big Data, Hadoop, Teradata, and Google Cloud Platform (GCP), followed by a managerial round. 

Expectations

When joining the data science team at Verizon, candidates can expect a vibrant work culture, ample learning opportunities, and exposure to cutting-edge technologies. The company places a strong emphasis on integrity and expects candidates to embody this core value throughout their tenure.

As for expectations from candidates, Verizon values not only technical expertise but also the ability to connect the dots between technological advancements and business objectives. Candidates should confidently articulate their past or current roles during interviews. 

“In an AI-centric organisation like ours, where the daily focus is on enabling business through innovative solutions, the capacity to seamlessly integrate technical excellence with practical application is of great importance for success in data science roles,” said Singh. 

Work Culture

The company’s work culture centres around innovation, collaboration, and customer-centricity. Employees are encouraged to think creatively, take risks, and embrace change. Diversity and inclusion are integral values, creating a supportive and inclusive work environment. 

The company emphasises integrity, respect, performance, and accountability, offering benefits beyond the basics, like flexible working hours, wellness programs, and support for childcare and eldercare.

The key differentiator in Verizon’s work culture, especially within the data science team, is a strong emphasis on R&D, providing freedom for experimentation and learning, as shared by Singh. Apart from experimentation, an open culture fosters collaboration, making Verizon stand out as a great workplace.

Verizon prides itself on a diverse team, promoting gender diversity, supporting differently-abled individuals, and running initiatives such as WINGS, a returnee programme for women, and LGBTQ inclusion efforts. The company has consciously made Diversity, Equity, and Inclusion (DEI) a core part of its culture.

“We foster a culture of inclusion, collaboration, and diversity both within the company and among our customers, suppliers, and business and community partners,” concluded Singh. 

If you think that you are a good fit for the company, check out its careers page now. 

The post Data Science Hiring and Interview Process at Verizon appeared first on Analytics India Magazine.

]]>