Tredence’s Case for Agentic Data Engineering

“Data engineering is the single biggest bottleneck that stops businesses from getting real value out of their data.”

In today’s data-intensive world, the traditional approach to data engineering is increasingly seen as a bottleneck rather than an enabler of business growth. 

At AIM’s DES 2025 event, Maulik Dixit, senior director of data engineering at Tredence, spoke about a problem that many organisations face.

With more than 20 years of experience building and modernising data platforms, Maulik highlighted the growing gap between the promise of data and the reality of getting value from it.

“Data engineering is the single biggest bottleneck, in my view, that stops businesses from getting real value out of their data.”

He made a strong case for using intelligent agents to improve how data systems are built, monitored, and maintained. The focus was on moving away from slow, manual processes towards operations that are more efficient, scalable, and future-ready.

Understanding the Investment in Data

Maulik began by contextualising the scale of data investment in large enterprises. 

Using an example, he explained why even large companies must spend their data analytics budgets wisely: those budgets are a small portion of overall revenue.

For a company with $50 billion in revenue, about 4% typically goes to IT, or around $2 billion. Of that, roughly 20% is spent on data analytics. In effect, $400 million has to be spent very wisely on data analytics projects.

This highlights the need to prioritise high-impact, value-driven projects.
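His arithmetic is easy to check:

```python
# The budget maths from Maulik's example, spelled out.
revenue = 50_000_000_000              # $50B in annual revenue
it_budget = revenue * 0.04            # ~4% of revenue goes to IT   -> $2B
analytics_budget = it_budget * 0.20   # ~20% of the IT budget       -> $400M
print(f"${analytics_budget:,.0f}")    # $400,000,000
```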

Despite this significant outlay, data engineering processes often suffer from long timelines, rapidly evolving technologies, a need for highly specialised talent, and low automation. These challenges create friction and slow down the speed at which businesses can derive actionable insights and value from their data.

To address these constraints, Maulik gave a refresher on the architecture of AI agents. Unlike traditional automation tools that follow fixed rules, AI agents powered by generative AI are capable of reasoning, making decisions, and responding to natural language inputs. They function using four foundational components: tools (reusable functions), memory (contextual storage), planning (decision-making), and action (execution). This architecture allows them to perform complex workflows with minimal human intervention.
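Those four components map naturally onto a simple control loop: plan, act, remember, repeat. The sketch below is purely illustrative, not anything Tredence described; the planner is a hard-coded stand-in for the LLM reasoning step, and the tool and pipeline names are hypothetical.

```python
# A minimal sketch of the four-part agent loop: tools, memory, planning, action.
# plan() is a stub; in a real agent, a generative model would reason here.
from typing import Callable

# Tools: reusable functions the agent can invoke.
TOOLS: dict[str, Callable[[str], str]] = {
    "check_pipeline": lambda name: f"pipeline '{name}' status: FAILED",
    "restart_pipeline": lambda name: f"pipeline '{name}' restarted",
}

# Memory: contextual storage of everything the agent has observed and done.
memory: list[str] = []

def plan(goal: str, memory: list[str]) -> tuple[str, str] | None:
    """Planning: pick the next (tool, argument) pair, or None when done."""
    if not any("status" in m for m in memory):
        return ("check_pipeline", "daily_sales_etl")
    if any("FAILED" in m for m in memory) and not any("restarted" in m for m in memory):
        return ("restart_pipeline", "daily_sales_etl")
    return None

goal = "recover the daily_sales_etl pipeline"
while (step := plan(goal, memory)) is not None:
    tool_name, arg = step
    result = TOOLS[tool_name](arg)  # Action: execute the chosen tool.
    memory.append(result)           # Remember the observation for the next plan.

print(memory)
```

The useful property is that nothing in the loop is specific to one workflow; swap in different tools and a real planner, and the same skeleton handles monitoring, documentation, or onboarding tasks.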

Maulik illustrated the transformation by comparing the traditional and agentic approaches to data operations. 

In the conventional setup, data engineers manually create pipelines, monitor systems, respond to incidents, and generate reports. This is not only time-consuming but also prone to delays and errors. In contrast, agentic systems can automatically generate pipelines, conduct real-time monitoring, trigger event-driven actions, and provide insight-based dashboards. The shift from reactive to proactive incident handling significantly reduces downtime and enhances reliability.

He shared a relatable story to show the impact of AI agents. 

When a critical ETL (extract, transform, and load) job fails, support staff typically struggle to diagnose the issue, leading to delays and missed SLAs. With an AI agent in the picture, the problem is detected, resolved, and communicated automatically within minutes. This highlights significant improvements in speed and reliability.
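In code, that recovery flow reduces to three steps: detect, diagnose, notify. The sketch below is an assumption-laden illustration, not Tredence’s implementation; the job name, error signatures, and notification channel are all hypothetical.

```python
# Illustrative sketch: diagnose a failed ETL job from its log tail,
# pick a remediation, and notify the on-call channel.
import re

KNOWN_FIXES = {
    # error signature (regex)        -> remediation the agent can take
    r"OutOfMemoryError":               "rerun on a larger cluster",
    r"Connection (timed out|reset)":   "retry with exponential backoff",
    r"schema mismatch":                "escalate: upstream schema changed",
}

def handle_failure(log_tail: str) -> str:
    """Match the log against known error signatures and pick a fix."""
    for pattern, fix in KNOWN_FIXES.items():
        if re.search(pattern, log_tail):
            return fix
    return "escalate: unknown failure"

def notify(channel: str, message: str) -> None:
    """Stand-in for a Slack, Teams, or email integration."""
    print(f"[{channel}] {message}")

# Simulated failure of a critical job.
log_tail = "java.lang.OutOfMemoryError: GC overhead limit exceeded"
action = handle_failure(log_tail)
notify("#data-ops", f"daily_sales_etl failed; agent action: {action}")
```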

AI agents open up a wide range of use cases in data engineering. 

One of the most impactful is code translation, which automatically converts legacy ETL code into code compatible with modern platforms. What was traditionally a multi-year undertaking can now be completed in a fraction of the time.
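A hedged sketch of how such a translation step might be wired up is below; llm_complete is a placeholder for whichever model endpoint an organisation uses, and the Informatica-to-PySpark pairing is just one illustrative choice.

```python
# Illustrative only: LLM-assisted translation of legacy ETL logic.
def llm_complete(prompt: str) -> str:
    """Placeholder for a call to a code-capable LLM."""
    return "# translated PySpark code would be returned here"

def translate_etl(legacy_code: str, source: str = "Informatica",
                  target: str = "PySpark") -> str:
    prompt = (
        f"Translate the following {source} ETL logic into idiomatic {target}.\n"
        "Preserve column names, join logic, and null handling exactly.\n"
        "Return only code.\n\n" + legacy_code
    )
    return llm_complete(prompt)

legacy = "SOURCE orders -> FILTER status = 'SHIPPED' -> TARGET shipped_orders"
print(translate_etl(legacy))
```

In practice, the generated code would still pass through automated tests and human review before replacing the legacy job, which is where the human oversight Maulik later stresses comes in.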

“Ingesting a new data source, which used to take weeks for us, is now going to take some days,” Maulik added.

Other use cases include automated pipeline documentation, accelerated onboarding of new data sources, real-time data quality checks, and intelligent support. Each of these areas contributes to faster, more efficient, and more scalable data operations.
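As one illustration, a real-time data quality check of the kind an agent might run on each incoming batch could look like the sketch below. The column names and thresholds are assumptions, not anything Maulik specified.

```python
# Illustrative rule-based quality check on a batch of incoming records.
def quality_check(rows: list[dict]) -> list[str]:
    """Return human-readable issues found in a batch; an empty list means clean."""
    if not rows:
        return ["batch is empty"]
    issues = []
    nulls = sum(1 for r in rows if r.get("amount") is None)
    if nulls / len(rows) > 0.01:              # more than 1% nulls in a key column
        issues.append(f"{nulls}/{len(rows)} rows have a null amount")
    ids = [r.get("order_id") for r in rows]
    if len(ids) != len(set(ids)):             # duplicate primary keys
        issues.append("duplicate order_id values")
    return issues

batch = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": None},
    {"order_id": 2, "amount": 35.5},
]
print(quality_check(batch))
# ['1/3 rows have a null amount', 'duplicate order_id values']
```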

Building the Future with Trust

However, Maulik was careful to acknowledge the associated risks. In data engineering, accuracy is paramount. Anything less than a 99% SLA is often unacceptable to the business. He emphasised that AI agents must be monitored by humans, with explainable actions, secure access, and robust data protection. Risks such as bias, hallucination, and data leakage must be mitigated through thoughtful implementation of traceability mechanisms, retrieval-augmented generation techniques, and strict access controls.
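One of those mitigations, traceability, is easy to picture: every action the agent takes gets an audit record carrying its stated rationale and whether a human approved it. The field names below are illustrative, not Tredence’s actual schema.

```python
# Illustrative audit trail: one record per agent action, so every action
# is explainable after the fact.
import json
import time

def log_action(agent: str, tool: str, reason: str,
               approved_by: str | None = None) -> None:
    """Emit an audit record; in practice this would go to an audit store."""
    record = {
        "ts": time.time(),
        "agent": agent,
        "tool": tool,
        "reason": reason,            # the agent's stated rationale
        "approved_by": approved_by,  # None means the action was autonomous
    }
    print(json.dumps(record))

log_action("etl-recovery-agent", "restart_pipeline",
           "daily_sales_etl failed with OutOfMemoryError")
```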

Trust, explainability, and security form the foundation of successful agent deployment.

Maulik concluded by pointing to a shift from traditional data teams to hybrid teams composed of data engineers and agent developers. These new roles will focus on building, training, and maintaining intelligent systems. Tredence is actively building AI agents across all key data engineering functions, including data ingestion, transformation, consumption, and documentation.

The impact on business value is immense. With significantly reduced time-to-market, companies can launch data products in weeks instead of months or years. The winners in this new era will not be those with the largest engineering teams but those who can build the smartest and most efficient AI agents.
