Agents Kicked In the Wall Between Analytics and Operations, and Postgres Is What's on the Other Side
Shubham Haranale, DevOps and MLOps engineer at Turing, on how AI agents are driving a convergence of transactional, analytical, and ML workloads on Postgres.

The boundaries between analytics, operations, and AI are collapsing. With agents, everything becomes real-time and interconnected.
For most of the modern data stack era, the architecture was modular by design. Separate systems for separate jobs, each optimized for its own workload. Teams built, hired, and budgeted around those walls. Now AI agents are tearing them down by accelerating what engineering teams have wanted for years: analytics and operations converging on the same data layer, in real time. The stitched-together stacks that kept these concerns separate are sometimes cracking under the load. For teams already planning to consolidate, agents just moved up the timeline.
The infrastructure question of 2026 is how much of this Postgres can absorb, and what has to change at the data layer to make it work. OpenAI offered one answer in January when it disclosed that ChatGPT's user-facing workload runs on a single-primary Postgres instance with nearly 50 read replicas, handling millions of queries per second for 800 million users. Gartner predicts 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025. Fragmented governance, unpredictable latency, and runaway compute costs were manageable when humans were the only ones querying. Agents generate more queries than humans, continuously and autonomously, and they don't pace themselves or batch their work to stay within budget. They just run.
Shubham Haranale has been watching this convergence play out in production. A senior DevOps and MLOps engineer at Turing with experience across startups and enterprise clients, Haranale runs Postgres at the data layer, Kubernetes for compute, and Airflow, Spark, and Kafka for pipelines. His career arc from DevOps to MLOps to AI ops traces the exact shift the industry is negotiating right now, and he's living it daily: keeping production systems stable while agents start pulling the levers.
It all comes back to Postgres
"The boundaries between analytics, operations, and AI are collapsing. Traditionally, you had Postgres for your operational data, a warehouse for analytics, and separate pipelines for ML. With agents, everything becomes real-time and interconnected."
The industry is placing bets accordingly. Databricks' launch of Lakebase, a Postgres-compatible OLTP database wired directly into its analytics platform, formalizes the convergence Haranale is describing: transactional and analytical workloads sharing a governed foundation instead of bridging the gap with ETL.
The fragmentation problem follows predictably. "When you stitch together a database, a data warehouse, feature stores, and ML pipelines, you get data duplication, governance gaps, and latency," Haranale said. He has watched those exact failure modes play out in production systems at multiple companies, each one running Postgres as the operational backbone.
Correctness is the new performance
"Scalability got answered years ago. The question now is whether your system behaves correctly, consistently, and predictably under continuous automated use," Haranale told us in a recent interview.
It's a shift the broader industry is registering. In LangChain's 2026 State of AI Agents report, quality overtook cost as the primary barrier to putting agents into production — cited by 32% of the 1,300 practitioners surveyed, while cost concerns dropped year over year.
Infrastructure engineering has spent decades on performance questions: how fast, how available, and how scalable systems could get in an increasingly data-saturated world. Postgres got very good at answering those. They were the right metrics for a world where humans drove every query.
But agents don't just fetch and serve data in the traditional sense. They act on it, continuously, at a volume that compounds every inefficiency in the underlying layers. When that happens, the small problems in data consistency, latency, and governance scale into impact that outruns human oversight.
The nuance of convergence
But he pushes back on the idea that the fix is cramming everything into a single database. "Fragmentation can be the root problem, but the solution isn't literally merging everything into one system," he said. "Postgres is optimized for transactions. Warehouses are optimized for analytics. Feature stores are optimized for serving ML models. Trying to force everything into one system can easily create new bottlenecks."
His answer is logical unification, not physical consolidation. In practice, that means ditching batch ETL for real-time data movement: change data capture, streaming via Kafka, keeping the warehouse and ML systems in sync with Postgres as the operational center of gravity. "The focus is shifting to synchronization and consistency across the whole system." The upstream ecosystem is moving in the same direction. Google Cloud recently disclosed that its core Postgres contributions are focused on advancing logical replication toward active-active configurations, including automatic conflict detection at the row level — the kind of synchronization plumbing that makes Postgres-centric architectures viable at scale.
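One way to picture the synchronization plumbing this requires: a CDC consumer that turns each change event into an idempotent upsert downstream, so replayed events converge instead of duplicating rows. This is a minimal sketch; the event shape is illustrative (loosely modeled on Debezium-style payloads) and not drawn from Haranale's systems.

```python
def change_to_upsert(event):
    """Turn a CDC change event into an idempotent upsert for a
    downstream analytical store. Assumes the event carries the table
    name and the new row image ("after") keyed by an "id" column;
    ON CONFLICT makes replays safe."""
    table = event["table"]
    row = event["after"]  # new row image decoded from the WAL
    cols = sorted(row)
    placeholders = ", ".join(f":{c}" for c in cols)
    updates = ", ".join(f"{c} = EXCLUDED.{c}" for c in cols if c != "id")
    sql = (
        f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders}) "
        f"ON CONFLICT (id) DO UPDATE SET {updates}"
    )
    return sql, row
```

The idempotency is the point: streaming delivery is usually at-least-once, so every consumer in a Postgres-centric sync architecture has to tolerate seeing the same change twice.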
What might sound like a concession to the best-of-breed camp is actually a Postgres-centric architecture with better plumbing. Postgres stays at the center; everything else syncs to it. The question that remains: How do you govern all of it once agents start generating load that humans never did?
Beyond the batch
For Haranale, the answer came in production. "The system workload was fine in batch mode," he said. "But when agent triggers were introduced, queries started executing repeatedly. Connection spikes happened. Latency climbed."
The tricky part: nothing crashed. "The system just became so unpredictable." No alarms or outages. Just a Postgres-backed pipeline that was technically running but could no longer be trusted to return consistent results in consistent time. The root cause was structural: missing indexes on tables that agents were now hitting at high volume; no rate limiting on automated triggers; and repetitive queries hammering Postgres, because nobody had designed for a workload where the same data gets requested dozens of times per minute by autonomous processes instead of once every few minutes by a human refreshing a dashboard.
The fix was Postgres-native. "I used EXPLAIN ANALYZE to check query plans, found the missing indexes, identified sequential scans on large tables," Haranale said. He added proper indexing, introduced query deduplication, and capped concurrent executions. The system stabilized.
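The deduplication-plus-cap part of that fix can be sketched in a few lines. This is an illustrative pattern, not Haranale's code: a short-lived cache collapses repeated identical queries, and a semaphore caps how many executions reach Postgres at once. The class name, TTL, and limits are assumptions.

```python
import threading
import time

class QueryGate:
    """Deduplicate repeated identical queries and cap concurrency.

    Results are cached for `ttl` seconds, so agents re-requesting the
    same data inside that window hit the cache instead of the database.
    At most `max_concurrent` queries run against Postgres at once.
    """

    def __init__(self, run_query, ttl=5.0, max_concurrent=10):
        self._run = run_query             # e.g. a driver execute wrapper
        self._ttl = ttl
        self._sem = threading.Semaphore(max_concurrent)
        self._cache = {}                  # sql -> (expires_at, result)
        self._lock = threading.Lock()

    def execute(self, sql):
        now = time.monotonic()
        with self._lock:
            hit = self._cache.get(sql)
            if hit and hit[0] > now:
                return hit[1]             # cache hit: no DB round trip
        with self._sem:                   # cap concurrent executions
            result = self._run(sql)
        with self._lock:
            self._cache[sql] = (time.monotonic() + self._ttl, result)
        return result
```

Wrapped around the real query executor, a gate like this converts an agent's dozens-per-minute repeat requests into a single database round trip per TTL window, which is exactly the workload shape the original system was never designed for.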
But the fix is the boring part. Indexes and rate limits are day-one Postgres hygiene. What makes the story worth telling is the failure mode. A system that performed perfectly under human-driven batch workloads degraded the moment agents started operating against it at agent-native volume. The degradation was subtle enough to skip every alert in the stack, and corrosive enough to make the team stop trusting the data.
Governance as a workflow
"Governance is less a tool and more about enforcing consistency across data, access, and behavior across the stack," said Haranale. His framework has layers: data consistency first, meaning clear schemas, data contracts between systems, and validation at ingress rather than hoping to catch problems downstream. Then access control through centralized IAM policies that determine who and what can reach each system. Then observability and auditability: logging queries, tracking automated behaviors, enforcing traceability. "In most systems I've seen, governance is applied as a tool rather than as a workflow," he said. "That's the gap."
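The "validation at ingress" layer of that framework can be as simple as checking every record against its data contract at the boundary, before it reaches Postgres. A minimal sketch, with a deliberately simplified contract shape (field name to expected type) that is an assumption, not a description of Haranale's stack:

```python
def validate_at_ingress(record, contract):
    """Check a record against a data contract before it is written.

    `contract` maps field name -> expected Python type. Missing fields,
    wrong types, and unexpected extra fields are all rejected at the
    boundary rather than caught downstream. Returns a list of errors;
    an empty list means the record conforms.
    """
    errors = []
    for field, expected in contract.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    for field in record:
        if field not in contract:
            errors.append(f"unexpected field: {field}")
    return errors
```

The design choice matters more than the code: rejecting at ingress keeps a malformed record from ever propagating into the warehouse or a feature store, which is where governance-as-workflow differs from governance-as-tool.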
The Postgres ecosystem is moving in this direction. Row Level Security, for example, enforces access control at the database layer rather than relying on application logic to get it right. Platforms building on Postgres are making RLS a default rather than an opt-in, treating it as infrastructure rather than configuration. When agents are the ones querying, the database has to enforce the rules because no human is checking each request.
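As a concrete illustration of what database-enforced access control looks like, a tenant-isolation policy is two statements. The sketch below generates them; the policy name and the `app.tenant_id` session setting are assumptions for illustration, not defaults of Postgres or of any platform mentioned above.

```python
def tenant_rls_policies(table, tenant_col="tenant_id"):
    """Generate the Postgres statements that enable row-level security
    on `table` and restrict visible rows to the caller's tenant, read
    from a session setting the application sets per connection."""
    return [
        f"ALTER TABLE {table} ENABLE ROW LEVEL SECURITY;",
        f"CREATE POLICY {table}_tenant_isolation ON {table} "
        f"USING ({tenant_col} = current_setting('app.tenant_id'));",
    ]
```

Once a policy like this is active, every query against the table is filtered by the database itself, regardless of what the querying agent asks for.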
The bottleneck has moved
But even with governance solved at the workflow level, Haranale sees a deeper problem. "I've gotten very good at infrastructure over the years. Moving data, running pipelines. But with agents, systems are now making decisions continuously, based on live and sometimes imperfect data." The gap he's flagging is between performance and correctness. "If data is slightly inconsistent, the decisions still execute, but with less control over whether they're right. There are no validation layers that can catch errors before they propagate."
That's the crux. A system that's fast and available but returning slightly wrong results to an agent that then acts on them autonomously is a trust problem. And the traditional observability stack (Prometheus, Grafana, latency dashboards) was never designed to answer the question that actually matters: did the agent make the right decision?
"The industry is moving from tool-focused thinking to system-level thinking," Haranale said. "Learning how systems behave, not just how they work. Focusing on data consistency over data movement. And treating design as the most underrated part of building for agents." The teams solving for correctness at the data layer will be the ones shipping production agentic systems. The ones still optimizing for performance alone will keep wondering why their agents work in staging and fall apart in prod. The systems Haranale admires most aren't the most advanced. "The best systems I've seen are the most predictable and manageable."
