Why The Success Of AI In Regulated Industries Depends On Compliance-As-Architecture

AI systems expert Santhosh Bandari of TCS talks compliance-as-architecture in banking and beyond.

Credit: The Read Replica

Without compliance, architecture doesn't scale. It simply becomes an extremely risky asset.

Santhosh Bandari

Generative AI Developer

TCS for client BMO

Building AI systems in regulated environments is increasingly an enforcement problem at the level of data access rather than a software design challenge. Modern pipelines span identity providers, APIs, vector stores, and model runtimes, creating enforcement gaps when governance is implemented in application logic. As a result, enterprises are shifting compliance closer to the system primitives that actually execute work, such as query planning, data access, and storage-level policies. In this model, governance is evaluated at runtime rather than interpreted by application code or developer discipline. Compliance becomes a deterministic property of the underlying architecture rather than a set of rules distributed across services. This reframes security and governance as infrastructure concerns embedded directly into how data is accessed and executed.

We spoke with Santhosh Bandari, a forward-deployed generative AI engineer at TCS working on AI systems for banking client BMO, where he focuses on designing and deploying regulated AI pipelines in production environments. He previously built RAG systems for highly regulated insurance data at MetLife, working directly with sensitive workloads that required strict data access controls and auditability at scale. Across both enterprise environments, Bandari has worked at the intersection of AI system design, data infrastructure, and governance enforcement, where compliance is treated as a core architectural constraint rather than a downstream policy requirement.

Compliance as a unified control layer across the AI stack

As enterprises operationalize AI on sensitive data, security and governance expectations continue to climb, and compliance becomes an architectural constraint. Organizations now operate under overlapping regimes such as GDPR, SOC 2, ISO 27001, and emerging AI-specific regulations, each introducing its own set of controls that must be enforced consistently across the stack. Rather than managing these as separate policy domains, some teams are converging them into a single control plane that spans data ingestion, retrieval, and model execution.

Bandari argued that this shift is driven by a fundamental scaling issue: compliance that relies on developer memory or manual enforcement breaks down in distributed AI systems where workflows are dynamic and non-deterministic. "Without compliance, architecture doesn't scale. It simply becomes an extremely risky asset," Bandari said. "Compliance as code is a combination of engineering practices and governance frameworks with automated controls embedded across the development lifecycle." In this model, enforcement is no longer procedural; it is encoded directly into the system itself.

Across the regimes mentioned above, enterprises are converging on shared control primitives such as encryption, access control, and auditability rather than implementing duplicated logic per regulation. "Organizations are building a common control layer with standards like data encryption, access management, and audit logging," Bandari said. "They map multiple frameworks into those same controls rather than duplicating them." This effectively transforms compliance from a reporting burden into a reusable system abstraction that behaves more like infrastructure than policy.
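The mapping Bandari describes can be sketched as a small data structure. This is a minimal, illustrative sketch, not a real compliance mapping: the control descriptions and the framework-to-control assignments here are assumptions for demonstration, and an actual control layer would cover far more requirements per regime.

```python
# Sketch: map overlapping compliance regimes onto shared control
# primitives, so each control is implemented once and reused across
# frameworks. Names and mappings are illustrative, not authoritative.

CONTROLS = {
    "encryption_at_rest": "AES-256 on all storage volumes",
    "access_management": "RBAC/ABAC evaluated at query time",
    "audit_logging": "immutable log of every data access",
}

FRAMEWORK_MAP = {
    "GDPR": ["encryption_at_rest", "access_management", "audit_logging"],
    "SOC 2": ["access_management", "audit_logging"],
    "ISO 27001": ["encryption_at_rest", "access_management"],
}

def controls_for(framework: str) -> list[str]:
    """Resolve a framework to the shared controls that satisfy it."""
    return [CONTROLS[c] for c in FRAMEWORK_MAP[framework]]

def coverage() -> dict[str, list[str]]:
    """Invert the mapping: which frameworks does each control serve?"""
    out: dict[str, list[str]] = {c: [] for c in CONTROLS}
    for framework, controls in FRAMEWORK_MAP.items():
        for control in controls:
            out[control].append(framework)
    return out
```

The inverted `coverage()` view is the point of the abstraction: a single audit-logging implementation produces evidence for every regime mapped to it, instead of each regulation maintaining its own duplicated logic.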

To make this abstraction durable, enforcement is being pushed down the stack into the database layer, where data access can be governed at the point of interaction. By embedding controls such as row-level security, column-level policies, and audit logging directly into the data layer, organizations eliminate reliance on upstream application logic. The result is a system where compliance is not enforced around the data, but enforced through it.

The database as the primary enforcement boundary

Security and access control are being enforced at query execution time inside the database, shifting governance from application logic to deterministic system-level policy enforcement. Bandari views the database as becoming the execution boundary for security and governance, not just a storage layer. Instead of delegating enforcement to application logic, enterprises are embedding access control directly into the query execution path, where policies are evaluated deterministically at runtime. This shifts the trust boundary away from application code and into the database engine itself, ensuring that every data interaction is governed by native rules. As Bandari shared, his teams rely on "RLS/CLS, data masking, strict access controls like RBAC and ABAC, and policy-based query filtering; even for AI generated queries." In his experience, every data access needs to be recorded and traceable.
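To illustrate what policy-based filtering of an AI-generated query looks like, here is a deliberately simplified sketch. In production this evaluation lives inside the database engine (e.g. native row-level security policies); the Python version below only demonstrates the logic, and the role names, table names, and predicates are hypothetical. A real implementation would use a proper SQL parser rather than a regex.

```python
import re

# Hypothetical role policies: which tables a role may read, and the
# row-level predicate appended to every query it issues.
ROLE_POLICIES = {
    "claims_analyst": {"allowed_tables": {"claims"}, "row_filter": "region = 'US'"},
    "auditor": {"allowed_tables": {"claims", "access_log"}, "row_filter": "1 = 1"},
}

def enforce_policy(sql: str, role: str) -> str:
    """Reject queries that touch unauthorized tables, then append the
    role's row-level predicate. Applied deterministically to every
    query, including model-generated ones."""
    policy = ROLE_POLICIES[role]
    tables = set(re.findall(r"\bfrom\s+(\w+)", sql, re.IGNORECASE))
    denied = tables - policy["allowed_tables"]
    if denied:
        raise PermissionError(f"role {role!r} may not read {sorted(denied)}")
    clause = policy["row_filter"]
    if re.search(r"\bwhere\b", sql, re.IGNORECASE):
        return f"{sql} AND {clause}"
    return f"{sql} WHERE {clause}"
```

The key property is that the filter is not optional: an AI-generated query either fails the table check or comes back with the role's predicate attached, regardless of what the model wrote.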

"We audit absolutely everything," he said. This approach enforces governance at query execution rather than at the request layer, making unauthorized access structurally infeasible rather than procedurally managed. However, observability introduces a parallel risk where unstructured query logs can concentrate sensitive metadata, including user behavior and accessed fields. To mitigate this, enterprises are adopting privacy-preserving logging models that treat logs as governed data, masking or tokenizing sensitive attributes while preserving metadata like timestamps, permissions, and access patterns for auditability.
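A minimal sketch of that logging model, under stated assumptions: the field names are hypothetical, and the salt handling is simplified (a real system would manage salts or tokenization keys in a secrets store).

```python
import hashlib
import json
import time

# Hypothetical set of attributes considered sensitive in audit events.
SENSITIVE = {"user_email", "query_text"}

def tokenize(value: str, salt: str = "audit-salt") -> str:
    """Stable one-way token: access patterns remain joinable across
    log entries without exposing the raw value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def audit_record(event: dict) -> str:
    """Treat the log itself as governed data: tokenize sensitive
    attributes, preserve timestamps and permissions for auditability."""
    masked = {
        key: (tokenize(str(value)) if key in SENSITIVE else value)
        for key, value in event.items()
    }
    masked["logged_at"] = time.time()  # preserved audit metadata
    return json.dumps(masked, sort_keys=True)
```

Because the tokens are stable, an auditor can still answer "did this principal access this table repeatedly?" without the log ever holding the principal's identity in the clear.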

Application-layer compliance breaks without architectural abstraction

When compliance is implemented at the application layer, enforcement depends on distributed identity systems and human workflows rather than guaranteed, system-level execution. In Bandari's experience, engineers typically wire authentication into external identity providers, layer input/output validation at the API boundary, and route sensitive mutations through approval workflows tied to systems like ServiceNow and hierarchical escalation chains. While this model satisfies procedural governance, it shifts enforcement outside the system of record, making correctness dependent on process adherence rather than system guarantees. As Bandari notes, this is why abstraction becomes critical: security and compliance must be handled by internal platforms that operate transparently beneath the application layer, allowing developers to build without manually reproducing policy logic in every service.

This reliance on human and process-driven controls introduces failure modes that are difficult to detect in real time. Bandari described an incident where infrastructure signals appeared healthy, but the system was non-functional due to a 32-bit hashed credential that had remained unchanged for 15 years. "That was a tough day for us, but those controls must exist within the organization," he said. "They had been using the same password for fifteen years. If someone with access changes the password and leaves the organization, the knowledge leaves with them." The issue was not a technical failure in isolation; the absence of system-enforced ownership and rotation policies exposed how undocumented human dependencies become structural risk in otherwise modern architectures.

For Bandari, the incident underlined the need for routine credential rotation, clear change ownership, and auditable access paths. Embedding these rigorous controls often introduces calculated architectural tradeoffs. In a recent RAG system redesign at a previous role, data privacy requirements led his team to move away from open retrieval and implement a highly restricted, role-based filtering model. Designing an AI system that incorporates modern governance guardrails like flagging high-risk outputs can slow retrieval, but as Bandari notes, "the added architectural overhead and latency was a benefit because the model only accessed authorized and traceable data which improved trust in the system, and became easily scalable and adaptable across the enterprise." Rather than constraining development velocity, Bandari has seen well-designed compliance layers reshape system architecture into something inherently more robust and extensible. "I can strongly say that compliance didn't block innovation. It actually reshaped the architecture into something more secure and enterprise-ready."
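Role-based retrieval filtering of the kind described above can be sketched in a few lines. This is an illustrative shape, not Bandari's implementation: the `Chunk` type, its ACL field, and the `source_id` trace are assumptions, and ranking/scoring of hits is omitted.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """A retrieved document chunk with an ACL attached at ingestion
    time and a source identifier for traceability."""
    text: str
    allowed_roles: set
    source_id: str

def restricted_retrieve(hits: list[Chunk], user_roles: set) -> list[Chunk]:
    """Restricted retrieval: the model only ever sees chunks the
    caller is authorized for, and every chunk that passes through
    is traceable back to its source_id."""
    return [c for c in hits if c.allowed_roles & user_roles]
```

The filter runs between the vector store and the model, so an unauthorized chunk never enters the prompt; the surviving `source_id` values are what make the model's context auditable after the fact.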

Turning institutional memory into embedded operational intelligence

Meanwhile, incident response is shifting from manual triage workflows to system-aware automation where operational knowledge is embedded directly into observability and infrastructure layers. Typically, at the infrastructure layer, organizations standardize on common monitoring stacks and just-in-time privileged access. But the operational workflows around those tools often depend on manual triage with bridge calls, ticket routing, and escalation chains that play out over hours or days. Manual triage creates operational bottlenecks that some organizations bypass with agentic AI. By granting LLMs controlled access to crawl system diagrams and observability outputs, teams move from reactive investigation to automated diagnosis and routing. This leads to agentic AI systems that are able to collapse incident response timelines by integrating directly with infrastructure metadata, observability pipelines, and system topology. In Bandari's words, "the teams no longer waste five to six hours managing the issue because within a minute, we can identify the potential root cause and the system will generate an RCA document." This shift embeds operational knowledge directly into the tooling layer rather than leaving it in institutional memory. "With risk-based routing, the LLM identifies who we need to contact, searches all the knowledge documents, and finds all the server owners. It provides the appropriate point of contact, change coordinators, and escalation paths," he said.
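The routing step can be sketched as follows. In the system Bandari describes, an LLM resolves owners and escalation paths by searching knowledge documents; in this minimal sketch a static registry stands in for that lookup, and the service names, contacts, and severity scheme are all hypothetical.

```python
# Hypothetical ownership registry; in practice this would be resolved
# dynamically from system topology and knowledge documents.
SERVICE_REGISTRY = {
    "payments-db": {
        "owner": "dba-team@example.com",
        "change_coordinator": "cc-payments@example.com",
        "escalation": ["oncall-l1", "oncall-l2", "eng-director"],
    },
}

def route_incident(service: str, severity: int) -> dict:
    """Risk-based routing: return the point of contact and change
    coordinator; higher severity walks further up the escalation chain."""
    entry = SERVICE_REGISTRY[service]
    depth = min(severity, len(entry["escalation"]))
    return {
        "contact": entry["owner"],
        "change_coordinator": entry["change_coordinator"],
        "escalate_to": entry["escalation"][:depth],
    }
```

Encoding ownership this way is what moves the knowledge out of people's heads: when the registry is the source of truth, a departure no longer takes the escalation path with it.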

From periodic audits to continuous evidence

Static, point-in-time audits are giving way to continuous monitoring as the way regulated industries demonstrate control. "AI is shaping the future by providing real-time monitoring of data and models for continuous compliance instead of periodic audits," Bandari said. The teams set up for that shift have already pushed enforcement out of approval workflows and into the query layer, where every data access generates its own evidence. The ones still treating compliance as something the application logic handles will spend the next audit cycle reconstructing from logs what their architecture should have been producing in real time.