The Future of Code

April 19, 2026

What Happens When Read-Only Access is the Only AI Guardrail That Actually Holds?

Drew Moisant, Principal Cloud Architect at Chamberlain Group, discusses why sometimes the only AI governance layer that works is the one that lives below the model's reasoning.


I want us to stop silencing alarms because they're just noise. I want agentic systems that can scale out infrastructure and remediate incidents without a human in the loop.

Drew Moisant

Principal Cloud Architect

Chamberlain Group

Every organization deploying AI agents is grappling with the same governance question: how do you let an agent operate on production systems without letting it break them? Most teams start with prompt-layer constraints. They establish rules files, system instructions, and explicit "do not" directives. The problem is that these are suggestions, not enforcement. Even Cursor's own team has acknowledged that rules passed as system-prompt instructions are a "known limitation" (to put it generously) because models can deprioritize constraints, especially negative ones like "don't do X." GitHub's Copilot cloud agent documentation goes further, explicitly framing the agent as autonomous enough to push code changes and access sensitive information, then documenting compensating controls precisely because prompt-layer guardrails alone aren't reliable.

If the model can reason about a constraint, it can reason its way around it.

Drew Moisant is a Principal Cloud Architect at Chamberlain Group, the smart access technology company behind LiftMaster and myQ, with products installed in more than 50 million homes and over 14 million daily app users. Lately he has been spending most of his time building agentic systems for observability and mean time to recovery, and every agent he deploys gets read-only access. No exceptions.

The outage that proved the point

Moisant recently used AI tooling to diagnose a production outage at Chamberlain. The actual failure chain involved a change with unexpected downstream effects that caused a middleware layer to lose connectivity to an API. The backed-up requests propagated to the database, CPU spiked, and memory cache overflowed.

"The AI tools looked at what happened and assumed it was a batch job that caused the problem," Moisant said. The diagnosis was coherent and read like legitimate root cause analysis, but it was just plain wrong.

If the agent had write access and the authority to remediate, it would have acted on its incorrect diagnosis. Meanwhile, the actual failure chain would have continued propagating through the middleware and into the database.

"AI is incredibly fast at getting information," Moisant said. "But then it'll make correlations to that information in the incorrect way. You have to have somebody who knows what they're looking for." The agent was wrong about the cause, but it had no credentials to act on that wrong conclusion, so the misdiagnosis cost nothing. A human who knew the system caught the actual failure chain. "Even when I put in Cursor rules that say, 'do not make a change without explicit approval,' it tends to make changes anyway and ignore its own rules."

His response isn't to write better rules. It's to move the enforcement to a layer the model can't reach. A database role that physically cannot write doesn't leave anything to negotiate.
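
The principle is easy to demonstrate. In Postgres, the enforcement would be a role granted only SELECT; the minimal sketch below uses SQLite's read-only URI mode (standard library only, table and data invented for illustration) to show the same property, a write restriction the agent cannot reason its way around because it is enforced by the storage engine, not the prompt:

```python
import sqlite3
import tempfile, os

# Set up a throwaway database standing in for a production system.
path = os.path.join(tempfile.mkdtemp(), "prod.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE incidents (id INTEGER PRIMARY KEY, note TEXT)")
conn.execute("INSERT INTO incidents (note) VALUES ('cpu spike')")
conn.commit()
conn.close()

# The agent's connection is opened read-only at the storage layer.
# No rule file or system prompt is involved in enforcing this.
agent_conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)

# Reads succeed: the diagnostic value of the read path is intact.
rows = agent_conn.execute("SELECT note FROM incidents").fetchall()
print(rows)  # [('cpu spike',)]

# Writes fail at the engine, not at the instruction layer.
try:
    agent_conn.execute("DELETE FROM incidents")
except sqlite3.OperationalError as e:
    print("write rejected:", e)
```

The Postgres equivalent is a dedicated role with SELECT grants and nothing else (for example, `GRANT SELECT ON ALL TABLES IN SCHEMA public TO agent_ro;` with no INSERT, UPDATE, or DELETE privileges ever granted).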

The read path is where the value lives

In Moisant's experience, limiting agents to read-only access hasn't limited the upside. His biggest productivity gain was entirely read-path: he pointed Cursor at Datadog, Grafana, and raw infrastructure statistics, and the agent correlated across all of them, surfaced bottlenecks nobody had identified, and produced a comprehensive executive summary of whether Chamberlain's infrastructure could absorb substantial subscriber growth. The work took about an hour. It would have taken two to three weeks manually.

The same tools build the observability layer they later query. Moisant uses agents to wire up dashboards, configure alerting, and set up metrics collection: operational work he describes as "the things I don't enjoy and I don't think I'm that good at."

The same pattern holds for incident response. Moisant gives Claude Code or Cursor read-only API access to Datadog logs, and the agent traces incidents in fifteen minutes that used to take two to three hours. "I could say, 'Here's where the incident really started. It started two hours before you noticed it. And what started that was this cascading group of events.'"
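
The shape of that read-path work is simple even if the scale isn't: merge events from multiple sources into one timeline and measure how far before the page the chain actually began. A toy sketch, with invented event streams standing in for real Datadog payloads:

```python
from datetime import datetime, timedelta

# Hypothetical events from three sources: (timestamp, source, message).
events = [
    (datetime(2026, 4, 1, 3, 0),  "pager",      "CPU alarm fired on db-primary"),
    (datetime(2026, 4, 1, 1, 2),  "middleware", "connection pool to api-gw exhausted"),
    (datetime(2026, 4, 1, 1, 0),  "api-gw",     "deploy 4812 rolled out"),
    (datetime(2026, 4, 1, 2, 40), "db",         "cache eviction rate anomaly"),
]

def incident_timeline(events, noticed_at):
    """Order events across sources and report the lead time before the page."""
    timeline = sorted(events)
    lead = noticed_at - timeline[0][0]
    return timeline, lead

noticed = datetime(2026, 4, 1, 3, 0)  # when the humans saw the alarm
timeline, lead = incident_timeline(events, noticed)
for ts, source, msg in timeline:
    print(ts.isoformat(), source, msg)
print(f"incident began {lead} before it was noticed")  # 2:00:00 before
```

An agent doing this across millions of log lines in one context window is the capability gain Moisant describes; the correlation itself never needs a write.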

The productivity gains are consistent with what other organizations are measuring. Datadog's case study with Twine Security reported roughly 80% MTTR reduction, with debugging that previously took hours completing in minutes through AI-assisted trace inspection. The pattern is the same: the agent's ability to correlate across millions of log lines, across systems, in a single context window is a genuine capability gain, and it works precisely because it's a read operation. The agent queries the data layer. A human decides what to do about it.

The cost to experiment is negligible. Moisant builds proof-of-concept pipelines with Python and Amazon Bedrock for a few dollars per run, using foundation models without fine-tuning. "You're taking text from thousands or millions of text files from logs that a human being can't possibly correlate, but an AI agent can."

What read-only leaves on the table

The limitation of Moisant's approach is the one he names himself: it keeps you reactive. An agent that can correlate a million log lines but can't act on what it finds still requires a human in the critical path. At 3 AM, the diagnosis arrives in fifteen minutes instead of three hours, but someone still has to wake up and execute the fix. At scale, that's a bottleneck the industry is actively trying to remove.

AWS, Microsoft, and New Relic have all shipped agentic SRE products in the past year designed to go beyond diagnosis into remediation. The market signal is clear: vendors expect write-path autonomy to be where the value eventually lands.

Moisant agrees with the direction, if not the timeline. "I want to see alert volumes drop 40, 50, 60 percent," he said. "I want us to stop silencing alarms because they're just noise. I want agentic systems that can scale out infrastructure and remediate incidents without a human in the loop."

But the outage that proved his read-only architecture also proved why he can't let go of it yet. The agent's diagnosis was wrong. If it had the ability to remediate, it would have acted on that wrong diagnosis while the actual failure chain continued propagating. The gap between observation and action is both a feature limitation and a trust problem, and no amount of speed can compensate for it until the judgment catches up.

The governance surface keeps expanding

What makes read-only enforcement consequential beyond Moisant's team is that the number of actors hitting the database is multiplying. The rise of AI coding tools and low-code platforms means the people building against production data increasingly aren't engineers who understand database security. Deutsche Bahn now has 4,000 citizen developers and more than 500 applications in production, with executives saying they "no longer require a professional developer and a lot of time." That pattern is repeating across every large enterprise adopting AI builder tools.

Moisant's read-only architecture addresses both failure modes: the agent that overrides its prompt-layer rules and the builder who never understood database security in the first place. In both cases, the infrastructure determines what's possible, and everything above it is advisory.

The Postgres ecosystem is starting to build for this. Managed platforms are splitting credentials by privilege level: low-privilege publishable keys for application-layer access, elevated keys scoped to specific backend operations, and gateways that convert keys into short-lived tokens rather than granting persistent access. It's the same principle Moisant applies manually (scoped, time-bounded, least-privilege credentials), moved into the infrastructure layer where it doesn't depend on any individual builder getting the configuration right.
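
The gateway piece of that pattern can be sketched in a few lines. This is a minimal illustration of the principle (not any platform's actual implementation): a long-lived key ID is exchanged for a signed token that carries a scope and an expiry, and anything tampered with, expired, or out of scope is rejected:

```python
import hmac, hashlib, json, time, base64

SECRET = b"gateway-signing-key"  # held by the gateway, never by the agent

def issue_token(key_id, scope, ttl_seconds=300):
    """Exchange a long-lived key ID for a short-lived, scoped token."""
    claims = {"key": key_id, "scope": scope, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def check_token(token, required_scope):
    """Reject tampered, expired, or out-of-scope tokens."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time() and claims["scope"] == required_scope

token = issue_token("agent-ro", scope="read")
print(check_token(token, required_scope="read"))   # True
print(check_token(token, required_scope="write"))  # False
```

The point of the design is the one Moisant keeps returning to: the scope check happens in infrastructure the agent can only present a token to, not reason with.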

The schema stays with the human

Read-only isn't Moisant's endgame. It's his current answer to a problem the industry hasn't solved: model judgment isn't reliable enough for autonomous action on production systems, and prompt-layer constraints aren't hard enough to substitute for that judgment.

The outage story is the proof on both counts. The agent diagnosed the wrong root cause — which means the judgment isn't there. And when Moisant tells Cursor not to make changes without approval, it makes changes anyway — which means the prompt-layer constraint isn't there either. Read-only infrastructure is what's left when both of those fail.

Whether that role stays read-only forever depends on how fast the models' judgment catches up to their speed, and on whether anyone figures out how to make a prompt-layer constraint that an agent can't negotiate its way around. Until then, the teams getting this right are the ones enforcing rules where the model can't reason about them. That means in the infrastructure, below the prompt, where there's nothing to negotiate.