Latest News

When AI Agents Go 'God Mode', the Security Perimeter Must Move to the Database

Mohit Bansal, Security Engineering Manager at Webflow, on how AI agents expanded database exposure from a few engineers to the entire organization, and why security needs to go deeper at the data layer.

Security teams are usually so small, and we as the security practitioners don't want to be a blocker, so we encourage work to happen in good faith.

Mohit Bansal

Security Engineering Manager

Webflow

The number of people who could modify a production database used to be a short list. AI agents have made it more closely resemble an org chart. When product managers, marketers, and designers all have MCP servers connected to Cursor and the ability to make API calls against live data, the security model that assumed database access was an engineering concern stops working.

Many security-conscious engineers at established companies would balk at the notion of allowing non-technical teams anywhere near the production database. But the access isn't always arriving through formal provisioning. It often comes through tooling that was adopted for speed and never scoped for security. In one recent survey, securing AI agents ranked as the number-one unresolved concern among more than 100 Fortune 500 CISOs, above vulnerability management, data loss prevention, and third-party risk. Nearly 70% of the enterprises surveyed already have agents in production. The data confirms that the access explosion isn't only happening at startups and small businesses that "don't know better", but is also slipping into established companies under pressure from executives chasing AI productivity across the entire org.

Mohit Bansal is a Security Engineering Manager at Webflow, where his job is keeping standard security practices intact while agentic AI adoption transforms how every team in the company builds. He said his team is preparing to catch incidents early and prevent them where it can, because their likelihood only grows.

Everyone is a builder, and every builder touches the database

"There are certain settings in AI agents that can enable 'God mode', and those are very dangerous," Bansal said. "As long as the agent is asking for permission and you're reviewing what it's touching, it's fine. But as soon as you let it loose and it starts making changes on its own, specifically to databases, that's when it gets dangerous."
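Bansal's distinction, an agent that asks before it acts versus one let loose, maps to a human-in-the-loop gate in front of the agent's tool executor. The sketch below is illustrative only; `ApprovalGate`, `WRITE_ACTIONS`, and the callables are assumptions, not any real agent framework's API.

```python
# Hypothetical human-in-the-loop gate: mutating actions require explicit
# sign-off before the agent's executor runs them; reads pass through.
WRITE_ACTIONS = {"insert", "update", "delete", "drop"}

class ApprovalGate:
    """Wraps an agent's tool executor so writes need a reviewer's approval."""

    def __init__(self, executor, approve):
        self.executor = executor  # callable that actually performs the tool call
        self.approve = approve    # callable returning True/False, e.g. a human prompt

    def run(self, action, payload):
        if action in WRITE_ACTIONS and not self.approve(action, payload):
            # "God mode" is exactly the absence of this branch.
            return {"status": "blocked", "action": action}
        return self.executor(action, payload)
```

Flipping `approve` to an unconditional yes reproduces the "let it loose" mode Bansal warns about, which is why the reviewer callable, not the agent, should own that decision.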

Bansal said that where four engineers once had the ability to expose the database, he now has to provision access routes for product and marketing teams building with AI agents that can carry the same access. From a growth perspective the result is clearly net-positive, but the downside risk has to be weighed.

"Select teams have an MCP server connected to Cursor or other tools, and they can make API calls or data modifications. So the threat landscape has evolved to the point where security teams have to step up and scale themselves to match engineering because everyone is engineering now to some degree."

The payoff keeps the pressure on. Projects that used to take six months are shipping in a month. Nobody wants to be the team that slows that down, least of all security. "Security teams are usually so small, and we as the security practitioners don't want to be a blocker, so we encourage work to happen in good faith."

The sandbox that wasn't safe

The incident that made the problem concrete for Bansal at Webflow started in a sandbox. His team regularly dumps data from their production database into sandbox instances for experimentation and prototyping. On one occasion, someone discovered PII in the sandbox that an AI agent had ingested.

A DSPM (data security posture management) sensor caught it and the team corrected the problem. But detection isn't prevention, and Bansal knows it. "In the first place, how do we avoid it? An obvious answer for that is not always clear." In that example, the root cause wasn't a misconfigured tool. It was human behavior compounding a permissioning gap. In practice, users copy production data into sandboxes, reuse credentials, or point tools directly at prod because it's easier, Bansal said. Once the data lands somewhere with broader access, the role-based access controls that governed it in production stop mattering.

"This type of work should be very binary," Bansal said. "If someone has access, they have access. But if they dump that data into the sandbox, then role-based access control goes in vain because now the data is somewhere everybody has access to it."
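One way to keep RBAC from going in vain is to ensure PII never survives the prod-to-sandbox copy at all. A minimal sketch, assuming regex-detectable identifiers; the patterns and field handling are illustrative, and a real pipeline would rely on a DSPM classifier or a tokenization service rather than hand-rolled regexes:

```python
import re

# Hypothetical scrubber applied to records before a prod-to-sandbox dump.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<ssn>"),       # US SSN format
]

def scrub(value: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for pattern, placeholder in PII_PATTERNS:
        value = pattern.sub(placeholder, value)
    return value

def sanitize_dump(rows):
    """Scrub every string field before the data leaves production."""
    return [{k: scrub(v) if isinstance(v, str) else v for k, v in row.items()}
            for row in rows]
```

Because the scrubbing happens on the production side of the copy, the sandbox never holds the sensitive values, regardless of who, or what agent, can read it later.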

That's the failure mode the traditional security stack wasn't designed for. The controls are environment-based: dev vs. prod, network isolation, credential scoping. They assume data stays where it was provisioned. Agents and the humans directing them don't respect those boundaries, not out of malice but out of convenience.

Going deeper at the data layer

Fortunately, Bansal's team isn't starting from zero. Webflow's current security architecture includes isolating the production database from development environments, running security reviews and scans on anything moving from prototype to production, pen testing to catch over-permissioning, and DSPM sensors for detection. It's a layered approach, and so far it's prevented a major incident.

But Bansal is candid about where the gaps are. Asked how his team enforces row-level security or scoped access policies at the database layer, he said work is ongoing, but coverage is trending upward. Overall, anecdotal stories like this suggest that the industry's security posture is trailing the access explosion that AI agents created. The perimeter controls that worked when four engineers had production access aren't enough when hundreds of people across the organization can modify data through agents, and the next layer of defense, at the data layer itself, isn't widely deployed yet.

Baking it into the infrastructure

Bansal pointed to AWS making S3 buckets private by default as a precedent for access control baked into infrastructure rather than left to configuration. Elsewhere, modern platforms building on Postgres are making row-level security a default rather than an opt-in. They are enforcing access rules at the data layer because the environment-level controls alone can't contain what agents and their users are doing.
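The shape of data-layer enforcement can be shown in miniature: every read and write path carries the caller's identity, and there is no unfiltered escape hatch, so a copy of the access layer enforces the same policy wherever it runs. The `RowSecuredStore` below is a hypothetical Python illustration; in Postgres the same role is played by `ALTER TABLE ... ENABLE ROW LEVEL SECURITY` and `CREATE POLICY`.

```python
# Illustrative in-memory store that enforces row-level access on every path.
class RowSecuredStore:
    def __init__(self, rows):
        self._rows = rows  # each row carries an owner_id column

    def select(self, *, as_user):
        # Every read is filtered by caller identity; no unfiltered code path.
        return [r for r in self._rows if r["owner_id"] == as_user]

    def update(self, row_id, changes, *, as_user):
        # Writes obey the same policy: rows invisible to the caller
        # are silently untouchable.
        for r in self._rows:
            if r["id"] == row_id and r["owner_id"] == as_user:
                r.update(changes)
                return True
        return False
```

The point of the sketch is the keyword-only `as_user` argument: a caller, human or agent, cannot reach the data without declaring an identity for the policy to filter on.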

Bansal sees the future as a hybrid: database-level security as the foundation, with everything else layered on top. Endpoint monitoring, developer-level controls, and DSPM sensors all become signals for how well that foundation is performing. "The security at the database level, that would stop a lot more risk," he said. "But the other security features are going to be a signal for how well you're doing it. If we're still seeing issues at the endpoint level, that's an early indicator that we missed something at the database level."

The teams solving this problem in 2026 will be the ones that accepted the old perimeter is gone and started enforcing rules where the data actually lives. Platforms that bake in access controls by default will help, but most enterprises aren't there yet, and the gap between where security teams are and where they need to be isn't closing on its own. Until it does, the backstop is the people in Bansal's position, walking the line between enablement and sensible restraint.