The Case Of 'A Billion APIs': Overcoming AI-Induced Monolithic Tech Debt No One Can Maintain
Vincenzo Agrillo, Principal Development Professional at MSG Global Solutions, discusses what happens when AI builds the frontend in an hour but nobody is minding the API contracts underneath.

We will have billions of APIs, each different from one another, and monolithic products that no one can maintain or evolve.
AI tools compress software delivery cycles in a very specific way: they generate implementation layers (frontend code, UI scaffolding, and early-stage prototypes) almost instantly, while leaving integration layers (API contracts, service boundaries, and data schemas) under-specified, creating a new form of technical debt.
That risk is most visible at the API layer, where AI generates interfaces dynamically, without shared standards or long-term compatibility guarantees.
Vincenzo Agrillo, Principal Development Professional at MSG Global Solutions, frames the tension from a systems perspective shaped by decades in infrastructure, including ISO 27001-certified data center operations. In his experience, AI changes the topology of software construction itself.
“We will have billions of APIs, each different from one another, and monolithic products that no one can maintain or evolve,” Agrillo said, noting that AI may build the backend and frontend with zero attention to standard protocols.
The fear stems from the way AI-generated systems optimize locally. Each component may function correctly in isolation, but the system loses global coherence. APIs multiply without coordination, service boundaries drift, and integration assumptions break silently until maintenance becomes impractical.
Agrillo voiced concern that teams are losing the ability to design APIs thoughtfully on both the frontend and backend. He added that precise technical specification, a skill he already sees declining, matters even more when building with AI.
AI generates code faster than systems can agree on standards
Modern software development no longer struggles with raw implementation speed. It struggles with coordination across systems that evolve at different rates.
Agrillo described how teams rely on emerging protocols like MCP to manage AI-to-system communication. These protocols attempt to impose structure on a rapidly shifting ecosystem of model capabilities and integrations. However, even these standards remain unstable. “We use the MCP protocol to let our AI models talk with the external world. But we know that tomorrow MCP could be superseded by another technology.”
That instability forces teams to treat integration layers as temporary and replaceable, because AI-generated services change quickly and often don’t follow stable interface contracts. Engineers assume these boundaries will shift rather than stay fixed.
In response, engineers wrap AI-generated components in abstraction layers to isolate fast-changing code from system-level contracts. The problem is a mismatch in speed: AI produces interfaces quickly, while API standards and integration rules evolve slowly. This gap makes it hard to keep services compatible over time, and it reduces confidence that systems will behave the same after changes or upgrades.
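To make that concrete, here is a minimal TypeScript sketch of such an abstraction layer. The ToolInvoker interface and McpToolAdapter class are illustrative names rather than part of any real MCP SDK; the point is that application code depends only on a narrow internal contract, so the protocol-specific adapter can be rewritten if MCP is superseded.

```typescript
// The stable, application-facing contract. The rest of the system codes
// against this interface and nothing else.
interface ToolInvoker {
  invoke(toolName: string, args: Record<string, unknown>): Promise<unknown>;
}

// Protocol-specific adapter. All protocol-flavored details (transport,
// message framing, session handling) stay behind this class.
class McpToolAdapter implements ToolInvoker {
  // The transport is injected so this sketch stays self-contained; in a real
  // system it would wrap whatever protocol client the team has chosen.
  constructor(
    private transport: (payload: {
      tool: string;
      arguments: Record<string, unknown>;
    }) => Promise<unknown>
  ) {}

  async invoke(toolName: string, args: Record<string, unknown>): Promise<unknown> {
    // Translate the stable interface into the protocol-specific shape.
    return this.transport({ tool: toolName, arguments: args });
  }
}

// Application code never mentions the protocol. Swapping protocols means
// writing a new adapter, not touching callers.
async function summarizeDocument(invoker: ToolInvoker, text: string): Promise<unknown> {
  return invoker.invoke("summarize", { text });
}
```

Replacing the protocol then means replacing one adapter rather than every caller, which is what treating the integration layer as temporary and replaceable looks like in practice.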
Developers move from structured systems to probabilistic outputs
Traditional software systems move data through defined transformations: unstructured inputs are normalized into structured formats like schemas, tables, and APIs, which enforce predictable behavior across services. AI systems disrupt that linear flow. They take structured inputs like code, data models, or documents, and generate outputs that function as interfaces: APIs, service boundaries, and data schemas. But these interfaces are not always shaped by shared contracts or consistent design rules. Instead, they depend on context and generation, which means the same request can produce slightly different structures over time.
A simple mismatch shows the problem: one AI-generated service might expose a user object as {id, name}, while another returns {userId, fullName} for the same concept. Both work in isolation, but when a downstream billing service expects userId and receives id, the integration breaks. The logic is fine. The interface shape drifted just enough to stop aligning.
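A short TypeScript sketch makes the failure mode, and the usual patch, explicit. The field names come straight from the example above; the adapter function is illustrative.

```typescript
// Two AI-generated shapes for the same concept, as in the example above.
interface UserFromServiceA {
  id: string;
  name: string;
}

// What the downstream billing service actually expects.
interface BillingUser {
  userId: string;
  fullName: string;
}

// Adapter that re-aligns the drifted shape. Without it, billing code looking
// for `userId` in a service-A payload finds nothing and the integration breaks.
function toBillingUser(user: UserFromServiceA): BillingUser {
  return { userId: user.id, fullName: user.name };
}

// Usage: normalize at the boundary so downstream logic sees a single shape.
const fromServiceA: UserFromServiceA = { id: "42", name: "Ada Lovelace" };
const billingInput: BillingUser = toBillingUser(fromServiceA);
```

The logic on either side never changes; the adapter exists only because the interface shape drifted.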
AI becomes a development partner before it becomes a code author
Agrillo expects a transition in which AI authors more of the code itself, but for now he places stronger emphasis on its role in testing, code review, and quality assurance, where it helps evaluate correctness and catch issues in implementations. That shift moves engineering effort away from writing implementation details and toward evaluating and validating AI-generated output.
Front-end development shows the most dramatic acceleration. “The frontend is a lightning strike for me: compression from one week to one hour to build a better experience. With style transfer, I just give a screenshot, and it gives me a UI,” he said.
Backend systems do not experience the same compression because they depend on consistent service behavior, shared data contracts, and integration guarantees across systems, which limits how far automation can safely extend without introducing structural drift.
Specification becomes the primary control layer in AI-driven systems
As AI takes on more implementation work, reliability shifts upstream into specification quality. Developers must define constraints, interfaces, and expected behavior more precisely because AI systems do not consistently infer intent from inexact descriptions.
“In the past, we could give a colleague a task and work in parallel, with no need to know the technical specifications yourself. You can task an AI model, but you need to tell it precisely what to do,” Agrillo said. “Writing good specifications is a fundamental skill.”
That shift replaces informal coordination with explicit system definition. In traditional teams, unclear requirements are often resolved through discussion or shared context during implementation. With AI-generated code, that ambiguity no longer resolves in conversation. It propagates directly into generated artifacts, where small differences in interpretation can produce structurally different outputs.
This introduces a second-order problem: specification drift. Even when developers believe they are describing the same system, variations in natural language prompts, documentation, or assumptions can lead to divergent implementations. Over time, this weakens consistency across services, increases integration effort, and turns specification writing into a continuous control mechanism rather than a one-time design step.
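One way to turn specification into that continuous control mechanism is to express the contract in machine-checkable form and validate every generated implementation against it at the boundary. The TypeScript sketch below is illustrative; the contract fields and checker are hypothetical, not a specific tool referenced by Agrillo.

```typescript
// The explicit contract: field names and primitive types the service must
// return. This is the specification, written once and shared across services.
const userContract = {
  userId: "string",
  fullName: "string",
  createdAt: "string",
} as const;

type ContractViolation = { field: string; problem: string };

// Validate an arbitrary payload against the contract. Drifted field names
// (e.g. `id` instead of `userId`) surface here instead of in production.
function checkContract(payload: Record<string, unknown>): ContractViolation[] {
  const violations: ContractViolation[] = [];
  for (const [field, expectedType] of Object.entries(userContract)) {
    if (!(field in payload)) {
      violations.push({ field, problem: "missing" });
    } else if (typeof payload[field] !== expectedType) {
      violations.push({
        field,
        problem: `expected ${expectedType}, got ${typeof payload[field]}`,
      });
    }
  }
  return violations;
}

// A drifted, AI-generated response fails loudly at the boundary.
const generatedResponse: Record<string, unknown> = {
  id: "42",
  fullName: "Ada Lovelace",
  createdAt: "2024-01-01",
};
console.log(checkContract(generatedResponse));
// -> [ { field: 'userId', problem: 'missing' } ]
```

Run as a contract test in CI, a check like this makes divergent implementations visible as soon as they are generated, rather than after they reach downstream services.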
AI systems scale faster than architecture discipline, creating long-term risk
Teams respond to these risks by narrowing AI’s role to bounded functions such as code review, testing, and quality assurance, where it evaluates correctness and flags inconsistencies rather than actively shaping system design. This preserves architectural control while still accelerating implementation cycles.
The shift makes specification work central. As AI generates more of the implementation layer, engineers must define requirements, interfaces, and expected behavior with greater precision to prevent drift across services. Without clear specifications, generated components diverge in structure and weaken interoperability over time.
Agrillo believes the shift is not about writing more or less code by hand, but about moving effort into defining precise specifications, interface contracts, and expected behavior. This helps AI-generated code stay consistent across services instead of drifting into incompatible islands as systems scale.
“Protocols such as MCP are changing so fast we need to build something that is simple to convert to other technologies.”