The Return to Database Fundamentals Driving Enterprise AI Readiness
Fawad Khan, DB Admin at Health Care Service Corporation, on why mastering advanced SQL skills for query optimization is often more effective for enterprise AI readiness than shiny tooling.

When teams focus on database fundamentals first, performance typically follows.
While enterprises are eager to deploy AI at scale, many teams are tripping over their own fragmented or poorly optimized data. Before deploying advanced models, let alone generating value from them, IT departments find they must deal with the unglamorous reality of database management: ensuring queries run fast, schemas are consistent, and access is controlled. Reports show that enterprise AI initiatives often stall when underlying systems cannot efficiently process the information, or when an organization's data readiness isn't aligned with what modern applications expect. Despite the hype around generative AI in business and the push for AI-driven data practices, success typically depends on making databases fast and accessible. That reality aligns with emerging best practices for AI-oriented data architectures, which prioritize clean plumbing as the foundation for AI workloads at scale.
Fawad Khan is a Database Administrator specializing in SQL Server and Azure SQL at Health Care Service Corporation, where he manages the upstream environments that power analytics and emerging AI initiatives. HCSC has nearly 35,000 employees serving more than 27 million people nationwide. Khan's work centers on infrastructure, access controls, and schema design: ensuring databases perform efficiently and helping teams determine whether new platforms are truly necessary or whether existing systems and better SQL suffice.
Shopping your stack internally
"I’ve seen teams using a new tool but not utilizing it to its full capacity, while others adopt tools they don’t really need," he shared recently in an interview with the Read Replica. Khan recommended evaluating current platforms and core capabilities first, optimizing in-house resources, and adding additional software only when necessary. This approach reduces unnecessary complexity, lowers costs, and ensures that teams focus on solving actual data bottlenecks rather than chasing the latest tech trends or 'bolting on' the next form of tech debt.
That practical mindset shapes how his group manages HCSC's massive mix of databases, including Oracle, Teradata, DB2, Postgres, and SQL Server. To keep that environment reliable for both operational systems and new workloads, Khan’s team enforces company-wide design baselines. Those guidelines, combined with careful index design and regular review of query processing and execution plans, slash CPU and storage overhead so heavier analytics applications have room to run. In many cases, executing this level of query optimization relies more on practitioner skills than on new software. It requires understanding how indexes work under the hood and structuring joins in line with mid-to-advanced SQL techniques.
When query skill beats tooling
Khan stressed that query optimization relies more on skill than on software. "You don’t always need a special tool for query optimization, and mid-level to advanced SQL knowledge often suffices," he said. While AI or 'helper' tools can assist, Khan cautioned against over-reliance on them: mastery of SQL fundamentals such as joins, indexing, and execution planning remains the key to efficiently accessing and manipulating large datasets.
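To make the indexing-and-execution-plan point concrete, here is a minimal sketch using SQLite (chosen so the example is self-contained; SQL Server exposes the same concept through its own execution plans). The `claims` table and `member_id` column are hypothetical illustrations, not drawn from HCSC's actual schema. Adding an index turns a full table scan into an index search, which is exactly the kind of change a DBA spots by reviewing query plans.

```python
import sqlite3

# Hypothetical reporting table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE claims (id INTEGER PRIMARY KEY, member_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO claims (member_id, amount) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(10_000)],
)

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows carry a human-readable "detail" column last.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM claims WHERE member_id = 42"
before = plan(query)  # without an index, SQLite reports a full scan
conn.execute("CREATE INDEX idx_claims_member ON claims (member_id)")
after = plan(query)   # the plan now references idx_claims_member

print(before)
print(after)
```

Reviewing the two printed plans side by side is the mid-to-advanced SQL habit Khan describes: the data and query are unchanged, yet the access path, and with it CPU and I/O cost, is transformed by a single index.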
Optimizing SQL is only part of the story; how data is partitioned and structured plays an equally critical role. In one instance, Khan worked on a reporting database with millions of rows that had become sluggish for analysts. "I was working on a project with millions of rows of data that needed to be utilized for reporting purposes, and querying the data was taking forever," he recalled.
By partitioning the database into chunks, he reduced query times significantly, enabling faster analytics and more efficient resource usage. Similar partitioning strategies are becoming standard practice in large warehouses, helping teams manage multi‑terabyte datasets without overwhelming production systems. The approach also pairs well with patterns like read replicas, where partitioned data can be distributed across replicas to offload analytical workloads, ensuring both responsiveness and reliability at scale.
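The partition-into-chunks idea can be sketched in a few lines, again with SQLite for portability. This is a hedged illustration, not Khan's implementation: rows are routed into per-year tables so that a yearly report scans only its own partition instead of the whole dataset. The `reports_*` tables and column names are assumptions made for the example.

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")

def partition_for(d: date) -> str:
    # Range partitioning by year: each year's rows live in their own table.
    name = f"reports_{d.year}"
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {name} (report_date TEXT, value REAL)"
    )
    return name

def insert(d: date, value: float) -> None:
    conn.execute(
        f"INSERT INTO {partition_for(d)} VALUES (?, ?)", (d.isoformat(), value)
    )

# Two years of monthly data, routed to separate partitions.
for month in range(1, 13):
    insert(date(2023, month, 1), month * 1.0)
    insert(date(2024, month, 1), month * 2.0)

# A 2024 report touches only the 2024 partition; 2023 rows are never scanned.
total_2024 = conn.execute("SELECT SUM(value) FROM reports_2024").fetchone()[0]
print(total_2024)  # 156.0
```

Production systems typically get this behavior from built-in features (e.g. partitioned tables with partition elimination) rather than hand-rolled routing, but the principle is the same: queries read only the chunk they need.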
Measure twice, code once
Khan sees the impact of database partitioning and optimization as heavily reliant on the deep domain knowledge the team possesses. In his day-to-day work, the goal is durability: "I should only have to do it once, and I never have to look back again," he said. By understanding the data thoroughly, teams can design partitions and workflows that scale efficiently, reducing the need for constant tuning or reactive fixes.
Khan’s work is deeply programmatic, from building SQL packages to managing access and enforcing guidelines across new databases. He maintains code samples and experiments on his GitHub profile, reflecting a focus on repeatable patterns over one-off fixes. Consistent baselines at creation time can often do more for AI readiness than reactive, after-the-fact tuning. But even well-structured systems break down when requirements are incomplete or unclear. Many storage, clustering, and access issues trace back to gaps in what was defined at the start.
Plan before provision
Khan’s experience has shown him, time and time again, that effective database design starts with understanding the full scope of data needs and access patterns. "It comes down to the fact that when you’re getting requirements, they’re not fully clear," Khan said. Incomplete or ambiguous requirements can lead to misaligned storage, clustering, and access setups that complicate downstream analytics.
Deployment issues often stem from missing or overlooked requirements. "Even when we plan everything according to the stated requirements, deployment frequently exposes missing information we didn’t anticipate," Khan said. Getting DB Admins involved early ensures that systems are correctly provisioned from the start, reducing costly retrofits later.
AI won’t save an organization if its data can’t keep up. Khan’s experience shows firsthand that clear requirements, disciplined partitioning, and repeatable database patterns are the real levers for performance at scale. Teams that focus on these fundamentals first, before chasing new tools, open the door to faster queries, reliable analytics, and a foundation that enables AI workloads to run safely and efficiently across production environments.
As Khan put it simply, "When teams focus on database fundamentals first, performance typically follows."






