Latest News

Enterprise Innovators

April 28, 2026

Grainger's MarTech Lead On How Disciplined AI Governance Prevents Expensive Shelfware

Bart Lipinski, Senior Manager of Marketing Technology at Grainger, explains why companies that skip foundational data and governance work end up paying for powerful AI tools they only use to a fraction of their potential.

Credit: The Revenue Wire

Too many companies go technology first, and then they create friction, change management issues, and end up using only 15–25% of what they bought.

Bart Lipinski

Sr. Manager, Marketing Technology

Grainger

Enterprise AI budgets grew nearly 75% last year, but utilization has not kept pace. When procurement and the boardroom run the spending postmortem, they blame the tools: wrong model, wrong vendor, wrong feature set. But what if the diagnosis is backwards?

"Most organizations are using only 15–25% of what they actually buy," Bart Lipinski, Senior Manager of Marketing Technology at Grainger, told the Read Replica. He called it a readiness problem, rooted in bad data and missing process, rather than a vendor tooling problem. "If you put garbage into AI, you're going to get garbage out," Lipinski said. "Most people can't tell a good AI output from a confident-sounding bad one, and that's the most dangerous piece, because they will run with bad data at face value."

Lipinski leads a team of business analysts and data engineers responsible for implementing, managing, and optimizing the marketing technology stack and data infrastructure across Grainger's marketing organization. He manages a $6M+ annual MarTech budget, has integrated Adobe's Real-Time CDP, led a CMS migration that generated $500K+ in annual savings, and stood up the marketing organization's process for vetting and approving new AI tools. His 25-year career spans senior marketing leadership roles at Motorola Solutions and Wolters Kluwer.

His prescription is a specific ordering: people first, then process, then technology. Most companies reverse the sequence and wonder why adoption stalls. "Too many companies go technology first, and then they create friction, change management issues," Lipinski said. At Grainger, the corrective starts before any tool gets purchased.

The governance gate

Grainger handles this through a centralized AI council that sits within IT, not marketing. Before any team across the company can purchase a new AI tool, the council evaluates whether the use case is already served by something in the existing stack. "Their job is to say, is your use case already served with something we already have?" Lipinski said. "Why would you duplicate and introduce tech debt? And why would you pay more for something you can already do?" Only after council approval do legal and procurement engage on terms and guardrails.

On the operational side, managing AI spend requires balancing access with cost control. "You can go very crazy, very quickly on tokens," Lipinski said. "You don't want people to get a pop-up every day saying their token jar needs to refill. But you also don't want a monthly bill that says you just used $100,000 worth of tokens and nobody knows who's paying for it."
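The balance Lipinski describes, visibility into token spend without nagging users daily, can be approximated with per-team budgets and soft alert thresholds rather than hard cutoffs. A minimal sketch of that idea; the class, team names, and thresholds here are hypothetical illustrations, not Grainger's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class TokenBudget:
    """Hypothetical per-team monthly token budget with soft alerts."""
    team: str
    monthly_limit: int            # tokens allotted for the month
    used: int = 0
    alerts: list = field(default_factory=list)

    def record_usage(self, tokens: int) -> None:
        self.used += tokens
        pct = self.used / self.monthly_limit
        # Surface spend early instead of blocking users with pop-ups
        # or surprising finance with an unattributed month-end bill.
        if pct >= 1.0:
            self.alerts.append(f"{self.team}: budget exhausted ({self.used:,} tokens)")
        elif pct >= 0.8:
            self.alerts.append(f"{self.team}: 80% of monthly token budget used")

budget = TokenBudget(team="marketing-analytics", monthly_limit=1_000_000)
budget.record_usage(750_000)     # 75% used: no alert yet
budget.record_usage(100_000)     # 85% used: crosses the 80% soft threshold
print(budget.alerts)
```

The design choice is the soft threshold: teams keep working past 80%, but spend is attributed to an owner and flagged before it becomes the "$100,000 and nobody knows who's paying" scenario.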

The pattern Lipinski describes echoes what engineering teams are learning at the infrastructure level: the organizations building governance into their workflows rather than bolting it on after the fact are the ones shipping production systems instead of perpetual pilots.

Agents need onboarding, too

Lipinski draws a direct parallel between AI agents and new hires. Just as employees need context, role clarity, and domain knowledge to perform, agents need the same inputs to produce outputs worth acting on. Without them, agents default to the lowest common denominator.

"Just as you'd write job descriptions when you're hiring, think of it the same way when you're building agents," he said. "Without the right context and the right skills, they're going to give you things that are the path of least resistance. That's not the intent when you're hiring a human, and it shouldn't be the intent when you're building an agent."

The corollary is one enterprise security teams are confronting in real time. When AI agents expand database exposure from a few engineers to the entire organization, the onboarding analogy stops being a metaphor; every agent that touches production data needs the same scoping, permissioning, and oversight a new hire would get.

Domain expertise as the trust layer

Lipinski pointed to an internal example where a team member built an agent within Snowflake Cortex that performed a week's worth of analysis in five minutes. The difference was that the builder was the owner of the underlying dataset. "He knows that dataset like nobody else in this company," Lipinski said. "That's who you need to teach the AI, because then you have confidence that it's taught correctly, and you have somebody who can validate those early outputs."

The principle is what practitioners across the industry are discovering: AI readiness starts with the fundamentals that most organizations have neglected, and domain expertise is what separates a system that hallucinates silently from one that gets debugged in minutes.

The ROI nobody can measure yet

Measuring AI's return remains the hardest part. Lipinski separates the question into two lanes: efficiency gains, which show up as cost savings through workflow automation, and revenue impact, which depends on whether AI-generated insights actually drive better outcomes than traditional methods.

"I told Snowflake Cortex to build me a model, and I asked it: if I'm going to invest a thousand dollars, which channels should I invest in to get the biggest ROI? Did it actually drive growth? Did it drive a better outcome than a data scientist running a typical regression model?" The answer, he said, is still evolving. "It's still more of an art than a science. I'd like it to be more science, but we all realize we're still really early."

The measurement problem mirrors what engineering leaders are hitting from the other side: organizations are struggling to figure out how to evaluate developer productivity with AI tools, and the ones tracking token consumption or pull request volume are asking the wrong questions entirely.

Foundations before features

For Lipinski, the long-term winners will be the companies that resist the pressure to adopt AI at the speed of headlines and instead invest in the same fundamentals that have always determined whether technology creates value or waste. "AI is only going to take the job of the person who can't demonstrate they add value with AI in their toolkit," he said. "If your foundation isn't there and you don't take your time to make sure it's solid, putting AI or any technology on top of it can lead to situations where you're going to regret it."

Lipinski sees AI as the latest instance of a recurring pattern: a new technology arrives, organizations spend ahead of their readiness to use it, and the gap between what was bought and what gets used widens. The companies that capture this wave of investment will be the ones that made sure the people, process, and data were ready before the tools arrived.