
DataDevOps Is Inevitable. The Question Is When.

The moment your data needs to support live operations instead of just retrospective analysis, a lot of assumptions stop holding. EASL co-founder John Derham describes what starts breaking down in data environments at that inflection point, why AI makes those weaknesses impossible to ignore, and how the organizations handling it best are fundamentally rethinking who's responsible for data infrastructure.

John Derham · 2/23/2026 · 7 min read

There’s an unspoken assumption in a lot of organizations that the data infrastructure will hold together long enough to get through the next initiative, and that anything deeper can wait until there’s time to address it properly. That assumption pushes risk forward and relies on timing to work out in everyone’s favor.

I get it. Nobody wants to rebuild the engine while the car is moving. But at some point, you have to acknowledge that the engine is making some very concerning noises, and ignoring them doesn't make them go away.

Your data feeds live systems: automated decisions, customer-facing applications, risk models, and machine learning pipelines that never stop running. It arrives from internal platforms, external partners, acquired systems, and third-party sources that evolve on their own schedules, without regard for your roadmap.

When your infrastructure assumes stability, each change introduces friction that has to be resolved manually. Over time, that friction stops feeling like a problem and starts feeling like normal operating conditions.

Why AI makes this worse

AI doesn't create these problems, but it won't work around them either.

Machine learning systems need data that's current, consistent, and traceable. When you try to build AI on top of infrastructure that barely handles its current workload, things fall apart fast. Models behave unpredictably, nobody can explain why outputs are wrong, and operational teams stop trusting the outcomes.

The usual response is to slow down or pause the AI initiative. Leadership concludes that the technology isn't mature enough, when the root of the problem is that the data environment can't support it. You can debate whether AI is ready for your organization, but if your infrastructure can't handle variability and change, you're not even in the conversation.

What DataDevOps actually means

DataDevOps isn't a platform you buy or a vendor you hire. It's a different operating model for how you handle data infrastructure.

Data doesn’t behave predictably for very long. When systems can’t absorb change on their own, people compensate, and visibility gives way to guesswork.

Teams that operate this way don’t make complexity disappear. What changes is how that complexity shows up day to day. Instead of relying on a small number of people to smooth things over when something breaks, they put structure around how data flows and how changes are handled, which reduces surprise and limits blast radius.
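What "structure around how data flows" can look like in practice is a contract enforced at a pipeline boundary: records that don't match the agreed shape are quarantined with an explanation instead of flowing downstream. The sketch below is a minimal, hypothetical illustration of that idea; the schema and field names are invented for the example, not taken from any real feed.

```python
# Minimal sketch of a data contract at a pipeline boundary.
# EXPECTED_SCHEMA and the field names are illustrative assumptions.
EXPECTED_SCHEMA = {"order_id": str, "amount": float, "currency": str}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations for one record."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(record[field]).__name__}")
    return errors

def ingest(records: list[dict]):
    """Split a batch into accepted and quarantined records.

    Bad records are set aside with their violations rather than
    propagated, which is what limits the blast radius when an
    upstream source changes shape."""
    accepted, quarantined = [], []
    for record in records:
        errors = validate(record)
        if errors:
            quarantined.append((record, errors))
        else:
            accepted.append(record)
    return accepted, quarantined
```

The point isn't this particular check; it's that the failure mode becomes visible and contained by design, instead of being smoothed over by whoever notices first.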

Over time, this alters how planning happens. Work stops being gated by fear of touching the wrong thing, and the organization regains some confidence in its ability to adapt without destabilizing itself.

The timing question

DataDevOps is coming for your organization whether you want it or not. The forces driving it—AI adoption, operational data requirements, regulatory pressure, systems that won't stop evolving—aren't going away.

Some organizations will adopt it deliberately while they still have options. Others will be forced into it by failing initiatives or competitive pressure. Both groups end up in the same place eventually. One route is just more expensive and more chaotic.

Once the shift happens, data stops being the bottleneck it used to be. You can choose when that happens, or you can wait until you don't have a choice, but you don't get to avoid the decision.

We’ve laid out how this operating model works in practice in our DataDevOps whitepaper. It’s a deeper look at how teams move from fragile, person-dependent data environments to systems that can absorb change without slowing everything else down.

John is an architect of innovative technology stacks. His products layer integrated AI over proprietary contextualizing software, with measurement applications across media, financial services, e-commerce, and other B2C and B2B industries. He is a pioneer in building and leading diverse data analytics teams and strategies, with a rare ability to communicate between technology and executive layers to advance innovative strategies and solve real-world problems. Derham has noted successes in marketing, product, risk management, and other operational disciplines. He holds a Bachelor of Science from the Villanova School of Business with a concentration in financial analytics.
