
Bringing Legacy Tech Systems into the Age of AI

Most enterprises have already launched AI pilots or proofs of concept, yet far fewer have successfully scaled them. In the AI era, the real challenge is turning that early experimentation into reliable, production-grade systems that deliver measurable business value.

 

To put it simply: Execution is the key to success when it comes to AI.

 

However, that challenge becomes especially complex inside large enterprises. Many organizations are trying to modernize while still relying on legacy infrastructure. Some estimates suggest over 60% of companies depend on systems that are decades old, often alongside heavily regulated data environments and operational processes built through years of incremental change.

 

In those environments, AI cannot be haphazardly thrown into a workflow as just another tool. It must be integrated thoughtfully, measured carefully, and governed responsibly.

 

A recent discussion among enterprise AI leaders highlighted five themes that consistently separate successful AI initiatives from stalled experiments. Each reflects a practical reality of deploying AI in environments where systems are complex, data is imperfect, and trust matters as much as innovation.

 

 

Chicago’s tech leaders cut through the hype to explore how legacy organizations are deploying AI for real, measurable transformation, with insights from moderator Gregg Walrod and panelists Nathan Frank, Jen Anderson, Ishu Jaswal, and Mary Grygleski.

 

 

 

Start With the System You Have

 

 

Key insight: Enterprise AI success often begins by extending existing systems rather than replacing them.

 

Legacy systems are often viewed as obstacles to innovation, but in many organizations, they remain the backbone of daily operations. The real issue is not their age, but whether they continue to function reliably and whether replacing them would introduce more risk than value.

 

These systems frequently contain decades of business logic, compliance requirements, and operational knowledge. Replacing them outright is not only costly but can disrupt critical workflows.

 

As a result, many organizations are adopting a hybrid approach. Instead of rebuilding from scratch, they are layering new capabilities around existing systems. APIs expose key data, analytics platforms provide new visibility, and AI tools enhance specific workflows without touching the core.
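
As a hedged illustration of that layering pattern, the sketch below wraps a stand-in legacy system with a thin API facade so new capabilities can be added around it. All names here (`LegacyInventory`, `InventoryAPI`, `SKU-100`) are hypothetical, not systems discussed on the panel.

```python
class LegacyInventory:
    """Stand-in for a decades-old system of record; its core logic stays untouched."""

    def __init__(self):
        self._records = {"SKU-100": {"name": "bearing", "qty": 42}}

    def lookup(self, sku):
        return self._records.get(sku)


class InventoryAPI:
    """Thin facade: exposes legacy data and layers new capabilities around it."""

    def __init__(self, legacy):
        self._legacy = legacy

    def get_item(self, sku):
        record = self._legacy.lookup(sku)
        if record is None:
            return {"sku": sku, "found": False}
        # New capabilities (analytics, AI-generated summaries, etc.) would be
        # added at this layer, leaving the legacy lookup path unchanged.
        return {"sku": sku, "found": True, **record}


api = InventoryAPI(LegacyInventory())
print(api.get_item("SKU-100"))
```

The key design choice is that the facade never modifies the legacy system; it only reads from it, so the core keeps working exactly as before.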

 

This approach allows companies to modernize incrementally. Rather than destabilizing the business, they can introduce AI where it delivers clear value while maintaining the systems that already work.

 

In practice, this often leads to faster progress. Organizations can improve customer experience, enhance internal operations, and unlock insights without undertaking massive system overhauls.

 

 


“There’s nothing inherently wrong with legacy systems if you can still operate them.”

Nathan Frank, Director of Machine Learning and Operations, Grainger

 

 

 


 

 

Bad Data Gets Amplified, Not Fixed

 

 

Key insight: AI magnifies data problems unless organizations first define clear metrics and trustworthy data pipelines.

 

AI is only as good as the data behind it. That principle becomes even more critical at scale. Gartner estimates that poor data quality costs organizations an average of $12.9 million per year, highlighting the real financial impact of unreliable data.

 

Many enterprises operate with fragmented or poorly instrumented systems. Marketing pipelines misclassify customers, dashboards rely on incomplete inputs, and key decisions are often based on data that is less accurate than assumed.

 

When AI is introduced into this environment, those issues do not disappear. Instead, they are amplified. Automation accelerates whatever signals the system receives, whether they are correct or flawed.

 

This is why data observability and measurement must come first. Organizations need a clear understanding of how data flows through their systems, how it is defined, and how success will be measured before deploying AI at scale.

 

In many cases, the most important work behind an AI initiative happens before any model is built. It involves cleaning data, validating pipelines, and ensuring that the inputs driving AI systems reflect reality.
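
That pre-model work can be as simple as gating every record through explicit validation rules before it reaches an AI system. The sketch below is illustrative only; the field names and rules are assumptions, not examples from the panel.

```python
def validate(record):
    """Return a list of data-quality issues for one record (empty means clean)."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if record.get("revenue", 0) < 0:
        issues.append("negative revenue")
    return issues


records = [
    {"customer_id": "C1", "revenue": 120.0},
    {"customer_id": "", "revenue": 50.0},
    {"customer_id": "C3", "revenue": -10.0},
]

# Only clean records flow downstream; flawed ones are surfaced, not silently amplified.
clean = [r for r in records if not validate(r)]
flagged = {r.get("customer_id") or "<unknown>": validate(r)
           for r in records if validate(r)}
print(len(clean), flagged)
```

The point is observability: flawed records are flagged and reported rather than passed along for automation to amplify.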

 

 


“If you don’t know exactly what you want the AI to do, or how you’ll measure success, the system will fail.”

Jen Anderson, Founder and Principal, Aurvia Group, LLC


Production AI Is a Systems Problem

 

 

Key insight: Moving AI from experimentation to production requires engineering for reliability, compliance, and failure scenarios.

 

Production AI is fundamentally different from experimental AI. That gap is reflected in industry outcomes. Studies suggest that as many as 80–85% of AI projects fail to reach production or deliver expected value, often due to challenges around data, governance, and operational integration.

 

In experimental settings, teams focus on model accuracy and proof of concept. In production environments, those metrics are only a starting point.

 

Once AI interacts with customers or operational systems, a broader set of requirements emerges. Teams must manage reliability, latency, cost, compliance, and monitoring. They must also anticipate failure scenarios and design fallback mechanisms.
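
One minimal sketch of such a fallback mechanism is a wrapper that treats errors and low-quality output as failures and degrades gracefully. The function name, the length heuristic, and the fallback message are all assumptions for illustration.

```python
def answer_with_fallback(question, model_call,
                         fallback="Escalating to a human agent."):
    """Call a model, but fall back safely on errors or low-confidence output."""
    try:
        reply = model_call(question)
        # Treat empty or suspiciously short output as a failure, not a success.
        if not reply or len(reply) < 5:
            raise ValueError("low-confidence output")
        return {"source": "model", "reply": reply}
    except Exception:
        # Any failure mode degrades to a predictable, safe response.
        return {"source": "fallback", "reply": fallback}
```

In production the same idea usually extends to timeouts, retries, and cost limits, but the principle is identical: the system must behave predictably when the model does not.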

 

These challenges are especially pronounced in regulated industries such as healthcare and financial services, where errors can have significant consequences.

 

As a result, AI becomes less about building models and more about engineering systems. Success depends on how well those systems perform under real-world conditions, not just in controlled environments.

 

 


“It’s not just about accuracy. It’s about how the system behaves when things go wrong.”

Ishu Jaswal, Head of Data Science, Attune

 

 

 

Live from MATTER Chicago on March 5, 2026

 

 

Human Oversight Still Matters

 

 

Key insight: AI works best when it augments human decision-making rather than replacing it.

 

Despite rapid advances in AI, human oversight remains essential in most enterprise applications.

 

AI can analyze information, automate repetitive tasks, and generate insights at scale. But in many environments, it should not operate without supervision and clear escalation paths.

 

This has led many organizations to adopt human-in-the-loop architectures. In these systems, AI supports decision-making while humans retain final control.

 

Customer service is a clear example. AI can gather information, suggest responses, and streamline workflows. However, complex or sensitive issues still require human judgment.
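
A human-in-the-loop routing rule for that kind of scenario can be sketched in a few lines. The confidence threshold and sensitive-topic list below are invented for illustration, not from the panel.

```python
# Topics that always require human judgment, regardless of model confidence.
SENSITIVE = {"refund", "legal", "complaint"}


def route(ticket, confidence):
    """Route a ticket to auto-reply or human review based on confidence and topic."""
    words = set(ticket.lower().split())
    if confidence < 0.8 or SENSITIVE & words:
        return "human_review"
    return "auto_reply"


print(route("please process my refund", 0.95))  # → human_review
```

AI handles the routine volume, while humans retain final control over anything sensitive or uncertain, which is the balance the panelists described.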

 

This balance allows organizations to improve efficiency without sacrificing trust. It also reduces the risk of errors in situations where precision and accountability are critical.

 

Ultimately, the goal is not to replace people but to enable them to work more effectively alongside AI systems.

 

 


“AI isn’t magic. It’s a powerful pattern-matching system, and organizations still need human judgment around the data it uses.”

Mary Grygleski, Global VP Western Hemisphere, AI Collective

 

 

Curiosity Will Be the Most Important Skill

 

 

Key insight: As AI tools evolve, adaptability and curiosity are becoming the most valuable professional skills.

 

AI is reshaping how engineers, product teams, and business leaders approach their work. Tools now assist with coding, system design, and decision-making, changing the nature of technical roles.

 

In this environment, success depends less on memorizing syntax and more on understanding context, evaluating outputs, and refining workflows.

 

Professionals must learn how to guide AI systems, question their conclusions, and iterate quickly. The role of the technologist is shifting from building systems to orchestrating them.

 

This shift places a premium on adaptability. Teams that experiment, learn continuously, and remain open to new approaches will be best positioned to succeed.

 

 


“The people who succeed in this environment are the ones who stay curious.”

Gregg Walrod, Co-Founder, Secret Sauce

 

 

 

 

Audience Q&A:

Following the panel, we opened the floor to questions from the live audience.

What’s holding back organizations from getting to that fully agentic workflow?

Recognition that it is collaboration with AI rather than replacing humans from the whole loop. Until you have enough trust that the input-to-output relationship is above the threshold acceptable to you, you’re always going to have a human in the loop giving feedback. — Jen Anderson

You’ve all commented specifically about the non-deterministic nature of large language models. I have significant doubts that they’ll ever create reliable and novel ideas. Do you agree?

As humans, we’re never going to be 100% perfect, but we’ve turned over time to be right maybe 99% of the time. I do see a very similar set of distribution math in LLMs, and I do believe you can get there. However, the number of inputs is complex. We have personally been building smaller fine-tuned models on domain-specific areas, and they do far better than these large language models. — Jen Anderson

In legacy systems, when you have these very deep dependency chains, what are you actually using AI for? And where do you see the safety or risk in that?

Users have to think about whether they are changing the customer experience or just making tasks more efficient by speeding them up, reducing force, and increasing the amount of work they can get done faster. It’s about how they balance it and what they are trying to achieve. It’s also about having a very distinct injection point for when they hand off to a human. — Ishu Jaswal

Do you come across use cases where you see, “We can use AI for this, but what are we going to get back?” And then you have to drop them?

I think ROI comes from a two-way street. One is the ROI of implementing AI for your business or product, but another is the change you are making for your clients as well. If you’re able to show your clients how much you are saving them or what the return will look like from their side, that’s a win-win. — Ishu Jaswal

With models becoming way better, how are your engineers being upskilled to catch up with the pace? Are they ready to build production-grade systems with these models?

The people I’m seeing who are really scaling their ability to produce new software are those who are able to move from syntax and implementation toward context and control. You have to have that product mindset. What am I delivering at the end? What is success? What data drives that? What am I building? That’s how I see it evolving. — Nathan Frank

I’m newly back on the market. Is there still space for engineers who are creators of AI, not necessarily just users of AI?

Previously, engineers developed the application, and data scientists developed the model side of it. Now the roles are mixing more. People who have domain knowledge, data literacy, and AI fluency about building the tool and building the whole system around it are definitely going to be required in the market. — Ishu Jaswal
