How do you keep systems modern in an age when acceleration makes most aging technologies obsolete? Companies are under a standing mandate to anticipate rapid change. From the cloud era to the AI age, the business world has struggled to keep up with one sea change after another.
“In today’s rapidly evolving business landscape, legacy applications often stand as barriers to progress,” writes NNNN at IBM. “These existing systems, characterized by outdated technology and architecture, can hinder an organization’s ability to keep up with changing business needs and pose significant security and operational risks. Staying competitive is essential in today’s fast-paced business industry—this is where legacy application modernization comes into play.”
In the cloud era, that meant replacing old on-premises server systems and their workloads with cloud-native designs. Now it means bringing AI-native design to the table, which, again, requires changing the build in fundamental ways.
At the same time, there’s been a staggering lag in both the public and private sectors. A surprising number of workloads were, until recently, still run from command-line terminals on systems coded in languages like COBOL and Fortran. Now we have virtual machines, open source languages like Rust, and AI writing Python, not to mention new tools operating in Linux. But legacy modernization remains a broad challenge.
More from Boston
I was privileged to help host the Imagination in Action MIT event in Boston April 9 and 10, and we had a jaw-dropping roster of prominent voices in AI, bringing us the latest in what’s going on, here and around the world. One segment in particular, titled “Old Systems, New Agents,” focused on that core task of replacing old business systems with AI agents that work autonomously to advance enterprise goals.
Veteran NPR reporter and editor Nina Gregory interviewed Soundar Srinavasan of Microsoft, Sunyan Lee of Workbench, and Apoorv Iyer of HCLTech about this kind of work and what it means in 2026.
Returning to the reality of that lag, Srinavasan pointed out that 90% of mission-critical applications today are running on mainframes. That doesn’t mean they’re running on 1970s-era metal-cabinet computers the size of dishwashers. These are more modern mainframes. Still, the imperative is to bring the functionality to modern hardware setups.
The good news, according to Lee, is that a lot of these existing systems are primed for agent use.
“One of the things I’ll highlight is our experience, that when working with a lot of partners, the existing IT infrastructure turns out to be often very synergistic with the AI agent model, in a sense that existing databases, existing microservices, a lot of these layers are actually foundational to creating effective agents,” he said.
Doing the Work
Panelists had some thoughts on the specific methodologies that teams should be pursuing, given the tools at their disposal.
“How do you support the change management in the workflow of the existing products?” Srinavasan asked. “How do you design new products that are based on the jobs to be done? Do you even need help? Or if you want to communicate your ideas to another agent, what does that future state of productivity look like? How do we bring the users along into a journey that is more agent made?”
He delineated some of the work that the agents do at Microsoft.
“The agents give me the ability to scale in a manner that, previously, I was achieving in scaling with my team,” he said. “I have agents to do research, and agents to do engineering. I have agents to create files to communicate my ideas. I have agents to do the reviews.”
Srinavasan also outlined how the AI helps him to do research for presentations.
“It’s not just that process of creating that artifact, but it’s also pulling the information from the places that I needed,” he said. “When those were created by me in the past, I would have to remember where those existed, and go and drag them down. Now it’s seamless, right? As long as the permissioning is there for the resources that I need, it can collect the information from all those resources in a manner that is almost negligible in terms of latency.”
It sounded like by “latency,” he was referring to the speed of human research, in comparison to which AI can go so much faster that we’d use the phrase “in the blink of an eye.”
Iyer had an insight on the philosophy of integrating AI this way, to modernize legacy systems.
“The most important aspect that we see as best practice is that now, you need to have AI that comes to you,” he said. “You don’t have to go to the AI, right? This basically means that when you are looking at your current work, AI should be a non-interventionist technology. It has to be very easy to adapt, easy to adopt, and something that increases your productivity.”
Iyer went back to the idea of PowerPoint.
“Think about building a PowerPoint,” he said. “AI can be your creative partner. AI can be your knowledge partner. It can be your design partner, all three in one, right? So that’s multiplying your capabilities, 10x, 20x, 100x, in terms of delivering any particular outcome.”
Hands-Free Client-Server Business
Later in the discussion, Gregory asked if we are now at the point that we can use natural language instead of specialized formats like SQL to query a database.
Srinavasan suggested that the technology is already there.
“You can go into Excel in what is called an agent mode,” he said, “and then ask it to gather data in whatever manner that you want, in natural language, create the charts, create the reports, create the summaries, without really worrying about, ‘Okay, do I have to create a pivot table and how do I structure it? And why is that not working?’ – not worrying about any of that.”
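The general pattern Srinavasan describes, natural language in, query results out, can be sketched in a few lines. This is a minimal illustration, not Excel’s actual agent mode: a real agent would use a language model to produce the query, so the rule-based `to_sql` function here is a labeled stand-in, and the table and request are invented for the example.

```python
import sqlite3

# Hypothetical stand-in for the LLM translation step: in a real agent,
# a language model would generate the SQL from the user's request.
def to_sql(request: str) -> str:
    if "total sales by region" in request.lower():
        return "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    raise ValueError("request not understood")

def ask(conn: sqlite3.Connection, request: str):
    """Natural language in, rows out -- the user never writes SQL."""
    return conn.execute(to_sql(request)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100.0), ("West", 50.0), ("East", 25.0)])

rows = ask(conn, "Show me total sales by region")
print(rows)  # [('East', 125.0), ('West', 50.0)]
```

The point of the pattern is that pivot tables, joins, and aggregation syntax become the agent’s problem rather than the user’s.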
In addition, he brought up a related idea that I find pretty relevant: instead of toggling between browser screens, the user stays on the same one, and the AI agent goes and gets what he or she, the human in the loop, is looking for. It’s not hard to imagine the productivity gains here.
“Maybe you were writing your report, and you just needed this information to come into your report,” Srinavasan said. “So instead of, like, leaving whatever place that you’re in, and then going into Excel and doing this, and bringing it back, now, you could just do this from wherever you are, and use the agent to say, ‘Hey, you go do all this in Excel and come back and just plug it into where I am right now.’”
Iyer, for his part, pointed out some deficiencies and remaining need for advancement.
“A lot of that tech also needs to evolve quite a bit,” he said.
Making the Old New
“The technology varies in different enterprises,” Iyer continued. “People have mainframes. In mainframes, it’s only assembly language. So from assembly language, how do you get the data out?”
Then he talked about relational databases.
“You talk about SQL, right?” he said. “The spectrum is wide. One of the things that we see as best practices is thinking about building, creating the data as an enterprise context plane, which means putting a data hub in between, and then driving a lot of capability on that.”
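Iyer’s “data hub in between” can be pictured as a thin layer that gives agents one interface over heterogeneous back ends. The sketch below is an assumption about the shape of such a hub, not any HCLTech product; the source names and fetch interface are invented for illustration.

```python
from typing import Any, Callable

class DataHub:
    """Minimal sketch of a data hub sitting between back ends
    (mainframe extracts, SQL tables, APIs) and the agents that consume them."""

    def __init__(self):
        self._sources: dict[str, Callable[[str], Any]] = {}

    def register(self, name: str, fetch: Callable[[str], Any]) -> None:
        # Each back end contributes one fetch function; agents never
        # see the underlying system, only the hub's uniform interface.
        self._sources[name] = fetch

    def query(self, source: str, key: str) -> Any:
        if source not in self._sources:
            raise KeyError(f"unknown source: {source}")
        return self._sources[source](key)

hub = DataHub()
# Dict lookups standing in for a mainframe extract and a relational table:
hub.register("mainframe", {"ACCT-1001": "legacy record"}.get)
hub.register("orders", {"O-42": {"status": "shipped"}}.get)

print(hub.query("mainframe", "ACCT-1001"))    # legacy record
print(hub.query("orders", "O-42")["status"])  # shipped
```

The design choice is that new capability (search, vectorization, access control) gets built once on the hub rather than per back end.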
As for the data, Iyer pointed out some imperatives.
“You would typically vectorize the database on a vector DB,” he said. “Going forward, you need to also get the data that you’re using. So if you do the Excel, AI should be reading on what you’re doing with your Excel, and that becomes the data for future use cases. So context setting and context graphs become very important in that context, right? So these are all important technologies that we are getting involved in, and a lot of people and enterprises are using them now to drive the data integrations.”
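“Vectorizing the database” means turning records into embedding vectors so an agent can retrieve by similarity rather than exact match. A minimal sketch of that store-and-search shape, using a toy bag-of-words embedding as a labeled stand-in for a real embedding model:

```python
import math
from collections import Counter

# Toy embedding: bag-of-words counts. A production pipeline would call
# an embedding model; this stand-in only shows the store-and-search shape.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """In-memory vector store: vectorize records, search by similarity."""

    def __init__(self):
        self._rows = []  # (text, vector) pairs

    def add(self, text: str) -> None:
        self._rows.append((text, embed(text)))

    def search(self, query: str, k: int = 1):
        qv = embed(query)
        ranked = sorted(self._rows, key=lambda r: cosine(qv, r[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("invoice overdue payment reminder")
store.add("quarterly revenue report by region")
print(store.search("overdue invoice"))  # ['invoice overdue payment reminder']
```

Iyer’s point about capturing what users do in Excel fits the same loop: each new interaction becomes another record added to the store for future retrieval.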
Lee added some thoughts on SQL.
“With the institutions that I’ve been working with, most of their databases come in some form of either SQL or noSQL databases. But I would say nine times out of ten it’s a SQL database. And this infrastructure has been around, as you mentioned, for decades. And as it turns out, the coding models, all of the modern models are trained specifically to work extremely well with the existing infrastructure. So the bigger challenge in terms of actually exposing that data to an agentic framework is less its legibility to the models, and much more things like role-based access control. How do you make sure that the agent would have the ability to query any and all data that it needs, but without it accessing sensitive data? Do you consider the ability of the agent to actually write to the database? Because then you have to be much more cautious.”
In that context, Lee called for a separate system of reference for agents, in order to wall off sensitive assets and keep the automation in a sandbox.
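One concrete way to build the wall Lee describes is a gatekeeper that every agent query passes through, enforcing read-only access and a table allowlist. This is a sketch of the idea using SQLite’s authorizer hook, not Lee’s actual system; the table names and policy are invented for the example.

```python
import sqlite3

class AgentGateway:
    """Read-only, allowlisted database access for an agent:
    writes and reads of non-allowlisted tables are vetoed."""

    def __init__(self, conn: sqlite3.Connection, allowed_tables: set[str]):
        self._conn = conn
        self._allowed = allowed_tables
        # The authorizer runs as each statement is compiled.
        conn.set_authorizer(self._authorize)

    def _authorize(self, action, arg1, arg2, db, trigger):
        if action == sqlite3.SQLITE_SELECT:
            return sqlite3.SQLITE_OK
        if action == sqlite3.SQLITE_READ:  # arg1 is the table being read
            return (sqlite3.SQLITE_OK if arg1 in self._allowed
                    else sqlite3.SQLITE_DENY)
        return sqlite3.SQLITE_DENY  # no writes, no schema changes

    def query(self, sql: str):
        return self._conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.execute("CREATE TABLE salaries (name TEXT, pay REAL)")
conn.execute("INSERT INTO orders VALUES (1, 'shipped')")

gw = AgentGateway(conn, allowed_tables={"orders"})
print(gw.query("SELECT status FROM orders"))  # [('shipped',)]
try:
    gw.query("SELECT * FROM salaries")  # sensitive table: blocked
except sqlite3.DatabaseError as e:
    print("denied:", e)
```

Lee’s caution about write access maps directly onto the last branch: granting the agent `INSERT` or `UPDATE` means relaxing that veto, which is exactly where the extra safeguards belong.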
“We’ve been very deliberate in terms of making sure we apply the right safeguards,” he said.
There’s a lot more actionable information in the rest of the discussion, including going into use cases like Shopify protocols. This was an eye-opening look at the current process of putting new wine into old wineskins.