Jeff Saviano, Sasha Luccioni, Jag Gill, Ra’ad Siraj
A comprehensive approach to AI would not be complete without attention to governance.
Even in the earlier big data era, data governance was a profound focus, one that persists today, driven especially by privacy concerns. Along with the principle of data ownership, data governance looms large. AI governance will, too.
At an IIA panel in April, participants looked at some of the guiding principles around governance for new technologies.
“Brakes in cars allow cars to go faster,” said Ra’ad Siraj, describing the challenges of promoting good governance and arguing for a principle-based approach over what he called a “checklist-based approach.”
“A principle-based approach is much more agile,” he said. “It really allows one to identify the new risks and decide which controls to be mitigated on a weekly basis.”
He also spoke to the impact of a new generation.
“We’re seeing that increasingly with younger consumers, younger demographics, there’s corporate governance, because companies are now looking at this new set of consumers and realizing that the mission and values around ethical practices is really important, and so AI is really important and scientific,” he said, commenting on challenges like fragmented supply chains that come into play in today’s market.
Luccioni spoke about different ways to interpret metrics and the importance of collaboration in research, invoking the Jevons paradox, an observation about efficiency and consumption that applies to new technologies.
“It’s been observed kind of time and time again,” she said. “As a new technology makes a certain task more efficient, depending on the task. So for example, when we switched from horses to cars, people tended to travel more, because you could go further: instead of going 30 miles away on the weekend, you could go 300 miles away … So you would travel more. And so any kind of efficiency gains were lost, because people used it more. And I think that with AI, we’re seeing that. We’re optimizing these metrics, whether it’s performance or efficiency, and what we’re seeing is actually these rebound effects happening, and that’s when regulation and governance have to step in to make sure that the ripple effects don’t actually neutralize the innovation.”
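The rebound effect Luccioni describes can be illustrated with a back-of-the-envelope calculation. The numbers below are hypothetical, chosen only to show the mechanism: a technology that halves the resource cost of each task can still increase total consumption if usage grows faster than efficiency improves.

```python
# Hypothetical sketch of the Jevons paradox / rebound effect.
# All figures are illustrative, not drawn from the panel.

def total_consumption(cost_per_task: float, tasks: int) -> float:
    """Total resource use is per-task cost times the number of tasks performed."""
    return cost_per_task * tasks

# Before the efficiency gain: 100 tasks at 1.0 resource units each.
before = total_consumption(cost_per_task=1.0, tasks=100)  # 100.0 units

# After: each task is twice as efficient, but usage triples
# because the task is now cheaper and more attractive.
after = total_consumption(cost_per_task=0.5, tasks=300)   # 150.0 units

print(before, after)  # efficiency doubled, yet total consumption rose 50%
```

The per-task gain is real, but the aggregate metric moves the other way, which is the panelists' point about why governance has to watch system-level effects rather than per-unit efficiency alone.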
In discussing the value of bringing people along, panelists underscored the need for a unified approach.
“AI governance does not live in a silo,” Siraj said. “Ethics does not live in a silo. Everyone has a responsibility to do things that are ethical. So culture matters, as members of the human race, as members of the industry that you’re operating in, as members of the culture that you’re responsible (for).”
Sustainability and resilience were also mentioned as governance goals.
“If you have goods detained in the Port of New Jersey, that’s a revenue risk to your business, that’s a branding and PR risk, and obviously that’s a responsible sourcing and ethical risk,” Gill said. “Absolutely, critically important, I was also just struck by an adjacent thought, thinking about the data around individuals. In Europe, for example, there’s increasing regulation around data around products. So in France, they are mandating that every physical product that is going to be sold from 2026, I think, has a digital product passport. So think about the billions of data points that need to be acquired, automated across millions of SKUs to be able to provide visibility into how products are made. So AI is front and center (in terms of) responsible business, and the responsible tools and technologies.”
Supply chains, ESG documents, legal tech: all of it factored into this conversation on the value of ensuring oversight for AI products and services. I think this is a topic we’ll come back to again and again as we move forward.