With great power comes great responsibility, and that is certainly the case now that the power of AI is being unleashed. Is it amplifying bias? Is it delivering erroneous information? Does it violate intellectual property or copyrights? Is it opening the door to even greater malfeasance than we’ve seen to date in the digital era?
Is everyone ready for all this?
Kind of. We can’t be doomsayers when it comes to AI issues, but we also need to be proactive about keeping AI responsible. Just over half of organizations (58%) have some grasp of the risks their AI efforts involve, a recent PwC survey of 1,001 executives found. And while there is interest in delivering responsible AI, only 11% of executives say they have fully moved forward with responsible AI initiatives.
Thinkers and doers across the business landscape agree that we are entering an age of great danger – and great opportunity. “Ultimately, we all want our technology to be the safest and most sophisticated in the world,” said Arun Gupta, CEO of NobleReach Foundation. “The question is not whether this technology should be regulated, but how we ensure we have the talent and innovation infrastructure – both in government and the private sector – to unlock AI’s benefits while mitigating its dangers.”
In many cases, AI itself can help mitigate some of these dangers, Gupta added. “We must build an infrastructure that supports responsible AI optimism.”
An AI-optimism approach means “investing in initiatives that focus on trusted and secure AI,” Gupta said. “We must maintain an open dialogue between industry, academia and government as risks evolve. We need to bring the brightest minds and best research to solve problems and maximize AI’s positive societal impact.”
A responsible-optimism approach also encourages human oversight at all stages. There is a “lack of transparency and guardrails in the datasets used to train AI models and the potential bias and discrimination that may result from it,” said Thomas Phelps, CIO of Laserfiche and a member of the SIM Research Institute Advisory Board.
“If AI is employed without human oversight, the wrong decision or recommendation could be made in critical areas such as law enforcement, court systems, credit and lending, insurance coverage, healthcare or even employment matters,” Phelps added.
Another risk is the specter of AI-based manipulation, something developers and proponents have yet to fully get their arms around. For example, the answers that conversational AI systems provide can influence how people think, warned David Shrier, professor at Imperial College Business School and author of Welcome to AI.
“A very small number of people, privately employed, decide on what kind of answers these companies provide you,” Shrier continued. “What’s worse, since many of these systems are self-learning, they are susceptible to manipulation. If you contaminate the data that goes into these AIs, you can corrupt them.”
It’s important, then, “to protect the rights of individuals, and the intellectual property of people who shape ideas,” said Shrier. “The average consumer or worker doesn’t realize how much they’ve been giving away to certain large tech platforms. We have to do this in a way that doesn’t damage economic productivity and competitiveness.”
More broadly, Shrier added, “as we hand over decisions to artificial intelligences, like who gets a loan, or whether or not a car will brake when a person steps in front of it, how do we know that the algorithm is giving us the correct answer?”
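Shrier’s point about contaminated training data is straightforward to demonstrate. The toy Python sketch below is an illustration only, not a depiction of any production system; the synthetic dataset, the logistic-regression model and the accuracy_with_poisoning helper are all assumptions made for the example. It flips a growing fraction of training labels, the “contamination” Shrier describes, and measures how a simple classifier’s test accuracy degrades.

    # Toy illustration of data poisoning: flipping a fraction of
    # training labels corrupts the resulting model. Hypothetical
    # example only; not drawn from any system quoted in this article.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def accuracy_with_poisoning(flip_fraction):
        """Train on data where flip_fraction of the labels are flipped."""
        rng = np.random.default_rng(0)
        y_poisoned = y_train.copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # flip labels 0 <-> 1
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return model.score(X_test, y_test)  # accuracy on clean test data

    for frac in (0.0, 0.1, 0.3, 0.45):
        print(f"{frac:.0%} of labels poisoned -> test accuracy {accuracy_with_poisoning(frac):.3f}")

Even modest contamination visibly degrades the model, which is the crux of the concern: an attacker who can influence a self-learning system’s inputs never needs access to the system itself.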
Significantly, people are clamoring for – not fearing – AI. But they’re also willing to accept restraints in exchange for its responsible use.
“We want to have these amazing technologies in our lives, much as we wanted the convenience of having cars to get around,” said Shrier. “We eventually learned to live with brake lights and windshield wipers and seat belts and airbags, all of which made our cars safer. We need the equivalent for AI.”
As new technologies emerge, the industry figures out ways to make them more secure and compliant. “Much as they did with data privacy controls and with data portability,” Shrier said. “You used to not be able to easily move your banking data or your phone number from one company to another. Yet, when privacy regulations came along, technology companies, with their deep and broad base of innovation and resources, were able to figure out how to comply.”
“It’s always a matter of striking a balance with risk and our risk appetite for AI making the wrong decisions or adversely impacting human lives,” said Phelps. “We should assume that AI will soon be embedded in everything we do.”