In the age of agentic AI, where humanity seems to be barreling toward the Singularity and the advent of robotic brethren in our lives, how do you think about “AI for good”?
It’s a common phrase, but what does it mean? Because, as the musical “Wicked” suggests, “good” – or anything else – can’t just be a word; it has to mean something.
Well, look at the phrase itself and you’ll find it being used by many different parties to say a variety of things.
Initiatives Worldwide
One of the first things you notice when googling “AI for good” is that the phrase serves as the tagline for a prominent United Nations campaign, organized by its International Telecommunication Union (ITU), to foster good outcomes. The project includes events (none other than Ray Kurzweil is scheduled to appear at this year’s summit), aggregated insights, and collaborations aimed at advancing our understanding of these technologies and harnessing them, well, for good.
You may also see authors and groups using the phrase “AI for good” to describe their own concepts. For example, Brad Smith, Vice Chair & President of Microsoft, has this to say in explaining the genesis of the company’s “AI for Good Lab,” dedicated to studying these issues:
“Now is the time for urgent action. Those of us that can do more, should do more. But the challenges we face are complex, and no one company, sector, or country can solve them alone.”
Posing the Question
For more, consider the words of James Landay, co-founder and co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), speaking at Davos. As he urged the audience to think about what “AI for good” means to them, he chronicled some of the progress made to date. Landay led with this:
“AI often has side effects on people beyond the direct user,” he said. “(Maybe) it’s an AI system that’s deciding whether I’m going to get a life-saving treatment versus another patient, or an AI system that’s deciding that ‘I’m going to put fire resources in this neighborhood versus another neighborhood,’ or whether it’s an AI system that is using trading data that’s labeled by people in another country, who are not even going to use that system … and what I think we need to do is, we need to move beyond just thinking about the user. We’ve got to think about broader communities who are impacted by AI systems if we actually want them to be good.”
Societal Impact
Landay went on to discuss issues like the self-image of human users, algorithmic bias and disparities in treatment and outcomes, and the pursuit of converging social truths.
“If we build AI systems that become successful and ubiquitous, then we’re going to have to think about, ‘how do they affect society?’” he said. “Another way to look at (this from a) societal level is to think about, ‘what are the embedded values in these large models from a cultural perspective,’ or even … ‘what does it even mean, that could be situated very differently in different cultures and societies?’ And so we can start to use techniques to actually analyze questions like that, and question the underlying architecture, even of the models and the data in the models.”
The Need for Sovereign AI in a Global Environment
In some ways, Landay suggested, the struggle for the empowerment and dignity of every user has a corollary in how nations position themselves relative to AI on the global stage.
“You need to think about sovereign AI as, ‘what are the goals of a country (in terms of) controlling their own AI,’ which is kind of a general definition of ‘sovereign’ in this case,” he said. “So there are different goals. It could be a national security goal. We want to make sure that, you know, somebody can’t cut off our AI and hurt our country from a security perspective. It could be from an economic perspective. We want to make sure that we’re actually getting value out of AI that’s coming to our country, or stays in our country.”
In assessing this, he broke things down into aspects like control of infrastructure, control of data, and control of narratives, given AI’s potential impact on media.
“Right now, different countries are talking about different levels of this, and different goals,” Landay said. “They don’t all talk about one definition of the same thing. So what’s really important is to figure out what is the type of ‘sovereign’ we are talking about, and then how are we implementing it? Finally, when you think about sovereign AI, we might want to separate those goals I talked about at the beginning, protecting our national security or representing our cultural language, from those different mechanisms that people are using to protect (systems).”
To some, a lot of this might sound like spycraft.
“Sovereign AI sounds modern and cool, like a James Bond international spy team guarding a super-secret underground data center,” writes Alan Zeichick at Oracle, in a wide-ranging article that breaks down strategies, factors and approaches to AI sovereignty. “However, unlike a Bond movie, sovereign AI is real and practical, and it affects more than national security. Solid sovereign AI governance policies and technical diligence can help protect corporate assets, safeguard customer privacy, and harden civic computing infrastructure against malicious actors.”
In reality, it’s more of a logistical problem. We may not see “hot wars” between nations over AI control, or campaigns to vandalize national networks, but countries have to be prepared. Ideally, nations would work together through institutions like the UN, not least because we don’t yet know whether we’ll face threats from hostile AI in the future. Frankly, we just don’t know much about how any of this will play out. But if we can promote “AI for good,” we’re on the right track. Stay tuned.