Based on IIA April panel (Tobin South, Hope Schroeder, Kevin Dunnell, Nikhil Singh, Kushagra Tiwary, Laura Owens, Shane Redford Longpre)
What do you encounter when you look at major problems with AI implementation?
At our talks and conferences, we’re often focused, at least in part, on the ‘side effects’ of rampant AI – the unintended outcomes, and the ways these powerful new tools might go off the rails. We know a lot about the benefits, and about how much this technology can boost productivity and insight. But we also know that it comes with definite challenges, and very definite risks.
When you hear our experts talking about the downsides of the AI revolution, for example in recent panels, I think you’re going to hear three major themes:
The first is cybersecurity. You have all of these bad actors trying to manipulate AI for their own ends. I’ve heard about this at multiple conferences, where people are trying to figure out how to stop AI systems from being hacked or exploited for black-hat activity. There are ways to reduce this type of threat, but eliminating it entirely seems like a tall order.
Our people are working on this challenge, but as the capabilities expand, so do the vulnerabilities.
Another major problem is something you could call “bad content” or, more broadly, “AI malaise.”
What I’m talking about is the phenomenon where the quality of results goes down when AI takes over from humans in a comprehensive way. You can generate a lot with AI, but you still need the original training data that came from humans, and you need it continually, not just one time.
In other words, there is still a bit of an uncanny valley separating human output from AI output, and people can tell, especially over time and recursion. In the end, many of our people would argue, what you end up with is regurgitated slop trained on other AI output, with less and less of a human center.
To put it another way, if you take out the human sourcing, we often see the quality of AI results decrease.
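This recursive-degradation idea is easy to see in a toy simulation. The sketch below is not from the panel; it is a minimal illustration, assuming nothing beyond NumPy, in which the “model” is just a Gaussian fit (a mean and a spread), and each generation is trained only on samples produced by the previous generation’s model, with no fresh human data mixed back in.

```python
import numpy as np

# Toy sketch of the "AI trained on AI output" problem described above.
# The "model" here is a Gaussian fit (mean and spread). Each new generation
# is fitted only to samples drawn from the previous generation's model.

rng = np.random.default_rng(42)

human_data = rng.normal(loc=0.0, scale=1.0, size=200)  # original, human-sourced data
data = human_data

print(f"generation  0: spread = {data.std():.3f}")
for generation in range(1, 51):
    mu, sigma = data.mean(), data.std()       # "train" on whatever data we have now
    data = rng.normal(mu, sigma, size=200)    # next generation sees only model output
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread = {data.std():.3f}")

# With no human data re-injected, the spread of the distribution tends to
# drift downward across generations: a toy version of the quality and
# diversity loss the panelists describe.
```

Run repeatedly with different seeds and the same pattern tends to appear: the distribution narrows and detail is lost once human sourcing is removed from the loop, which is the intuition behind the “AI malaise” concern.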
Quotes from a recent IIA Panel on Frontiers of AI (Tobin South, Hope Schroeder, Kevin Dunnell, Nikhil Singh, Kushagra Tiwary, Laura Owens, Shane Redford Longpre)
“Before AlphaGo beat everyone at Go … it was actually just trained on learning. And our current systems are (based on) only the imitation learning and the learning component of it, but to truly create the superhuman AI, we need to go over and do what AlphaGo did, start to search, now AI can find its own answers.” – Kushagra Tiwary
“I think that there is some interesting work happening … in the open source area. So Community Notes on X, I think, is a great example of this, where we’re not relying on sort of a centralized entity to say whether this is good or bad, but we’re looking to the community to get their input. AI isn’t the only bad actor; there can also be human bad actors. And we still rely on our community to understand who is (a) good actor, and who’s a bad actor.” – Kevin Dunnell
“Are people going to lose interest in content that’s AI generated as like, we just have this massive proliferation of AI generated nonsense on our platforms? … you all know someone in your life who has a Facebook feed that says ‘four cute dogs,’ right? You don’t really know where these dogs are coming from. You just keep scrolling, cause they’re cute. I follow a page that shows AI-generated images on Facebook, and Reddit and Twitter, and there are Facebook pages of just AI-generated dogs, just cute dogs. … a whole feed of them, infinite, cute dogs. People love it. It gets tons of likes, because really, you’re there to see something cute. You don’t really care who created them. And, you know, I think there will be parts of ecosystems of social media that will be full of nonsense content, and people will continue to love it.” – Tobin South
“I think this is like a massive problem in the media space, (and) as I said earlier, in the platform space, and I don’t have any short hot takes about how that will be solved. But I personally believe that human creativity and agency will be at the center, and will come out on top. But we have to make sure, as consumers and as platform designers and as technologists, that we always keep in mind what is uniquely human and not replaceable by these technologies, despite their ability to assist us and boost us in some important ways.” – Hope Schroeder
“I think people are already starting to be exhausted by the esthetics of AI-generated images and are going to start pushing alternative esthetics. They’re going to start playing with them to make them more their own, and find … new elements. So I think human creativity is remarkably resilient, and we’ll find ways around it, but that includes getting sick of things.” – Nikhil Singh
“I think, for proprietary models that are supposed to make decisions on safety, you don’t want full disclosure on how they’re making the decision, besides the verification that they’ve done it in the right way, on the back end, because then bad actors could evade these rules (in) the way that they’re making decisions.”
The quotes above, from a recent panel of PhDs and MIT researchers, give an anecdotal sense of how this plays out.
A third challenge has to do with corporations’ efforts to wrangle new technology into implementations that suit their own ends.
This is where we have visionary people like will.i.am and others asserting that you have to own your own data – that you should not be letting corporate behemoths possess and control the data that drives these AI models.
We know that intuitively, but how do you get to the legislative and regulatory vehicles? Where is the U.S. version of the GDPR?
Those are three big challenges, along with some open questions about how we move forward in a stable and sustainable way with artificial intelligence as our copilot.
Again, you can read these quotes to see what the experts were saying at the April event, and keep an eye out for more reporting on how we build consensus around the way forward.