Aside from the known risks, close to one-third of the risks associated with artificial intelligence remain essentially unknown, but they're out there. Recently, researchers at MIT CSAIL and MIT FutureTech developed a publicly available database, culled from academic papers, reports, and other documents, that sheds light on the risks AI experts are disclosing.
The database of 700-plus AI risks finds more of them attributed to AI systems (51%) than to humans (34%), and finds risks were more likely to surface after AI was deployed (65%) than during its development (10%). However, even the most thorough AI frameworks overlook approximately 30% of the risks identified in the database.
The most frequently addressed risk domains include the following:
- “AI system safety, failures, and limitations”: 76% of documents
- “Socioeconomic and environmental harms”: 73%
- “Discrimination and toxicity”: 71%
Additional risks cited in the MIT database include “privacy and security” (68%) and “malicious actors and misuse” (68%). In contrast, “human-computer interaction” (41%) and “misinformation” (44%) drew somewhat less concern.
These benchmarked risks will help develop a greater understanding of the risks versus rewards of this new force entering the business landscape. The challenge, however, is knowing exactly when the threshold at which rewards surpass risks is crossed. Industry experts and leaders say we’re not yet at that point.
“Introducing new technology is always a major challenge in any organization, and AI is pretty complex,” W. Raghupathi, professor at Fordham University’s Gabelli School of Business, told Forbes. “The scale, complexity and difficulty in implementation and deployment, the upgrades, support, etc. are technology-related issues. Further, privacy, security, trust, user and client acceptance are key challenges. Justifying the cost, and we do not have good measurement models, is a major challenge.”
Adding to this challenge is the almost blinding speed at which AI is being adopted, often before its risks, known or unforeseen, have surfaced. “We need to act fast and think even faster, answering questions about how best to showcase the value AI can add to the enterprise, and whether it’s worth the risk,” said Jay Jamison, president of product and technology at LogicGate. “There are a wide range of AI solutions that may be able to improve efficiency, but how should they be measured against factors like regulatory guidelines, security risks, and additional governance needs?”
It may even be too soon to tell whether the rewards of AI are outweighing the risks, Raghupathi states. “There is a lag between deployment of applications and their impact on the business. Specific applications like low-level automation find success, but high-level applications that support strategy have yet to translate into tangible benefits.”
It’s going to take time — perhaps years — “to assess the impact and benefits of complex applications versus simple applications automating specific routine and repetitive tasks,” Raghupathi points out. “Measuring the benefit is new and we do not have benchmarks or quantitative models.”
AI may be drawing most of the industry hype right now, but inevitably that hype will die down as newer generations of technology and approaches generate the excitement. As the hype around AI recedes, or even meets pushback, companies will take a second, and third, look at its risks versus rewards.
“As the technology becomes more powerful, AI will demand more and more energy – and costs will inevitably go up,” Jamison pointed out. “Right now, AI is both highly valuable and relatively inexpensive, allowing a wide range of organizations to enjoy its benefits. But that won’t be the case forever. Eventually, the bill will come due.”
Stronger AI governance is needed to produce a more meaningful risk assessment. “Organizations must consider how and why they plan to use AI and identify the potential risks that stem from its use,” said Jamison. “You can’t afford to turn a blind eye – it’s critical to have a plan in place that addresses how these AI solutions can be used both safely and effectively.”