Una Verhoeven is VP of Global Technology at Valtech.

The buzz around GenAI that reached fever pitch in 2023 has yet to wear off, but companies are increasingly shifting to the next generation of AI-driven tools: agents. Though agent adoption is occurring across all industries, the shift is particularly pronounced in some spaces; this month, Gartner released a projection that by 2029 some 80% of customer service issues will be resolved by AI agents. Deloitte estimates that “a quarter of employers will try out agentic AI this year,” and that number will grow to 50% by 2027.

However, amid the excitement, there must also be an acknowledgment of risk, at least among circumspect, forward-thinking executives. Even as AI agents are poised to be the “next big thing,” experts acknowledge the potential for loss of control, or for situations in which agents make incorrect and irreversible decisions. Because they operate with a far greater degree of autonomy than their GenAI predecessors, agents also introduce security vulnerabilities that bad actors are ready to exploit.

For these reasons, any organization using these tools must simultaneously adopt a responsible agentic AI framework, one that includes transparency, ethical guidelines and robust oversight.

Keeping Security Top Of Mind

Whenever human oversight is removed, concern about security vulnerabilities deepens. From technical issues such as malfunctions and errors to a heightened threat of cyberattacks (whose consequences may be all the more severe given agents’ autonomous decision-making capabilities), the potential risks posed by agentic tools must be given careful consideration. The problems aren’t confined to the technology itself, either; adversarial attacks, data poisoning and model vulnerabilities can all be exploited to manipulate outcomes.

Additionally, ensuring data privacy and compliance is even more of a challenge with an autonomous tool. Regulations such as GDPR and CCPA are only becoming more stringent, and adherence is crucial when deploying AI systems that process sensitive data such as health or banking records.
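By way of illustration, here is a minimal sketch of a data-minimization step that masks sensitive fields before a record ever reaches an agent. The field names and masking approach are assumptions for this example, not a prescribed implementation:

```python
# Minimal sketch (illustrative field names): mask regulated fields before
# a customer record is handed to an agent, supporting data-minimization
# obligations under regimes such as GDPR and CCPA.
SENSITIVE_FIELDS = {"ssn", "account_number", "diagnosis"}

def minimize_for_agent(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {
        key: "[REDACTED]" if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

customer = {"name": "J. Doe", "ssn": "123-45-6789", "issue": "billing dispute"}
print(minimize_for_agent(customer))
# {'name': 'J. Doe', 'ssn': '[REDACTED]', 'issue': 'billing dispute'}
```

Masking at this boundary keeps regulated data out of the agent’s context entirely while leaving the source record intact.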

Organizations should not immediately conclude that the risks outweigh the rewards. But they must think carefully about how to mitigate these security complications.

Implementing Robust Protections

There are a few security best practices that organizations leveraging AI agents must implement, irrespective of industry:

1. Implement proactive security protocols. Strong, proactive processes such as adversarial testing, robust authentication measures, access controls and encryption all help ensure that AI systems are resilient against attacks (see the access-control sketch after this list). Getting ahead of problems before they occur is far better than trying to clean up a mess after the fact.

2. Maintain awareness and accountability. Companies should conduct regular model audits and offer explainability tools to help maintain accountability (the audit-logging sketch after this list shows one simple form this can take). Clear policies for AI governance and response protocols must already be in place in case of AI-driven errors or breaches. In addition, education and awareness campaigns must be prioritized within an organization.

3. Keep humans in charge. Strong transparency and human-in-the-loop mechanisms help mitigate risks and ensure AI-driven decisions align with business goals and ethical standards. AI agents can work autonomously, but human oversight is necessary for reviewing and evaluating the decisions these tools make (a human-approval sketch follows this list).

4. Adhere to ethical guidelines. Strong data governance and guardrails that protect safety, human rights, privacy and security are must-haves before deploying agents. Users must be certain that the agents are advancing human and societal values rather than detracting from them. Avoiding over-reliance on these agentic tools (and the disempowerment of human users) is part of that framework.
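To make the first practice concrete, below is a minimal sketch of a deny-by-default access-control layer that an agent framework could consult before executing any tool. The tool names, permission scopes and `AgentAction` shape are illustrative assumptions, not any particular vendor’s API:

```python
from dataclasses import dataclass

# Tools the agent may invoke, mapped to the permission scopes they require.
TOOL_ALLOW_LIST = {
    "search_knowledge_base": {"read:kb"},
    "create_support_ticket": {"write:tickets"},
}

@dataclass
class AgentAction:
    tool: str
    granted_scopes: set

def authorize(action: AgentAction) -> bool:
    """Deny by default: unknown tools and missing scopes are both rejected."""
    required = TOOL_ALLOW_LIST.get(action.tool)
    if required is None:
        return False  # tool is not on the allow-list at all
    return required.issubset(action.granted_scopes)

# An agent attempting an unlisted tool is blocked outright.
assert authorize(AgentAction("delete_database", {"write:tickets"})) is False
assert authorize(AgentAction("create_support_ticket", {"write:tickets"})) is True
```

Denying by default means a compromised or confused agent cannot reach any capability it was never explicitly granted.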
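For the second practice, one simple form of accountability is an append-only structured log of every agent decision, so auditors can later reconstruct what the agent did and why. The field names here are assumptions, and a production system would likely write to a tamper-evident store rather than a local file:

```python
import json
import time
import uuid

def log_agent_decision(agent_id: str, action: str, rationale: str,
                       outcome: str, log_path: str = "agent_audit.jsonl") -> None:
    """Append one JSON Lines record per agent decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "rationale": rationale,  # the model's stated reasoning, if available
        "outcome": outcome,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_agent_decision("support-agent-01", "issue_refund",
                   "order arrived damaged; refund policy applies", "approved")
```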
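And for the third practice, a human-in-the-loop gate can be as simple as routing high-risk actions to a reviewer before execution while letting low-risk actions proceed automatically. The risk categories and callback signatures below are assumptions for illustration:

```python
# Actions deemed risky enough to require human sign-off (illustrative).
HIGH_RISK_ACTIONS = {"issue_refund", "change_account_email", "delete_record"}

def execute_with_oversight(action: str, payload: dict,
                           run_action, request_human_approval):
    """Run low-risk actions directly; block high-risk ones on human approval."""
    if action in HIGH_RISK_ACTIONS:
        if not request_human_approval(action, payload):
            return {"status": "rejected_by_reviewer", "action": action}
    return run_action(action, payload)

# Example wiring with stand-ins; a real system would route approval
# requests to a review queue rather than auto-approving.
result = execute_with_oversight(
    "issue_refund", {"order_id": "A123", "amount": 40.0},
    run_action=lambda a, p: {"status": "executed", "action": a},
    request_human_approval=lambda a, p: True,  # stand-in approver
)
print(result)  # {'status': 'executed', 'action': 'issue_refund'}
```

The important design choice is that the gate sits outside the agent: the model can propose a refund, but only a human decision lets it execute.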

A Continual Safeguarding Process

Of late, we’ve been seeing increased focus on AI alignment techniques, which aim to ensure AI systems act in accordance with human values. Regulatory frameworks are also evolving to meet the moment, requiring organizations to demonstrate AI safety and fairness when implementing agentic tools. In many organizations, the integration of AI with cybersecurity measures is becoming more advanced as well; forward-thinking organizations are working proactively to detect and mitigate threats before they escalate, so they can safely reap the benefits of these advanced tools.

Above all, organizations must view agentic AI safety as a continuous process rather than a one-time initiative. By combining ethical AI principles, strong governance and proactive security measures, businesses can mitigate risks effectively while harnessing the considerable power that agentic AI tools offer.

