Cristian Randieri is a Professor at eCampus University, Kwaai EMEA Director, Intellisystem Technologies Founder and an official member of C3i.
While a lot of attention has been given to generative AI (GenAI)—which excels in producing content like text, images, music and video—more attention should be paid to another emerging AI paradigm: agentic AI.
Agentic AI aims to let AI algorithms make independent decisions, adapt to their environment and take action without direct human intervention. This represents a significant shift from earlier forms of AI, signaling both promises and challenges for the future.
Defining Agentic AI
Today, most businesses are accustomed to using GenAI-based chatbots to get answers: A human asks the model a question, and the chatbot replies by leveraging natural language processing. Agentic AI is different because these systems are not only reactive but also predictive and proactive in their decision-making and behavior.
One of the essential features of this new paradigm is its intrinsic ability to perform multiple actions at once. An AI agent can simultaneously perceive its environment, learn from it, adapt its responses and make decisions without human intervention.
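To make that loop concrete, here is a minimal, purely illustrative Python sketch of a perceive-learn-decide cycle. The Environment and Agent classes, the moving-average "learning" rule and the threshold are assumptions for illustration only, not a reference to any particular agentic framework.

```python
# A minimal sketch of the perceive-learn-decide loop described above.
# All names here (Environment, Agent, the threshold rule) are illustrative
# assumptions, not any specific agent framework.
import random


class Environment:
    """Toy environment that emits a numeric sensor reading each step."""

    def read_sensor(self) -> float:
        return random.uniform(0.0, 100.0)


class Agent:
    """Keeps a running estimate of 'normal' and acts when readings drift."""

    def __init__(self, threshold: float = 25.0):
        self.baseline = None
        self.threshold = threshold

    def perceive(self, env: Environment) -> float:
        return env.read_sensor()

    def learn(self, observation: float) -> None:
        # Adapt the baseline with a simple exponential moving average.
        if self.baseline is None:
            self.baseline = observation
        else:
            self.baseline = 0.9 * self.baseline + 0.1 * observation

    def decide(self, observation: float) -> str:
        # Act autonomously when the observation drifts far from the baseline.
        if abs(observation - self.baseline) > self.threshold:
            return "intervene"
        return "monitor"


if __name__ == "__main__":
    env, agent = Environment(), Agent()
    for _ in range(5):
        obs = agent.perceive(env)   # perceive the environment
        agent.learn(obs)            # adapt internal state
        action = agent.decide(obs)  # decide without human input
        print(f"observation={obs:.1f} -> action={action}")
```

Real agentic systems replace the toy sensor and thresholding with learned models and richer action spaces, but the cycle of sensing, updating internal state and acting without a human in the loop is the same.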
Even though agentic AI applications hold promising potential for just about every field, they offer the most immediate and immense value for industries that constantly require decision-making and adaptability to variable scenarios. Consider these potential use cases:
1. Healthcare: Agentic AI could boost medical diagnostics and treatment by continuously monitoring patient conditions, detecting abnormalities and intervening. For example, a multi-agent AI has the potential to continuously assess a patient’s vitals and alert medical professionals or initiate necessary interventions.
2. Manufacturing And Supply Chain Management: AI agents can optimize production lines, anticipate disruptions and adjust operations dynamically.
3. Autonomous Vehicles: By optimizing routes and energy use, AI agents have the potential to reshape the transportation landscape, improving safety, reducing congestion and lowering emissions.
Of course, these examples can be extended to other sectors—such as finance, defense and environmental management—where decision-makers must act rapidly on constantly evolving data.
Ethical And Societal Concerns
The evolution of agentic AI, however, poses many significant ethical and societal concerns that developers and end users will need to consider. When systems make independent decisions, it can be challenging to identify responsibility, especially if those decisions lead to unintended or harmful outcomes.
That is why the future of agentic AI involves addressing these key concerns:
1. Job Displacement: Agentic AI may threaten employment across various industries because of its ability to replace human decision-making in complex environments. Jobs that rely on quick decision-making, pattern recognition and dynamic response—once the domain of humans—may become obsolete. This scenario raises the question of how much time society may need to adapt to such disruptions in the workforce, and whether it will be able to.
2. Data Privacy And Security: Agentic AI relies heavily on large amounts of real-time data, often gathered from people and organizations. When these systems operate independently, ensuring the privacy of sensitive data and information can be incredibly challenging. A thoughtful approach will be needed to handle data responsibly and securely.
3. Control And Governance: As these systems become more autonomous, there is a real risk that human oversight may become insufficient or ineffective, especially if AI agents are left unchecked.
4. Safety Risk: A recent paper pointed out that integrating agentic AI into autonomous machines—especially in high-stakes environments like autonomous vehicles—introduces safety challenges ranging from hallucinations to a lack of the resources needed to operate safely. Even a competent system requires robust safeguards against specific risks such as unintended behaviors, decision-making errors or harmful outcomes stemming from the algorithm’s complexity and unpredictability.
Governing Agentic AI
In considering these risks, there is a pressing need for robust governance frameworks that ensure AI systems are developed, deployed and monitored in ways that always prioritize well-being and safety.
The key elements of such frameworks should include:
• Transparency: AI systems, especially those that operate independently, must be transparent in their decision-making processes, providing a clear and understandable mechanism for humans to continually audit and intervene in the AI’s actions (see the sketch after this list).
• Accountability: Clear guidelines must establish and define who is responsible when AI systems make mistakes or cause harm. The responsible stakeholders may be regulatory bodies, AI developers or the organizations that deploy these systems.
• Ethical AI Design: As with other AI systems, AI agents must be designed and operated in ways that do not exacerbate existing inequalities or biases and that respect human rights and privacy.
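As a purely illustrative sketch of the transparency and accountability points above, the snippet below wraps an agent's decisions in an append-only audit log with a human-escalation hook. The class name, the JSON log format and the simple escalation rule are assumptions for illustration, not a prescribed governance mechanism or any existing standard.

```python
# Illustrative sketch only: an append-only audit trail plus a human-escalation
# hook around an autonomous decision. Names and the escalation rule are
# assumptions, not part of any standard or framework.
import json
import time
from typing import Callable


class AuditedAgent:
    def __init__(self, decide: Callable[[dict], str],
                 needs_human_review: Callable[[dict, str], bool]):
        self.decide = decide
        self.needs_human_review = needs_human_review
        self.audit_log: list[dict] = []

    def act(self, observation: dict) -> str:
        decision = self.decide(observation)
        escalated = self.needs_human_review(observation, decision)
        # Record what was seen, what was decided and whether a human was looped in.
        self.audit_log.append({
            "timestamp": time.time(),
            "observation": observation,
            "decision": decision,
            "escalated_to_human": escalated,
        })
        return "pending_human_approval" if escalated else decision

    def export_log(self) -> str:
        # A machine-readable trail that auditors and regulators can inspect.
        return json.dumps(self.audit_log, indent=2)


if __name__ == "__main__":
    agent = AuditedAgent(
        decide=lambda obs: "reroute" if obs["congestion"] > 0.8 else "proceed",
        needs_human_review=lambda obs, d: d == "reroute",  # escalate risky actions
    )
    print(agent.act({"congestion": 0.9}))
    print(agent.export_log())
```

The design choice here is that transparency and accountability are properties built into the decision path itself, not reports generated after the fact: every autonomous action leaves a record, and higher-risk actions pause for human approval.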
In short, agentic AI is capable of multitasking and performing specific tasks more efficiently than humans. However, as this technology continues to evolve, all stakeholders—including researchers, policymakers and industry leaders—must deploy it in a way that ensures human control and avoids any unethical use of these technologies.