As artificial intelligence adoption accelerates across industries, security risks associated with AI applications are becoming a significant concern for enterprises. In response, Cisco has announced AI Defense, a security solution designed to help organizations secure their AI deployments by integrating visibility, validation, and enforcement across enterprise networks and cloud environments.

Cisco’s announcement comes at a time when AI safety and security are becoming priorities for businesses looking to integrate AI into their operations. According to Jeetu Patel, executive vice president and chief product officer at Cisco, organizations recognize that AI security is a critical factor in enterprise adoption.

“There’s a universal concern we hear from customers: What happens if these things go sideways and don’t behave the way we want? How do we prevent an application from being compromised by a prompt injection attack or manipulated to leak sensitive data?” Patel said.

Security Challenges in Enterprise AI Deployments

AI models operate in unpredictable ways, evolving as they are trained on new data. This introduces security challenges, including model manipulation, prompt injection attacks, and data exfiltration risks. Additionally, there is no standardized framework for AI security equivalent to the Common Vulnerabilities and Exposures database used in cybersecurity.

One of the issues Cisco aims to address with AI Defense is AI model validation. AI systems can generate unexpected or harmful outputs if exploited, making continuous security monitoring essential.

“A typical model provider takes seven to ten weeks to validate an AI model manually. We do it in 30 seconds by running trillions of automated test queries—detecting biases, vulnerabilities, and potential exploits faster than any human-led approach,” Patel explained.

This approach, similar to fuzz testing in cybersecurity, is intended to uncover vulnerabilities before attackers can exploit them.

Key Features of Cisco AI Defense

Cisco AI Defense is designed to integrate security across AI workflows. According to the company, the solution operates on three primary levels:

Visibility and Monitoring

  • Identifies AI applications in use across an enterprise.
  • Maps interactions between AI models, data sources, and applications.
  • Provides continuous monitoring for anomalies or unauthorized usage.

Validation and AI Red Teaming

  • Uses algorithmic red teaming—automated AI testing—to identify security risks.
  • Detects issues such as bias, toxicity, and potential attack vectors.
  • Reduces model validation time compared to manual testing approaches.

Enforcement and Guardrails

  • Applies security policies to prevent AI misuse.
  • Implements automated controls to restrict unauthorized model access.
  • Extends security enforcement across Cisco’s existing security architecture.

Cisco says AI Defense will integrate with its broader security platform, allowing organizations to apply AI security policies across their network, cloud, and endpoint infrastructure.

Integration with Security and Networking Platforms

Unlike standalone AI security tools, Cisco AI Defense will operate as part of its existing security portfolio. The company says the solution will extend across Cisco Secure Access, Secure Firewall, and its networking infrastructure to provide policy enforcement at multiple levels.

“If AI security is built into the fabric of the network, the enforcement isn’t just happening at the software layer—it’s happening at the infrastructure level. That’s the key advantage,” Patel noted.

According to Cisco, this approach allows organizations to apply AI security at both the application and network levels, reducing the complexity of managing AI-specific security risks.

Addressing a Broader AI Security Challenge

Cisco’s announcement highlights a broader industry challenge: AI security is still an emerging field with no universal framework for threat detection and mitigation. Recent incidents have raised concerns about AI misuse, such as reports of individuals using generative AI models to generate harmful content or assist in real-world attacks.

Patel emphasized that continuous AI validation is necessary as AI models change over time.

“Because models evolve with new data, their behavior can change. We’ve built a continuous validation service to detect shifts and update protections in real time.”

This underscores a growing industry focus on AI governance and oversight, as enterprises seek standardized methods to ensure AI safety.

Industry Context and Future Implications

Cisco’s AI Defense announcement comes as enterprise security vendors are expanding their focus on AI security. Companies such as Microsoft, Google, and OpenAI have introduced AI security initiatives, while startups focused on AI model security and compliance are also gaining traction.

The next phase of AI security development is likely to involve collaboration across industry stakeholders, including security vendors, AI model providers, and regulatory bodies. Patel suggested that Cisco’s AI security strategy is designed to be part of this broader ecosystem, rather than a standalone solution.

“We want to make sure we are part of the AI ecosystem rather than everyone talking in silos. Customers need to understand how AI infrastructure, safety, and security fit together.”

“To build trust in AI, its safety must match its potential,” agreed Krish Vitaldevara, senior vice president and general manager at NetApp. “The tech ecosystem must be committed to empowering enterprises with secure, scalable solutions, ensuring the development, deployment, and use of AI aligns with both innovation and responsibility.”

As AI adoption continues to expand, enterprises are expected to prioritize security solutions that can protect AI applications without slowing down innovation. Cisco’s AI Defense marks the company’s latest effort to position itself in this evolving landscape.

Share.

Leave A Reply

Exit mobile version