Cisco Redefines Security for the Agentic AI Era in 2026
Cisco launches AI Defense with AI BOM, MCP catalog, multi-turn red teaming, and AI-aware SASE for governing agent workflows in enterprises.
Enterprise Security Was Not Built for Autonomous Agents
Enterprise security architectures were designed for a world where humans initiate actions, applications execute predefined logic, and network perimeters define trust boundaries. Agentic AI breaks all three assumptions. AI agents initiate their own actions, execute dynamic and unpredictable logic, and operate across network boundaries as they interact with external services, APIs, and other agents.
Cisco's response, announced in early 2026, is a comprehensive rethinking of enterprise security for the agentic AI era. The AI Defense platform introduces new security primitives specifically designed to govern, monitor, and protect AI agent deployments. Rather than treating AI agents as another application to secure with existing tools, Cisco argues that agents require fundamentally new security concepts.
The launch represents Cisco's recognition that as enterprises deploy hundreds or thousands of AI agents across their operations, the attack surface and governance complexity grow exponentially. An agent that can access customer data, initiate API calls, and make autonomous decisions presents security challenges that traditional firewalls, endpoint protection, and identity management were never designed to address.
AI Bill of Materials: Knowing What Your Agents Are Made Of
The Software Bill of Materials (SBOM) has become standard practice for tracking the components in software applications. Cisco extends this concept to AI with the AI Bill of Materials (AI BOM), a comprehensive inventory of every component in an AI agent deployment:
- Model provenance tracking: Which foundation models does the agent use? What version? Where were they trained? What data influenced them? This lineage tracking is essential for understanding the agent's behavioral characteristics and potential biases
- Tool and API inventory: Every external service, API, database, and tool that the agent can access is cataloged. This creates a clear picture of the agent's reach and potential impact if compromised
- Data access mapping: What data sources does the agent read from and write to? What sensitivity levels are involved? Are there data residency requirements that the agent's operations must respect?
- Permission and capability boundaries: What actions can the agent take? Can it create records, modify configurations, initiate payments, or communicate with external parties? The AI BOM documents these capabilities explicitly
- Dependency chain visibility: When an agent depends on another agent, which depends on a third-party API, which calls an external model, the AI BOM maps this entire dependency chain so that a vulnerability at any point in the chain can be traced to all affected agents
The AI BOM serves as the foundation for governance because security teams cannot protect what they cannot see. In many enterprises today, AI agents are being deployed by individual teams without centralized visibility into what models, tools, and data they use. The AI BOM creates the inventory that security governance requires.
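To make the inventory concrete, an AI BOM can be represented as structured data. The sketch below is a hypothetical schema (Cisco has not published a format); the field names, and the `blast_radius` helper that walks the dependency chain described above, are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRef:
    name: str       # foundation model identifier (hypothetical field)
    version: str
    provider: str

@dataclass
class ToolRef:
    name: str
    endpoint: str
    scopes: list[str] = field(default_factory=list)

@dataclass
class AIBOM:
    agent_id: str
    models: list[ModelRef] = field(default_factory=list)
    tools: list[ToolRef] = field(default_factory=list)
    data_sources: dict[str, str] = field(default_factory=dict)  # source -> sensitivity level
    capabilities: list[str] = field(default_factory=list)       # e.g. "create_record"
    depends_on: list[str] = field(default_factory=list)         # IDs of agents this agent calls

    def blast_radius(self, boms: dict[str, "AIBOM"]) -> set[str]:
        """All agents reachable through this agent's dependency chain,
        so a vulnerability anywhere in the chain can be traced."""
        seen, stack = set(), list(self.depends_on)
        while stack:
            aid = stack.pop()
            if aid in seen:
                continue
            seen.add(aid)
            stack.extend(boms.get(aid, AIBOM(aid)).depends_on)
        return seen
```

With records like these in a central store, a security team can answer the tracing question directly: if agent `a` depends on `b`, and `b` on `c`, then `a.blast_radius(...)` returns both downstream agents.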
MCP Catalog: Governing Agent Tools
The Model Context Protocol (MCP) has emerged as a standard for connecting AI agents to external tools and data sources. Cisco's MCP Catalog provides enterprise governance for these connections:
- Approved tool registry: Organizations define which MCP-compatible tools agents are permitted to use. Tools not in the approved registry are blocked, preventing agents from accessing unauthorized services
- Usage policy enforcement: Each tool in the catalog can have policies attached: rate limits, time-of-day restrictions, data classification requirements, and approval workflows for sensitive operations
- Version control and change management: When tool definitions change, the MCP Catalog tracks versions and can enforce review processes before agents use updated tools, preventing supply chain attacks through modified tool definitions
- Cross-agent visibility: The catalog shows which agents use which tools, enabling security teams to assess blast radius when a tool is compromised and to identify agents that have accumulated excessive capabilities
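The registry-plus-policy model can be sketched as a gate that every agent tool call passes through. This is a minimal illustration under assumed semantics, not Cisco's API; the class and field names are hypothetical:

```python
import datetime

class MCPCatalog:
    """Hypothetical approved-tool registry with per-tool policies attached."""

    def __init__(self):
        self._tools = {}  # tool name -> registered version and policy state

    def register(self, name, version, max_calls_per_min=60, allowed_hours=range(0, 24)):
        self._tools[name] = {
            "version": version,
            "max_calls_per_min": max_calls_per_min,
            "allowed_hours": allowed_hours,
            "calls_this_min": 0,
        }

    def authorize(self, tool_name, version, now=None):
        """Allow a call only if the tool is in the approved registry, the
        version matches the reviewed one, and rate/time policies hold."""
        tool = self._tools.get(tool_name)
        if tool is None:                  # not in the approved registry: block
            return False
        if tool["version"] != version:    # unreviewed version change: block
            return False
        now = now or datetime.datetime.now()
        if now.hour not in tool["allowed_hours"]:
            return False
        if tool["calls_this_min"] >= tool["max_calls_per_min"]:
            return False
        tool["calls_this_min"] += 1
        return True
```

Note how the version check doubles as supply chain protection: a modified tool definition fails authorization until it is re-registered after review.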
Multi-Turn Red Teaming for Agent Testing
Traditional security testing evaluates applications at a point in time against known attack patterns. AI agents require a different approach because they engage in multi-turn interactions where the context of earlier exchanges influences later behavior. Cisco's multi-turn red teaming capability addresses this:
- Conversational attack simulation: Red team agents engage target agents in extended conversations designed to gradually manipulate them into taking unauthorized actions. This mirrors real-world social engineering attacks where the initial interaction appears benign but builds toward a malicious objective over multiple exchanges
- Prompt injection testing: Automated tests probe agents for vulnerability to prompt injection attacks, where malicious instructions are embedded in user inputs, documents, or data sources that the agent processes
- Privilege escalation testing: Red team agents attempt to get target agents to perform actions beyond their defined capabilities, testing whether capability boundaries are properly enforced
- Data exfiltration testing: Tests verify that agents cannot be manipulated into revealing sensitive data, whether through direct requests, indirect inference, or through crafted conversations that lead agents to include sensitive information in responses to unauthorized parties
- Multi-agent interaction testing: When agents collaborate with other agents, the red team tests whether the interaction can be exploited to bypass controls that apply to each individual agent
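The multi-turn approach can be illustrated as a driver loop: a red-team agent escalates across exchanges while a judge inspects each target response. Here `red_team_turn`, `target_agent`, and `violates_policy` are hypothetical stand-ins for real model calls; the loop structure is the point:

```python
def run_multi_turn_probe(red_team_turn, target_agent, violates_policy, max_turns=10):
    """Drive an escalating conversation; report the turn at which the target
    first produces a policy-violating response, or None if it holds."""
    history = []
    for turn in range(1, max_turns + 1):
        attack = red_team_turn(history)        # next probe, conditioned on full history
        reply = target_agent(history, attack)  # target sees the whole conversation
        history.append((attack, reply))
        if violates_policy(reply):
            return {"failed_at_turn": turn, "transcript": history}
    return {"failed_at_turn": None, "transcript": history}
```

Because both the attacker and the target condition on the accumulated `history`, failures that only emerge after several benign-looking exchanges are surfaced, which single-shot prompt testing would miss.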
AI-Aware SASE Integration
Cisco integrates AI agent security into its Secure Access Service Edge (SASE) architecture, extending network security concepts to agent workflows:
- Agent traffic inspection: SASE policies can inspect and control traffic between agents and external services, applying data loss prevention, content filtering, and threat detection to agent communications, not just human user traffic
- Identity-based agent access: Each agent has a verified identity within the SASE framework, with access policies that determine which networks, services, and data sources the agent can reach. This extends zero-trust principles to non-human entities
- Real-time behavioral monitoring: The SASE layer monitors agent behavior for anomalies that might indicate compromise: unusual API call patterns, access to data outside normal scope, communication with unexpected external endpoints, or attempts to establish connections not defined in the agent's AI BOM
- Policy-based workflow enforcement: Complex agent workflows that span multiple tools and services can have policies applied at each step, ensuring that the overall workflow complies with organizational security requirements even when individual steps appear benign
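One slice of this runtime layer, checking outbound agent traffic against the declared AI BOM and a behavioral baseline, can be sketched as follows. This is an assumed simplification for illustration, not Cisco's implementation; the threshold and field names are invented:

```python
def inspect_agent_request(request, bom_endpoints, baseline_rate, current_rate):
    """Flag agent traffic that falls outside the declared AI BOM or the
    agent's behavioral baseline. Returns (allowed, reasons)."""
    reasons = []
    if request["endpoint"] not in bom_endpoints:
        reasons.append("endpoint not declared in AI BOM")
    if current_rate > 3 * baseline_rate:  # illustrative anomaly threshold
        reasons.append("API call rate far above baseline")
    if request.get("data_classification") == "restricted" and not request.get("dlp_scanned"):
        reasons.append("restricted data leaving without DLP scan")
    return (len(reasons) == 0, reasons)
```

The key design point mirrors the text above: the AI BOM is not just documentation but an enforcement input, since any connection not declared in it is treated as anomalous by default.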
Secure Agent Workflow Architecture
Cisco proposes an enterprise architecture for secure agent deployment that integrates these capabilities:
- Agent deployment pipeline: A controlled process for deploying agents that includes AI BOM generation, security review, red team testing, and approval before any agent reaches production. This mirrors DevSecOps practices for traditional software
- Runtime monitoring and response: Continuous monitoring of deployed agents through the SASE layer and agent-specific security sensors, with automated response capabilities that can throttle, isolate, or shut down agents exhibiting anomalous behavior
- Incident response for agents: Playbooks and tools specifically designed for responding to security incidents involving AI agents, including agent forensics, impact assessment across the dependency chain, and coordinated remediation when multiple agents are affected
- Compliance and audit: Automated compliance checking against frameworks including NIST AI RMF, EU AI Act, and industry-specific regulations. Audit trails that document every agent action, tool usage, and data access for regulatory review
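The pipeline stages above can be sketched as sequential gates an agent must pass before promotion to production. Gate names and the agent record's fields are illustrative assumptions, not a published Cisco interface:

```python
def deployment_gate(agent):
    """Run each pre-production check in order; promote the agent only if
    every gate passes. Mirrors a DevSecOps-style release pipeline."""
    gates = [
        ("ai_bom_generated",  lambda a: a.get("ai_bom") is not None),
        ("security_reviewed", lambda a: a.get("review_approved", False)),
        ("red_team_passed",   lambda a: a.get("red_team_findings", 1) == 0),
        ("compliance_ok",     lambda a: all(a.get("compliance", {}).values())),
    ]
    for name, check in gates:
        if not check(agent):
            return {"promoted": False, "failed_gate": name}
    return {"promoted": True, "failed_gate": None}
```

Ordering matters in a pipeline like this: the AI BOM comes first because the later gates (review scope, red team targets, compliance checks) all consume it.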
Enterprise Security Architecture for the Agent Era
Cisco's broader argument is that enterprises need to treat AI agents as a new category of entity in their security architecture, distinct from both human users and traditional applications. Agents have identities but not human accountability. They have capabilities but not predetermined behavior. They operate across trust boundaries in ways that existing network segmentation does not account for.
The recommended approach is defense in depth applied to agents: the AI BOM provides visibility, the MCP Catalog enforces governance, red teaming validates security, and AI-aware SASE provides runtime protection. No single layer is sufficient, but together they create a security posture that allows enterprises to benefit from agentic AI while managing the associated risks.
Frequently Asked Questions
What is an AI Bill of Materials and why does it matter?
An AI Bill of Materials is a comprehensive inventory of every component in an AI agent deployment, including the foundation models used, external tools and APIs the agent can access, data sources it reads from and writes to, and its defined capabilities and permissions. It matters because security teams cannot govern what they cannot see. Without an AI BOM, organizations have no centralized visibility into their AI agent deployments, making it impossible to assess risk, ensure compliance, or respond effectively to security incidents.
How does multi-turn red teaming differ from traditional security testing?
Traditional security testing evaluates systems against known attack patterns in isolated tests. Multi-turn red teaming engages AI agents in extended, conversational interactions that mirror real-world social engineering. The red team agent gradually builds context across multiple exchanges, probing for weaknesses that only emerge through sustained interaction. This is necessary because AI agents maintain conversation context and their behavior is influenced by the entire history of an interaction, not just the current input.
What is the MCP Catalog and how does it govern agent tools?
The MCP Catalog is an enterprise governance layer for the Model Context Protocol, the standard that connects AI agents to external tools and data sources. It functions as an approved tool registry where organizations define which tools agents are permitted to use, attach usage policies to each tool, control tool versions, and maintain visibility into which agents use which tools. This prevents agents from accessing unauthorized services and provides the control plane that enterprise security requires for tool governance.
How does Cisco's AI-aware SASE protect agent workflows?
Cisco extends its SASE architecture to treat AI agents as first-class entities alongside human users. This means agent traffic is inspected and controlled using data loss prevention, content filtering, and threat detection policies. Each agent has a verified identity with access policies determining what it can reach. The SASE layer monitors agent behavior in real time for anomalies that might indicate compromise, and policies can be applied at each step of multi-step agent workflows.