Security discussions around AI often focus on model behavior, such as hallucinations or unsafe outputs.
The bigger issue emerging in real deployments lies in the surrounding infrastructure.
AI models interact with tools, external data, internal codebases, messaging platforms, and other agents. These connections create new ways for attackers to interfere with how software behaves.
Cisco’s State of AI Security 2026 report looks at how these threats are showing up in real systems.
In many cases, attackers are not trying to break the model itself. They target the surrounding components that feed information into the model or allow it to interact with other systems, such as training datasets, model repositories, external tools, and agent frameworks.
Manipulating those connections can cause AI systems to leak sensitive data or carry out actions that were never intended inside enterprise environments.
Why It Matters: Organizations are adopting AI systems faster than they are securing them. One survey cited in the report found that 83% of organizations planned to deploy agentic AI capabilities into their business functions, while only 29% reported being ready to operate those systems securely. This gap leaves many deployments exposed as attackers begin targeting the infrastructure that supports AI systems. Security failures in these environments can allow attackers to influence how connected software behaves or gain access to sensitive internal data.
- Many Attacks Target How Models Interact With Their Environment: Early experiments focused on getting models to produce responses they were designed to avoid. More recent attacks manipulate the instructions that AI systems read while performing tasks. For example, a hidden prompt inside a GitHub issue could instruct an AI coding assistant to pull private data from internal repositories and send it elsewhere. Because the instructions appear inside normal content, the AI system may treat them as legitimate commands.
- Agents Introduce Security Risks When They Are Allowed to Take Actions: Some AI systems can perform tasks such as writing code, accessing files, calling APIs, or interacting with applications. These agents often rely on small extensions called “skills.” Security researchers analyzed more than 30,000 skills and found that over a quarter contained at least one vulnerability. If one of these extensions is compromised or poorly designed, it can give attackers a path into the system running the AI agent.
- The AI Supply Chain Creates Opportunities for Hidden Tampering: Many developers use open model repositories and shared datasets when building AI applications. This creates opportunities for attackers to insert poisoned training data or tamper with model files. Research cited in the report found that adding around 250 poisoned documents to training data can embed hidden triggers inside a model without affecting how it performs in normal testing. Some model formats can also include code that runs automatically when the model is loaded.
- New Protocols That Connect Models to Tools Can Expose Sensitive Data: Technologies such as the Model Context Protocol allow AI systems to access external tools and data sources. Security researchers discovered several vulnerabilities in these integrations. In one example, a malicious tool could silently collect a user’s entire chat history and send it to an external server once the AI agent installed the tool.
- Attackers Are Starting to Use AI as Part of Cyber Operations: Security investigations show threat actors experimenting with AI to support cyber operations, including drafting phishing messages or generating malicious code. In one reported espionage campaign, attackers used an AI coding agent to scan systems for weaknesses and assist with developing exploit scripts during the intrusion.
Go Deeper -> State of AI Security 2026 – Cisco
Trusted insights for technology leaders
Our readers are CIOs, CTOs, and senior IT executives who rely on The National CIO Review for smart, curated takes on the trends shaping the enterprise, from GenAI to cybersecurity and beyond.
Subscribe to our 4x a week newsletter to keep up with the insights that matter.


