Autonomous AI agents are already performing tasks, managing data, and making decisions across enterprises. With nearly all large organizations planning to expand their use within the next year, these tools are becoming central to operations.
Although they offer clear gains in productivity, many are being introduced without the cybersecurity planning that normally supports new enterprise technology. Most companies now rely on dozens of AI tools, often without meaningful oversight.
A recent study found that 90% of these tools are used without proper licensing or internal controls.
Employees are feeding sensitive data into systems they do not fully understand.
In some cases, agents have retrieved information that was hidden simply because no one expected a system to find it. As AI agents become more capable, the risks they pose grow with them.
Why It Matters: AI agents analyze and act on data. When deployed without rules or protections, they can uncover private files, misuse confidential data, or create compliance issues that are hard to contain. This becomes a potentially major liability when security is not properly integrated.
- Security Basics Are Being Skipped During AI Rollouts: Many organizations are moving ahead with AI deployment without following basic security procedures. Steps such as vendor vetting, risk reviews, and role-based access audits are being left behind. A study found that nearly all companies affected by AI-related breaches lacked access controls that should have been in place from the beginning. While these are not new problems, AI systems increase the damage when legacy practices are ignored.
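A role-based access audit like the one described above can be sketched in a few lines. This is an illustrative example only; the role names, account records, and `audit` helper are hypothetical, not part of any specific product:

```python
# Minimal role-based access audit sketch (all names hypothetical).
# Flags accounts whose granted resources exceed what their role allows.

ROLE_PERMISSIONS = {
    "analyst": {"reports", "dashboards"},
    "engineer": {"reports", "source_code"},
}

accounts = [
    {"user": "alice", "role": "analyst", "granted": {"reports", "dashboards"}},
    {"user": "bob", "role": "analyst", "granted": {"reports", "source_code"}},
]

def audit(accounts, role_permissions):
    """Return accounts holding access beyond their role's allowance."""
    findings = []
    for acct in accounts:
        allowed = role_permissions.get(acct["role"], set())
        excess = acct["granted"] - allowed
        if excess:
            findings.append((acct["user"], sorted(excess)))
    return findings

print(audit(accounts, ROLE_PERMISSIONS))  # [('bob', ['source_code'])]
```

Running a check like this before connecting an AI agent to enterprise data surfaces exactly the over-granted access the breach study points to.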
- Data Integrity Depends on Tracking How Information Changes: AI agents rely on accurate information, but many systems were not built to preserve the history behind that data. Without a way to track when and why changes happen, agents may treat outdated or incomplete information as current. Some companies embed AI into platforms that were never designed for this purpose, which creates blind spots. Reliable data requires not only accuracy in the moment but also a clear view of how it arrived there.
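The change-tracking idea above can be sketched as a record that keeps its own audit trail. This is a minimal illustration, assuming a hypothetical `TrackedRecord` design rather than any particular platform's feature:

```python
# Sketch of a record that preserves its change history (hypothetical design),
# so an agent reading the value can also see when and why it changed.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrackedRecord:
    value: str
    history: list = field(default_factory=list)

    def update(self, new_value: str, reason: str) -> None:
        # Record when and why the value changed before overwriting it.
        self.history.append({
            "old": self.value,
            "new": new_value,
            "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.value = new_value

rec = TrackedRecord("HQ: Austin")
rec.update("HQ: Dallas", reason="office relocation")
print(rec.value)         # HQ: Dallas
print(len(rec.history))  # 1; the agent can see how the data arrived here
```

Systems without this kind of history are the blind spots the bullet describes: the agent sees only the current value and cannot tell whether it is fresh or stale.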
- Shadow AI Is Spreading in Two Overlooked Ways: One type of shadow AI appears when employees bring in outside tools without approval, and another appears when trusted platforms quietly add AI features without revisiting controls. In both cases, the organization loses visibility and control. In one example, Samsung engineers uploaded sensitive material to a chatbot and lost the ability to retrieve it. These actions often come from convenience, not malice, but the consequences are the same.
- AI Agents Still Need Human Oversight: While agents can take over tasks such as reviewing access logs or detecting threats, they cannot apply judgment. Cybersecurity teams are still needed to guide their work and enforce proper use. Employees also need training to avoid preventable mistakes, such as sharing confidential data with tools that were never cleared for that purpose.
- AI Can Improve Security When Used Within Guardrails: AI agents can reduce the workload on security teams by handling repetitive monitoring tasks. This allows human teams to focus on areas that require interpretation and experience. These benefits appear only when agents are deployed with clear controls and governance from the start.
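One common form of guardrail is an action allowlist: the agent may run pre-approved, read-only tasks on its own, and anything else is escalated to a person. A minimal sketch, with all action names hypothetical:

```python
# Guardrail sketch: an agent may only execute allowlisted actions;
# anything else is queued for human review. Action names are hypothetical.

ALLOWED_ACTIONS = {"read_log", "summarize_alerts"}

human_review_queue = []

def dispatch(action: str, payload: str) -> str:
    """Execute an allowlisted action, or escalate it to a human."""
    if action in ALLOWED_ACTIONS:
        return f"executed {action} on {payload}"
    human_review_queue.append((action, payload))
    return f"escalated {action} for human review"

print(dispatch("read_log", "auth.log"))
print(dispatch("delete_user", "jsmith"))
print(human_review_queue)  # [('delete_user', 'jsmith')]
```

The design choice matters: the agent defaults to escalation, so new or unexpected capabilities never run without a human decision.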
Go Deeper -> Agentic AI Is Coming—But Is Your Cybersecurity Really Ready For It? – Forbes