AI’s presence across enterprise systems is established; managing its impact is the next phase. New research from Nudge Security, based on anonymized and aggregated telemetry across enterprise environments, provides a data-backed view into how AI tools are actually being used inside modern organizations.
The findings reinforce what most technology executives already see firsthand: AI is distributed across the stack.
Core LLM providers are nearly ubiquitous, with OpenAI present in 96% of observed organizations and Anthropic in nearly 78%.
But the more relevant signal is diversification.
The spread of meeting intelligence platforms, AI-native coding tools, presentation generators, and voice AI systems shows how thoroughly AI has been woven into collaboration, development, and productivity environments.
AI’s Risk Profile Is Defined by What It Connects To
The research is especially useful in clarifying where governance pressure points are emerging.
Integrations define exposure.
AI tools are frequently connected to Google Workspace, Microsoft 365, GitHub, Slack, and knowledge platforms. As these connections deepen, a mis-scoped OAuth token or over-permissioned automation can extend access across document repositories, ticketing systems, or source code. In this model, blast radius scales with integration depth.
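To make that concrete, here is a minimal sketch of the kind of scope review this implies: it flags integrations holding tenant-wide OAuth scopes in an exported list of app grants. The grant records and the BROAD_SCOPES set are illustrative assumptions, not the output of any particular admin API; in practice the same check would run against an inventory pulled from the identity provider.

```python
# Minimal sketch: flag AI integrations holding over-broad OAuth scopes.
# The grant records and BROAD_SCOPES set are illustrative assumptions,
# not output from any specific identity provider's API.

# Scopes that grant tenant- or account-wide access rather than per-file access.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",         # all of Drive vs. drive.file
    "https://www.googleapis.com/auth/gmail.modify",  # read/modify all mail
    "repo",                                          # GitHub: full control of private repos
}

# Hypothetical export of third-party app grants from an admin console.
grants = [
    {"app": "meeting-notes-ai", "scopes": ["https://www.googleapis.com/auth/drive"]},
    {"app": "slide-generator",  "scopes": ["https://www.googleapis.com/auth/drive.file"]},
    {"app": "code-assistant",   "scopes": ["repo", "read:org"]},
]

def over_permissioned(grant: dict) -> list[str]:
    """Return any broad scopes a grant holds."""
    return [s for s in grant["scopes"] if s in BROAD_SCOPES]

for g in grants:
    broad = over_permissioned(g)
    if broad:
        print(f"{g['app']}: review scopes {broad}")
```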
The rise of agentic tooling adds another layer of complexity.
Early adoption of AI agents capable of executing actions across systems introduces governance considerations around least-privilege access, action logging, and approval workflows. The research shows how experimentation can quietly accumulate persistent permissions, creating what it characterizes as “permission debt.”
For organizations already managing SaaS sprawl, AI agents add a new dimension of governance drift.
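As an illustration of what those controls can look like in code, the sketch below gates agent actions behind an allowlist, a human approval step, and an action log. Everything here is hypothetical: the action names, the ALLOWED and PRIVILEGED sets, and the request_approval stub are illustrative, and no specific agent framework is assumed.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent-gate")

# Hypothetical policy: actions an agent may take freely vs. those needing sign-off.
ALLOWED    = {"search_docs", "summarize_ticket"}
PRIVILEGED = {"merge_pr", "delete_record", "send_external_email"}

def request_approval(action: str, args: dict) -> bool:
    """Stand-in for a real approval workflow (ticket, chat prompt, etc.)."""
    answer = input(f"Approve {action}({args})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, args: dict) -> None:
    """Gate every agent action: least privilege, logging, human approval."""
    if action in ALLOWED:
        log.info("auto-approved %s %s", action, args)
    elif action in PRIVILEGED and request_approval(action, args):
        log.info("human-approved %s %s", action, args)
    else:
        log.warning("blocked %s %s", action, args)
        return
    # ... dispatch to the actual tool integration here ...

execute("summarize_ticket", {"id": 42})
execute("merge_pr", {"repo": "acme/api", "pr": 17})
```

The point of the gate is that permissions are enumerated and every action leaves a record, which is exactly what ad hoc experimentation tends to skip.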
Prompt Behavior Tells a Clear Story
Usage patterns also surface practical risk indicators. Prompt activity is concentrated, with OpenAI accounting for 67% of observed prompt volume.
More notably, 17% of prompts involve copy/paste or file uploads, the primary pathways for data egress into AI systems. The majority of uploads originate from local files.
Sensitive-data detections skew toward secrets and credentials (48%), followed by financial information (36%) and health-related data (16%). API keys, tokens, and access credentials frequently appear in routine debugging and integration workflows.
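A minimal sketch of how such credentials can be caught before a prompt leaves the workstation, assuming a small set of widely documented key patterns; production DLP rule sets are far broader, and the sample prompt is hypothetical.

```python
import re

# A few widely documented credential patterns; real DLP rule sets are far larger.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any credential patterns found in a prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# Hypothetical debugging prompt of the kind the research describes.
prompt = "Why does boto3 reject my key AKIAABCDEFGHIJKLMNOP when I call s3.list_buckets()?"
hits = scan_prompt(prompt)
if hits:
    print(f"Blocked: prompt contains {hits}")
```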
Taken together, the data suggests that AI risk is less about headline-grabbing misuse and more about everyday integration and workflow design.
The Wrap
As AI becomes another embedded layer in collaboration and development environments, its risk profile increasingly mirrors that of SaaS: permissions, integrations, and data handling patterns matter more than the model itself.