
Two-thirds of organizations have suffered a cybersecurity incident related to the deployment of AI agents in the past year, research by the Cloud Security Alliance (CSA) has warned.
According to the research, conducted alongside Token Security, unchecked AI agents operating on corporate networks have caused damage including data exposure, operational disruption and financial losses.
The CSA paper, titled Autonomous but Not Controlled: AI Agent Incidents Now Common in Enterprises and published on April 21, warned that the majority of organizations have no strategy for decommissioning AI agents, further exposing them to cybersecurity incidents.
According to the report, 68% of respondents claim to have high confidence in the visibility of AI agents on their network. However, 82% of all respondents said they have discovered previously unknown agents in the past year.
The most common places for previously unknown AI agents to be discovered were within internal automation environments and large language model (LLM) platforms.
“This gap highlights a distinction between operational visibility and complete governance assurance, limiting the effectiveness of control models that depend on known and bounded agents,” said the CSA report.
If cybersecurity and infrastructure teams are unaware of AI agents that employees have deployed on the network, it is almost impossible to ensure those agents are deployed securely. This has already resulted in cybersecurity incidents.
AI Agents Cause Data Breaches and Operational Disruption
During the last twelve months, 65% of organizations have experienced at least one cybersecurity incident which occurred because of the use of AI agents, the research found.
The operational consequences of AI agent-related security incidents included data exposure (61%), operational disruption (43%) and unintended actions in business processes (41%).
Just over a third of organizations (35%) said that a security incident as a result of actions by an AI agent resulted in financial losses, while 31% experienced delays in customer-facing or internal services.
The paper warned that AI agent incidents are already affecting core enterprise functions, including data protection, operational continuity, financial performance, and service delivery. Businesses must ensure that they are performing appropriate risk assessments to apply controls around AI agents.
“For organizations, this shifts AI agent governance from a technical oversight issue to a business risk management concern. Agent behavior must now be integrated into broader security, compliance, and operational resilience strategies rather than managed as an isolated automation challenge,” said the report.
AI Agent Decommissioning Lacks Governance
One area where governance of AI agents is particularly lagging is what happens when agents are decommissioned, with a distinct lack of end-of-life controls.
Only one in five organizations has formal processes in place for decommissioning AI agents, meaning that agents may persist on the network even after they have completed their intended purpose.
In many cases, these lingering agents still hold credentials, permissions or operational hooks, which could result in unintended data leaks or breaches. The CSA report warned that as more AI agents become part of enterprise networks, the problem of forgotten agents retaining permissions could create cybersecurity risk.
Cloud Security Alliance Calls for Stronger AI Agent Security and Governance
The CSA has called for the security and risk management challenges posed by AI agents to be addressed.
“AI agent security and governance encompass an interconnected system spanning visibility, lifecycle management, policy, and monitoring. While foundational controls are in place, gaps in consistency and end-of-life management remain,” said Hillary Baron, assistant vice president of research at the Cloud Security Alliance.
“As agents gain greater autonomy, governance must evolve into a more unified, operational model that can sustain control at scale,” she added.
To tackle this, the CSA has issued the following advice to organizations:
- Maintain visibility across AI agents — ensure agents operating across SaaS platforms, internal systems, and LLM environments are identified and within governance scope
- Define and document agent purpose — establish intended function to set operational boundaries and align access with that scope
- Apply lifecycle governance consistently — extend onboarding, ownership, review, and decommissioning processes across the full agent lifecycle
- Evaluate actions based on risk and authorization — use contextual signals such as action risk and explicit human approval to guide decision-making
- Align monitoring with agent activity — evolve from periodic oversight toward more continuous or event-driven detection models
- Incorporate agents into enterprise risk models — treat AI agents as part of broader security, compliance, and operational resilience frameworks
