SailPoint has unveiled SailPoint Shadow AI Remediation, a new component of its AI governance and security framework. The offering is designed to help enterprises discover, monitor, and manage the use of unauthorised AI tools, often referred to as “Shadow AI”, and to address the security and compliance risks that have accompanied the rapid growth of artificial intelligence in the workplace.
With employees increasingly using AI platforms such as ChatGPT, Claude, and Gemini to support productivity, sometimes outside approved IT channels, organisations struggle to understand and manage this activity. Shadow AI leaves security teams with limited visibility into how these tools are being used. SailPoint’s Shadow AI Remediation provides real-time visibility into employee use of AI tools, including monitoring of document uploads and interaction frequency. The capability is timely given reports indicating that 80% of organisations have observed unintended actions by AI agents, such as inappropriate data access or sharing.
The Shadow AI Remediation solution seeks to offer organisations several capabilities in this area.
“Many vendors are trying to solve the Shadow AI problem with isolated browser or endpoint tools, but that misses the bigger picture. This is fundamentally an identity challenge,” said Chandra Gnanasambandam, EVP of Product and Chief Technology Officer at SailPoint.
The release of Shadow AI Remediation forms part of SailPoint’s broader AI governance strategy. It integrates with SailPoint’s Identity Security Cloud, feeding AI tool usage data into the identity graph to inform access and security decisions. This approach links human and non-human identities with data and security context to support operations in an AI-driven environment.