Assessing AI deployment risks and security challenges

Despite security concerns, organisations are pressing ahead with AI deployment, exposing governance gaps and rising risks.

TrendAI has released research examining AI deployment alongside security and compliance challenges, looking at how organisations are adopting AI despite known risks.

The research surveyed 3,700 business and IT decision-makers. It found that 67% reported feeling pressure to approve AI projects despite security concerns, with one in seven describing those concerns as “extreme” but proceeding in response to competitive and internal demands.

The findings indicate that AI adoption is, in some cases, occurring ahead of governance measures, with systems being introduced without fully established security controls. Security teams are often responding to AI deployment decisions after the fact, which can contribute to the use of unsanctioned or “shadow” AI tools.

Additional findings show that cybercriminals are using AI to support activities such as reconnaissance and phishing, increasing the speed and scale of attacks.

The study also highlights a gap between AI adoption and oversight: 57% of respondents say AI is advancing faster than they can secure it, and 55% report only moderate confidence in their understanding of the legal frameworks governing AI. Around 38% of organisations have comprehensive AI policies in place, while others are still developing them.

Confidence in autonomous AI systems remains limited. The report states that only 44% of respondents believe such systems will significantly improve cybersecurity in the short term, while concerns persist around data access, misuse, and oversight.

Respondents identified several key risks, including AI agents accessing sensitive data (42%), malicious prompts (36%), and an expanded attack surface (33%). A similar proportion (33%) highlighted risks related to misuse of trusted AI systems and autonomous code deployment.

The report also notes that 31% of organisations report limited observability or auditability of AI systems, raising questions about monitoring and intervention after deployment.

Around 40% of respondents support the introduction of AI “kill switch” mechanisms to shut down systems in cases of failure or misuse, while nearly half remain uncertain.

The findings indicate that organisations are continuing to deploy AI systems while governance, visibility, and control measures are still developing.