Generative AI in financial services: navigating the risks

Financial services face escalating data security risks from widespread generative AI usage.

The use of generative AI (genAI) in financial services has grown rapidly, bringing both efficiency gains and new security considerations. A recent report by Netskope Threat Labs examines the risks associated with this trend.

The report highlights potential exposure of sensitive financial and customer data as organisations adopt genAI tools. According to the findings, regulated data accounts for 59% of data policy breaches related to genAI usage. Intellectual property accounts for 20%, while source code and passwords or API keys represent 11% and 9% respectively.

It also indicates widespread adoption within the sector, with around 70% of financial services users actively engaging with genAI technologies. Approximately 97% of these users work with applications that embed genAI-powered features indirectly, and 94% use applications that may incorporate user data for model training.

The report notes a shift in usage patterns aimed at reducing unmanaged or “shadow AI” activity. Use of personal genAI applications has decreased from 76% to 36%, while organisation-managed solutions have increased from 33% to 79%. At the same time, 15% of users reportedly use both personal and enterprise accounts, which may introduce additional data governance considerations.

The genAI ecosystem is also expanding and diversifying. ChatGPT is reported to be used by 76% of organisations, with Google Gemini at 68%. AssemblyAI usage has increased from 1% to 37%. Some tools, such as ZeroGPT, are being restricted in certain environments due to compliance policies.

Beyond dedicated AI tools, the use of personal cloud and online applications remains a factor in data security. In this area, 65% of data policy breaches involve regulated data, with platforms such as LinkedIn and Google Drive frequently used in workplace contexts.

The report also references malware activity targeting cloud services. GitHub is noted as being impacted in 11% of organisations, with Microsoft OneDrive also affected. According to the findings, attackers may use widely trusted cloud platforms as part of their distribution methods, which can make detection more challenging within normal traffic patterns.
