In the evolving landscape of financial services, the use of generative AI (genAI) has increased, bringing both opportunities for efficiency and new security considerations. A recent report by Netskope Threat Labs examines risks associated with this trend.
The report highlights the potential exposure of sensitive financial and customer data as organisations adopt genAI tools. According to the findings, regulated data accounts for 59% of genAI-related data policy breaches, intellectual property for 20%, source code for 11%, and passwords or API keys for 9%.
It also indicates widespread adoption within the sector: around 70% of financial services users actively engage with genAI technologies. Approximately 97% of these users interact indirectly with applications that include genAI-powered features, and 94% use applications that may incorporate user data for model training.
The report notes a shift in usage patterns aimed at reducing unmanaged or “shadow AI” activity. Use of personal genAI applications has decreased from 76% to 36%, while use of organisation-managed solutions has increased from 33% to 79%. At the same time, 15% of users reportedly use both personal and enterprise accounts, which can complicate data governance.
The genAI ecosystem is also expanding and diversifying. ChatGPT is reported to be used by 76% of organisations, with Google Gemini at 68%. AssemblyAI usage has increased from 1% to 37%. Some tools, such as ZeroGPT, are being restricted in certain environments due to compliance policies.
Beyond dedicated AI tools, personal cloud and online applications remain a source of data security risk. In this area, 65% of data policy breaches involve regulated data, with platforms such as LinkedIn and Google Drive frequently used in workplace contexts.
The report also references malware activity delivered via cloud services. GitHub is noted as affected in 11% of organisations, with Microsoft OneDrive also implicated. According to the findings, attackers may abuse widely trusted cloud platforms as distribution channels, making malicious activity harder to distinguish from normal traffic patterns.