CISOs confident about data privacy and security risks of generative AI

Over half of CISOs believe generative AI is a force for good and a security enabler, whereas only 25% think it presents a risk to their organisational security.


New data from the latest members’ survey of the ClubCISO community, in collaboration with Telstra Purple, highlights CISOs’ confidence in generative AI in their organisations. Just over half of those surveyed (51%), the largest single contingent, believe these tools are a force for good and act as security enablers. In comparison, only 25% saw generative AI tools as a risk to their organisational security.

The study's findings underscore CISOs' proactive stance in understanding the risks linked to generative AI tools and their active support for implementing these tools across their organisations.

45% of respondents said they now allow generative AI tools for specific applications, with the CISO office making the final decision on their use. Just under a quarter (23%) also have region-specific or function-specific rules to govern generative AI use. The findings represent a marked change from when generative AI applications first landed following the launch of ChatGPT, when data privacy and security concerns were top-of-mind risks for organisations.

Despite ongoing concerns around the data privacy of specific applications, 54% of CISOs are confident they know how AI tools will use or share the data fed to them, and 41% have a policy to cover AI and its usage. Only a small minority (9%) of CISOs say they do not have a policy governing the use of AI tools and have not set out a direction either way.

Inspiring further confidence, 57% of CISOs also believe that their staff are aware and mindful of the data protection and intellectual property implications of using AI tools.

Commenting on the findings, Rob Robinson, Head of Telstra Purple EMEA, sponsor of the ClubCISO community, said, “While we do still hear examples of proprietary data being fed to AI tools and then that same data being resurfaced outside of an organisation’s boundaries, what our members are telling us is that this is a known risk, not just in their teams, but across the employee population too.”

He continued, “Generative AI is rightly being seen for the opportunity it will unlock for organisations. Its disruptive force is being unleashed across sectors and functions, and rather than slowing the pace of adoption, our survey highlights that CISOs have taken the time to understand and educate their organisations about the risks associated with using such tools. It marks a break away from the traditional views of security acting as a blocker for innovation.”
