How can organisations defend themselves in the era of the AI hacker?

By Adam Maruyama, Field CISO at Garrison.


Artificial Intelligence (AI) is radically changing the threat landscape. Adversaries are turning to readily available large language models (LLMs) to rapidly launch new phishing campaigns, identify new targets, and research companies and individuals. AI has the potential to supercharge both the sophistication and frequency of phishing scams, meaning that businesses need to protect sensitive data and mission-critical systems more than ever.

Moving past user training 

Bolstering defences against AI-driven attacks is rightly a top priority for businesses. Concerningly, however, most organisations still rely too heavily on training employees to detect these attacks.

In the age of AI, even the most comprehensive and up-to-date phishing training has limited value. No amount of training can guarantee that users won't fall victim to phishing attacks that are now more believable and almost impossible to identify. Asking users to scrutinise every email before clicking on links is simply unrealistic given the volume of email they receive daily and the integral role it plays in day-to-day business. Furthermore, "overtraining" users into excessive suspicion can be counterproductive: while a business may dodge cyber risk, it opens itself up to business risk when employees become too hesitant to open emails from legitimate contacts, inhibiting their ability to carry out their day-to-day roles.

The shift towards deploying more phishing simulations is also flawed. These exercises can unintentionally contribute to a culture of blame, where employees fear slipping up and are therefore less likely to be upfront with security teams about potentially risky behaviour. Critically, these simulations are unlikely to improve employees' ability to identify real phishing attacks, which are now highly personalised and no longer follow a one-size-fits-all template.

The limitations of detection-based security

It’s not just the human component of businesses’ security approach that is falling short. Traditional detection tools, such as firewalls, web proxies and web filters, are also no longer up to the job.

These technologies focus on defending an organisation’s perimeter, using signatures taken from known attacks to quickly identify and isolate anomalous behaviour. This is highly problematic because social engineering techniques such as phishing rely on human error, allowing malware to behave as though it were being run by a legitimate user on the system. Social engineering effectively lets adversaries bypass detection tools entirely, leaving these technologies powerless against phishing attacks. Factor in their inability to detect zero-day attacks (eight of which were discovered against the Chromium browser stack underpinning Chrome and Edge last year) and it isn’t hard to see why detection-based security is largely ineffectual against modern cyberattacks.
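To illustrate why signature matching falls short here, consider the minimal sketch below (a toy example: the hash set and payloads are invented, and real tools consume threat-intelligence feeds rather than a hard-coded constant). Anything not already in the known-bad set, whether a trivially modified payload or a genuine zero-day, passes the check untouched.

```python
import hashlib

# Hypothetical set of SHA-256 hashes of previously observed malicious
# payloads; a real deployment would pull these from a threat-intel feed.
KNOWN_BAD_HASHES = {
    "b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9",
}

def is_flagged(payload: bytes) -> bool:
    """Signature-style check: flag only payloads seen before."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# A payload on the known-bad list is caught...
print(is_flagged(b"hello world"))   # True: its hash is in the set above

# ...but altering a single byte, or using a genuinely novel zero-day
# exploit, produces an unseen hash and sails straight through.
print(is_flagged(b"hello world!"))  # False
```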

While AI detection tools have been heralded by some as the answer, these products face the same challenges. AI-powered tools are also likely to generate significant numbers of false positives, particularly when trying to determine whether a message was written by AI rather than a human. That increases the strain on the security professionals who must sift through these potential threats and decide what action to take, and it heightens the likelihood that a real threat slips past defences. And if AI were allowed to act on these determinations itself, rather than relying on human triage, such a false positive rate could harm productivity and collaboration by mistakenly isolating genuine emails and legitimate system activity, even when technical attacks don’t reach their targets.
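Some simple base-rate arithmetic shows why this matters. The figures below are assumptions chosen for illustration, not vendor benchmarks, but the effect they demonstrate is general: when malicious mail is rare, even a seemingly accurate classifier produces alert queues dominated by false positives.

```python
# Illustrative base-rate arithmetic (all figures are assumptions).
daily_emails   = 100_000   # assumed inbound volume for a large organisation
malicious_rate = 0.001     # assumed: 1 in 1,000 emails is actually malicious
tpr            = 0.99      # assumed true-positive rate of the detector
fpr            = 0.02      # assumed false-positive rate of the detector

malicious = daily_emails * malicious_rate   # 100 real threats per day
benign    = daily_emails - malicious        # 99,900 legitimate emails

true_alerts  = malicious * tpr              # ~99 real detections
false_alerts = benign * fpr                 # ~1,998 benign emails flagged

precision = true_alerts / (true_alerts + false_alerts)
print(f"Alerts per day: {true_alerts + false_alerts:.0f}")      # 2097
print(f"Share that are real threats: {precision:.1%}")          # ~4.7%
```

Under these assumptions, fewer than one alert in twenty represents a genuine threat; the rest is triage work for the security team, or disrupted legitimate traffic if the AI acts on its own verdicts.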

Embracing simplicity in security 

When it comes to the challenge of user behaviour, part of the answer is real-time security awareness: visual cues such as pop-ups, highlights and mouseovers that alert users the moment a potential threat emerges. These alerts flag risks the user might otherwise miss, helping to promote safer user practices.

Such cues are already in widespread use in industries such as banking, often as pop-ups or alerts embedded in websites and mobile apps. Combined with advanced browsing technology, they can be deployed to warn a user before they enter financial details into a website that has not been evaluated by the business’s cybersecurity team, helping employees make better-informed decisions about their online activity in real time rather than through periodic training.
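As a minimal sketch of the kind of logic that might drive such a cue (the allowlist, function name and messages here are hypothetical, and a real product would surface the result as a pop-up or mouseover rather than a returned string), one check is whether a link's visible text claims a different domain than the one it actually visits, and whether the destination has been evaluated by the security team:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of sites the security team has evaluated;
# a real deployment would pull this from central policy.
EVALUATED_DOMAINS = {"example-bank.com", "intranet.example.com"}

def cue_for_link(visible_text: str, href: str) -> str | None:
    """Return a warning to surface as a visual cue, or None if clean."""
    target = urlparse(href).hostname or ""

    # Cue 1: the text shown to the user names a different domain than
    # the one the link actually visits, a classic phishing tell.
    if "." in visible_text and visible_text.strip().lower() not in target:
        return f"This link displays '{visible_text}' but goes to '{target}'."

    # Cue 2: the destination has not been evaluated by the security team.
    if target not in EVALUATED_DOMAINS:
        return f"'{target}' is unevaluated: avoid entering financial details."

    return None

# A lookalike domain triggers the first cue.
print(cue_for_link("example-bank.com", "https://examp1e-bank.co/login"))
```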

But while nudges are an important step towards an improved security posture, they don’t address advanced technical attacks, such as zero-days launched via vulnerabilities in the browsing stack. A proven way to mitigate this threat is to isolate users’ browsing activity from company systems via remote browser isolation (RBI). RBI hosts browsing sessions on a virtually or physically distinct system, meaning malicious code can never execute on business networks. This eliminates the technical risk, while visual cues in the remote browsing environment also reduce the likelihood that a user will disclose sensitive data to bad actors.
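The toy sketch below illustrates the principle only, and should not be read as how any RBI product is built: commercial RBI runs a full browser on separate hardware or a VM and streams pixels back to the endpoint. Here a throwaway subprocess stands in for the isolated host, and plain extracted text stands in for the pixel stream; the point is that only an inert representation of the page ever crosses the boundary, so active content cannot execute locally.

```python
import subprocess, sys, textwrap

def isolated_browse(url: str) -> str:
    """Toy RBI illustration: fetch and process the untrusted page in a
    separate, disposable process; return only inert plain text."""
    worker = textwrap.dedent(f"""
        import urllib.request, html.parser, sys

        class TextOnly(html.parser.HTMLParser):
            # Keep visible text; drop scripts, styles and all markup.
            def __init__(self):
                super().__init__(); self.out = []; self.skip = False
            def handle_starttag(self, tag, attrs):
                self.skip = tag in ("script", "style")
            def handle_endtag(self, tag):
                self.skip = False
            def handle_data(self, data):
                if not self.skip: self.out.append(data)

        p = TextOnly()
        raw = urllib.request.urlopen({url!r}, timeout=10).read()
        p.feed(raw.decode("utf-8", "replace"))
        sys.stdout.write(" ".join(p.out))
    """)
    # The worker is discarded after every session: any compromise dies
    # with the process and never touches the business network.
    result = subprocess.run([sys.executable, "-c", worker],
                            capture_output=True, text=True, timeout=30)
    return result.stdout

print(isolated_browse("https://example.com")[:200])
```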

The need for a paradigm shift 

In the AI era, security professionals must urgently rethink their tactics if they are to defend their organisations against adversaries. Highly targeted and increasingly complex phishing scams are being launched at unprecedented scale, and no user is immune from attack.

Rather than relying on users to successfully identify phishing scams, or on detection tools that are powerless against social engineering, security strategies should focus on preventing threat actors from accessing company networks or executing code on those systems in the first place. Achieving this requires a blend of technical controls that shield users from technical risk and security nudges that warn users before they engage in potentially dangerous behaviour.
