The rising threat of AI weaponisation in cybersecurity

AI's growing role in creating cyber threats demands new security measures.

This week, Anthropic revealed a concerning development: hackers have weaponised its technology in a series of sophisticated cyber-attacks. With artificial intelligence (AI) now playing a critical role in coding, the time required to exploit cybersecurity vulnerabilities is shrinking at an alarming pace.

Kevin Curran, IEEE senior member and cybersecurity professor at Ulster University, highlights the methods attackers employ when using large language models (LLMs) to uncover flaws and expedite attacks. He emphasises the need for organisations to pair robust security practices with AI-specific policies in this changing landscape.

Curran explains, "This shows just how quickly AI is changing the threat landscape. It is already speeding up the process of turning proof-of-concepts – often shared for research or testing – into weaponised tools, shrinking the gap between disclosure and attack. An attacker could take a PoC exploit from GitHub, feed it into a large language model and quickly get suggestions on how to improve it, adapt it to avoid detection or customise it for a specific environment. That becomes particularly dangerous when the flaw is in widely used software, where PoCs are public but many systems are still unpatched."

He continues, "We’re already seeing hackers use LLMs to identify weaknesses and refine exploits by automating tasks like code completion, bug hunting or even generating malicious payloads designed for particular systems. They can describe malicious behaviour in plain language and receive working scripts in return. While this activity is monitored and blocked on many legitimate platforms, determined attackers can bypass safeguards, for example by running local models without restrictions."

Curran concludes, "The bigger issue is accessibility. Innovation has made it easier than ever to create and adapt software, which means even relatively low-skilled actors can now launch sophisticated attacks. At the same time, we might see nation-states using generative AI for disinformation, information warfare and advanced persistent threats. That’s why security strategies can’t just rely on traditional controls. Organisations need AI-specific defences, clear policy frameworks and strong human oversight to avoid becoming dependent on the same technology that adversaries are learning to weaponise."

As AI continues to evolve, so does its potential for misuse in cyber-attacks. Keeping digital systems secure will require defences and strategies that adapt as quickly as the threats themselves.
