From AI tools to digital employees: why traditional security models are falling behind

This article is based on an exclusive interview with Steve Wilson, Chief AI Officer at Exabeam, exploring how the rise of the digital workforce is reshaping cybersecurity. The discussion covers the shift from AI tools to “digital workers,” why autonomous security models fall short, and how organisations must rethink governance, identity, and control by treating AI agents as employees within human–agent teams.

Wednesday, 22nd April 2026, by Sophie Milburn

Exabeam is a behaviour intelligence company for the agentic enterprise, aiming to deliver flexible, industry-proven solutions for insider threat coverage of humans and agents. Alongside this, Wilson also founded the OWASP GenAI Security Project, a global initiative of more than 25,000 contributors working to secure large language models and agentic AI systems.

In this interview, Wilson sets out a stark view of where enterprise technology is heading: away from traditional AI tools and towards a growing digital workforce of autonomous ‘digital employees’. That shift is not just changing how work is done, but fundamentally reshaping the scale, speed and nature of cyber risk. His perspective challenges assumptions around autonomous SOCs and human-in-the-loop models, instead pointing towards a future built on tightly governed human–agent collaboration.

These themes set the foundation for a wider discussion on why security, governance, and organisational design must evolve together if enterprises are to keep pace with the increase in data, agents and increasingly sophisticated threat activity he believes is coming.

The limitations of legacy SOC models

Wilson points to a fundamental mismatch between how security operations have evolved and the speed of today’s threat landscape. While SOC tooling has advanced significantly over the past two decades, he argues that the underlying operating model has remained largely unchanged.

At the centre of the problem, Wilson says, is a model still built around human-led triage of overwhelming volumes of alerts. “The basic model is collect logs, have humans sort through the alerts, possibly giant piles of them, and try to make decisions fast enough to keep up with what’s going on,” he notes. But in today’s environment, that approach is no longer sustainable, with the volume and speed of activity now outpacing what human-led processes can realistically handle.

What this means for MSP security strategies

Two major forces are reshaping the threat landscape and, in turn, changing what MSPs need to protect against. The first is the rise of what Wilson describes as the agentic enterprise, where organisations are rapidly deploying AI agents at scale. These are not simply chatbots, but autonomous systems capable of carrying out real tasks across business environments, bringing both productivity gains and new forms of risk.

Alongside this, threat actors are also adopting the same technologies. AI-driven tools are increasingly being used to support offensive cyber activity, lowering the barrier for more sophisticated and scalable attacks. As a result, MSPs are no longer operating in a world where automation is purely defensive on the customer side.

The combined effect of these shifts is a significant expansion in both the digital workforce inside organisations and the volume of external threats targeting them. “You are going to be dealing with 100 times the amount of data, signal, and noise that your security operations team is going to have to sift through,” he explains. For MSPs, this creates a fundamentally different operating environment, where scale, complexity, and speed are all increasing at once.

Why autonomous SOCs fall short in an agentic enterprise

As the concept of an agentic enterprise gains momentum, Wilson draws a clear distinction between what AI agents are well suited for and where traditional security models begin to break down.

The starting point is understanding the fundamental strengths and limitations of both software and emerging AI systems. “Computers are good at maths. And they are good at repeatability,” he says, highlighting the strengths of traditional systems. In contrast, “AI agents, things built on large language models, they’re good at dealing with uncertainty, they’re good at dealing with unstructured data, and they’re great at dealing with language.”

The issue emerges when organisations attempt to replace traditional software with newer agentic models without recognising how fundamentally different their strengths and limitations are. This often means deploying agents in contexts where they lack the required consistency and judgement, while still expecting them to perform highly structured tasks with deterministic reliability.

This mismatch is what ultimately undermines the idea of a fully autonomous SOC. The notion of handing cybersecurity entirely over to digital agents, without meaningful human judgement in the loop, is a direction he sees as increasingly unrealistic in practice. Instead of removing human judgement from the equation, he argues for a different model altogether. “We need to be building high-performance human agent teams, not autonomous SOCs.”

Rethinking the “human-in-the-loop” model

Wilson is quick to challenge the idea of a traditional “human-in-the-loop” approach, which he sees as an early and somewhat simplistic attempt to ensure oversight in AI-driven systems. In practice, he suggests, it often reduces human involvement to little more than repetitive approval tasks, with limited real impact on outcomes.

In many early implementations, humans are effectively reduced to validation roles, clicking through decisions generated by machines. Over time, this can lead to disengagement, where attention and accountability start to erode. In that scenario, the human becomes the weakest link rather than a meaningful layer of control.

The issue, he adds, is that this structure also undermines the very advantage organisations are trying to achieve with AI: speed. If every action requires human review, the system loses the efficiency gains that agents are meant to deliver.

Instead, he points towards a different model, where roles are more deliberately separated: human-on-the-loop. In this structure, AI agents handle high-speed processing of complex, unstructured data, while humans focus on judgement, context and accountability.

The goal is not to slow systems down with constant intervention, but to combine strengths more effectively. When designed properly, this balance creates high-performance human–agent teams, capable of managing the scale and complexity of an environment facing a dramatic increase in data and activity.
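One way to picture the difference between human-in-the-loop and human-on-the-loop is as an escalation policy rather than a per-action approval gate. The sketch below is purely illustrative (the `Alert` type, severity scores and the threshold value are assumptions, not anything Wilson or Exabeam describes): the agent disposes of routine alerts at machine speed, and only the judgement calls reach a human queue.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    severity: float  # hypothetical agent-assigned score, 0.0 to 1.0
    summary: str

# Assumed policy boundary: above this, a human makes the call.
ESCALATION_THRESHOLD = 0.8

def triage(alerts):
    """Agent handles the bulk autonomously; humans supervise the exceptions."""
    auto_handled, escalated = [], []
    for alert in alerts:
        if alert.severity >= ESCALATION_THRESHOLD:
            escalated.append(alert)     # human-on-the-loop: judgement call
        else:
            auto_handled.append(alert)  # agent acts without per-alert approval
    return auto_handled, escalated
```

The design choice is that human attention is spent only where severity (or ambiguity) warrants it, which preserves the speed advantage Wilson describes while keeping accountability with people.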

Where MSPs fit in the agentic security model

MSPs have traditionally been early adopters of new security technologies, and many are already beginning to integrate agentic capabilities into their environments. However, Wilson suggests the real opportunity lies not in replacing existing approaches, but in applying these tools to the parts of MSP operations that are still highly manual and costly.

One of the most immediate benefits is improving explainability. Earlier security models often generated large volumes of machine-led outputs that required significant human effort to interpret. In contrast, newer agent-driven systems are better suited to translating complex security data into something more usable for human operators, particularly in environments where legacy systems still struggle to make findings intelligible at speed.

As Wilson argues, “we could have high-end models with thousands of rules that were processing the data being piped into your SIEM, your log aggregator, and it would pop out a finding that says, ‘this is a problem’. Your humans then spend hours decoding that.” He contrasts this with newer approaches that reduce that translation burden and make outputs far more consumable within the SOC.

Beyond internal operations, MSPs can also use these capabilities to extend communication with their customers, particularly in high-pressure situations where speed and clarity are critical. Agents can help streamline these workflows and improve how information flows during incidents.

At the same time, he cautions against viewing agentic systems as a simple replacement for existing automation. He points out that some capabilities are often misunderstood or overstated, particularly when applied to structured processes that still require deterministic reliability. “Go back to what are these things bad at? They’re bad at maths. They’re bad at repeatability,” he notes, highlighting the risk of over-relying on agents in the wrong parts of the stack.

For MSPs, the value doesn’t come from replacing what already works, but from integrating agents in the right places. The focus shifts to using them where they improve clarity and efficiency, while established systems are still relied on for consistency and reliability where it matters most.

The digital workforce: when AI agents become employees

Wilson frames governance and accountability around a deeper shift in what these systems are becoming, and the extent to which existing language is already struggling to keep up. He notes that the term ‘AI agent’ has become so expansive that it has effectively lost precision, describing how it can simultaneously mean everything, and nothing at all.

That ambiguity, he suggests, reflects a change in the underlying architecture itself. Rather than short-lived, prompt-based tools, he describes systems designed for continuous operation and execution, explaining that “these are not prompt and response architectures. These have what are called ‘agentic loops’, they run 24 hours a day.”

From this perspective, the shift is not just technical but structural. These systems are no longer episodic tools sitting inside workflows, but persistent entities that carry out work over time.
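The structural contrast Wilson draws can be sketched in a few lines. A prompt-and-response tool runs once per human request; an ‘agentic loop’ observes, decides and acts continuously with no human turn between iterations. This is a generic illustration of the pattern, not any specific product's architecture (the function names and the `max_iters` cap, added here so the sketch terminates, are assumptions):

```python
def agentic_loop(observe, decide, act, max_iters=None):
    """Minimal agentic-loop skeleton: observe -> decide -> act, continuously.

    Unlike a prompt/response tool, nothing waits for a human prompt
    between iterations; in production max_iters would be None (run 24/7).
    """
    i = 0
    while max_iters is None or i < max_iters:
        signal = observe()        # pull state from the environment
        action = decide(signal)   # agent reasoning step
        if action is not None:
            act(action)           # side effect on the environment
        i += 1
```

Because the loop itself initiates action, oversight has to be designed around it (monitoring, scoped permissions) rather than inserted as a prompt-time checkpoint.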

That is why he leans on the idea of ‘digital workers’ to describe them, and extends the comparison into organisational design itself. As he puts it, “I'm going to onboard them, I'm going to assign them an identity, I'm going to assign them access to the applications that they need, and only the applications that they need.”

This framing fundamentally alters how accountability is applied. Instead of treating these systems as conventional software, he suggests they should be handled in the same way organisations think about human actors inside their environment, noting the need to “treat these as potential insider threats, just like I do my humans, rather than treating them as software applications.”
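Wilson's onboarding analogy maps naturally onto an identity record with an explicit allow-list, the way IAM treats a new hire. Everything in this snippet (the `DigitalWorker` type, its field names, the app labels) is a hypothetical illustration of that framing, not an actual Exabeam or OWASP construct:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DigitalWorker:
    """Hypothetical onboarding record treating an agent like an employee."""
    agent_id: str
    allowed_apps: frozenset  # least privilege: only the apps it needs
    monitored: bool = True   # watched like a potential insider threat

def can_access(worker, app):
    """Deny by default: access exists only if it was explicitly granted."""
    return app in worker.allowed_apps

# Onboard an agent with access to exactly two applications.
triage_bot = DigitalWorker("agent-0042", frozenset({"siem", "ticketing"}))
```

The deny-by-default check is the point: an agent, like a new employee, gets an identity, a scoped set of entitlements, and ongoing behavioural monitoring rather than the blanket trust often extended to software.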

MSPs in the age of digital workers

Looking ahead, Wilson places MSPs at the centre of an escalating security challenge shaped by both scale and speed. He highlights a sharp rise in AI-enabled activity, with threat environments becoming increasingly dense and difficult to manage as automated systems generate and process vast volumes of data.

In this context, the traditional MSP role is shifting. Once primarily focused on supporting organisations without in-house security capability, MSPs are now operating in an environment where even large enterprises are struggling to keep pace with the complexity of modern threats.

Wilson emphasises that the accessibility of advanced offensive tools is accelerating quickly, with capabilities emerging far closer to the frontier than many organisations are prepared for, and rapidly diffusing into the wider threat landscape.

Against this backdrop, he sees the opportunity for MSPs in adopting agentic technologies in a structured way, particularly by building hybrid models that combine human expertise with AI systems rather than pursuing full automation.

He also highlights a shift in security analytics, where organisations will need to monitor not just people, but the behaviour of autonomous agents operating inside their environments. For MSPs, this marks a clear opportunity to stand out by building capability in this emerging layer of visibility. Those that move early will be better placed to handle rising complexity and secure higher-value enterprise work.
