How MSSPs can adopt AI with confidence

By Haris Pylarinos, Founder and CEO of Hack The Box

AI is increasingly embedded in cybersecurity operations. For managed security service providers (MSSPs), AI-driven analytics, automated monitoring and machine-assisted response offer a way to manage growing volumes of security data, accelerate investigations and deliver 24/7 service, all without increasing SOC headcount.

But as automation becomes more prevalent, the challenge for MSSPs is knowing whether they can rely on it. Fully autonomous decision-making introduces risk whenever systems are not continuously evaluated, supervised and governed by humans.

There is no doubt that AI can strengthen cybersecurity operations, but only when its capabilities are proven in practice rather than assumed.

Speed alone does not equal resilience  

AI excels at processing data at scale and speed. It correlates large data sets, surfaces anomalies quickly and assists with repetitive operational tasks. But speed without validation of actions and outcomes undermines resilience: automated actions that are poorly tuned or contextually unaware can have unwanted consequences, such as disrupting business processes or obscuring the real nature of an incident.

Adversaries are also increasingly using AI to lower the barrier to attack. Generative models now support reconnaissance, social engineering and vulnerability research at scale, and MSSPs cannot respond effectively without some level of automation of their own. However, automation that operates without oversight becomes a new category of operational risk.

Clearly, AI is neither inherently beneficial nor inherently dangerous. Its impact depends on how rigorously its behaviour is measured and controlled over time.  

Why evaluation must be continuous  

Traditionally, SOC tools were validated through staged testing, rule tuning and periodic reviews. That approach is not enough for AI systems that adapt, learn and make probabilistic decisions. Detection models evolve, attacker techniques change and enterprise environments shift constantly.

As a result, confidence in AI-driven security cannot be established once and then left unchecked; it must be maintained continuously. MSSPs need a method for ongoing evaluation, including benchmarking automated responses against realistic adversarial scenarios and comparing machine outcomes with human analyst judgement.
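
As a concrete illustration, that comparison can start small: score automated verdicts against analyst review on a sample of alerts and track the agreement rate over time. The Python sketch below is a minimal version of that idea; the record fields, the 0.90 floor and the idea of a weekly sample are illustrative assumptions, not a reference to any specific product.

    from dataclasses import dataclass

    @dataclass
    class TriageRecord:
        alert_id: str
        ai_verdict: str       # e.g. "malicious" or "benign"
        analyst_verdict: str  # the human analyst's conclusion on the same alert

    def agreement_rate(records: list[TriageRecord]) -> float:
        """Fraction of alerts where the automated verdict matched the analyst's."""
        if not records:
            return 0.0
        matches = sum(1 for r in records if r.ai_verdict == r.analyst_verdict)
        return matches / len(records)

    def drift_alert(weekly_rates: list[float], floor: float = 0.90) -> bool:
        """Flag when the latest sampled agreement rate falls below the agreed floor."""
        return bool(weekly_rates) and weekly_rates[-1] < floor

Run on a regular sample of alerts, a falling agreement rate becomes an early, measurable signal that a model is drifting away from operational reality and needs retuning or closer supervision.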

Regular, ongoing testing under real-world conditions provides evidence of how AI-driven security systems behave under pressure, how they degrade over time and where human intervention is required.   

Without this feedback loop, cybersecurity automation risks drifting away from operational reality.  

Humans remain central to effective security operations  

AI changes how analysts work but it does not remove the need for human involvement. Automation is at its most effective when it is applied to high-volume, repeatable tasks such as alert triage and initial investigation. This allows cybersecurity analysts to focus on contextual analysis, decision validation and escalation.  

Humans must remain responsible for overseeing automated actions. Analysts need to understand when AI recommendations should be followed, when they should be questioned and when they should be overridden.   
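
One common way to encode that boundary is a confidence gate: automation acts alone only above an agreed confidence level, and everything below it is routed to a person. The sketch below shows the pattern in minimal Python; the threshold values are hypothetical and would in practice be set, and re-tuned, from the evaluation loop described above.

    def route_recommendation(action: str, confidence: float,
                             auto_threshold: float = 0.95,
                             review_threshold: float = 0.70) -> str:
        """Decide whether an AI recommendation runs on its own or goes to a human.

        The thresholds here are illustrative placeholders, not recommended values.
        """
        if confidence >= auto_threshold:
            return f"auto-execute: {action}"    # routine, well-evidenced case
        if confidence >= review_threshold:
            return f"analyst-review: {action}"  # plausible, but a human confirms first
        return f"escalate: {action}"            # low confidence: the analyst decides

The value of the pattern is not the specific numbers but that the hand-off rules are explicit, auditable and adjustable as confidence in the system is earned.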

Cybersecurity decisions often carry regulatory, legal and commercial consequences that cannot be delegated entirely to a machine. Keeping humans in the loop is not a limitation of AI adoption; it is an essential requirement for responsible use.  

Upskilling for hybrid environments  

As AI becomes more integrated into SOC workflows, skills requirements are shifting. Analysts need more than technical knowledge of threats and response procedures. They must also understand how AI systems behave, where they are likely to fail and how to assess outputs.  

For MSSPs, upskilling has to be a priority. Teams that can supervise, test and challenge automation will be better positioned to deliver consistent outcomes and demonstrate operational maturity to customers. Training must also work in both directions: developing people, and feeding analyst insight back into the continuous improvement of AI-powered systems.

The hybrid SOC as the practical model  

The concept of a fully autonomous SOC may seem appealing, but it is not realistic. The cost of error is just too high. A hybrid model is more sustainable. Machines handle scale, speed and repetition, while humans provide context, judgement and accountability.  

Looking ahead, the competitive advantage for MSSPs will not be defined by who automates fastest, but by who can prove that their automation behaves predictably, is continuously evaluated and operates under clear human oversight.

With evidence, governance and skilled people in place, automation becomes not just efficient but demonstrably dependable.  
