The Responsible Use of AI: A Step into the Future for MSPs

By Ian Wharton, Technical Architect at Principle Networks.


Artificial Intelligence (AI) is everywhere. It’s a topic clients often ask me about, and there’s no escaping the fact that it’s here to stay. Managed Service Providers (MSPs) are at the forefront of innovation, constantly adapting to meet the ever-changing needs of our customers. And AI is no longer just a buzzword; it is going to shape the future of our sector.

Generative AI - AI systems capable of creating content - has already become part of everyday life. Answering questions, writing sales prompts, drafting emails, composing articles, generating images: you name it, it can probably do it. However, it’s not all positive.

Conversational AI leaks - the loss of data through chatbot interactions - are on the rise and pose a growing threat to organisations and their customers. Another challenge is shadow AI, where staff use unsanctioned AI tools, further exposing the business to threats or introducing AI hallucinations - misinformation generated by AI models - into their work.

It all raises a question: do MSPs need to restrict access to AI tools, or are there responsible ways to use them?

Navigating a Complex Landscape 

Eighteen months ago, most people had not heard of tools like ChatGPT, Gemini or Microsoft Copilot; now they are used by millions, not just socially but often in a work environment. There is no doubt that generative AI offers huge potential to make employees and organisations more productive, taking care of many administrative and process-driven tasks so that we don’t have to. But these tools still lack proper vetting and control mechanisms, and without the necessary governance and security features, generative AI poses more risks than benefits.

For example, in May 2023, electronics giant Samsung reportedly banned generative AI tools after discovering an engineer had accidentally leaked sensitive internal source code by uploading it to ChatGPT. The concern for organisations is that data shared with AI chatbots is stored on external servers owned by the companies operating the service, and there is no easy way to access and delete it. Generative AI tools store chat histories by default and use these conversations to train future Large Language Models (LLMs). Users can change this setting manually, but it remains unclear whether the change applies retrospectively, leaving organisations vulnerable.

On the one hand, these platforms offer huge potential for employees to collaborate and become more productive; on the other, they pose a serious threat to data integrity and customer confidentiality, risking non-compliance and potentially fines.

Until appropriate guard rails can be put in place, it’s recommended that MSPs implement stringent access controls: firstly, to ensure only authorised individuals can access sensitive information, which is best practice in any security posture; and secondly, to restrict access to the tools themselves. One day we will no doubt be in a much better position for the safe and compliant use of these types of AI tools, but until then it is vital to prioritise data governance, transparency and accountability to protect organisational and customer data.
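To make the tool-access point concrete, here is a minimal Python sketch of deny-by-default gating, assuming the MSP maintains an allow-list of sanctioned tools per role and can query MFA status from its identity provider. All names below are illustrative, not real products or APIs.

    # A minimal sketch of deny-by-default gating for AI tool access.
    # The roles, tool names and MFA flag are illustrative assumptions.
    SANCTIONED_TOOLS = {
        "engineer": {"copilot-enterprise"},
        "marketing": {"copilot-enterprise", "internal-image-generator"},
    }

    def may_use_tool(role: str, tool: str, mfa_verified: bool) -> bool:
        """Deny unknown roles, unsanctioned tools and unverified sessions."""
        if not mfa_verified:
            return False
        return tool in SANCTIONED_TOOLS.get(role, set())

    assert may_use_tool("engineer", "copilot-enterprise", mfa_verified=True)
    assert not may_use_tool("engineer", "public-chatbot", mfa_verified=True)
    assert not may_use_tool("marketing", "copilot-enterprise", mfa_verified=False)

The deny-by-default shape matters more than the details: access is granted only when a known role, a sanctioned tool and a verified session all line up.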

The Responsible Use of AI 

Let’s not just focus on the negative, though. AI will also be the fastest way to monitor for and identify cybersecurity threats automatically, reducing the need for human intervention and speeding up the remediation process.

For example, one of the biggest challenges for MSPs is the sheer volume of data passing through networks and applications. How do security teams guarantee this data remains secure, private and out of harm's way? Implementing AI algorithms to analyse aggregated data from various sources, including network devices, endpoints and security appliances, can transform how organisations detect and respond to cybersecurity threats. 
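As a rough illustration of that idea, the sketch below uses an isolation forest (scikit-learn) to flag unusual events in aggregated telemetry. The feature columns are assumptions made for the example, not a real schema.

    # A minimal sketch of anomaly detection over aggregated telemetry.
    # Feature columns are illustrative: [bytes_out, bytes_in,
    # failed_logins, distinct_ports] per host per interval.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    baseline = np.array([
        [1200, 8400, 0, 3],
        [900, 7100, 1, 2],
        [1500, 9000, 0, 4],
        # ...in practice, many more rows drawn from normal traffic
    ])

    model = IsolationForest(n_estimators=100, contamination=0.01, random_state=42)
    model.fit(baseline)

    # A bursty event with many failed logins and port touches.
    new_events = np.array([[95000, 1200, 30, 45]])
    flags = model.predict(new_events)             # -1 = anomalous, 1 = normal
    scores = model.decision_function(new_events)  # lower = more anomalous

    for event, flag, score in zip(new_events, flags, scores):
        if flag == -1:
            print(f"Anomalous: {event.tolist()} (score {score:.3f}) -> escalate")

The appeal of this approach is that nobody has to enumerate attack signatures up front; the model learns what normal looks like and surfaces departures from it.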

AI-based security tools are adaptive and self-learning. As your network environment evolves and new threats emerge, AI algorithms can adjust their behaviour and defences, leveraging feedback loops to improve continuously and scale.
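One hedged sketch of such a feedback loop, assuming analysts label reviewed alerts as benign (0) or malicious (1) and the detector supports incremental updates (here via scikit-learn’s partial_fit). The feature vectors are placeholders.

    # A minimal sketch of a self-improving detector: each analyst
    # verdict nudges the model without a full retrain. Illustrative only.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier(random_state=42)

    # Seed the model with a few historical, analyst-labelled alerts.
    seed_X = np.array([[0.1, 0.0, 0.2], [0.9, 0.8, 0.7]])
    seed_y = np.array([0, 1])  # 0 = benign, 1 = malicious
    clf.partial_fit(seed_X, seed_y, classes=np.array([0, 1]))

    def incorporate_feedback(features: np.ndarray, verdict: int) -> None:
        """Fold one analyst verdict back into the live model."""
        clf.partial_fit(features.reshape(1, -1), np.array([verdict]))

    # An analyst confirms a flagged event was a real threat.
    incorporate_feedback(np.array([0.8, 0.9, 0.6]), verdict=1)
    print(clf.predict(np.array([[0.85, 0.85, 0.65]])))  # classify a similar event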

AI will also continuously monitor data to detect suspicious activity, identify threats and prioritise security alerts based on their severity and potential impact, allowing security teams to focus their resources on addressing and resolving the most critical issues.  
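A simple way to picture that prioritisation is the triage sketch below, which ranks alerts by a score combining model confidence with asset criticality. The weighting is an assumption for the example, not an industry standard.

    # A minimal sketch of severity-based alert triage using a heap.
    # "confidence" and "asset_criticality" are illustrative inputs.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Alert:
        priority: float                        # negated score: highest first
        description: str = field(compare=False)

    def triage(raw_alerts: list[dict]) -> list[Alert]:
        queue: list[Alert] = []
        for a in raw_alerts:
            score = a["confidence"] * a["asset_criticality"]
            heapq.heappush(queue, Alert(-score, a["description"]))
        return [heapq.heappop(queue) for _ in range(len(queue))]

    for alert in triage([
        {"confidence": 0.95, "asset_criticality": 1.0,
         "description": "Beaconing from a domain controller"},
        {"confidence": 0.60, "asset_criticality": 0.2,
         "description": "Port scan against the guest network"},
    ]):
        print(f"{-alert.priority:.2f}  {alert.description}")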

One of the most significant benefits of AI-based security tools is that they provide IT managers with reliable, evidence-based information. The ability to analyse patterns and anomalies in network traffic means the AI can alert IT personnel to the threats that require human review, and these capabilities are improving rapidly. Now, it’s a case of learning to read and analyse the AI’s recommendations appropriately, as well as managing and refining its rules to train it for your purposes.

It’s a Marathon, Not a Sprint 

Delivering effective AI services requires a holistic approach, encompassing authentication, access rights management, data tracking and leakage prevention. Implementing mechanisms such as multi-factor authentication and biometric verification is the first line of defence when trying to prevent unauthorised access to services and data. The same approach should be applied to the use of AI tools. Managing employee access rights ensures the right people have the right permissions to access the tools needed to do their jobs. Understand who will be using them and why. Integration with existing infrastructure is another crucial consideration, as you must ensure compatibility with current systems and processes.  
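On the data tracking and leakage prevention point, one possible control is an outbound proxy that screens prompts before they reach an external AI service. The sketch below is an assumption-laden illustration; the patterns shown are examples, not a complete DLP policy.

    # A minimal sketch of outbound prompt screening for AI services.
    # The patterns are illustrative; a real DLP policy would be broader.
    import re

    BLOCKED_PATTERNS = [
        re.compile(r"\b(?:\d[ -]?){13,19}\b"),              # card-like digit runs
        re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # key material
        re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                # AWS-style access key IDs
    ]

    def screen_prompt(prompt: str) -> str | None:
        """Return the prompt if clean, or None to block the request."""
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(prompt):
                return None
        return prompt

    assert screen_prompt("Summarise this meeting for me") is not None
    assert screen_prompt("Debug this: -----BEGIN RSA PRIVATE KEY-----") is None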

If we’re to use AI responsibly, adhering to regulatory compliance is non-negotiable, and addressing bias mitigation, fairness and accountability is central to responsible use. It’s also imperative to evaluate the cost of implementing and maintaining AI-based tools against the potential benefits and return on investment in terms of threat detection and response capabilities.

AI is coming, but you should remain in control of how it is safely introduced into organisational processes. An environment designed for the responsible use of AI keeps customer data secure and lays the foundation for future success. While there is a lot to get excited about, for now MSPs should adopt a caution-first approach.
