How to get AI into production

You know you need AI. You have already committed to delivering AI, and you’re working to build your organisation into an AI-driven enterprise. To do so, you have hired top-notch data scientists and invested in data science tools, which is a great start. Yet, somehow, your AI projects are still not getting off the ground. By Sivan Metzger, Managing Director of MLOps & Governance at DataRobot.


Research shows that the share of AI models deemed ‘production worthy’ but never put into production is anywhere between 50 and 90%. A recent survey by NewVantage Partners found that a mere 15% of leading enterprises have deployed any AI into widespread production. These are staggering figures: models are being built, but they are not going anywhere. So why is this so difficult? What are leaders and teams missing? And what can they do to finally obtain the long-anticipated value from AI?

Data scientists typically do not see their role as including the production rollout and management of their models, while IT or Operations, who usually own all production services across the company, are reluctant to take ownership of machine learning services. This is understandable from both perspectives: neither group is familiar with the other’s concerns and considerations. However, without bridging the gap between them, progress will simply not happen.

Enter MLOps. MLOps solves this challenge by bridging the inherent gap between data teams and IT Ops teams, providing the capabilities both need to work together to deploy, monitor, manage, and govern trusted machine learning models in production without introducing unnecessary risk to the organisation.

What is MLOps?

MLOps, which takes its name from DevOps, the mature industry practice for managing software in production, is a practice of alignment and collaboration between data scientists and Operations professionals to manage production machine learning. It ensures that the burden of productionising AI does not rest entirely on the data scientist’s shoulders, and that a data scientist doesn’t simply throw a model ‘over the wall’ to the IT team and then forget about it.

Additional issues stem from the fact that machine learning models are typically written in various languages, on ML development platforms unfamiliar to Ops teams, yet they need to run in complex production environments that are equally unfamiliar to the data teams. As a result, models are often built without regard for the sensitivities and characteristics of those environments and systems, which only amplifies the gap.
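To make that gap concrete, here is a minimal sketch of how a data team might wrap a trained model behind a plain HTTP prediction service that Ops can deploy and monitor like any other application. It assumes a scikit-learn style classifier saved as churn_model.pkl; the model, file name, request payload, and port are all illustrative assumptions, not a description of any particular product.

```python
# Hypothetical sketch: wrapping a trained model behind a plain HTTP
# prediction service, so Ops can run and monitor it like any other app.
# The model file, feature payload, and port are illustrative assumptions.
import pickle

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

with open("churn_model.pkl", "rb") as f:  # assumed artifact handed over by the data team
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                  # e.g. {"rows": [{"tenure": 12, ...}]}
    features = pd.DataFrame(payload["rows"])
    scores = model.predict_proba(features)[:, 1]  # assumes a scikit-learn style classifier
    return jsonify({"scores": scores.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The point of the sketch is the interface rather than the framework: once a model looks like an ordinary service, it can slot into the deployment and monitoring processes the Ops team already runs.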

What problems does it solve?

With MLOps, users have a single place to deploy, monitor, and manage all of their production models in a fully governed manner, regardless of how they were created or where they are deployed. This is essentially the ‘missing link’ between building machine learning models and actually obtaining business value from them.

MLOps removes the business risk that comes with deploying machine learning models without monitoring them, by ensuring there are processes for problem resolution and model replacement. These capabilities are especially important during tumultuous times, when the data feeding models can change quickly. MLOps also ensures that all centralised production machine learning processes work under a robust governance framework across your organisation, sharing the load of production management with teams you already have in place.
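As an illustration of the kind of monitoring check this implies, the sketch below compares recent production inputs against a snapshot of the training data and flags numeric features whose distributions appear to have drifted. The file names and the 0.05 significance threshold are illustrative assumptions, and the check itself is a generic statistical test rather than any vendor’s API.

```python
# Hypothetical sketch of a drift check that MLOps tooling would automate:
# compare recent production inputs against the training data and flag
# numeric features whose distributions have shifted.
import pandas as pd
from scipy.stats import ks_2samp

training = pd.read_csv("training_features.csv")     # assumed snapshot of the training data
production = pd.read_csv("last_week_requests.csv")  # assumed log of recent scoring requests

drifted = []
for column in training.select_dtypes("number").columns:
    statistic, p_value = ks_2samp(training[column], production[column])
    if p_value < 0.05:  # distributions differ more than chance alone would suggest
        drifted.append((column, round(statistic, 3)))

if drifted:
    print("Drift detected, consider retraining:", drifted)
else:
    print("No significant feature drift detected.")
```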

DataRobot’s MLOps centralised hub allows you to foster collaboration between data science and IT teams, while maintaining control over your production machine learning to scale up your AI business adoption with confidence.

How do organisations derive value from AI?

With the right processes, tools, and training in place, businesses can reap many benefits from AI using MLOps, including gaining insight into areas where the data might be skewed. One of the most frustrating parts of running AI models, especially right now, is that the data is constantly shifting. With MLOps, businesses can quickly identify and act on new information, retraining production models on the latest data using the same data pipeline, algorithms, and code that created the original.
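A minimal sketch of that retraining step is shown below. It assumes the original model was a scikit-learn pipeline and that refreshed, labelled data is available as latest_labelled_data.csv; both the pipeline and the file are hypothetical stand-ins for whatever the original project used.

```python
# Hypothetical sketch of the retraining step described above: rebuild the
# model with the same pipeline and code used for the original, but on the
# latest data, and evaluate it before considering promotion.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_pipeline():
    # Assumed to mirror the preprocessing and algorithm of the original model.
    return make_pipeline(StandardScaler(), GradientBoostingClassifier())

latest = pd.read_csv("latest_labelled_data.csv")  # assumed refreshed training set
X, y = latest.drop(columns=["churned"]), latest["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

candidate = build_pipeline().fit(X_train, y_train)
print("Candidate accuracy on held-out data:", candidate.score(X_test, y_test))
# In practice the candidate would only replace the production model after
# review and approval under the governance framework described above.
```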

This is what allows users to scale AI services in production while minimising risk. Scaling AI across the enterprise is easier said than done: numerous roadblocks can stand in the way, such as a lack of communication between the IT and data science teams or a lack of visibility into AI outcomes. With MLOps, you can support multiple types of machine learning models created by different tools, as well as the software dependencies those models need across different environments.

Adopting MLOps best practices, processes, and technologies will get your AI projects out of the lab and into production where they can generate value and help transform your business. With MLOps, your data science and IT Operations teams can collaborate to deploy and manage models in production using interfaces that are relevant for each role. You can continuously improve model performance with proactive management and, last but not least, you can reduce risk and ensure regulatory compliance with a single system to manage and govern all your production models.

Put it all together and you can lead a true transformation throughout your organisation. By bridging the gap between IT and data science teams, you can truly adopt AI and demonstrate value from it, while scaling your models in production and eliminating unnecessary AI-related risk for your company.

 
