How is the cloud encouraging hyperautomation, and why should I care?

Every business needs to be thinking in hyperautomation terms and investing in tools such as DPCs - or risk being left behind. By Alasdair Hodge, Principal Engineer and Solutions Architect, Cloudsoft.


Digital Transformation initiatives have naturally encouraged organisations to consume infrastructure resources through APIs (Application Programming Interfaces), which can save costs, increase productivity and encourage innovation. There is still, of course, a place for traditional paper-based processes in the chain, but by and large IT infrastructure can now be provisioned through an API – great news for businesses looking to drive scale, and the foundation of what we now know as hyperautomation.
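To make this concrete, here is a minimal, illustrative sketch of provisioning a virtual machine through an API, using the AWS SDK for Python (boto3); the region, machine image ID and instance type are placeholders rather than recommendations.

    import boto3

    # Illustrative only: region, AMI ID and instance type are placeholders
    ec2 = boto3.client("ec2", region_name="eu-west-2")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Provisioned {instance_id} with a single API call")

The same call can be scripted, scheduled or embedded in a delivery pipeline, which is what makes automation at this scale practical.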

Defining hyperautomation

Although it is a relatively new term, hyperautomation is defined by Gartner as being “an approach that enables organisations to rapidly identify, vet and automate as many processes as possible using technology, such as robotic process automation (RPA), low-code application platforms (LCAP), artificial intelligence (AI) and virtual assistants”. In recent years it has shifted from an option to a condition for survival and, according to the latest forecast by the analyst firm, the global market for technology that enables hyperautomation will reach $596.6 billion in 2022.

So, what’s really behind this trend?

The blend of hyperautomation and the cloud allows engineers to be more creative and to build tools that automate wider processes. The public cloud has taken this to the next level: not only are virtual disks, machines and networks provisioned through APIs, but we can also accelerate complex work – such as big data analysis – and, through digital twins, automate repetitive yet intuitive tasks like reading invoices, sales reports, contracts and official documents.
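As a rough illustration of the document-reading point, a managed document-analysis API can be driven in a few lines. The sketch below assumes Amazon Textract via boto3, with a placeholder file name and no error handling.

    import boto3

    textract = boto3.client("textract", region_name="eu-west-2")

    # Placeholder file: a scanned invoice, sales report or contract page
    with open("invoice.png", "rb") as f:
        result = textract.detect_document_text(Document={"Bytes": f.read()})

    # Keep only the detected text lines for downstream processing
    lines = [block["Text"] for block in result["Blocks"] if block["BlockType"] == "LINE"]
    print("\n".join(lines))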

Organisations have been quick to notice the increased agility hyperautomation affords them to drive innovation and bring new products to market. Meanwhile, the speed and efficiency of the engineers who have built tools that make APIs easier to consume is itself a key driver behind hyperautomation.

Partnering with hyperscale cloud specialists

Historically, many big organisations (think banks and insurers) viewed IT infrastructure as so crucial to their day-to-day operations that having their own data centre was a necessity for competitive edge. Those days are long gone; now most leading organisations take the opposite view: their IT infrastructure is so critical to their day-to-day operations that they must outsource their data centre to drive other benefits across the organisation. Increasingly, that means the cloud.

This approach is nothing new and the logic behind it is undeniable – why should an organisation waste valuable time, creativity and resources employing a team of talented engineers to babysit operating systems and install patches when it could partner with a specialist, freeing up that time to drive innovation and improve business processes?

Enter the hyperscale cloud providers. By partnering with a company like AWS (Amazon Web Services), Microsoft Azure or Google Cloud Platform, an organisation not only gains access to leading APIs but also benefits from truly global scale. This gives organisations even greater opportunity to consume IT infrastructure in different ways, and lets businesses tap into the likes of AWS’s network to deploy their resources anywhere in the world and reach a global audience for a fraction of the cost.
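As a small sketch of that global reach, the same API surface can be queried to discover where workloads could run; the snippet below simply lists the regions available for one service, assuming boto3 and an AWS account.

    import boto3

    # List the regions in which EC2 workloads could be deployed
    regions = boto3.session.Session().get_available_regions("ec2")
    print(f"{len(regions)} regions available: {', '.join(sorted(regions))}")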

Security and resilience

The next concern for businesses is usually the security of their data. There is a common misconception that moving to the public cloud – and the associated move towards hyperautomation – means losing control. An organisation will, admittedly, cede a small amount of control, but that trade-off brings with it huge convenience as well as big gains.

Not only does an organisation have full control over where it geographically stores its data – think GDPR – but crucially it can still maintain that data in a highly available way by using the resilient mechanisms built into hyperscale cloud platforms. The denial-of-service protection that hyperscale cloud providers offer, for example, is among the best mitigations against such threats thanks to the huge economies of scale they can support. One only needs to look at the many cyber-attacks of recent years to see how critical this is to a business’s reputation and bottom line, so it’s easy to see why the public cloud is so attractive to organisations and how this, in turn, drives the march towards hyperautomation.
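As a brief sketch of the data-residency point, the snippet below pins an object-storage bucket to a specific region and enables versioning as one example of the built-in resilience mechanisms mentioned above; the bucket name is a placeholder and must be globally unique.

    import boto3

    s3 = boto3.client("s3", region_name="eu-west-2")

    # Placeholder bucket name; S3 bucket names must be globally unique
    bucket = "example-eu-customer-data"

    # Pin the data to a specific geography (here, London) - relevant for GDPR
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
    )

    # Versioning is one of the built-in resilience mechanisms
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )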

Using DPCs to coordinate the information orchestra

In the wake of the TSB IT disaster in 2018, when 1.9 million customers were locked out of their bank accounts for a week, banks and other high-profile institutions moved quickly to shore up their systems. Previously, resilience in the banking world was focused on ensuring the organisation had enough capital to deal with any fines. However, the TSB fiasco showed the world that the operational resilience of a firm depends on the uptime of its IT systems, and how critical it is that they remain functional.

Of course, large organisations’ IT estates have evolved via a hybrid model, with increasing complexity, interconnectedness and interdependence. With such complex architectures to oversee, and with recovery time objectives (RTOs) of a few seconds, we must recognise that technologies will fail; partial failure is one of the fundamental realities of distributed systems. More than ever, what is needed is for workloads and applications to be continually managed. But how can organisations do this effectively?

We work with a large, multinational bank that deals with sensitive information on a daily basis and has a complex network of systems supporting functions such as underwriting and fraud detection. Because banks and other large financial institutions take trading positions that often run into billions of dollars, they must have real-time insight into how much risk they are exposed to. If any one of these systems goes down for any length of time, the company is unable to operate – resulting not only in reputational damage, but potentially also in regulatory fines.

In line with the ITIL 4 guiding principle of ‘optimise and automate’, the automation of failure detection and recovery is a must. This can only be achieved by tooling that operates across both on-premises and external environments, managing your hybrid estate and enabling the (hyper)automation of workloads ‘at the right time, using the right technology, in the right location, for the right price’.
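A deliberately simplified sketch of what automated failure detection and recovery can look like is shown below; the workload names, health-check URLs and the recover() hook are hypothetical stand-ins for whatever monitoring and orchestration interfaces a DPC-style tool would actually drive.

    import time
    import urllib.request

    # Hypothetical workloads and health endpoints across a hybrid estate
    ENDPOINTS = {
        "payments-api": "http://payments.internal:8080/health",
        "fraud-scoring": "http://fraud.internal:8080/health",
    }

    def check_health(url: str, timeout: float = 2.0) -> bool:
        """Probe a health endpoint; timeouts or non-200 responses count as failure."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except Exception:
            return False

    def recover(workload: str) -> None:
        """Hypothetical recovery hook: restart locally or fail over to another location."""
        print(f"recovering {workload} ...")

    # Continuous management loop: detect failures and react within a tight RTO
    while True:
        for workload, url in ENDPOINTS.items():
            if not check_health(url):
                recover(workload)
        time.sleep(5)

In practice, a DPC-style tool replaces this kind of hand-rolled loop with policy-driven automation applied consistently across the whole estate.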

Gartner recently defined a new product category that meets this need – the Digital Platform Conductor (DPC) tool. DPCs go beyond solving the resilience problem; they are a solution to the complexity crisis many large organisations are facing. Our tool, AMP, has been deployed across a range of verticals, including banking and defence, and is crucial for dealing with automation at scale – or hyperautomation – and for bringing the public-cloud-like benefits of such automation to the entirety of the IT estate.

In today’s social-media-fuelled news cycle, businesses can’t afford for everyday failures to bring down their IT systems. The risks are too great: a firm’s systems must be available as much as possible, because its operational resilience depends on them – and hyperautomation, driven by the automation benefits of the public cloud, is the answer. To adapt one of my favourite quotes: “software hasn’t just eaten the world, it devoured it and has come back for seconds”.

That’s why every business needs to be thinking in hyperautomation terms and investing in tools such as DPCs - or risk being left behind.
