Service providers on the edge

Common automation frameworks and the end of speed and security compromises. By Bart Salaets, Senior Solution Architect Director, F5 Networks.


In the service provider realm’s not-too-distant past, there was a distinct line in the sand.


On the one side, networking and security teams spearheaded the evolution to an NFV architecture, with a strong focus on virtualising network and security functions. Forays into the world of automation were tentative at best.

On the other side, developers enthusiastically embraced cloud platforms, DevOps methodologies and automation via CI/CD pipelines. Rapid application deployment and delivery was, and still is, the name of the game.

The edge is where they both come together, and applications can live harmoniously side-by-side with network functions.

Thanks to its distributed nature, and fuelled by the gradual global rollout of 5G, edge computing is finally starting to empower service providers to offer new solutions and services that simultaneously increase revenue streams and cut network transport costs.

Rather than transmitting data to the cloud or a central data warehouse for analysis, processing can take place at the ‘edge’ of a network, reducing network latency, increasing bandwidth and delivering significantly faster response times.

Take self-driving cars.

Hosting applications, with all their associated data, in a centralised location like a public cloud can yield end-to-end latency in the tens of milliseconds. That is far too slow. You get the same result if the application stays central and only the network function moves to the edge. However, move both the application and the network function to the edge and it is possible to slash latency to a few milliseconds. Now we’re in business.

Virtualised Content Delivery Networks (CDNs) are another compelling case in point.

To date, third-party CDNs have tended to be hosted at a peering point or in a centralised data centre. This is changing, with some canny service providers building their own edge computing-based CDNs to serve local video content and IPTV services, all while saving on transit and backhaul costs.

There are different business models available to bring these kinds of use cases to life.

The simplest scenario is a colocation model: the service provider allows physical access to an edge compute site, third parties bring their own hardware and manage everything themselves, and the service provider is responsible only for space, power and connectivity.

A far more interesting approach is for the service provider to offer Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) options to third parties through a shared edge compute platform. Service providers can build these themselves or with specialist partners.

The power of automation

Automation is the secret ingredient to making it all work.

In the context of cloud computing, automation is critical for developers to publish new versions of code at pace and with agility. 

In the networking world of NFV, automation is key to driving down a service provider’s operational costs. Previously, network and service provisioning were manual, time-consuming tasks. Today, while objectives may differ, the tooling and techniques are the same – or at least shareable – between network and developer teams. Applications and network functions co-exist in an edge compute environment.

So how can developers automate the deployment of applications and associated application services in the cloud?

For the purpose of this article, we’re concentrating on application services automation. It is worth noting that the steps described below can easily be integrated into popular configuration management and provisioning tools such as Ansible or Terraform, which are in turn complemented by CI/CD pipeline tools such as Jenkins.

The first step is bootstrapping: introducing the virtual machine that will deliver the application services into the cloud of choice.

Next is onboarding, which means applying a basic configuration with networking and authentication parameters (e.g. IP addresses, DNS servers). Finally, there’s the actual deployment of application services – such as ADC or security policies – using declarative Application Programming Interfaces (APIs).
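To make those steps concrete, here is a minimal Python sketch of the onboarding and deployment calls against a generic virtual appliance. The management address, credentials, endpoints and payload fields are invented for illustration – they are not any particular vendor’s API – and in practice calls like these would sit inside Ansible or Terraform modules driven from a CI/CD pipeline.

```python
# Illustrative sketch only: endpoints, payloads and credentials are
# hypothetical, not a specific vendor's API.
import requests

APPLIANCE = "https://192.0.2.10"        # management address of the new VM
AUTH = ("admin", "example-password")    # placeholder credentials

# Step 1 (bootstrapping the VM itself) is normally handled by the cloud or
# virtualisation platform, e.g. via a Terraform provider, and is omitted here.

# Step 2: onboarding - push basic networking and authentication parameters.
onboarding = {
    "hostname": "edge-adc-01.example.net",
    "dns": {"servers": ["192.0.2.53"]},
    "selfIp": {"address": "198.51.100.10/24", "vlan": "internal"},
}
requests.post(f"{APPLIANCE}/api/onboard", json=onboarding,
              auth=AUTH, timeout=30).raise_for_status()

# Step 3: deploy the application service (an ADC virtual server plus a
# security policy) as a single declarative document.
service = {
    "tenant_web": {
        "virtualServer": {"address": "203.0.113.80", "port": 443},
        "pool": ["10.0.0.11", "10.0.0.12"],
        "wafPolicy": "baseline",
    }
}
requests.post(f"{APPLIANCE}/api/declare", json=service,
              auth=AUTH, timeout=30).raise_for_status()
```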

The last point is critical.

Imperative APIs, which most vendors offer, mean you tell the system what to do at every juncture. Firewalls are a good example: you need to create address lists and align them with firewall rules, group the rules into a policy, and then assign that policy to an interface. Each step is a distinct REST API call that must succeed in the correct sequence, otherwise everything fails. Contorting all of this into an automation tool is expensive and takes time.
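Sketched in Python, an imperative firewall workflow might look like the following. The endpoint names and payloads are invented for illustration, but the shape is typical: four dependent REST calls that the automation tool must issue in the right order and check individually.

```python
# Hypothetical imperative workflow: every object needs its own call, and
# each call depends on the previous one having succeeded.
import requests

FW = "https://192.0.2.20/api"
AUTH = ("admin", "example-password")

def post(path, payload):
    r = requests.post(f"{FW}{path}", json=payload, auth=AUTH, timeout=30)
    r.raise_for_status()   # a failure here leaves earlier objects orphaned
    return r.json()

post("/address-lists", {"name": "web-servers",
                        "addresses": ["10.0.0.11", "10.0.0.12"]})
post("/rules", {"name": "allow-https", "destination": "web-servers",
                "port": 443, "action": "accept"})
post("/policies", {"name": "edge-policy", "rules": ["allow-https"]})
post("/interfaces/outside", {"policy": "edge-policy"})
```

The ordering knowledge, and any clean-up after a partial failure, have to live in the automation tool itself – and that is where the cost comes from.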

Declarative APIs are a different beast. You tell the system what you want, and it figures out the way ahead. With one declaration (in JSON or YAML format) you could, for instance, define all ADC and security service parameters and hand it to the system in a single REST API call. The outcome is either success (the service has been deployed) or failure, in which case the overall system remains unaffected. There is no requirement for intelligence in the automation system; the intelligence stays within the systems you are configuring, which dramatically reduces automation costs.
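By contrast, a declarative deployment collapses to one document and one call. The declaration schema and endpoint below are again invented for illustration rather than any specific product’s format.

```python
# Hypothetical declarative workflow: one document, one call, one outcome.
import requests

declaration = {
    "schemaVersion": "1.0",
    "tenant_web": {
        "virtualServer": {"address": "203.0.113.80", "port": 443},
        "pool": {"members": ["10.0.0.11", "10.0.0.12"], "monitor": "https"},
        "firewall": {"allow": [{"port": 443, "source": "any"}]},
    },
}

resp = requests.post("https://192.0.2.10/api/declare",
                     json=declaration,
                     auth=("admin", "example-password"),
                     timeout=60)

# The target either converges on the declared state or rejects the
# declaration as a whole; the automation tool only checks one result.
print("deployed" if resp.ok else f"rejected: {resp.status_code}")
```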

The exact same steps can be taken to provision a virtual network function in an NFV environment. A declarative API markedly simplifies integration with end-to-end NFV orchestration tools. The orchestrator doesn’t need to know the individual steps to configure a service or network function; it simply pushes a single JSON declaration with the parameters the system needs to set up the service. Again, the intelligence on ‘how’ to configure the service stays within the system being configured.

Through closer alignment between networking and developer disciplines, we can now build a distributed telco cloud with a common automation framework for applications and network functions. It is agile and secure at every layer of the stack – from the central data centre all the way to the far edge – and can even span into the public cloud.

Industry-wide, we expect common automation frameworks that enable the deployment of applications and their services, as well as network functions, to become the norm in the coming years – particularly as the 5G rollout continues worldwide. The pressure is building for service providers to unify siloed teams, get agile and start living more on the edge.

 

 
