Enabling successful 5G rollouts through effective orchestration

By Brooke Frischemeier, Sr Director of Product Management at Robin.io


With 3 billion 5G mobile subscriptions expected by 2026, the race is truly on for operators and providers across the world to deliver adaptable, dependable mobile networks. Cloud-native technology is becoming increasingly important for providers and operators, allowing them to respond to market fluctuations and their competitors as they look to efficiently roll out and iterate services. As the number of 5G services continues to increase, so too does the requirement for unified and optimized operations.

To this end, operators must deliver more new services, at greater speed and throughput than ever before, while meeting lower latency targets and more demanding Quality of Service (QoS) requirements. Those who adopt fully orchestrated cloud-native platforms will be best placed to succeed, gaining deeper market penetration and advanced lifecycle operations. Utilizing Kubernetes platforms, along with unified operations models and fully shared resource pools, is the best way to reduce time to market with flexible and profitable 5G solutions.

Supporting 5G services with innovative edge computing

5G networks are already beginning to revolutionize how we live our lives, offering real-time connections on an unprecedented scale. However, the spectrum of 5G services now on offer requires significant amounts of bandwidth whilst needing to be delivered at much lower latency than seen previously. As Internet of Things (IoT) devices are broadly embedded across a wide variety of industries – from energy to agriculture – companies need reliable, self-adapting connectivity in order to harness the abundant potential of connected operations.

As applications such as Virtual and Augmented Reality (VR/AR), Autonomous X, Ultra-High-Definition (UHD) and Industry 4.0 become increasingly prevalent in our lives, the cloud-computing capabilities offered at the edge of the network can hold the key that enables operators to deliver real-time services. Multi-access Edge Computing (MEC) hosts virtual environments closer to the devices that require connectivity, removing the need to backhaul data to central sites and reducing the time it takes to process and analyze that data. Through these virtual environments created by MEC, operators can provide and access services that run locally, offering high throughput with minimal latency.

Reducing lifecycle tasks from hours to seconds

More and more operators are adopting cloud-native Kubernetes orchestration tools in edge environments to automate and optimize the use of infrastructure. Migrating from Virtual Machines (VMs) to containers can help improve efficiency and agility, while reducing costs.

With containers, applications are broken down into their constituent parts or functions, called microservices. As a result, only the container dedicated to a specific function or task needs to be scaled out. This drastically reduces the resources needed and the time it takes to auto-scale; when you look at the complexity of entire services, tasks can be reduced from hours to seconds.
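
As an illustration (not from the original article), the sketch below shows what scaling a single microservice might look like with the official Kubernetes Python client; the deployment name "upf-packet-processor" and namespace "5g-core" are hypothetical placeholders.

```python
# Hypothetical sketch: scaling one microservice's Deployment with the
# official Kubernetes Python client, leaving the rest of the service alone.
from kubernetes import client, config

def scale_microservice(name: str, namespace: str, replicas: int) -> None:
    """Patch only the Deployment that backs a single function or task."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # "upf-packet-processor" and "5g-core" are placeholder names.
    scale_microservice("upf-packet-processor", "5g-core", replicas=5)
```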

Kubernetes has innovative self-healing capabilities that kick in when a discrepancy is detected between the declared optimal state and the actual, suboptimal state. Furthermore, Kubernetes can be set to auto-scale microservices based on a number of Key Performance Indicators (KPIs), further reducing service reaction times. For example, Central Processing Unit (CPU) usage degradation or loss of connectivity can be used as the trigger for an automated response.
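
A minimal, hedged example of KPI-driven scaling, again using the Kubernetes Python client and the autoscaling/v1 API: a CPU-utilization threshold triggers automatic replica changes for one hypothetical microservice ("amf-session-handler"). Richer KPIs, such as connectivity loss, would typically be wired in through custom metrics and the autoscaling/v2 API.

```python
# Hedged sketch: a CPU-based HorizontalPodAutoscaler (autoscaling/v1)
# created with the Kubernetes Python client. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="amf-session-handler"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="amf-session-handler"),
        min_replicas=2,
        max_replicas=20,
        # The KPI: scale out when average CPU utilization passes 70%.
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="5g-core", body=hpa)
```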

Choosing the right platform to suit your purpose

The competitive edge Kubernetes offers to operators is paramount, with 85% of IT professionals agreeing that the platform is ‘extremely important’, ‘very important’ or ‘important’ to cloud-native application strategies. But as more vendors turn to these platforms to scale massive 5G services, it is becoming increasingly apparent that “how you automate is just as important as what you automate”. Kubernetes has played a significant role in enabling the rapid mass move to the cloud over the past few years, but it is not a simple cure-all for any repetitive or scale-out task. There is more to making 5G services a success at scale than just choosing Kubernetes itself.

There are often large disparities in time to outcome, resource utilization, solution costs and opportunities between Kubernetes cloud platforms and orchestration solutions. Operators choosing between these platforms must do so methodically, thinking not only about the features available, but about how they impact performance, flexibility and scale. How they can be used to reduce time to outcome for service integration and production lifecycles must also play a key part in choosing what is best for your operation, throughout the lifecycle of its services. The right selection can help to bring about increased optimization and efficiency.

So, what do you look for? Lifecycles must be policy-driven to eliminate hunting and hard coding, so that service-impacting events are handled fully automatically. This must include compute/storage/resource locality and the environmental variables that control migration of applications from core to edge to far edge. Automation tools must incorporate workflows from the entire solution stack, including bare-metal servers, Kubernetes clusters, supporting applications, services and physical devices. User interfaces need to reflect your solution requirements, not a scripting nightmare. Deep technical expertise should never be a prerequisite for operating the solution. Observability must cover the full solution stack, multi-tenancy and role-based access, with built-in analytics. Solutions must also support both VMs and containers on a single platform - not a multi-headed beast - to reduce resource silos and eliminate operations silos.
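
To make one of those requirements concrete, the sketch below shows how compute locality could be expressed as a declarative placement policy rather than hard-coded node names, using the Kubernetes Python client; the "topology.example.com/tier" label is purely a hypothetical convention, not a standard.

```python
# Illustrative sketch only: expressing edge locality as a declarative
# placement policy instead of hard-coding node names. The label
# "topology.example.com/tier" is a hypothetical convention.
from kubernetes import client

edge_affinity = client.V1Affinity(
    node_affinity=client.V1NodeAffinity(
        required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
            node_selector_terms=[
                client.V1NodeSelectorTerm(
                    match_expressions=[
                        client.V1NodeSelectorRequirement(
                            key="topology.example.com/tier",
                            operator="In",
                            values=["edge", "far-edge"],
                        )
                    ]
                )
            ]
        )
    )
)
# Attached to a Deployment's pod template, this lets the scheduler, not an
# operator, decide where the workload lands as it migrates between core,
# edge and far-edge sites.
```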

Kubernetes has the potential to reduce the time needed for scale-out tasks from weeks to minutes, cutting CapEx and OpEx costs by 50% and 40% respectively. Interoperability across a vibrant vendor ecosystem ensures that the chosen Kubernetes infrastructure will work with all kinds of applications and services, avoiding pitfalls as they move to the cloud.

Handling stateful workloads to enable cloudification

Stateful workloads, such as edge applications, databases and subscriber information, are vital for optimized cloudification and must be handled with care. When handled well, agility and efficiency can vastly improve, but Kubernetes microservices add a level of complexity, meaning snapshotting and cloning storage volumes alone is insufficient. For zero-touch automation, a solution needs to snapshot, back up and clone not only the data, but all of the application's constructs, such as metadata, configuration, secrets and SLA policies. Doing so enables teams to roll back, recover or migrate an entire Kubernetes application. No hunting, no hard coding and no restarting from scratch should be required from the user. Performing data protection and recovery through storage alone goes against the agility and efficiency expected from Kubernetes, diminishing its capabilities.
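
As an illustration only, and not any vendor's actual implementation, the sketch below shows the two halves such a capture has to cover: the Kubernetes objects that define the application, and a CSI VolumeSnapshot of its data volume. The namespace, labels and names are hypothetical.

```python
# Hedged sketch of application-level capture: the Kubernetes objects that
# define the application are exported alongside a CSI snapshot of its data,
# so the whole application can be rolled back or migrated together.
# Namespace, labels and names are hypothetical; a production backup tool
# or operator would do far more (ordering, quiescing, secrets handling).
from kubernetes import client, config

config.load_kube_config()
core, apps, custom = client.CoreV1Api(), client.AppsV1Api(), client.CustomObjectsApi()
ns, selector = "5g-core", "app=subscriber-db"

# 1. Capture the application's constructs, not just its data. These objects
#    would be serialized to the backup target next to the volume snapshot.
deployments = apps.list_namespaced_deployment(ns, label_selector=selector)
config_maps = core.list_namespaced_config_map(ns, label_selector=selector)
secrets = core.list_namespaced_secret(ns, label_selector=selector)

# 2. Trigger a CSI VolumeSnapshot (snapshot.storage.k8s.io/v1) for the
#    persistent volume claim that holds the actual data.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "subscriber-db-snap"},
    "spec": {"source": {"persistentVolumeClaimName": "subscriber-db-data"}},
}
custom.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io", version="v1",
    namespace=ns, plural="volumesnapshots", body=snapshot)
```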

Orchestrating a cloud-native future for 5G

Choosing fully orchestrated, cloud-native platforms will be key to any successful 5G deployment, allowing operators to explore a more competitive and vibrant supplier ecosystem whilst enabling a wide variety of services and applications. Utilizing advanced orchestration tools provided through platforms like Kubernetes can reduce overhead through streamlined lifecycle automation. It’s no surprise Kubernetes has been dubbed the secret weapon for unlocking cloud-native potential, with unified operations models and resource pools that lead to improved user experiences and, for businesses, deeper market penetration.
