The journey to serverless

By Sat Gainda, Cloud Solutions Architect at Version 1.


The latest word on every technical evangelist’s lips is “Serverless”. Yet outside of leading technical circles, it is not widely adopted. So why is there so much buzz about Serverless, and when will more organisations use it?

 

The idea of serverless is made possible because we can now run code, services and applications in the cloud without the need to provision, manage and maintain local infrastructure. Instead, the cloud provider manages the servers that provide the runtime execution environment.
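In practice, “running code without servers” usually means writing a small function that the platform invokes on demand. The sketch below assumes AWS Lambda’s Python handler convention; other providers expose a similar entry point.

```python
# A minimal sketch of a serverless function, assuming AWS Lambda's Python
# handler convention. There is no server to provision, patch or scale --
# the platform calls handler() whenever the function is invoked.
import json


def handler(event, context):
    """Entry point invoked by the platform for each request."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```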

 

There are some obvious advantages to this approach.

 

The first relates to infrastructure management -- simply put, you no longer have to provision, manage, patch and secure these servers, which substantially reduces your workload and allows your team to focus on more valuable challenges.

 

The second is that resources do not need to be pre-allocated. Deciding how much memory, disk and CPU to reserve for servers and virtual machines is delegated to the cloud provider, removing the risk of running out.

 

From this, we gain greater scalability and elasticity. If demand on the Serverless solution increases, the cloud provider automatically scales the underlying infrastructure.

 

Finally, you only pay for the resources you use. The price paid by the cloud user revolves around the number and duration of invocations of the Serverless function, and when the solution is not being used, no costs are incurred.
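As a rough illustration of that pricing model, the sketch below works through a back-of-the-envelope estimate for a pay-per-invocation function. The unit prices and workload figures are illustrative assumptions, not quotes from any provider.

```python
# Back-of-the-envelope cost model for a pay-per-invocation function.
# All unit prices and workload figures are illustrative assumptions --
# check your provider's pricing calculator for current figures.
invocations_per_month = 2_000_000
avg_duration_seconds = 0.2        # average execution time per invocation
memory_gb = 0.128                 # memory allocated to the function

price_per_million_requests = 0.20     # assumed request price (USD)
price_per_gb_second = 0.0000167       # assumed compute price (USD)

request_cost = (invocations_per_month / 1_000_000) * price_per_million_requests
compute_cost = (invocations_per_month * avg_duration_seconds
                * memory_gb * price_per_gb_second)

print(f"Requests: ${request_cost:.2f}")
print(f"Compute:  ${compute_cost:.2f}")
print(f"Total:    ${request_cost + compute_cost:.2f}")
```

With zero invocations, both terms fall to zero, which is the essence of the pay-per-use argument.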

 

So why aren’t we seeing faster progress?

What’s holding it back?

 

Naturally, the first is a matter of budget justification. Senior management always demand a business case and want to quantify a return on investment. In this case, that can mean serious preparation to describe and justify the potential cost reduction or competitive advantage.

 

At the same time, these services are relatively new. AWS was the first to release its “Lambda” Serverless offering as recently as 2014. Since then, other major cloud providers have released their own, but more risk-averse organisations are still getting used to the pace of change that has come with cloud.

 

This has caused some to stick with containers instead -- small, self-contained packages that bundle a runtime environment and an application. Containers also carry less risk of vendor lock-in than going completely serverless.

 

Hand in hand with the relative newness of serverless, some organisations have also struggled to build a team with the skills and experience to implement and maintain serverless solutions.

 

But these factors can’t be allowed to hold back organisations from one of the larger leaps in enterprise IT in recent years. So how do you begin your serverless journey with confidence and purpose?

 

The journey

 

1. Justification

The biggest factor in starting a Serverless journey is justifying its introduction. This means making an assessment and presenting a cost-benefit analysis to decision makers within the organisation.

 

Use cloud pricing calculators to provide pricing details.

 

2. Cloud Presence

The introduction of Serverless will be made much easier if the organisation already has a cloud presence. Serverless is primarily a cloud-native model, and organisations already using the cloud will have access to the Serverless services made available by their provider.

 

3. Case Studies

Many organisations have embraced Serverless and are open about their experiences. Seeking out these references will help you understand the benefits, their approach and any obstacles they encountered.

 

4. Start Small

To keep risk low, Serverless adoption should start small. A proof of concept (POC) can be created and used to demonstrate the approach and build a case for further investment. An alternative to building a POC is to use Serverless for a new, small requirement.

 

Example: Comic Relief, the British charity, started its Serverless journey small by basing its contact site on Serverless. Following the success of that project, Serverless grew into other parts of the technology stack until it became a major presence, and Comic Relief was able to cut its monthly platform costs by over 90%.
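A proof of concept along these lines can be very small indeed. The sketch below is a hypothetical contact-form handler, assuming AWS Lambda fronted by an API gateway and storing submissions in S3; the bucket name is a placeholder, not anything from Comic Relief’s actual implementation.

```python
# A hedged sketch of a "start small" POC: an HTTP-triggered function that
# accepts a contact-form submission and stores it in object storage.
# Assumes AWS Lambda behind an API gateway; the bucket name is hypothetical.
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "poc-contact-submissions"  # hypothetical bucket for the POC


def handler(event, context):
    submission = json.loads(event.get("body") or "{}")
    if not submission.get("email"):
        return {"statusCode": 400,
                "body": json.dumps({"error": "email is required"})}

    # Persist each submission as its own object for later processing.
    key = f"submissions/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(submission))
    return {"statusCode": 201, "body": json.dumps({"stored": key})}
```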

 

5. Entire workloads do not have to be replaced

Serverless does not have to involve large-scale solutions. It can be used for small jobs such as maintenance, scheduled or back-up tasks.

 

Example: Netflix, the media streaming company, uses Serverless in many ways. Many of these are scheduled jobs, file sorting and back-up maintenance; within the organisation, Serverless is not used only for large enterprise solutions.
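As an illustration of that kind of small job, here is a minimal sketch of a scheduled back-up clean-up function. It assumes a cron-style trigger (such as Amazon EventBridge) invoking an AWS Lambda function; the bucket name and retention period are hypothetical.

```python
# A minimal sketch of a scheduled maintenance job: delete backups older than
# the retention period. Assumes a cron-style trigger invoking AWS Lambda;
# the bucket name and retention period are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "nightly-backups"          # hypothetical backup bucket
RETENTION = timedelta(days=30)      # keep 30 days of backups


def handler(event, context):
    cutoff = datetime.now(timezone.utc) - RETENTION
    deleted = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
                deleted += 1
    return {"deleted": deleted}
```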

 

6. Skills

Retraining employees should be considered an opportunity: the skills will benefit the employee and therefore the organisation. External consultants can be hired for major transformations, bringing experience, best practices, quicker project turnaround and technical guidance.

Staying secure

One of the benefits of using Serverless is that it can introduce a more secure environment for solutions to operate in. The responsibility for patching and maintaining the underlying platform the Serverless solution runs on lies with the cloud provider. This does not mean that security can be ignored; several components still need to be considered.

 

Source control management (SCM)

As the majority of the Serverless solution will be code based, it is important to secure the code as well as the intellectual property. This can be done by using an SCM service such as GitHub or Bitbucket. Access to the SCM service should be managed and only provided to a limited set of users.

 

Automate the deployment

Taking a Development, Security and Operations (DevSecOps) approach can greatly increase the security of the Serverless solution. DevSecOps focuses on automating deployments through a pipeline and ensuring that security is considered at every step of the way.

 

Static Testing & Dynamic Testing

Static testing tools examine the source code and flag areas of concern, while dynamic testing examines the solution in a running state. Both static and dynamic testing can be integrated into a deployment pipeline, as in the sketch below.
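The following sketch shows one way a testing gate might be wired into a deployment pipeline. Bandit and pytest are used purely as illustrative tools, and the src and tests directories are assumptions; a dynamic scan against a deployed test stage could be added as a further step.

```python
# A hedged sketch of a pipeline gate: run static analysis and tests before a
# deployment is allowed to proceed. Bandit and pytest are example tools only.
import subprocess
import sys


def run(step: str, command: list[str]) -> None:
    """Run one pipeline step and stop the deployment if it fails."""
    print(f"--- {step} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        sys.exit(f"{step} failed; blocking deployment")


if __name__ == "__main__":
    run("Static analysis", ["bandit", "-r", "src"])   # assumed source directory
    run("Unit tests", ["pytest", "tests"])            # assumed test directory
    print("All checks passed; safe to deploy")
```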

 

Make use of integrated services

Serverless must integrate with other services such as API gateways, object stores, messaging and databases. Each of these services will have security features, such as API throttling or authentication, which should also be configured to ensure defence in depth.
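As one hedged example of configuring such a feature, the sketch below sets up request throttling on an API gateway sitting in front of a function, using the AWS API Gateway usage-plan API; the API ID, stage name and limits are hypothetical.

```python
# A hedged sketch of enabling throttling on an API gateway in front of a
# serverless function, using an AWS API Gateway usage plan as one example.
# The API ID, stage name and limits are hypothetical placeholders.
import boto3

apigw = boto3.client("apigateway")

apigw.create_usage_plan(
    name="contact-form-plan",
    description="Rate-limit public traffic reaching the function",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # hypothetical IDs
    throttle={
        "rateLimit": 50.0,   # steady-state requests per second
        "burstLimit": 100,   # short burst allowance
    },
)
```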

 

Monitoring, logging and alerting

Both cloud providers and third-party solutions offer monitoring, logging and alerting capabilities. Logs need to be monitored and retained for a defined period of time, and alerts based on specified criteria can be created to inform the organisation of abnormal behaviour.
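For example, an alert on function errors might look something like the sketch below, assuming AWS CloudWatch and Lambda; the function name and notification topic are hypothetical placeholders.

```python
# A hedged sketch of an alert on abnormal behaviour, assuming AWS CloudWatch
# monitoring a Lambda function. The function name and SNS topic ARN are
# hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="contact-form-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "contact-form"}],
    Statistic="Sum",
    Period=300,                 # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # hypothetical topic
)
```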

 

It may seem intimidating to make such a large leap -- but as ever, it’s more about how you conduct the journey and plan for success than about any threat in the tech itself. If you can follow the advice above, it should put you on a good path to a secure, stable and profitable project with a serverless future.

 
