Serverless is the new buzzword that is quickly gaining momentum and attention.
The concept is to run server-side code without worrying about the messy details of provisioning and setting up servers, disk drives and other resources. You write code, upload it and — voilà! — it starts running. All the complications of managing the infrastructure (provisioning servers, auto-scaling, installing languages and frameworks) are hidden away by the cloud provider (AWS, Azure, Google Cloud). The cloud provider allocates and manages the resources, invokes the code in response to a request, supplies the context and input it needs to do its job, and returns the result to the client. Because they spend less time managing scaling and availability, software developers are increasingly using serverless architectures for more advanced workloads.
Functions as a Service
There is no clear consensus on what serverless means; for many people, it means writing your code as functions and handing them to a cloud provider for execution. This is referred to as *Functions as a Service*, or *FaaS*. This view of serverless is the main focus of this article.
All major cloud vendors provide FaaS:
- AWS Lambda on AWS, the most popular implementation of FaaS
- Azure Functions on Microsoft Azure
- Cloud Functions on Google Cloud
Going serverless requires a different approach to application design. The backend is broken down into thin, stand-alone, single-purpose functions that each perform one task in response to a user action or event. Much of the business logic shifts from the backend to the client, e.g. a mobile app, which becomes the main orchestrator, calling the various functions needed to perform an action for the user. For example, in a serverless architecture, a photo-sharing app like Instagram might call one function to upload an image, followed by another function that reads the followers’ information from a database and notifies them.
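To make the idea concrete, here is a minimal sketch of what that second, notification-style function might look like in Python. The event shape, the `Followers` DynamoDB table, and the SNS topic ARN are all hypothetical stand-ins for whatever data store and notification channel the app actually uses.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical resources: a "Followers" DynamoDB table and an SNS topic.
dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")
FOLLOWERS_TABLE = dynamodb.Table("Followers")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:new-photo"  # placeholder ARN

def lambda_handler(event, context):
    """Single-purpose function: notify a user's followers about a new photo."""
    user_id = event["user_id"]
    photo_id = event["photo_id"]

    # Read the followers' information from the database.
    followers = FOLLOWERS_TABLE.query(
        KeyConditionExpression=Key("user_id").eq(user_id)
    )["Items"]

    # Notify each follower (here, one SNS publish per follower).
    for follower in followers:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=f"User {user_id} posted photo {photo_id}",
            MessageAttributes={
                "follower_id": {
                    "DataType": "String",
                    "StringValue": follower["follower_id"],
                }
            },
        )

    return {"notified": len(followers)}
```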
Serverless architectures require smart clients that know about and talk to a wide range of remote functions. Mobile app developers have long had rich frameworks and platforms that let them build complex client-side logic easily, but things weren’t so simple for web applications. Thanks to rich client-side frameworks like React and Angular, and the fast HTTP/2 protocol, it is now possible to build complex applications that run seamlessly in the browser. This will help drive the serverless trend even further.
AWS Lambda - Features, Pros and Cons
Amazon Web Services (AWS), the undisputed leader of cloud computing, launched a product called Lambda for serverless applications back in 2014.
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
I gave it a try and converted one of our small microservices into a Lambda function.
Setting up a Lambda function in AWS was straightforward. The only challenge I faced was connecting the function to another AWS product, API Gateway, to expose it as a REST endpoint. Here are the pros and cons I discovered during the process.
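For reference, this is roughly the shape of a handler behind an API Gateway Lambda proxy integration: the HTTP request arrives in the `event` dictionary, and the function must return a status code, headers, and a string body. The greeting logic itself is just a placeholder.

```python
import json

def lambda_handler(event, context):
    # API Gateway (Lambda proxy integration) passes the HTTP request in `event`
    # and expects a response with statusCode, headers, and a string body.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```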
1. Lambda Programming Languages
AWS Lambda does not support every language, but it covers the major ones: Java 8, Python, Go, PowerShell, Ruby, C# and Node.js. In addition, there is a Runtime API that allows you to use any other programming language for your functions.
Another thing I liked about Lambda is that there are many templates to choose from when creating a new function, which can come in very handy. They contain examples of how to access various databases and integrate with other AWS products and services.
2. Lambda Latency & Cold Starts
Up until recently, Lambda startup times were an issue for latency-sensitive applications. ‘Cold functions’ are those that haven’t run in some time. When a new request or event triggers a cold Lambda function, the cloud provider needs to find an environment (server) to load the function and its related resources into before running it. This usually took 50 to 500 milliseconds. Once a function is loaded onto a server, it stays there for some time, roughly 30 minutes or so; if a new request comes in during that window, it is executed immediately because the function is already loaded. This might not sound like a big deal, but for latency-sensitive applications handling thousands of simultaneous requests with irregular traffic patterns, keeping functions warm and ready was a real problem.
AWS recently introduced Provisioned Concurrency, which fixes this issue by allowing developers to specify the number of warm instances they want to keep ready at all times to handle incoming requests. This is an excellent feature that gives developers greater control over latency and response times. Provisioned concurrency isn’t free, so check the pricing and understand the costs before you use it.
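As a sketch of how this can be configured programmatically via boto3 (the function name and alias below are placeholders; note that provisioned concurrency applies to a published version or alias, not `$LATEST`):

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 5 warm instances of a published version (or alias) of the function
# ready to serve requests. Name and qualifier are hypothetical.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-service-function",
    Qualifier="prod",  # alias or version number
    ProvisionedConcurrentExecutions=5,
)
```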
3. Lambda Execution Duration Limit
The execution duration of Lambda functions has an upper limit, currently 15 minutes. This is more than sufficient for many use cases but could be an issue for batch-type applications or long-running tasks like converting videos. I don’t see this as a huge issue.
4. Pricing
Pricing is based on the number of requests and the duration of each function’s execution, billed in 100 millisecond increments. So if a Lambda function runs for 15 milliseconds, it is billed for 100. This could be an issue for very high-volume applications with lots of short-running functions. A crude hack to get the best bang for the buck is to combine short-running Lambda operations into a larger one. Also, if you want to expose your Lambda functions as REST endpoints using AWS API Gateway, you incur extra costs, as API Gateway has separate pricing.
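A rough back-of-the-envelope estimate of how that rounding adds up, assuming the 100 ms increments described above. The per-request and per-GB-second rates are illustrative placeholders; check the current AWS Lambda pricing page for the real numbers in your region.

```python
import math

# Illustrative rates only; real prices vary by region and change over time.
PRICE_PER_REQUEST = 0.20 / 1_000_000     # e.g. $0.20 per million requests
PRICE_PER_GB_SECOND = 0.0000166667       # e.g. on-demand compute rate

def monthly_cost(requests, avg_duration_ms, memory_mb):
    # A 15 ms invocation is billed as a full 100 ms increment.
    billed_ms = math.ceil(avg_duration_ms / 100) * 100
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 10 million short (15 ms) invocations at 128 MB:
print(round(monthly_cost(10_000_000, 15, 128), 2))
```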
5. Lambda is Stateless
Lambda functions are stateless. Each invocation knows nothing about the state of previous invocations, and its output or state isn’t automatically available to subsequent invocations. You can still persist and share state externally by calling other services such as S3 or ElastiCache.
It would be wonderful to be able to share resources that are expensive to set up, such as connection pools, but connection pooling isn’t properly supported. Setting up and tearing down database connections for each request increases latency and hurts performance, although there are workarounds, like reusing a connection created outside the handler or putting Amazon RDS Proxy in front of the database to maintain connection pools.
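A sketch of the reuse workaround: create the database connection at module load time so warm invocations of the same container reuse it instead of reconnecting on every request. The PyMySQL dependency, the environment variable names, and the `followers` table are assumptions for illustration; pointing `DB_HOST` at an RDS Proxy endpoint adds pooling on top of this.

```python
import os
import pymysql  # assumes the PyMySQL package is bundled with the function

# Created once per container, at module load time, and reused across
# warm invocations. Host and credentials come from environment variables.
connection = pymysql.connect(
    host=os.environ["DB_HOST"],          # e.g. an RDS Proxy endpoint
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)

def lambda_handler(event, context):
    # Revive the connection if it went stale while the container sat idle.
    connection.ping(reconnect=True)
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT COUNT(*) FROM followers WHERE user_id = %s",
            (event["user_id"],),
        )
        (count,) = cursor.fetchone()
    return {"follower_count": count}
```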
6. Lambda Debugging and Logging
Debugging and logging aren’t easy and have a learning curve. When testing my Lambda function, I spent a lot of time scrolling through CloudWatch log entries to find issues.
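Deliberate logging at least makes the CloudWatch hunt easier. A minimal pattern, with a hypothetical `do_work` function standing in for the real business logic: anything emitted through the standard `logging` module (or `print`) ends up in the function’s CloudWatch Logs group.

```python
import json
import logging

# The Lambda Python runtime pre-configures the root logger; set the level
# and everything logged here lands in CloudWatch Logs.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("Received event: %s", json.dumps(event))
    try:
        result = do_work(event)  # placeholder for the real logic
        logger.info("Success: %s", result)
        return result
    except Exception:
        # Logs the full traceback, then re-raises so the invocation fails.
        logger.exception("Unhandled error while processing event")
        raise

def do_work(event):
    return {"ok": True}
```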
7. Lambda CI/CD
Lambda functions can be set up for automatic deployments through CI/CD pipelines. You could host your function code on GitHub, set up a new pipeline using AWS CodePipeline, and then use AWS CodeBuild to build and deploy the function.
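For a sense of what the deploy step boils down to, here is a minimal scripted sketch using boto3 rather than the CodePipeline setup itself. The function name and source file are placeholders, and a real pipeline would also run tests, manage permissions, and package dependencies.

```python
import io
import zipfile
import boto3

def deploy(function_name="my-service-function", source_file="lambda_function.py"):
    # Zip the single source file in memory (a stand-in for a real build step).
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(source_file)
    buf.seek(0)

    # Upload the new code and publish a version on each deploy.
    lambda_client = boto3.client("lambda")
    lambda_client.update_function_code(
        FunctionName=function_name,
        ZipFile=buf.read(),
        Publish=True,
    )

if __name__ == "__main__":
    deploy()
```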
Future of serverless vs. servers: should we ditch the servers?
While AWS Lambda and serverless haven’t yet crossed into the mainstream, the future looks promising. Going serverless requires a shift in thinking, re-inventing tooling, and setting up the right processes for version control, deployments, cost monitoring and control, monitoring, testing, security, and so on.
No, developers are not going to ditch servers and move everything to serverless. Instead, they are adopting serverless for the use cases where it is a great fit, while continuing to use servers, containers and microservices elsewhere.
Serverless is a great concept. Infrastructure management is challenging, can be very painful, and requires a dedicated team to manage resources. It shifts the focus away from the real problem to the undifferentiated heavy lifting of managing servers, auto-scaling groups and instance tagging, and, even worse, to building infrastructure-specific logic such as health checks into applications. Development time increases because developers now carry the burden of managing their infrastructure. That’s the biggest beef I have with DevOps: it forces skilled developers to spend their time and energy worrying about infrastructure intricacies instead of building useful applications that solve real problems. Serverless architectures take some of these barriers away and reduce the friction, allowing developers to get started quickly.
Updated: March 1, 2020