Demystifying Serverless Computing: Definition, Tutorial, and Examples

Written by: Erik Francis

Serverless computing is getting more and more attention every day. It all started in 2014, when Amazon Web Services launched AWS Lambda, one of the first function-as-a-service (FaaS) cloud computing services to reach the market. Other cloud vendors are following in Amazon's footsteps with similar offerings; Microsoft and Google, for example, have Azure Functions and Cloud Functions, respectively. But why are serverless architectures and services so popular now? To demystify the topic, I'll give you a quick overview of serverless computing, walk through a simple but interesting example of getting started with serverless, and list a few other interesting use cases where this architectural style makes sense.

What Is Serverless Computing?

Serverless computing—or simply serverless, as it is more frequently abbreviated—is a service type offering from cloud providers for developers who don't want to deal with the infrastructure for their applications. Wikipedia defines serverless as:

A cloud-computing execution model in which the cloud provider acts as the server, dynamically managing the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. It is a form of utility computing.

With serverless, apps still need servers; the difference is who's responsible for managing and operating those servers. The cloud provider takes care of all the heavy lifting needed to keep the infrastructure highly available and elastic. So going serverless doesn't mean that you completely forget about infrastructure, but you now deal with it from another perspective: your code needs to perform well to keep infrastructure costs down, because you pay only for the bare minimum of what you use. If your application needs to run for a few seconds with a certain amount of memory and CPU, you only pay for the time it runs. If the application needs to run three times a day, you only pay for those three runs, not for the whole hour, day, or month. Serverless also lends itself naturally to event-driven architectures by seamlessly integrating several cloud services, and that's what makes these types of services appealing. For example, you can spin up an image processing system in a matter of minutes.

How Does a Typical System Look Without Serverless?

To get a better understanding of what serverless is, let's take a look at what you need to take care of when you administer the infrastructure yourself. Say you're building a system that uses a queue for delayed processing: a user visits a page containing a tracking pixel that records information about the visit, and an async request is sent to your servers so the user doesn't have to wait for a response. Here's what you'll need to solve this problem:

  • Message queuing software like RabbitMQ to store the messages and process them later.

  • A batch process where you'll code the logic to handle each message.

  • A database to persist the processed message.

For each of the components above, you'll need to provision infrastructure, configure the server(s), install the software, deploy the batch processor code, scale the infrastructure when needed, and provide support for both the servers and the application. Even if you use a cloud provider's IaaS or PaaS services, there are many pieces of this simple architecture that you'll still need to take care of.
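To make the contrast concrete, here's a minimal sketch of what the self-managed consumer side could look like, assuming a local RabbitMQ instance, the pika 1.x client, and a queue named "messages" (all of which are illustrative assumptions, not details from the example above):

import pika

def handle_message(channel, method, properties, body):
    # In a real batch processor, this is where you'd persist the message to your database.
    print('Processing message: %s' % body.decode())
    channel.basic_ack(delivery_tag=method.delivery_tag)

# Connect to a RabbitMQ broker you installed and operate yourself (assumed to run locally).
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='messages', durable=True)
channel.basic_consume(queue='messages', on_message_callback=handle_message)
channel.start_consuming()

Remember that you operate the broker, the machine running this consumer, and the database yourself, and you scale all three.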

How Does a Typical System Look With Serverless?

Now let's take a look at how the system from the example above would look with serverless. I'll use the offerings from AWS, but you can find similar services in Google Cloud Platform (GCP) or Microsoft Azure. The following services fit perfectly to solve the same problem:

  • An Amazon SQS queue to store the messages and process them later.

  • An AWS Lambda function to handle each message from the queue.

  • A DynamoDB table to persist the processed message.

By using the above services, you'll provision the resources with a few clicks or CLI commands, configure the services, and deploy your code to a lambda function. You can forget about scaling the infrastructure because AWS will do it for you. You'll still need to provide support, but you'll have far less to do because you won't have to administer servers.

Getting Started With Serverless

Understanding the theory and then putting it into practice will give you a better sense of the "why" of serverless. Let's create a simple solution for the example I've been using to explain serverless. AWS has a detailed guide to storing and handling messages from the queue. I'll summarize the steps from the AWS guide, and then share some additional steps and code to persist messages into DynamoDB. Let the fun begin!

1. Prerequisites

Prepare your dev environment by creating an AWS account. Remember that you could replicate this example with similar services on Azure or GCP; the service names and integration methods may vary, but you should still be able to create a solution similar to the one I'm discussing.

2. Create an Execution IAM Role for AWS Lambda

Follow the AWS guide to create the IAM role that the AWS Lambda function will use to run, with one minor change. Instead of the policy indicated in the post, use the following JSON definition:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem"
            ],
            "Resource": "arn:aws:dynamodb:REGION_CODE:ACCOUNT_NUMBER:table/messages"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sqs:DeleteMessage",
                "sqs:ChangeMessageVisibility",
                "sqs:ReceiveMessage",
                "sqs:GetQueueAttributes"
            ],
            "Resource": "arn:aws:sqs:REGION_CODE:ACCOUNT_NUMBER:messages"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:REGION_CODE:ACCOUNT_NUMBER:*"
        },
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "*"
        }
    ]
}

Make sure you replace REGION_CODE and ACCOUNT_NUMBER with the region code where you're creating the resources and with your AWS account number.
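If you'd rather script this step than click through the console, here's a sketch of the same role created with boto3. The role name, the inline policy name, and the policy.json file (the JSON definition above saved to disk) are all assumptions for illustration:

import json

import boto3

iam = boto3.client('iam')

# Trust policy that lets the AWS Lambda service assume the role.
assume_role_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'lambda.amazonaws.com'},
        'Action': 'sts:AssumeRole'
    }]
}

role = iam.create_role(
    RoleName='lambda-sqs-dynamodb-role',
    AssumeRolePolicyDocument=json.dumps(assume_role_policy)
)

# Attach the permissions shown above as an inline policy.
with open('policy.json') as f:
    iam.put_role_policy(
        RoleName='lambda-sqs-dynamodb-role',
        PolicyName='sqs-dynamodb-logs-access',
        PolicyDocument=f.read()
    )

print(role['Role']['Arn'])  # you'll pick this role when creating the function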

3. Create the Lambda Function

On the AWS Lambda home page, click on the orange "Create function" button. Leave the default "Author from scratch" option selected and fill in the details of the form. Make sure you choose "Python 3.6" for the runtime option and pick the role you created in the previous step.

Then click on the orange "Create function" button at the bottom of the form.

4. Upload the Code to Process Messages

Scroll down a little bit and you'll find the section for the function code. Copy and paste the following code:

import uuid
import boto3

def lambda_handler(event, context):
    # The SQS trigger invokes this handler with a batch of messages in event['Records'].
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('messages')

    for record in event['Records']:
        # The raw text of the message that was sent to the queue.
        payload = record['body']
        print(str(payload))

        # Persist the message, using a generated UUID as the partition key.
        response = table.put_item(
            Item={
                'id': str(uuid.uuid1()),
                'message': payload
            }
        )

        print(response)

Click on the orange "Save" button located on the top-right of the screen.
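If you end up scripting the whole setup instead of clicking through the console, steps 3 and 4 can also be done with boto3. The following is only a sketch: the function name process-messages, the file name lambda_function.py (containing the handler code above), and the role name from step 2 are assumptions:

import io
import zipfile

import boto3

# Package the handler file into an in-memory zip archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.write('lambda_function.py')

lambda_client = boto3.client('lambda')

lambda_client.create_function(
    FunctionName='process-messages',
    Runtime='python3.6',
    Role='arn:aws:iam::ACCOUNT_NUMBER:role/lambda-sqs-dynamodb-role',  # role from step 2
    Handler='lambda_function.lambda_handler',
    Code={'ZipFile': buf.getvalue()}
)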

5. Create the DynamoDB Table

Go to the DynamoDB home page and create a new table called "messages" with a primary key named "id", which we can use later to fetch a message by its id. The AWS console is really easy to use, but you can also reference this guide on how to create a DynamoDB table.
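If you prefer to script it, here's a sketch of the same table created with boto3. On-demand billing (PAY_PER_REQUEST) is an assumption here; the provisioned defaults in the console work just as well:

import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.create_table(
    TableName='messages',
    AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST'
)

# Wait until the table is active before wiring everything together.
dynamodb.get_waiter('table_exists').wait(TableName='messages')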

6. Create the SQS Queue

Go to the SQS home page, create a new queue called "messages", and choose the "Standard Queue" option, since we're building a simple process. Note that you can also create a FIFO queue with SQS.
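Again, here's a quick boto3 sketch of the same step in case you want to script it (a FIFO queue would instead need the .fifo name suffix and the FifoQueue attribute):

import boto3

sqs = boto3.client('sqs')

# Standard queues are the default, so no extra attributes are needed.
queue = sqs.create_queue(QueueName='messages')
print(queue['QueueUrl'])  # keep this URL handy for sending messages later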

7. Configure the Event Trigger

Configuring the integration between the lambda function and the SQS queue is pretty simple. You can follow this guide; just make sure you choose the queue and function you created before.
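The same trigger can be created with boto3 through an event source mapping. This is only a sketch: REGION_CODE and ACCOUNT_NUMBER need replacing as in the IAM policy, and the function name process-messages and the batch size are assumptions:

import boto3

lambda_client = boto3.client('lambda')

# Connect the SQS queue to the lambda function so new messages invoke it.
lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:sqs:REGION_CODE:ACCOUNT_NUMBER:messages',
    FunctionName='process-messages',
    BatchSize=10  # up to 10 messages per invocation; an assumed value
)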

8. Send Messages to the Queue

You can send messages from the SQS home page and then check that each message you send is persisted in the DynamoDB table.
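You can also send a test message programmatically. Here's a sketch with boto3; the message body is arbitrary:

import boto3

sqs = boto3.client('sqs')

# Look up the queue URL by name and send a test message.
queue_url = sqs.get_queue_url(QueueName='messages')['QueueUrl']
sqs.send_message(QueueUrl=queue_url, MessageBody='Hello from SQS!')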

Go to the CloudWatch Logs home page. That's where you'll find all the execution logs for the function, so you can confirm that each message was received and processed.

If everything worked correctly, you should now see the messages you sent to the queue stored in the DynamoDB table.
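A quick way to double-check from your own machine is to scan the table with boto3 (a scan is fine here because the table only holds a handful of test items):

import boto3

table = boto3.resource('dynamodb').Table('messages')

# List every stored item to verify the end-to-end flow worked.
for item in table.scan()['Items']:
    print(item['id'], item['message'])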

And just like that, you now have a system that receives data asynchronously and persists it in a database. Your code only needs to send messages to SQS; you just have to make sure the lambda function has the necessary memory and timeout configured, and, if need be, you can quickly scale out the reads/writes for the DynamoDB table. Serverless is amazing!

Serverless Architecture and Use-Case Examples

There are other use cases where serverless will help you to develop a scalable solution in minutes. The following examples and links are from AWS, but you can replicate them in Azure or GCP.

You can find more reference architectures for AWS here; odds are you'll see an example that fits your needs.

Build Elastic Applications Effortlessly

Every time you need to think about servers for your applications, consider using a serverless offering from a public cloud provider. AWS is leading the way in serverless, but Azure and GCP are catching up quickly. Serverless is becoming more relevant and prevalent every day, and you'll find plenty of frameworks, conferences, and other resources to help you implement your solution.
