Building and Running Your Own Serverless Apps on Jenkins X

Written by: Michael Neale
5 min read

I previously wrote about serverless apps (webapps, specifically) hosted on AWS Lambda versus Knative. Just to rehash in terms so simple it is kind of embarrassing: A serverless web app is a web app that scales down to zero when no one is using it (i.e. you only pay for what you use).

In this post, I want to show you how in a few commands you can get this running with Jenkins X on Google Cloud (GKE). The benefit is that you can boast to all your friends that you know serverless and they don’t. This will be powered by Knative and something called Gloo under the covers (you don’t need to know the details, but I will provide some links later).

Firstly, I set up a cluster on GKE (I am assuming you have the jx command line installed):

> jx create cluster gke --tekton

This will create a cluster with the latest stuff and get it all running (takes a few minutes, I used all defaults). This also uses serverless pipelines which, like your app, will only consume resources when there is work to be done.
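If you want to check that the cluster came up cleanly while you wait, a few commands help (the `jx` namespace is the Jenkins X default; this is a sketch, not part of the required steps):

```shell
# Confirm the cluster nodes are ready.
kubectl get nodes

# Check the Jenkins X platform pods (default namespace is "jx").
kubectl get pods -n jx

# Follow pipeline activity as it happens.
jx get activities --watch
```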

Next, I installed the Gloo add-on (funny name, I will explain why later).

> jx create addon gloo

This will do its thing and install Knative and Gloo. Gloo is a gateway that listens for events such as HTTP requests and wakes up the serverless containers (you don’t need to know the details of this).
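To confirm the add-on finished installing, you can look for the new components (the `knative-serving` and `gloo-system` namespace names are the upstream defaults, assumed here):

```shell
# Knative Serving components should be running...
kubectl get pods -n knative-serving

# ...along with the Gloo gateway.
kubectl get pods -n gloo-system
```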

After another minute or so you will be ready to create or import your first app.

> jx create quickstart

I picked a Node.js HTTP one. It created the repository with sample code, pushed it to GitHub, and that kicked things off.

Shortly after, I had a serverless app running. This is exactly the same as a normal app creation or import (run jx get applications to see running apps and how to access them). You can run the usual commands to follow the pipeline along, as is normal. I can even open a pull request to my app and the usual Jenkins X flow kicks in.
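The "usual commands" here are the same ones you would use for any Jenkins X app (the repository name `quickynode` below is the one created in this walkthrough; substitute your own):

```shell
# List running applications and their URLs.
jx get applications

# Tail the pipeline log for the most recent build.
jx get build logs

# Follow pipeline activity for one repository.
jx get activities -f quickynode -w
```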

This gives me a preview app as is standard. Really, nothing has changed. Kinda neat?

So what makes it serverless? Try this: go have a coffee and come back in a few minutes, I’ll wait.

Back? Let's list the applications running.

Notice the missing pods in the second app (“quickynode”, which is the one I created)? That is because no one has accessed the app in a while: it is not running, as it has no need to be. Now go to the address of your app in a browser (and wait for it to load), then try this again.

Now we can see our app running in at least one pod (in this case the second line, “quickynode”) as normal, serving HTTP. It will scale up and down automatically with usage (you don’t have to do anything for that). The app above it is a non-serverless app, which means it is running all the time (at least one pod is always up); otherwise, they are identical!
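You can watch the wake-up happen from the command line too. A rough sketch, assuming the default `jx-staging` namespace and a hypothetical app URL (use whatever `jx get applications` reports for yours):

```shell
# While the app is idle, there are no pods for it in staging.
kubectl get pods -n jx-staging

# The first request after idling takes a few seconds while a pod cold-starts.
curl https://quickynode.jx-staging.example.com

# Now the pod is back.
kubectl get pods -n jx-staging
```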

This also works with preview apps, which is fantastic. Preview apps are very lightly used and can sit around for a long time, so why not have them use zero resources when idle? They will wake up when needed. In fact, many applications are lightly used, so there isn't a reason why this pattern can’t be the default (do note that this feature is currently experimental).

The benefit of this serverless approach is that you can have many more apps and many more preview apps running whilst saving money. Your compute usage can scale down and stay lower, yet scale up to meet demand. When things are very quiet, usage can go close to zero. This is documented, and the documentation also describes how you can convert existing apps to the serverless mode if you wish. Note this is still an early stage preview feature at the time of writing.
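If memory serves, Jenkins X exposes that conversion through the `jx edit deploy` command; treat this as a sketch and check `jx edit deploy --help` for the exact syntax in your version:

```shell
# Switch an app's deployment kind to Knative (scale-to-zero)...
jx edit deploy knative

# ...or back to a regular always-on Kubernetes Deployment.
jx edit deploy default
```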

Peeking under the hood

If you want to peek deeper, let's look at the staging namespace:
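Assuming the default staging namespace name of `jx-staging`, that looks like:

```shell
# List the pods backing the staging environment; a serverless app
# that has been idle for a while will show no pods at all here.
kubectl get pods -n jx-staging
```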


After no one uses it for a while, it starts to shut down.

Remember, you don’t have to know this or do anything; it just works automatically.

You can see all your containers via kubectl get pods --all-namespaces if you are curious (you will see the preview apps in there, too). As you watch, you will notice applications go from Running to Terminating to ContainerCreating and so on, all in the background, as serverless applications (or preview applications) are accessed over time.
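A handy way to watch that churn live, using standard kubectl flags:

```shell
# Stream pod status changes across all namespaces as apps
# scale to zero and cold-start again.
kubectl get pods --all-namespaces --watch
```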

Gloo in this case is used as a gateway: it listens for HTTP requests and works with Knative Serving to make sure the application is running. This can also be done with tools like Istio.

