Getting Started with rkt

Written by: Luke Bond

This February, CoreOS announced that their rkt container runtime had graduated to version 1.0. rkt has come a long way since its initial announcement in December 2014, so now's a good time to take a closer look and consider how it fits into the rapidly changing container ecosystem.

This article is intended for people who are new to rkt but have some experience with Linux containers, e.g., with Docker. Throughout this post, I’ll be assuming you’re using rkt with systemd on Linux.

You will learn:

  • The history and context of rkt

  • How to build ACI images with acbuild for running with rkt

  • How to start and stop containers with rkt and systemd

  • How image discovery works, in contrast to Docker's registry

  • How pods work in rkt

  • How rkt uses cgroups to limit resources

History

CoreOS started building rkt because they felt that Docker was no longer “the simple composable building block [they] had envisioned.”

rkt was announced alongside the appc set of specifications, which focused on composability, security, image distribution, and openness of containers. Docker's daemon and monolithic CLI tool both hamper composability -- of processes in the daemon's case, and of tools in the CLI's. The fact that Weave and Flocker had to resort to wrapping the Docker CLI tool (before plug-ins were introduced) was evidence of the latter.

The rkt announcement blog post appeared to be successful in shaking things up enough to get Docker to commit to open standards. The Open Container Project (later renamed to the Open Container Initiative), with appc and CoreOS founders Alex Polvi and Brandon Philips as founding members, was launched on June 22, 2015.

Since then, we've not seen the kind of progress on standards from the OCI that would enable true interoperability of container discovery, distribution, and runtime environment between Docker, rkt, and others. Docker is clearly going all-in on building a platform, whilst rkt appears to be firmly aimed at platform builders. That's a smart move by CoreOS, given the signs that wholesale adoption of Docker's platform could lead to vendor lock-in.

rkt Features

rkt boasts the following features:

  • Modularity - rkt is architected in stages (image fetching, cgroup and networking setup, and execution) that can have different implementations, providing separation of privileges as well as concerns.

  • Composability - rkt is not a daemon, is not the parent process of all your containers (and therefore can be updated without affecting running containers), and is composable with other tools. Natively runs appc images built with acbuild.

  • Security - Intel Clear Containers, SELinux, and TPM support, as well as image signature verification.

  • Compatibility - It can run Docker images.

The new 1.0 release includes some notable fixes and improvements:

  • Bash autocompletion for rkt commands

  • rkt fly - a new rkt stage1 that allows containers to be run with reduced isolation and extra privileges. This is useful for running software such as cluster management controllers.

Building and Running Containers with rkt

rkt is a container runtime rather than an image build tool. Since it natively runs appc containers, we will use the appc acbuild tool to build images. Let’s start with a very basic C app:

$ cat hello.c
#include <stdio.h>
int main (int argc, char** argv) {
  printf("Hello, world!\n");
  return 0;
}
$ gcc -o hello -static hello.c

Building images with acbuild is similar to building Docker images but without the Dockerfile -- i.e., we run a sequence of acbuild commands. Putting those commands into a script gives us a one-liner for building our image:

$ cat appc-hello.sh
#!/usr/bin/env bash
acbuild begin
acbuild set-name hello
acbuild copy hello /app/hello
acbuild set-working-directory /app
acbuild set-exec -- /app/hello
acbuild write --overwrite hello-latest-linux-amd64.aci
acbuild end # clean up the build context so the script can be re-run

If we run this script, it will build our image:

$ ./appc-hello.sh
Beginning build with an empty ACI
Setting name of ACI to hello
Copying host:hello to aci:/app/hello
Setting working directory to /app
Setting exec command [/app/hello]
Writing ACI to hello-latest-linux-amd64.aci

As a result, we have a file, hello-latest-linux-amd64.aci, in our current directory. This is an unsigned, unencrypted appc container image that we can run with rkt. We can see it in rkt’s image list:

$ sudo rkt image list
ID          NAME                    IMPORT TIME LAST USED   SIZE    LATEST
sha512-c500b17b60fa hello                   3 minutes ago   3 minutes ago   1.6MiB  false

We can launch a container from this image now:

$ sudo rkt --insecure-options=image run hello-latest-linux-amd64.aci
image: using image from local store for image name coreos.com/rkt/stage1-coreos:1.0.0
image: using image from file hello-latest-linux-amd64.aci
networking: loading networks from /etc/rkt/net.d
networking: loading network default with type ptp
[296002.703623] hello[4]: Hello, world!

As in Docker, the container is still present after execution has completed:

$ sudo rkt list
UUID        APP IMAGE NAME  STATE   CREATED     STARTED     NETWORKS
6e651372    hello   hello       exited  3 seconds ago   2 seconds ago

rkt includes handy commands for cleaning up your unused containers and images. To clean up exited pods (containers):

$ sudo rkt gc

To clean up unused images:

$ sudo rkt image gc
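
Note that rkt gc honours a grace period, so recently exited pods are left alone by default. As a minimal sketch, assuming the --grace-period flag behaves this way in your rkt version, you can collect everything immediately with:

$ sudo rkt gc --grace-period=0s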

Now we'll run something that does some networking:

$ git clone https://github.com/lukebond/demo-api
$ cd demo-api
$ sudo ./appc.sh
$ sudo rkt --insecure-options=image run demo-api-latest-linux-amd64.aci

Launch another terminal and run the following to find out the IP address and test it with curl:

$ sudo rkt list
UUID        APP     IMAGE NAME      STATE   CREATED     STARTED     NETWORKS
55cb3a96    demo-api    lukebond/demo-api   running 4 minutes ago   4 minutes ago   default:ip4=172.16.28.7
6e651372    hello       hello           exited  2 days ago  2 days ago
$ curl 172.16.28.7:9000
"Hello, world 172.16.28.7!"

Logs

To access the logs for our containers, we use systemd's journalctl, like so:

$ machinectl list
MACHINE                                  CLASS     SERVICE
rkt-e16bafd0-3b0b-4ade-b482-d6de42d35e8c container nspawn
1 machines listed.
$ journalctl -M rkt-e16bafd0-3b0b-4ade-b482-d6de42d35e8c
-- Logs begin at Fri 2016-02-26 12:57:14 GMT, end at Fri 2016-02-26 12:57:14 GMT. --
...
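
Because these are ordinary journal entries, the usual journalctl flags apply. For example, assuming your systemd version supports following another machine's journal, you can stream a running container's logs with:

$ journalctl -M rkt-e16bafd0-3b0b-4ade-b482-d6de42d35e8c -f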

Stopping containers

Since the rkt stage1 we're using relies on systemd-nspawn as the underlying tool to launch containers, we use systemd's machinectl to stop them:

$ machinectl list
MACHINE                                  CLASS     SERVICE
rkt-55cb3a96-8199-4d08-a998-713b631d3210 container nspawn
1 machines listed.
$ machinectl kill rkt-55cb3a96-8199-4d08-a998-713b631d3210

A native rkt command for stopping containers is reportedly coming in a future release.
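
In the meantime, one approach worth considering is to launch pods as transient systemd units with systemd-run, so that the familiar systemctl lifecycle commands apply. A rough sketch:

$ sudo systemd-run --slice=machine rkt run --insecure-options=image docker://lukebond/demo-api
# systemd-run prints the name of the transient unit it created, e.g. run-<id>.service
$ sudo systemctl status <unit-name-printed-above>
$ sudo systemctl stop <unit-name-printed-above>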

Signing images

You will notice that we passed --insecure-options=image to rkt run. This disables signature verification, which is enabled by default in rkt. Signing images is easily done using standard gpg tools; instructions can be found in rkt's signing and verification guide.
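
As a rough sketch of the workflow, assuming you already have a GPG keypair (the you@example.com identity below is just a placeholder) and using the hello image built earlier:

# Produce a detached, ASCII-armoured signature next to the image
$ gpg --armor --output hello-latest-linux-amd64.aci.asc --detach-sig hello-latest-linux-amd64.aci
# Export the public key and tell rkt to trust it for the "hello" image prefix
$ gpg --armor --export you@example.com > pubkey.gpg
$ sudo rkt trust --prefix=hello ./pubkey.gpg
# rkt should now verify the image against the .asc file, with no --insecure-options needed
$ sudo rkt run hello-latest-linux-amd64.aci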

Converting Docker images

The appc tool docker2aci can be used to download Docker images and convert them to appc's ACI format. The tool is available on GitHub at github.com/appc/docker2aci.

Converting a Docker image is as simple as:

$ docker2aci docker://lukebond/demo-api

It will squash the Docker layers into one ACI file. If you’d prefer to keep them separate, pass --nosquash, and it will set the correct dependencies between the layers.
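
For example, to keep the layers as separate, dependent ACIs rather than one squashed image:

$ docker2aci --nosquash docker://lukebond/demo-api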

Running Docker images directly

rkt can also run Docker images directly, without first converting them in a separate build step.

$ sudo rkt run --insecure-options=image docker://lukebond/demo-api

The “insecure” option is required here because Docker doesn’t support the same image signature verification that rkt does.

Image Discovery and Distribution

Whereas Docker supports image discovery via registries (either the default, Docker Hub, or another), rkt follows the appc spec and uses a combination of HTTPS and HTML meta tags via a discovery URL.

For example, take a look at the discovery meta tags served for CoreOS' etcd image on quay.io:

$ curl -sL https://quay.io/coreos/etcd | grep meta | grep discovery
  <meta name="ac-discovery" content="quay.io https://quay.io/c1/aci/{name}/{version}/{ext}/{os}/{arch}/">
  <meta name="ac-discovery-pubkeys" content="quay.io https://quay.io/aci-signing-key">

The content attributes are templatized locators that can be used to obtain a download URL for a particular OS and architecture. Using this method, signatures and public keys can also be fetched for rkt to use in verification.

Appc image discovery is so simple and flexible that you can store your images just about however and wherever you like. As long as you conform to the HTTPS + HTML meta tag scheme, rkt will be able to find your images. A registry is not required, although CoreOS' Quay service will happily host rkt images if you want one.
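
For instance, to host an image yourself under a hypothetical name example.com/hello, you would serve an HTML page over HTTPS at https://example.com/hello containing meta tags along these lines, with the template pointing at wherever your ACI files actually live:

<meta name="ac-discovery" content="example.com https://example.com/images/{name}-{version}-{os}-{arch}.{ext}">
<meta name="ac-discovery-pubkeys" content="example.com https://example.com/pubkeys.gpg">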

For a full run-down of how appc image discovery works, read the specification.

Pods

A "pod" is a term popularized by the Kubernetes project. You could define it as a collection of applications that are grouped logically together and should be run on the same machine. In short, they should be scheduled as a unit.

Since version 0.5, pods have been a first-class citizen in rkt. The appc spec defines a pod as "the deployable, executable unit... a list of apps that will be launched together inside a shared execution context," which includes network configuration and isolators. Whether you're running one process or multiple, rkt still considers it a pod.

Let's look at an example of running a pod of two apps that need to talk to each other. We'll use a super-trivial extension of the above demo-api app that has one GET endpoint that increments a counter in Redis on each request.

$ git clone https://github.com/lukebond/demo-api-redis
$ cd demo-api-redis
$ sudo ./appc.sh
$ sudo rkt run --volume volume--var-lib-redis,kind=host,source=/var/lib/redis quay.io/quay/redis --insecure-options=image --port=http:9000 --set-env REDIS_HOST=localhost ~/Development/demo-api-redis/demo-api-redis-latest-linux-amd64.aci

The above will build an ACI for the app and then launch one pod containing both Redis (a signed ACI pulled from quay.io) and the demo app. Passing --port maps the port to the host and --set-env tells the demo app how to communicate with Redis. The --volume argument will mount the host directory /var/lib/redis into the Redis container's data directory, where it will write its snapshots.

$ sudo rkt list
UUID      APP             IMAGE NAME                STATE   CREATED       STARTED       NETWORKS
e16bafd0  redis           quay.io/quay/redis:latest running 6 seconds ago 6 seconds ago default:ip4=172.16.28.6
          demo-api-redis  lukebond/demo-api-redis
$ curl 172.16.28.6:9000
"Hello, world 172.16.28.6! 1 hits."
$ curl 172.16.28.6:9000
"Hello, world 172.16.28.6! 2 hits."
$ curl 172.16.28.6:9000
"Hello, world 172.16.28.6! 3 hits."

Great, it works!

Limiting Resources with CGroups

rkt enables you to restrict the CPU and memory a container is permitted to use. These limits are expressed in units modelled after the Kubernetes Resource Model: CPU in milli-cores (1/1000th of a core) and memory in MB/GB.

For instance, to run the above example and limit Redis' memory usage to 512MB and CPU to half a core, we could use the following run command:

$ sudo rkt run --cpu=500 --memory=512M quay.io/quay/redis --insecure-options=image --port=http:9000 --set-env REDIS_HOST=localhost demo-api-redis-latest-linux-amd64.aci

Depending on how swap is configured on your host, you might find that containers appear to use more memory than you specified with rkt run --memory. With swap disabled, the value is a hard limit. With swap enabled, the process can exceed it by spilling over into swap, so you may observe higher apparent usage.
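
If you want to check what limit was actually applied, one rough approach -- assuming a cgroup v1 hierarchy with the memory controller mounted at the usual path, which is what most distributions used at the time of writing -- is to find and read the limit file for the pod's cgroup:

# Locate the memory cgroups rkt/systemd created for the pod (paths vary by distribution and stage1 flavour)
$ sudo find /sys/fs/cgroup/memory -path '*machine*' -name memory.limit_in_bytes
# cat one of the paths printed above; 536870912 bytes corresponds to the 512M we asked for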

rkt for Platform Builders

CoreOS's design goals for rkt have resulted in a container runtime that is perfectly suited for building container-based platforms.

Docker's path of going all-in on building a complete platform, culminating in the recent release of Docker Datacenter and Universal Control Plane, makes for a great end-user experience; however, building a platform on top of Docker is difficult because of the lack of composability.

rkt's low-level nature and its modular and composable design make it a great choice for building container platforms. This is where I see rkt finding its niche in the container ecosystem.

Conclusion

For such a young project, rkt has come a long way. Now the core runtime is approaching feature parity with Docker. The recent v1.0 release represents the graduation of rkt into a production-ready container runtime that is a genuine alternative to Docker.

I've shown you the basics of building and running containers with acbuild and rkt, and you should now know enough to run containers with rkt just as you might with Docker. Of course, there's more to cover if we want to run rkt in production, including multi-host networking, monitoring, and integration with existing container tooling such as Weave, Flocker, and Sysdig.

Happy rkting!
