In my previous articles, you looked at setting up a Kubernetes cluster on the Civo server platform. In this article, you'll follow on from the setup in Part 4 and apply an Ingress controller, using free SSL certificates from LetsEncrypt.
Load Balancing in Kubernetes
Kubernetes provides a number of options for load balancing your application. These currently include:
NodePort
Load Balancer
Ingress
The NodePort option was the solution used in the previous articles, enabling you to expose a service on an external port number.
The Load Balancer option is provided for platforms that offer an external load balancer solution, including Amazon Web Services, Google Kubernetes Engine, and Azure (to name a few). Although Civo has its own load balancing service, there is currently no provider to leverage it in Kubernetes.
What Is Ingress?
Ingress is a resource that can be configured in Kubernetes to use with a given controller. An Ingress controller can be platform specific, such as AWS's Route53, or it can be specific to common software such as NGINX and Traefik. The Ingress controller provides HTTP routing to your application.
The Ingress controller provides additional features over simply exposing a NodePort to your services, but it doesn't provide as much abstraction as the Load Balancer option, which facilitates solutions like Route53.
One of the greater benefits for selecting Ingress as your HTTP routing option is the ability to assign an SSL certificate, thus securing requests to your application over HTTPS. With the recent launch of LetsEncrypt (a free SSL certificate publishing platform) and the Kubernetes Cert-Manager plugin, developers can utilize Ingress to provide long-term HTTPS support to their apps with automatically renewing SSL certificates for FREE!
Getting Started
To begin this tutorial, you'll need a running application. You can go ahead and launch the services from Part 4 of this series, or you can launch them from the following YAML schema.
Enter the following into a file called backend.yaml.
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    app: php-service
    srv: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service
spec:
  selector:
    matchLabels:
      app: php-service
      srv: backend
  replicas: 3
  template:
    metadata:
      labels:
        app: php-service
        srv: backend
    spec:
      containers:
        - name: php-service
          image: "leesylvester/phpinf"
          ports:
            - name: http
              containerPort: 80
There won't be a need for a frontend schema as the Ingress load balancer will expose the service directly to the Internet.
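If you created the file above rather than reusing the running services from Part 4, apply it to the cluster first:

$ kubectl apply -f backend.yaml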
At this point, you can check that all is well by listing the services in your application:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        443/TCP   14m
service      ClusterIP   10.102.146.177   80/TCP    13s
Assigning a Domain
Since you'll be using HTTPS, you will need a domain. Assign one to your Civo account using the DNS dashboard. You can then either create a CNAME pointing to your controller node or an A record alias.
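Before moving on, it's worth confirming that the record resolves to your cluster. One quick way to do this from the console (assuming dig is installed, and substituting your own domain for the placeholder) is:

$ dig +short my-domain.example.com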
Installing Helm
Before you get to installing the Ingress controller, you will first need to install some necessary tools. The first of these is Helm, Kubernetes's answer to a package manager. With Helm, you can more easily install the Cert-Manager plugin needed for this tutorial.
To install Helm, run the following in the console:
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
With that installed, you now have the client element of the package manager. The server element is known as Tiller and is installed by initializing the Helm client:
$ helm init
$ helm init --upgrade
You can confirm it is installed with:
$ kubectl get pods -n kube-system | grep tiller
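You can also confirm that both the Helm client and the Tiller server respond with:

$ helm version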
Installing the Ingress Controller
Now that Helm is installed, it's time to install the NGINX Ingress controller. The NGINX Ingress project publishes its configuration schemas at remote URLs, which makes this simple: apply each of them in turn.
First, you need to install the ConfigMap and namespace configuration schemas.
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \
    | kubectl apply -f -
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \
    | kubectl apply -f -
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \
    | kubectl apply -f -
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \
    | kubectl apply -f -
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \
    | kubectl apply -f -
Next, install the RBAC and Nginx Ingress schemas.
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \
    | kubectl apply -f -
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \
    | kubectl apply -f -
Having run this, you will need to give Kubernetes a little time to install and run the necessary pods. You can see the status of everything by checking the pod list.
$ kubectl get pods -n ingress-nginx
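If you'd rather wait on them interactively, the same command can watch the pods until they all report Running:

$ kubectl get pods -n ingress-nginx --watch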
Installing Cert-Manager
Cert-Manager is the Kubernetes plugin that handles the automated issuance and renewal of LetsEncrypt certificates. It is installed by running the following:
$ helm install --name cert-manager --namespace kube-system stable/cert-manager
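As with Tiller, you can confirm the Cert-Manager pod is running before moving on:

$ kubectl get pods -n kube-system | grep cert-manager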
Configuring the Server
Now that the plugins are installed, it's time to configure the server a little. Typically, when working with pods and services, Kubernetes will only assign NodePorts from the default high port range (30000 and above). This isn't the case when using HostPorts, but those aren't so easy to manage unless your network layer supports them, and I'm not aware that Flannel does.
To get around this limitation, you need to modify the Kubernetes API server configuration.
Exit out of the kubeuser user (or whatever you decided to call it) and run the following with sudo.
$ nano /etc/kubernetes/manifests/kube-apiserver.yaml
Then, with the configuration file open, enter the following line at the end of the arguments list (the long list of entries starting with - --).
- --service-node-port-range=80-32767
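As a quick sanity check before restarting, you can confirm the flag made it into the manifest:

$ grep "service-node-port-range" /etc/kubernetes/manifests/kube-apiserver.yaml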
Now, you'll need to restart Kubernetes to let this change take effect.
$ service docker restart
You can now su back into kubeuser. However, ensure you check that the pods have fully restarted before proceeding.
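One way to check is to list the pods across all namespaces and wait until everything reports Running again:

$ kubectl get pods --all-namespaces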
The Ingress resource
With the Ingress controller installed, it's necessary to set up an associated Ingress resource. An Ingress resource associates the Ingress controller with the appropriate service in your application. Here, you will want to attach the Ingress resource to the service named service which, if you remember, is the name given to the backend service.
Copy the following to a file called ingress.yaml.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-lb
  namespace: default
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/issuer: letsencrypt-staging
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    # change the domain below to the domain you are using
    - host: my-domain.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: service
              servicePort: 80
Then, go ahead and create the resource with:
$ kubectl create -f ingress.yaml
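You can confirm the resource exists and inspect the host rule it picked up with:

$ kubectl get ingress
$ kubectl describe ingress ingress-lb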
Next, you need to add a matching service, which opens ports to the Ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 443
      protocol: TCP
  selector:
    app: ingress-nginx
Save this to ingress-service.yaml and create it.
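As before, creating it is a single command:

$ kubectl create -f ingress-service.yaml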
If you look closely at the above service, you can see that the service type is set to NodePort. This is a hack, in this case, as it allows the specification of the known ports 80 and 443 even without the use of the HostPort feature of Kubernetes.
If all is well, you should be able to visit the Ingress endpoint in your browser.
[unsecure.png]
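If you prefer the console, you can also check the endpoint with curl, substituting the domain you assigned earlier for the placeholder:

$ curl -I http://my-domain.example.com/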
Configuring for security
Although you have already opened port 443 on the Ingress service, there are still a few steps before you can access your application over HTTPS.
The Cert-Manager plugin uses an Issuer configuration to supply the details of the certificate provider (in this case, LetsEncrypt) and a Certificate schema for the actual certificate configuration.
Before jumping in to using production certificates, it is essential to first run the setup with staging configuration. LetsEncrypt provides a staging endpoint for this purpose, which you'll use first.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: secure-example-com-tls
spec:
  secretName: secure-example-com-tls
  issuerRef:
    name: letsencrypt-staging
  commonName: my-domain.example.com
  dnsNames:
    - my-domain.example.com
  acme:
    config:
      - http01:
          ingressClass: nginx
        domains:
          - my-domain.example.com
Create and deploy the above schema as staging-cert.yaml. Ensure that the correct domain is supplied in place of my-domain.example.com. Then, you'll also need to create and deploy the following.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: "your@email.com"
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
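Deploying both is the same as any other schema; staging-issuer.yaml below is simply a suggested name for the Issuer file:

$ kubectl apply -f staging-cert.yaml
$ kubectl apply -f staging-issuer.yaml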
The http01 entry represents the provider challenge method. The two common options here are the http01 option used above and dns01. http01 requests the provider to validate ownership of the domain using HTTP, while dns01 uses provider-specific DNS API configuration. The http01 option should be more than sufficient for your app on Civo.
With the two schemas added to Kubernetes, you can now query the certificate state.
$ kubectl describe certificate
Name:         secure-example-com-tls
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2018-07-11T19:05:27Z
  Generation:          1
  Resource Version:    7208
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/secure-example-com-tls
  UID:                 5f7dc87a-853d-11e8-93cb-fa163ed4b57b
Spec:
  Acme:
    Config:
      Domains:
        my-domain.example.com
      Http 01:
        Ingress:
        Ingress Class:  nginx
  Common Name:  my-domain.example.com
  Dns Names:
    my-domain.example.com
  Issuer Ref:
    Name:       letsencrypt-staging
  Secret Name:  secure-example-com-tls
Status:
  Acme:
    Order:
      URL:  https://acme-staging-v02.api.letsencrypt.org/acme/order/6431918/3725032
  Conditions:
    Last Transition Time:  2018-07-11T19:23:46Z
    Message:               Certificate issued successfully
    Reason:                CertIssued
    Status:                True
    Type:                  Ready
    Last Transition Time:  <nil>
    Message:               Order validated
    Reason:                OrderValidated
    Status:                False
    Type:                  ValidateFailed
Events:
  Type     Reason          Age                 From          Message
  ----     ------          ----                ----          -------
  Warning  IssuerNotFound  36m (x4 over 37m)   cert-manager  Issuer issuer.certmanager.k8s.io "letsencrypt-staging" not found
  Warning  IssuerNotReady  21m (x16 over 36m)  cert-manager  Issuer letsencrypt-staging not ready
  Normal   CreateOrder     20m                 cert-manager  Created new ACME order, attempting validation...
  Normal   DomainVerified  18m                 cert-manager  Domain "my-domain.example.com" verified with "http-01" validation
  Normal   IssueCert       18m                 cert-manager  Issuing certificate...
  Normal   CertObtained    18m                 cert-manager  Obtained certificate from ACME server
  Normal   CertIssued      18m                 cert-manager  Certificate issued successfully
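You can also confirm that the certificate's secret was created; the name matches the secretName given in the Certificate schema:

$ kubectl get secret secure-example-com-tls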
Finally, you will need to update the Ingress resource with a TLS entry.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-lb
  namespace: default
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/issuer: letsencrypt-staging
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    # change the domain below to the domain you are using
    - host: my-domain.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: service
              servicePort: 80
  # The below has been added
  tls:
    - hosts:
        - my-domain.example.com
      secretName: secure-example-com-tls
If you overwrite the ingress.yaml file with the above, you can update the deployed ingress with:
$ kubectl apply -f ingress.yaml
If you now visit the application using HTTPS in the browser, you should see your application served in full secure glory!
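You can also inspect the response from the console. Note that certificates issued by LetsEncrypt's staging endpoint are not trusted by browsers or curl, so the -k flag is needed to skip verification:

$ curl -kI https://my-domain.example.com/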
If the application still fails to serve a valid HTTPS response, don't panic. It may simply mean you have exceeded LetsEncrypt's request rate limits, in which case you may need to wait several days before the certificate can be issued.
Now that you know LetsEncrypt is able to validate the domain for certificate issuing, it's time to create the production certificate and issuer.
Production Certificates
The following YAML schemas are replacements for the staging certificate and issuer created above. Ensure you remove the old ones before applying the new. Also, ensure you update your Ingress resource with the letsencrypt-prod reference by editing and applying the ingress.yaml file.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: secure-example-com-tls
spec:
  secretName: secure-example-com-tls
  issuerRef:
    name: letsencrypt-prod
  commonName: my-domain.example.com
  dnsNames:
    - my-domain.example.com
  acme:
    config:
      - http01:
          ingressClass: nginx
        domains:
          - my-domain.example.com
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: "your@email.com"
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
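A sketch of the swap from staging to production, assuming the staging schemas were saved as staging-cert.yaml and staging-issuer.yaml and the schemas above as prod-cert.yaml (the file names are only illustrative):

$ kubectl delete -f staging-cert.yaml
$ kubectl delete -f staging-issuer.yaml
$ kubectl delete secret secure-example-com-tls   # remove the staging certificate so a production one is issued
$ kubectl apply -f prod-cert.yaml
$ kubectl apply -f ingress.yaml                  # re-apply after switching the issuer annotation to letsencrypt-prod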
Conclusion
Being able to sign and secure your own applications is extremely useful. The walkthrough above doesn't directly cover load balancing with Ingress; that will be covered in a future post.