Thursday, January 10, 2019

Practical Helm in 5 minutes

https://helm.sh/
Yet another ship-themed name

Containerization is an increasingly powerful way to deploy applications on anonymous infrastructure, such as a set of many identical virtual machines run by some cloud provider. Since container images ship a full OS userland, there is no need to manage packages on the servers (say, a PHP or Python interpreter), but there are still environment-specific choices that need to be provided to actually run the application: configuration files and environment variables, ports, hostnames, secrets.

In an environment like Kubernetes, you create all of this declaratively, writing YAML files that describe each Pod, ConfigMap, Service and so on. Kubernetes takes these declarations and applies them, reconciling the cluster's current state towards the desired one.
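For example, a minimal Service for a hypothetical my-app could be declared like this (names and ports are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # routes traffic to Pods carrying this label
  ports:
    - port: 80         # port exposed inside the cluster
      targetPort: 8080 # port the application container listens on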

As soon as you move outside of a demo towards multiple environments, or towards updating one, you will start to see Kubernetes YAML resources not as code to be committed directly into a repository, but as the output of a generation process. There are many tweaks and customizations that need to be performed in each environment, from simple hostnames (staging--app.example.com vs app.example.com) to entire sections being present or not (persistence and replication of application instances).

The problem you need to solve then is to generate Kubernetes resources from some sort of templates: you could choose any template engine for this task and execute kubectl apply on the result. To avoid reinventing the wheel, Helm and its competitors were created to provide a higher abstraction layer.
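As a minimal sketch of that generic approach, even envsubst would do, assuming a hypothetical deployment.yaml.tpl file containing $APP_HOSTNAME placeholders:

# substitute environment variables into the template, then apply the result
APP_HOSTNAME=app.example.com envsubst < deployment.yaml.tpl | kubectl apply -f -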

Enter Helm

Helm provides templating for Kubernetes .yaml files; as part of this process, it extracts the configuration values for Kubernetes resources into a single, hierarchical data source.
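For instance, a template can reference those values through the .Values object; a minimal sketch, assuming the image coordinates live under an image key:

# templates/deployment.yaml (excerpt)
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml (the hierarchical data source)
image:
  repository: nginx
  tag: stable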

Helm doesn't stop there, however: it aims to be a package manager for Kubernetes, hence it won't just create resources such as a Deployment, but it will also:
  • apply the new resources on the Kubernetes cluster
  • tag the Deployment with metadata and labels
  • list everything that is installed in terms of applications, rather than Deployments and ConfigMaps
  • find older versions of the Deployment to be replaced or removed
The set of templates, helpers, dependencies and default values Helm uses to deploy an application is called a chart, whereas every instance of a chart created on a cluster is called a release. Therefore, Helm keeps track of objects in terms of releases and allows you to update a release and all its contents, or to remove it and replace it with a new one.
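In practice this surfaces as a handful of release-oriented commands (my-release is a hypothetical release name):

helm list                   # every release installed on the cluster
helm history my-release     # past revisions of a single release
helm rollback my-release 1  # roll the release back to revision 1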

Folder structure

The minimal structure of a Helm chart is simply a folder on your filesystem, whose name must be the name of the chart. As an example I'll use green-widgets, the name of a fictional web application for ordering green widgets online.

This is what you'll see inside a chart:
  • Chart.yaml: metadata about the chart, such as its name, description and version.
  • values.yaml: configuration values that may vary across releases. At a bare minimum, the image name and tag will have defaults here, along with the ports to expose (both files are sketched right after this list).
  • the templates/ subfolder: the YAML templates that will be rendered as part of creating a new release. This folder also contains a few extras, such as a readme for the user and helper functions for generating common snippets.
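A minimal sketch of the two metadata files for green-widgets (all values are illustrative):

# Chart.yaml
apiVersion: v1
name: green-widgets
description: A fictional web application for ordering green widgets
version: 0.0.1

# values.yaml
image:
  repository: example/green-widgets  # hypothetical image name
  tag: "0.0.1"
service:
  port: 80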
Apart from this minimal setup, there may also be a requirements.yaml file and a charts/ subfolder to deal with other charts to use as dependencies; for example, to install a database through an official chart rather than setting up PostgreSQL replication on your own. These can be safely ignored until you need them, though.
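If you do reach for them, a requirements.yaml pulling in the official PostgreSQL chart could look like this (the version pin is illustrative); helm dependency update green-widgets/ then downloads it into charts/:

# requirements.yaml
dependencies:
  - name: postgresql
    version: 3.1.0
    repository: https://kubernetes-charts.storage.googleapis.com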

Once you have the helm binary on your system, you can generate a new chart with helm create green-widgets.
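The generated skeleton looks roughly like this (exact contents depend on the Helm version):

green-widgets/
  Chart.yaml
  values.yaml
  charts/
  templates/
    deployment.yaml
    service.yaml
    ingress.yaml
    NOTES.txt
    _helpers.tpl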

Cheatsheet

You can download a helm binary for your platform from the project's releases page on GitHub. The helm init command will use your kubectl configuration (and authentication) to install Tiller, the server-side part of Helm, into the cluster's system namespace.
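A minimal bootstrap, assuming kubectl already points at the right cluster:

helm init                                 # installs Tiller as a Deployment
kubectl get pods --namespace kube-system  # a tiller-deploy pod should appear
helm version                              # prints client and server versions once Tiller is up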

Once this is set up, you will be able to execute helm install commands against the cluster, using charts on your local filesystem. For real applications, you can install official charts that are automatically discovered through the default Helm repositories.
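For example, installing a database from the default stable repository (the release name is just an example):

helm repo update                                   # refresh the local index of chart repositories
helm install stable/postgresql --name my-database  # install an official chart under a chosen release name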

The command I prefer to use to work on a chart, however, is:
helm upgrade --install --set key=value green-widgets--test green-widgets/

The mix of upgrade and install means this command is idempotent and will work for the first installation as well as for updates. Normally you would issue a new release for a change to the chart, but this approach allows you to test out a chart while it's in development, using a 0.0.1 version.
There is no constraint on the release name green-widgets--test, and Helm can even generate random names for you. I like to combine the application name and the environment name as a team convention, but you should come up with your own design choices.
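Beyond --set, the same command accepts whole values files via -f, which maps nicely onto one release per environment; a sketch with a hypothetical values-staging.yaml:

# values-staging.yaml (hypothetical): overrides merged over the chart's defaults
hostname: staging--app.example.com

helm upgrade --install -f values-staging.yaml green-widgets--staging green-widgets/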

A final command to keep in mind is helm delete green-widgets--test, which will delete the release and all the resources created by your templates. This is enough to stop using CPU, memory and IP addresses, but it's not enough to completely remove all knowledge of the release from Tiller's archive. To do so (and free the release name, allowing its re-creation) you should add the --purge flag.
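Both variants, side by side:

helm delete green-widgets--test          # resources are removed, history stays in Tiller
helm delete --purge green-widgets--test  # history is removed too, freeing the release name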

Caveats

This 5-minute introduction makes it all seem plain and simple, but it should be clear that simply downloading Helm and installing it is not a production-ready setup. I myself have only rolled out this setup to testing environments at the time of writing.

I can certainly see several directions to explore that I either cut from scope in order to get these environments up and running for code review, or investigated and used but did not include in this post. For example:
  • requirements.yaml allows you to include other charts as dependencies. This is very powerful for off-the-shelf open source software such as databases, caches and queues; it needs careful choices for the configuration values being passed to these dependencies, and your mileage may vary with the quality of the chart you have chosen.
  • chart repositories are a good way to host stable chart versions rather than copying them onto a local filesystem. For example, you could push tarballs to S3 and have a plugin regenerate the index (the basic workflow is sketched after this list).
  • the whole Helm and Tiller setup arguably needs to be part of an Infrastructure as Code approach like the rest of the cluster. For example, I am creating an EKS cluster using Terraform, and that would also need to include the installation and configuration of Tiller to provide a turnkey solution for new clusters.
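As a sketch of that repository workflow using only core Helm commands (the hosting URL is made up; S3-specific indexing would come from a plugin):

helm package green-widgets/  # produces green-widgets-0.0.1.tgz from the chart folder
helm repo index .            # (re)generates index.yaml for the tarballs in this folder
# upload the tarballs and index to your hosting, then register it:
helm repo add my-charts https://charts.example.com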
