Thursday, January 10, 2019

Practical Helm in 5 minutes

https://helm.sh/
Yet another ship-themed name

Containerization is an increasingly powerful way to deploy applications on anonymous infrastructure, such as a set of many identical virtual machines run by some cloud provider. Since container images ship a full OS, there is no need to manage packages on the servers (say, a PHP or Python interpreter), but there are still environment-specific choices that need to be provided to actually run the application: configuration files and environment variables, ports, hostnames, secrets.

In an environment like Kubernetes, you would create all of this declaratively, writing YAML files describing each Pod, ConfigMap, Service and so on. Kubernetes takes these declarations and reconciles its own state to reach what is desired.

As soon as you move outside of a demo towards multiple environments, or towards updating one, you will start to see Kubernetes YAML resources not directly as code to be committed into a repository, but as an output of a generation process. There are many tweaks and customizations that need to be performed in each environment, from simple hostnames (staging--app.example.com vs app.example.com) to entire sections being present or not (persistence and replication of application instances).

The problem you need to solve then is to generate Kubernetes resources from some sort of templates: you could choose any template engine for this task, and execute kubectl apply on the result. To avoid reinventing the wheel, Helm and other competitors were created to provide a higher abstraction layer.

Enter Helm

Helm provides templating for Kubernetes .yaml files; as part of this process, it extracts the configuration values for Kubernetes resources into a single, hierarchical data source.
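To give an idea of what these templates look like, here is a minimal sketch of a Deployment fragment; the value names (image.repository, image.tag, containerPort) are illustrative and would live in the hierarchical data source just described:

    # templates/deployment.yaml (fragment) -- a minimal sketch, not a complete resource
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}
    spec:
      replicas: {{ .Values.replicaCount }}
      template:
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              ports:
                - containerPort: {{ .Values.containerPort }}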

Helm doesn't stop there however: it aims to be a package manager for Kubernetes, hence it won't just create resources such as a Deployment, but it will also:
  • apply the new resources on the Kubernetes cluster
  • tag the Deployment with metadata and labels
  • list everything that is installed in terms of applications, rather than Deployments and ConfigMaps
  • find older versions of the Deployment to be replaced or removed
The set of templates, helpers, dependencies and default values Helm uses to deploy an application is called a chart, whereas every instance of a chart created on a cluster is called a release. Therefore, Helm keeps track of objects in terms of releases and allows you to update a release and all its contents, or to remove it and replace it with a new one.

Folder structure

The minimal structure of a Helm chart is simply a folder on your filesystem, whose name must be the name of the chart. As an example I'll use green-widgets, the name of a fictional web application for ordering green widgets online.

This is what you'll see inside a chart:
  • Chart.yaml: metadata about the chart such as name, description and version.
  • values.yaml: configuration values that may vary across releases. At a bare minimum the image name and tag will have defaults here, along with ports to expose (a minimal sketch follows this list).
  • the templates/ subfolder: contains various YAML templates that will be rendered as part of the process of creating a new release. There is more in this folder like a readme for the user and some helper functions for generating common snippets.
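As referenced above, a minimal values.yaml might look like the sketch below; the keys mirror what helm create typically scaffolds, while the actual values are made up for the green-widgets example:

    # values.yaml -- a minimal sketch with made-up values
    replicaCount: 1
    image:
      repository: registry.example.com/green-widgets
      tag: 1.2.3
      pullPolicy: IfNotPresent
    service:
      type: ClusterIP
      port: 8080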
Apart from this minimal setup, there may also be a requirements.yaml file and a charts/ subfolder to deal with other charts to use as dependencies; for example, to install a database through an official chart rather than setting up PostgreSQL replication on your own. These can be safely ignored until you need those features, though.

Once you have the helm binary on your system, you can generate a new chart with helm create green-widgets.
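The generated scaffold looks roughly like this (the exact file list can vary slightly between Helm versions):

    green-widgets/
    ├── Chart.yaml
    ├── values.yaml
    ├── charts/
    └── templates/
        ├── _helpers.tpl
        ├── deployment.yaml
        ├── ingress.yaml
        ├── service.yaml
        └── NOTES.txt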

Cheatsheet

You can download a helm binary for your platform from the project's releases page on GitHub. The helm init command will use your kubectl configuration (and authentication) to install Tiller, the server-side part of Helm, into the cluster's system namespace.

Once this is set up, you will be able to execute helm install commands against the cluster, using charts on your local filesystem. For real applications, you can install official charts that are automatically discovered from the default Helm repositories.

The command I prefer to use to work on a chart however is:
helm upgrade --install --set key=value green-widgets--test green-widgets/

The mix of upgrade and install means this command is idempotent and will work for the first installation as well as for updates. Normally you would cut a new chart version for every change, but this approach allows you to test out a chart while it's in development, using a 0.0.1 version.
There is no constraint on the release name green-widgets--test, and Helm can even generate random names for you. I like to use the application name and its environment name as a team convention, but you should come up with your own design choices.

A final command to keep in mind is helm delete green-widgets--test, which will delete the release and all the resources created by your templates. This is enough to stop using CPU, memory and IP addresses, but it's not enough to completely remove all knowledge of the release from Tiller's archive. To do so (and free the release name, allowing its re-creation) you should add the --purge flag.
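To recap the commands touched on in this section in one place (the release and chart names come from the running example):

    # one-time setup: install Tiller into the cluster through your kubectl context
    helm init

    # idempotent install-or-upgrade of a local chart into a named release
    helm upgrade --install --set key=value green-widgets--test green-widgets/

    # list what is installed, in terms of releases
    helm list

    # remove the release's resources and purge it from Tiller's archive
    helm delete --purge green-widgets--test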

Caveats

This 5-minute introduction makes it all seem plain and simple, but it should be clear that simply downloading Helm and installing it is not a production-ready setup. I myself have only rolled out this setup to testing environments at the time of writing.

There are several directions to explore that I either cut from scope in order to get these environments up and running for code review, or investigated and used but did not include in this post. For example:
  • requirements.yaml allows you to include other charts as dependencies. This is very powerful for off-the-shelf open source software such as databases, caches and queues; it needs careful choices for the configuration values being passed to these dependencies, and your mileage may vary with the quality of the chart you have chosen.
  • chart repositories are a good way to host stable chart versions rather than copying them onto a local filesystem. For example, you could push tarballs to S3 and have a plugin regenerate the index.
  • the whole Helm and Tiller setup arguably needs to be part of an Infrastructure as Code approach like the rest of the cluster. For example, I am creating an EKS cluster using Terraform and that would also need to include the installation and configuration of Tiller to provide a turnkey solution for new clusters.

Wednesday, January 02, 2019

The path from custom VM to VM with containers

https://commons.wikimedia.org/wiki/File:Kanda_container.jpg
Image of a single container being transported by OiMax
Before the transition to Docker containers started at eLife, a single service deployment pipeline would pick up the source code repository and deploy it to one or more virtual machines on AWS (EC2 instances booted from a standard AMI). As the pipeline went across the environments, it repeated the same steps over and over in testing, staging and production. This is the story of the journey from a pipeline based on source code for every stage to a pipeline deploying an immutable container image; the goals pursued being time savings and a reduced failure rate.

The end point is seen as an intermediate step before getting to containers deployed into an orchestrator, as our infrastructure wasn't ready to accept a Kubernetes cluster when we started the transition, nor was Kubernetes itself yet trusted for stateful, old-school workloads such as running a PHP application that writes state to the filesystem. Achieving containers-over-EC2 allows developers to target Docker as the deployment platform, without yet realizing the cost savings related to bin packing those containers onto anonymous VMs.

Starting state

A typical microservice for our team would consist of a Python or PHP codebase that can be deployed onto a usually tiny EC2 instance, or onto more than one if user-facing. Additional resources that are usually not really involved in the deployment process are created out of band (with Infrastructure as Code) for this service, like a relational database (outsourced to RDS), a load balancer, DNS entries and similar cloud resources.

Every environment replicates this setup, whether it is a ci environment for testing the service in isolation, or an end2end one for more large-scale testing, or even a sandbox for exploratory, manual testing. All these environments try to mimic the prod one, especially end2end which is supposed to be a perfect copy on fewer resources.

A deployment pipeline has to go through environments as a new release is promoted from ci to end2end and prod. The amount of work that has to be repeated to deploy from source on each of the instances is sizable however:

  • ensure the PHP/Python interpreter is correctly set up and all extensions are installed
  • check out the repository, which hopefully isn't too large
  • run scripts if some files need to be generated (from CSS to JS artifacts and anything similar)
  • install or update the build-time dependencies for these tasks, such as a headless browser to generate critical CSS
  • run database migrations, if needed
  • import fixture data, if needed
  • run or update stub services to fill in dependencies, if needed (in testing environments)
  • run or update real sidecar services such as a queue broker or a local database, if present
This ever-expanding sequence of operations for each stage can be optimized, but in the end the best choice is not to repeat work that only needs to be performed once per release.

There is also a concern about the end result of a deploy being different across environments. This difference could be in state, such as a JS asset served to real users being different from what you tested; but also in outcome, as a process that can run perfectly in testing may run into an APT repository outage when in production, failing your deploy halfway through, only on one of the nodes. Not repeating operations leads not just to time savings but to a simpler system in which fewer operations can fail just because there are fewer of them in general.

Setting a vision

I've previously automated builds that generate a set of artifacts from the source code repository and then deploy them across environments, for example zipping all the PHP or Python code into an archive or some other sort of package. This approach works well in general, and it is what compiled languages naturally do, since they can't get away with recompiling in every environment. However, artifacts do not take into account OS-level dependencies like the Python or PHP version with their configuration, along with any other setup outside of the application folder: a tree of directories for the cache, users and groups, deb packages to install.

Container images promise to ship a full operating system directory tree, which will run in any environment while only sharing a kernel with its host machine. Seeing docker build as the natural evolution of tar -cf ... | bzip2, I set out to port the build processes of the VMs into portable container images for each service. We would then still be deploying these images as the only service on top of an EC2 virtual machine, but each deployment stage should consist of just pulling one or more images and starting them with a docker-compose configuration. The stated goal was to reduce the time from commit to live, and the variety of failures that can happen along the way.

Image immutability and self-sufficiency

To really save on deployment time, the images being produced for a service must be the same across environments. There are some exceptions like a ci derivative image that adds testing tools to the base one, but all prod-like environments should get the same artifact; this is not just for reproducibility but primarily for performance.

The approach we took was to also isolate services into their own containers, for example creating two separate fpm and nginx images (wsgi and nginx for Python); or to use a standard nginx image where possible. Other specialized testing images like our own selenium extended image can still be kept separate.

The isolation of images doesn't just make them smaller than a monolith: it provides Docker-specific advantages like independent caching of their layers. If you have a monolithic image and you modify your composer.json or package.json file, you're in for a large rebuild; segregating responsibilities means only one or two of the application images get rebuilt, and you never have to reinstall those packages just for Selenium debugging. This can also be achieved by embedding various targets (FROM ... AS ...) into a single Dockerfile, and having docker-compose build one of them at a time with the build.target option.
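A minimal sketch of that single-Dockerfile approach could look like the following; the base images, stage names and paths are illustrative rather than taken from a real project:

    # Dockerfile -- one file, multiple build targets
    FROM composer:1 AS vendor
    WORKDIR /app
    COPY composer.json composer.lock ./
    RUN composer install --no-dev --no-scripts

    FROM php:7.2-fpm AS fpm
    WORKDIR /app
    COPY --from=vendor /app/vendor ./vendor
    COPY . /app

    FROM nginx:1.15 AS web
    COPY public/ /usr/share/nginx/html/

    # docker-compose.yml (fragment) -- each service builds only its own target
    version: "3.4"
    services:
      fpm:
        build:
          context: .
          target: fpm
      web:
        build:
          context: .
          target: web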

When everything that is common across the environments is bundled within the images, what remains is configuration in the form of docker-compose.yml and other files:
  • which container images should be running and exposing which ports
  • which commands and arguments should be passed to the various images when they are started
  • environment variables to pass to the various containers
  • configuration files that can be mounted as volumes
Images would typically have a default configuration file in the right place, or be able to work without one. A docker-compose configuration can then override that default with a custom configuration file, as needed.
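In practice, the per-environment docker-compose.yml ends up being a short file along these lines (image names, ports and file paths are hypothetical):

    # docker-compose.yml for one environment -- configuration only, no build steps
    version: "3.4"
    services:
      app:
        image: registry.example.com/service-fpm:1.2.3
        environment:
          - APP_ENV=staging
        volumes:
          - ./config/app.staging.ini:/etc/app/app.ini:ro
      web:
        image: nginx:1.15
        ports:
          - "80:80"
        volumes:
          - ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro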

One last responsibility of portable Docker images is their definition of a basic HEALTHCHECK. This means an image has to ship enough basic tooling to, for example, load a /ping path on its own API and verify a 200 OK response is coming out. In the case of classic containers like PHP FPM or a WSGI Python container, this implies some tooling will be embedded into the image to talk to the main process through that protocol rather than through HTTP.
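For an image that exposes HTTP directly, the health check can be as small as the sketch below; the /ping path and the port are assumptions, and the image must ship curl for this to work:

    # Dockerfile (fragment) -- fail the check on any non-2xx response
    HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
      CMD curl --fail http://localhost:8080/ping || exit 1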

It would be a pity to reinvent the lifecycle management of the container (being started, then healthy or unhealthy after a series of probes), when we can define a simple command that both docker-compose and actual orchestrators like Kubernetes can execute to detect the readiness of new containers after a deploy. I used to ship smoke tests alongside the configuration files, but these have largely been replaced by polling for a health status on the container itself.

Image size

Multi-stage builds are certainly the tool of choice to keep images small: perform expensive work in separate stages, and whenever possible only copy files into the final stage rather than executing commands that use the filesystem and bloat the image with their leftover files.

A consolidated RUN command is also a common trick to bundle together different processes like apt-get update and rm /var/lib/apt/lists/* so that no intermediate layers are produced, and temporary files can be deleted before a snapshot is taken.
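For example (the packages installed here are placeholders):

    # Dockerfile (fragment) -- a single layer that cleans up after itself
    RUN apt-get update \
        && apt-get install -y --no-install-recommends ca-certificates curl \
        && rm -rf /var/lib/apt/lists/*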

To find out where this optimization is needed, however, some introspection is required. You can run docker inspect over a locally built image to check its Size field and then docker history to see the various layers. Large layers are hopefully shared between one image and the next if you are deploying to the same server. Hence it pays to verify that, if the image is big, most of its size comes from ancestor layers that seldom change.

A final warning about sizes is related to images with many small files, like node_modules/ contents. These images may exhaust the inodes of the host filesystem well before they fill up the available space. This doesn't happen when deploying source code to the host directly, as files can be overwritten, but every new version of a Docker image being deployed can easily result in a full copy of folders with many small files. Docker's prune commands often help by targeting various instances of containers, images and other leftovers, whereas df -i (as opposed to df -h) diagnoses inode exhaustion.
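For reference, the introspection and cleanup commands mentioned in this section (the image name is a placeholder):

    # total size of a locally built image, in bytes
    docker inspect --format '{{ .Size }}' registry.example.com/service-fpm:1.2.3

    # size and provenance of each layer
    docker history registry.example.com/service-fpm:1.2.3

    # remove stopped containers, dangling images and other leftovers
    docker system prune

    # check inode usage, as opposed to df -h for disk space
    df -i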

Underlying nodes

Shipping most of the stack in a Docker image makes it easier to change it as it's part of an immutable artifact that can be completely replaced rather than a stateful filesystem that needs backward compatibility and careful evolution. For example, you can just switch to a new APT repository rather than transition from one to another by removing the old one; only install new packages rather than having to remove the older ones.

The host VMs become leaner and lose responsibilities, becoming easier to test and less variable; you could almost say all they have to run is a Docker daemon and very generic system software like syslog, but nothing application-specific apart from container dependencies such as providing a folder for config files to live on. Whatever Infrastructure as Code recipes you have in place for building these VMs, they will become easier and faster to test, with the side-effect of also becoming easier to replace, scale out, or retire.

An interesting side effect is that most of the early stages of project pipelines lost the need for a dedicated CI instance to deploy to. In a staging environment you actually need to replicate a configuration similar to production, like using a real database; but in the first phases, where the project is tested in isolation, the test suite can effectively run on a generic Jenkins node that works for all projects. I wouldn't run multiple builds at the same time on such a node as they may conflict on host ports (everyone likes to listen on localhost:8080), but as long as the project cleans up after failure with docker-compose down -v or similar, a new build of a wholly different project can be run with practically no interference.
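The cleanup at the end of such a build can then be a single command, run whether the build passed or failed:

    # tear down the containers, networks and anonymous volumes created by this build
    docker-compose down -v --remove-orphans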

Transition stages

After all this care in producing good images and cleaning up the underlying nodes, we can look at the stages in which a migration can be performed.

A first rough breakdown of the complete migration of a service can be aligned on environment boundaries:
  1. use containers to run tests in CI (xUnit tools, Cucumber, static checking)
  2. use containers to run locally (e.g. mounting volumes for direct feedback)
  3. roll out to one or more staging environments
  4. roll out to production
This is the path of least resistance, and correctly pushes risk first to less important environments (testing) and only later to staging and production; hence you are free to experiment and break things without fear, acquiring knowledge of the container stack for later on. I think it runs the risk of leaving some projects halfway, where the testing stages have been ported but production and staging still run with the host-checks-out-source-code approach.

A different way to break this down is to perform the split by considering the individual processes involved. For example, consider an application with a server listening on some port, a CLI interface and a long-running process such as a queue worker:
  1. start building an image and pulling it on each environment, from CI to production
  2. try running CLI commands through the image rather than the host
  3. run the queue worker from the image rather than the host
  4. stop old queue worker
  5. run the server, using a different port
  6. switch the upper layer (nginx, a load balancer, ...) to use the new container-based server
  7. stop old server
  8. remove source code from the host
Each of these slices can go through all the environments as before. You will be hitting production sooner, which means Docker surprises will propagate there (it's still not as stable as Apache or nginx); but issues that can only be triggered in production will happen on a smaller part of your application, rather than as a big bang of the first production deploy of these container images.

If you are using any dummy project, stub or simulator, they are also good candidates for being switched to a container-based approach first. They usually won't get to production however, as they will only be in use in CI and perhaps some of the other testing environments.

You can also see how this piece-wise approach lets you run both versions of a component in parallel, move between one and the other via configuration and finally remove the older approach when you are confident you don't need to roll back. At the start using a Docker image doesn't seem like a huge change, but sometimes you end up with 50 modified files in your Infrastructure as Code repository, and 3-4 unexpected problems to get them through all the environments. This is essentially Branch by Abstraction applied to Infrastructure as Code: a very good idea for incremental migrations applied to an area that normally needs to move at a slower pace than application code.

Friday, December 28, 2018

Delivery pipelines for CDNs

https://www.fastly.com/network-map
In the last couple of years I have integrated Content Delivery Networks into various eLife applications, managing objects ranging from static files and images to dynamic HTML. These projects mainly consisted of:
  • implementing Infrastructure as Code for these CDNs inside the Github repositories we already use for all other cloud resources (AWS and GCP)
  • authorize HTTPS on the CDN side, since the CDN will be impersonating your origin servers
  • create instances of the same CDN services, first in testing and then in production environments, keeping them in parity with each other
  • expand end-to-end testing (the tip of the pyramid) to also cover the CDNs rather than just the applications involved
  • integrate logging in order to catch any problem happening between the user and the origin servers
  • finally phase in the new CDNs with new geotagged DNS entries
Our first implementation from 2016 was widely integrated into AWS and as such CloudFront was the chosen solution. We subsequently switched to Fastly for all ordinary traffic, experiencing a general increase in features, customization and expenses. What follows is a comparison that isn't just meant to orient the reader between CloudFront and Fastly, but also against the third option of not using a CDN at all. In fact, there are many concerns that may be glossed over but that you need to take seriously into account when you move your web presence from a few origin servers to a global network of shared, locked-down servers managed by an external organization.

Infrastructure as Code

Our AWS-based setup makes heavy use of CloudFormation, the native service for declaratively specifying resources such as servers, load balancers and disks. The simple setup has been augmented over the years by a code generation layer for the CloudFormation templates; this Python code reduces duplication between the various templates by starting from standard EC2/ELB/EBS resources that can be customized in size and other parameters.
If we start from a simple single-server setup for a microservice (this was before Docker containers got stable enough), we are looking at a template containing at least an EC2 instance and a DNS entry pointing to it. With multiple servers, we expand this with a load balancer that pulls in a TLS certificate provided to IAM by an administrator.
To configure CloudFront via CloudFormation, an additional resource for the CDN distribution is introduced. All the configuration you need will be visible in this resource, a JSON or YAML dictionary respecting a certain schema.
Since CloudFormation can only manage AWS resources and nothing outside that walled garden, Fastly was the reason for introducing Terraform alongside it. Whereas almost anything AWS-specific still goes through CloudFormation, Terraform has opened up new roads such as Infrastructure as Code implementations for Google Cloud Platform (storage buckets and BigQuery tables).
Applying changes in this context is not trivial, as you may inadvertently reboot or destroy a server while believing you were only changing a minor setting. Yet Infrastructure as Code is about making the current state of infrastructure and all changes visible, easy to review and safe to roll out across multiple environments. It is imperative therefore to maintain testing environments created with the same tooling as production, and to use them to ultimately integration test all changes.
The caveat of using multiple tools in lockstep for the same instance of a project (including servers, cloud resources and CDNs) is that they can't declare dependencies between resources managed by different tools. For example, since we manage DNS in CloudFormation and Fastly CDNs in Terraform, we can update both at the same time but can't couple together the existence of a DNS entry and the CDN it points to, or impose a creation or update order that is different from the general order we run the tools in.
The most glaring difference in updates rollout between the various options is that, to rollout a CDN configuration change, it takes:
  • no deployment time if you don't use a CDN (obviously)
  • tens of seconds for Fastly
  • tens of minutes (up to 1 hour was common) for CloudFront
This means Fastly opens up the possibility for experimentation, even if with slower feedback than your local TDD cycle. With CloudFront this is painful and haphazard, as you decide on a change, start applying it and come back one hour later to check its effects, after having already switched to another task.
Still, minutes of update and/or creation time make Fastly unsuitable for inclusion in the CI environments where the tests of a single service run. You could in theory create a Fastly service on the fly when the build of the service runs, but this would add minutes to your build _and_ also promote coupling to the CDN itself. Fast forward this a bit and you'll see an application that can no longer be run locally for exploration because of the missing CDN layer. Therefore, like cloud services, the CDN is treated as a long-lived resource, with its regression testing performed in a shared environment on every new application commit, but after merge.

Logging

Within a web service, you usually have some kind of access log being generated by nginx or Apache. These logs can sit on a single server or can be uploaded to some aggregation point, whether it is a local Logstash or an external platform that can index them.
Even load balancing doesn't change this picture very much, as the load balancers' logs should be identical to the ones of the application servers if everything is working well. But with a CDN, large-scale caching is introduced and so it's plausible that you will stop directly seeing a large percentage of your traffic. Statistics or monitoring based on access logs may get skewed; or worse, Japan may be cut off from your website for a while because the health checks from the CDN points of presence there have a timeout a few milliseconds too short to reach your servers in us-east-1 (of course this never happened).
Hence, to understand what's going on in those few hundred servers you have no access to, you need a way to stream their logs to some outsourced service; this can be storage as a service (S3 or GCS) or directly some log infrastructure provider. The latency with which logs get to the right place is a key metric of the feedback loop on changes.
Since we are striving for Infrastructure as Code, all the logging configuration should be kept under version control together with hostnames and caching policies. We got to a standard logging format (JSON Lines with certain fields) and frequency, along with a GCS bucket where new entries are put, with bucket names following conventions. This was later expanded into BigQuery tables providing queries over the same data, after the Terraform Fastly provider started supporting this delivery mechanism.
The main difficulty in integration was credentials management: you aren't told much if credentials are not correct or not authorized to perform certain actions like writing to BigQuery. Moreover, you can't just commit a bunch of private keys for anyone to see, especially since Infrastructure as Code repositories tend to be made very visible to as many people as possible.
We ended up putting GCP credentials and similar secrets in Vault, running on the same server as the Salt master (same thing as a Puppet master). The GCP Service Account itself and its permissions to write to the bucket needed some special permissions to set up (it's turtles all the way down), so we couldn't put it directly into Infrastructure as Code and had an admin create it manually instead. The ideal thing would be for Vault to generate credentials by itself, following the pattern of periodically rotating them. But then it would need to push these credentials somehow into the Fastly configuration, and I'm here to provide efficient delivery pipelines, not to make cloud giants wrestle.

Flexibility

Your own application is usually highly customizable, with a certain cost associated. You have to write some code in your favorite programming language, possibly following some framework conventions and calling your classes Middleware or EventListener.
CDNs work on shared servers, so they have limits on what can be safely run in that sandboxed environment. Nevertheless, Fastly provides the possibility to customize the VCL that runs each service with your own snippets and macros.
This is very flexible, perhaps even too much: you can introduce headers with random values, write conditionals and implement loops by restarting requests. It feels similar to working in nginx configurations but with a more predictable language.
The main problem with this form of customization is that there is no way to run it or test it on your own. The best feedback loop we found is the Fastly Fiddle (similar to JS Fiddle) where you test out bits of code, hit a save button and see it propagated to servers around the world for you to test.
The fact that this even exists is impressive, but you can imagine how well it works for actual development. Once you get past experimenting, you can't integrate a Fiddle with your own Infrastructure as Code approach (e.g. Terraform templates) nor easily port code from one to another besides copying and pasting. You can run integration-only tests in some other window, but the feedback loop can't be shorter than the deployment time; unit tests are not a thing. You can't even use your IDE as much as you may love it. In the end, Fastly's Varnish diverged from the open source one 4 major versions ago; hence, this VCL is a proprietary language and you'll feel the same as writing stored procedures in Oracle's PL/SQL.
I tend to see VCL and other intermediate declarative templates (such as Terraform .tf files) as a generation target for Infrastructure as Code to compile to. This lets you unit test that your tools generate a certain output for these templates; use dummy inputs in tests and check dummy expected outputs; all of this will still need to be integration tested with the application itself in a real environment, but some of the responsibilities can be developed in the tool itself and reused across many applications.

Integration testing

We have understood by now that to keep the ensemble of servers, code, cloud services and CDNs working together, we need some automated integration testing in place that touches all the different pieces. We don't want many scenarios to be tested at this level because it's slow and brittle to do so, but we need a tracer bullet that goes through everything, if only to verify all configurations are correct.
In the general context of outsourcing of responsibilities to a service or a library, you still own it as a dependency of your application and still need to verify the emergent behavior of custom code and borrowed architecture.
Therefore, I always put at least a staging environment in place, replicating production, where automated tests can run. This doubles as the place to try rolling out infrastructure updates that are risky (which are risky? If you have to ask, all of them; just roll out everything through staging).
As we have seen, creating too many different, ad-hoc environments to test pull requests doesn't scale; this will reach death by feature branch as all of your Jenkins nodes are waiting for yet one more RDS node or CloudFront distribution to be created.
A common example of a coupled, integration-related feature to test is the forwarding of Host and other headers; these go through so many layers: a couple of CDN servers, a load balancer, an nginx daemon and finally the application. Some headers don't just have to be forwarded, but have to be rewritten or renamed or added (X-Forwarded-For). All of this can in theory be specified for every single layer but testing the whole architecture probably makes for easier long-term maintenance.
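As a sketch of what such a test can look like, here is a minimal shell check against a hypothetical endpoint that echoes back the request headers it received (hostname and path are made up):

    # verify X-Forwarded-For survives the CDN -> load balancer -> nginx -> application chain
    response=$(curl --silent https://staging--app.example.com/debug/request-headers)
    echo "$response" | grep -q "X-Forwarded-For" \
      || { echo "X-Forwarded-For was not forwarded to the origin"; exit 1; }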

Why?

In various projects you always have to ask yourself why you are doing something (especially complex things) and what value you want to get out of it. CDNs are one of the go-to solutions for web performance, their killer feature being huge caches for slow-changing HTML and assets across the world so that even a casual Indian reader can load your homepage in one second. Moreover, if done right, the load on your origin servers will also be greatly reduced with respect to not using caching layers.
On the other hand, you can see the complexity, observability and maintenance needs that every additional layer introduces. When asking whether a CDN should do something or your application should do something, it's the same decision as for a database or a cloud service: how can you effectively store and update its configuration in multiple environments? Do you want to outsource that responsibility? How will you know when something's wrong? Do you feel comfortable writing stored procedures in a language you can't run on your laptop? All of these are architectural questions to go through when evaluating various CDNs, or no CDN.

Thursday, December 06, 2018

Book review: The 5 Dysfunctions of a team

https://en.wikipedia.org/wiki/The_Five_Dysfunctions_of_a_Team
This is a spur-of-the-moment review of The 5 Dysfunctions of a Team, a business novel on team health that I've read today as part of the quarterly Professional Development Days I take as part of working at eLife.

As a follow-up to my role evolution into Software Engineer in Tools and Infrastructure, I am looking again more into the people skills side of my job (as opposed to purely technical skills). I have done this cyclically during my career, as the coder hat becomes too restrictive and you have to pick up other tools to achieve improvement. In particular, I am working on eLife's Continuous Delivery platform and it is crucial to work with multiple product-oriented teams to have them adopt your latest Jenkins pipelines and Github reports.

Dysfunctions

Patrick Lencioni's model of team dysfunctions (or of blessed behaviors if you flip all the definitions) is a pyramid where each dysfunction prevents the next level from being reached. It would be a disservice to how well and quickly this is conveyed in the novel to just try to list them here, but if I had to summarize it in a long paragraph it would look like:
Building trust between team members allows constructive conflicts, which enable people to commit to action and hold each other accountable for what has been decided; all in the service of results. -- not really a quote
The dysfunctions are the flip side of these positive behaviors, for example lack of trust or fear of conflict. The definitions of some of these terms are more precise than what you find in many Agile and business coaching books; so don't dismiss trust as just a buzzword, for example.

Some context

The case being treated in the book, which comes from the author's management consulting firm, is that of a CEO turning around a team of executives. This makes for a somewhat more fascinating view of results as it's talking about a (fictional) company's IPO or eventual bankruptcy. Despite not being a clear parallel to a software development team, I do think this is applicable in every situation where professionals are paid to work daily together, with some caveats.
In fact, I suspect the level of commitment to the job that you see in the book would be typical of either high stakes roles (executives) or a generally healthy organization that has already removed common dysfunctions at the individual level. If in your organization:
  • people are primarily motivated by money
  • they look forward to 5 PM
  • they browse Facebook and Twitter for hours each day
then there are personal motives that have to be addressed before teams can start thinking about collective health.

Yes but, what can I do in practice?

After the narrative part, an addendum to the book contains a self-administered test to zoom in on which possible dysfunctions your team may exhibit at the moment. It continues with a series of exercises and practices that address these topics, with an estimation of their time commitments or how difficult they would be to run. I definitely look forward to anonymously trying out the test with my technical team, out of curiosity for other views.

My conclusions

I think most of the dysfunctions are real patterns, that can only be exacerbated by the currently distorted market for software developers and CV-driven development. The last dysfunction, Inattention to results, is worth many books on its own on how to define those results as employees at all levels are known for optimizing around measurable goals to the detriment of, for example, long-term maintenance and quality.
So don't start a crusade armed by this little book, but definitely keep this model in your toolbox and share it with your team to see if you can all identify areas for collective improvement; it is painfully obvious to say you can't work on this alone!
The author is certainly right when writing that groups of people truly working together can accomplish what any assembly of single individuals could never dream of doing.

Sunday, September 16, 2018

Eris 0.11.0 is out

Eris 0.11.0 has been freshly released, and I'll be listing here various contributions that the project has received that are included in this new version and in the previous one, 0.10.0, which didn't have an associated blog post.

For a full list and links to the relevant pull requests and commits, see the ChangeLog.

0.10

  • The Eris\Facade class was introduced to allow usage outside of a PHPUnit context.
  • Official PHPUnit 7 support was introduced.
  • Fixed a corner case in suchThat()
There are some small backward compatibility breaks with respect to 0.9; they regard unused features (or so I thought) including Generator::contains().

0.11

  • Official PHP 7.2 support
  • Annotations support for configuring behavior that is usually configured through methods: @eris-method, @eris-shrink, @eris-ratio, @eris-repeat, @eris-duration

Some acknowledgements

Most of this work comes from contributions, not from me. I'd like to say a word of thanks to the people that have taken the time to use Eris in some of their projects but also to feed back a fix, an extension, or a substantial improvement.

Sunday, January 21, 2018

Book review: Production-ready microservices

https://www.amazon.co.uk/Production-Ready-Microservices-Standardized-Engineering-Organization/dp/1491965975
Production-Ready Microservices is a short book about consistently practicing architecture and design over a fleet of microservices.
In general, I think the principles described here apply very much to any service-oriented initiative, even more so if the services are coarse grained and hence require more maintenance than finely isolated ones.

Uber

The book extrapolates from the author's experience at Uber "standardizing over a thousand microservices". Given a few developers for each microservice team, that makes up 2000-3000 engineers from the total >10000 Uber employees (I wonder how many are lawyers). After WhatsApp's famous story of being acquired at 55 employees in total, that really highlights the difficulty level of running a business and operations all over the physical world (sending cars and drivers around in dozens of countries) with respect to a digital-only enterprise. We should remember this and many other directions of change the next time we hear a technology advocate saying how much the cost of his 2-person startup has been reduced by $technology.

The main message

You should be this tall to use microservices: this architecture doesn't necessarily fit every context, although integrating separate services of some size is becoming a standard after the API revolution (before that, it was integration through the database, which is arguably worse).
You will encounter many different social and technical problems, such as:
  • Inverse Conway's Law, with the shape of the products defining the shape of the company. Although I found out this doesn't really apply at smaller scales as development teams can own more than one service and experience a successful decoupling between people and code.
  • Technical sprawl, where multiple languages, databases and other key choices spread without a consistent, central planning.
  • More ways to fail: distributed and concurrent systems are more difficult to work with and to reason upon, plus the fact that there are more servers, containers or applications will simply multiply the failures you'll see.
There are lots of non-functional requirements like scaling each microservice and isolating it from the rest of the fleet; perhaps don't go too micro- if you don't have the resources to ensure an acceptable level in each service. Perhaps in your context the acceptable SLA for some particular service is low, because it doesn't change often, or is only internally facing, or is only used a few times per day.
One particular aspect of the "consistency is important" lesson is that the whole lifecycle of services should be considered. Maintenance and even decommissioning are as important as producing new MVPs: but I've seen many times services being neglected, or being considered very easy to migrate away from once some new shiny substitute was available. In reality, it takes time and effort to keep services up and running, and to finally kill them when you have an alternative, as data and users are slowly migrated from the old to the new platform.
Lots of requirements are also overlooked but often turn out to be important as you increase your population of services: the scalability of a single endpoint, fault tolerance, even documentation (ADRs are the only form I trust very much right now in a fast-moving organization). Every single section of this book will make you think about these concerns, but won't give much of an overview: you're better served by reading the SRE book, for example.

Value for money

This book is a short read which gives you an overview of what microservices challenges you're likely to face down that rabbit hole; in particular, it focuses on a medium-to-large organization context. I'm not sure this book is worth the price tag however: 20 pounds for a Kindle edition of ~170 pages, where ~25 pages are glossary, index and lots of checklists.

Wednesday, January 03, 2018

Book review: Algorithms to live by

https://www.amazon.com/Algorithms-Live-Computer-Science-Decisions/dp/1627790365
Algorithms to Live By: The Computer Science of Human Decisions is a book that puts together the domains of computer science and real life. The ensemble of topics being touched is wide. The book treats deterministic algorithms such as optimal sorting, but then moves on to more context-dependent strategies for caching and scheduling. The last chapters even get to model identification, (tractable and intractable-made-tractable) optimization problems, stochastic algorithms and game theory.

All the while, computer science concepts are compared to conscious and unconscious human processes. For example, caching and the memory hierarchy have great parallels with how the human brain recollects memories of recent events, and how we can augment our brain with external, slower supports like paper. Scheduling is useful not only to allocate processes on CPU cores, but also to make an explicit choice of strategy when prioritizing the tasks that you or your team face. Up to the more extreme examples of game theory and mechanism design, when the incentive system becomes more important than the individual agents (does "manage the system, not the people" ring a bell?)

If you like viewing the world through the lens of algorithms and seeing how the strategies of humans and computers compare with each other, I would strongly recommend this book, as it makes for an entertaining read and leaves you with some principles to take away for real-life usage (I hope sorting socks will be easier now). Skip it if you have a very wide knowledge of computer science, operations research, Nash equilibria... but even though I was familiar with the technical part, I was missing the connection to different domains and everyday, real-world problems. I listened to the audiobook version, which lasts about 12 hours. You may find it easier to skim through some chapters if you are more (or less) interested in some topics. The problem with audiobooks is that I can't easily take notes, while highlighting on an e-book reader is quick and lets me recollect all the important gotchas later into a text file.

New role: Software Engineer in Tools and Infrastructure

After working on eLife's testing and deployment infrastructure in 2016, in the last year my responsibilities in the technical team have shifted towards the domain of engineering productivity. Testing is one phase of the development process that is often a bottleneck, but there are many more areas like code reviews, monitoring and infrastructure itself (be it servers or services):
In summary, the work done by the SETs naturally progressed from supporting only product testing efforts to include supporting product development efforts as well. Their role now encompassed a much broader Engineering Productivity agenda. -- Ari Shamash on the Google Testing Blog
Moreover, the team starts from a high level of coverage and design on many projects, to the point that my focus has always been on the provisioning and automation of testing environments, and on large-scale end2end testing.

What seems just a letter on a job title (from SET to SETI) is in fact an alignment of responsibilities so that I am not accidentally mistaken for "the QA guy" but always seen as a problem solver instead.
https://en.wikipedia.org/wiki/Pulp_Fiction
Solving problems and propagating the solution, so that you don't have to solve them over and over again
Roles are always an approximation in a team of generalizing specialists that also distributes and collaborates on some roles, such as that of architecture. But it's helpful in a cross-functional team to have someone dedicated to the task of productivity, whether it is reached through automation, tooling, or continuous improvement.

Thursday, November 30, 2017

Book review: Building Microservices

http://shop.oreilly.com/product/0636920033158.do
Building Microservices: Designing Fine-Grained Systems by Sam Newman is the seminal book on microservices as a concept. It was published at the start of 2015 (that's a long time ago in tech... or is it?), it's focused on high-level topics rather than implementation and hence has aged well.

There are in fact several concepts, both at the methodology and the technical level, that the book does justice to. Here's what turned on the light bulb over my head.

Modeling

Modeling is important, and a whiteboard discussion can save weeks of implementation down the line. Neither Sam Newman nor I am the first to say this. Modeling in microservices, like in Domain-Driven Design, is all built around business capabilities and the shape of your organization (yes, Conway).

Styles of integration

Like for other design choices, especially at the architectural level, it's important to explicitly choose whether to go for a shared database (please don't), synchronous or asynchronous communication; orchestration through a Facade or choreography distributing responsibilities between services; explicit versioning and the kinds of backward compatibility; pushing or pulling data from one physical location to another, and with which granularity of time and entity.
There are also styles of isolation, not just of integration: code reuse is maturely described on a trade-off scale with decoupling. It feels like a pattern book in which these options are given a standard name for further discussion, and evaluated with respect to the contexts in which they work well.

Deployment

Should you go for virtual machines or containers? How do you map services to physical or virtual machines? The book couldn't possibly keep up with the rise of container orchestrators in the last couple of years, so it won't be a complete guide but could give you a sense of the problems that virtual machines create and that we are going to solve in this next generation. What will be the problems that containers create, and most of all how to solve them, is not in the scope of this book instead.

Testing

After the basics like a testing pyramid, I don't find myself in complete agreement with the large-scale testing strategies proposed here, like consumer-driven contract testing. Yes, it works well enough if you can specify a formal contract that a service should adhere to, and test it in isolation in the implementing service. But the overhead of doing so, in a context in which we are supposed to create dozens if not hundreds of services, is very significant.
At eLife we have relied on a wide RESTful API specification, each group of endpoints implemented by a different service. As such, the overhead is limited, and this is just a description for validating requests and responses rather than a full contract, as most of these services are read-only.
All in all, I find myself relying on the end2end beast to get heterogeneous services, written in multiple languages, at different times by different people, to talk together reliably. I would have to spend a lot of energy trying to square the circle of contracts, but I suppose they work well at a higher scale of traffic or on selected services.
The end2end tests we use are limited in their scope, constrained by being at the top of the pyramid; they do not necessarily cover a full end2end scenario but rather a data path involving more than one service, often skipping the user interface. It helps that we have no Selenium-based testing in the end2end layer, as the user interface is fully accessible to an HTML parser and requires no Javascript.
The problems that we encounter daily happen in production all the same: timeouts, dirty data, the automation challenge of turning on and off new nodes reliably, the race conditions that come from distributed executions. I'd rather not hide these problems but solve them, and I'm looking at containers instead to try to shrink the big picture and have a simpler end2end environment, easier to spin up and down, or to provide with clean databases.

Hidden gems

There are several hidden gems that would let you pick the brain of the author on a common problem cited in a chapter. For example, we have such a problem in integrating with a CRM that doesn't even support PHP 7 (what is it with CRMs and always being sources of technical debt?) There are some example patterns that you could apply in that situation, like hiding the CRM behind a specialized service that cleans up its API. Nothing miraculous, but a glimmer of hope for these desperate situations.

Conclusions

If you are going to work with microservices, or milliservices, or small enough services, this book is worth a read. If you are only having troubles with a particular area, such as testing or security, going through a single chapter will give you a big picture before you go in depth with further sources.
Remember that this book is starting to become a bit dated, so you cannot take highly technical lessons from it (and I doubt books are a great tool for those in general in this fast-moving environment). Think about your context, learn the theory, and fill in the parts in which the map is blank (or erased) with what you are learning from the web in 2017 (soon to be 2018 - attempt to make this review valid for one more year).

Monday, April 03, 2017

Pipeline Conf 2017

Not liquorice
Post originally shared on eLife's internal blog, but in the spirit of (green) open access here it is.

Last month I have attended the PIPELINE conference in London, 2017 edition. This event is a not-for-profit day dedicated to Continuous Delivery, the ability to get software changes into the hands of users, and to do so safely, quickly, and in a sustainable way. It is run by practitioners for practitioners, everyone on different sides of the spectrum like development, operations, testing, project management, or coaching.

The day is run with parallel tracks, divided into time slots of 40-minute talks and breaks for discussions and, of course, some sponsor pitches. I have been picking talks from the various tracks depending on their utility to eLife's testing and deployment platform, since our tech team has been developing every new project with this approach for a good part of 2016.

The conceptual model of Continuous Delivery and of eLife's implementation of it is not dissimilar to the scientific publishing process:
  • there is some work performed into an isolated environment, such as a laboratory, but also someone's laptop;
  • which leads to a transferable piece of knowledge, such as a manuscript, but also a series of commits, roughly speaking some lines of code;
  • which is then submitted and peer reviewed. We do so through pull requests, which perform a series of automated tests to aid human reviewers inside the team; part of the review is also running the code to reproduce the same results on a machine which is not that original laptop.
  • after zero or more rounds of revisions, this work gets accepted and published...
  • which means integrating it with the rest of human knowledge, typesetting it, organizing citations and lots of metadata about the newly published paper. In software, the code has to be transformed into an efficient representation, or virtual machines have to be configured to work with it.
  • until, finally, this new knowledge (or feature) is in the hands of a real person, who can read a paper or enjoy the new search functionalities
Forgive me for the raw description of scientific work.

In software, Continuous Delivery tries to automate and simplify this process to be able to perform it on microchanges multiple times per day. It aims for speed, to be able to bring a new feature live in tens of minutes; it aims for safety, to avoid breaking the users' work with new changes; and it does all of this in a sustainable way, so as not to sacrifice tomorrow's ability to evolve for a quick gain today.

Even without the last mile of real user traffic, the 2.0 software services have been running in production or production-like servers from the first weeks of their development. A common anti-pattern in software development is to say "It works on my machine" (imagine someone saying "It reproduces the results, but only with my microscope"); what we strive for is "It works on multiple machines, that can be reliably created; if we break a feature we know within minutes and can go back to the latest version known to work."

Dan North: opening keynote

Dan North started to experiment with Continuous Delivery in 2004, at a time when builds were taking two and a half days to run in a testing environment contended by multiple teams. He spoke about several concepts underpinning Continuous Delivery:
  • conceptual consistency: the ability of different people to make similar decisions without coordination. It's a holy grail for scaling the efforts of an organization to more and more members and teams.
  • supportability: championing Mean Time To Repair over Mean Time Between Failures. The three important questions when facing a problem are: what happened? Who is impacted? How do we fix it?
  • operability: what does it feel like to build your software? To deploy it? Test it? Release it? Monitor it? Support it? Essentially, developer experience in addition to user experience.
Operability is a challenge we have to face ourselves more and more as we move from running our own platform to providing open source software for other people to use. Not only does reading an article have to be a beautiful experience, but publishing one should be too.

John Clapham: team design for Continuous Delivery

This talk was more people-oriented; I agree with the speaker that engagement of workers is what really drives profits (or value, in the case of non-profits).
Practically speaking:
  • reward the right behaviors to promote the process you want;
  • ignore your job title as everyone's job is to deliver value together;
  • think small: it's easier to do 100 things 1% better than to do 1 thing 100% better (aka aggregation of marginal gains)

Abraham Marin: architectural patterns for a more efficient pipeline

The target for a build is to take less than 10 minutes. The speaker promotes the fastest builds as the ones you don't have to run, introducing a series of patterns (and the related architectural refactorings) to be executed, safely, to simplify your software components:
  • decoupling an API from its implementation: extracting an interface package to reduce dependencies on a component to a dependency on an interface;
  • dividing responsibilities vertically or horizontally, trying to isolate the most frequent changes and minimizing cross-cutting requirements;
  • transform a library into a service;
  • transform configuration into a service.
Some of these lessons are somewhat oriented to compiled languages, but not limited to them. My feeling is that even if you reduce compile times, you still have to test some components in integration, which is a large source of delay.

Steve Smith: measuring Continuous Delivery

How do you know whether a Continuous Delivery effort is going well? Or, more pragmatically, which of your projects is in trouble?
The abstract parameters to measure in pipelines are speed (throughput, cycle time) and stability. Each takes a different concrete form depending on the context.
In deployment pipelines that go from a commit to a new version released in production, lead time and the interval between new deployments can be measured; failure rate (how many runs fail) and failure recovery time are also interesting. In more general builds or test suites, execution time is a key parameter, but a more holistic view also includes the interval (how frequently builds are executed).
I liked some of these metrics so much that they are now in my OKRs for the new quarter. Simplistic quote: you can't manage what you can't measure.

Alastair Smith: Test-driving your database

To continuously deploy new software versions, you need an iterative approach to evolve your database and the data within it. When you evolve, you also have to test every new schema change. Even in the context of stored procedures for maximum efficiency (and lock-in), Alastair showed how to write tests that can reliably run on multiple environments.

Rachel Laycock: closing keynote, Continuous Delivery at Scale

Rachel Laycock is the Head of technology for North America at Thoughtworks, the main sponsor of the conference. The keynote however had nothing to do with sales pitches. Here are some anti-patterns:
  • "We have a DevOps team" is an oxymoron, as that kind of team doesn't exist; what often happens is that the Ops team gets renamed.
  • "Do we choose Kubernetes or Mesos?", as in getting excited about the technology before you understand the problem to solve.
The "at scale" in the title pushes for seeing automation as a way to build a self-service platform, where infrastructure people are not bottlenecks but enablers for the developers to build their own services.
The best quote however really was "yesterday's best practice becomes tomorrow's anti-pattern". What we look for is not to be the first to market but to have an adaptable advantage, a product that can evolve to meet new demands rather than being a dead end.
