Thursday, November 30, 2017

Book review: Building Microservices

http://shop.oreilly.com/product/0636920033158.do
Building Microservices: Designing Fine-Grained Systems by Sam Newman is the seminal book on microservices as a concept. It was published at the start of 2015 (that's a long time ago in tech... or is it?), but since it focuses on high-level topics rather than implementation, it has aged well.

There are in fact several concepts, both at the methodology and the technical level, that the book does justice to. Here's what turned on the light bulb over my head.

Modeling

Modeling is important, and a whiteboard discussion can save weeks of implementation down the line. Neither Sam Newman nor I am the first to say this. Modeling in microservices, as in Domain-Driven Design, is built around business capabilities and the shape of your organization (yes, Conway).

Styles of integration

As with other design choices, especially at the architectural level, it's important to explicitly choose whether to go for a shared database (please don't); synchronous or asynchronous communication; orchestration through a facade or choreography that distributes responsibilities between services; explicit versioning and the kind of backward compatibility to maintain; pushing or pulling data from one physical location to another, and with which granularity of time and entity.
There are also styles of isolation, not just of integration: code reuse is maturely described on a trade-off scale against decoupling. It feels like a pattern book in which these options are given a standard name for further discussion, and evaluated with respect to the contexts in which they work well.
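To make the orchestration/choreography distinction concrete, here is a minimal sketch; all the class names (OrderFacade, EventBus, OrderPlaced and the collaborating services) are invented for illustration, not taken from the book:

```php
// Orchestration: a facade owns the workflow and calls each collaborator explicitly.
class OrderFacade
{
    private $payments;
    private $shipping;

    public function __construct(PaymentService $payments, ShippingService $shipping)
    {
        $this->payments = $payments;
        $this->shipping = $shipping;
    }

    public function placeOrder(Order $order)
    {
        $this->payments->charge($order);   // the facade knows every step...
        $this->shipping->schedule($order); // ...and is coupled to every service
    }
}

// Choreography: the order service only announces the fact; payments and shipping
// subscribe to OrderPlaced on their own and react independently.
class OrderService
{
    private $bus;

    public function __construct(EventBus $bus)
    {
        $this->bus = $bus;
    }

    public function placeOrder(Order $order)
    {
        $this->bus->publish(new OrderPlaced($order->getId()));
    }
}
```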

Deployment

Should you go for virtual machines or containers? How do you map services to physical or virtual machines? The book couldn't possibly keep up with the rise of container orchestrators in the last couple of years, so it won't be a complete guide; it can, however, give you a sense of the problems that virtual machines create and that this next generation of tools is going to solve. Which problems containers will create, and above all how to solve them, is outside the scope of this book.

Testing

After the basics such as the testing pyramid, I don't find myself in complete agreement with the large-scale testing strategies proposed here, such as consumer-driven contract testing. Yes, it works well enough if you can specify a formal contract that a service should adhere to, and test it in isolation against the implementing service. But the overhead of doing so, in a context in which we are supposed to create dozens if not hundreds of services, is very significant.
At eLife we have relied on a wide RESTful API specification, with each group of endpoints implemented by a different service. As such, the overhead is limited, and this is just a description for validating requests and responses rather than a full contract, since most of these services are read-only.
All in all, I find myself relying on the end2end beast to get heterogeneous services, written in multiple languages, at different times and by different people, to talk to each other reliably. I would have to spend a lot of energy trying to square the circle of contracts, but I suppose they work well at a higher scale of traffic or on selected services.
The end2end tests we use are limited in their scope, constrained by being at the top of the pyramid; they do not necessarily cover a full end2end scenario but rather a data path involving more than one service, often skipping the user interface. It helps that we have no Selenium-based testing in the end2end layer, as the user interface is fully accessible to an HTML parser and requires no JavaScript.
The problems that we encounter daily happen in production all the same: timeouts, dirty data, the automation challenge of turning new nodes on and off reliably, the race conditions that come from distributed execution. I'd rather not hide these problems but solve them, and I'm looking at containers to try to shrink the big picture and get a simpler end2end environment, easier to spin up and down, or to provide with clean databases.
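For flavour, validating a response against such a specification can look roughly like this; a sketch using the justinrainbow/json-schema package with a made-up endpoint and schema path, not eLife's actual tooling:

```php
use JsonSchema\Validator;

// fetch a live response and check it against the relevant part of the API description
$response = json_decode(file_get_contents('https://api.example.org/articles/1'));

$validator = new Validator();
$validator->validate(
    $response,
    (object) ['$ref' => 'file://' . realpath('api-spec/article.json')]
);

if (!$validator->isValid()) {
    foreach ($validator->getErrors() as $error) {
        printf("[%s] %s\n", $error['property'], $error['message']);
    }
}
```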

Hidden gems

There are several hidden gems that let you pick the author's brain on a common problem cited in a chapter. For example, we have such a problem in integrating with a CRM that doesn't even support PHP 7 (what is it with CRMs always being sources of technical debt?). There are some example patterns that you can apply in that situation, like hiding the CRM behind a specialized service that cleans up its API; a sketch follows. Nothing miraculous, but a glimmer of hope for these desperate situations.
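A minimal sketch of that pattern; CrmSoapClient, the method and the field names are hypothetical stand-ins for whatever the legacy CRM actually exposes:

```php
// the rest of the system only ever talks to this clean, minimal interface
interface Contacts
{
    public function emailAddressOf(string $contactId): string;
}

// all knowledge of (and workarounds for) the legacy CRM stays confined here
class CrmBackedContacts implements Contacts
{
    private $crm;

    public function __construct(CrmSoapClient $crm)
    {
        $this->crm = $crm;
    }

    public function emailAddressOf(string $contactId): string
    {
        $record = $this->crm->call('GetContactById', ['id' => $contactId]);

        return $record['EMAIL_ADDR_1'];
    }
}
```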

Conclusions

If you are going to work with microservices, or milliservices, or small enough services, this book is worth a read. If you are only having trouble with a particular area, such as testing or security, going through a single chapter will give you the big picture before you go in depth with further sources.
Remember that this book is starting to become a bit dated, so you cannot take highly technical lessons from it (and I doubt books are a great tool for those in general in this fast-moving environment). Think about your context, learn the theory, and fill in the parts where the map is blank (or erased) with what you are learning from the web in 2017 (soon to be 2018 - an attempt to make this review valid for one more year).

Monday, April 03, 2017

Pipeline Conf 2017

Not liquorice
Post originally shared on eLife's internal blog, but in the spirit of (green) open access here it is.

Last month I attended the 2017 edition of the PIPELINE conference in London. This event is a not-for-profit day dedicated to Continuous Delivery: the ability to get software changes into the hands of users, and to do so safely, quickly, and in a sustainable way. It is run by practitioners for practitioners, coming from different sides of the spectrum such as development, operations, testing, project management and coaching.

The day is run as parallel tracks, divided into time slots of 40-minute talks and breaks for discussions and, of course, some sponsor pitches. I picked talks from the various tracks depending on their usefulness to eLife's testing and deployment platform, since our tech team has been developing every new project with this approach for a good part of 2016.

The conceptual model of Continuous Delivery and of eLife's implementation of it is not dissimilar to the scientific publishing process:
  • there is some work performed in an isolated environment, such as a laboratory, but also someone's laptop;
  • which leads to a transferable piece of knowledge, such as a manuscript, but also a series of commits, roughly speaking some lines of code;
  • which is then submitted and peer reviewed: we do so through pull requests, which run a series of automated tests to aid the human reviewers inside the team; part of the review is also running the code to reproduce the same results on a machine which is not that original laptop;
  • after zero or more rounds of revisions, this work gets accepted and published...
  • which means integrating it with the rest of human knowledge, typesetting it, organizing citations and lots of metadata about the newly published paper. In software, the code has to be transformed into an efficient representation, or virtual machines have to be configured to work with it;
  • until, finally, this new knowledge (or feature) is in the hands of a real person, who can read the paper or enjoy the new search functionality.
Forgive me for the raw description of scientific work.

In software, Continuous Delivery tries to automate and simplify this process so that it can be performed on microchanges multiple times per day. It aims for speed, to bring a new feature live in tens of minutes; it aims for safety, to avoid breaking users' work with new changes; and it aims to do all of this in a sustainable way, without sacrificing tomorrow's ability to evolve for a quick gain today.

Even without the last mile of real user traffic, the 2.0 software services have been running on production or production-like servers since the first weeks of their development. A common anti-pattern in software development is to say "It works on my machine" (imagine someone saying "It reproduces the results, but only with my microscope"); what we strive for is "It works on multiple machines that can be reliably created; if we break a feature we know within minutes and can go back to the latest version known to work."

Dan North: opening keynote

Dan North started to experiment with Continuous Delivery in 2004, at a time when builds took two and a half days to run, in a testing environment contended for by multiple teams. He spoke about several concepts underpinning Continuous Delivery:
  • conceptual consistency: the ability of different people to make similar decisions without coordination. It's a holy grail for scaling the efforts of an organization to more and more members and teams.
  • supportability: championing Mean Time To Repair over Mean Time Between Failures. The three important questions when facing a problem are: what happened? Who is impacted? How do we fix it?
  • operability: what does it feel like to build your software? To deploy it? To test it? To release it? To monitor it? To support it? Essentially, developer experience in addition to user experience.
Operability is a challenge we have to face more and more ourselves as we move from running our own platform to providing open source software for other people to use. Not only does reading an article have to be a beautiful experience: publishing one should be too.

John Clapham: team design for Continuous Delivery

This talk was more people-oriented; I agree with the speaker that the engagement of workers is what really drives profits (or value, in the case of non-profits).
Practically speaking:
  • reward the right behaviors to promote the process you want;
  • ignore your job title as everyone's job is to deliver value together;
  • think small: it's easier to do 100 things 1% better than to do 1 thing 100% better (aka aggregation of marginal gains)

Abraham Marin: architectural patterns for a more efficient pipeline

The target for a build is to take less than 10 minutes. The speaker promoted the fastest builds as the ones you don't have to run, introducing a series of patterns (and the related architectural refactorings) that can be executed, safely, to simplify your software components:
  • decoupling an API from its implementation: extracting an interface package so that a dependency on a component becomes a dependency on an interface (see the sketch after this list);
  • dividing responsibilities vertically or horizontally, trying to isolate the most frequent changes and minimize cross-cutting requirements;
  • transforming a library into a service;
  • transforming configuration into a service.
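As an illustration of the first refactoring, translated into PHP terms (package and class names are invented):

```php
namespace Acme\Search\Contract {
    // published as its own small, rarely-changing package
    interface SearchIndex
    {
        public function add(string $documentId, array $fields);
        public function query(string $terms);
    }
}

namespace Acme\Search\Elastic {
    // the implementation churns in a separate package; consumers depend only on
    // the Contract package, so their builds are not triggered by every change here
    class ElasticsearchIndex implements \Acme\Search\Contract\SearchIndex
    {
        public function add(string $documentId, array $fields)
        {
            // talk to the search backend
        }

        public function query(string $terms)
        {
            return [];
        }
    }
}
```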
Some of these lessons are somewhat oriented to compiled languages, but not limited to them. My feeling is that even if you reduce compile times, you still have to test some components in integration, which is a large source of delay.

Steve Smith: measuring Continuous Delivery

How do you know whether a Continuous Delivery effort is going well? Or, more pragmatically, which of your projects is in trouble?
The abstract parameters to measure in pipelines are speed (throughput, cycle time) and stability; each takes a different concrete form depending on the context.
In deployment pipelines that go from a commit to a new version released in production, you can measure lead time and the interval between new deployments; failure rate (how many runs fail) and failure recovery time are also interesting. In more general builds or test suites, execution time is a key parameter, but a more holistic view also includes the interval (how frequently builds are executed).
I liked some of these metrics so much that they are now in my OKRs for the new quarter. Simplistic quote: you can't manage what you can't measure.
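To make this concrete, the timestamps of deployments are already enough to derive an interval and a rough lead time; a toy sketch with invented data:

```php
// Each deployment records when the oldest included commit was made and when it went live.
$deployments = [
    ['committed_at' => new DateTimeImmutable('2017-03-01 09:12'), 'deployed_at' => new DateTimeImmutable('2017-03-01 10:03')],
    ['committed_at' => new DateTimeImmutable('2017-03-02 14:40'), 'deployed_at' => new DateTimeImmutable('2017-03-02 15:05')],
    ['committed_at' => new DateTimeImmutable('2017-03-06 11:20'), 'deployed_at' => new DateTimeImmutable('2017-03-06 11:58')],
];

// Lead time: how long a change waits before reaching production.
$leadTimesInMinutes = array_map(function ($d) {
    return ($d['deployed_at']->getTimestamp() - $d['committed_at']->getTimestamp()) / 60;
}, $deployments);

// Interval: how much time passes between one deployment and the next.
$intervalsInHours = [];
for ($i = 1; $i < count($deployments); $i++) {
    $intervalsInHours[] = ($deployments[$i]['deployed_at']->getTimestamp()
        - $deployments[$i - 1]['deployed_at']->getTimestamp()) / 3600;
}

printf("Average lead time: %.0f minutes\n", array_sum($leadTimesInMinutes) / count($leadTimesInMinutes));
printf("Average interval: %.1f hours\n", array_sum($intervalsInHours) / count($intervalsInHours));
```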

Alastair Smith: Test-driving your database

To continuously deploy new software versions, you need an iterative approach to evolving your database and the data within it; every new schema change also has to be tested. Even in the context of stored procedures, used for maximum efficiency (and lock-in), Alastair showed how to write tests that can reliably run in multiple environments.
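I won't do justice to his examples, but the basic shape of a schema test is roughly this; a minimal sketch using an in-memory SQLite database purely for illustration, not the stack from the talk:

```php
use PHPUnit\Framework\TestCase;

class AddEmailColumnMigrationTest extends TestCase
{
    public function testUsersTableGainsAnEmailColumn()
    {
        $db = new PDO('sqlite::memory:');
        $db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');

        // the migration under test
        $db->exec('ALTER TABLE users ADD COLUMN email TEXT');

        $columns = array_column(
            $db->query('PRAGMA table_info(users)')->fetchAll(PDO::FETCH_ASSOC),
            'name'
        );
        $this->assertContains('email', $columns);
    }
}
```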

Rachel Laycock: closing keynote, Continuous Delivery at Scale

Rachel Laycock is the Head of Technology for North America at ThoughtWorks, the main sponsor of the conference. The keynote, however, had nothing to do with sales pitches. Here are some anti-patterns:
  • "We have a DevOps team" is an oxymoron, as that kind of team doesn't exist; what often happens is that the Ops team simply gets renamed.
  • "Do we choose Kubernetes or Mesos?", as in getting excited about the technology before you understand the problem to solve.
The "at scale" in the title pushes for seeing automation as a way to build a self-service platform, where infrastructure people are not bottlenecks but enablers for the developers to build their own services.
The best quote, however, really was "yesterday's best practice becomes tomorrow's anti-pattern". What we look for is not to be the first to market but to have an advantage in adaptability: a product that can evolve to meet new demands rather than being a dead end.

Tuesday, March 28, 2017

Book review: Site Reliability Engineering

http://shop.oreilly.com/product/0636920041528.do

The overwhelming majority of a software system’s lifespan is spent in use, not in design or implementation. So, why does conventional wisdom insist that software engineers focus primarily on the design and development of large-scale computing systems? -- the book tagline
Site Reliability Engineering - How Google runs production systems is a 2016 book about the ops side of Google services, and the set of principles and practices that underlie it.

A Site Reliability Engineer is a software engineer (in the developer sense) who designs and implements systems that automate what would otherwise be done manually by system administrators. As such, SREs have a directive to spend a minimum of 50% of their time on development rather than on firefighting and manual maintenance of existing systems (named toil in the book).

The book really is a collection of chapters, so you don't have to be scared by its size: you don't necessarily need to read it cover to cover. You can instead zoom in on the interesting chapters, be they monitoring, alerting, outage tracking or even management practices.

Principles, not just tools

I like books that reify and give names to concepts and principles, so that we can talk about those concepts and refer to them. This book gives precise definitions for the Google version of measures such as availability, Service Level Objectives and error budgets.
This is abstraction at work: even if the examples show Google-specific tools like Borgmon or Outalator, the solutions are described at a higher abstraction level that makes them reusable. When load balancing practices are made generic enough to satisfy all of Google's services, you can bet they are reusable enough to be somewhat applicable to your situation.
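As a taste of how concrete these definitions are, an error budget is little more than arithmetic on top of the availability SLO; a back-of-the-envelope sketch (the 99.9% target is my example, not a figure from the book):

```php
// An availability SLO directly implies how much downtime you are allowed
// to "spend" on releases, experiments and accidents.
$slo = 0.999; // target: 99.9% of minutes (or requests) served correctly
$minutesInMonth = 30 * 24 * 60;

$errorBudgetMinutes = (1 - $slo) * $minutesInMonth;

printf("Monthly error budget: %.1f minutes of downtime\n", $errorBudgetMinutes);
// prints: Monthly error budget: 43.2 minutes of downtime
```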

Caveat emptor

Chances are that you're not Google: the size of your software development effort is orders of magnitude smaller, and even when it is of comparable size, it's not easy to turn an established organization into Google (and probably neither desirable nor necessary).
However, you can understand that Google and the other giants like Facebook and Amazon are shaping our industry through their economically irresistible software and services. Angular vs React is actually Google vs Facebook; containers vs serverless is actually Kubernetes vs Lambda, which is actually Google vs Amazon. The NoSQL revolution was probably started by the BigTable and Dynamo papers... and so on: when you deployed your first Docker container, Google engineers had already been using similar technology for 10 years. As such, they can teach you a lot at relatively little cost through the pages of a book; and it's better to be informed about what may come to a cloud provider near you in the next few years.

Conclusions

It took some time to get through this book, but it gives a realistic picture of running systems that sustain a large scale of traffic and change at the same time. Besides the lessons you can directly get from it, I would recommend it to many system administrators and "devops guys" as a way to think more clearly about which forces and solutions are at play in their datacenters (or, more likely, virtual private clouds).

Sunday, March 12, 2017

Eris 0.9.0 is out

In 2016 I moved to another country and, as a result of this change, I didn't have much time to develop Eris further. Thankfully Eris 0.8 was already pretty much stable, and in this last period I could pick up development again.

What's new?

The ChangeLog for 0.9 contains one big new feature, multiple shrinking. While minimization of failing test cases is usually performed with a single linear search, multiple shrinking features a series of different options for shrinking a value.
For example, the integer 1234 used to be shrunk to 1233, 1232, 1231 and so on. With multiple shrinking, there is instead a series of options to explore that makes the search logarithmic, such as 617, 925, 1080, 1157, 1195, 1214, 1224, 1229, and 1231. If the simplest failing value is below 617, for example, at least (1234-617) runs of the test will be skipped by this optimization, just in the first step.
This feature is the equivalent of QuickCheck's (and other property-based testing libraries') Rose Trees, but implemented here with an object-oriented approach that makes use of `GeneratedValueSingle` and `GeneratedValueOptions` as part of a Composite pattern.
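To see shrinking at work, here is a property in the usual Eris style (a sketch along the lines of the README examples, not taken verbatim from it): when the assertion fails for a random integer such as 1234, the shrinking search walks through those intermediate options to report the smallest failing value it can find, ideally 1000.

```php
use Eris\Generator;
use Eris\TestTrait;

class ShrinkingTest extends \PHPUnit\Framework\TestCase
{
    use TestTrait;

    public function testIntegersStayBelowOneThousand()
    {
        $this->forAll(
            Generator\int()
        )->then(function ($number) {
            // deliberately false for large integers, to trigger shrinking
            $this->assertLessThan(1000, $number);
        });
    }
}
```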

This release also features support for the latest versions of basic dependencies:
  • PHPUnit 6.x is now supported.
  • PHP 7.1 is officially supported (I expect there were mostly no issues in previous releases either, but now the test suite fully passes.)
Several small bugs were fixed as part of feedback from projects using Eris:
  • the pos() and neg() generators should not shrink to 0.
  • float generation should never divide by 0.
  • shrinking of dates fell into a case of wrong operator precedence.
  • reproducible PHPUnit commands were not escaped correctly in the presence of namespaced classes.
A few backward compatibility breaks were necessary to make room for new features:
  • minimumEvaluationRatio is now a method to be called, not a private field.
  • GeneratedValue is now an interface and not a class. This is supposed to be an internal type: project code should never depend on it, and should build custom generators with map() and other composite generators rather than implementing the Generator interface, which is much more complex (see the sketch after this list).
  • the Listener::endPropertyVerification() method now takes an additional $iterations parameter and an optional $exception parameter. When creating listeners you should always subclass EmptyListener, so that the methods you don't implement are inherited and you don't have to touch your listener when the interface changes.
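For example, a generator of even integers can be composed with map(), with no need to touch GeneratedValue or implement Generator; a minimal sketch (to be used inside a test case with the TestTrait, as above):

```php
use Eris\Generator;

// composition with map(): no custom Generator implementation needed
$evenIntegers = Generator\map(
    function ($n) {
        return $n * 2;
    },
    Generator\int()
);

$this->forAll($evenIntegers)
    ->then(function ($number) {
        $this->assertEquals(0, $number % 2);
    });
```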

What's next?

My Trello board says:
  • still decoupling from PHPUnit, for usage in scripts, mainly as a programmable source of randomness.
  • more advanced Generators for finite state machines and in general a more stateful approach, for testing stateful systems.
  • faster feedback for developers, like having the option to run fewer test cases in a development environment but the full set in Continuous Integration.
I'm considering opening up the Trello board for public read-only visibility, as there's nothing sensitive in there, but there is potential value in transparency and in feedback from random people encountering the project for the first time.

As always, if you feel there is a glaring feature missing in Eris, feel free to request it in the project's GitHub issues.

Sunday, February 12, 2017

Book review: Fifty quick ideas to improve your tests

https://leanpub.com/50quickideas-tests
Fifty quick ideas to improve your tests is, well, a series of fifty quick ideas that you can implement on some of your automated test suites to improve their value or lower their creation or maintenance costs.

These ideas are pattern-like in that they are mostly self-contained and often independent from each other. They are distilled from real-world scenarios that the authors (David Evans, Tom Roden and Gojko Adzic) have encountered in their work.

This format helps readability a lot, as ideas are organized into themes, giving you the ability to focus on the area you want to improve and to quickly skip the ideas that do not make sense in your context or that you find impractical or not worth the effort. For the same reasons I enjoyed the book this one is a sequel to, Fifty quick ideas to improve your user stories. Moreover, both were published on Leanpub, so you have the ability to adjust the price to the value you think you'll get out of them; and despite Leanpub's large collection of unfinished books, this one is 100% complete and ready to read without the hassle of having to update to a new version later (who really does that?)

Some selected quotes follow, highlighted phrases mine.
Something one person considers critical might not even register on the scale of importance for someone from a different group. 
Drawing parallels between the different levels of needs, we can create a pyramid of software quality levels: Does it work at all? What are the key features, key technical qualities? Does it work well? What are the key performance, security, scalability aspects? Is it usable? What are the key usability scenarios? Is it useful? What production metrics will show that it is used in real work? Is it successful? 
In order to paint the big picture quickly, we often kick things off with a ten-minute session on identifying things that should always happen or that should never be allowed. This helps to set the stage for more interesting questions quickly, because absolute statements such as ‘should always’ and ‘should never’ urge people to come up with exceptions. 
Finally, when an aspect of quality is quantified, teams can better evaluate the cost and difficulty of measuring. For example, we quantified a key usability scenario for MindMup as ‘Novice users will be able to create and share simple mind maps in under five minutes’. Once the definition was that clear, it turned out not to be so impossible or expensive to measure it.
Avoid checklists that are used to tick items off as people work (Gawande calls those Read-Do lists). Instead, aim to create lists that allow people to work, then pause and review to see if they missed anything (‘Do-Confirm’ in Gawande’s terminology). 
A major problem causing overly complex examples is the misunderstanding that testing can somehow be completely replaced by a set of carefully chosen examples. For most situations we’ve seen, this is a false premise. Checking examples can be a good start, but there are still plenty of other types of tests that are useful to do. Don’t aim to fully replace testing with examples in user stories – aim to create a good shared understanding, and give people the context to do a good job. 
Waiting for an event instead of waiting for a period of time is the preferred way of testing asynchronous systems
The sequence is important: ‘Given’ comes before ‘When’, and ‘When’ comes before ‘Then’. Those clauses should not be mixed. All parameters should be specified with ‘Given’ clauses, the action under test should be specified with the ‘When’ clause, and all expected outcomes should be listed with ‘Then’ clauses. Each scenario should ideally have only one ‘When’ clause that clearly points to the purpose of the test.
Difficult testing is a symptom, not a problem. When it is difficult for a team to know if they have a complete picture during testing, then it will also be difficult for it to know if they have a complete picture during development, or during a discussion on requirements. It’s unfortunate that this complexity sometimes clearly shows for the first time during testing, but the cause of the problem is somewhere else. 
Although it’s intuitive to think about writing documents from top to bottom, with tests it is actually better to start from the bottom. Write the outputs, the assertions and the checks first. Then try to explain how to get to those outputs. [...] Starting from the outputs makes it highly unlikely that a test will try to check many different things at once, 
Technical testing normally requires the use of technical concepts, such as nested structures, recursive pointers and unique identifiers. Such things can be easily described in programming languages, but are not easy to put into the kind of form that non-technical testing tools require. 
For each test, ask who needs to resolve a potential failure in the future. A failing test might signal a bug (test is right, implementation is wrong), or it might be an unforeseen impact (implementation is right, test is no longer right). If all the people who need to make the decision work with programming language tools, the test goes in the technical group. If it would not be a technical but a business domain decision, it goes into the business group. 
Manual tests suffer from the problem of capacity. Compared to a machine, a person can do very little in the same amount of time. This is why manual tests tend to optimise human time [...] Since automated tests are designed for unattended execution, it’s critically important that failures can be investigated quickly. [...] To save time in execution, it’s common for a single manual test to check lots of different things or to address several risks. 
Whenever a test needs to access external resources, in particular if they are created asynchronously or transferred across networks, ensure that the resources are hidden until they are fully complete. 
Time-based waiting [sleep() instead of polling or waiting for an event in tests] is the equivalent of going out on the street and waiting for half an hour in the rain upon receiving a thirty-minute delivery estimate for pizza, only to discover that the pizza guy came along a different road and dropped it off 10 minutes ago. 
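In test code, "waiting for an event" usually boils down to a small polling helper with a timeout; this is my own sketch, not an example from the book (the $search service at the end is hypothetical):

```php
/**
 * Poll a condition until it becomes true, failing once a timeout is exceeded,
 * instead of sleep()ing for a fixed, hopeful amount of time.
 */
function waitFor(callable $condition, $timeoutSeconds = 30, $pollIntervalMicroseconds = 250000)
{
    $start = microtime(true);
    while (microtime(true) - $start < $timeoutSeconds) {
        if ($condition()) {
            return;
        }
        usleep($pollIntervalMicroseconds);
    }
    throw new RuntimeException("Condition not met within {$timeoutSeconds} seconds");
}

// e.g. wait for an asynchronously indexed article to become searchable
waitFor(function () use ($search) {
    return count($search->query('CRISPR')) > 0;
});
```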
Instead of just accepting that something is difficult to test and ignoring it, investigate whether you can measure it in the production environment. 
Test coverage is a negative metric: it measures how bad something is, not how good it is. 
There are just two moments when an automated test provides useful information: the first time it passes and when it subsequently fails. 
It’s far better to optimise tests for reading than for writing. Spending half an hour more writing a test will save days of investigation later on. [...] In business-oriented tests, if you need to compromise either ease of maintenance or readability, keep readability. 
