Thursday, December 06, 2018

Book review: The 5 Dysfunctions of a team

https://en.wikipedia.org/wiki/The_Five_Dysfunctions_of_a_Team
This is a spur-of-the-moment review of The 5 Dysfunctions of a Team, a business novel on team health that I read today during one of the quarterly Professional Development Days I take as part of working at eLife.

As a follow-up to my role's evolution into Software Engineer in Tools and Infrastructure, I am looking again into the people skills side of my job (as opposed to the purely technical skills). I have done this cyclically during my career, whenever the coder hat becomes too restrictive and I have to pick up other tools to keep improving. In particular, I am working on eLife's Continuous Delivery platform, where it is crucial to work with multiple product-oriented teams to have them adopt your latest Jenkins pipelines and GitHub reports.

Dysfunctions

Patrick Lencioni's model of team dysfunctions (or of blessed behaviors, if you flip all the definitions) is a pyramid where each dysfunction prevents the next level from being reached. It would be a disservice to how well and quickly the novel gets this across to just list them here, but if I had to summarize the model in one long paragraph, it would look like:
Building trust between team members allows constructive conflicts, which enable people to commit to action and hold each other accountable for what has been decided; all in the service of results. -- not really a quote
The dysfunctions are the flip side of these positive behaviors, for example lack of trust or fear of conflict. The definitions of some of these terms are more precise than what you find in many Agile and business coaching books; so don't dismiss trust as just a buzzword, for example.

Some context

The case treated in the book, which comes from the author's management consulting firm, is that of a CEO turning around a team of executives. This makes for a somewhat more fascinating view of results, as it's talking about a (fictional) company's IPO or eventual bankruptcy. Despite not being a clear parallel to a software development team, I do think the model is applicable in every situation where professionals are paid to work together daily, with some caveats.
In fact, I suspect the level of commitment to the job that you see in the book would be typical of either high stakes roles (executives) or a generally healthy organization that has already removed common dysfunctions at the individual level. If in your organization:
  • people are primarily motivated by money
  • they look forward to 5 PM
  • they browse Facebook and Twitter for hours each day
then there are personal motives that have to be addressed before teams can start thinking about collective health.

Yes, but what can I do in practice?

After the narrative part, an addendum to the book contains a self-administered test to zoom in on which dysfunctions your team may exhibit at the moment. It continues with a series of exercises and practices that address these topics, with an estimate of their time commitment and of how difficult they would be to run. I definitely look forward to trying out the test anonymously with my technical team, out of curiosity about other people's views.

My conclusions

I think most of the dysfunctions are real patterns, which can only be exacerbated by the currently distorted market for software developers and by CV-driven development. The last dysfunction, Inattention to results, is worth many books of its own on how to define those results, as employees at all levels are known to optimize around measurable goals to the detriment of, for example, long-term maintenance and quality.
So don't start a crusade armed with this little book, but definitely keep this model in your toolbox and share it with your team to see if you can all identify areas for collective improvement; it is painfully obvious to say that you can't work on this alone!
The author is certainly right when writing that groups of people truly working together can accomplish what any assembly of single individuals could never dream of doing.

Sunday, September 16, 2018

Eris 0.11.0 is out

Eris 0.11.0 has been freshly released. I'll list here the various contributions the project has received that are included in this new version and in the previous one, 0.10.0, which didn't get an associated blog post.

For a full list and links to the relevant pull requests and commits, see the ChangeLog.

0.10

  • The Eris\Facade class was introduced to allow usage outside of a PHPUnit context (see the sketch after this list).
  • Official PHPUnit 7 support was introduced.
  • A corner case in suchThat() was fixed.
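Here is a minimal sketch of standalone usage through the Facade, assuming it mirrors the forAll()/then() API of Eris\TestTrait (the generator and the assertion are just illustrative):

```php
<?php
// Standalone Eris usage, outside of PHPUnit, through the Facade.
require __DIR__ . '/vendor/autoload.php';

use Eris\Facade;
use Eris\Generator;

(new Facade())
    ->forAll(Generator\int())
    ->then(function ($integer) {
        // a plain assert() instead of PHPUnit assertions
        assert(is_int($integer), "{$integer} is not an integer");
    });
```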
There are some small backward compatibility breaks with respect to 0.9; they regard unused features (or so I thought), including Generator::contains().

0.11

  • Official PHP 7.2 support
  • Annotations support for configuring behavior that is usually configured through methods: @eris-method, @eris-shrink, @eris-ratio, @eris-repeat, @eris-duration
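A sketch of how the annotations can replace configuration methods on a single test; the annotation values and their formats here are my assumptions, not documented behavior:

```php
<?php
use Eris\Generator;
use Eris\TestTrait;
use PHPUnit\Framework\TestCase;

class SumTest extends TestCase
{
    use TestTrait;

    /**
     * @eris-repeat 50
     * @eris-duration 2s
     */
    public function testSumIsCommutative()
    {
        $this->forAll(Generator\int(), Generator\int())
            ->then(function ($a, $b) {
                $this->assertSame($a + $b, $b + $a);
            });
    }
}
```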

Some acknowledgements

Most of this work comes from contributions, not from me. I'd like to say a word of thanks to the people who have taken the time to use Eris in their projects and also to feed back a fix, an extension, or a substantial improvement.

Sunday, January 21, 2018

Book review: Production-ready microservices

https://www.amazon.co.uk/Production-Ready-Microservices-Standardized-Engineering-Organization/dp/1491965975
Production-Ready Microservices is a short book about consistently practicing architecture and design over a fleet of microservices.
In general, I think the principles described here apply very much to any service-oriented initiative, even more so if the services are coarse-grained and hence require more maintenance than finely isolated ones.

Uber

The book extrapolates from the author's experience at Uber "standardizing over a thousand microservices". Given a few developers for each microservice team, that makes up 2000-3000 engineers out of the total >10000 Uber employees (I wonder how many are lawyers). After WhatsApp's famous story of being acquired with 55 employees in total, that really highlights the difficulty of running a business and operations all over the physical world (sending cars and drivers around in dozens of countries) compared to a digital-only enterprise. We should remember this, and many other directions of change, the next time we hear a technology advocate saying how much the cost of their 2-person startup has been reduced by $technology.

The main message

You should be this tall to use microservices: this architecture doesn't necessarily fit every context, although integrating separate services of some size is becoming a standard after the API revolution (before that, the standard was integrating through the database, which is arguably worse).
You will encounter many different social and technical problems, such as:
  • Inverse Conway's Law, with the shape of the products defining the shape of the company. I have found this doesn't really apply at smaller scales, as development teams can own more than one service and experience a successful decoupling between people and code.
  • Technical sprawl, where multiple languages, databases and other key choices spread without consistent, central planning.
  • More ways to fail: distributed and concurrent systems are more difficult to work with and to reason about, and the sheer number of servers, containers or applications will simply multiply the failures you'll see.
There are lots of non-functional requirements, like scaling each microservice and isolating it from the rest of the fleet; perhaps don't go too micro if you don't have the resources to ensure an acceptable level in each service. Perhaps in your context the acceptable SLA for some particular service is low, because it doesn't change often, is only internally facing, or is only used a few times per day.
One particular aspect of the "consistency is important" lesson is that the whole lifecycle of services should be considered. Maintenance and even decommissioning are as important as producing new MVPs, but I've seen many times services being neglected, or being considered very easy to migrate away from once some new shiny substitute was available. In reality, it takes time and effort to keep services up and running, and to finally kill them when you have an alternative, as data and users are slowly migrated from the old to the new platform.
Lots of requirements are also overlooked but often turn out to be important as you increase your population of services: the scalability of a single endpoint, fault tolerance, even documentation (ADRs are the only form I trust very much right now in a fast-moving organization). Every single section of this book will make you think about one of these concerns, but won't give much of an overview: you're better served by reading the SRE book, for example.

Value for money

This book is a short read which gives you an overview of the microservices challenges you're likely to face down that rabbit hole; in particular, it focuses on a medium-to-large organization context. I'm not sure this book is worth the price tag, however: 20 pounds for a Kindle edition of ~170 pages, where ~25 pages are glossary, index and lots of checklists.

Wednesday, January 03, 2018

Book review: Algorithms to live by

https://www.amazon.com/Algorithms-Live-Computer-Science-Decisions/dp/1627790365
Algorithms to Live By: The Computer Science of Human Decisions is a book that puts together the domains of computer science and real life. The range of topics it touches is wide: the book treats deterministic algorithms such as optimal sorting, but then moves on to more context-dependent strategies for caching and scheduling. The last chapter even gets to model identification, (tractable and intractable-made-tractable) optimization problems, stochastic algorithms and game theory.

All the while, computer science concepts are compared to conscious and unconscious human processes. For example, caching and the memory hierarchy have great parallels with how the human brain recollects memories of recent events, and how we can augment our brain with external, slower supports like paper. Scheduling is useful not only to allocate processes on CPU cores, but also to make an explicit choice of strategy when prioritizing the tasks that you or your team face. This goes up to the more extreme examples of game theory and mechanism design, where the incentive system becomes more important than the individual agents ("manage the system, not the people" rings a bell?)
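To make the caching parallel concrete, here is a toy Least Recently Used cache, my sketch rather than anything from the book: the item untouched for the longest time is the one evicted, much like the memory that fades first.

```php
<?php
// A toy LRU cache: PHP arrays preserve insertion order, so we keep
// entries ordered from least to most recently used.
final class LruCache
{
    private $capacity;
    private $items = [];

    public function __construct(int $capacity)
    {
        $this->capacity = $capacity;
    }

    public function get(string $key)
    {
        if (!array_key_exists($key, $this->items)) {
            return null;
        }
        $value = $this->items[$key];
        unset($this->items[$key]);
        $this->items[$key] = $value; // move to the most recent position
        return $value;
    }

    public function put(string $key, $value)
    {
        unset($this->items[$key]);
        $this->items[$key] = $value;
        if (count($this->items) > $this->capacity) {
            array_shift($this->items); // evict the least recently used
        }
    }
}
```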

If you like viewing the world through the lens of algorithms and seeing how the strategies of humans and computers compare with each other, I would strongly recommend this book: it makes for an entertaining read and offers some principles to take away for real-life usage (I hope sorting socks will be easier now). Skip it if you already have a very wide knowledge of computer science, operations research, Nash equilibria... but even though I was familiar with the technical part, I was missing the connection to different domains and to everyday, real-world problems. I listened to the audiobook version, which lasts about 12 hours. You may find it easier to skim through some chapters if you are more (or less) interested in certain topics. The problem with audiobooks is that I can't easily take notes, while highlighting on an e-book reader is quick and lets me collect all the important gotchas later into a text file.

New role: Software Engineer in Tools and Infrastructure

After working on eLife's testing and deployment infrastructure in 2016, in the last year my responsibilities in the technical team have shifted towards the domain of engineering productivity. Testing is one phase of the development process that is often a bottleneck, but there are many more areas, like code reviews, monitoring and the infrastructure itself (be it servers or services):
In summary, the work done by the SETs naturally progressed from supporting only product testing efforts to include supporting product development efforts as well. Their role now encompassed a much broader Engineering Productivity agenda. -- Ari Shamash on the Google Testing Blog
Moreover, the team starts from a high level of coverage and design on many projects, to the point that my focus has always been on the provisioning and automation of testing environments, and on large-scale end2end testing.

What seems like just a letter added to a job title (from SET to SETI) is in fact an alignment of responsibilities, so that I am not accidentally mistaken for "the QA guy" but always seen as a problem solver instead.
https://en.wikipedia.org/wiki/Pulp_Fiction
Solving problems and propagating the solution, so that you don't have to solve them over and over again
Roles are always an approximation in a team of generalizing specialists, which also distributes and collaborates on some responsibilities such as architecture. But it's helpful in a cross-functional team to have someone dedicated to the task of productivity, whether it is reached through automation, tooling, or continuous improvement.

Thursday, November 30, 2017

Book review: Building Microservices

http://shop.oreilly.com/product/0636920033158.do
Building Microservices: Designing Fine-Grained Systems by Sam Newman is the seminal book on microservices as a concept. It was published at the start of 2015 (that's a long time ago in tech... or is it?), but it's focused on high-level topics rather than implementation and hence has aged well.

There are in fact several concepts, both at the methodology and at the technical level, that the book does justice to. Here's what turned on the light bulb over my head.

Modeling

Modeling is important, and a whiteboard discussion can save weeks of implementation down the line; neither Sam Newman nor I am the first to say this. Modeling in microservices, like in Domain-Driven Design, is all built around business capabilities and the shape of your organization (yes, Conway).

Styles of integration

Like other design choices, especially at the architectural level, it's important to explicitly choose whether to go for a shared database (please don't) or synchronous or asynchronous communication; orchestration through a Facade or choreography distributing responsibilities between services; explicit versioning and the kinds of backward compatibility; pushing or pulling data from one physical location to another, and with which granularity of time and entity.
There are also styles of isolation, not just of integration: code reuse is maturely described on a trade-off scale with decoupling. It feels like a pattern book in which these options are given a standard name for further discussion, and evaluated with respect to the contexts in which they work well.

Deployment

Should you go for virtual machines or containers? How do you map services to physical or virtual machines? The book couldn't possibly keep up with the rise of container orchestrators in the last couple of years, so it won't be a complete guide, but it can give you a sense of the problems that virtual machines create and that we are going to solve in this next generation. What the problems created by containers will be, and most of all how to solve them, is instead out of the scope of this book.

Testing

After basics like the testing pyramid, I don't find myself in complete agreement with the large-scale testing strategies proposed here, like consumer-driven contract testing. Yes, it works well enough if you can specify a formal contract that a service should adhere to, and test it in isolation in the implementing service. But the overhead of doing so, in a context in which we are supposed to create dozens if not hundreds of services, is very significant.
At eLife we have relied on a wide RESTful API specification, with each group of endpoints implemented by a different service. As such, the overhead is limited, and the specification is just a description for validating requests and responses rather than a full contract, as most of these services are read-only.
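As an illustration of this validate-rather-than-contract approach, here is a sketch using the justinrainbow/json-schema library; the endpoint URL and the schema path are hypothetical, and eLife's actual tooling differs:

```php
<?php
// Validate a live service response against a shared API specification.
require __DIR__ . '/vendor/autoload.php';

use JsonSchema\Validator;

$response = json_decode(file_get_contents('https://api.example.org/articles/1'));

$validator = new Validator();
$validator->validate(
    $response,
    (object) ['$ref' => 'file://' . realpath('api-spec/article.json')]
);

if (!$validator->isValid()) {
    foreach ($validator->getErrors() as $error) {
        printf("[%s] %s\n", $error['property'], $error['message']);
    }
}
```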
All in all, I find myself relying on the end2end beast to get heterogeneous services, written in multiple languages, at different times by different people, to talk together reliably. I would spend a lot of energy trying to square the circle of contracts, but I suppose they work well at a higher scale of traffic or on selected services.
The end2end tests we use are limited in their scope, constrained by being at the top of the pyramid; they do not necessarily cover a full end2end scenario but rather a data path involving more than one service, often skipping the user interface. It helps that we have no Selenium-based testing in the end2end layer, as the user interface is fully accessible to an HTML parser and requires no JavaScript.
The problems that we encounter daily happen in production all the same: timeouts, dirty data, the automation challenge of turning new nodes on and off reliably, the race conditions that come from distributed execution. I'd rather not hide these problems but solve them, and I'm looking at containers to try to shrink the big picture and get a simpler end2end environment, easier to spin up and down, or to provide with clean databases.

Hidden gems

There are several hidden gems where you get to pick the author's brain on a common problem cited in a chapter. For example, we have such a problem in integrating with a CRM that doesn't even support PHP 7 (what is it with CRMs always being sources of technical debt?) There are some example patterns that you could apply in that situation, like hiding the CRM behind a specialized service that cleans up its API (see the sketch below). Nothing miraculous, but a glimmer of hope for these desperate situations.
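A sketch of that pattern: a thin adapter speaks the legacy CRM's dialect on one side and exposes a clean contract on the other. All names here are hypothetical:

```php
<?php
// An anti-corruption layer around a hypothetical legacy CRM client.
interface Crm
{
    public function findCustomerByEmail(string $email): ?array;
}

final class LegacyCrmAdapter implements Crm
{
    private $client;

    public function __construct(LegacySoapClient $client) // hypothetical client class
    {
        $this->client = $client;
    }

    public function findCustomerByEmail(string $email): ?array
    {
        // translate the legacy API's quirks into a clean contract
        $response = $this->client->call('GetCustV2', ['EMAIL' => strtoupper($email)]);
        if ($response['STATUS'] !== 'OK') {
            return null;
        }
        return [
            'email' => strtolower($response['EMAIL']),
            'name'  => trim($response['FULLNAME']),
        ];
    }
}
```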

Conclusions

If you are going to work with microservices, or milliservices, or small enough services, this book is worth a read. If you are having trouble with only a particular area, such as testing or security, going through a single chapter will give you the big picture before you go in depth with further sources.
Remember that this book is starting to become a bit dated, so you cannot take highly technical lessons from it (and I doubt books are a great tool for those in general, in this fast-moving environment). Think about your context, learn the theory, and fill in the parts where the map is blank (or erased) with what you are learning from the web in 2017 (soon to be 2018; an attempt to make this review valid for one more year).

Monday, April 03, 2017

Pipeline Conf 2017

Not liquorice
Post originally shared on eLife's internal blog, but in the spirit of (green) open access here it is.

Last month I attended the 2017 edition of the PIPELINE conference in London. This event is a not-for-profit day dedicated to Continuous Delivery: the ability to get software changes into the hands of users safely, quickly, and in a sustainable way. It is run by practitioners for practitioners, coming from different sides of the spectrum such as development, operations, testing, project management, and coaching.

The day is run with parallel tracks, divided into time slots of 40-minute talks and breaks for discussions and, of course, some sponsor pitches. I picked talks from the various tracks depending on their utility to eLife's testing and deployment platform, since our tech team has been developing every new project with this approach for a good part of 2016.

The conceptual model of Continuous Delivery and of eLife's implementation of it is not dissimilar to the scientific publishing process:
  • there is some work performed in an isolated environment, such as a laboratory, but also someone's laptop;
  • which leads to a transferable piece of knowledge, such as a manuscript, but also a series of commits, roughly speaking some lines of code;
  • which is then submitted and peer reviewed. We do so through pull requests, which run a series of automated tests to aid the human reviewers inside the team; part of the review is also running the code to reproduce the same results on a machine which is not that original laptop;
  • after zero or more rounds of revisions, this work gets accepted and published...
  • which means integrating it with the rest of human knowledge, typesetting it, organizing citations and lots of metadata about the newly published paper. In software, the code has to be transformed into an efficient representation, or virtual machines have to be configured to work with it;
  • until, finally, this new knowledge (or feature) is in the hands of a real person, who can read a paper or enjoy the new search functionality.
Forgive me for the raw description of scientific work.

In software, Continuous Delivery tries to automate and simplify this process to be able to perform it on microchanges multiple times per day. It aims for speed, to be able to bring a new feature live in tens of minutes; it aims for safety, to avoid breaking the users' work with new changes; and it does all of this in a sustainable way, without sacrificing tomorrow's ability to evolve for a quick gain today.

Even without the last mile of real user traffic, the 2.0 software services have been running on production or production-like servers from the first weeks of their development. A common anti-pattern in software development is to say "It works on my machine" (imagine someone saying "It reproduces the results, but only with my microscope"); what we strive for is "It works on multiple machines, which can be reliably recreated; if we break a feature we know within minutes and can go back to the latest version known to work."

Dan North: opening keynote

Dan North started to experiment with Continuous Delivery in 2004, at a time when builds took two and a half days to run in a testing environment contended by multiple teams. He spoke about several concepts underpinning Continuous Delivery:
  • conceptual consistency: the ability of different people to make similar decisions without coordination. It's a holy grail for scaling the efforts of an organization to more and more members and teams.
  • supportability: championing Mean Time To Repair over Mean Time Between Failures. The three important questions when facing a problem are: what happened? Who is impacted? How do we fix it?
  • operability: what does it feel like to build your software? To deploy it? To test it? To release it? To monitor it? To support it? Essentially, developer experience in addition to user experience.
Operability is a challenge we have to face more and more ourselves as we move from running our own platform to providing open source software for other people to use. Not only does reading an article have to be a beautiful experience; publishing one should be too.

John Clapham: team design for Continuous Delivery

This talk was more people-oriented; I agree with the speaker that the engagement of workers is what really drives profits (or value, in the case of non-profits).
Practically speaking:
  • reward the right behaviors to promote the process you want;
  • ignore your job title as everyone's job is to deliver value together;
  • think small: it's easier to do 100 things 1% better than to do 1 thing 100% better (aka aggregation of marginal gains)

Abraham Marin: architectural patterns for a more efficient pipeline

The target for a build is to take less than 10 minutes. The speaker argues that the fastest build is the one you don't have to run, introducing a series of patterns (and the related architectural refactorings) that can be executed safely to simplify your software components (see the sketch after this list):
  • decoupling an API from its implementation: extracting an interface package reduces a dependency on a component to a dependency on an interface;
  • dividing responsibilities vertically or horizontally, trying to isolate the most frequent changes and minimize cross-cutting requirements;
  • transforming a library into a service;
  • transforming configuration into a service.
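A sketch of the first refactoring, with names of my own invention: consumers depend on a small, rarely-changing interface package rather than on the implementation, so changes to the implementation stop triggering their builds.

```php
<?php
// Published as its own, rarely-changing "interface package".
interface ArticleStore
{
    public function fetch(string $id): array;
}

// Lives with the service implementation and can change freely
// without forcing consumers of the interface to rebuild.
final class DatabaseArticleStore implements ArticleStore
{
    public function fetch(string $id): array
    {
        // ...query the database here...
        return ['id' => $id];
    }
}

// Consumers type-hint only against the interface.
function renderArticle(ArticleStore $store, string $id): string
{
    return json_encode($store->fetch($id));
}
```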
Some of these lessons are somewhat oriented to compiled languages, but not limited to them. My feeling is that even if you reduce compile times, you still have to test some components in integration, which is a large source of delay.

Steve Smith: measuring Continuous Delivery

How do you know whether a Continuous Delivery effort is going well? Or, more pragmatically, which of your projects is in trouble?
The abstract parameters to measure in pipelines are speed (throughput, cycle time) and stability. Each takes a different form depending on the context.
In deployment pipelines that go from a commit to a new version released in production, you can measure lead time and the interval between new deployments. Failure rate (how many runs fail) and failure recovery time are also interesting. In more general builds or test suites, execution time is a key parameter, but a more holistic view includes the interval (how frequently builds are executed).
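A sketch of how two of these metrics can be computed from deployment records; the data shape is made up for the example:

```php
<?php
// Each record tracks when a commit was made and when it reached production.
$deployments = [
    ['committed_at' => 1490000000, 'deployed_at' => 1490002400],
    ['committed_at' => 1490010000, 'deployed_at' => 1490011800],
    ['committed_at' => 1490050000, 'deployed_at' => 1490051200],
];

// Lead time: commit to production, per deployment.
$leadTimes = array_map(
    function ($d) { return $d['deployed_at'] - $d['committed_at']; },
    $deployments
);
sort($leadTimes);
$medianLeadTime = $leadTimes[intdiv(count($leadTimes), 2)];

// Deployment interval: time between consecutive deployments.
$deployTimes = array_column($deployments, 'deployed_at');
sort($deployTimes);
$intervals = [];
for ($i = 1; $i < count($deployTimes); $i++) {
    $intervals[] = $deployTimes[$i] - $deployTimes[$i - 1];
}

printf("Median lead time: %d seconds\n", $medianLeadTime);
printf("Mean deployment interval: %d seconds\n", array_sum($intervals) / count($intervals));
```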
I liked some of these metrics so much that they are now in my OKRs for the new quarter. Simplistic quote: you can't manage what you can't measure.

Alastair Smith: Test-driving your database

To continuously deploy new software versions, you need an iterative approach to evolving your database and the data within it. As you evolve, you also have to test every new schema change. Even in the context of stored procedures for maximum efficiency (and lock-in), Alastair showed how to write tests that can run reliably in multiple environments.

Rachel Laycock: closing keynote, Continuous Delivery at Scale

Rachel Laycock is the Head of Technology for North America at ThoughtWorks, the main sponsor of the conference. The keynote, however, had nothing to do with sales pitches. Here are some anti-patterns:
  • "We have a DevOps team" is an oxymoron, as that kind of team doesn't exist; what often happens is that the Ops team gets renamed.
  • "Do we choose Kubernetes or Mesos?", as in getting excited about the technology before you understand the problem to solve.
The "at scale" in the title pushes for seeing automation as a way to build a self-serving platform, where infrastructure people are not bottlenecks but enablers for the developers to build their own services.
The best quote however really was "yesterday's best practice becomes tomorrow's anti-pattern". What we look for is not to be the first to market but to have an adaptable advantage, a product that can evolve to meet new demands rather than being a dead end.

Tuesday, March 28, 2017

Book review: Site Reliability Engineering

http://shop.oreilly.com/product/0636920041528.do

The overwhelming majority of a software system’s lifespan is spent in use, not in design or implementation. So, why does conventional wisdom insist that software engineers focus primarily on the design and development of large-scale computing systems? -- the book tagline
Site Reliability Engineering - How Google runs production systems is a 2016 book about the ops side of Google services, and the set of principles and practices that underlie it.

A Site Reliability Engineer is a software engineer (in the developer sense) who designs and implements systems that automate what would otherwise be done manually by system administrators. As such, SREs have a directive to spend a minimum of 50% of their time on development rather than on firefighting and maintenance of existing servers (named toil in the book).

The book really is a collection of chapters, so you don't have to be scared by its size: you don't necessarily need to read it cover to cover. You can instead zoom in on the interesting chapters, be they about monitoring, alerting, outage tracking or even management practices.

Principles, not just tools

I like books that reify concepts and principles and give them names, so that we can talk about those concepts and refer to them. This book gives precise definitions for the Google version of measures such as availability, Service Level Objectives and error budgets.
This is abstraction at work: even if the examples show Google-specific tools like Borgmon or Outalator, the solutions are described at a higher abstraction level that makes them reusable. When load balancing practices are made generic enough to satisfy all of Google's services, you can bet that they are reusable enough to be somewhat applicable to your situation.
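The error budget arithmetic is simple enough to sketch: an availability SLO directly implies how much downtime you are allowed to spend in a month.

```php
<?php
// Given a monthly availability SLO, how much downtime can you "spend"?
function errorBudgetMinutes(float $slo, int $daysInMonth = 30): float
{
    $totalMinutes = $daysInMonth * 24 * 60;
    return $totalMinutes * (1 - $slo);
}

printf("99.9%%  SLO: %.1f minutes of downtime per month\n", errorBudgetMinutes(0.999));  // 43.2
printf("99.99%% SLO: %.1f minutes of downtime per month\n", errorBudgetMinutes(0.9999)); // 4.3
```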

Caveat emptor

Chances are that you're not Google: the size of your software development effort is orders of magnitude smaller, and even when it is of a comparable size, it's not easy to turn an established organization into Google (and probably neither desirable nor necessary).
However, you can see that Google and the other giants like Facebook and Amazon are shaping our industry through their economically irresistible software and services. Angular vs React is actually Google vs Facebook; containers vs serverless is actually Kubernetes vs Lambda, which is actually Google vs Amazon. The NoSQL revolution was probably started by the BigTable and Dynamo papers... and so on: when you deployed your first Docker container, Google engineers had already been using similar technology for 10 years; as such, they can teach you a lot at relatively little cost through the pages of a book. And it's better to be informed on what may come to a cloud provider near you in the next few years.

Conclusions

It took some time to get through this book, but it paints a realistic picture of running systems that undergo a large scale of traffic and change at the same time. Besides the lessons you can directly take from it, I would recommend it to many system administrators and "devops guys" as a way to think more clearly about the forces and solutions at play in their datacenters (or, more likely, virtual private clouds).

Sunday, March 12, 2017

Eris 0.9.0 is out

In 2016 I moved to another country, and as a result of this change I didn't have much time to develop Eris further. Thankfully Eris 0.8 was already pretty much stable, and in this last period I could pick up development again.

What's new?

The ChangeLog for 0.9 contains one big new feature, multiple shrinking. While minimization of failing test cases is usually performed with a single linear search, multiple shrinking offers a series of different options for shrinking a value.
For example, the integer 1234 used to be shrunk to 1233, 1232, 1231 and so on. With multiple shrinking, there is instead a series of options to explore that makes the search logarithmic, such as 617, 925, 1080, 1157, 1195, 1214, 1224, 1229, and 1231. If the simplest failing value is below 617, for example, at least (1234-617) runs of the test will be skipped by this optimization, just in the first step.
This feature is the equivalent of QuickCheck's (and other property-based testing libraries') Rose Trees, but implemented here with an object-oriented approach that makes use of `GeneratedValueSingle` and `GeneratedValueOptions` as part of a Composite pattern.
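Here's a sketch of the underlying idea, not Eris's actual internals: each shrinking step proposes several candidates at geometrically decreasing distances from the current value, instead of a single decrement.

```php
<?php
// Candidate values for shrinking $value towards $target: start from
// the target itself and halve the remaining distance at each step.
function shrinkCandidates(int $value, int $target = 0): array
{
    $candidates = [];
    $distance = $value - $target;
    while ($distance > 0) {
        $candidates[] = $value - $distance;
        $distance = intdiv($distance, 2);
    }
    return $candidates;
}

// shrinkCandidates(1234) yields 0, 617, 926, 1080, 1157, ..., 1233:
// the search can jump straight to the smallest candidate that still fails.
var_dump(shrinkCandidates(1234));
```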

This release also features support for the latest versions of basic dependencies:
  • PHPUnit 6.x is now supported
  • PHP 7.1 is officially supported (I expect there were mostly no issues in previous releases, but now the test suite fully passes.)
Several small bugs were fixed as part of feedback from projects using Eris:
  • the pos() and neg() generators should not shrink to 0.
  • float generation should never divide by 0.
  • shrinking of dates fell into a case of wrong operator precedence.
  • reproducible PHPUnit commands were not escaped correctly in presence of namespaced classes.
A few backward compatibility fixes were necessary to make room for new features:
  • minimumEvaluationRatio is now a method to be called, not a private field.
  • GeneratedValue is now an interface and not a class. This is supposed to be an internal value: project code should never depend on it, and should build custom generators with map() and other composite generators (see the sketch after this list) rather than implementing the Generator interface, which is much more complex.
  • the Listener::endPropertyVerification() method now takes the additional parameters $iterations and the optional $exception. When creating listeners, you should always subclass EmptyListener so that you don't have to implement the methods you don't need, whose empty implementations will be inherited.
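For example, here is a custom generator of even integers built with map(), with no need to touch the Generator interface (a sketch; the property itself is trivial on purpose):

```php
<?php
use Eris\Generator;
use Eris\TestTrait;
use PHPUnit\Framework\TestCase;

class EvenNumbersTest extends TestCase
{
    use TestTrait;

    public function testEvenNumbersAreDivisibleByTwo()
    {
        $this->forAll(
            // derive an "even integer" generator from int() via map()
            Generator\map(
                function ($n) {
                    return $n * 2;
                },
                Generator\int()
            )
        )->then(function ($even) {
            $this->assertSame(0, $even % 2);
        });
    }
}
```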

What's next?

My Trello board says:
  • still decoupling from PHPUnit, for usage in scripts, mainly as a programmable source of randomness.
  • more advanced Generators for finite state machines and in general a more stateful approach, for testing stateful systems.
  • faster feedback for developers, like having the option to run fewer test cases in a development environment but the full set in Continuous Integration.
I'm considering opening up the Trello board for public read-only visibility, as there's nothing sensitive in there, but there is potential value in transparency and in feedback from random people encountering the project for the first time.

As always, if you feel there is a glaring feature missing in Eris, feel free to request it in the project's GitHub issues.

Sunday, February 12, 2017

Book review: Fifty quick ideas to improve your tests

https://leanpub.com/50quickideas-tests
Fifty quick ideas to improve your tests is, well, a series of fifty quick ideas that you can apply to some of your automated test suites to improve their value or lower their creation or maintenance costs.

These ideas are pattern-like in that they are mostly self-contained and often independent from each other. They are distilled from real-world scenarios that the authors (David Evans, Tom Roden and Gojko Adzic) have encountered in their work.

This format helps readability a lot, as ideas are organized into themes, giving you the ability to focus on the area you want to improve and to quickly skip the ideas that don't make sense in your context or that you find impractical or not worth the effort. For the same reasons I enjoyed the book this one is a sequel to, Fifty quick ideas to improve your user stories. Moreover, both were published on Leanpub, so you have the ability to adjust the price to the value you think you'll get out of them; and despite Leanpub's large collection of unfinished books, this one is 100% complete and ready to read, without the hassle of having to update to a new version later (who really does that?)

Some selected quotes follow, highlighted phrases mine.
Something one person considers critical might not even register on the scale of importance for someone from a different group. 
Drawing parallels between the different levels of needs, we can create a pyramid of software quality levels: Does it work at all? What are the key features, key technical qualities? Does it work well? What are the key performance, security, scalability aspects? Is it usable? What are the key usability scenarios? Is it useful? What production metrics will show that it is used in real work? Is it successful? 
In order to paint the big picture quickly, we often kick things off with a ten-minute session on identifying things that should always happen or that should never be allowed. This helps to set the stage for more interesting questions quickly, because absolute statements such as ‘should always’ and ‘should never’ urge people to come up with exceptions. 
Finally, when an aspect of quality is quantified, teams can better evaluate the cost and difficulty of measuring. For example, we quantified a key usability scenario for MindMup as ‘Novice users will be able to create and share simple mind maps in under five minutes’. Once the definition was that clear, it turned out not to be so impossible or expensive to measure it.
Avoid checklists that are used to tick items off as people work (Gawande calls those Read-Do lists). Instead, aim to create lists that allow people to work, then pause and review to see if they missed anything (‘Do-Confirm’ in Gawande’s terminology). 
A major problem causing overly complex examples is the misunderstanding that testing can somehow be completely replaced by a set of carefully chosen examples. For most situations we’ve seen, this is a false premise. Checking examples can be a good start, but there are still plenty of other types of tests that are useful to do. Don’t aim to fully replace testing with examples in user stories – aim to create a good shared understanding, and give people the context to do a good job. 
Waiting for an event instead of waiting for a period of time is the preferred way of testing asynchronous systems
The sequence is important: ‘Given’ comes before ‘When’, and ‘When’ comes before ‘Then’. Those clauses should not be mixed. All parameters should be specified with ‘Given’ clauses, the action under test should be specified with the ‘When’ clause, and all expected outcomes should be listed with ‘Then’ clauses. Each scenario should ideally have only one ‘When’ clause that clearly points to the purpose of the test.
Difficult testing is a symptom, not a problem. When it is difficult for a team to know if they have a complete picture during testing, then it will also be difficult for it to know if they have a complete picture during development, or during a discussion on requirements. It’s unfortunate that this complexity sometimes clearly shows for the first time during testing, but the cause of the problem is somewhere else. 
Although it’s intuitive to think about writing documents from top to bottom, with tests it is actually better to start from the bottom. Write the outputs, the assertions and the checks first. Then try to explain how to get to those outputs. [...] Starting from the outputs makes it highly unlikely that a test will try to check many different things at once, 
Technical testing normally requires the use of technical concepts, such as nested structures, recursive pointers and unique identifiers. Such things can be easily described in programming languages, but are not easy to put into the kind of form that non-technical testing tools require. 
For each test, ask who needs to resolve a potential failure in the future. A failing test might signal a bug (test is right, implementation is wrong), or it might be an unforeseen impact (implementation is right, test is no longer right). If all the people who need to make the decision work with programming language tools, the test goes in the technical group. If it would not be a technical but a business domain decision, it goes into the business group. 
Manual tests suffer from the problem of capacity. Compared to a machine, a person can do very little in the same amount of time. This is why manual tests tend to optimise human time [...] Since automated tests are designed for unattended execution, it’s critically important that failures can be investigated quickly. [...] To save time in execution, it’s common for a single manual test to check lots of different things or to address several risks. 
Whenever a test needs to access external resources, in particular if they are created asynchronously or transferred across networks, ensure that the resources are hidden until they are fully complete. 
Time-based waiting [sleep() instead of polling or waiting for an event in tests] is the equivalent of going out on the street and waiting for half an hour in the rain upon receiving a thirty-minute delivery estimate for pizza, only to discover that the pizza guy came along a different road and dropped it off 10 minutes ago. 
Instead of just accepting that something is difficult to test and ignoring it, investigate whether you can measure it in the production environment. 
Test coverage is a negative metric: it measures how bad something is, not how good it is. 
There are just two moments when an automated test provides useful information: the first time it passes and when it subsequently fails. 
It’s far better to optimise tests for reading than for writing. Spending half an hour more writing a test will save days of investigation later on. [...] In business-oriented tests, if you need to compromise either ease of maintenance or readability, keep readability. 
