In the previous post in this series, we discussed patterns for applying DevOps principles in a way that yields high-performance outcomes. In part two, we are going to discuss “The Second Way”. This second “way” is ultimately about amplifying and shortening feedback loops so that corrections can be made quickly and continuously. This is sometimes referred to as the right-to-left flow.
A defect is not a defect unless it hits the customer. Lean principles teach us that the earlier we can catch a potential downstream defect, the less it adds to the overall cost of service delivery. The three V’s therefore also apply in this “Second Way”. Velocity matters because a correction must move through the flow quickly. Variation also plays a role in the “Second Way”: the simpler the infrastructure in which a defect is identified, the less time it takes to detect. Lastly, software artifacts need to be visualized and bound to their source (e.g., source code, source code repository) in order to decrease the overall Lead Time of the service delivered.
Docker and the Second Way
Velocity
This is similar to velocity in the “First Way” in that it is about speed with direction. The important thing to remember about flow is that it does not always move in the same direction. There are interrupts in the flow due to defects and the changeover time those defects trigger. To be effective at this second way, you need velocity in both directions (i.e., the complete feedback loop). How fast can the changeover occur? How adaptive is the process, not only for quick defect detection, but for how fast the system can be reimplemented and rebased to the original baseline conditions? In Toyota Lean, there was something called the Andon Cord that was used to stop the line if a defect was detected in the production process. Even though the Andon Cord really did stop the line, it was also a metaphor for the strength of the process.
It embodied the idea that pulling the cord could actually fix the defect and make a difference. Under this premise, a line worker was more likely to stop the line, even for minor defects, because they knew the process for fixing the defect was streamlined. Docker’s streamlined packaging, provisioning, and immutable delivery of artifacts allow an organization to take advantage of shortened changeover times due to defects, and make it easier to stop the line when a defect is detected.
Variation
Here again the advantages of using Docker in the “Second Way” are similar to those discussed for the “First Way”. In this case, it is the complexity of the infrastructure in which the defect is detected that matters. A complex set of software artifacts at scale can be fragile. Software-based services can be made up of thousands of classes and libraries with many different integration points. A slight delivery variation (e.g., in how the full stack was built) can be just enough to trigger a defect that is very difficult to detect. Good service delivery hygiene mandates that all artifacts start as source in a version control system; however, rebuilding everything from source at every stage of the pipeline might introduce just enough variation to trigger defect variants. Delivering with Docker and using immutable artifacts throughout the pipeline reduces variation and, therefore, reduces the risk of defect variants later in the delivery pipeline.
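As a minimal sketch of this immutable-artifact approach (the image and tag names are hypothetical, and the commands assume access to a Docker daemon and a registry), the idea is to build the image once and then promote the exact same binary artifact through each pipeline stage by re-tagging it, never rebuilding from source:

```shell
# Build the artifact once, from source, at the start of the pipeline.
docker build -t myorg/myservice:1.0.0 .

# Promote the *same* image into the staging stage by re-tagging it,
# rather than rebuilding and reintroducing delivery variation.
docker tag myorg/myservice:1.0.0 myorg/myservice:staging
docker push myorg/myservice:staging

# After staging tests pass, promote the identical bits to production.
docker tag myorg/myservice:1.0.0 myorg/myservice:production
docker push myorg/myservice:production
```

Because every stage runs the identical image, a defect found in staging was present in the exact bits headed for production, and a fix can be verified against the same artifact.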
Visualization
One of the advantages of an immutable delivery process is that most of the artifacts are delivered throughout the pipeline as binaries. This allows a service delivery team to create metadata from the source that is maintained with the artifact and can be visualized at any stage of the pipeline. It is not uncommon to see developers embed the Git SHA hash of the Git commit for a particular section of code in the Docker image. Other techniques for including additional metadata about the software artifact in the Docker image are also possible. R.I.Pienaar has an interesting blog post on his devco.net site about embedding metadata inside every one of his Docker images, along with a couple of useful inspection scripts. Here is a list of some of the operational metadata R.I. includes in all of his container images:
- Where and when was it built and why
- What were its ancestor images
- How do I start, validate, monitor and update it
- What git repo is being built, what hash of that git repo was built
- What are all the tags this specific container is known as at time of build
- What’s the project name this belongs to
- The ability to attach arbitrary user-supplied rich metadata
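One way to embed this kind of metadata, sketched below as a minimal Dockerfile (the label names, base image, and values are illustrative, not R.I.Pienaar’s actual schema), is Docker’s `LABEL` instruction:

```dockerfile
FROM ubuntu:14.04

# In practice, a build script would template these values in before
# running `docker build`, e.g. substituting `git rev-parse HEAD` for
# the SHA. The label keys below are hypothetical examples.
LABEL com.example.git-repo="https://github.com/myorg/myservice" \
      com.example.git-sha="0f2a1c9" \
      com.example.project="myservice" \
      com.example.build-date="2015-05-01T12:00:00Z"
```

The labels travel with the binary image through every stage of the pipeline, so the metadata can never drift out of sync with the artifact it describes.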
All of this is another form of “Second Way” feedback loop. When troubleshooting a discovered or potential defect, visualizing the embedded metadata can shorten the time required to correct the defect and, therefore, reduce the overall Lead Time of the service being delivered.
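For example, metadata embedded as image labels can be read back at any stage of the pipeline with `docker inspect` (the image name and label key below are hypothetical, and the commands assume a Docker daemon with the image present):

```shell
# Dump all labels embedded in the image as JSON.
docker inspect --format '{{ json .Config.Labels }}' myorg/myservice:1.0.0

# Pull out a single value, e.g. the Git SHA the image was built from.
docker inspect --format '{{ index .Config.Labels "com.example.git-sha" }}' \
    myorg/myservice:1.0.0
```

A troubleshooter can go straight from a misbehaving container to the exact commit it was built from, without guessing which source produced the binary.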
Read Part 1 and Part 3 in this series
Learn More about Docker
- New to Docker? Try our 10 min online tutorial
- Share images, automate builds, and more with a free Docker Hub account
- Read the Docker 1.6 Release Notes
- Subscribe to Docker Weekly
- Register for upcoming Docker Online Meetups
- Attend upcoming Docker Meetups
- Register for DockerCon 2015
- Start contributing to Docker