My earlier article on product delivery recommended the core strategy of implementing Continuous Delivery as described by Dave Farley. To recap, Continuous Delivery rests on three legs, all equally important for success. This article will take a closer look at the Deployment Pipeline.
To illustrate the power of the Deployment Pipeline, we start “simple”.
By “simple,” I here mean purely digital products (though these need to be executed in some physical environment). In a future article, I’ll add the physical (mechanics, electronics, procurement, production, etc.) dimension to the product equation.
As stated in my article on product delivery, the purpose of the Deployment Pipeline is to provide answers, as fast as possible, to the questions:
- Is my code technically correct?
- Is the system releasable?
See also Dave Farley’s book Continuous Delivery Pipelines, and his video below.
The keyword here is “fast”. We want feedback loops that are as short as possible, and a potentially deployable release candidate as frequently as possible, because
quality needs speed, and speed needs quality.
When to release to customers is a business decision.
Some key Deployment Pipeline characteristics
Inspired by, e.g., Continuous Delivery and Continuous Delivery Pipelines, here are some key characteristics.
| Characteristic | Explanation |
| --- | --- |
| The Deployment Pipeline defines releasability and is the only route to production – normal feature releases, patch releases, long-term-support releases. The only route, no exceptions! | When the work of the Deployment Pipeline is complete, we know that the software is sufficiently fast, scalable, secure, and resilient, and that it does what our customers want it to do. |
| The Deployment Pipeline is a falsification mechanism. | It is not a tool to prove that our software is good. Rather, it is a mechanism based on the scientific principle of challenging our hypotheses. If even one test fails, we know our code is not good enough and is therefore not fit for production. |
| We include in the Deployment Pipeline all tests that are necessary to determine the releasability of our software. | When all automatic tests (plus potentially some manual exploratory tests) pass, we know that the release candidate is good and safe to release to our customers. |
| We automate everything we can in the Deployment Pipeline so that development activities are repeatable, reliable, and carried out efficiently, often in parallel, with consistent results. Automation is not just for the code and the tests; it also extends to environments and infrastructure. | We do not rely on people manually following a (maybe valid) recipe to configure a piece of environment – that is error-prone and slow. We want to use, e.g., “golden images” and infrastructure as code. Only when presented with indisputable arguments will we accept manual intervention; otherwise, no manual processes. |
| We take version control very seriously and apply it to everything: code, dependencies, test cases, test data, test results, configurations, infrastructure … everything. Our aim is to ensure that every bit and byte we deploy into production is the one we intend. | For every release candidate, we know exactly which versions of code, test cases, test data, configuration, infrastructure, etc. constitute that consistent release candidate – exactly! |
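To make the fail-fast “falsification mechanism” concrete, here is a minimal sketch in Python. The stage names and the `run_pipeline` helper are illustrative assumptions, not from the article or any particular CI tool; in a real pipeline each check would invoke a compiler, test runner, or scanner.

```python
from typing import Callable, List, Tuple

# A stage is a (name, check) pair; a check returns True on success.
# The names below are invented for the example.
Stage = Tuple[str, Callable[[], bool]]

def run_pipeline(stages: List[Stage]) -> bool:
    """Run stages in order; the first failure falsifies the candidate."""
    for name, check in stages:
        if not check():
            print(f"FALSIFIED at '{name}': not fit for production")
            return False
        print(f"passed: {name}")
    print("All stages passed: release candidate is deployable")
    return True

# Toy run: the security stage fails, so the candidate is rejected
# even though every earlier stage passed.
stages = [
    ("compile + unit tests", lambda: True),
    ("acceptance tests", lambda: True),
    ("security scan", lambda: False),
]
ok = run_pipeline(stages)  # ok is False: one failing test is enough
```

The point of the sketch is the control flow: there is no way to reach “deployable” while any single check disagrees.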
Version control and configuration management
To recap the points above on version control and configuration management: if you are not stringent about them, you can forget about harvesting the benefits of the Deployment Pipeline.
Thought experiment: We need to establish, exactly, which versions of code, third-party libraries, configurations, build systems, test cases, test data, infrastructure, etc. were used in the past to produce a specific release candidate. Can we do that, fast?
YES: Great, we’re doing configuration management as we should!
NO: Houston, we have a problem!
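One way to be able to answer “yes, fast” is to have the pipeline emit a release manifest: pinned versions plus content hashes of every tracked artifact. The sketch below is an assumption about how such a manifest could look, not a description of any specific tool; in a real pipeline, the version-control system and lockfiles would supply the values.

```python
import hashlib
import json
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash, so we can later verify the exact bytes used."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(release_id: str, tracked_files: list,
                   versions: dict) -> dict:
    """Assemble a release manifest: pinned versions plus content hashes
    of every tracked artifact (code, config, test data, IaC, ...)."""
    return {
        "release_id": release_id,
        "versions": versions,  # e.g. git SHA, toolchain, third-party pins
        "artifacts": {str(p): sha256_of(p) for p in tracked_files},
    }

# Toy usage with an invented config file and invented version values.
tmp = Path(tempfile.mkdtemp())
cfg = tmp / "app.cfg"
cfg.write_text("feature_x = on\n")
manifest = build_manifest(
    "rc-2024.10.1",
    [cfg],
    {"git_sha": "abc123", "build_image": "builder:1.42"},
)
print(json.dumps(manifest, indent=2))
```

Stored alongside each release candidate, such a manifest answers the thought experiment by lookup instead of by archaeology.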
Test strategy
One of the characteristics above is that the Deployment Pipeline runs automatic tests. Getting a test strategy right is a big topic. What is test-driven development? What is behavior-driven development? How do you avoid flaky tests? How do you manage test data? And so on.
I may cover it in a future article; here, this will be a quick overflight. The tradition is to use the test quadrant combined with the classic test pyramid.
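As a tiny illustration of the base of the classic pyramid – many small, fast, deterministic tests – here is a plain Python unit test in the TDD style. The `discount` function and its tests are invented for the example and run without any test framework.

```python
# Unit level of the classic test pyramid: one small piece of logic,
# tested fast and deterministically. discount() is an invented example.

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount; reject nonsensical percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_percentage():
    assert discount(100.0, 25) == 75.0

def test_discount_rejects_invalid_percent():
    try:
        discount(100.0, 150)
    except ValueError:
        pass  # expected: the invalid input is refused
    else:
        raise AssertionError("expected ValueError")

# Run the tests directly (a runner like pytest would discover them).
test_discount_applies_percentage()
test_discount_rejects_invalid_percent()
```

Tests at this level run in milliseconds, which is what lets the commit stage of the pipeline give feedback in minutes rather than hours.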
In October 2024, Dave Farley released this video with an updated view on software testing, challenging the classic test quadrant and test pyramid. It is great stuff and worth investing 18 minutes!
Dave Farley provides us with a treasure chest of insights in his channel. I highly recommend it.
Cybersecurity (CS)
You may have heard the notion of “shift security left” or the more general term “shift quality left”. We must ensure that quality (including cybersecurity) is a natural and integral part of developing software solutions. CS must not come as an afterthought, and it must not be a parallel development track with late integration. I’ve experienced both, and both lead to poor software quality.
There is no doubt that, e.g., IEC 62443 for OT (Operational Technology) is complicated, but that is not an argument for separating CS development from the rest of the solution development. We must shift left and ensure the CS aspects are fully integrated with the ongoing solution development. A future article might dive into IEC 62443.
Wrap-up
The world’s best Deployment Pipeline does not, by itself, place your organization among the best, but it helps – a lot – in moving in that direction! The two other legs of Continuous Delivery must also be strong.
Remember, product delivery is “only” one of the three dimensions of the product operating model.