The DevOps Roadmap: 7 Best Practices in CI/CD

Jan 22, 2021 | Engineering

The software industry is rapidly embracing DevOps, and it’s more important than ever to adopt CI/CD best practices to stay future-proof. Every organization has different use cases and requirements that dictate how it adopts DevOps, but a few fundamental principles are widely considered best practices. We will go through them in this post.

The list is not in any order of priority. Feel free to experiment with or adopt any or all of them!

Practice 1: Track Work Items

Use a system that lets you track the metrics and KPIs relevant to you. The list below gives an overview of metrics worth tracking:

  1. Deployment Frequency
  2. Change Volume
  3. Deployment Time
  4. Failed Deployment Rate
  5. Change Failure Rate
  6. Time to Detection
  7. Mean Time to Recovery
  8. Lead Time
  9. Defect Escape Rate
  10. Defect Volume
  11. Availability
  12. Service Level Agreement Compliance
  13. Unplanned Work
  14. Customer Ticket Volume
  15. Cycle Time

Tracking these metrics lets you understand your progress instead of getting lost in yak shaving. It also gives you a high-level view of the system, so you can map previous progress and future strategy against each data point.

Make sure you use an electronic application to track the metrics, not sticky notes or a spreadsheet. You want the metrics to be accessible to everyone in the organization.
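
As a minimal sketch of what tracking can look like in code, a few of these metrics can be derived directly from a log of deployment events. The record format below is hypothetical and not tied to any particular tool:

    # Minimal sketch: deriving deployment frequency and change failure rate
    # from a list of deployment records (the record format is hypothetical).
    from datetime import date

    deployments = [
        {"day": date(2021, 1, 4), "failed": False},
        {"day": date(2021, 1, 6), "failed": True},
        {"day": date(2021, 1, 8), "failed": False},
        {"day": date(2021, 1, 11), "failed": False},
    ]

    period_days = (deployments[-1]["day"] - deployments[0]["day"]).days + 1
    deployment_frequency = len(deployments) / period_days  # deploys per day
    change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

    print(f"Deployment frequency: {deployment_frequency:.2f} per day")
    print(f"Change failure rate:  {change_failure_rate:.0%}")

Whatever system you use, the point is that each metric should be computable from data your pipelines already produce, not assembled by hand.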

Practice 2: Divide and Conquer

It’s common practice in the industry to use convoluted multi-stage CI/CD pipelines with a bunch of tests and approvals. Having such a pipeline is better than a manual, no-CI/CD approach, but it’s not that great either.

The issue with these super-pipelines is that they get stuck in the middle while waiting for approvals or tests to complete. They pile up into an unmanageable backlog of work-in-progress pipelines, and it becomes unclear which knot to untie first.

A better and easier-to-manage approach is to use small, separate, asynchronous pipelines. This divides the multi-stage pipeline into multiple single-stage ones: some pipelines run tests while others carry out deployments. You are free to experiment; adding a unit test to a CI pipeline is fine, but adding load testing to it is not a best practice.

Splitting pipelines into smaller, asynchronous ones creates the need for a system to record and store their results and progress. Again, try to use an electronic system and not spreadsheets 🙂

Without such metrics and records, managing all these pipelines quickly gets messy.
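
As a minimal sketch of such a results store, each small single-stage pipeline could report its outcome to one shared place. SQLite is used here purely for illustration; the table layout and field names are hypothetical:

    # Minimal sketch: a shared store that every small, asynchronous pipeline
    # reports into (SQLite for illustration; schema and fields are hypothetical).
    import sqlite3
    from datetime import datetime, timezone

    conn = sqlite3.connect("pipeline_results.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS runs ("
        "pipeline TEXT, commit_sha TEXT, status TEXT, finished_at TEXT)"
    )

    def record_result(pipeline: str, commit_sha: str, status: str) -> None:
        # Each single-stage pipeline calls this when it finishes.
        conn.execute(
            "INSERT INTO runs VALUES (?, ?, ?, ?)",
            (pipeline, commit_sha, status, datetime.now(timezone.utc).isoformat()),
        )
        conn.commit()

    record_result("unit-tests", "a1b2c3d", "passed")
    record_result("deploy-staging", "a1b2c3d", "started")

With all results in one queryable place, it stays clear which pipelines are still in flight and which knot to untie first.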

Practice 3: Your base CI should be your Dockerfile

Multi-stage Docker builds allow all the CI logic for a component to be wrapped into its Dockerfile. This wrapping gives you a consistent build environment everywhere, and it spares you the headache of mix-matching tools: your VCS repository, CI, storage, CD, infrastructure management, and actual cloud no longer have to come from the same vendor! Use whatever carries out each task in the best way.

Using the Dockerfile as your base CI prevents service-provider lock-in, so you can easily switch between CI tools. Go out and try multi-environment projects!
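
As a minimal sketch, assuming a hypothetical Go service whose entry point lives under ./cmd/app, a multi-stage Dockerfile can run the tests in the build stage and ship only the compiled binary:

    # Build stage: the CI logic (vet, test, compile) lives in the Dockerfile.
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY . .
    RUN go vet ./... && go test ./...
    RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

    # Runtime stage: ship only the compiled binary.
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]

Any CI provider that can run docker build now executes exactly the same steps, which is what makes switching tools cheap.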

Practice 4: Keep it DRY – use templating tools

If you follow this path, the need for templating tools will show up. Standard Dockerfiles, standard Terraform scripts, standard cookbooks and playbooks – templates are essential when dealing with multi-environment projects and different tools. Templating tools also allow for a quick comparison of what differs between environments.

Templating tools like Terragrunt, Helm, and Kustomize are good choices and cover the frequent use cases, if not all of them.
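
The sketch below is not Helm or Terragrunt, just Python’s standard-library templating used to illustrate the principle: one template, many environments, with the difference between environments reduced to a small set of values (all values here are hypothetical):

    # Minimal sketch of the DRY idea with stdlib templating; real projects would
    # use Helm, Kustomize, or Terragrunt, but the principle is the same.
    from string import Template

    manifest = Template(
        "image: $image\n"
        "replicas: $replicas\n"
        "log_level: $log_level\n"
    )

    environments = {
        "dev":  {"image": "myapp:latest", "replicas": 1, "log_level": "debug"},
        "prod": {"image": "myapp:1.4.2", "replicas": 5, "log_level": "info"},
    }

    for env, values in environments.items():
        # The diff between environments is just the values, which is easy to compare.
        print(f"--- {env} ---")
        print(manifest.substitute(values))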

Practice 5: Fail Fast

The asynchronous execution of pipelines from Practice 2 enables a fail-fast pattern, provided there is sufficient automated testing. This catches buggy code early and gives fast feedback to the developers who can fix it.

It’s better to fail fast than to create cascading problems across the organization and then hunt through deployment logs for the root cause. That is hectic and unnecessary.

Failing fast lets developers correct code faster and roll back to a known-good state. No task gets pushed aside because of a bug that should have been caught at the start, and no developer loses a night’s sleep over it.
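
A minimal sketch of the fail-fast ordering: run the cheapest checks first and stop at the first failure so feedback reaches the developer quickly. The commands below are hypothetical placeholders for your own lint, unit, and integration steps:

    # Minimal fail-fast sketch: cheapest checks first, stop on the first failure.
    # The commands are hypothetical; substitute your own lint/test steps.
    import subprocess
    import sys

    steps = [
        ["ruff", "check", "."],           # fast lint
        ["pytest", "-x", "tests/unit"],   # unit tests; -x stops on first failure
        ["pytest", "tests/integration"],  # slower checks only run if the above pass
    ]

    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Step failed: {' '.join(cmd)}", file=sys.stderr)
            sys.exit(result.returncode)   # fail fast: later steps never run

    print("All checks passed")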

Practice 6: Use Branches

Branches are a core idea behind a successful CI/CD implementation, and this is often missed. Branches allow you to build a codebase, move it across different build environments, and carry out stages, tests, and deployments along the way.

Once the tests pass and you’re ready to publish the changes, you create a pull request and merge the approved changes into the main branch. This approach lets developers work in parallel or experiment with a new feature on its own branch, without jeopardizing the whole codebase if something goes wrong.
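
A minimal sketch of that flow in plain git commands (the branch name and commit message are hypothetical):

    git checkout -b feature/search            # branch off main for the new feature
    # ...edit and test the code locally...
    git commit -am "Add search endpoint"
    git push -u origin feature/search         # CI pipelines run against this branch
    # Open a pull request; once reviews and automated checks pass, merge into main.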

How awesome is this?

[Figure: Branching]

Practice 7: Verify integrity – use signatures or digests

Security issues are frequently ignored, even though one of the critical components of CI/CD is being able to verify integrity. The best way to verify integrity is signed content, but sometimes this approach gets very complicated.

A cryptographic digest (usually SHA-256) is a relatively simpler way to verify integrity and is recommended.

One of the core things developers miss is that a reference like redis:5.0.9 in a Dockerfile can mutate over time with every additional build: container tags are mutable. It is therefore preferable to pin images by sha256 digest instead.

redis:5.0.9@sha256:08aab527ca57f536f2805e031535a6881bab63171146aa6414de69d54b14a84d (image pinned by sha256 digest)
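
The same digest idea applies to any artifact your pipelines download, not just container images. A minimal sketch in Python, where the file name and the expected digest are hypothetical placeholders:

    # Minimal sketch: verify a downloaded artifact against a published sha256
    # digest (the file name and expected value are hypothetical placeholders).
    import hashlib

    EXPECTED_DIGEST = "<published sha256 hex digest goes here>"

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("artifact.tar.gz") != EXPECTED_DIGEST:
        raise SystemExit("Integrity check failed: digest mismatch")
    print("Digest verified")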

Final Thoughts

I hope you went through all of these practices with your organization’s needs in the back of your mind. Try to implement them for a better, more modern DevOps experience for your organization.

Happy Experimentation!
