
The DevOps Roadmap: 7 Best Practices in CI/CD

22.01.2021 | Engineering

The software industry is quickly embracing DevOps, and it is more important than ever to adopt CI/CD best practices that keep you future-proof. Every organization has different use cases and requirements that dictate how it adopts DevOps, but a few fundamental principles are widely considered best practices. This post walks through them.

The list is not in any order of priority. Feel free to experiment with, or adopt, any or all of them!

Practice 1: Track Work Items

Use a system that lets you track the metrics and KPIs relevant to you. The list below gives an overview of metrics worth tracking:

  1. Deployment Frequency
  2. Change Volume
  3. Deployment Time
  4. Failed Deployment Rate
  5. Change Failure Rate
  6. Time to Detection
  7. Mean Time to Recovery
  8. Lead Time
  9. Defect Escape Rate
  10. Defect Volume
  11. Availability
  12. Service Level Agreement Compliance
  13. Unplanned Work
  14. Customer Ticket Volume
  15. Cycle Time

Tracking these metrics lets you understand your progress instead of staying busy with yak shaving. It also gives you a high-level view of the system, so you can map past progress and future strategy against each data point.

Make sure you use an electronic application to track the metrics, not a sticky note or a spreadsheet. You want the metrics to be accessible to everyone in the organization.

Practice 2: Divide and Conquer

It is common in the industry to use convoluted multi-stage CI/CD pipelines with a pile of tests and approvals. A multi-stage pipeline is better than a manual, no-CI/CD approach, but it is still not great.

The issue with these super-pipelines is that they get stuck in the middle while waiting for approvals or tests to complete. They pile up into an unmanageable backlog of work-in-progress pipelines, and it becomes unclear which knot to untie first.

A better and easier-to-manage approach is to use small, separate, asynchronous pipelines. This divides the multi-stage pipeline into multiple single-stage ones: some pipelines run tests, others carry out deployments. You are free to experiment, of course; adding a unit test to a CI pipeline is fine, but adding load testing there is not a best practice.

Splitting pipelines into smaller, asynchronous ones creates the need for a system to record and store their results and progress. Again, use an electronic system, not spreadsheets 🙂

A lack of metrics tends to make the management messy.

Practice 3: Your base CI should be your Dockerfile

Multi-stage Docker builds allow all the CI logic for a component to be wrapped into its Dockerfile. This gives you a consistent build environment everywhere. Forget the headache of mixing and matching tools: your VCS repository, your CI, your storage, your CD, your infrastructure management, and your actual cloud do not all have to come from the same vendor! Use whatever carries out each task best.

Using the Dockerfile as your base CI prevents service-provider lock-in, and you can easily switch between CI tools. Go out and try multi-environment projects!
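As a sketch of what this can look like (the Go toolchain, module layout, and base images below are illustrative assumptions, not something prescribed by this post), a multi-stage Dockerfile can carry the whole build-and-test flow:

```dockerfile
# Build stage: compile the component (a hypothetical Go service, for illustration only)
FROM golang:1.21 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Test stage: unit tests run inside the same build environment on any CI provider
FROM build AS test
RUN go test ./...

# Runtime stage: only the compiled binary ships in the final image
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

Any CI tool then only needs to run docker build (with --target test to stop after the tests, since BuildKit skips stages the final image does not depend on), which is exactly what makes switching providers painless.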

Practice 4: Keep it DRY – use templating tools

Follow the practices above and templating tools will show up on their own. Standard Dockerfiles, standard Terraform scripts, standard cookbooks and playbooks: templates are essential when you deal with multi-environment projects and a mix of tools. Templating also gives you a quick way to compare what differs between environments.

Templating tools like Terragrunt, Helm, and Kustomize are good choices and cover the frequent use cases, if not all of them.
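Those tools mostly template YAML and HCL, but the same DRY idea applies to the standard Dockerfiles mentioned above. As a minimal sketch (the base image, build arguments, and config paths are hypothetical), one parameterized Dockerfile can serve every environment instead of one copy per environment:

```dockerfile
# A single Dockerfile for all environments; the differences are injected as build args
ARG BASE_TAG=3.19
FROM alpine:${BASE_TAG}

# Decide at build time which environment's config gets baked in
ARG ENVIRONMENT=staging
COPY config/${ENVIRONMENT}.yaml /etc/app/config.yaml
```

A production build would pass --build-arg ENVIRONMENT=production, and comparing environments reduces to comparing the arguments you pass.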

Practice 5: Fail Fast

The asynchronous pipelines from Practice 2 enable a fail-fast pattern, provided there is sufficient automated testing. Buggy code gets caught early, and the developers who can fix it get fast feedback.

It is better to fail fast than to let problems cascade through the organization and then hunt across deployment logs for the root cause. That is hectic and unnecessary.

Failing fast lets developers correct code quickly and roll back to a known-good state. No task gets pushed aside because of a bug that should have been caught at the initial stage, and nobody loses a night's sleep over it.

Practice 6: Use Branches

Branches are at the core of a successful CI/CD implementation, and this is often missed. They let you build the codebase, move it across different build environments, and carry out stages, tests, and deployments.

Once the tests pass and you are ready to publish the changes, you can create a pull request and merge the approved changes into the main branch. This approach lets developers work in parallel or experiment with a new feature on a separate branch without jeopardizing the whole codebase if something goes wrong.

How awesome is this?

Branching

Practice 7: Verify integrity – use signatures or digests

Security is frequently ignored, even though one of the critical components of CI/CD is being able to verify integrity. The best way to verify integrity is to use signed content, but that approach can get very complicated.

A cryptographic digest (usually sha256) is a relatively simple way to verify integrity and is recommended.

One of the core things developers miss is that a reference like redis:5.0.9 in their Dockerfile can mutate over time with every additional build of that tag. Container tags are mutable, so it is preferable to pin images by their sha256 digest instead.

redis:5.0.9@sha256:08aab527ca57f536f2805e031535a6881bab63171146aa6414de69d54b14a84d (the same tag pinned with its sha256 digest)
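In a Dockerfile this is just a longer FROM line; a minimal sketch using the digest above:

```dockerfile
# Mutable reference: the content behind "redis:5.0.9" can change between builds
# FROM redis:5.0.9

# Immutable reference: the pull fails if the content no longer matches the digest
FROM redis:5.0.9@sha256:08aab527ca57f536f2805e031535a6881bab63171146aa6414de69d54b14a84d
```

The same digest pinning works wherever images are referenced, for example in Kubernetes manifests or docker-compose files.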

Final Thoughts

I hope you went through all of these practices with your organization's needs in the back of your mind. Try implementing them for a better, more modern DevOps experience in your organization.

Happy Experimentation!
