
The DevOps Roadmap: Fundamentals with CI/CD

22.01.2021 | Engineering

We have reached a point in the web development roadmap where you can build great web applications with loads of features. But how about building applications fast and deploying user-requested features faster?

In this post, we will explore how some companies push code to production every day (multiple times a day in some instances) by leveraging what we have learned so far, i.e., the cloud-native way of development.

And we have a fun experiment at the end of the blog so that you can get your hands dirty with Continuous Integration; there are no prerequisites for the fun part.

Table of Contents:

  • What is DevOps?
  • What are the DevOps principles?
  • How does DevOps work?
  • Fun Experiment with CI 😋

Let’s dive in!

What is DevOps?

DevOps is a philosophy that is implemented through tools, a lot of them. The philosophy focuses on bringing together the development & operations team to work in unison and increase the organization’s ability to deliver applications and services at high velocity.

We’ve written a few words on a more philosophical viewpoint on DevOps here.

What are the development & operations teams?

The development team writes lines of code, and the operations team takes care of deployment alongside application availability and latency.

Newcomers often have a misconception about how new software is made and released. After developers write the code for an application, it is not immediately published for users. There are a lot of steps in between. The code is handed to operations for testing, debugging, and integration with everything else, and this part consumes most of the time in the development life cycle.

Development & operations: Not a perfect match?

Let’s understand this first!

Developers want to deploy new features as fast as possible.

Operations want to do precisely the opposite, i.e., deploy as little as possible to maintain the application’s stability. There are several instances where a single deployment brought down the entire system.

The Battle – Dev vs Ops

Customers don’t care about what developers and operations think; they only care about getting new ‘stable’ features without any glitches as fast as possible. Think about the software updates you’re waiting for 😛

We can’t blame operations for not deploying updates quickly. They want to maintain the application’s overall stability, and they are entirely right in their thought process, but this approach comes with a few drawbacks. 🙁

  1. Lack of Agility: A developer is deprived of quick feedback from users because deployment is slow. The delay caused by operations doesn’t allow the product to adjust quickly to an ever-changing market, which may give competitors an advantage. An extreme scenario for lack of agility is a security patch not being published even after the vulnerability is discovered, jeopardizing both users and the organization.
  2. Chance of complete blackout: A developer’s nightmare is being called at 3 AM to fix a blackout caused by a deployment. Companies like Google earn on the order of 4,000 USD per second, so even a single minute of downtime can cost roughly 240,000 USD. It’s even more expensive if customers migrate to other services after the downtime. Rollbacks are also more demanding and more time-consuming when deployments are big (a lot of services change all at once).

Moreover, there is an internal blame game and conflict that drives down the productivity of the teams and the organization. Both the dev and operations teams get defensive and won’t admit fault when something misses a deadline, or worse, when something goes wrong.

Traditional Development and Operations

Manager: Why is the product not yet ready?

Operations Team Lead: We have submitted the changes to the dev team.

Dev Team Lead: We have submitted the updates to operations.

Resultant work: 0!

Manager: Why is the system down?

Operations Team Lead: Dev team might have a bug in their code. We can find that for you.

Dev Team Lead: Operations never pointed out any bug we could fix while testing. We could have fixed that before and not at 3 AM if pointed.

Resultant accountability: 0! Customers migrate to a different service, and the company loses money 😐

What are the DevOps principles?

As you can see, the previous approach of having two separate teams is not suitable as you scale: your problems scale with you. The best way to prevent that is to integrate both teams with the help of DevOps (a clipped compound of “development” and “operations”) and its principles. This integration increases your capacity to innovate, improves your time to value, and, most importantly, delivers an optimal customer experience.

Principle 1: Global Optimization / Bottleneck reduction

Save time at every step of the development cycle by reducing unnecessary back-and-forth and automating everything, so the team can focus solely on customers.

Principle 2: Amplify Feedback loops

Get input from end users and act on it to improve the overall customer experience.

Principle 3: Continuous Learning

Adapt to changing circumstances fast, whether that may be the emergence of new technology, customer needs, or legislation changes.

How does DevOps work?

Pipelines are what drive the DevOps principles in practice. BMC defines a pipeline as:

A pipeline in a Software Engineering team is a set of automated processes that allow Developers and DevOps professionals to reliably and efficiently compile, build and deploy their code to their production compute platforms. There is no hard and fast rule stating what a pipeline should look like and the tools it must utilise, however the most common components of a pipeline are; build automation/continuous integration, test automation, and deployment automation.

DevOps pipelines

The pipeline consists of a set of tools, broken down into the following categories:

  • Continuous Development
  • Continuous Integration
  • Continuous Deployment
  • Continuous Testing
  • Continuous Monitoring

Let’s dive in 😃

Continuous Development

In this stage, developers commit their code to the codebase using a version control system like Git. You may ask: why continuous? Because the development cycle goes on and on; it never stops.

Git maintains the different versions of the codebase, and tools like Ant, Maven, and Gradle are used for packaging/building the code so that executable files can be sent for testing.

Keeping every version makes rollbacks fast if something goes wrong or a bug is found in testing or peer code review. Peer review is easy when the committed change is small: your team members can see what you’re working on and give feedback. The difficulty of understanding a change grows with its size.

Continuous Integration (CI)

The integration stage is a critical point in the DevOps life cycle. It glues the other stages of the life cycle together, thereby automating the whole DevOps process: no person needs to move the codebase from development to testing once the version control system detects a change.

Jenkins is a popular example of a tool used to automate the non-human part of the development process with the help of CI.
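The sequence a CI server runs on every detected change can be sketched as a tiny shell script. The three functions are stand-ins (assumptions, not real Jenkins commands); a real job would run e.g. `git pull`, `mvn package`, and the project’s actual test suite:

```shell
#!/bin/sh
# Minimal sketch of a CI job: fetch, build, test, report.
set -e

fetch_sources() { echo "fetching the latest commit"; }
build()         { echo "building the artifacts"; }
run_tests()     { echo "running the test suite"; return 0; }

fetch_sources
build
if run_tests; then
  status=SUCCESS   # hand the build over to the next stage (deployment)
else
  status=FAILURE   # a real CI tool would notify the developer here
fi
echo "pipeline finished: $status"
```

The important design point is the exit code: every stage either succeeds and triggers the next stage, or fails and reports back, with no human in the loop.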

Continuous Integration as a critical point.

Continuous Deployment (CD)

In this stage, the code is pushed to a specific server (production or test) after the environment, or the application itself, is containerized and shipped to the desired server.

Virtualization & containerization are used to create a Docker image in which the software can be tested and then executed on any other machine. This ensures the operating environment doesn’t change from one developer’s system to another.

If you want to test software, you need a test environment and tools. For example, to test a Python program, you need a Python interpreter and some operating system.

The test environment and the needed tools are configured automatically from preconfigured files using configuration management tools like Puppet and Ansible.
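For the Python example above, the reproducible test environment can be described in a Dockerfile. This is a sketch under stated assumptions: the image name, base image, and project layout are illustrative, not taken from a real project:

```shell
#!/bin/sh
# Write a Dockerfile that pins the interpreter and OS, so tests run in
# the same environment on every developer machine and CI agent.
set -e
mkdir -p app
cd app

cat > Dockerfile <<'EOF'
# same Python and same OS everywhere
FROM python:3.11-slim
WORKDIR /src
COPY . .
CMD ["python", "-m", "unittest", "discover"]
EOF

# with a Docker daemon available, you would then run:
#   docker build -t myapp-tests .
#   docker run --rm myapp-tests
```

Because the environment is declared in a file, tools like Puppet, Ansible, or the CI server itself can recreate it automatically instead of someone configuring a test machine by hand.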

Continuous Deployment and its tools

Continuous Testing

This stage deals with testing the codebase changes and then informing the integration (CI) tool that decides what needs to be done next.

If the code passes the tests, the CI tool is informed and the change is deployed to the production server using CD. If the tests fail, the integration tool notifies the developer about the errors, and the cycle continues until the tests pass.

Selenium is a tool that is used to test web applications by carrying out automated tests.

Continuous Monitoring

This stage constantly monitors the deployed application for bugs or crashes. It is also set up to collect user feedback so that continuous delivery can provide users with a better experience.

Nagios is a popular tool used to monitor systems, networks, and infrastructure. It also offers monitoring and alerting for any configurable event.
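The core idea behind such a monitoring tool can be sketched in a few lines of shell. This is not Nagios itself, just the probe-and-alert loop it is built around; the URLs are placeholders, and the probe is simulated (a real check might be `curl -fsS --max-time 5 "$url"`):

```shell
#!/bin/sh
# Probe each target and raise an alert when a probe fails.
check_service() {
  [ "$1" = "https://example.com/healthy" ]  # simulated probe result
}

for url in https://example.com/healthy https://example.com/broken; do
  if check_service "$url"; then
    echo "OK    $url"
  else
    echo "ALERT $url"   # a real setup would page the on-call engineer
  fi
done
```

Real monitoring systems run checks like this on a schedule and attach notification channels (email, pager, chat) to the alert branch.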

A quick overview of the DevOps pipeline.

Fun Experiment with CI 😋

Now, as we have skimmed through all of it and got a beginner level understanding of the whole process, let’s get to the fun part.

Let’s set up continuous integration on GitHub that posts your latest blog posts or YouTube videos on your GitHub profile README. A scheduled task will fetch the posts or videos from RSS feeds at a fixed interval and update the README using GitHub Actions.

Continuous Integration of blogposts on GitHub

  • Have a GitHub account, then create a repository with the same name as your GitHub username. Select “Initialize this repository with a README”. Confused? Read a more detailed tutorial here.

If your username is “octocat”, the repository name must be “octocat”.

  • Go to your repository
  • Add the following section to your README file. You can give it whatever title you want; just make sure that you use <!-- BLOG-POST-LIST:START --> and <!-- BLOG-POST-LIST:END --> in your README. The workflow will replace these comments with the actual blog post list:
# Blog posts
  • Create a folder named .github and create a workflows folder inside it if it doesn’t exist.
  • Create a new file named blog-post-workflow.yml with the following contents inside the workflows folder:
name: Latest blog post workflow
on:
  schedule: # Run workflow automatically
    - cron: '0 * * * *' # Runs every hour, on the hour
  workflow_dispatch: # Run workflow manually (without waiting for the cron to be called), through the GitHub Actions Workflow page directly
jobs:
  update-readme-with-blog:
    name: Update this repo's README with latest blog posts
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: gautamkrishnar/blog-post-workflow@master
        with:
          feed_list: "<your-rss-feed-url-1>,<your-rss-feed-url-2>"
  • Replace the placeholder URL list with your own RSS feed URLs, separated by commas. See popular-sources for a list of common RSS feed URLs. You can use our feed for a quick demonstration.
  • Commit the file and wait for the workflow to run automatically, or trigger it manually to see the result instantly. To trigger the workflow manually, follow the steps in the video.

Final Thoughts 🤩

We’ve covered a lot in this piece, but we’ve only scratched the surface of this topic. There are various kinds of deployment strategies, use cases, and concepts that make DevOps such an exciting field.

If you enjoyed the CI/CD experiment and want to dive deeper into CI/CD, the best next step is to go through the 7 best practices for creating a better, more modern development environment.

Happy Learning! 😎

