Photo by Fotis Fotopoulos on Unsplash

The DevOps Roadmap: Fundamentals with CI/CD

by | 22.01.2021 | Engineering

We have reached a point in the web development roadmap where you can build great web applications with loads of features. But how about building applications fast and deploying user-requested features faster?

In this post, we will explore how some companies push code to production every day (multiple times a day in some instances) by leveraging what we have learned so far, i.e., the cloud-native way of development.

And we have a fun experiment at the end of the blog so that you can get your hands dirty with Continuous Integration; there are no prerequisites for the fun part.

Table of Contents:

  • What is DevOps?
  • What are the DevOps principles?
  • How does DevOps work?
  • Fun Experiment with CI 😋

Let’s dive in!

What is DevOps?

DevOps is a philosophy that is implemented through tools, a lot of them. The philosophy focuses on bringing together the development & operations team to work in unison and increase the organization’s ability to deliver applications and services at high velocity.

We’ve written a few words on a more philosophical viewpoint on DevOps here.

What are the development & operations teams?

The development team writes lines of code, and the operations team takes care of deployment alongside application availability and latency.

Newcomers often have a misconception about how new software is made and released. After developers write code for an application, it isn’t immediately published for users; there are a lot of things in between. The code is forwarded to operations for testing, debugging, and integration with everything else, and this part consumes most of the time in the development life cycle.

Development & operations: Not a perfect match?

Let’s understand this first!

Developers want to deploy new features as fast as possible.

Operations want to do precisely the opposite, i.e., deploy as little as possible to maintain the application’s stability. There are several instances where a single deployment brought down the entire system.

The Battle – Dev vs Ops

Customers don’t care about what developers and operations think; they only care about getting new ‘stable’ features without any glitches as fast as possible. Think about the software updates you’re waiting for 😛

We can’t blame operations for not deploying updates quickly. They want to maintain the overall application’s stability, and they are entirely right in their thought process, but this approach comes with a few drawbacks. 🙁

  1. Lack of Agility: Developers are cut off from quick user feedback because deployments are slow. The delay caused by operations doesn’t allow the product to adjust quickly to an ever-changing market, which may give competitors an advantage. An extreme scenario is a security patch not being published even after the vulnerability is discovered, jeopardizing both users and the organization.
  2. Chance of a complete blackout: A developer’s nightmare is being called at 3 AM to fix a blackout caused by a deployment. Companies like Google earn around 4,000 USD per second, so even a single minute of downtime costs roughly 240,000 USD. It gets even more expensive if customers migrate to other services after the downtime. Rollbacks are more demanding and more time-consuming when deployments are big (a lot of services are changed all at once).

Moreover, there is an internal blame game & conflict that drives down the productivity of the team and the organization. The dev team and operations become very defensive and won’t admit their fault when something isn’t completed before a deadline or, worse, when something goes wrong.

Traditional Development and Operations

Manager: Why is the product not yet ready?

Operations Team Lead: We have submitted the changes to the dev team.

Dev Team Lead: We have submitted the updates to operations.

Resultant work: 0!

Manager: Why is the system down?

Operations Team Lead: Dev team might have a bug in their code. We can find that for you.

Dev Team Lead: Operations never pointed out any bug we could fix during testing. We could have fixed it earlier, and not at 3 AM, had it been pointed out.

Resultant accountability: 0! Customers are migrating to a different service, and the company is losing money 😐

What are the DevOps principles?

As you can see, the previous approach of having two separate teams doesn’t hold up as you scale; your problems scale with you. The best way to prevent that is to integrate both teams with the help of DevOps (a clipped compound of “development” and “operations”) and its principles. This integration helps you increase your capacity to innovate, improve your time-to-value, and, most importantly, deliver an optimal customer experience.

Principle 1: Global Optimization / Bottleneck reduction

Save time in every step of the development cycle by reducing unnecessary conversation and automating everything to focus solely on customers.

Principle 2: Amplify Feedback loops

Get input from end users and act on it to improve the overall customer experience.

Principle 3: Continuous Learning

Adapt to changing circumstances fast, whether that may be the emergence of new technology, customer needs, or legislation changes.

How does DevOps work?

Pipelines are what drive the DevOps principles. BMC defines a pipeline as:

A pipeline in a Software Engineering team is a set of automated processes that allow Developers and DevOps professionals to reliably and efficiently compile, build and deploy their code to their production compute platforms. There is no hard and fast rule stating what a pipeline should look like and the tools it must utilise, however the most common components of a pipeline are; build automation/continuous integration, test automation, and deployment automation.

DevOps pipelines

The pipeline consists of a set of tools which are broken down into the following categories:

  • Continuous Development
  • Continuous Integration
  • Continuous Deployment
  • Continuous Testing
  • Continuous Monitoring

Let’s dive in 😃

Continuous Development

In this stage, developers commit their code to the codebase using a version control system like Git. You may ask: why continuous? Because the development cycle goes on & on; it never stops.

Git maintains the different versions of the codebase, and tools like Ant, Maven, and Gradle are used for packaging/building the code so that executable files can be sent for testing.

Having different versions enables fast rollbacks if something goes wrong or a bug is found in testing or a peer code review. Peer review is easy because each committed change is small, so your team members can look at what you’re working on and provide feedback. The difficulty of understanding code increases with the size of the codebase.

Continuous Integration (CI)

The integration stage is a critical point in the DevOps life cycle. It connects the stages of the life cycle, thereby automating the whole DevOps process: no one has to manually move the codebase from development to testing when the version control system detects a change.

Jenkins is a popular example of a tool used to automate the non-human parts of the development process with the help of CI.

Continuous Integration as a critical point.
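Since the experiment at the end of this post uses GitHub Actions, here is a minimal sketch of such an automated integration step expressed as a GitHub Actions workflow rather than a Jenkins job. The Java/Maven project layout and the `mvn -B verify` command are assumptions for illustration only:

name: CI build and test
on:
  push:         # run for every commit pushed to the repository
  pull_request: # and for every pull request
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2   # fetch the code committed in the development stage
      - uses: actions/setup-java@v2 # assumption: a Java project built with Maven
        with:
          distribution: 'temurin'
          java-version: '11'
      - run: mvn -B verify          # compile, package, and run the automated tests

With a workflow like this in place, every commit is picked up, built, and tested automatically; nobody has to hand the codebase over to testing.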

Continuous Deployment (CD)

In this stage, the code is pushed to a specific server (production or test) after the environment is prepared, or the application is containerized and pushed to the desired server.

Virtualization & containerization are used to create a Docker image in which the software can be tested and then executed on any other machine. This ensures the operating environment doesn’t change from one developer’s system to another.

If you want to test software, you need a test environment & tools. For example, to test a Python program, you would need Python and an operating system.

The test environment and the needed tools are configured automatically with preconfigured files using configuration management tools like Puppet and Ansible.

Continuous Deployment and its tools
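As a rough sketch of what such a preconfigured file can look like, here is a minimal Ansible playbook that would prepare the Python test environment from the example above; the `test-servers` inventory group and the Debian/Ubuntu base system are assumptions:

# playbook.yml — a minimal sketch of configuring a test environment with Ansible
- name: Prepare the test environment
  hosts: test-servers        # assumption: an inventory group containing the test machines
  become: true               # escalate privileges to install packages
  tasks:
    - name: Install the Python interpreter needed to run the tests
      ansible.builtin.apt:   # assumes a Debian/Ubuntu based system
        name: python3
        state: present
        update_cache: true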

Continuous Testing

This stage deals with testing the codebase changes and then informing the integration (CI) tool, which decides what needs to be done next.

If the code passes the tests, the CI tool is informed and the code is deployed to the production server using CD. If the tests fail, the integration tool notifies the developer about the errors, and the cycle continues until the tests pass.

Selenium is a tool that is used to test web applications by carrying out automated tests.
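Expressed again as a hedged GitHub Actions sketch, this “deploy only if the tests pass” gate can be modelled with a job dependency; the test command and the deploy step below are placeholders:

name: Test then deploy
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: mvn -B test                             # assumption: the same Maven project as above
  deploy:
    needs: test                                      # this job only runs if the test job succeeds
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to the production server..." # placeholder for the real CD step

If the test job fails, the deploy job is skipped and the failed run is reported back to the developer, mirroring the loop described above.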

Continuous Monitoring

This stage constantly monitors the deployed or tested application for bugs and crashes. It is also set up to collect user feedback so that continuous delivery can provide users with a better experience.

Nagios is a popular tool used to monitor systems, networks, and infrastructure. It also offers monitoring and alerting services for any configurable event.
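Nagios has its own configuration format, so no Nagios config is shown here. Purely as a toy illustration of periodic monitoring, using the same scheduled-workflow mechanism as the experiment below, a basic health check could look like this (the health-check URL is a placeholder):

name: Toy uptime check
on:
  schedule:
    - cron: '*/15 * * * *'   # run every 15 minutes
jobs:
  ping:
    runs-on: ubuntu-latest
    steps:
      - name: Fail the run (and surface an alert) if the endpoint is unreachable
        run: curl --fail --silent --show-error https://example.com/health   # placeholder URL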

A quick overview of the DevOps pipeline.

Fun Experiment with CI 😋

Now, as we have skimmed through all of it and got a beginner level understanding of the whole process, let’s get to the fun part.

Let’s set up continuous integration on GitHub that will list your latest blog posts or YouTube videos on your GitHub profile README. This task will run at a fixed interval, fetch the posts or videos from RSS feeds, and then integrate them into the README using GitHub Actions.

Continuous Integration of blogposts on GitHub

  • Have a GitHub account and then create a repository with the same name as your GitHub username. Select Initialize this repository with a README. Confused? Read a more detailed tutorial here.

if your username is “octocat”, the repository name must be “octocat”

  • Go to your repository
  • Add the following section to your README.md file; you can give it whatever title you want. Just make sure that you use <!-- BLOG-POST-LIST:START --> and <!-- BLOG-POST-LIST:END --> in your README. The workflow will replace these comments with the actual blog post list:
# Blog posts
<!-- BLOG-POST-LIST:START -->
<!-- BLOG-POST-LIST:END -->
  • Create a folder named .github and create a workflows folder inside it if it doesn’t exist.
  • Create a new file named blog-post-workflow.yml with the following contents inside the workflows folder:
name: Latest blog post workflow
on:
  schedule: # Run workflow automatically
    - cron: '0 * * * *' # Runs every hour, on the hour
  workflow_dispatch: # Run workflow manually (without waiting for the cron to be called), through the Github Actions Workflow page directly
jobs:
  update-readme-with-blog:
    name: Update this repo's README with latest blog posts
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: gautamkrishnar/blog-post-workflow@master
        with:
          feed_list: "https://dev.to/feed/gautamkrishnar,https://www.gautamkrishnar.com/feed/"
  • Replace the above URL list with your own RSS feed URLs. See popular-sources for a list of common RSS feed URLs. You can use our feed for a quick demonstration.
  • Commit and wait for it to run automatically or you can also trigger it manually to see the result instantly. To trigger the workflow manually, please follow the steps in the video.

Final Thoughts 🤩

We’ve covered a lot in this piece, but we’ve only scratched the surface of this topic. There are various kinds of deployment strategies, use cases, and concepts that make DevOps so exciting.

If you enjoyed the CI/CD experiment and want to dive deeper into CI/CD, a good next step is going through the 7 best practices to create a better, modern development environment.

Happy Learning! 😎

