
The DevOps Roadmap: Docker

by | 13.03.2021 | Engineering


The containerization revolution has only just begun, which means you have probably heard about Docker at least once in your professional life. Containerization has made our apps’ deployment cycles faster and more efficient. Leading the containerization wave is Docker, the most popular container runtime.

Containers and containerization don’t quite click yet? Read this curated post to get accustomed to every prerequisite.

Back to Docker: in this post, we dive into the topic and understand its ins and outs with a high-level overview.

What is Docker?

Docker is a tool, or platform, that makes it easier to create, package, ship, and deploy applications together with their components, such as libraries and other dependencies. Its main goal is to simplify the application deployment process on Linux — the most widely used server OS.

Docker. Source: freeCodeCamp

The container runtime enables many containers to run on the same hardware (a form of OS-level virtualization), resulting in increased efficiency, isolation between applications, and ease of configuration.

How does Docker work?

Docker packs an application and all of its dependencies into a virtual container that can run on any Linux server. To function, Docker relies on the following components:

Daemon: The Docker daemon (dockerd) manages Docker resources such as images, containers, networks, and volumes by listening for Docker API requests. To control Docker services, a daemon may also interact with other daemons.

High-Level REST API: It allows clients to communicate with the daemon (dockerd).

A CLI: The command-line tool (aka the Docker client) that lets you talk to the Docker daemon.

Docker Engine. Source: FAUN
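The CLI and the REST API are simply two routes to the same daemon. On a host where Docker is installed and the daemon is running, both of the following should report the daemon’s version (the API version v1.41 in the URL is an example and varies by installation):

$ docker version
$ curl --unix-socket /var/run/docker.sock http://localhost/v1.41/version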

How is a Docker container built?

Docker containers are built from a Docker image, which in turn is created from a Dockerfile.

What’s a Dockerfile?

A Dockerfile consists of the instructions needed to build a Docker image. Each instruction adds a new layer on top of the existing layers of the image; these layers are called intermediate images.

A simple Node.js Dockerfile:

# Start from the long-term-support Node.js base image
FROM node:lts
COPY . /usr/src/app/
# npm must run in the directory where package.json was copied
WORKDIR /usr/src/app/
RUN npm install && npm run build
EXPOSE 3000
ENTRYPOINT ["npm", "start"]

What is a Docker image?

Building from the Dockerfile produces a Docker image; to build one, use the docker build command. A Docker image is layered and hashed, with each layer corresponding to an instruction.

Hashed container image layers. Source: FAUN

Since all layers are hashed, Docker can cache them and skip rebuilding layers that don’t change between builds. If the COPY step hasn’t changed, Docker won’t re-copy any of the files, saving a lot of time in the build process.

At the end of the build process, Docker adds a new thin writable layer on top of all the other layers.
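To exploit this caching, a common pattern is to copy the dependency manifests before the rest of the source, so the expensive install layer is rebuilt only when dependencies change. A sketch for the same hypothetical Node.js app:

FROM node:lts
WORKDIR /usr/src/app
# Copy only the dependency manifests first...
COPY package.json package-lock.json ./
# ...so this layer stays cached unless dependencies change
RUN npm install
# Source edits invalidate only the layers from here on
COPY . .
RUN npm run build
EXPOSE 3000
ENTRYPOINT ["npm", "start"]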

So, what’s a Docker container?

A Docker container is the running instance of a Docker image.

We can sum things up as:

  • A Dockerfile is a recipe for creating Docker images
  • A Docker image is built by running docker build (which uses that Dockerfile)
  • A Docker container is a running instance of a Docker image
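Assuming the Node.js Dockerfile from earlier sits in the current directory, the whole chain takes two commands (the image name myapp is just an illustrative tag):

$ docker build -t myapp .          # Dockerfile -> image
$ docker run -p 3000:3000 myapp    # image -> running container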

What are Docker registries?

You don’t always need to build every image yourself from a custom Dockerfile. It’s common practice to pull images (using docker pull) from Docker registries — stores containing many prebuilt images.

So common, in fact, that if you pick a course, you will mostly interact with Docker by pulling images from registries rather than building from your own Dockerfile.

Docker image being pulled from a registry. Source: Docker docs

Docker registries can be public or private (for use within an organization), sparing you the hassle of building every image yourself. You can pull images from them and also upload your own images for later use with docker push.
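Pushing typically requires tagging the image with a repository name first; a sketch against Docker Hub (myuser/myapp is a placeholder):

$ docker tag myapp myuser/myapp:1.0    # name the image for the registry
$ docker login                         # authenticate with Docker Hub
$ docker push myuser/myapp:1.0         # upload the tagged image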

Commands Crash Course

You might have come across several commands in this post; this section summarizes them all for quick reference.

$ docker build

This command builds a fresh image from your Dockerfile and stores it in your local image store. The image can later be published to a registry.

$ docker pull

This command lets you pull an existing image from a Docker registry and save it to your local image store. Later on, this image can be used to spin up containers.

$ docker run

This command lets you run a container from an existing container image.

$ docker push

This command pushes an image or a repository to a registry.

Final Thoughts

I hope this post helped you understand Docker and its key terminology, as well as what happens in the back end with the daemon and APIs. Docker is a leap forward, and if you want to explore more — such as why Docker is preferred over VMs — you can read this post.

Interested in best containerization practices? Here we go!

That’s it for now, then. Thanks for reading, and don’t forget to check the official Docker docs for more information.

Happy Containerizing!

