
Kaniko: How Users Can Make The Best Use of Docker

17.11.2021 | Engineering

Whether you love or hate containers, there are only a handful of ways to work with them that ensure applications run properly under Docker. And while plenty of solutions exist on the web and in the cloud for the needs that come with running Docker, Kaniko has something genuinely compelling to offer.

Kaniko was released by Google as a standalone open-source tool and is developed under the GoogleContainerTools project. Kaniko builds container images from a Dockerfile, just as Docker does, but it does not require a running Docker daemon to do so.

Let’s take a closer look at some of its less-discussed details, its advantages, and its drawbacks.

The Problems People Run Into With Docker

Docker has become an industry standard with a multitude of uses and benefits, but it carries a structural dependency: building from a Dockerfile requires interactive access to a Docker daemon, which in turn essentially requires root access on the machine.

Users therefore run into problems when building container images in environments that don’t support or can’t run a Docker daemon. Kubernetes clusters are a good example of this.
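For a quick illustration of the pain point, an ordinary build on a machine without a reachable Docker daemon fails before it even starts, with an error along these lines:

```sh
# A plain docker build needs the daemon socket; without it, the CLI
# bails out immediately.
$ docker build -t myapp .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```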

Kaniko addresses these issues by providing a way to build container images from a Dockerfile even in the absence of privileged root access. With Kaniko, users can build an image from a Dockerfile and push it to a registry in one go. Since it doesn’t require any special privileges or permissions, it can run in a standard Kubernetes cluster, on Google Kubernetes Engine, or in any environment that can’t easily or securely run a Docker daemon.
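As a sketch of how little ceremony this requires, the following one-off pod clones a git repository, builds it with Kaniko, and pushes the result. The pod name, context URL, destination registry, and the `regcred` credentials secret are all placeholders, and the secret is assumed to exist already:

```sh
# Run kaniko as a one-off Kubernetes pod. All names and URLs below are
# placeholders; the docker-registry secret "regcred" is assumed to exist.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/app.git
        - --destination=registry.example.com/app:latest
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred
        items:
          - key: .dockerconfigjson
            path: config.json
EOF
```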

Docker vs Kaniko (Source: Stackshare)

Kaniko is usually run as a container itself, and it needs three pieces of information to build a Docker image (a concrete invocation is sketched just after the list):

  1. The path to the Dockerfile.
  2. The path to the build context (workspace) directory, which contains all the resources required while building the image.
  3. The destination: the registry URL the finished image will be pushed to.
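For example, a minimal local run with Docker might look like the sketch below, where the three arguments map one-to-one to the list above; the mount paths, image tag, and registry URL are placeholders:

```sh
# Run the kaniko executor image locally. The mounted config.json supplies
# registry credentials for the final push; paths and URLs are placeholders.
docker run --rm \
  -v "$PWD":/workspace \
  -v "$HOME/.docker/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --destination=registry.example.com/myapp:latest
```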

How Does Kaniko Work Exactly?

Kaniko runs as a container image that takes the three arguments described above. The executor image itself is deliberately small: it contains only a static Go binary plus the configuration files needed for pushing and pulling images.

The Kaniko executor first fetches and extracts the filesystem of the base image (the image named in the FROM line of the Dockerfile) to the root of the container. It then executes each command in order and takes a snapshot of the filesystem after each one.
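To make that concrete, here is a toy Dockerfile (written out via a heredoc; the base image, package, and file names are arbitrary placeholders) annotated with what Kaniko does at each step:

```sh
# A toy Dockerfile illustrating kaniko's per-command snapshots.
cat > Dockerfile <<'EOF'
FROM alpine:3.18
# The base image above is extracted to / before any command runs.
# Each command below is executed in userspace, then the filesystem is
# snapshotted; the diff becomes a new layer.
RUN apk add --no-cache curl
COPY app.sh /usr/local/bin/app.sh
# ENTRYPOINT changes no files; it only updates the image metadata.
ENTRYPOINT ["/usr/local/bin/app.sh"]
EOF
```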

Snapshotting happens entirely in userspace: Kaniko walks the filesystem and compares it to the prior state held in memory. Any changes are appended to the base image as a new layer, and the image metadata is updated to reflect them. After executing every command in the Dockerfile, Kaniko pushes the newly built image to the output registry.

Because Kaniko unpacks the filesystem, executes commands, and snapshots the result entirely in userspace inside the executor container, it bypasses the need for privileged access on the machine where it is deployed. The Docker daemon and CLI are not involved at any point in this process.

Image Container Creation Steps (Source: Google CloudOps)

The Merits and Pitfalls of Kaniko

Kaniko’s daemonless, all-in-one approach to building images is a welcome change from the more convoluted setups users otherwise face when combining Docker and Kubernetes. The absence of root-access requirements in clusters, combined with a simple interface, makes Kaniko a great tool, especially for engineers who need a fast way to build and ship applications from a standard Docker workflow. And since there is no dependency on a daemon process, users are free to run Kaniko in almost any kind of environment.

Kaniko executes each command in the Dockerfile entirely in userspace using the executor image, gcr.io/kaniko-project/executor, which is typically run as a container. After each command, it takes a fresh snapshot of the filesystem.

If the filesystem has changed, the executor captures the change as a “diff” layer and records it in the image metadata. This is another merit of using Kaniko: a seamless, layer-by-layer mechanism for turning executed commands into a finished image.
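One way to see those per-command layers for yourself is to pull the image Kaniko pushed and list its history; the registry URL and tag below are placeholders:

```sh
# Each filesystem-changing Dockerfile command should appear as one layer
# in the history output. URL and tag are placeholders.
docker pull registry.example.com/myapp:latest
docker history registry.example.com/myapp:latest
```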

Most of the criticism aimed at Kaniko concerns resource usage and execution time: depending on the application, builds can be slow enough to make it unreliable outside of a serverless platform with ample computing power. Common Kaniko build runs have been reported to be much slower than their Cloud Build equivalents. The obvious answer is to stick to the cloud version, but that is not always the best approach when cloud access isn’t available.
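For what it’s worth, Kaniko ships a layer cache that can take some of the sting out of slow builds: cached layers are pushed to, and reused from, a repository of your choice. A minimal sketch, with placeholder registry URLs:

```sh
# Enable kaniko's layer caching so unchanged layers are reused on
# subsequent builds. Both registry URLs are placeholders.
/kaniko/executor \
  --dockerfile=Dockerfile \
  --context=dir:///workspace \
  --destination=registry.example.com/myapp:latest \
  --cache=true \
  --cache-repo=registry.example.com/myapp/cache
```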

Comparison between common tools (Source: Slideshare)

Final Thoughts and Review

If you still haven’t used Kaniko to ease container image building and deployment, it doesn’t hurt to take a small dive into the community documentation and even to run the application yourself. Kaniko has a fairly simple syntax, can be driven with just a few commands from the command line, and has no major dependencies.

Users shouldn’t be put off by its heavy reliance on containers: resources are better utilized when proper images are built on top of them. And although Kaniko is focused on Kubernetes and Docker, it has plenty to offer on other common platforms and architectures as well.

Whether you are a novice or an expert, there are several resources available on the community pages, and on Google’s official Kaniko repository, to get you started. Tune in as we take a much-needed look at other applications in the next articles.
