If you’ve looked into containers, unikernels, and VMs, I’d bet you’re confused about which one to try for your new venture. That’s normal; it happens to everyone experimenting with new technology.
Remember the age-old dilemma of deciding which language suits your project best?
This is similar. You need to balance agility, security, community support, and performance while finding the ideal way to ship your application in a cloud-native environment.
In this post, we’ll dissect the differences so that you’re clear on what to use by the end.
Let’s dive in!
Containers and unikernels fall under the same umbrella when compared to a VM (virtual machine). It would be naive to compare all three at once, so we’ve taken a more structured approach to help you decide.
We’ll first compare the traditional solution (VMs) to a modern one (containers), and then differentiate between the modern and the breakthrough (unikernel) solutions.
Containers vs. Virtual machines
Containers, if you haven’t read about them already, are an implementation of operating-system-level virtualization. OS-level virtualization means your containers run on a host operating system without installing any additional OS, using only the hardware resources they need.
Because they share the host’s kernel, containers can’t run an OS with a different kernel than the host’s.
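One way to see this kernel sharing in practice: the kernel release reported inside a container is identical to the host’s, because there is only one kernel. A minimal Python sketch of the check (the container comparison is left as a comment, since it assumes Docker is installed):

```python
import platform

# Containers share the host kernel, so this value is the same whether the
# code runs directly on the host or inside a container on that host.
kernel_release = platform.uname().release
print(f"Kernel release: {kernel_release}")

# To verify with a real container (assumes Docker is installed):
#   docker run --rm alpine uname -r
# prints the same release string as `uname -r` on the host, even though
# Alpine's userland is completely different from the host's.
```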
On the other hand, virtual machines are an implementation of hardware-level virtualization. A hypervisor sits on top of the hardware and divides it between instances of the same or different OSs. You need to install a full OS for each running VM, because there is no host OS in the picture: we virtualize on top of the hardware, not on top of an OS as containers do.
As you might have guessed, this is heavy on system resources (CPU, memory, and storage), but it lets you run different OSs.
VMs provide machine-level isolation between instances, as each uses its own OS and kernel. So, two machines running side by side are unaware of each other’s existence. Talk to any security engineer about these properties, and everything will sound like music to their ears.
This extreme security is why all cloud providers use a hypervisor-based approach to manage their computing resources.
On the other hand, containers provide process-based isolation, which is typically considered lightweight isolation between the host and the other containers. There are ways to harden them, but containers still lag a bit behind VMs: if an attack on the host succeeds, it compromises the host and every container running on it.
In a typical VM setup, the hypervisor allocates resources from the actual hardware to the different machines. With containers, we rely primarily on two features of the Linux kernel to give our processes the appearance of isolation.
The first is namespaces, which give each container instance the appearance of having its own operating system: its own process tree, network stack, mount points, and so on.
Then there are cgroups (control groups), which meter and limit the resources each container can use (CPU, memory, I/O), so we never overburden the system with containers and can control precisely what each one has access to.
Simply put, we have finer-grained control over resource allocation with containers than with VMs.
Portability and flexibility
If we consider portability, Dockerfiles and container images give us far more portability than VMs. With a typical VM, you need to migrate the OS and system files, i.e., the whole disk, to run that instance somewhere else.
The whole OS and its system files are heavy, often several gigabytes in size. Because of this, a single server can host far more containers than virtual machines.
With a container, the only thing you need to transfer is the Dockerfile, a few lines of text, and everything inside the container can be rebuilt from it.
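Part of why images travel so well is that their layers are content-addressed: each layer is identified by the SHA-256 digest of its bytes, so two hosts can tell instantly whether they already share a layer. A sketch of the idea (the digest format mirrors what `docker pull` prints; the layer bytes here are made up):

```python
import hashlib

def layer_digest(layer_bytes: bytes) -> str:
    # Registries identify layers by the SHA-256 of their bytes, so
    # identical layers are pulled and stored only once.
    return "sha256:" + hashlib.sha256(layer_bytes).hexdigest()

# Stand-ins for real layer tarballs (illustrative only).
base_layer = b"pretend this is a tarball of the base image filesystem"
app_layer = b"pretend this is a tarball containing only your app's files"

print(layer_digest(base_layer))
print(layer_digest(app_layer))

# If two images share the same base layer, its digest is identical in both,
# and a host that already has that layer skips the download entirely.
```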
But if we talk about flexibility, VMs scale enormously: you can give a new machine almost any amount of RAM and storage. This is not the case for containers, which are limited by the host system’s hardware resources.
Another significant advantage is that, while virtual machines can take several minutes to boot their operating systems, containerized applications start almost instantly. Containers can be created “just in time” when they’re needed and removed when they’re no longer needed, freeing up resources on their hosts.
Rather than running an entire complex application in a single container, it can be divided into modules (such as the database, the application front end, and so on). This breaking down of the monolith is what the microservices strategy is all about. And since each module is relatively simple, it can be changed without rebuilding the whole application.
Individual modules (or microservices) can be instantiated only when required and are available almost instantly, thanks to the lightweight nature of containers. VMs simply can’t match that immediacy.
Containers vs. Unikernels
Unikernels, on the other hand, are containers on steroids: they provide enhanced security thanks to a distinct kernel and the absence of a shell, significantly reducing the attack surface. Containers can’t compete here, because container images typically include a shell, and sharing a kernel with the host always widens the attack surface.
Apart from security, a unikernel bundles only the lightweight OS components its application needs, giving it a smaller resource footprint than its traditional counterpart, the container. So you can host more unikernels with the same resources.
If you’ve made it this far, I assume you’re clear on the fundamental distinctions between these virtualization technologies. But weren’t you here to make a decision quickly, without the headache?
I have put the use cases forward in this section to guide your cloud-native journey with ease.
If you need to run demanding software, network infrastructure, and apps that will consume most of a machine’s resources, virtual machines are the way to go. In summary, choose VMs if you need to:
- Manage a number of different operating systems
- Manage multiple apps on a single server
- Run an app that requires all the resources and functionalities of an OS
- Ensure total isolation and security
If you need to build web applications and caching services, microservices, network daemons, or small databases, containers are the way to go. In summary, choose containers if you need to:
- Maximize the number of apps running on a server
- Deploy multiple instances of a single application
- Have a lightweight system that quickly starts
- Develop an application that runs on any underlying infrastructure
Choose unikernels if you want a taste of innovation and experimentation, but be aware that they don’t yet have strong community support. Companies like NanoVMs are adopting a new approach of selling consultancy services around unikernels. Still, there’s not a lot of competition currently, and you might not get the best value for your buck.
“We haven’t seen significant traction in unikernels yet, primarily because there isn’t a universal library to build them with.”— Edwin Yuen, an analyst at Enterprise Strategy Group
Maybe by the end of this decade, unikernels, which I consider a hybrid of containers and VMs, will rule cloud-native strategy, or might even be the new normal.
Until then, whatever you choose as your chief virtualization technology, it clearly says you have chosen cloud-native as your organization’s soulmate.
The cloud-native path comes with problems of its own, which translate into plenty of yak shaving. If you’d like to explore the issues you might face, or are currently looking for solutions to them, feel free to book a discovery call with our engineering team.
That’s it for this post. Feeling exploratory? We’ve got you covered with a few awesome DevOps posts.