The DevOps Roadmap: Virtualization

08.02.2021 | Engineering

I am not sure whether this comes as a surprise to you, but you can run two or more operating systems, the same or different ones, simultaneously on the same piece of hardware. Now you might start picturing the back of your PC and wondering what connector you'd need to buy to pull this off. Or one of those laptops where the number of ports is inversely proportional to the cost? 🍎

You don't need to buy any connector or open your cabinet to do this. We have a tool for this tremendous task, through a process called virtualization. This post dives into the core technology that drives Amazon's most profitable arm, its cloud computing division, which brings in upwards of 13 billion USD in net profit.

Excited?

Let’s dive in!

What is Virtualization?

Virtualization makes a single physical machine act like multiple machines, saving you cost and physical space. We commonly use virtualization to run a different operating system on the same machine by sharing hardware resources between the running instances. The instances behave as stand-alone units with their own libraries, operating system, programs, or any other customization you might need. These stand-alone systems are separate from your primary system, and changes inside them aren't reflected on the host.

This simple technology provides enormous value to companies of all sizes, as we'll see in the following sections.

How does AWS use virtualization?

Cloud computing is on the rise. Companies everywhere are trying to migrate their on-premises systems to the cloud. The deal is profitable for both the cloud providers and the customers doing the migrating.

Why migration?

It's cost-effective and can save upwards of 30% on operational costs!

Now, the cloud works in a very fragmented way. By fragments, I mean that only the resources you need are allocated to you. If that's confusing, imagine that you, as a provider, have a hard disk of 100 GB, and two customers need 50 GB each. The platform divides the same disk virtually, without any physical partition.
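The split described above can be sketched as a toy allocator. This is purely illustrative: the class and names below are hypothetical, and real providers use thin provisioning inside the storage stack rather than a bookkeeping object like this.

```python
# Toy sketch of how a provider might carve one physical disk into
# virtual slices without physical partitions. All names here are
# hypothetical, for illustration only.

class VirtualDisk:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.allocations = {}  # tenant name -> GB reserved

    def allocate(self, tenant, gb):
        """Reserve a virtual slice for a tenant, capped by physical capacity."""
        used = sum(self.allocations.values())
        if used + gb > self.physical_gb:
            raise ValueError("not enough physical capacity")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + gb

    def free_space(self):
        return self.physical_gb - sum(self.allocations.values())

# One 100 GB disk, two tenants asking for 50 GB each.
disk = VirtualDisk(100)
disk.allocate("alice", 50)
disk.allocate("bob", 50)
print(disk.free_space())  # -> 0
```

Both tenants see "their" 50 GB, but it is the same physical disk underneath, which is exactly the trick hypervisors apply to CPU, memory, and network as well.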

Amazon Web Services (AWS), Amazon's cloud computing arm, follows the same allocation strategy, but on a far more massive scale. And it's not just Amazon: every cloud provider virtualizes its resources, helping them cut costs and increase their bottom line.

Cloud providers also offer 'spot instances,' which are allocation on steroids. Spot instances sell off computing power that host machines are not currently using, at a far cheaper rate. That's a lot of cost saving achieved just by using a hypervisor.
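As a rough illustration, requesting a spot instance with the AWS CLI looks something like the sketch below. The AMI ID, instance type, and key name are placeholder values, not anything from a real account:

```shell
# Launch an EC2 instance on the spot market instead of on-demand.
# The AMI ID and key name below are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key \
  --instance-market-options 'MarketType=spot'
```

If the spare capacity you are borrowing is reclaimed, AWS can interrupt the instance, which is why spot pricing is so much cheaper than on-demand.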

What are Hypervisors?

A hypervisor is a piece of software that runs on top of the physical server or host, pulling in resources from the physical hardware and allocating them to virtual machines. Also called virtual machine monitors (VMMs), hypervisors manage the sharing of resources, from disks to networks, and ensure that each virtual machine is independent of the host and can run any operating system you desire.

Bare-Metal Hypervisors

Bare-metal hypervisors are installed directly on top of the servers to be virtualized, sometimes embedded in the firmware. They are more secure, have lower latency, and are faster than their counterparts because there is no host OS underneath them. These type-1 hypervisors hold the majority of the market share because all the major cloud providers use them to implement virtualization in their server farms.

Examples of popular bare-metal hypervisors are Microsoft Hyper-V, Citrix XenServer, and VMware ESXi.

Types of Hypervisors. Image Source: moneyvault

Hosted Hypervisors

These type-2 hypervisors run on top of a layer of host operating system, on which the allocation of resources occurs. Since one system is running inside another system, they are called hosted. They are comparatively less common and are mostly used for end-user virtualization. Compared to their bare-metal counterparts, they are very easy to install and are low-cost or free.

Examples of popular hosted hypervisors are VMware Fusion, Oracle VirtualBox, Solaris Zones, and VMware Workstation.
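To get a feel for a hosted hypervisor, here is a rough sketch of creating and booting a VM with VirtualBox's command-line tool. The VM name, sizes, and ISO path are placeholder values you would swap for your own:

```shell
# Create a VM, give it memory, CPUs, and a virtual disk, then boot it.
# "devtest" and the ISO path are placeholder values.
VBoxManage createvm --name devtest --ostype Ubuntu_64 --register
VBoxManage modifyvm devtest --memory 2048 --cpus 2
VBoxManage createmedium disk --filename devtest.vdi --size 20480
VBoxManage storagectl devtest --name SATA --add sata
VBoxManage storageattach devtest --storagectl SATA --port 0 \
  --device 0 --type hdd --medium devtest.vdi
VBoxManage storageattach devtest --storagectl SATA --port 1 \
  --device 0 --type dvddrive --medium ubuntu.iso
VBoxManage startvm devtest --type headless
```

Everything the GUI does is available through VBoxManage, which makes it handy for scripting dev-test environments.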

What advantages does virtualization provide?

Using a hypervisor to spin up multiple systems on the same infrastructure provides a variety of advantages. Some of them are:

Cost Savings

Using a single piece of infrastructure to run different instances consumes less electricity, reduces maintenance, and, most importantly, saves physical space. All of these savings translate to lower costs, and cloud providers love that.

Agility

It's way faster to create a virtual machine than to assemble a new physical machine. The speed and agility are unmatched: if you need an instance for a dev-test scenario, it's up in moments. You don't need to provision a new system and wait until it's assembled. With virtualization, you can run different operating systems and test your applications on all of them quickly and efficiently.

Downtime

Virtualization reduces downtime because you can transfer VMs from one hypervisor to another almost instantaneously. In a mayday scenario, spin up a new server and move all the VMs over to its hypervisor.
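On KVM hosts managed by libvirt, that transfer can be a one-line live migration. A sketch, where the VM name and destination URI are placeholders:

```shell
# Live-migrate a running VM to another KVM host over SSH, without
# shutting it down. "myvm" and "backup-host" are placeholder names.
virsh migrate --live myvm qemu+ssh://backup-host/system
```

The guest keeps running while its memory is copied across, so users barely notice the move.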

Conclusion

We quickly went through virtualization and how it is implemented in millions of servers and PCs across the globe. Feel free to check out VirtualBox on your PC and run a different OS by following this tutorial.

Happy Virtualizing!
