Linkerd: Looming on Service Meshes

by | 12.09.2021 | Engineering

Microservices and service meshes have become a staple of the industry as companies realize the full potential of an independent architecture that allows for easier scaling, agile development, resilience and streamlined deployment. These services communicate through lightweight APIs, allowing users, companies and stakeholders to avoid the pitfalls of a monolithic system.

One of the latest talks of the town is the Linkerd service mesh, which boasts an ultralight palette of services for Kubernetes. But before you go about adopting this service mesh in your cluster, let’s take a deeper dive into its hits and misses. As for pronunciation, Linkerd is pronounced “Linker-DEE”.

A Top Down Look at Linkerd

Linkerd as a service mesh is designed to remove the challenges of running services by providing built-in components for critical security, observability and reliability, all as a Cloud Native Computing Foundation (CNCF) graduated project.

At the same time, it adds these features without requiring serious changes to application code, thus simplifying deployment and service-to-service communication.

Linkerd Service Mesh (Source: Linkerd)

Being an open source application released under the Apache v2 license, it’s important to dissect and understand the various components that make Linkerd such an interesting choice among the plethora of service mesh software.

Components and Functions

Linkerd possesses three basic components:

  1. A UI, comprising a web dashboard and a CLI, for inspecting and interacting with the mesh.
  2. A data plane of lightweight proxies that run alongside each service instance and handle all traffic flowing in and out of it.
  3. A control plane that configures the proxies, aggregates their telemetry, and serves the APIs behind the dashboard and CLI.
Linkerd Components Interface (Source: Container Journal)

Linkerd works quite simply with a running Kubernetes cluster: you install the CLI, then the control plane. Like Istio, Linkerd keeps its components decoupled, with each running independently of the others. Linkerd installs a transparent proxy next to each service instance, and these proxies handle all traffic that comes in and out of the services. This transparent layer forms the data plane, which sends telemetry to the control plane and receives configuration from it.
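As a sketch of that workflow, a typical installation with the Linkerd 2.x CLI looks like the following (cluster details and release versions will vary):

```shell
# Install the Linkerd CLI (downloads the latest stable release)
curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin

# Verify the cluster meets Linkerd's requirements before installing
linkerd check --pre

# Render the control plane manifests and apply them to the cluster
linkerd install | kubectl apply -f -

# Confirm the control plane came up healthy
linkerd check
```

Note how the CLI only renders Kubernetes manifests; `kubectl` does the actual installation, which makes the process easy to audit or commit to a GitOps repository.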

The control plane is composed of controller components, including a web component that backs the administrative dashboard and a metrics component; it also ships with modified versions of Prometheus and Grafana. For the data plane, Linkerd uses its own purpose-built proxy rather than a general-purpose proxy like Envoy, which keeps the per-instance footprint small.

Protocols and Languages

Linkerd 2.x currently supports HTTP/1.1, HTTP/2, gRPC, and raw TCP communication between services via its sidecar proxies.

Implementation Languages

Linkerd 2.x’s control plane is written in Go, while its data plane proxy (linkerd2-proxy) is written in Rust for performance and memory safety. It should be noted that Linkerd 1.x was originally written in Scala and ran on the JVM.

Service Mesh Sidecar Architecture (Source: Platform9)

Where Linkerd Shines

Being a service mesh written after established names like Istio and Consul, Linkerd carries forward many traditions of older mesh systems while bringing its own unique take on services.

Sidecar Injection

Sidecar proxies are added and deployed through the service mesh control plane. Users can have them injected into application pods automatically, with no changes to application code.
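As an illustration, automatic injection is enabled with a pod annotation; the `linkerd.io/inject: enabled` annotation is real Linkerd 2.x configuration, while the deployment name and image below are made-up examples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                  # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        linkerd.io/inject: enabled   # tells Linkerd's proxy injector to add a sidecar
    spec:
      containers:
        - name: app
          image: example/app:latest  # hypothetical image
```

For existing manifests, the `linkerd inject` CLI command can add this annotation for you before the manifests are applied.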

High Availability

High availability was still experimental in previous versions of Linkerd but can now be enabled for production use. In HA mode, the control plane runs multiple replicas of its critical components, making the mesh resilient to node failures and reducing the risk of control plane outages for online and cloud-based pipelines.
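A minimal sketch of enabling this mode (the `--ha` flag is part of the Linkerd 2.x CLI):

```shell
# Install the control plane with multiple replicas of critical components
linkerd install --ha | kubectl apply -f -
```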

Monitoring and Tracing Support

Linkerd currently supports Prometheus and Grafana for monitoring meshed applications out of the box. Distributed tracing is supported as well, but unlike metrics it is not automatic: it requires additional configuration and instrumentation in the application itself.
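In recent 2.x releases the monitoring stack lives in the viz extension (in versions before 2.10 it was bundled into the core install); a sketch of bringing it up, assuming a current release:

```shell
# Install the on-cluster metrics stack (Prometheus, Grafana, dashboard)
linkerd viz install | kubectl apply -f -

# Open the web dashboard in a browser
linkerd viz dashboard &

# Inspect live golden metrics for workloads from the CLI
linkerd viz stat deployments -n default
```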

…And Everything Else

But these are just some of Linkerd’s known merits. While Istio can be quite difficult to set up in a cluster, Linkerd requires minimal configuration, works out of the box, and can be scaled horizontally with ease.

Apart from the protocols and languages seen above, it also supports mutual TLS for meshed traffic across the application. The service mesh distributes traffic using modern load-balancing algorithms, allowing it to route requests dynamically and shift traffic as required.

Diagnosing the root cause of failures is also easier, since the system supports distributed tracing and aligns well with present-day microservice architectures. Users also get full visibility into the state of the system through dashboards showing real-time performance, including the bundled Prometheus and Grafana integrations.

Where Linkerd Falls Short and Final Thoughts 🥱

Criticism of Linkerd 1.x was usually aimed at its comparatively large memory footprint, a consequence of running on the JVM. This prompted the community to shift focus to the development of Conduit, a lightweight service mesh built specifically for Kubernetes and written in Rust and Go. Conduit was later folded back into the main project, which was relaunched as Linkerd 2.0 in July 2018.

Users can still face telemetry rate limiting when hosting many applications, which creates the need for additional tooling to deal with overload. Linkerd 1.x also ran on a per-host deployment model, which introduces a common failure mode: a single proxy outage can affect multiple services. Its resource requirements remained relatively high throughout the 1.x line.

For users planning to apply Linkerd 1.x to their applications, running it as a sidecar alongside every pod is possible but will burden the system with heavy resource usage; deploying it per node is usually the better trade-off. Linkerd 2.x, by contrast, is designed around the sidecar-per-pod model, which its lightweight Rust proxy makes practical.

Ideally, each node running Linkerd should host one or more instances of every service, so that all nodes in the network can communicate with each other. Where that is impractical, a distributed datastore such as Cassandra (or DataStax) can help avoid direct service-to-service communication across neighbouring nodes.

As a general piece of advice, Linkerd proxies should only be allowed to talk to other Linkerd proxies. Additionally, users should create alerts based on the Prometheus and Grafana metrics that the proxies expose for running applications, and it is best to start by meshing a handful of microservices before injecting proxies across the entire network.
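As a sketch of such an alert: the `response_total` counter and its `classification` and `direction` labels are real Linkerd proxy metrics, while the rule name, namespace, and 95% threshold below are illustrative assumptions (this also presumes the Prometheus Operator’s `PrometheusRule` CRD is installed):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: linkerd-success-rate       # hypothetical rule name
  namespace: linkerd-viz           # hypothetical namespace
spec:
  groups:
    - name: linkerd.rules
      rules:
        - alert: LowSuccessRate
          # Fire when a deployment's inbound success rate over 5m drops below 95%
          expr: |
            sum(rate(response_total{classification="success", direction="inbound"}[5m])) by (deployment)
            /
            sum(rate(response_total{direction="inbound"}[5m])) by (deployment)
            < 0.95
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Success rate below 95% for {{ $labels.deployment }}"
```

Starting with an alert like this on a few meshed services gives early feedback before the whole network is injected.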

Follow back on the blog to learn more about upcoming technologies and how to implement the best strategies for your projects.

PS: We admire the work of the developer community. Do connect with the team and give us a shoutout on social media. Feel free to reach out to us if you want us to help you with a service mesh implementation 😉

Happy Learning!

