Istio is an open platform that provides a uniform way to integrate microservices. It also manages traffic flow across microservices, enforces policies, and aggregates telemetry data. The control plane of Istio offers an abstraction layer over the underlying cluster-management platform, such as Kubernetes.
Istio has published a new release, version 1.11.0. This major update brings feature promotions, bug fixes, new features and much more. In this article we will walk through all of those changes, along with how Istio functions.
Changes with v1.11.0
The new release brings changes across several areas of Istio. We will look at the changes in traffic management, security, telemetry, installation and istioctl.
The new release promotes CNI to beta. Resolution of headless services via in-agent DNS has been improved to include endpoints from other clusters on the same network. AUTO_PASSTHROUGH gateways no longer require configuring the ISTIO_META_ROUTER_MODE environment variable on the gateway deployment; it is now detected automatically. Version 1.11.0 also improves the CNI network plugin to send its logs to the CNI DaemonSet, so CNI logs can be viewed with kubectl logs instead of digging through kubelet logs. Lastly, service conflict resolution has been improved to favour Kubernetes Services over ServiceEntries with the same hostname.
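With logs routed to the CNI DaemonSet, viewing them becomes a plain kubectl operation. A minimal sketch, assuming a default CNI install where the DaemonSet is named istio-cni-node and runs in kube-system (both may differ in your cluster):

```shell
# Tail CNI plugin logs from the DaemonSet pods
# (label and namespace assume a default istio-cni install)
kubectl logs -l k8s-app=istio-cni-node -n kube-system --tail=50
```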
The new release updates the CNI install container, combining the install and race-condition repair containers into a single container. It also updates the Istiod debug interface, which is now accessible only over localhost or with proper authentication (mTLS or JWT). The recommended way to access the debug interface is through istioctl experimental internal-debug, which handles this automatically.
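The authenticated debug interface can be reached through istioctl rather than hitting istiod directly; a sketch against a running mesh (syncz is shown as one example of the standard debug endpoints):

```shell
# Query istiod's /debug/syncz endpoint through the authenticated helper
istioctl experimental internal-debug syncz
```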
The release also adds a number of new features and missing pieces. Firstly, a shutdownDuration flag has been added to pilot-discovery so that users can configure how long istiod takes to terminate gracefully; the default value is 10s. The devs have also added an environment variable, PILOT_STATUS_UPDATE_INTERVAL, which sets the interval for updating the XDS distribution status; its default value is 500ms. There is also a new HTTP endpoint, localhost:15004/debug/&lt;typeurl&gt;, on the Istio sidecar agent. GET requests to that URL are resolved by sending an xDS discovery request to istiod. We can also disable this through the Istio Operator configuration.
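As an illustration, the status-update interval can be tuned through an IstioOperator overlay that sets the environment variable on istiod; a hedged sketch (the component layout assumes a default istiod install, and the value shown is the documented default):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
          # Interval for XDS distribution status updates (default 500ms)
          - name: PILOT_STATUS_UPDATE_INTERVAL
            value: "500ms"
```

The shutdownDuration knob, by contrast, is a command-line flag passed to the pilot-discovery binary itself.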
Version 1.11.0 also adds support for overriding the locality of the WorkloadGroup template in an auto-registered WorkloadEntry. Locality overrides can be passed through the Envoy bootstrap configuration. There is also a new metric for tracking the distribution of sizes of the configuration resources that istiod pushes.
A valuable addition is experimental support for the Kubernetes Multi-Cluster Services (MCS) host (clusterset.local). This feature is off by default, but we can enable it by setting the ENABLE_MCS_SERVICE_DISCOVERY environment variable on our Istiod deployment. When enabled, Istio will include the MCS host as a domain in the service's HTTP routes, and will also support the MCS host during DNS lookups. For now, the MCS host is just an alias for cluster.local and resolves to the same service IP. Future work will give the MCS host a separate IP, as the MCS spec defines.
Lastly, there is further experimental support for controlling service endpoint discoverability with Kubernetes Multi-Cluster Services (MCS). By default this feature is off, but we can enable it by setting the ENABLE_MCS_SERVICE_DISCOVERY flag in Istio. When enabled, Istio will make service endpoints discoverable only from within the same cluster by default. To make the service endpoints in a cluster discoverable throughout the mesh, we must create a ServiceExport custom resource in the same cluster as the service endpoints. We can automate this process by enabling the Istio flag ENABLE_MCS_AUTOEXPORT; with it enabled, Istio will automatically create a ServiceExport in every cluster for each service.
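Exporting a service by hand means creating an MCS ServiceExport next to it; a minimal sketch, assuming the MCS v1alpha1 API group and a hypothetical service named reviews in the default namespace:

```yaml
# Exporting the (hypothetical) reviews service to the rest of the mesh.
# A ServiceExport carries no spec; its name and namespace must match the Service.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: reviews
  namespace: default
```

With ENABLE_MCS_AUTOEXPORT set, Istio creates resources of exactly this shape on your behalf.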
Several fixes in the traffic management section will be a big help to users. Firstly, an issue has been fixed so that CoreDump can be enabled using the sidecar annotation. The new release also fixed an issue where traffic for both inbound and outbound applications could not be intercepted when using the pod IP in TPROXY interception mode. The devs have also fixed an issue where alternate subject names specified in a ServiceEntry were not considered while building the TLS context.
The new release fixes a bug where multiple gateways on the same port with PASSTHROUGH mode were not working correctly. It also fixes a bug where Istio config generation failed when the sum of endpoint weights exceeded the uint32 maximum. Smart DNS now supports Istio CNI.
Another fix addresses a bug in Kubernetes Ingress that caused paths with prefixes of the form /foo to match the route /foo/ but not the route /foo. The new release also fixed an issue that allowed a ServiceEntry to act as an instance in other namespaces, and another that caused proxies to send Transfer-Encoding headers with 1xx and 204 responses.
There is also a fix to the reconciliation logic in the validation webhook controller that rate-limits retries in the loop, which drastically reduces churn (and generated logs) in cases of misconfiguration. Lastly, the generated routing configuration has been optimized to merge virtual hosts with the same routing configuration, improving performance for VirtualServices that define multiple hostnames.
The only security change in Istio 1.11.0 is added validation for the jwks field in the request authentication policy.
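The jwks field that is now validated lives in a RequestAuthentication policy's jwtRules; a minimal sketch, with the issuer and key material as placeholders:

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: default
spec:
  jwtRules:
    - issuer: "https://example.com"  # placeholder issuer
      # Inline JWKS document; 1.11 validates that this is well-formed
      jwks: |
        { "keys": [ { "kty": "RSA", "kid": "example-kid", "n": "...", "e": "AQAB" } ] }
```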
The new release updates the Prometheus telemetry behaviour for inbound traffic to disable host header fallback by default. This update prevents traffic coming from out-of-mesh locations from polluting the destination_service dimension in metrics with junk data (and exploding metrics cardinality). With this change, users relying on host headers to label the destination service for inbound traffic from out-of-mesh workloads may see that traffic marked as unknown. The old behaviour can be restored by modifying the Istio configuration to remove the disable_host_header_fallback: true setting.
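Where that setting lives depends on how metrics are customized in your install. As a hedged sketch, an IstioOperator values override of the Prometheus stats configuration could look like the following (configOverride and the inboundSidecar key are assumptions based on the telemetry v2 customization mechanism):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    telemetry:
      v2:
        prometheus:
          configOverride:
            inboundSidecar:
              # Re-enable host header fallback for inbound metrics
              # (accepts the cardinality risk described above)
              disable_host_header_fallback: false
```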
For telemetry, there is added support for the Apache SkyWalking tracer; we can now run the istioctl dashboard skywalking command to view the SkyWalking dashboard UI. There is also a new metric in istiod to report server uptime, and another new metric (istiod_managed_clusters) to track the number of clusters managed by an istiod instance.
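Both new metrics are exposed alongside istiod's other Prometheus metrics; an illustrative check against a running cluster (port 15014 is istiod's standard monitoring port, and we grep for the metric the release notes name explicitly):

```shell
# Port-forward istiod's monitoring port and look for the new metric
kubectl -n istio-system port-forward deploy/istiod 15014 &
curl -s localhost:15014/metrics | grep istiod_managed_clusters
```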
In the installation section, the devs have promoted the external control plane to beta, improving the installation of Istio on remote clusters using an external control plane. The istiodRemote component now includes all of the resources needed for either a basic remote cluster or a config cluster. Container image sizes have also improved, with each image shrinking by up to 50 MB; as part of this, the linux-tools-generic package and its dependencies (including Python) are no longer installed.
The new version also updates the base image versions built on debian10 (for distroless) and updates the Jaeger addon to version 1.22.
The only fix in the installation section corrects the upgrade and downgrade messages for the control plane.
The installation section also got the only removal in this update: the devs have removed the empty caBundle default value from the chart to allow a GitOps approach.
In the case of istioctl, the istioctl experimental revision tag command group has been promoted to istioctl tag.
The new release adds a --workloadIP flag to istioctl x workload entry configure, which sets the workload IP that the sidecar proxy uses to auto-register a WorkloadEntry. This flag is usually required when VM workloads aren't on the same network as the primary cluster they register with. There is also a new --dry-run flag for istioctl x uninstall.
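As an illustration of the two flags against a running mesh (the input file, output directory and address below are placeholders):

```shell
# Generate VM bootstrap config, pinning the workload IP used for auto-registration
# (workloadgroup.yaml and 10.0.0.12 are placeholders)
istioctl x workload entry configure -f workloadgroup.yaml -o ./vm-files --workloadIP 10.0.0.12

# Preview what an uninstall would remove without touching the cluster
istioctl x uninstall --purge --dry-run
```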
istioctl proxy-config bootstrap now has a short output option (-o short) that shows a summary of the Istio and Envoy versions. The devs have added a new analyzer that checks for image: auto in Pods and Deployments that will not get an injection. There is also added support for auto-completion of the namespace in istioctl, and istioctl now supports completion for Kubernetes pods and services. Lastly, there is a new --vklog option to enable verbose logging in client-go.
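A quick illustration of the new output and logging options against a running mesh (the pod and namespace names are placeholders):

```shell
# Summarize Istio and Envoy versions from a sidecar's bootstrap config
istioctl proxy-config bootstrap <pod-name>.<namespace> -o short

# Turn up client-go verbosity while analyzing the mesh
istioctl analyze --vklog 4
```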
The only fix for istioctl is that the user-agent in all Istio binaries now includes the version.
We have now covered all the changes that arrived with version 1.11.0 of Istio. If you want to try the new and improved Istio yourself, you can download it from the official Istio release page.