The rise of the Istio service mesh

Prakash Waikar
Mar 17, 2020 · 11 min read


Follow me: @Medium @LinkedIn

Getting Started With Istio:

The last few years have brought about immense changes in the software architecture landscape. A major shift that we have all witnessed is the breakdown of large monolithic and coarse-grained applications into fine-grained deployment units called microservices, communicating predominantly by way of synchronous REST as well as asynchronous events and message passing. The benefits of this architecture are numerous, but the drawbacks are equally evident.

With the increased adoption of microservices, the industry has been steadily coming up with patterns and best-practices that have made the entire experience more palatable. Resiliency Patterns, Service Discovery, Container Orchestration, Canary Releases, Observability Patterns, BFF, API Gateway… These are some of the concepts that practitioners will employ to build more robust and sustainable distributed systems. But these concepts are just that — abstract notions and patterns — they require someone to implement them somewhere in the system. More often than not, that ‘someone’ is you and ‘somewhere’ is everywhere.

Istio Introduction

Istio is a service mesh — an application-aware infrastructure layer for facilitating service-to-service communications. By ‘application-aware’, it is meant that the service mesh understands, to some degree, the nature of service communications and can intervene in a value-added manner. For example, a service mesh can implement resiliency patterns (retries, circuit breakers), alter the traffic flow (shape the traffic, affect routing behavior, facilitate canary releases), as well as add a whole host of comprehensive security controls. Being intrinsically aware of the traffic passing between services, Istio can also provide fine-grained instrumentation and telemetry insights, providing a degree of observability to an otherwise opaque distributed system.

What is a service mesh?

Istio addresses the challenges developers and operators face as monolithic applications transition towards a distributed microservice architecture. To see how, it helps to take a more detailed look at Istio's service mesh.

The term service mesh is used to describe the network of microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.

Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications.

Why use Istio?

Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, with few or no code changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, then configure and manage Istio using its control plane functionality, which includes:

· Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.

· Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.

· A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.

· Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.

· Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

Istio is designed for extensibility and meets diverse deployment needs.
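For example, the most common way to add the sidecar is to label a namespace for automatic injection, so that every new pod created in it gets an Envoy proxy. This tutorial later uses manual injection with istioctl instead, so treat the following as an optional alternative:

kubectl label namespace default istio-injection=enabled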

Let’s check out the core features of Istio:

In a world without Istio, one service makes direct requests to another, and in case of failure the service has to handle it itself: retrying, timing out, opening a circuit breaker, and so on.

To resolve this, Istio provides an ingenious solution: it stays completely separate from the services and acts only by intercepting all network communication. In doing so, it can implement:

· Fault Tolerance: Using response status codes, it understands when a request has failed and retries it (a configuration sketch follows this list).

· Canary Rollouts: Forwards only a specified percentage of requests to a new version of a service.

· Monitoring and Metrics: Records, for example, the time it took for a service to respond.

· Tracing and Observability: Adds special headers to every request and traces them through the cluster.

· Security: Extracts the JWT token, then authenticates and authorizes users.
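As a taste of what the fault-tolerance piece looks like in practice, here is a minimal VirtualService sketch with retries and a timeout. It assumes a service named website (the demo service deployed later in this article); adjust the host and the values for your own services:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: website-resilience
spec:
  hosts:
  - website
  http:
  - route:
    - destination:
        host: website
    # Give up on the whole request after 5 seconds
    timeout: 5s
    # Retry up to 3 times on 5xx responses or connection failures
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure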

Now we'll take a look at the architecture of an Istio service mesh.

An Istio service mesh consists of two parts: the data plane and the control plane.

· Data plane: composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices, together with Mixer (a general-purpose policy and telemetry hub).

· Control plane: manages and configures the proxies to route traffic. It also configures Mixer to enforce policies and collect telemetry.

· Pilot — Responsible for configuring the Envoy proxies and Mixer at runtime.

· Proxy / Envoy — Runs as a container inside each pod. A sidecar proxy per microservice handles ingress/egress traffic between services in the cluster and from a service to external services. Together, the proxies form a secure microservice mesh providing a rich set of functions: discovery, rich layer-7 routing, circuit breakers, policy enforcement, and telemetry recording/reporting.

· Mixer — Creates a portability layer on top of infrastructure backends. Enforces policies such as ACLs, rate limits, and quotas, and handles authentication, request tracing, and telemetry collection at an infrastructure level.

· Citadel / Istio CA — Secures service-to-service communication over TLS, providing a key-management system to automate key and certificate generation, distribution, rotation, and revocation.

· Ingress/Egress — Configures path-based routing for inbound and outbound external traffic.

· Control Plane API — The underlying orchestrator, such as Kubernetes or HashiCorp Nomad.
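Once Istio is installed (next section) and your workloads have sidecars, you can see the Pilot-to-Envoy relationship in practice with the proxy-status command, which reports whether each sidecar's configuration is in sync with the control plane:

istioctl proxy-status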

Download Istio

1. Go to the Istio release page to download the installation file for your OS, or download and extract the latest release automatically (Linux or macOS):

curl -L https://istio.io/downloadIstio | sh -

2. Move to the Istio package directory.

$ cd istio-1.5.0

The istioctl client binary is in the bin/ directory.

3. Add the istioctl client to your path (Linux or macOS):

$ export PATH=$PWD/bin:$PATH
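You can verify the client is available by printing its version. With no Istio installed in a cluster yet, expect it to report only the client version and possibly complain about an unreachable control plane:

istioctl version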

You'll need a Kubernetes client config file and access to the cluster dashboard.

The installation YAML files for Kubernetes are in install/kubernetes/. Apply the demo manifest:

kubectl apply -f install/kubernetes/istio-demo.yaml

After the installation completes, you should be able to see the istio-system namespace in Kubernetes.
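For example, listing the namespaces should show istio-system alongside the defaults (the exact output depends on your cluster):

kubectl get namespaces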

You should also be able to see all the running Istio pods:

kubectl get pods -n istio-system

The installation directory also contains sample applications in samples/.

Deploy a Demo Web Service with Envoy Proxy Sidecar

Now we are finally at the fun part of the tutorial. Let's check out the routing capabilities of this service mesh. First, we will deploy three versions of a demo web service.

The source code for the article is available on GitHub.

Copy the following into a yaml file named my-websites.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-v1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website
      version: website-version-1
  template:
    metadata:
      labels:
        app: website
        version: website-version-1
    spec:
      containers:
      - name: website-version-1
        image: pkw0301/istio-images:v1
        resources:
          requests:
            cpu: 0.1
            memory: 200Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-v2
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website
      version: website-version-2
  template:
    metadata:
      labels:
        app: website
        version: website-version-2
    spec:
      containers:
      - name: website-version-2
        image: pkw0301/istio-images:v2
        resources:
          requests:
            cpu: 0.1
            memory: 200Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-v3
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website
      version: website-version-3
  template:
    metadata:
      labels:
        app: website
        version: website-version-3
    spec:
      containers:
      - name: website-version-3
        image: pkw0301/istio-images:v3
        resources:
          requests:
            cpu: 0.1
            memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: website

Note that when you want to use the Envoy sidecar with your pods, the "app" label should be present (it is used in the request-tracing feature), and "spec.ports.name" in the service definition must be named properly (http, http2, grpc, redis, mongo). Otherwise, Envoy will treat that service's traffic as plain TCP and you will not be able to use the layer-7 features with those services.

In addition, the pods must be targeted by only a single "service" in the cluster. As you can see above, the definition file has three simple deployments, each using a different version of the web service (v1/v2/v3), and a single "website" service whose selector matches the pods of all three.

Now we will add the needed Envoy proxy configuration to the pod definitions in this file using the "istioctl kube-inject" command. It produces a new yaml file, with the additional Envoy sidecar components, that is ready to be deployed by kubectl. Run:

istioctl kube-inject -f my-websites.yaml -o my-websites-with-proxy.yaml

The output file will contain extra configuration; you can inspect the "my-websites-with-proxy.yaml" file. This command took the pre-defined ConfigMap "istio-sidecar-injector" (installed earlier as part of the Istio installation) and added the needed sidecar configuration and arguments to our deployment definitions. When we deploy the new file "my-websites-with-proxy.yaml", each pod will have two containers: one for our demo application and one for the Envoy proxy. Run the creation command on the new file:

kubectl apply -f my-websites-with-proxy.yaml

You will see this output if it worked as expected:

deployment "web-v1" created
deployment "web-v2" created
deployment "web-v3" created
service "website" created

Let's inspect the pods to see that the Envoy sidecar is present:

kubectl get pods

We can see that each pod has two containers: one is the website container and the other is the Envoy proxy sidecar.
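One quick way to confirm this, assuming the app=website label from the manifest above, is to print the container names for each pod with a jsonpath query; every line should list the website container together with istio-proxy:

kubectl get pods -l app=website -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'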

Also, we can inspect the logs of the Envoy proxy by running:

kubectl logs <your pod name> -c istio-proxy

You will see a lot of output, with the last lines similar to this:

add/update cluster outbound|80|version-1|website.default.svc.cluster.local starting warming
add/update cluster outbound|80|version-2|website.default.svc.cluster.local starting warming
add/update cluster outbound|80|version-3|website.default.svc.cluster.local starting warming
warming cluster outbound|80|version-3|website.default.svc.cluster.local complete
warming cluster outbound|80|version-2|website.default.svc.cluster.local complete
warming cluster outbound|80|version-1|website.default.svc.cluster.local complete

This means that the proxy sidecar is healthy and running in that pod.

Now we need to deploy the minimal Istio configuration resources needed to route traffic to our service and pods. Save the following manifests into a file named "website-routing.yaml":

---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: website-gateway
spec:
  selector:
    # Which pods we want to expose as Istio router
    # This label points to the default one installed from file istio-demo.yaml
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    # Here we specify which Kubernetes service names
    # we want to serve through this Gateway
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: website-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - website-gateway
  http:
  - route:
    - destination:
        host: website
        subset: version-1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: website
spec:
  host: website
  subsets:
  - name: version-1
    labels:
      version: website-version-1
  - name: version-2
    labels:
      version: website-version-2
  - name: version-3
    labels:
      version: website-version-3

These are a Gateway, a VirtualService, and a DestinationRule. They are custom Istio resources that manage and configure the ingress behavior of the istio-ingressgateway pod. We will describe them in more depth in the next tutorial, which gets into the technical details of Istio configuration. For now, deploy these resources to be able to access our example website:

kubectl create -f website-routing.yaml
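To confirm the routing resources were created, you can list them by type (these resource kinds exist once the Istio CRDs are installed; if other CRDs in your cluster claim the same names, spell them out, e.g. virtualservices.networking.istio.io):

kubectl get gateway,virtualservice,destinationrule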

The next step is to visit our demo website. We deployed three "versions", each showing different page text and a different color, but at the moment we can reach only version 1 through the Istio ingress. Let's visit our endpoint just to be sure there is a web service deployed.

Find your external endpoint by running:

kubectl get services istio-ingressgateway -n istio-system

Or find it by browsing to the istio-ingressgateway service in your cluster dashboard.

Visit the external endpoint by clicking it. You may see several links, because one link points to the HTTPS port and another to the HTTP port of the load balancer.
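If you prefer the command line, a common pattern is to export the gateway address and port and curl it. This assumes your cluster assigns the istio-ingressgateway service a LoadBalancer IP; on NodePort setups such as Minikube the lookup differs:

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')

curl -s http://$INGRESS_HOST:$INGRESS_PORT/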

The exact configuration that makes our "website" Kubernetes service point to only a single deployment is the Istio VirtualService we created for the website. It tells the Envoy proxy to route requests for the "website" service only to pods with the label "version: website-version-1". (You probably noticed that the manifest of the "website" service selects pods only by the "app: website" label and says nothing about which "version" label to pick, so without the Envoy routing logic the Kubernetes service itself would round-robin across all pods with the "app: website" label: versions one, two, and three.)
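To see this routing in action, send a handful of requests through the gateway (this reuses the INGRESS_HOST and INGRESS_PORT variables from the snippet above); every response should come back from version 1:

for i in $(seq 1 5); do curl -s http://$INGRESS_HOST:$INGRESS_PORT/; echo; done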

You can change the version of the website that we see by changing the following section of the VirtualService manifest and re-deploying it:

http:
- route:
  - destination:
      host: website
      subset: version-1

The "subset" is where we choose which section of the DestinationRule to route to; we will learn about these resources in depth in the next tutorial.
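After editing the subset, re-apply the routing manifest; kubectl apply updates the existing resources in place (you may see a warning about a missing last-applied annotation, since they were originally created with kubectl create):

kubectl apply -f website-routing.yaml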

Rolling out gradually

Usually, when a new version of an application needs to be tested with a small amount of traffic (a canary deployment), the vanilla Kubernetes approach is to create a second deployment that uses the new Docker image but the same pod label, so that the "service" sending traffic to that label also balances across the newly added pods from the second deployment. However, you cannot easily point, say, 10% of the traffic to the new deployment (to reach a precise 10% you would have to keep the pod replica ratio between the two deployments in line with the desired percentage, e.g. 9 "v1" pods and 1 "v2" pod, or 18 "v1" pods and 2 "v2" pods), and you cannot, for example, use an HTTP header to route requests to a particular version.

Istio solves this limitation through its flexible VirtualService configuration. For instance, if you want to split traffic 90/10 between two versions, you can do it like this:

---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: website-gateway
spec:
  selector:
    # Which pods we want to expose as Istio router
    # This label points to the default one installed from file istio-demo.yaml
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    # Here we specify which Kubernetes service names
    # we want to serve through this Gateway
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: website-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - website-gateway
  http:
  - route:
    - destination:
        host: website
        subset: version-1
      weight: 90
    - destination:
        host: website
        subset: version-2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: website
spec:
  host: website
  subsets:
  - name: version-1
    labels:
      version: website-version-1
  - name: version-2
    labels:
      version: website-version-2
  - name: version-3
    labels:
      version: website-version-3
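The same VirtualService mechanism also covers the header-based routing mentioned earlier. Here is a minimal sketch that sends requests carrying a custom header to version 2 and everything else to version 1 (the x-canary header name is just an illustration):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: website-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - website-gateway
  http:
  - match:
    - headers:
        x-canary:
          exact: "true"
    route:
    - destination:
        host: website
        subset: version-2
  - route:
    - destination:
        host: website
        subset: version-1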

Wrapping Up

We hope this tutorial provided you with a good high-level overview of Istio, how it works, and how to leverage it for more sophisticated network routing. Istio streamlines the implementation of scenarios that would otherwise require a lot more time and resources. It is a powerful technology anyone looking into service meshes should consider.


Written by Prakash Waikar

I am an IT professional with 10 years of experience. I have worked with DevOps, AWS, Azure, GCP, Kubernetes, and related technologies.
