Istio service mesh step by step
Today we will use istioctl to deploy the Istio service mesh on our Kubernetes cluster.
We will demonstrate the core functionality by creating an authorization policy that restricts network traffic to a specific namespace.
If you don’t have a Kubernetes cluster yet, you can learn here how to install RKE1 or RKE2 clusters.
Introduction to Istio Service Mesh
Istio is an open-source service mesh platform designed to simplify the management of microservices in Kubernetes environments. It provides a powerful set of tools and features that address common challenges in modern, distributed applications.
What is a Service Mesh?
A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud-native application.
Key Features of Istio
Traffic Management: Istio enables fine-grained control over traffic flows and API calls between services. It provides features like load balancing, circuit breakers, and fault injection.
Security: Istio offers strong identity-based authentication, authorization, and encryption of service communications.
Observability: With Istio, you get automatic metrics, logs, and traces for all traffic within your service mesh, providing deep insights into your application’s behavior.
Platform Support: While primarily designed for Kubernetes, Istio can integrate with other platforms and infrastructure as well.
Why Use Istio?
Simplified Microservices Management: Istio abstracts away the complexities of managing microservice architectures.
Enhanced Security: It provides built-in security features without requiring changes to application code.
Improved Visibility: Istio offers detailed insights into service behavior, making it easier to optimize and troubleshoot your applications.
Traffic Control: It allows for sophisticated traffic management strategies like A/B testing, canary rollouts, and fault injection.
In the following sections of this tutorial, we’ll walk through the process of deploying Istio on a Kubernetes cluster and explore its key features in detail.
First, let’s get the Istio package locally. You can select from several releases: https://github.com/istio/istio/releases
Today we will be using 1.22.3.
mkdir istio && cd istio
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.22.3 TARGET_ARCH=x86_64 sh -
To install Istio we will use its command line tool “istioctl”, which is located in the bin folder of the release, so let’s add it to PATH.
(This only applies to the current session; if you want it permanently, you can add it to your .bashrc file.)
export PATH="$PATH:/home/rke/stmp/istio/istio-1.22.3/bin"
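To make the change permanent, you can append the same export to your .bashrc; a small sketch, assuming the release was unpacked under the same directory as above:

```shell
# append the PATH change to ~/.bashrc so it survives new sessions
# (the directory below assumes the unpack location used earlier in this tutorial)
echo 'export PATH="$PATH:/home/rke/stmp/istio/istio-1.22.3/bin"' >> ~/.bashrc
# reload the file so the current session picks it up as well
. ~/.bashrc
```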
The next step, obviously, is to install Istio on our Kubernetes cluster.
As mentioned above, we will use “istioctl”: a single install command deploys the mesh on Kubernetes, and there are several built-in configuration profiles you can choose from.
Istio is deployed into its own namespace, istio-system.
istioctl install --set profile=default -y
To verify the installation you can run the following command.
istioctl verify-install
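If you need more customization than `--set` flags comfortably allow, istioctl also accepts an IstioOperator file. This is a minimal sketch; the file name and the accessLogFile tweak are just examples, not something this tutorial depends on:

```yaml
# istio-operator.yaml (hypothetical example file name)
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout   # turn on Envoy access logging
```

You would then install with `istioctl install -f istio-operator.yaml -y` instead of the `--set` form above.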
Before we proceed to deploying add-ons and a POC application, I want to demonstrate the core functionality:
how you can control the traffic flow inside the cluster.
This is the scenario:
1. Deploy three pods across two namespaces: one pod in “app-1-namespace” and two pods in “app-2-namespace”.
2. Try to connect to app number 2 in “app-2-namespace” from a container in “app-1-namespace”.
3. Create an authorization policy that allows connections to app number 2 in “app-2-namespace” only from “app-2-namespace”, and drops any other connection.
This is the deployment file that creates the needed resources. As you can see, Envoy sidecar injection is enabled on “app-2-namespace” via the istio-injection label. We will apply it with the commands shown right after the file.
### this deploys a pod in the "app-1-namespace" namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: app-1-namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  namespace: app-1-namespace
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ubuntu:latest
        command: ["/bin/sh", "-c", "apt-get update && apt-get install -y nginx curl && echo 'Setup complete' && nginx -g 'daemon off;'"]
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-1
  namespace: app-1-namespace
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
### this deploys the first pod in the "app-2-namespace" namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: app-2-namespace
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  namespace: app-2-namespace
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ubuntu:latest
        command: ["/bin/sh", "-c", "apt-get update && apt-get install -y nginx curl && echo 'Setup complete' && nginx -g 'daemon off;'"]
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-2
  namespace: app-2-namespace
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
### this deploys the second pod in the "app-2-namespace" namespace. (no Namespace object here, it already exists.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-3
  namespace: app-2-namespace
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ubuntu:latest
        command: ["/bin/sh", "-c", "apt-get update && apt-get install -y nginx curl && echo 'Setup complete' && nginx -g 'daemon off;'"]
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-3
  namespace: app-2-namespace
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
You can deploy that and take a look at what was created with the following commands.
k apply -f deployment-2-ns-3-nginx-pods.yaml && \
k get all -n app-1-namespace && \
k get all -n app-2-namespace
Now let’s try to communicate with the service in namespace “app-2-namespace” from a pod running in “app-1-namespace”.
Navigate to Rancher, open a shell to the pod in “app-1-namespace”, and run a curl to the service in “app-2-namespace”.
As you can see, the communication works.
curl nginx-service-2.app-2-namespace.svc.rke.rancher
Now let’s create an “AuthorizationPolicy” to drop all connections that do not originate from the same namespace (app-2-namespace).
Create a new file “authorizationpolicy.yaml” and apply it.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: namespace-restriction
  namespace: app-2-namespace
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["app-2-namespace"]
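This works because, once an ALLOW policy applies to a workload, any request that matches no ALLOW rule is rejected. If you prefer to state the rejection explicitly, the same intent can be sketched as a DENY policy using the notNamespaces field (the policy name here is just an example):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: namespace-restriction-deny   # hypothetical name
  namespace: app-2-namespace
spec:
  action: DENY
  rules:
  - from:
    - source:
        notNamespaces: ["app-2-namespace"]
```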
Now let’s try to communicate again after applying the authorization policy.
As you can see, the request no longer goes through.
Communication is allowed only from within the same namespace (app-2-namespace).
And indeed, if we run the same curl from a pod inside “app-2-namespace”, it works perfectly.
Now that we have deployed Istio and demonstrated its core functionality, we need some kind of microservices application for testing; luckily, Google has us covered :>
This demo application will fit our needs: https://github.com/GoogleCloudPlatform/microservices-demo
Clone the repository; inside the microservices-demo/release/ folder you will find the application manifest file. Create a new namespace for it and apply it.
git clone https://github.com/GoogleCloudPlatform/microservices-demo
k create ns google-demo-app
k apply -f microservices-demo/release/kubernetes-manifests.yaml -n google-demo-app
Now, as you may notice, there is only one container in each pod, whereas, as we said earlier, Istio is supposed to inject an additional Envoy proxy container into each pod. So what’s going on?
The reason the proxy is not injected is that our namespace does not carry the “istio-injection=enabled” label, and it should.
So let’s enable proxy injection by adding the label to our namespace; the proxy sidecars will be injected after a redeployment of the app.
First let’s take a look at the namespace labels, and add the relevant label.
k get ns google-demo-app --show-labels
k label ns google-demo-app istio-injection=enabled
k get ns google-demo-app --show-labels
If you list the pods in the google-demo-app namespace now, you will still see only one container per pod; we have to redeploy for the change to take effect.
kubectl rollout restart deployment -n google-demo-app
Now that the application is deployed, let’s add some integrations that will be handy for telemetry, tracing, and monitoring the application.
In the samples/addons folder you will find Kubernetes config files for services like Grafana, Kiali, Prometheus, etc.
We will apply all those services in bulk, but first we will change their Service type to LoadBalancer so we can reach them more conveniently.
Open istio-1.22.3/samples/addons/kiali.yaml and change the Service type to LoadBalancer; you can specify a static IP if you want.
Do the same for Grafana and Jaeger, each with a different IP, obviously.
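For reference, the relevant part of each addon’s Service should look roughly like this after the edit. The loadBalancerIP field is optional and only honored if your load-balancer implementation supports it; the address shown is just the one we use for Kiali later on:

```yaml
# excerpt of the Service section in samples/addons/kiali.yaml after the edit
spec:
  type: LoadBalancer               # was ClusterIP
  loadBalancerIP: 192.168.66.139   # optional static IP
```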
Very well, now let’s deploy it all.
k apply -f /home/rke/stmp/istio/istio-1.22.3/samples/addons/ -n istio-system
Now let’s take a look at the google-demo-app we deployed in Kiali.
Go to the Kiali server at 192.168.66.139; if we open the traffic graph of the google-demo-app namespace, we will see an interactive graph of the application.
I suggest you explore it to understand Kiali better.
Next, let’s take a look at what’s in Grafana.
Navigate to Grafana’s IP and take a look at the dashboards.
Just like with Kiali and Grafana, let’s connect to the Jaeger UI.
To uninstall Istio from the cluster, use the following command.
istioctl uninstall --purge
To conclude today’s session: we installed the Istio service mesh on a Kubernetes cluster, demonstrated core functionality by controlling traffic inside the cluster, and then added some tools that will make our lives easier whenever we want to get to know our application better.