In this article we are going to look into Istio, one of the service mesh options for managing communication between microservices.
The name Istio means “sail” in Greek, a nod to Kubernetes (Greek for “helmsman”), and it relates directly to steering the communication between microservices.
Before we get into Istio, it is better to first understand what a service mesh is and why it is needed in the context of Kubernetes.
Typical Kubernetes Inter Microservice Communication
As depicted in the diagram below, a typical Kubernetes deployment mainly needs the following components to enable communication between microservices.
- Ingress Controller — this exposes the service outside of the cluster network.
- Service — this load balances across multiple replicas and exposes the deployed microservices to the internal network.
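To make these components concrete, here is a minimal sketch of a Service and an Ingress resource. The names (`sample-app`, `sample.local`) and the port numbers are placeholders chosen for illustration, not from the original deployment:

```yaml
# Service: load balances across the pods selected by "app: sample-app"
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  selector:
    app: sample-app
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # port the container listens on
---
# Ingress: routes external HTTP traffic to the Service above
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app
spec:
  rules:
    - host: sample.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-app
                port:
                  number: 80
```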
Given that setup, communication between services happens over plain HTTP, and even though we can set up HTTPS between them, it is an additional overhead for the development team to consider.
In summary, non-functional requirements such as communication configuration, security configuration, retry logic, metrics, and tracing between microservices are all implemented alongside each microservice, and from a deployment perspective each one runs as a single Pod, like below:
What is a Service Mesh and why do we need it?
Service Mesh is a tool for adding observability, security, and reliability features to “cloud native” applications by transparently inserting this functionality at the platform layer rather than the application layer.
Note: Quoted from https://linkerd.io/what-is-a-service-mesh/.
The main purpose of introducing a service mesh is to separate the non-functional aspects of a microservice from the functional ones, so that developers can focus only on the business logic while infrastructure-level, non-functional attributes such as observability, reliability, and security are handled by the mesh.
Istio Architecture and Message Flow
Note: For more information on the architecture and message flow, please refer to https://istio.io/latest/docs/ops/deployment/architecture/.
Step-by-Step Guide to Implementing Istio on top of Kubernetes
1. For this demonstration I’m using a minikube Kubernetes cluster; follow the document below to get it installed: https://minikube.sigs.k8s.io/docs/start/
2. Setting up istioctl. First, download the Istio distribution to the machine where you set up kubectl while configuring the minikube cluster. You can use the command below to download the distribution.
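A minimal sketch, assuming `curl` is available on the machine; it uses Istio’s official download script and adds istioctl to the PATH for the current shell session:

```shell
# Download and extract the latest Istio release into the current directory
curl -L https://istio.io/downloadIstio | sh -

# Move into the extracted directory (the version number will vary)
cd istio-*

# Make istioctl available for this shell session
export PATH="$PWD/bin:$PATH"

# Check the client version without contacting a cluster
istioctl version --remote=false
```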
3. Now, using the command below, we can deploy Istio into our minikube cluster.
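As a sketch, a basic installation looks like the following; the `demo` profile is my assumption here (it is convenient for evaluation), so pick the profile that suits your environment:

```shell
# Install Istio into the cluster pointed to by the current kubectl context.
# The demo profile enables the full feature set for experimentation;
# -y skips the interactive confirmation prompt.
istioctl install --set profile=demo -y
```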
Note: the installation command accepts options such as --kubeconfig as well; refer to the help page for more information.
Also, if there is a readiness probe failure, use the command below:
istioctl install --set meshConfig.defaultConfig.holdApplicationUntilProxyStarts=true
4. Once done, execute the commands below to verify the installation.
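For example, the following commands (run against the cluster) confirm that the control plane came up; the exact pod names will vary by version:

```shell
# All control-plane pods (istiod, ingress gateway, ...) should be Running
kubectl get pods -n istio-system

# Cross-check the installed in-cluster components against the manifest
istioctl verify-install
```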
5. Now we need to deploy our sample microservice to understand how this works.
Before enabling any configuration, we will first deploy a microservice and see what happens.
Below is the sample YAML configuration I’m going to deploy in the minikube default namespace.
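A minimal sketch of such a manifest; the name `sample-app`, the `nginx` image, and the ports are placeholders for illustration, not the author’s exact configuration:

```yaml
# Deployment: runs two replicas of a simple web server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# Service: exposes the pods inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  selector:
    app: sample-app
  ports:
    - port: 80
      targetPort: 80
```

Save it as, say, `sample-app.yaml` and apply it with `kubectl apply -f sample-app.yaml`.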
After a successful apply, you can view the Pod as below using the describe pod command.
You can see here that the Envoy proxy has still not been injected, so we need to enable injection at the namespace level to achieve it.
6. Enabling istio-injection
To enable istio-injection, follow the commands below.
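Namespace-level injection is driven by a label; these are the standard commands for the `default` namespace used in this demo:

```shell
# Tell Istio's sidecar-injector webhook to add the Envoy proxy
# to every pod created in this namespace from now on
kubectl label namespace default istio-injection=enabled

# Confirm the label is present
kubectl get namespace default --show-labels
```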
7. Verifying that the Istio injection is effective.
Remove the deployment and re-deploy it.
Now when you describe the Pod, you will observe that two containers are initialised inside the single Pod.
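A sketch of that re-deploy and check, assuming the manifest was saved as `sample-app.yaml` with the label `app: sample-app` (both names are placeholders from my earlier example):

```shell
# Recreate the deployment so the injection webhook runs on pod creation
kubectl delete -f sample-app.yaml
kubectl apply -f sample-app.yaml

# List the container names in the pod; alongside the application
# container you should now also see "istio-proxy"
kubectl get pod -l app=sample-app \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```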
8. To remove Istio from the cluster, execute the below:
istioctl x uninstall --purge
That’s all for this article… and hopefully, in my next blog, I will do an end-to-end message flow hands-on covering how we can access the APIs through the Ingress Gateways and trace the messages.