This article is about deploying WSO2 products on Kubernetes, a container orchestration engine, with a brief explanation of the steps and of how WSO2 and Kubernetes work together.
As this is a local setup, I'm using Minikube together with the up-to-date deployment scripts provided by WSO2.
The initial target of this article is to set up WSO2 API Manager; this will show us how the setup works, and with that knowledge we can deploy the other products as well.
- Minikube ( https://minikube.sigs.k8s.io/docs/start/ )
- WSO2 Kubernetes Deployment Guide ( https://apim.docs.wso2.com/en/latest/install-and-setup/install/deploying-api-manager-with-kubernetes-or-openshift-resources/ )
What we are going to look at
- Kubernetes — Why and how it works?
- Setting up a local Minikube
- Commands to verify the Minikube Setup
- Helm and Helm Charts
- Flow diagram on the WSO2 APIM Installation Dependencies and Interconnection
- Guide to setting up WSO2 APIM in Kubernetes
- Other useful kubectl Commands
Kubernetes — Why and how it works?
Kubernetes is Greek for pilot or helmsman (the person holding the ship’s steering wheel).
Before getting into how it works, we need to know why Kubernetes came about. In one of my previous blogs I explained this with a simple diagram; you can refer to containerization evolvement.
Next, we can move on to what the Kubernetes architecture looks like and how each component interacts.
As usual, I prefer diagrams to explain things simply. Please refer to the diagram below; it briefly explains the details.
Why is there a POD -> container design, instead of using the containers directly?
With Docker, we package the application as a container image, and our application runs inside the container.
Then the question comes: why do we need a POD? To resolve the port-mapping complexity, and to make it easy to replace Docker with any other container runtime, a wrapper was introduced and named the POD.
For example, when we create containers we use the docker run command’s -p option to expose the service to the outside.
When many services share one host, each must be assigned a host port that is still free, and that becomes a problem. It gets far more complex when thousands of services are deployed in a Kubernetes cluster. This is why the POD concept came in.
Each POD acts as a wrapper for the container and behaves like a separate machine, so it is assigned its own IP address. Therefore the port-mapping issue is not encountered when exposing the service to the host machine.
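To make the port problem concrete, here is a small contrast sketch (the image name, container names and ports are examples of mine, not from the original setup):

```shell
# Plain Docker: every container that should be reachable needs its own
# free host port, chosen and tracked by us.
docker run -d --name web-1 -p 8080:80 nginx
docker run -d --name web-2 -p 8081:80 nginx   # 8080 is already taken

# Kubernetes: each pod gets its own IP, so both can listen on port 80
# with no host-port bookkeeping.
kubectl run web-1 --image=nginx
kubectl run web-2 --image=nginx
kubectl get pods -o wide   # shows a distinct IP per pod
```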
The next important thing to understand is the message flow when we post a deployment through kubectl commands. The diagram below explains that.
Now that we are clear on the message flow, which shows how the deployment configuration we provided in the file is transformed into running pods, we need to know how these pods can be accessed.
From the image above, you can see that we can expose pods to the outside using the service types when defining services, and there is also the option of using an Ingress Controller. When setting up the WSO2 APIM deployment, you will see that we use the Ingress Controller.
The explanation above covers enough Kubernetes to continue with the WSO2 APIM deployment. If you need more in-depth information, you can refer to the book below.
Setting up a Local Minikube
Now we can start the Minikube cluster. When starting it, I pass some additional parameters, as they are needed to customize my VM settings and to communicate with my local Docker registry in a non-secure way (this is a local setup; in production you can set up certificates to achieve secure connectivity).
Here I have used the --insecure-registry argument; it is needed when connecting to a local Docker registry that is not secured, and it must be passed when the Minikube cluster is first started.
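The start command I used looks roughly like this (the CPU/memory sizes and the registry address are from my local setup and are only examples; adjust them to your machine):

```shell
minikube start \
  --cpus=4 \
  --memory=8192 \
  --insecure-registry="docker.local:5000"
```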
Now we are done with the cluster. One more thing we need to set up is kubectl, to access the Minikube cluster.
Now we are done with the client as well.
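A quick way to confirm that kubectl is pointed at the Minikube cluster:

```shell
kubectl config current-context   # should print: minikube
kubectl cluster-info             # shows the control-plane endpoint
```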
Commands to verify the Minikube Setup
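For example, these are safe to run at any time:

```shell
minikube status        # host, kubelet and apiserver should all be Running
kubectl get nodes      # the minikube node should be in the Ready state
kubectl get pods -A    # the kube-system pods should be Running
```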
Helm and Helm Charts
I have installed Helm 3.3.4 ( https://helm.sh/docs/intro/install/ ) locally. The main difference between version 2 and version 3 is that the Tiller component is no longer used, as it caused security concerns due to its permissions inside the Kubernetes cluster.
Also note that, as with the kubectl installation, Helm fetches the Kubernetes cluster information from the ~/.kube/config file.
A typical Helm chart file structure is shown below; the information is taken from https://helm.sh/docs/topics/charts/.
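As a quick sketch, you can recreate the skeleton of that structure by hand to inspect it (mychart is just an example name; `helm create mychart` produces a populated version of the same layout):

```shell
# Recreate the standard Helm chart skeleton described in the Helm docs
mkdir -p mychart/templates mychart/charts mychart/crds
touch mychart/Chart.yaml            # chart metadata (name, version, ...)
touch mychart/values.yaml           # default configuration values
touch mychart/templates/NOTES.txt   # optional usage notes shown after install
find mychart | sort                 # print the resulting tree
```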
WSO2 Helm Repository
The WSO2 Helm chart repository is https://helm.wso2.com. To add the repository to your Helm installation, execute the command below:
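With the standard repo syntax this looks like the following (wso2 is just the local alias chosen for the repository):

```shell
helm repo add wso2 https://helm.wso2.com
helm repo update
helm search repo wso2   # list the charts published in the repository
```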
Then, by providing a values.yaml with updated values, we can execute the command below to fetch and install kubernetes-pipeline into the Kubernetes cluster. A sample command looks like this:
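A sample would look roughly like this (the release name, namespace and exact chart name are illustrative; check the repository listing for the real chart name):

```shell
helm install pipeline wso2/kubernetes-pipeline \
  --namespace wso2 --create-namespace \
  -f values.yaml
```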
As my intention here is to deploy kubernetes-apim with some customizations, we can use the other option:
- Fetch the code from git.
- Then execute the command below, pointing to the extracted code as the home location for the Helm resources.
There is also another option: pull the package from the Helm repository directly and then extract it:
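In Helm 3 this is done with helm pull (the chart name am-pattern-1 is taken from the artifact link below; the repo alias wso2 is assumed to have been added already):

```shell
helm pull wso2/am-pattern-1 --untar   # downloads the chart and extracts it into ./am-pattern-1
```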
More details on the artifacts can be found at: https://artifacthub.io/packages/helm/wso2/am-pattern-1
Flow diagram on the WSO2 APIM Installation Dependencies and Interconnection
When we deploy the default script, the flow that happens internally is explained in the diagram below.
If we want to customize the deployment, we need to update the configurations accordingly. The two customizations we most commonly make are:
- Pointing to an external database: for this we need to customize the deployment.toml
- Pointing to an external or already existing NFS server: in this case we need to use the nfs-client-provisioner
I’ll cover how to customize the current script for these two requirements in my next blog.
Guide to setting up WSO2 APIM in Kubernetes
1. To get started with the deployment, first pull the deployment scripts provided by WSO2 to your local machine.
Here, I’m using the latest version.
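A sketch of that step, assuming the public wso2/kubernetes-apim GitHub repository (the directory layout inside the repo varies between releases, so check its README for the pattern you want):

```shell
git clone https://github.com/wso2/kubernetes-apim.git
cd kubernetes-apim
git tag --list   # pick the tag matching your APIM version, if needed
```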
2. As I’m running the default script, I didn’t make any major modifications. But since we are running in a local Minikube cluster, the CPU and memory resources need to be reduced in values.yaml.
E.g. I reduced them as below to suit my local setup; otherwise the PODs will stay in the Pending state.
I only reduced the Analytics memory from 4GB to 2GB.
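For illustration, the kind of edit involved looks like this in values.yaml (the key names here are assumed; locate the matching resources section of the analytics profile in your chart version):

```yaml
# values.yaml (fragment) - reduce requests so pods can be scheduled locally
resources:
  requests:
    memory: "2Gi"   # reduced from 4Gi for the analytics profile
```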
Note: I started the Minikube cluster with the command shown earlier; docker.local is not needed if you are fetching images directly from Docker Hub, but I am adding it here for my future use.
Also note that the livenessProbe: initialDelaySeconds of the Analytics worker and dashboard needs to be updated to 180 (initially it was 20); otherwise you will get a CrashLoopBackOff error when starting the WSO2 APIM Analytics worker and dashboard.
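As a configuration fragment, the probe change described above would look like this (its exact placement depends on the chart version; apply it to both the worker and the dashboard):

```yaml
livenessProbe:
  initialDelaySeconds: 180   # raised from 20; analytics needs longer to boot
```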
3. Execute the Helm command:
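The install command looks roughly like this (the release name wso2am and the namespace wso2 are examples; the chart reference depends on whether you install from the repository or from the locally extracted code):

```shell
helm install wso2am wso2/am-pattern-1 \
  --namespace wso2 --create-namespace \
  -f values.yaml

# or, if installing from the locally extracted chart directory:
helm install wso2am ./am-pattern-1 -n wso2 -f values.yaml
```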
After this, another issue you may run into is the DB service not starting.
The POD log may end with:
If you encounter this, execute the below:
Then wait for the POD to restart automatically due to the CrashLoopBackOff error. If it continues, delete the POD and also remove the files under the persistent volume. Then it should work.
Now you can see that MySQL has started properly.
4. You can check the status of the other PODs with the command below.
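For example (assuming the chart installs everything into the wso2 namespace):

```shell
kubectl get pods -n wso2 -o wide   # status, restarts and node of each pod
```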
Note: In my local setup some of the PODs did not start properly due to insufficient memory, but in an environment where the required memory and CPU are available, all the pods will be up and running. I hope this serves as a supporting guide to understanding how all of this works together.
Other useful kubectl commands
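A few that I find handy while debugging a deployment like this (replace &lt;pod-name&gt; and the wso2 namespace with your own values):

```shell
kubectl get all -n wso2                     # everything in the namespace
kubectl logs -f <pod-name> -n wso2          # stream a pod's logs
kubectl describe pod <pod-name> -n wso2     # events, e.g. why a pod is Pending
kubectl exec -it <pod-name> -n wso2 -- sh   # open a shell inside a pod
kubectl get pv,pvc -n wso2                  # persistent volumes and claims
```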
Hopefully I will come up with a customized setup and its configurations in my following blog, focusing specifically on the two things I mentioned earlier:
- Using an external database
- Using an already existing external NFS server