Kubernetes for Local Node.js Development
As developers, part of our job is to ship the applications we build as fast as we can. New tools keep appearing that make our work faster and easier, but they also bring problems. One of them is maintenance: these setups are hard to maintain.
Developers’ excuses
“Code compilation is taking time”, or “that person doesn’t work here anymore”. And among them, the most repeated excuse is: “it works on my machine”.
Which is fine, but we are not shipping your machine to the server.
Something we can use here is Docker.
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. -https://opensource.com/resources/what-docker
What it does is isolate apps in containers, making it easier to create, build, and run an application anywhere. Docker packages your application together with its dependencies and provides an easier way to build, run, and ship it. The docker CLI is used to manage individual containers on a Docker engine; it is a command-line client for the Docker daemon API.
Let’s take the example of a simple Node.js application named fixit-map. What it does is list problems reported in Berlin. [ Demo ]
We can use the docker CLI to run this application. But we have to keep in mind that the docker CLI manages individual containers on a single Docker engine.
To manage multi-container applications we can use docker-compose. It works as a front-end “script” on top of the same Docker API used by docker.
Using docker-compose takes 3 steps:
- Define the app’s environment with a Dockerfile
- Define the app’s services in docker-compose.yml
- Run docker-compose up to start and run the app
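As a sketch of those three steps for an app like fixit-map, assuming it listens on port 3000 and starts from a server.js file (both assumptions, not taken from the real project), the two files could look like this:

```dockerfile
# Dockerfile (sketch): package the Node.js app with its dependencies
FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

```yaml
# docker-compose.yml (sketch): one service built from the Dockerfile above
version: "3"
services:
  fixit-map:
    build: .
    ports:
      - "3000:3000"
```

With these in place, docker-compose up builds the image and starts the container in one step.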
No more excuses; now “it works anywhere”, and the workflow no longer depends on DevOps. But there is a new problem. You build your application, it runs locally, and your tests pass like a charm. The Ops team uses a build tool to publish a new Docker image. Then, if for some reason the container dies, the Ops team comes back and tells you it doesn’t work.
You have to go to that server and run a command to start the container again. That’s not much different from spinning up VMs: when something goes down, someone has to restart it manually. What happened to all the time you saved?
Now you are back to fighting with the Ops team, trading blame. This is where Kubernetes comes into play.
Kubernetes (k8s) is an open-source system for automating deployment, scaling, and management of containerized applications. - https://kubernetes.io/
Containerization is a process that involves building an application with associated configuration files, dependencies, and libraries required to run in an efficient manner across different computing environments.
Docker, Mesos, and Kubernetes are the most popular containerization ecosystems.
I used Docker locally. Can I use Kubernetes locally too?
Of course we can.
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day. - https://kubernetes.io/docs/setup/learning-environment/minikube/
In other words, the primary goal of minikube is to make it simple to run Kubernetes locally, for day-to-day development workflows and learning.
Let’s apply our deployment file, named “fixit-deployment.yaml”, using Kubernetes:
kubectl create -f fixit-deployment.yaml
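The manifest itself isn’t shown here; as a sketch, a deployment consistent with two replicas of a service on port 3000 might look like this (the image name fixit-map:latest is an assumption):

```yaml
# fixit-deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fixit
spec:
  replicas: 2                      # two copies of the pod
  selector:
    matchLabels:
      app: fixit
  template:
    metadata:
      labels:
        app: fixit
    spec:
      containers:
        - name: fixit
          image: fixit-map:latest  # assumption: illustrative image name
          ports:
            - containerPort: 3000
```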
It uses two replicas. Kubernetes also has a resource called Ingress that allows you to access your Kubernetes Services from outside the cluster and consolidates your routing rules into a single resource. For example, you might want requests to “api/v1” handled by an “api-v1” service and requests to “api/v2” by an “api-v2” service. With Ingress, you can set this up without creating a LoadBalancer for, or separately exposing, each service on a node.
To see the status of our pods, we can run
kubectl get pods -w
Now, we can deploy the Ingress resource named “fixit-ingress.yaml”.
It basically maps the application to the route “/fixit”. So, if you open your minikube IP followed by “/fixit”, it will load this application.
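A minimal sketch of what “fixit-ingress.yaml” could contain, assuming the ClusterIP service is named fixit and serves port 3000 (matching the kubectl expose command below):

```yaml
# fixit-ingress.yaml (sketch)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fixit-ingress
  annotations:
    # Strip the /fixit prefix before forwarding to the app (nginx ingress).
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /fixit
            backend:
              serviceName: fixit   # assumption: the ClusterIP service name
              servicePort: 3000
```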
Before that, we need to expose the deployment as a ClusterIP service, which can be done by running
kubectl expose deployment fixit --port=3000 --type=ClusterIP
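For reference, kubectl expose generates a Service object for you. Written out as YAML, a sketch of the equivalent would be (the app: fixit selector assumes that is how the deployment labels its pods):

```yaml
# Equivalent of the kubectl expose command above (sketch)
apiVersion: v1
kind: Service
metadata:
  name: fixit
spec:
  type: ClusterIP
  selector:
    app: fixit         # assumption: the deployment's pods carry this label
  ports:
    - port: 3000
      targetPort: 3000
```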
To see the changes, open the minikube IP followed by “/fixit”. You can get the IP with
minikube ip
In my case, it is 192.168.99.102/fixit.
This approach is most suitable for cases where you have many microservices and you don’t want to expose each one through its own LoadBalancer, wasting money. You can put them all behind a single Ingress and map them out.
Conclusion
Docker containers help us isolate and package our applications with all their dependencies, whereas Kubernetes helps us deploy, scale, and orchestrate those containers. We can use these tools to deliver our applications fast, but more importantly, in a consistent manner.