How to Deploy a NodeJS App to Kubernetes

June 12, 2019

If you’re one of the few who haven’t yet tried out container orchestration, you might be new to the Kubernetes architecture.

“Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.”

Kubernetes, also popularly known as k8s, can be thought of as:

  1. a container platform

  2. a microservices platform

  3. a portable cloud platform and a lot more.

Simply put, k8s provides users with a platform for running applications and managing them at scale across multiple physical or virtual machines. The design of Kubernetes is greatly influenced by Borg, the large-scale cluster management system deployed by Google.

In this article, we’ll go through the steps to set up and deploy a NodeJS application using Kubernetes.

Kubernetes Architecture

Containers allow applications to be broken into smaller microservices that are easy to manage. If you’re containerizing a NodeJS application with a simple architecture, your requirements might be met by Docker’s orchestration tooling. However, for more complex structures, you will need Kubernetes to manage your clusters.

The master-worker architecture of Kubernetes allows it to scale horizontally. Below are the various components of Kubernetes.

Pod: The smallest deployable unit created and managed by Kubernetes, a Pod is a group of one or more containers. Containers within a Pod share an IP address and can access each other via localhost as well as enjoy shared access to volumes.

Node: A worker machine in Kubernetes. May be a VM or a physical machine, and comes with services necessary to run Pods.

Service: An abstraction which defines a logical set of Pods and a policy for accessing them. Provides a stable IP address for a set of Pod replicas, allowing other Pods or Services to communicate with them.

ReplicaSet: Ensures that a specified number of Pod replicas are running at any given time. Kubernetes recommends using Deployments instead of directly manipulating ReplicaSet objects, unless you require custom update orchestration or don’t require updates at all.

Deployment: A controller that provides declarative updates for Pods and ReplicaSets.

Namespace: Virtual cluster backed by the same physical cluster. A way to divide cluster resources between multiple users, and a mechanism to attach authorization and policy to a subsection of a given cluster.
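
Later in this guide, once the cluster is running, you can inspect each of the objects described above with kubectl. As a quick reference (the pod name below is just a placeholder), the commands look like –

kubectl get namespaces
kubectl get deployments
kubectl get pods --all-namespaces
kubectl describe pod <pod name>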

If you’re new to the concept of containerizing and running containers on cloud, you might want to read more about the usual cloud migration challenges and how containers solve those challenges.

Installing minikube and kubectl

To keep things simple, for the duration of this guide we will use minikube on a local machine to run a single-node Kubernetes cluster. Minikube is a useful tool that starts a virtual machine and bootstraps a cluster inside it.

Before we begin, you will need to download and install VirtualBox if you do not already have it. Though minikube works with other virtualization platforms, VirtualBox has proven to be the most reliable.

At the next stage, we need to install kubectl, which we will use to interact with the k8s cluster. You can install both kubectl and minikube using the script below –

#!/bin/bash

ARCH=$(uname | awk '{print tolower($0)}')
TARGET_VERSION="v0.15.0"
MINIKUBE_URL="https://storage.googleapis.com/minikube/releases/${TARGET_VERSION}/minikube-${ARCH}-amd64"

KUBECTL_VER="v1.5.1"
KUBECTL_URL="http://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VER}/bin/${ARCH}/amd64/kubectl"

echo "installing kubectl..."
curl -Lo kubectl $KUBECTL_URL && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

echo "installing minikube..."
curl -Lo minikube $MINIKUBE_URL && chmod +x minikube && sudo mv minikube /usr/local/bin/

ISO_URL="https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso"
minikube start \
   --vm-driver=virtualbox \
   --iso-url=$ISO_URL

echo "starting minikube dashboard..."
minikube dashboard

If this has been successful, you should see the Kubernetes dashboard open in your browser.
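
If the dashboard does not open, or you simply want to confirm from the terminal that the cluster is healthy, the following standard minikube and kubectl commands (not part of the script above) should help –

minikube status
kubectl cluster-info
kubectl config current-context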

Using kubectl

When minikube is launched, it should automatically establish the context for kubectl. When you execute 'kubectl get nodes', you should see the following –

kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     2m

Similarly, when you execute 'kubectl get pods --all-namespaces', you should see the following –

kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube   1/1       Running   0          3m
kube-system   kube-dns-v20-qkzgg            3/3       Running   0          3m
kube-system   kubernetes-dashboard-1hs02    1/1       Running   0          3m

Even though a dashboard like this is quite useful when attempting to visualize deployments and pods, we will be mainly using kubectl to interact with the cluster.
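
A few other kubectl commands are worth keeping handy for debugging; the pod name below is a placeholder for whatever 'kubectl get pods' returns –

kubectl describe node minikube
kubectl describe pod <pod name> --namespace=kube-system
kubectl logs <pod name> --namespace=kube-system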

The Demo NodeJS App

We’ll be using a NodeJS server known as demo-server as an example in this guide; it was so named for its sheer simplicity. You can find demo-server on GitHub. Within the repository, you will find a server that looks similar to:

var http = require('http');

var server = http.createServer(function(req, res){
   res.end(new Date().toISOString());
});

server.listen(8000);
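
If you want to sanity-check the server before containerizing it, you can run it directly with Node and curl it on port 8000. This assumes the file is saved as server.js; adjust the name to match the repo –

node server.js &
curl http://localhost:8000
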
You will also find a Dockerfile similar to –
FROM quay.io/seanmcgary/nodejs-raw-base
MAINTAINER Sean McGary <sean@seanmcgary.com>


EXPOSE 8000

ADD start.sh start.sh

RUN chmod +x start.sh

CMD ./start.sh

Per its default settings, the container will clone the repo and run the server when it is executed. You can edit the Dockerfile to add the repo that you cloned earlier to the image instead of cloning it again every time the container starts.

Build the Container

To build the container, you would need to execute:

CONTAINER_NAME="<container name>"
docker build -t $CONTAINER_NAME:latest .
docker push $CONTAINER_NAME:latest
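
Before pushing, it can be worth running the image locally to confirm that the container starts and serves requests. For example (demo-server-test is just an arbitrary container name) –

docker run -d -p 8000:8000 --name demo-server-test $CONTAINER_NAME:latest
curl http://localhost:8000
docker stop demo-server-test && docker rm demo-server-test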

A point to note – because k8s is running in a virtual machine of its own, it will not have access to any of the Docker images that you build locally. To proceed with this guide, you will need to push your images to a registry that is accessible to k8s.

While Docker Hub is a free option, we suggest that you use Google’s Container Registry (GCR). It is very low cost and also supports private images. You can find the GCR getting started guide here.
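
As a rough sketch, pushing to GCR usually amounts to tagging the image with a gcr.io path and pushing it. The commands below assume a reasonably recent gcloud SDK that is installed and authenticated, and <project id> is a placeholder for your own GCP project –

gcloud auth configure-docker
docker tag $CONTAINER_NAME:latest gcr.io/<project id>/demo-server:latest
docker push gcr.io/<project id>/demo-server:latest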

Creating a Deployment

To deploy our app, we will use the Deployment resource type. A Deployment wraps the functionality of Pods and ReplicaSets to allow you to declaratively update your application. This is what enables zero-downtime deploys via Kubernetes’ RollingUpdate functionality.

deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-server
    spec:
      containers:
      - name: demo-server
        image: <container image>
        imagePullPolicy: Always
        ports:
        - containerPort: 8000

To create your Deployment, execute:

kubectl create -f deployment.yaml

To view your Deployment with kubectl, execute:

kubectl get deployments
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo-server-deployment   1         1         1            1           7m

These values will update as your Deployment is created and its containers are pulled down.
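
You can also watch the rollout progress, list the Pods the Deployment created, and later roll out a new image version declaratively. The image reference below is a placeholder for whatever you pushed to your registry –

kubectl rollout status deployment/demo-server-deployment
kubectl get pods -l app=demo-server
kubectl set image deployment/demo-server-deployment demo-server=<container image>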

Creating a Service

Once the application has been successfully deployed, we need a way to expose it to traffic from outside the cluster. To this end, we will create a Service that opens a NodePort connecting directly to our application on port 30061.

service.yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-server
  labels:
    app: demo-server
spec:
  selector:
    app: demo-server
  ports:
  - port: 8000
    protocol: TCP
    nodePort: 30061
  type: LoadBalancer

Now the service can be created within Kubernetes –

kubectl create -f service.yaml

The details can be accessed by executing –

kubectl get services
NAME          CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes    10.0.0.1     <none>        443/TCP          1h
demo-server   10.0.0.121   <pending>     8000:30061/TCP   12m
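
Since we are running on minikube, another convenient way to find the exact URL for the service (rather than assembling it by hand from the IP and NodePort) is the following, provided your minikube version supports it –

minikube service demo-server --url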

Accessing the Server

We defined a NodePort in the Service earlier. This exposes a port on the IP address where minikube is running, which makes your app accessible from outside the cluster.

By default, minikube is bound to the IP address 192.168.99.100. If you want to verify this, you can execute minikube ip, which will return the IP address currently in use.

To access your service, you only need to curl the IP on port 30061:

curl http://192.168.99.100:30061
2017-01-17T16:10:55.153Z

If each step of the guide was followed successfully, your application should now return a timestamp.
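
When you are done experimenting, you can tear everything down again. The following removes the objects created in this guide and stops the local cluster –

kubectl delete -f service.yaml -f deployment.yaml
minikube stop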

Conclusion

This guide was intended only as a very quick overview of the steps needed to get a NodeJS application running on Kubernetes with the bare minimum of configuration.

No doubt, Kubernetes is an extremely powerful platform with a number of features that we have not even touched upon today. The guides that follow will cover these aspects and how best to put Kubernetes to work for you.