Bootstrap the ELK stack on a K8s cluster for your development environment

In this article, I am going to show you how to bootstrap the ELK stack on top of a Kubernetes (k8s) cluster for a development environment. Kubernetes is an open-source, production-grade container orchestration platform. Containerization helps us develop, build, test, release, and package our software applications faster, and application deployment becomes simpler with a container orchestration platform such as Kubernetes. Besides, we can scale our applications within a few seconds. A development environment in k8s is especially useful when you deploy multiple versions of an application several times a day.

Why Kubernetes?

Kubernetes is becoming the de facto standard for deploying applications in software organizations of all sizes. A Kubernetes cluster can be set up in an on-premises environment as well as in the cloud. Cloud service providers such as Google Cloud, AWS, and Microsoft Azure offer managed Kubernetes services, namely Google Kubernetes Engine (GKE), Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS), that let customers provision a Kubernetes cluster in a few clicks. Each vendor provides different features for running workloads on top of it. Besides the default authentication mechanisms of Kubernetes, they support integrating their Identity and Access Management systems with the cluster. They take care of running the control plane in a highly available mode, and worker nodes can be scaled based on demand, so customers can focus on deploying and scaling their applications at lightning speed.

Are you ready to ship your application to k8s?

Teams already running their applications on Docker Swarm or ECS can migrate their workloads to k8s easily. We can also set up a k8s cluster on EC2 using third-party tools such as kops, a reliable tool for managing a production-ready cluster in an AWS environment. It leverages EC2 On-Demand and Spot instances, Auto Scaling Groups, ELB, IAM, and so on. We can also export the cluster state as Terraform configuration and manage the infrastructure as code; I explained Infrastructure as Code briefly in my previous blog post. For dev and test environments, we can set up a Kubernetes cluster using Minikube. Minikube runs a virtual machine on your machine using the VirtualBox or KVM hypervisor and bootstraps a single-node Kubernetes cluster.
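For instance, a local cluster can be brought up with a couple of commands. The resource flags below are my own suggestion, since the ELK stack is fairly memory-hungry:

```shell
# Start a single-node cluster in a VM (assumes minikube and a
# hypervisor such as VirtualBox or KVM are already installed)
minikube start --cpus=4 --memory=8192

# Confirm the node has registered and is Ready
kubectl get nodes
```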

Once you start Minikube, it runs a virtual machine and starts all the Kubernetes services, covering both master and worker node components. The master node consists of the Kubernetes API server, the etcd key-value store, the Controller Manager, and the Scheduler, while a worker node consists of kube-proxy, the kubelet, and a container runtime such as Docker or rkt. A Pod is the basic execution unit of a Kubernetes application: it encapsulates your application's container(s) and volumes. A pod can be stateful or stateless, and we can attach ConfigMaps and Secrets to it.
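You can inspect these components in a Minikube cluster yourself (a quick sketch; the exact pod names vary by Kubernetes version):

```shell
# Control-plane components: API server, etcd, scheduler, controller manager
kubectl get pods -n kube-system

# Node-level details, including the container runtime in use
kubectl get nodes -o wide
```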

Select the minikube context to deploy the ELK stack on it. I have written simple Kubernetes manifests to bootstrap my ELK stack. They create a namespace, a persistent volume claim to store Elasticsearch data, deployments to maintain the pod replicas, and services to access the Elasticsearch and Kibana endpoints. I am not using any special configuration in my ELK stack; we could also use ConfigMaps and Secrets if a custom configuration were needed.
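Switching kubectl to the minikube context looks like this:

```shell
kubectl config use-context minikube
kubectl config current-context   # should print: minikube
```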

ELK stack with Metricbeat

Kubernetes manifest to create a namespace in the cluster:

---
apiVersion: v1
kind: Namespace
metadata:
  name: elkstack

Next, a Kubernetes manifest that creates a persistent volume claim of 10 GiB to store Elasticsearch index data, a Deployment spec that creates a pod from the official Elasticsearch Docker image with a replica count of 1, and a Service to access the Elasticsearch endpoint from your computer. I use the NodePort type in my service definition.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: elkstack
  name: es-data
  labels:
    service: es-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elkstack
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      service: elasticsearch
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
        env:
        - name: discovery.type    # single-node discovery, required for a dev cluster
          value: single-node
        - name: ES_JAVA_OPTS      # keep the JVM heap small enough for minikube
          value: -Xms512m -Xmx512m
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            # TLS is not configured in this dev setup, so probe over plain HTTP
            - curl -s http://localhost:9200 >/dev/null
          failureThreshold: 5
          periodSeconds: 30
          timeoutSeconds: 10
        name: elasticsearch
        ports:
        - containerPort: 9200
        resources: {}
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: es-data
      restartPolicy: Always
      volumes:
      - name: es-data
        persistentVolumeClaim:
          claimName: es-data
---
apiVersion: v1
kind: Service
metadata:
  namespace: elkstack
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
  - name: "elasticsearch"
    protocol: TCP
    port: 9200
    targetPort: 9200
  selector:
    service: elasticsearch
  type: NodePort

Kubernetes manifest to create a Deployment object for Logstash:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elkstack
  name: logstash
  labels:
    service: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      service: logstash
  template:
    metadata:
      labels:
        service: logstash
    spec:
      containers:
      - image: docker.elastic.co/logstash/logstash:7.4.2
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - bin/logstash -t
          failureThreshold: 5
          periodSeconds: 60
          timeoutSeconds: 50
        name: logstash
        resources: {}
      restartPolicy: Always

Kubernetes manifest to create Deployment and Service objects for Kibana:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elkstack
  name: kibana
  labels:
    service: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      service: kibana
  template:
    metadata:
      labels:
        service: kibana
    spec:
      containers:
      - image: docker.elastic.co/kibana/kibana:7.4.2
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            # Kibana serves plain HTTP by default in this setup
            - curl -s http://localhost:5601 >/dev/null
          failureThreshold: 5
          periodSeconds: 30
          timeoutSeconds: 10
        name: kibana
        ports:
        - containerPort: 5601
        resources: {}
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  namespace: elkstack
  name: kibana
  labels:
    service: kibana
spec:
  ports:
  - name: "kibana"
    protocol: TCP
    port: 5601
    targetPort: 5601
  selector:
    service: kibana
  type: NodePort

Finally, I am adding a Kubernetes manifest to create a Deployment object for Metricbeat.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elkstack
  name: metricbeat
  labels:
    service: metricbeat
spec:
  replicas: 1
  selector:
    matchLabels:
      service: metricbeat
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: metricbeat
    spec:
      containers:
      - args:
        - --strict.perms=false
        - -e
        image: docker.elastic.co/beats/metricbeat:7.4.2
        livenessProbe:
          exec:
            command:
            - metricbeat
            - test
            - config
          failureThreshold: 5
          periodSeconds: 30
          timeoutSeconds: 15
        name: metricbeat
        resources: {}
        volumeMounts:
        # assumes a pre-created PVC holding setup-beat.sh (not shown in this article)
        - mountPath: /usr/local/bin/setup-beat.sh
          name: metricbeat-claim0
          readOnly: true
      restartPolicy: Always
      volumes:
      - name: metricbeat-claim0
        persistentVolumeClaim:
          claimName: metricbeat-claim0
          readOnly: true

I have created a separate manifest file for each component and run the kubectl apply command. kubectl interacts with the Kubernetes API server, which processes the request and coordinates with the controller managers to fulfill it. The scheduler then assigns a node to run each pod based on the replica count and node selectors.

kubectl apply

Once the command is executed, kubectl displays the status of the object creation. If there are no errors in the input or in the syntax of the YAML file, the objects will be created.
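Assuming each manifest is saved in its own file (the file names below are my own), the whole stack can be applied and inspected like this:

```shell
kubectl apply -f namespace.yaml
kubectl apply -f elasticsearch.yaml
kubectl apply -f logstash.yaml
kubectl apply -f kibana.yaml
kubectl apply -f metricbeat.yaml

# Watch the deployments, pods, and services come up
kubectl get all -n elkstack
```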

From the output of kubectl, we can see that the different k8s objects were created. A service becomes reachable once its pod enters the Running state, and we can use port forwarding to access the Elasticsearch and Kibana endpoints.
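One way to reach the endpoints from your machine is kubectl port-forward (the NodePort services also work via minikube's node IP):

```shell
# Forward the service ports to localhost in the background
kubectl port-forward -n elkstack svc/elasticsearch 9200:9200 &
kubectl port-forward -n elkstack svc/kibana 5601:5601 &

# Elasticsearch should answer with its cluster info JSON
curl -s http://localhost:9200
# The Kibana UI is now available at http://localhost:5601
```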

In my next article, let's have an overview of Helm, a package manager for Kubernetes: how to develop a Helm chart and deploy a software application release to a Kubernetes cluster.
