How to set up log monitoring for Kubernetes using Fluentd & Elasticsearch (AWS Elasticsearch Service)

Shashank Srivastava
6 min read · May 11, 2020


This article explains how you can monitor logs from your Kubernetes cluster & deployments using Fluentd & Elasticsearch. The entire process is pretty simple & you should be able to follow it in no time.

Introduction

In my previous 2 articles, I explained how to monitor important Kubernetes metrics using Prometheus & Grafana and how to deploy a WordPress blog on Minikube.

This article explains how to monitor your Kubernetes logs using Fluentd & Elasticsearch.

In the process, you’ll also learn how to use the AWS free tier & create an Elasticsearch Service domain there for monitoring your logs.

You can use your existing ELK stack, but I have deliberately chosen AWS Elasticsearch Service, as it relieves you of installing & configuring ELK yourself.

So, let’s get started!

Requirements

  1. A free AWS account (AWS free tier).
  2. Minikube.
  3. Kubectl.

Steps to perform

1. Create a free AWS account.

Since I am using AWS ES (Elasticsearch Service) for this tutorial, we’ll start by creating a free account on AWS & using its free tier. This is optional if you already have your own ELK set up.

Go to the AWS free tier page & create your account. Please note that this will require a valid credit/debit card. You won’t pay anything for a year as long as you stay within the free tier limits. Make sure you read the instructions carefully.

2. Create an Elasticsearch Service domain on AWS.

Once your AWS account has been created & is operational, go to the AWS console, click Services & then select Elasticsearch Service.

Now click on the Create a new domain button.

On the next screen, select Deployment type as Development and testing.

Give it a name under Elasticsearch domain name & select t2.small.elasticsearch as the Instance type. This is very important: other instance types are not covered by the free tier & will cost you money.

Leave Number of nodes as 1.

On the next screen — Configure access & security, choose Public access for Network configuration. This will allow logs from your Kubernetes cluster to be shipped to AWS.

Under Domain access policy, select Custom access policy & allow access from your machine’s public IP address (an IP-based policy is the simplest option for this tutorial). Leave the Encryption section as it is.

Go on & create your domain once you have verified all the settings.
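If you prefer the command line over the console, the same domain can also be created with the AWS CLI. This is only a sketch: it assumes the AWS CLI is installed & configured, and the domain name (k8s-logs), region, account ID & source IP below are placeholders you must replace with your own values.

# Create a free-tier-eligible domain (every name/ID here is a placeholder).
# The access policy allows full Elasticsearch access from a single source IP.
aws es create-elasticsearch-domain \
  --domain-name k8s-logs \
  --elasticsearch-version 7.4 \
  --elasticsearch-cluster-config InstanceType=t2.small.elasticsearch,InstanceCount=1 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=10 \
  --access-policies '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/k8s-logs/*",
      "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.25"]}}
    }]
  }'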

Please note that it will take around 10–15 minutes for your Elasticsearch domain to be created. Once created, you will get its endpoint & Kibana’s URL. You need to note down the domain endpoint.
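You can also watch the provisioning status & grab the endpoint from the CLI (again using the hypothetical k8s-logs domain name from above):

# While the domain is still being created, Processing is true.
# Once it flips to false, the Endpoint field holds the value to note down.
aws es describe-elasticsearch-domain --domain-name k8s-logs \
  --query 'DomainStatus.{Processing:Processing,Endpoint:Endpoint}'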

3. Deploy the Fluentd DaemonSet in Kubernetes.

Fluentd is an open-source log collector. We’ll use it to collect logs from Kubernetes. To deploy Fluentd, we need to do 2 things.

a) Create a ServiceAccount & bind it to a ClusterRole. This gives Fluentd permission to read pod & namespace metadata from the Kubernetes API.

b) Create its DaemonSet in Kubernetes. Fluentd is deployed as a DaemonSet so that a collector pod runs on every node. We also need to supply our Elasticsearch domain endpoint & its port here.

To do this, create 2 YAML files — fluentd-rbac.yaml & fluentd-daemonset.yaml.

Below are the files.

fluentd-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
# Read-only access to pod & namespace metadata, which Fluentd
# uses to enrich log records.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system

fluentd-daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.3-debian-elasticsearch
        env:
        # Your AWS ES domain endpoint, WITHOUT the http:// prefix.
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "YOUR_ELASTICSEARCH_DOMAIN_ENDPOINT"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "80"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        # Run as root so Fluentd can read the node's log files.
        - name: FLUENT_UID
          value: "0"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

In the fluentd-daemonset.yaml file, under the env: section, replace the YOUR_ELASTICSEARCH_DOMAIN_ENDPOINT placeholder with your own endpoint. The value must NOT start with http://; use the bare hostname.
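Before deploying, you can sanity-check that the endpoint is reachable from your machine. A healthy domain answers with a small JSON document containing the cluster name & Elasticsearch version (the endpoint below is a made-up example; substitute your own):

# A timeout here usually means the access policy is blocking you.
curl -s "http://search-k8s-logs-abc123.us-east-1.es.amazonaws.com:80/"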

Once these files are created, deploy them with kubectl. If your Minikube is not running, start it before applying the files.
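Starting Minikube is a single command:

# Bring up the local cluster if it is not already running.
minikube start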

fluentd-rbac.yaml

admin@shashank-mbp ~/wordpress-minikube> kubectl apply -f fluentd-rbac.yaml
serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created

fluentd-daemonset.yaml

admin@shashank-mbp ~/wordpress-minikube> kubectl apply -f fluentd-daemonset.yaml
daemonset.apps/fluentd created

You can check the status by opening your Minikube dashboard or from the CLI.
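For example, using the k8s-app: fluentd-logging label from the manifest above:

# Check that the DaemonSet has scheduled a Fluentd pod on the node.
kubectl get daemonset fluentd -n kube-system
kubectl get pods -n kube-system -l k8s-app=fluentd-logging

# Tail the pod's logs to confirm it connected to Elasticsearch.
kubectl logs -n kube-system -l k8s-app=fluentd-logging --tail=20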

4. Configure Kibana.

Go back to your AWS console & open your newly created Elasticsearch Service domain. You will find the endpoint under the Overview tab. When you click the Kibana URL, it will open in a new tab.

Once it is open, click on Index Patterns & then Create index pattern.

Under Index pattern, type logstash-*. You should see a success message confirming that matching indices were found. Click the Next step button.
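If the pattern doesn’t match anything, you can check from the terminal whether Fluentd has created any indices yet. Fluentd’s Elasticsearch output writes one logstash-YYYY.MM.DD index per day (made-up endpoint again; substitute your own):

# Lists every index in the domain, including the daily logstash-* ones.
curl -s "http://search-k8s-logs-abc123.us-east-1.es.amazonaws.com/_cat/indices?v"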

Now, under Step 2 of 2: Configure settings, choose @timestamp as the Time Filter field name. Click the Create index pattern button to create the index.

Now, click the Discover link (compass icon) on the left side. You will see the logs coming from your Kubernetes cluster there.

[Screenshot: Logs from the Kubernetes cluster displayed in Kibana]
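If you’d rather verify from the terminal that documents are arriving, you can query Elasticsearch directly (made-up endpoint again):

# Counts the documents Fluentd has shipped across all logstash-* indices.
curl -s "http://search-k8s-logs-abc123.us-east-1.es.amazonaws.com/logstash-*/_count?pretty"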

Similarly, you can create visualizations in Kibana to chart your Kubernetes data. For this, click the Visualize icon (below Discover).

Then click the Create new visualization button. Now choose Pie from the options.

Now, under Buckets on your left, select Significant Terms for Aggregation & kubernetes.pod_name.keyword as the Field.

Enter 10 or 20 in the Size textbox. Once done, click the Apply changes button (the one that looks like a play button).

You will now see a nice pie-chart of your Kubernetes pods.

[Screenshot: Pie chart showing various pods from my Minikube cluster]

That’s all for this article. I hope you liked it. See you again! Take care & stay safe.
