Use Ansible to automatically provision a Kubernetes cluster on CentOS 7 servers

Shashank Srivastava
8 min read · Mar 12, 2021


Quickly set up a Kubernetes cluster on CentOS 7 servers from scratch using Ansible.


Introduction

In an earlier article, I explained how to set up a Kubernetes cluster on CentOS 7 servers with Docker as the container runtime, focusing on the various steps required to bring the cluster up successfully. However, what good is a DevOps engineer if she/he can’t automate the boring tasks?

With that in mind, I decided to write an Ansible playbook that does the magic & brings my cluster up in a 100% automated manner.

This is a single playbook with dedicated tasks for both master & worker nodes. As you explore it, you’ll also get to learn about a few cool Ansible tricks.

For this setup, I provisioned a fresh CentOS 7.3 virtual machine (VM) using VirtualBox on macOS. Apart from assigning a static IP address, I did nothing on this VM; I let my playbook install and configure everything the cluster needs. This VM acts as the Kubernetes Control Plane/master node. Below are the server specifications.

Hostname — master-node

IP Address — 10.128.0.29

CPUs — 3

RAM — 2 GB

OS — CentOS Linux release 7.3.1611 (Core)

For the worker node, I cloned the master node & allocated 2 CPUs (instead of 3).

Hostname — worker-node

IP Address — 10.128.0.30

CPUs — 2

RAM — 2 GB

OS — CentOS Linux release 7.3.1611 (Core)

In my earlier article, the worker node had only 1 CPU, but this setup requires 2. YUM was causing high CPU usage while installing Docker on the worker node, so I added one more CPU & the playbook ran fine.

Without further ado, let’s get started.

Requirements

  • 2 CentOS 7 servers (you can add as many workers as you want). The master node needs at least 2 CPUs (kubeadm’s preflight checks require this); I gave mine 3.
  • Ansible installed & configured to connect to the remote servers without a password.
  • A sudo user configured on all the Kubernetes nodes.

My Ansible version is 3.0.0.

Steps to follow

1. Add Ansible inventory.

We will add 2 groups to the Ansible inventory: 1 for the master node & another for the worker. This way, we can control which Ansible tasks are executed on which node. To do this, add lines similar to the below to the /etc/ansible/hosts file on the Ansible node.

admin@shashank-mbp ~ $ cat /etc/ansible/hosts
[k8s-master]
master-node ansible_host=10.128.0.29 ansible_user=shashank
[k8s-worker]
worker-node ansible_host=10.128.0.30 ansible_user=shashank

As you can see, I have 2 groups — [k8s-master] & [k8s-worker] with a server each. The first field of the line is the hostname of the remote server. ansible_host is the IP address of that node. ansible_user is a user on the remote server with sudo access.
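If you prefer a project-local inventory over the global /etc/ansible/hosts, the same groups can be written to a file next to your playbook. The sketch below reuses the hostnames and IPs from this article; the file name inventory.ini is my choice, not something Ansible requires:

```shell
# Write a project-local inventory equivalent to the /etc/ansible/hosts
# example above, then sanity-check that both groups are present.
cat > inventory.ini <<'EOF'
[k8s-master]
master-node ansible_host=10.128.0.29 ansible_user=shashank

[k8s-worker]
worker-node ansible_host=10.128.0.30 ansible_user=shashank
EOF

# Count the group headers; there should be two.
grep -c '^\[k8s-' inventory.ini
```

You can then point Ansible at it explicitly, e.g. `ansible -i inventory.ini -m ping all`.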

2. Enable password-less SSH access from the Ansible node to the Kubernetes nodes.

Ansible works its charm by SSHing into the remote servers. For this, we need to set up password-less SSH access from the Ansible node to the Kubernetes nodes, so that Ansible can connect to those nodes without entering a password.

On the Ansible node, run the command below to generate an SSH key pair (if you already have one, skip to the next step).

admin@shashank-mbp ~> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/admin/.ssh/id_rsa):

This command will generate an SSH key pair — a public key & a private key. We now need to copy/install the public key to the individual remote servers (Kubernetes nodes).

The below command installs the public key to the node(s).

admin@shashank-mbp /U/a/S/mymusicstats (master)$ ssh-copy-id shashank@10.128.0.30
The authenticity of host '10.128.0.30 (10.128.0.30)' can't be established.
ECDSA key fingerprint is SHA256:6YDGHiAgI76urnDCQuigwSMNEkg/sBED0SRZcPyEM7E.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
shashank@10.128.0.30's password:
/etc/profile.d/lang.sh: line 19: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'shashank@10.128.0.30'"
and check to make sure that only the key(s) you wanted were added.

Notice 2 things here. First, I generated the SSH key pair for the admin user on the Ansible node, so it is the admin user’s key that will be used to log in to the remote servers.

Second, I installed the public key for the shashank user on the remote server. The shashank user has sudo access there.

In other words, from the Ansible node, the admin user can log in to the server (10.128.0.30) as the shashank user without a password.

admin@shashank-mbp ~> ssh shashank@10.128.0.30
Last login: Thu Mar 11 13:12:03 2021 from 10.128.0.0
-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
[shashank@worker-node ~]$

This is because the admin user’s public key from the Ansible node is installed in shashank’s home directory on the 10.128.0.30 server.
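Under the hood, ssh-copy-id does nothing more than append your public key to ~/.ssh/authorized_keys on the remote host, with the permissions sshd insists on. Here is a purely local simulation of that mechanism, using a temporary directory in place of the remote home directory and a fake key string:

```shell
# Simulate what ssh-copy-id does on the remote host: append the public key
# to ~/.ssh/authorized_keys with the permissions sshd requires.
# $DEMO stands in for the remote user's home directory; the key is fake.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/.ssh"
chmod 700 "$DEMO/.ssh"
echo "ssh-rsa AAAAB3Nza...fake admin@shashank-mbp" >> "$DEMO/.ssh/authorized_keys"
chmod 600 "$DEMO/.ssh/authorized_keys"
cat "$DEMO/.ssh/authorized_keys"
```

The 700/600 permissions matter: sshd silently ignores authorized_keys files that are writable by other users.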

3. Test the connection from the Ansible node.

Ansible reads the inventory file (/etc/ansible/hosts) whenever we run a command. To test the connection, run the command below. If your configuration is correct, you should see a reply similar to this.

admin@shashank-mbp ~> ansible -m ping all
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
worker-node | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
master-node | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}

If it fails, check whether the Kubernetes nodes are reachable from the Ansible node and whether the password-less SSH setup is correct.

4. Create a file to store Kubernetes repository information.

The playbook requires a kubernetes.repo file to install Kubernetes. You can grab this file from my repository (linked in the next step) or create it manually.

admin@shashank-mbp ~/Downloads> cat kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Make sure that this file is placed under the same directory from which you run the playbook.

5. Write a playbook.

Here comes the most important (and interesting) part — writing a playbook.

You can grab the playbook (ansible-k8s-cluster.yml) from my GitHub repository.

The playbook knows which tasks to run on the master node & which to run on the worker node; a single playbook handles both tiers.

I have given the tasks meaningful names, so it is easy to see what each one does.
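To give you a feel for how one playbook can target both tiers, here is a rough skeleton of the approach. The task names match the output further down, but the module arguments are illustrative sketches, not copied from my playbook — see the GitHub repository for the real ansible-k8s-cluster.yml:

```yaml
# Illustrative skeleton only. Group membership from the inventory
# ([k8s-master] / [k8s-worker]) decides which conditional tasks run where.
- hosts: all
  become: yes
  tasks:
    - name: Add YUM repository for Kubernetes
      copy:
        src: kubernetes.repo
        dest: /etc/yum.repos.d/kubernetes.repo

    # Runs only on the master node.
    - name: Initialize the Kubernetes cluster using kubeadm
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      when: inventory_hostname in groups['k8s-master']

    # Runs only on the worker node, after the join command has been
    # copied over from the master.
    - name: Join the worker node to cluster
      command: sh /tmp/join-command
      when: inventory_hostname in groups['k8s-worker']
```

The `when: inventory_hostname in groups[...]` pattern is the trick that lets a single play over `hosts: all` still run master-only or worker-only steps.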

6. Run the playbook.

Now it’s time to see the playbook in action! Enter the below command to run this playbook & you should have a Kubernetes cluster built from scratch in a few minutes!

root@shashank-mbp /U/a/Downloads# ansible-playbook ansible-k8s-cluster.yml
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details

PLAY [all]

TASK [Gathering Facts]
ok: [worker-node]
ok: [master-node]

TASK [Install dependencies for Docker]
ok: [worker-node]
ok: [master-node]

TASK [Add Docker repository]
changed: [worker-node]
changed: [master-node]

TASK [Install Docker]
ok: [worker-node]
ok: [master-node]

TASK [Start & enable Docker service]
ok: [worker-node]
ok: [master-node]

TASK [Remove swapfile from /etc/fstab]
ok: [worker-node] => (item=swap)
ok: [master-node] => (item=swap)
ok: [worker-node] => (item=none)
ok: [master-node] => (item=none)

TASK [Disable swap]
skipping: [master-node]
skipping: [worker-node]

TASK [Disable IPtables]
changed: [worker-node]
changed: [master-node]

TASK [Disable SELinux]
changed: [worker-node]
changed: [master-node]

TASK [Add YUM repository for Kubernetes]
ok: [worker-node]
ok: [master-node]

TASK [Install Kubernetes binaries]
ok: [worker-node]
ok: [master-node]

TASK [Restart kubelet]
changed: [worker-node]
changed: [master-node]

TASK [Initialize the Kubernetes cluster using kubeadm]
skipping: [worker-node]
changed: [master-node]

TASK [Setup Kubernetes for shashank user]
skipping: [worker-node] => (item=mkdir -p /home/shashank/.kube)
skipping: [worker-node] => (item=cp -i /etc/kubernetes/admin.conf /home/shashank/.kube/config)
skipping: [worker-node] => (item=chown shashank:shashank /home/shashank/.kube/config)

TASK [Install Flannel pod network]
skipping: [worker-node]

TASK [Retrieve Kubernetes join command that is used to join worker node(s)]
skipping: [worker-node]

TASK [Attach kubeadm join command to a file on Ansible control node]
skipping: [worker-node]

TASK [Copy the join-command file created above to worker node]
ok: [worker-node]

TASK [Join the worker node to cluster]
changed: [worker-node]

PLAY RECAP
master-node : ok=11 changed=4 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
worker-node : ok=13 changed=5 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0

The playbook will run for a few minutes & once it is done running, your cluster should be ready.

You can now log in to the master node & see the magic. Notice how the playbook configured Kubernetes on the master node to run as the normal user shashank: kubectl fails for root because root has no kubeconfig, while shashank has one at ~/.kube/config.
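The kubeconfig setup that makes this work boils down to the three commands shown as loop items in the playbook output above. As an Ansible task, it might look roughly like this (a sketch, not the exact task from my playbook):

```yaml
# Sketch: the standard post-kubeadm kubeconfig setup for a normal user.
# These are the same three commands visible as loop items in the output.
- name: Setup Kubernetes for shashank user
  command: "{{ item }}"
  loop:
    - mkdir -p /home/shashank/.kube
    - cp -i /etc/kubernetes/admin.conf /home/shashank/.kube/config
    - chown shashank:shashank /home/shashank/.kube/config
  when: inventory_hostname in groups['k8s-master']
```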

[root@master-node ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master-node ~]# su - shashank
Last login: Thu Mar 11 10:30:19 EST 2021 from 10.128.0.0 on pts/1
[shashank@master-node ~]$ kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
master-node   Ready    control-plane,master   175m   v1.20.4
worker-node   Ready    <none>                 13m    v1.20.4

All the pods should now be running.

[shashank@master-node ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-cvh74               1/1     Running   0          175m
kube-system   coredns-74ff55c5b-kjnrc               1/1     Running   0          175m
kube-system   etcd-master-node                      1/1     Running   0          175m
kube-system   kube-apiserver-master-node            1/1     Running   2          175m
kube-system   kube-controller-manager-master-node   1/1     Running   2          175m
kube-system   kube-flannel-ds-chw4l                 1/1     Running   0          13m
kube-system   kube-flannel-ds-tmnl7                 1/1     Running   0          175m
kube-system   kube-proxy-drkwz                      1/1     Running   0          13m
kube-system   kube-proxy-jxc88                      1/1     Running   0          175m
kube-system   kube-scheduler-master-node            1/1     Running   2          175m

I hope the article was informative & helpful! Stay tuned for more.
