Install and manage automatically a Kubernetes cluster on VMware vSphere with Terraform and Kubespray

If you already completed the tutorials Deploy Kubernetes 1.9 from scratch on VMware vSphere and Install and configure a multi-master Kubernetes cluster with kubeadm, you should have a pretty good understanding of how a multi-master Kubernetes cluster is structured. You are now probably looking for a way to automate the deployment of your lab so you don't have to follow all these painful steps each time you want to deploy a clean environment.

Kubespray is a Kubernetes incubator project. It is composed of Ansible playbooks and automates the deployment of a Kubernetes cluster on an existing infrastructure.

In this lab, we will use Terraform to deploy our infrastructure on VMware vSphere and, in a second stage, Terraform will call Kubespray to install and configure Kubernetes for us. This automation covers not only the initial deployment, but also adding worker nodes to the Kubernetes cluster, removing worker nodes from it, upgrading the version of Kubernetes, and destroying the cluster.
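As a preview, the whole lifecycle is driven by a handful of Terraform commands, each of which is detailed later in this article:

$ terraform apply

$ terraform apply -var 'action=add_worker'

$ terraform apply -var 'action=remove_worker'

$ terraform apply -var 'action=upgrade'

$ terraform destroy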

We will take advantage of the fact that our infrastructure will be running on VMware vSphere and that Kubespray supports the configuration of vSphere Cloud Provider. This will allow us to use the vSphere storage as persistent volumes in the Kubernetes cluster as seen in the article Use vSphere Storage as Kubernetes persistent volumes.

Requirements

For this lab, you will need a configured VMware vSphere environment. You will also need an Ubuntu 16.04 desktop client machine; it will be used to run the Terraform script and to access the Kubernetes dashboard. This machine needs to be on a network that can reach the vCenter API, as Terraform uses it to deploy and configure the virtual machines.

For the Kubernetes infrastructure, we are going to deploy three master nodes. They will have the IPs 10.10.40.110, 10.10.40.111, and 10.10.40.112. In front of these master nodes, we will deploy an HAProxy load balancer with the IP 10.10.40.113.

Regarding the Kubernetes worker nodes, we will deploy them on the IP range 10.10.40.120-10.10.40.123.

We will also need an Ubuntu 16.04 vSphere template and a dedicated user for the vSphere Cloud Provider. The steps to configure these are described later on.

Installing the requirements on the Ubuntu 16.04 client machine

1- Install Git.

$ sudo apt-get install git

2- Install Unzip.

$ sudo apt-get install unzip

3- Install Python 2.7.

$ sudo apt-get install python

4- Install Pip.

$ sudo apt-get install python-pip

5- Install Ansible.

$ pip install ansible

6- Install the Python netaddr library.

$ pip install netaddr

7- Install Terraform.

$ wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip

$ unzip terraform_0.11.7_linux_amd64.zip

$ sudo mv terraform /usr/local/bin

8- Install Kubectl.

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add -

$ sudo vim /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main

$ sudo apt-get update

$ sudo apt-get install kubectl
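Optionally, you can verify that the main tools were installed correctly by checking their versions:

$ terraform version

$ ansible --version

$ kubectl version --client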

Configuring the requirements on VMware vSphere

Create an Ubuntu 16.04 template

1- Create a new virtual machine.

Create Ubuntu vSphere template

2- Enter "ubuntu-16.04-terraform-template" as the name of the virtual machine.

Create Ubuntu vSphere template

3- Choose where to place the virtual machine.

Create Ubuntu vSphere template

4- Choose the compatibility of the virtual machine.

Create Ubuntu vSphere template

5- Choose Ubuntu as the guest OS type.

Create Ubuntu vSphere template

6- Set the type of the SCSI controller to "VMware Paravirtual".

Create Ubuntu vSphere template

7- Select the network of the virtual machine.

Create Ubuntu vSphere template

8- Select the Ubuntu 16.04 ISO.

Create Ubuntu vSphere template

9- Connect the CD drive when the virtual machine boots.

Create Ubuntu vSphere template

10- Complete the creation of the virtual machine.

Create Ubuntu vSphere template

11- Power on the virtual machine.

Create Ubuntu vSphere template

12- Open the console of the virtual machine.

Create Ubuntu vSphere template

13- Select the language of the installer.

Create Ubuntu vSphere template

14- Install Ubuntu 16.04 server.

Create Ubuntu vSphere template

15- Choose the language of the system.

Create Ubuntu vSphere template

16- Choose the location of the system.

Create Ubuntu vSphere template

17- Choose the mapping of the keyboard.

Create Ubuntu vSphere template

18- Configure the network card with a temporary IP. We will remove this configuration later on.

Create Ubuntu vSphere template

19- Choose the default hostname.

Create Ubuntu vSphere template

20- Leave the domain name empty.

Create Ubuntu vSphere template

21- Choose the name of your user.

Create Ubuntu vSphere template

22- Choose a password for your user.

Create Ubuntu vSphere template

23- Choose whether or not to encrypt your disk.

Create Ubuntu vSphere template

24- Configure the timezone of the system.

Create Ubuntu vSphere template

25- Select the partitioning method.

Create Ubuntu vSphere template

26- Select the disk on which to install Ubuntu 16.04.

Create Ubuntu vSphere template

27- Configure a proxy if you are using one to access the Internet.

Create Ubuntu vSphere template

28- Choose to install security updates automatically.

Create Ubuntu vSphere template

29- Install OpenSSH.

Create Ubuntu vSphere template

30- Install GRUB.

Create Ubuntu vSphere template

31- Complete the installation.

Create Ubuntu vSphere template

32- Once the virtual machine has rebooted, SSH to it from the client machine.

$ ssh sguyennet@10.10.40.254

33- Upgrade the system.

$ sudo apt-get update

$ sudo apt-get upgrade

34- Upgrade the kernel to a version above 4.8. You can skip this step if you are not planning to use Cilium as an overlay network for your Kubernetes cluster.

$ sudo apt-get install linux-image-4.15.0-15-generic \
linux-image-extra-4.15.0-15-generic

$ sudo reboot

35- Allow your user to use sudo without a password.

$ sudo visudo
...
sguyennet ALL=(ALL) NOPASSWD: ALL
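You can quickly verify that passwordless sudo works (with -n, sudo fails instead of prompting if a password would still be required):

$ sudo -n true && echo "passwordless sudo OK"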

36- Generate a private and a public key on the client machine. Leave the passphrase empty.

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sguyennet/.ssh/id_rsa): 
Created directory '/home/sguyennet/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/sguyennet/.ssh/id_rsa.
Your public key has been saved in /home/sguyennet/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:6pK2EnnNianYPjb/0YoEhZlz23tQIJwzkQ0bUytkgcg sguyennet@ubuntu
The key's randomart image is:
+---[RSA 2048]----+
|.. oXOo          |
|.E.*=.o         |
|  = =o. .        |
|   + + .         |
|  ...=o.S        |
|  o.+ +=         |
| o +..+ o        |
|. B.+o +         |
| o.+++          |
+----[SHA256]-----+

37- Copy the public key to the template virtual machine.

$ ssh-copy-id sguyennet@10.10.40.254

38- Verify that you can SSH to the template virtual machine without entering a password.

$ ssh sguyennet@10.10.40.254

39- Remove the ens192 network interface configuration of the template virtual machine.

$ sudo vim /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback

40- Restart the networking of the template virtual machine.

$ sudo systemctl restart networking

41- Shutdown the template virtual machine.

$ sudo shutdown now

42- Take a snapshot of the template virtual machine. This snapshot will be used to create linked clones of the template when deploying the virtual machines.

Snapshot linked clone

43- Convert the virtual machine to a template.

Convert to template

44- Create a folder for the template.

Template folder

45- Move the template to the new folder.

Template folder

Create a user to access the VMware vSphere storage from Kubernetes

1- Browse to the administration page.

Create user vSphere Cloud Provider

2- Add a new user called "k8s-vcp@vsphere.local".

Create user vSphere Cloud Provider

Create roles for the vSphere Cloud Provider user

1- Create a role to view the profile-driven storage.

Create roles vSphere Cloud Provider

2- Create a role to manage the Kubernetes node virtual machines.

Create roles vSphere Cloud Provider

3- Create a new role to manage the Kubernetes volumes.

Create roles vSphere Cloud Provider

Assign permission to the vSphere Cloud Provider user

1- Add the read-only permission at the datacenter level. Do not propagate the permission.

Add permission vSphere Cloud Provider

2- Add the profile-driven storage view permission at the vCenter level. Do not propagate the permission.

Add permission vSphere Cloud Provider

3- Add the manage node permission at the cluster level. This is the cluster where the Kubernetes nodes will be deployed. Propagate the permission.

Add permission vSphere Cloud Provider

4- Add the manage volumes permission at the datastore level. This is the datastore where the Kubernetes volumes will be created. Do not propagate the permission.

Add permission vSphere Cloud Provider

Create a directory for the vSphere Cloud Provider

1- Browse to the files tab of the datastore and create a new folder. The datastore needs to be the one to which you assigned permissions in the previous steps. This folder will store the virtual disks created by the vSphere Cloud Provider.

Create folder vSphere Cloud Provider

2- Name the folder "kubevols".

Create folder vSphere Cloud Provider
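As an optional alternative to the web client steps above, the same folder can be created from the command line with VMware's govc tool (this assumes govc is installed on the client machine; the connection values below simply reuse the vCenter and datastore from this lab):

$ export GOVC_URL='vcsa.inkubate.io'
$ export GOVC_USERNAME='administrator@vsphere.local'
$ export GOVC_PASSWORD='**********'
$ export GOVC_INSECURE=true

$ govc datastore.mkdir -ds=Datastore-02 kubevols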

Create a resource pool for the Kubernetes cluster

1- Add a new resource pool in the cluster to which you assigned permissions in the previous steps.

Add resource pool

Launching our first Kubernetes cluster

Clone the Terraform script

1- Go back to the client machine.

2- Clone the terraform-vsphere-kubespray GitHub repository.

$ git clone https://github.com/sguyennet/terraform-vsphere-kubespray.git

Configure the Terraform script

1- Go to the terraform-vsphere-kubespray directory.

$ cd terraform-vsphere-kubespray

2- Edit the terraform.tfvars configuration file and fill in the different variables. Enable the anti-affinity rule for the Kubernetes master virtual machines only if your vSphere cluster supports DRS. For the network plugin, you can choose between various options like Cilium, Weave, or Flannel.

$ vim terraform.tfvars
# vCenter connection
vsphere_vcenter = "vcsa.inkubate.io"
vsphere_user = "administrator@vsphere.local"
vsphere_password = "**********"
vsphere_unverified_ssl = "true"
vsphere_datacenter = "inkubate-lab"
vsphere_drs_cluster = "Compute-01"
vsphere_resource_pool = "Compute-01/Resources/kubernetes-kubespray"
vsphere_enable_anti_affinity = "true"
vsphere_vcp_user = "k8s-vcp@vsphere.local"
vsphere_vcp_password = "**********"
vsphere_vcp_datastore = "Datastore-02"

# Kubernetes infrastructure
vm_user = "sguyennet"
vm_password = "**********"
vm_folder = "kubernetes-kubespray"
vm_datastore = "Datastore-01"
vm_network = "pg-inkubate-production-static"
vm_template = "terraform-template/ubuntu-16.04-terraform-template"
vm_linked_clone = "false"

k8s_kubespray_url = "https://github.com/kubernetes-incubator/kubespray.git"
k8s_kubespray_version = "v2.5.0"
k8s_version = "v1.10.2"
k8s_master_ips = {
  "0" = "10.10.40.110"
  "1" = "10.10.40.111"
  "2" = "10.10.40.112"
}
k8s_worker_ips = {
  "0" = "10.10.40.120"
  "1" = "10.10.40.121"
  "2" = "10.10.40.122"
}
k8s_haproxy_ip = "10.10.40.113"
k8s_netmask = "24"
k8s_gateway = "10.10.40.1"
k8s_dns = "10.10.40.1"
k8s_domain = "inkubate.io"
k8s_network_plugin = "weave"
k8s_weave_encryption_password = "**********"
k8s_master_cpu = "1"
k8s_master_ram = "2048"
k8s_worker_cpu = "1"
k8s_worker_ram = "2048"
k8s_haproxy_cpu = "1"
k8s_haproxy_ram = "1024"
k8s_node_prefix = "k8s-kubespray"

3- Initialize the Terraform script. This step downloads the necessary Terraform provider.

$ terraform init

4- Check what Terraform is going to deploy.

$ terraform plan

5- Deploy the Kubernetes cluster.

$ terraform apply
...
Apply complete! Resources: 17 added, 0 changed, 0 destroyed.

6- List the Kubernetes nodes.

$ kubectl --kubeconfig config/admin.conf get nodes
NAME                     STATUS    ROLES     AGE       VERSION
k8s-kubespray-master-0   Ready     master    1m       v1.10.2
k8s-kubespray-master-1   Ready     master    1m       v1.10.2
k8s-kubespray-master-2   Ready     master    2m       v1.10.2
k8s-kubespray-worker-0   Ready     node      1m       v1.10.2
k8s-kubespray-worker-1   Ready     node      1m       v1.10.2
k8s-kubespray-worker-2   Ready     node      1m       v1.10.2
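You can also check that all the control plane components and the network plugin came up correctly by listing the pods of the kube-system namespace:

$ kubectl --kubeconfig config/admin.conf get pods -n kube-system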

Scaling the cluster

Add one or several worker nodes

1- Edit the terraform.tfvars configuration file and add a new worker node to the list of worker IPs.

$ vim terraform.tfvars
...
k8s_worker_ips = {
  "0" = "10.10.40.120"
  "1" = "10.10.40.121"
  "2" = "10.10.40.122"
  "3" = "10.10.40.123"
}

2- Add the new worker node to the cluster.

$ terraform apply -var 'action=add_worker'
...
Apply complete! Resources: 3 added, 0 changed, 2 destroyed.

3- Check that the worker node was added to the Kubernetes cluster.

$ kubectl --kubeconfig config/admin.conf get nodes
NAME                     STATUS    ROLES     AGE       VERSION
k8s-kubespray-master-0   Ready     master    31m       v1.10.2
k8s-kubespray-master-1   Ready     master    31m       v1.10.2
k8s-kubespray-master-2   Ready     master    32m       v1.10.2
k8s-kubespray-worker-0   Ready     node      31m       v1.10.2
k8s-kubespray-worker-1   Ready     node      31m       v1.10.2
k8s-kubespray-worker-2   Ready     node      31m       v1.10.2
k8s-kubespray-worker-3   Ready     node      1m        v1.10.2

Remove one or several worker nodes

1- Edit the terraform.tfvars configuration file and remove a worker node from the list of worker IPs.

$ vim terraform.tfvars
...
k8s_worker_ips = {
  "0" = "10.10.40.120"
  "1" = "10.10.40.121"
  "2" = "10.10.40.122"
}

2- Remove the worker node from the cluster. Before being removed, the node is drained and the pods running on it are rescheduled on the other worker nodes.

$ terraform apply -var 'action=remove_worker'
...
Apply complete! Resources: 1 added, 0 changed, 3 destroyed.
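For reference, the drain performed by the script before deleting the node is roughly equivalent to running the following by hand (a sketch, not the exact command issued by the playbook):

$ kubectl --kubeconfig config/admin.conf drain k8s-kubespray-worker-3 \
--ignore-daemonsets --delete-local-data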

3- Check that the worker node was removed from the Kubernetes cluster.

$ kubectl --kubeconfig config/admin.conf get nodes
NAME                     STATUS    ROLES     AGE       VERSION
k8s-kubespray-master-0   Ready     master    36m       v1.10.2
k8s-kubespray-master-1   Ready     master    36m       v1.10.2
k8s-kubespray-master-2   Ready     master    36m       v1.10.2
k8s-kubespray-worker-0   Ready     node      36m       v1.10.2
k8s-kubespray-worker-1   Ready     node      36m       v1.10.2
k8s-kubespray-worker-2   Ready     node      36m       v1.10.2

Upgrading the cluster to a new version of Kubernetes

1- Edit the terraform.tfvars configuration file and modify the Kubernetes version.

$ vim terraform.tfvars
...
k8s_version = "v1.10.3"
...

2- Open a new terminal on the client machine to monitor what is happening during the upgrade of the Kubernetes cluster.

$ watch -n 1 kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf get nodes
NAME                     STATUS                     ROLES     AGE       VERSION
k8s-kubespray-master-0   Ready                      master    1h        v1.10.3
k8s-kubespray-master-1   Ready                      master    1h        v1.10.3
k8s-kubespray-master-2   Ready                      master    1h        v1.10.3
k8s-kubespray-worker-0   Ready                      node      1h        v1.10.3
k8s-kubespray-worker-1   Ready,SchedulingDisabled   node      1h        v1.10.2
k8s-kubespray-worker-2   Ready                      node      1h        v1.10.2

3- Upgrade the Kubernetes version. The upgrade is done node by node: each worker node is drained and the pods running on it are rescheduled on the other nodes. This should avoid downtime for applications running in the Kubernetes cluster, as long as they are scaled to at least two replicas.

$ terraform apply -var 'action=upgrade'
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Checking that the vSphere Cloud Provider is working

Create a storage class for our application

1- Create the file redis-sc.yaml.

$ vim redis-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
    diskformat: thin

2- Create the storage class.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf apply -f redis-sc.yaml
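You can optionally confirm that the storage class was registered:

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf get storageclass thin-disk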

Create a persistent storage claim for the Redis master node

1- Create the file redis-master-claim.yaml.

$ vim redis-master-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-master-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: thin-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

2- Create the persistent storage claim.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf apply -f redis-master-claim.yaml
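Before looking at the datastore, you can check that the claim was bound to a volume by the vSphere Cloud Provider:

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf get pvc redis-master-claim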

3- Check that the virtual disk for the Redis master pod was created in the "kubevols" directory.

vSphere Kubernetes volume claim

Create a persistent storage claim for the Redis slave node

1- Create the file redis-slave-claim.yaml.

$ vim redis-slave-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-slave-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: thin-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

2- Create the persistent storage claim.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf apply -f redis-slave-claim.yaml

3- Check that the virtual disk for the Redis slave pod was created in the "kubevols" directory.

vSphere Kubernetes volume claim

Launch the application

1- Create the file guestbook-all-in-one.yaml.

$ vim guestbook-all-in-one.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    tier: backend
    role: master
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    tier: backend
    role: master
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: redis
  #   role: master
  #   tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     role: master
  #     tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/google_containers/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-master-data
          mountPath: /data
      volumes: 
      - name: redis-master-data
        persistentVolumeClaim:
          claimName: redis-master-claim
---
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    tier: backend
    role: slave
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    app: redis
    tier: backend
    role: slave
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: redis
  #   role: slave
  #   tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     role: slave
  #     tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-slave-data
          mountPath: /data
      volumes: 
      - name: redis-slave-data
        persistentVolumeClaim:
          claimName: redis-slave-claim
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: NodePort
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: guestbook
  #   tier: frontend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 4
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80

2- Start the application.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf apply -f guestbook-all-in-one.yaml

3- Get the port on which the application is running.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf describe service frontend | grep NodePort
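If you prefer to extract only the port number, a jsonpath query gives the same information (optional alternative):

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf get service frontend \
-o jsonpath='{.spec.ports[0].nodePort}'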

4- Browse to http://[ip_of_one_of_your_kubernetes_node]:[your_application_port].

5- Add some messages to the guestbook.

6- Destroy the application.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf delete -f guestbook-all-in-one.yaml

7- Check that the application was deleted.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf get pods

8- Create the application again.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf apply -f guestbook-all-in-one.yaml

9- Get the new port on which the application is running.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf describe service frontend | grep NodePort

10- Browse to http://[ip_of_one_of_your_kubernetes_node]:[your_application_port].

Your messages should still be there.

Accessing the Kubernetes dashboard

1- Create an admin user manifest.

$ vim kubernetes-dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

2- Create the admin user.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf create -f kubernetes-dashboard-admin.yaml

3- Get the admin user token.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf -n kube-system describe secret $(kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf -n kube-system get secret | grep admin-user | awk '{print $1}')

4- Copy the token.

5- Start the proxy to access the dashboard.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf proxy

6- Browse to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy.

7- Select Token and paste the token from step 4.

kubernetes dashboard

Installing Heapster

Heapster is a small monitoring tool. It collects the performance metrics of the different pods running in the cluster and displays them in the Kubernetes dashboard.

1- Create a manifest for Heapster.

$ vim heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

2- Deploy Heapster.

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf create -f heapster.yaml
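Give the deployment a minute to pull the image, then check that the Heapster pod is running before refreshing the dashboard:

$ kubectl --kubeconfig ~/terraform-vsphere-kubespray/config/admin.conf -n kube-system get pods -l k8s-app=heapster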

Destroying the Kubernetes cluster

1- Go to the terraform-vsphere-kubespray directory.

$ cd ~/terraform-vsphere-kubespray

2- Destroy the deployment.

$ terraform destroy

Conclusion

You now have a way to easily deploy, scale, upgrade, and destroy a Kubernetes cluster on VMware vSphere. This will allow you to do even more testing without worrying about breaking the cluster, as you can spin up a new one automatically.

If you find an issue with the Terraform script, please let me know by opening an issue on GitHub: https://github.com/sguyennet/terraform-vsphere-kubespray
