Use vSphere Storage as Kubernetes persistent volumes

Hatchway is a VMware open source project whose goal is to make vSphere storage technology available to Docker containers and Kubernetes pods. It is composed of two sub-projects: a volume plugin for Docker and vSphere Cloud Provider, the cloud provider for Kubernetes. If you are using VMware vSAN, one of the main benefits of vSphere Cloud Provider is that you can leverage vSAN storage policies when provisioning volumes.
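
As an illustration of the vSAN integration, the vsphere-volume provisioner exposes a storagePolicyName parameter. A storage class similar to the following sketch could be used to provision volumes against an existing vSAN storage policy (the policy name "gold" is only an example and must exist in your vCenter):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-gold
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  # Name of an existing vSAN storage policy (example value).
  storagePolicyName: gold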


Prerequisites

You must have a Kubernetes cluster running on VMware vSphere virtual machines, and VMware Tools must be installed on each virtual machine of the Kubernetes cluster.
If you don't have a Kubernetes cluster running on VMware vSphere, you can follow the Install and configure a multi-master Kubernetes cluster with kubeadm article.
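
Before installing vSphere Cloud Provider, you can check that all the nodes of your cluster are up:

$ kubectl get nodes

Every node should report a Ready status.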

Installing vSphere Cloud Provider

Adding vSphere Cloud Provider namespace, account, and roles

1- Create the file vcp-namespace-account-and-roles.yaml.

$ vim vcp-namespace-account-and-roles.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: vmware
  labels:
    name: vmware
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: vmware
  name: vcpsa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: sa-vmware-default-binding
  namespace: vmware
subjects:
- kind: User
  name: system:serviceaccount:vmware:default
  namespace: vmware
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: sa-vmware-vcpsa-binding
  namespace: vmware
subjects:
- kind: User
  name: system:serviceaccount:vmware:vcpsa
  namespace: vmware
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

2- Create the vSphere Cloud Provider namespace, account, and roles.

$ kubectl create -f vcp-namespace-account-and-roles.yaml
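
You can verify that the namespace and the service account were created:

$ kubectl get namespace vmware
$ kubectl get serviceaccount vcpsa -n vmware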

Adding the VCP vSphere user

1- Log in to the VMware vSphere web client as the administrator@vsphere.local user.


2- Navigate to the administration page.


3- Go to the Users and Groups menu.


4- Add the user k8s-vcp.


Configuring vSphere Cloud Provider with the vSphere information

1- Encode the vSphere usernames and passwords.

$ echo -n 'administrator@vsphere.local' | base64

$ echo -n '[your_administrator_password]' | base64

$ echo -n 'k8s-vcp@vsphere.local' | base64

$ echo -n '[your_k8s-vcp_password]' | base64

Copy the encoded usernames and passwords as we will need them in the next step.
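
You can verify an encoded value by decoding it. For example, the sample administrator username used in the secret below decodes back to the original string (on macOS, use base64 -D instead of base64 -d):

$ echo 'QWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs' | base64 -d
Administrator@vsphere.local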

2- Create and modify the vcp-secret.yaml file to reflect your VMware vSphere infrastructure.

$ vim vcp-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-cloud-provider-secret
  namespace: vmware
type: Opaque
data:
  # base64 encode username and password
  # vc_admin_username ==> echo -n 'Administrator@vsphere.local' | base64
  # vc_admin_password ==> echo -n 'Admin!23' | base64
  # vcp_username ==> echo -n 'vcpuser@vsphere.local' | base64
  # vcp_password ==> echo -n 'Admin!23' | base64
  #
  vc_admin_username: QWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs
  vc_admin_password: QWRtaW4hMjM=
  vcp_username: dmNwdXNlckB2c3BoZXJlLmxvY2Fs
  vcp_password: QWRtaW4hMjM=
stringData:
  vc_ip: "vcsa.inkubate.io"
  vc_port: "443"
  # datacenter is the datacenter name in which Node VMs are located.
  datacenter: "inkubate-lab"
  # default_datastore is the Default datastore VCP will use for provisioning volumes using storage classes/dynamic provisioning
  default_datastore: "Datastore-01"
  # node_vms_folder is the name of VM folder where all node VMs are located or to be placed under vcp_datacenter. This folder will be created if not present.
  node_vms_folder: "kubernetes"
  # node_vms_cluster_or_host is the name of host or cluster on which node VMs are located.
  node_vms_cluster_or_host: "Compute-01"
  # vcp_configuration_file_location is the location where the VCP configuration file will be created.
  # This location should be mounted and accessible to controller pod, api server pod and kubelet pod.
  vcp_configuration_file_location: "/etc/vsphereconf"
  # kubernetes_api_server_manifest is the file from which api server pod takes parameters
  kubernetes_api_server_manifest: "/etc/kubernetes/manifests/kube-apiserver.yaml"
  # kubernetes_controller_manager_manifest is the file from which controller manager pod takes parameters
  kubernetes_controller_manager_manifest: "/etc/kubernetes/manifests/kube-controller-manager.yaml"
  # kubernetes_kubelet_service_name is the name of the kubelet service
  kubernetes_kubelet_service_name: kubelet.service
  # kubernetes_kubelet_service_configuration_file is the file from which kubelet reads parameters
  kubernetes_kubelet_service_configuration_file: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
  # configuration backup directory
  configuration_backup_directory: "/configurationbackup"
  # rollback value: off or on
  enable_roll_back_switch: "off"

3- Create the "kubevols" directory on the vSphere datastore that you specified in the previous configuration file.

4- Configure vSphere Cloud Provider.

$ kubectl create --save-config -f vcp-secret.yaml
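
You can check that the secret exists without displaying its values:

$ kubectl describe secret vsphere-cloud-provider-secret -n vmware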

Deploying vSphere Cloud Provider

1- Create the file enable-vsphere-cloud-provider.yaml.

$ vim enable-vsphere-cloud-provider.yaml
apiVersion: v1
kind: Pod
metadata:
    name: vcp-manager
    namespace: vmware
spec:
    containers:
    - name: vcp-manager-container
      image: sguyennet/enablevcp:v1
      env:
      - name: POD_ROLE
        value: "MANAGER"
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
      volumeMounts:
      - name: secret-volume
        mountPath: /secret-volume
        readOnly: true
    restartPolicy: Never
    hostNetwork: true
    volumes:
    - name: secret-volume
      secret:
        secretName: vsphere-cloud-provider-secret

Warning: this file is a slightly modified version of the original. The modification allows the vSphere Cloud Provider pods to be scheduled on the master nodes without removing the "NoSchedule" taint on them. The modification is documented here: https://github.com/vmware/kubernetes/pull/478.

2- Deploy vSphere Cloud Provider.

$ kubectl create -f enable-vsphere-cloud-provider.yaml
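
The vcp-manager pod reconfigures the API server, the controller manager, and the kubelet according to the parameters stored in the secret. You can follow its progress:

$ kubectl get pods -n vmware
$ kubectl logs vcp-manager -n vmware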

Testing the vSphere Cloud Provider

Creating a storage class for our application

1- Create the file redis-sc.yaml.

$ vim redis-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
    diskformat: thin

2- Create the storage class.

$ kubectl apply -f redis-sc.yaml
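
Check that the storage class is available:

$ kubectl get storageclass thin-disk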

Creating a persistent storage claim for the Redis master node

1- Create the file redis-master-claim.yaml.

$ vim redis-master-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-master-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: thin-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

2- Create the persistent storage claim.

$ kubectl apply -f redis-master-claim.yaml
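
Note: the volume.beta.kubernetes.io/storage-class annotation is the legacy way of selecting a storage class; on recent Kubernetes versions you can set storageClassName: thin-disk in the claim spec instead. In both cases, the claim should be bound to a dynamically provisioned vSphere volume after a few seconds:

$ kubectl get pvc redis-master-claim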

Creating a persistent storage claim for the Redis slave node

1- Create the file redis-slave-claim.yaml.

$ vim redis-slave-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-slave-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: thin-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

2- Create the persistent storage claim.

$ kubectl apply -f redis-slave-claim.yaml
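
Both claims should now be bound to volumes backed by VMDK files in the kubevols directory of the datastore:

$ kubectl get pvc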

Launching the application

1- Create the file guestbook-all-in-one.yaml.

$ vim guestbook-all-in-one.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    tier: backend
    role: master
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    tier: backend
    role: master
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: redis
  #   role: master
  #   tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     role: master
  #     tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/google_containers/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-master-data
          mountPath: /data
      volumes: 
      - name: redis-master-data
        persistentVolumeClaim:
          claimName: redis-master-claim
---
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    tier: backend
    role: slave
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    app: redis
    tier: backend
    role: slave
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: redis
  #   role: slave
  #   tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     role: slave
  #     tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-slave-data
          mountPath: /data
      volumes: 
      - name: redis-slave-data
        persistentVolumeClaim:
          claimName: redis-slave-claim
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # NodePort exposes the frontend on a port of each Kubernetes node.
  # If your cluster supports it, you can use type: LoadBalancer instead to get
  # an external load-balanced IP for the frontend service.
  type: NodePort
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: guestbook
  #   tier: frontend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 4
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80

2- Start the application.

$ kubectl apply -f guestbook-all-in-one.yaml
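
You can watch the pods start and list the persistent volumes that were dynamically provisioned for the two Redis claims:

$ kubectl get pods
$ kubectl get pv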

3- Get the port on which the application is running.

$ kubectl describe service frontend | grep NodePort

4- Browse to http://[ip_of_one_of_your_kubernetes_node]:[your_application_port].

5- Add some messages to the guestbook.

6- Destroy the application.

$ kubectl delete -f guestbook-all-in-one.yaml

7- Check that the application was deleted.

$ kubectl get pods
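
Because guestbook-all-in-one.yaml does not define the persistent volume claims, deleting it removes the pods and services but leaves the claims, and therefore the underlying vSphere volumes, in place:

$ kubectl get pvc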

8- Create a new application.

$ kubectl apply -f guestbook-all-in-one.yaml

9- Get the new port on which the application is running.

$ kubectl describe service frontend | grep NodePort

10- Browse to http://[ip_of_one_of_your_kubernetes_node]:[your_application_port].

Your messages should still be there.
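
The Redis data survived the redeployment because it is stored on the vSphere volumes bound to the claims, not inside the containers. If you have the govc CLI installed, you can list the VMDK files that vSphere Cloud Provider created on the datastore (adjust the datastore name to your environment):

$ govc datastore.ls -ds Datastore-01 kubevols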
