So, you’ve installed PKS and created a PKS cluster. Excellent! Now what?
We want to use helm charts to deploy applications. Many of the charts use PersistentVolumes, so getting PVs set up is our first step.
There are a couple of complicating factors to be aware of when it comes to PVs in a multi-AZ/multi-vSphere-cluster environment. First, you probably have cluster-specific datastores – particularly if you are using Pivotal Ready Architecture and vSAN. These datastores are not suitable for PersistentVolumes consumed by applications deployed to our Kubernetes cluster. To work around this, we’ll need to provide some shared storage to each host in each cluster. Probably the simplest way to do this is with an NFS share.
Prerequisites:
Common datastore; NFS share or iSCSI
In production, you’ll want a production-quality, fault-tolerant solution for NFS or iSCSI, like Dell EMC Isilon. For this proof-of-concept, I’m going to use an existing NFS server, create a volume, and share it with the hosts in the three vSphere clusters where the PKS workload VMs will run. In this case, the NFS datastore is named “sharednfs” ’cause I’m creative like that. Make sure that your hosts have adequate permissions to the share. Using VMFS on iSCSI is supported; just be aware that you may need to cable up additional NICs if yours are already consumed by N-VDS and/or vSAN.
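For reference, here is a minimal sketch of mounting that NFS export as a datastore on each ESXi host from the command line; the NFS server name and export path below are placeholders for your own environment:

esxcli storage nfs add --host=nfs01.lab13.myenv.lab --share=/export/sharednfs --volume-name=sharednfs
esxcli storage nfs list   # confirm the datastore mounted on this host

Repeat (or script) this for every host in the three clusters, or use the vSphere Client / PowerCLI if you prefer.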
Workstation Prep
We’ll need a handful of command-line tools, so make sure your workstation has the PKS CLI and kubectl CLI from Pivotal, and that you’ve downloaded and extracted Helm.
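A quick way to confirm the tools are on your path before going further (exact output will vary by version):

pks --version
kubectl version --client
helm version --client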
PKS Cluster
We’ll want to provision a cluster using the PKS CLI tool. This document assumes that your cluster was provisioned successfully, but nothing else has been done to it. For my environment, I configured the “medium” plan to use 3 Masters and 3 Workers across all three AZs, then created the cluster with this command:
pks create-cluster pks1cl1 --external-hostname cl1.pks1.lab13.myenv.lab --plan "medium" --num-nodes "3"
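If you want to double-check the plan configuration first, or watch provisioning progress afterwards, the PKS CLI can show both:

pks plans      # list the plans published by the PKS tile
pks clusters   # list clusters and the status of their last action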
Logged-in
Make sure you’re logged into the Kubernetes cluster. In PKS, the easiest way to do this is via the PKS CLI:
pks login -a api.pks1.lab13.myenv.lab -u pksadmin -p my_password --skip-ssl-validation
pks cluster pks1cl1
pks get-credentials pks1cl1
kubectl config use-context pks1cl1
kubectl get nodes -o wide
Where “pks1cl1” is replaced by your cluster’s name, “api.pks1.lab13.myenv.lab” is replaced by the FQDN of your PKS API server, “pksadmin” is replaced by the username with admin rights to PKS, and “my_password” is replaced with that account’s password.
Procedure:
- Create storageclass
- Create storageclass spec yaml. Note that the file is named storageclass-nfs.yml and we’re naming the storage class itself “nfs”:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: sharednfs
  fstype: ext3
- Apply the yml with kubectl
kubectl create -f storageclass-nfs.yml
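To confirm the storage class exists and is flagged as the default, something like this should do it:

kubectl get storageclass
kubectl describe storageclass nfs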
- Create a sample PVC (PersistentVolumeClaim). Note that the file is named pvc-sample.yml, the PVC name is “pvc-sample”, and it uses the “nfs” storageclass we created above. This step is not absolutely necessary, but it will help confirm we can use the storage.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sample
  annotations:
    volume.beta.kubernetes.io/storage-class: nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
- Apply the yml with kubectl
kubectl create -f pvc-sample.yml
If you’re watching vSphere closely, you’ll see a VMDK created in the kubevols folder of the NFS datastore.
- Check that the PVC was created with
kubectl get pvc
and
kubectl describe pvc pvc-sample
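A successful claim should report a Bound status; as a quick sketch, you can pull just that field with:

kubectl get pvc pvc-sample -o jsonpath='{.status.phase}'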
- Remove sample PVC with
kubectl delete -f pvc-sample.yml
- Configure Helm and Tiller
- Create a Service Account for tiller. Save the following as rbac-config.yml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
- Apply the service account YAML with kubectl
kubectl create -f rbac-config.yml
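To verify that the service account and its binding landed, a quick check might be:

kubectl get serviceaccount tiller -n kube-system
kubectl get clusterrolebinding tiller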
- Initialize helm and tiller with
helm init --service-account tiller
- Check that tiller is ready
helm version
Look for both a client and a server version number in the output; note that it might take a few seconds for tiller in the cluster to become ready.
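If the server version doesn’t show up right away, you can also watch the tiller pod directly; this label selector assumes the default tiller deployment that helm init creates:

kubectl get pods -n kube-system -l name=tiller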
- Deploy sample helm chart
- Update the helm local chart repository. We do this so that we can be sure that helm can reach the public repo and to cache the latest chart information locally.
helm repo update
If this step results in a certificate error, you may have to add the cert to the trusted certificates on the workstation.
- Install a helm chart with ingress enabled. Here, I’ve selected the Dokuwiki app. The command below will enable ingress, so we can access it via a routable IP, and it will use the default storageclass we configured earlier.
helm install --name dokuwiki \
--set ingress.enabled="true",dokuwikiUsername=admin,dokuwikiPassword=password \
stable/dokuwiki
Edit – April 23, 2019 – Passing the credentials in here makes connecting easier later.
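Once the install command returns, helm can summarize everything the chart created – a handy sanity check before the confirmation steps below:

helm status dokuwiki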
- Confirm that the app was deployed
helm list
kubectl get pods -n default
kubectl get services -n default
From the get services results, make a note of the external IP address – in the example above, it’s 192.13.6.73.
- Point a browser at the external address from the previous step and marvel at your success in deploying Dokuwiki via helm to Kubernetes!
If you want to actually log in to your Dokuwiki instance, first obtain the password for the user account with this command:
kubectl get secret -n default dokuwiki-dokuwiki \
  -o jsonpath="{.data.dokuwiki-password}" | base64 --decode
Then log in with username “user” and that password.
Edit – 04/23/19 – Log in with the username and password you included in the helm install command.
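If you’d rather verify from the command line before opening a browser, a quick probe of the external address should return an HTTP response (substitute your own IP from the get services output):

curl -I http://192.13.6.73/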
- Additional info
- View Persistent Volume Claims with
kubectl get pvc -n default
This will list the PVCs and their volumes in the “default” namespace. Note that the volume name corresponds to the name of the VMDK on the datastore; a sketch of tracing a claim back to its VMDK follows below.
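As a rough sketch, you can trace a claim back to its VMDK via the bound PersistentVolume; replace “my-claim” below with a claim name from the listing above:

kubectl get pv    # lists volumes and the claims they are bound to
kubectl describe pv $(kubectl get pvc my-claim -n default -o jsonpath='{.spec.volumeName}')   # the Source section shows the vSphere VolumePath on the datastore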
- Load-Balancer
Notice that since we are leveraging the NSX-T Container Networking Interface and enabled the ingress when we installed dokuwiki, a load-balancer in NSX-T was automatically created for us to point to the application.
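To see the Kubernetes side of what NCP wired up, you can inspect the ingress resources; exact names and output will depend on the release and chart version:

kubectl get ingress -n default
kubectl describe ingress -n default   # the assigned address should correspond to an NSX-T load balancer virtual server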
This took me some time to figure out; I had to weed through a lot of documentation – some of which contradicted itself – and quite a bit of trial and error. I hope this helps save someone time later!