Adding a private Docker registry to a PKS 1.5 Windows Kubernetes cluster

Pivotal Container Service (PKS) 1.5 and Kubernetes 1.14 bring *beta* support for Workers running Windows.  This means that we can bring the advantages of Kubernetes to a huge array of applications running on Windows.  I see this as especially useful for Windows applications that you don’t have the source code for and/or do not want to invest in reworking for .NET Core or languages that run on Linux.

In nearly all cases, you’ll need an image with your applications’ dependencies or configuration, and in the real world we don’t want those in a public registry like Docker Hub.  Enter private Docker registries.

PKS Enterprise includes VMware Harbor as a private registry; it’s very easy to deploy alongside PKS and provides a lot of important functionality.  The Harbor interface uses TLS/SSL; you may use a self-signed, enterprise PKI-signed or public CA-signed certificate.  If you choose not to use a public CA-signed certificate ($!), the self-signed or PKI-signed certificate must be trusted by the Docker engine on each Kubernetes worker node.

Clusters based on Ubuntu Xenial Stemcells:

  • The operator/administrator simply puts the CA certificate into the “Trusted Certificates” box of the Security section in Ops Manager.
  • When BOSH creates the VMs for Kubernetes clusters, the trusted certificates are added to the certificate store automatically.
  • If using an enterprise PKI where all of the internal certificates are signed by the Enterprise CA, this method makes it very easy to trust and “un-trust” CAs.

Clusters based on Windows 2019 Stemcells:

This is one of those tasks that is easier to perform on Linux than it is on Windows.  Unfortunately, Windows does not automatically add the Trusted Certificates from Ops Manager to the certificate store, so extra steps are required.

    1. Obtain the Registry CA Certificate.  In Harbor, you may click the “REGISTRY CERTIFICATE” link while in a Project.  Save the certificate to wherever the BOSH CLI is installed (typically Ops Manager).
    2. Connect the BOSH CLI to the director.  This may be done on the Ops Manager.
    3. List the BOSH-managed VMs to identify the service-instance deployment corresponding to the targeted K8s cluster by matching a VM IP address to the IP address of the master node as reported by the PKS CLI; see the example below.
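      One way to do this (ENV is your BOSH environment alias, my-cluster is a placeholder cluster name and 10.40.14.10 is just an example master IP) is to get the master IP from PKS, then search the BOSH VM list for it:

      pks cluster my-cluster      # note the “Kubernetes Master IP(s)” value
      bosh -e ENV vms             # list VMs across all deployments
      bosh -e ENV vms | grep 10.40.14.10

      The service-instance deployment whose master VM carries that IP is the one to use in the following steps.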
    4. Run this command to copy the certificate to the Windows worker
      bosh -e ENV -d DEPLOYMENT scp root.cer WINDOWS-WORKER:/
      • ENV – your environment alias in the BOSH cli
      • DEPLOYMENT – the BOSH deployment that corresponds to the k8s cluster; ex: service-instance_921bd35d-c46d-4e7a-a289-b577ff743e15
      • WINDOWS-WORKER – the instance name of the specific Windows worker VM; ex: windows-worker/277536dd-a7e6-446b-acf7-97770be18144

      This command copies the local file named root.cer to the root folder on the Windows VM

    5. Use BOSH to SSH into the Windows Worker.
      bosh -e ENV -d DEPLOYMENT ssh WINDOWS-WORKER
      • ENV – your environment alias in the BOSH cli
      • DEPLOYMENT – the BOSH deployment that corresponds to the k8s cluster; ex: service-instance_921bd35d-c46d-4e7a-a289-b577ff743e15
      • WINDOWS-WORKER – the instance name of the specific Windows worker VM; ex: windows-worker/277536dd-a7e6-446b-acf7-97770be18144

      SSH into Windows node, notice root.cer on the filesystem
    6. In the Windows SSH session run “powershell.exe” to enter PowerShell
    7. At the PS prompt, enter
      Import-Certificate -FilePath .\root.cer -CertStoreLocation Cert:\LocalMachine\Root

      The example above imports the local file “root.cer” into the Trusted Root Certificate Store
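      To confirm the import, you can list the store and look for your CA’s subject (the “Harbor” match below is only an illustration; use whatever subject your CA certificate actually carries):

      Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -like "*Harbor*" }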

    8. Type “exit” twice to exit PS and SSH
    9. Repeat steps 5-8 for each worker node.

Add docker-registry secret to k8s cluster

Whether or not the k8s cluster is running Windows workers, you’ll want to add credentials for authenticating to Harbor.  These credentials are stored in a secret. To add the secret, use this command:

kubectl create secret docker-registry harbor \
--docker-server=HARBOR_FQDN \
--docker-username=HARBOR_USER \
--docker-password=USER_PASS \
--docker-email=USER_EMAIL
  • HARBOR_FQDN – FQDN for local/private Harbor registry
  • HARBOR_USER – name of user in Harbor with access to project and repos containing the desired images
  • USER_PASS – password for the above account
  • USER_EMAIL – email address for the above account

Note that this secret is namespaced; it needs to be added to the namespace of the deployments that will reference it.
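For example, if your deployments live in a hypothetical namespace named my-app, create the secret there and confirm it exists:

kubectl create secret docker-registry harbor \
--docker-server=HARBOR_FQDN \
--docker-username=HARBOR_USER \
--docker-password=USER_PASS \
--docker-email=USER_EMAIL \
--namespace=my-app

kubectl get secret harbor --namespace=my-app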

More info

Here’s an example deployment yaml for a Windows K8s cluster that uses a local private docker registry.  Note that Windows clusters cannot leverage NSX-T yet, so this example uses a NodePort to expose the service.
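A minimal sketch of such a deployment and its NodePort service looks like this; the names (win-webapp, my-project), port numbers and labels are placeholders, and depending on your cluster version you may need the beta.kubernetes.io/os node label instead.  The important parts are the imagePullSecrets reference to the “harbor” secret created above and the nodeSelector that keeps the pods on Windows workers:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webapp
  template:
    metadata:
      labels:
        app: win-webapp
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      imagePullSecrets:
        - name: harbor
      containers:
        - name: win-webapp
          image: HARBOR_FQDN/my-project/win-webapp:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: win-webapp
spec:
  type: NodePort
  selector:
    app: win-webapp
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080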

Manually creating a Kubernetes cluster with kubeadm

I’ve talked about Pivotal Container Service (PKS) before and now work for Pivotal, so I’ve frequently got K8s on my mind.  I’ve discussed at length the benefits of PKS and the creation of K8s clusters, but didn’t have much of a point of reference for alternatives.  I know about Kelsey Hightower’s book and was looking for something a little less in-the-weeds.

Enter kubeadm, included in recent versions of K8s.  With this tool, you’re better able to understand what goes into a cluster, how the master and workers are related and how the networking is organized.  I really wanted to stand up a K8s cluster alongside a PKS-managed cluster in order to better understand the differences (if any).  This is also part of the Linux Foundation training “Kubernetes Fundamentals”.  I don’t want to spoil the course for you, but will point to some of the docs on kubernetes.io.


Getting Ready

I used VMware Fusion on the MacBook to create and run two Ubuntu 18.04 VMs.  Each was a linked clone with 2GB RAM and 1 vCPU.  I had to make sure that they had different MAC addresses, IPs, UUIDs and host names.  I’m sure you can use nearly any virtualization tool to get your VMs running.  Once they’re running, be sure you can SSH into each.

Install Docker

I thought, “hey, I’ve done this before” and just installed Docker as per usual, but that method does not set the correct cgroup driver, so we’ll want to install Docker with the script found here.
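The key piece, assuming Docker was installed from the standard packages, is pointing the Docker daemon at the systemd cgroup driver (the one kubelet expects) and restarting it; a daemon.json along these lines does that:

sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker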

Install Tools

Once again, the kubernetes.io site provides commands to properly install kubeadm, kubelet and kubectl on our Ubuntu nodes.  Use the commands on that page to ensure kubelet is installed and held to the correct version.
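At the time of writing, the commands on that page amount to roughly the following on Ubuntu (run on both nodes); check kubernetes.io for the current repository details before copying:

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl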

Choose your CNI pod network

Ok, what?  CNI, the Container Network Interface, is a specification for networking add-ons for K8s.  Kubeadm requires that we use a pod network add-on that implements the CNI spec.  The pod network – we may have only one per K8s cluster – is the network that the pods communicate on; think of it as a NAT’d space rather than the network you’ve actually assigned to the Ubuntu nodes.  This can be confusing, because the pod network CIDR is separate from both the node network and the service CIDR that comes into play when we “expose” a service.  Which pod-network-cidr you assign depends on which network add-on you select.  In my case, I went with Canal as it seems to be both powerful and flexible.  Also, the pod-network CIDR used by Calico is “192.168.0.0/16”, which is already in use in my home lab – it may not have actually been a conflict, but it certainly would be confusing if it were in use twice.

Create the master node
Make sure you’re SSH’d into your designated “Master” Ubuntu VM and that you’ve installed kubeadm, kubelet, kubectl and Docker from the steps above.  If you also chose Canal, you’ll initialize the master node (not on the worker node – we’ll have a different command for that one) by running

kubeadm init --pod-network-cidr=10.244.0.0/16

Exactly that CIDR. It’ll take a few minutes to download, install and configure the k8s components.  When the initialization has completed, you’ll see a message like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.129:6443 --token zdjzrp.5jad4gihqjo46olg \
--discovery-token-ca-cert-hash sha256:90d1b349aa93a7130ee91668e4e763a4c29e5fc1502060191b38ea0e31d3cec8

Following these instructions, we’ll exit su and run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Make a note of the bottom section as we’ll need it in order to join our worker to the newly-formed cluster.

Sanity-check:

On the master, run kubectl get nodes.  You’ll notice that we have 1 node and it’s not ready:

Install the pod network add-on

Referencing the docs, you’ll note that Canal has a couple of yaml files to be applied to our cluster.  So, again on our master node, we’ll run these commands to install and configure Canal:


kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml
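You can watch the Canal pods (and the rest of the system pods) come up with:

kubectl get pods -n kube-system -w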

Sanity-check:

On the master, run kubectl get nodes.  You’ll now notice that we still have 1 node but now it’s ready:

Enable pod placement on master
By default, the cluster will not schedule pods on the master node.  If you want to run a single-node cluster or use compute capacity on the master node for pods, we’ll need to remove the taint.

Default taint on master

We’ll remove the taint with the command

kubectl taint nodes --all node-role.kubernetes.io/master-
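You can see the taint beforehand, and confirm it’s gone afterwards, with:

kubectl describe nodes | grep -i taint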

Join worker to cluster
You saved the output from kubeadm init earlier, right? We’ll need that now to join the worker to the cluster. On the worker VM, become root via sudo su - and run that kubeadm join command:

Join worker to cluster

Now, back on master, we run kubectl get nodes and can see both nodes!

Master and Worker Ready

Summary and Next Steps

At this point, we have a functional Kubernetes cluster with 1 master and 1 worker.  Next in this series, we’ll deploy some applications and compare the behavior to a PKS-managed Kubernetes cluster.


Using Helm and Dynamic PersistentVolumes with Multi-AZ PKS on vSphere

So, you’ve installed PKS and created a PKS cluster.  Excellent!  Now what?

We want to use helm charts to deploy applications.  Many of the charts use PersistentVolumes, so getting PVs set up is our first step.

There are a couple of complicating factors to be aware of when it comes to PVs in a multi-AZ/multi-vSphere-Cluster environment.  First, you probably have cluster-specific datastores – particularly if you are using Pivotal Ready Architecture and VSAN.  These datastores are not suitable for PersistentVolumes consumed by applications deployed to our Kubernetes cluster.  To work around this, we’ll need to provide some shared block storage to each host in each cluster.  Probably the simplest way to do this is with an NFS share.

Prerequisites:

Common datastore; NFS share or iSCSI

In production, you’ll want a production-quality, fault-tolerant solution for NFS or iSCSI, like Dell EMC Isilon. For this proof-of-concept, I’m going to use an existing NFS server, create a volume and share it to the hosts in the three vSphere clusters where the PKS workload VMs will run.  In this case, the NFS datastore is named “sharednfs” ’cause I’m creative like that.  Make sure that your hosts have adequate permissions to the share.  Using VMFS on iSCSI is supported; just be aware that you may need to cable up additional NICs if yours are already consumed by N-VDS and/or VSAN.
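If you prefer to mount the share from the command line rather than the vSphere Client, each host can attach it with esxcli; the NFS server name and export path below are placeholders for your environment:

esxcli storage nfs add --host=nfs01.lab13.myenv.lab --share=/export/sharednfs --volume-name=sharednfs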

Workstation Prep

We’ll need a handful of command-line tools, so make sure your workstation has the PKS CLI and Kubectl CLI from Pivotal and you’ve downloaded and extracted Helm.

PKS Cluster
We’ll want to provision a cluster using the PKS CLI tool.  This document assumes that your cluster was provisioned successfully, but nothing else has been done to it.  For my environment, I configured the “medium” plan to use 3 Masters and 3 Workers in all three AZs, then created the cluster with the command

pks create-cluster pks1cl1 --external-hostname cl1.pks1.lab13.myenv.lab --plan "medium" --num-nodes "3"


Logged-in
Make sure you’re logged into the Kubernetes cluster. In PKS, the easiest way to do this is via the PKS cli:

pks login -a api.pks1.lab13.myenv.lab -u pksadmin -p my_password --skip-ssl-validation
pks cluster pks1cl1
pks get-credentials pks1cl1
kubectl config use-context pks1cl1
kubectl get nodes -o wide

Where “pks1cl1” is replaced by your cluster’s name, “api.pks1.lab13.myenv.lab” is replaced by the FQDN of your PKS API server, “pksadmin” is replaced by the username with admin rights to PKS and “my_password” is replaced with that account’s password.

Procedure:

  1. Create storageclass
    • Create storageclass spec yaml. Note that the file is named storageclass-nfs.yml and we’re naming the storage class itself “nfs”:
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: nfs
        annotations:
          storageclass.kubernetes.io/is-default-class: "true"
      provisioner: kubernetes.io/vsphere-volume
      parameters:
        diskformat: thin
        datastore: sharednfs
        fstype: ext3
      

    • Apply the yml with kubectl

      kubectl create -f storageclass-nfs.yml

    • Create a sample PVC (PersistentVolumeClaim). Note that the file is named pvc-sample.yml, the PVC name is “pvc-sample” and it uses the “nfs” storageclass we created above. This step is not absolutely necessary, but it will help confirm we can use the storage.
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: pvc-sample
        annotations:
          volume.beta.kubernetes.io/storage-class: nfs
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: nfs
      
    • Apply the yml with kubectl

      kubectl create -f pvc-sample.yml


      If you’re watching vSphere closely, you’ll see a VMDK created in the kubevols folder of the NFS datastore

    • Check that the PVC was created with

      kubectl get pvc

      and

      kubectl describe pvc pvc-sample

    • Remove sample PVC with

      kubectl delete -f pvc-sample.yml

  2. Configure Helm and Tiller
    • Create a Service Account and ClusterRoleBinding for tiller by saving the following as rbac-config.yml:
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: tiller
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRoleBinding
      metadata:
        name: tiller
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
        - kind: ServiceAccount
          name: tiller
          namespace: kube-system
      
    • Apply the service account yml with Kubectl

      kubectl create -f rbac-config.yml

    • Initialize helm and tiller with

      helm init --service-account tiller

    • Check that tiller is ready

      helm version


      Look for both the client and server version numbers in the output; note that it might take a few seconds for tiller in the cluster to get ready.

  3. Deploy sample helm chart
    • Update the local helm chart repository. We do this so that we can be sure that helm can reach the public repo and to cache the latest information to our local repo.

      helm repo update


      If this step results in a certificate error, you may have to add the cert to the trusted certificates on the workstation.

    • Install a helm chart with ingress enabled. Here, I’ve selected the Dokuwiki app. The command below enables ingress, so we can access the app via a routable IP, and it will use the default storageclass we configured earlier.

      helm install --name dokuwiki \
      --set ingress.enabled="true",dokuwikiUsername=admin,dokuwikiPassword=password \
      stable/dokuwiki

      Edit – April 23 2019 – Passing the credentials in here makes connecting easier later.

    • Confirm that the app was deployed
      helm list
      kubectl get pods -n default
      kubectl get services -n default


      From the get services results, make a note of the external IP address – in the example above, it’s 192.13.6.73

    • Point a browser at the external address from the previous step and marvel at your success in deploying Dokuwiki via helm to Kubernetes!
      If you want to actually login to your Dokuwiki instance, first obtain the password for the user account with this command:

      kubectl get secret -n default dokuwiki-dokuwiki \
      -o jsonpath="{.data.dokuwiki-password}" | base64 --decode

      Then login with username “user” and that password.


      Edit – 04/23/19 – Login with the username and password you included in the helm install command

  4. Additional info
    • View Persistent Volume Claims with

      kubectl get pvc -n default


      This will list the PVCs and the volumes in the “default” namespace. Note the volume corresponds to the name of the VMDK on the datastore.

    • Load-Balancer
      Notice that since we are leveraging the NSX-T Container Networking Interface and enabled the ingress when we installed dokuwiki, a load-balancer in NSX-T was automatically created for us to point to the application.
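      A quick way to see what got wired up from the Kubernetes side is to list the ingress; with NCP, the load-balancer VIP should show up in the ADDRESS column:

      kubectl get ingress -n default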

This took me some time to figure out; I had to weed through a lot of documentation – some of which contradicted itself – and quite a bit of trial-and-error. I hope this helps save someone time later!

PKS and NSX-T: I did everything wrong

I’ve fought with PKS and NSX-T for a month or so now. I’ll admit it: I did everything wrong, several times. One thing is for certain: I know how NOT to configure it. So, now that I’ve finally gotten past my configuration issues, it makes sense to share the pain (er, lessons learned).

  1. Set your expectations correctly. PKS is literally a 1.0 product right now. It’s getting a lot of attention and will make fantastic strides very quickly, but for now, it can be cumbersome and confusing. The documentation is still pretty raw. Similarly, NSX-T is very young. The docs constantly refer you to the REST API instead of the GUI – this is fine of course, but is a turn-off for many. The GUI has many weird quirks (for example, when entering a tag, you’ll have to tab off of the value field after entering a value, since it is only validated onBlur).
  2. Use Chrome Incognito. NSX-T does not work in Firefox on Windows. It works in Chrome, but I had issues where the cache would cause problems (the web GUI would indicate that backup is not configured until I closed Chrome, cleared the cache and logged in again).
  3. Do not use an exclamation point in the NSX-T admin password. Yep, learned that the hard way. Supposedly, this is resolved in PKS 1.0.3, but I’m not convinced, as my environment did not wholly cooperate until I reset the admin password to something without an exclamation point in it.
  4. Tag only one IP Pool with ncp/external. I needed to build out several foundations on this environment and wanted to keep them in discrete IP space by creating multiple “external IP Pools” and assigning each to its own foundation. Currently, the nsx-cli.sh script that accompanies PKS with NSX-T only looks for the “ncp/external” tag on IP Pools; if more than one is found, it quits. I suppose you could work around this by forking the script and passing an additional “cluster” param, but I’m certain that the NSBU is working on something similar.
  5. Do not take a snapshot of the NSX Manager. This applies to NSX for vSphere and NSX-T, but I have made this mistake and it was costly. If your backup solution relies on snapshots (pretty much all of them do), be sure to exclude the NSX Manager and…
  6. Configure scheduled backups of NSX Manager. I found the docs for this to be rather obtuse. I spent a while trying to configure a FileZilla SFTP or even an IIS-FTP server until it finally dawned on me that it really is just FTP over SSH. So, the missing detail for me was that you’ll just need a Linux machine with plenty of space that the NSX Manager can connect to – over SSH – and dump files to. I started with this procedure, but found that the permissions were too restrictive.
  7. Use Concourse pipelines. This was an opportunity for me to really dig into Concourse pipelines and embrace what can be done. One moment of frustration came when PKS 1.0.3 was released and I discovered that the parameters for vSphere authentication had changed. In PKS 1.0 through 1.0.2, there was a single set of credentials to be used by PKS to communicate with vCenter Server. As of 1.0.3, this was split into credentials for the master and credentials for the workers, so the pipeline needed a tweak in order to complete the install. I ended up putting in a conditional to check the release version so that the right params are populated. If interested, my pipelines can be found at https://github.com/BrianRagazzi/concourse-pipelines
  8. Count your load-balancers. In NSX-T, a load-balancer can be considered a sort of empty appliance that Virtual Servers are attached to and that can itself attach to a Logical Router. The load-balancers in effect require pre-allocated resources that must come from an Edge Cluster. The “small” load-balancer consumes 2 CPU and 4GB RAM and the “large” edge VM provides 8 CPU and 16GB RAM, so a 2-node Edge Cluster can support up to FOUR active/standby load-balancers. This quickly becomes relevant when you realize that PKS creates a new load-balancer when a new K8s cluster is created. If you get errors in the diego database with the ncp job when creating your fifth K8s cluster, you might need to add a few more edge nodes to the edge cluster.
  9. Configure your NAT rules as narrowly as you can. I wasted a lot of time due to mis-configured NAT rules. The log data from provisioning failures did not point to NAT mis-configuration, so wild geese were chased.  Here’s what finally worked for me:
    Tier1 PKS Management router:
      • Priority 512, No NAT – no NAT between management and services
        Source [PKS Management CIDR], Destination [PKS Service CIDR], Translated: Any
        Source [PKS Service CIDR], Destination [PKS Management CIDR], Translated: Any
      • Priority 1024, DNAT – so Ops Manager is reachable
        Source Any, Destination [External IP for Ops Manager], Translated: [Internal IP for Ops Manager]
      • Priority 1024, DNAT – so the PKS Service (and UAA) is reachable
        Source Any, Destination [External IP for PKS Service], Translated: [Internal IP for PKS Service] (obtain from the Status tab of PKS in Ops Manager)
      • Priority 1024, SNAT – return traffic for the PKS Service
        Source [Internal IP for PKS Service], Destination Any, Translated: [External IP for PKS Service]
      • Priority 2048, SNAT – so PKS Management can reach infrastructure
        Source [PKS Management CIDR], Destination [Infrastructure CIDR] (vCenter Server, NSX Manager, DNS Servers), Translated: [External IP for Ops Manager]
        Source [PKS Management CIDR], Destination [Additional Infrastructure] (NTP in this case), Translated: [External IP for Ops Manager]

    Tier1 PKS Services router:
      • Priority 512, No NAT – no NAT between management and services
        Source [PKS Service CIDR], Destination [PKS Management CIDR], Translated: Any
        Source [PKS Management CIDR], Destination [PKS Service CIDR], Translated: Any
      • Priority 1024, SNAT – so PKS Services can reach infrastructure
        Source [PKS Service CIDR], Destination [Infrastructure CIDR] (vCenter Server, NSX Manager, DNS Servers), Translated: [External IP] (not the same as Ops Manager and PKS Service, but in the same L3 network)
        Source [PKS Service CIDR], Destination [Additional Infrastructure] (NTP in this case), Translated: [External IP]