Configure Tanzu Kubernetes Grid to use Active Directory

Tanzu Kubernetes Grid includes and supports packages for dex and Gangway.  These are used to extend authentication to LDAP and OIDC endpoints.  Recall that Kubernetes does not do user management or traditional authentication on its own.  As a K8s cluster admin, you can of course create service accounts, but those are not meant to be used by developers.

Think of dex as a transition layer: it uses ‘connectors’ for upstream Identity Providers (IdPs) like Active Directory for LDAP or Okta for SAML, and presents an OpenID Connect (OIDC) endpoint for k8s to use.

TKG provides not only the packages mentioned above, but also a collection of yaml files and documentation for implementation.  The current documentation (as of May 12, 2020) for configuring authentication is pretty general, and the default values in the config files are suitable for OpenLDAP.  So, I thought I’d share the specific settings for connecting dex to Active Directory.

Assumptions:

    1. TKG Management cluster is deployed
    2. Following the VMware documentation
    3. Using the TKG-provided tkg-extensions
    4. dex will be deployed to management cluster or to a specific workload cluster

Edits to authentication/dex/vsphere/ldap/03-cm.yaml – from Docs

  1. Replace <MGMT_CLUSTER_IP> with the IP address of one of the control plane nodes of your management cluster – that is, one of the control plane nodes of the cluster where we’re putting dex
  2. If the LDAP server is listening on the default port 636, which is the secured configuration, replace <LDAP_HOST> with the IP or DNS address of your LDAP server. If the LDAP server is listening on any other port, replace <LDAP_HOST> with the address and port of the LDAP server, for example ldap.mydomain.com:3269.  Never, never, never use unencrypted LDAP.  You’ll need to specify port 636 unless your targeted AD controller is also a Global Catalog server, in which case you’ll specify port 3269.  Check with the AD team if you’re unsure.
  3. If your LDAP server is configured to listen on an unsecured connection, uncomment insecureNoSSL: true. Note that such connections are not recommended as they send credentials in plain text over the network. Never, never, never use unencrypted LDAP.
  4. Update the userSearch and groupSearch parameters with your LDAP server configuration.  This needs much more detail – see the steps below

Edits to authentication/dex/vsphere/ldap/03-cm.yaml – AD specific

  1. Obtain the root CA public certificate for your AD controller and save a base64-encoded version of it. For example, base64 < root64.cer > rootcer.b64 will write the data from the PEM-encoded root64.cer file into a base64-encoded file named rootcer.b64
  2. Add the base64-encoded certificate content to the rootCAData key.  Be sure to remove the leading “#”.  This is an alternative to using the rootCA key, which would require placing the certificate file on each control plane node
  3. Update the userSearch values as follows:

    | key | default | set to | notes |
    |---|---|---|---|
    | baseDN | ou=people,dc=vmware,dc=com | DN of the OU in AD under which user accounts are found | Example: ou=User Accounts,DC=ragazzilab,DC=com |
    | filter | "(objectClass=posixAccount)" | "(objectClass=person)" | |
    | username | uid | userPrincipalName | |
    | idAttr | uid | DN | Case-sensitive |
    | emailAttr | mail | userPrincipalName | |
    | nameAttr | givenName | cn | |

  4. Update the groupSearch values as follows:

    | key | default | set to | notes |
    |---|---|---|---|
    | baseDN | ou=people,dc=vmware,dc=com | DN of the OU in AD under which security groups are found | Example: DC=ragazzilab,DC=com |
    | filter | "(objectClass=posixGroup)" | "(objectClass=group)" | |
    | userAttr | uid | DN | Case-sensitive |
    | groupAttr | memberUid | "member:1.2.840.113556.1.4.1941:" | This is necessary to search within nested groups in AD |
    | nameAttr | cn | cn | |
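Putting it all together, the ldap connector section of the dex configMap ends up looking something like the sketch below. The host name and bind account are placeholders (bindDN/bindPW are typically required since AD usually disallows anonymous binds), and the DNs use the ragazzilab.com examples from the tables above:

    connectors:
    - type: ldap
      id: ldap
      name: AD
      config:
        host: dc01.ragazzilab.com:636
        rootCAData: <content of rootcer.b64>
        bindDN: CN=svc-dex,OU=Service Accounts,DC=ragazzilab,DC=com
        bindPW: <bind account password>
        userSearch:
          baseDN: OU=User Accounts,DC=ragazzilab,DC=com
          filter: "(objectClass=person)"
          username: userPrincipalName
          idAttr: DN
          emailAttr: userPrincipalName
          nameAttr: cn
        groupSearch:
          baseDN: DC=ragazzilab,DC=com
          filter: "(objectClass=group)"
          userAttr: DN
          groupAttr: "member:1.2.840.113556.1.4.1941:"
          nameAttr: cn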

Other important Notes
When you create the oidc secret in the workload clusters running Gangway, the clientSecret value is base64-encoded, but the corresponding secret for the workload cluster in the staticClients section of the dex configMap is the decoded (plain-text) value. This can be confusing since the decoded value is also randomly-generated.
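For example (with a made-up value): if the generated plain-text client secret is abc123, the staticClients entry in the dex configMap gets abc123 as-is, while the Gangway secret gets its base64 encoding:

    echo -n 'abc123' | base64    # prints YWJjMTIz - this goes into the Gangway secret
                                 # abc123 itself goes into dex's staticClients entry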

Replicating images from DockerHub to Harbor

I found the documentation for actually replicating images from DockerHub to a local Harbor instance to be missing.  So here’s what I’ve found:

Objective: Replicate the images for the Yelb sample application to local Harbor repo

Set-up and Prereqs

  1. A local Harbor instance – I’ll be using an Enterprise PKS foundation with Harbor 1.10
  2. An account for DockerHub

Steps

      1. Login to Harbor Web GUI as an administrator. Navigate to Administration/Registries
      2. Add Endpoint for local Harbor by clicking ‘New Endpoint’ and entering the following:
        • Provider: harbor
        • Name: local (or FQDN or whatever)
        • Description: optional
        • Endpoint URL: the actual URL for your harbor instance beginning with https and ending with :443
        • Access ID: username for an admin or user that at least has Project Admin permission to the target Projects/namespaces
        • Access Secret: Password for the account above
        • Verify Remote Cert: typically checked
      3. Add Endpoint for Docker Hub by clicking ‘New Endpoint’ and entering the following:
        • Provider: docker-hub
        • Name: dockerhub (or something equally profound)
        • Description: optional
        • Endpoint URL: pre-populated
        • Access ID: username for your account at dockerhub
        • Access Secret: Password for the account above
        • Verify Remote Cert: typically checked

        Notice that this is for general dockerhub, not targeting a particular repo.

      4. Configure Replications for the Yelb Images
        You may create replications for several images at once using a variety of filters, but I’m going to create a replication rule for each image we need. I think this makes it easier to identify a problem, removes the risk of replicating too much and makes administration easier. Click ‘New Replication Rule‘ and enter the following to create our first rule:

        • Name: yelb-db-0.5
        • Description: optional
        • Replication Mode: Pull-based (because we’re pulling the image from DockerHub)
        • Source registry: dockerhub
        • Source Registry Filter – Name: mreferre/yelb-db
        • Source Registry Filter – Tag: 0.5
        • Source Registry Filter – Resource: pre-populated
        • Destination Namespace: yelb (or whatever Project you want the images saved to)
        • Trigger Mode: Select ‘Manual’ for a one-time sync or select ‘Scheduled’ if you want to ensure the image is replicated periodically. Note that the schedule format is cron with seconds, so 0 0 23 * * 5 would trigger the replication to run every Friday at 23:00:00. Scheduled replication makes sense when the tag filter is ‘latest’ for example
        • Override: leave checked to overwrite the image if it already exists
        • Enable rule: leave checked to keep the rule enabled
      5. Add the remaining Replication Rules:
        | Name | Name Filter | Tag Filter | Dest Namespace |
        |---|---|---|---|
        | yelb-ui-latest | mreferre/yelb-ui | latest | yelb |
        | yelb-appserver-latest | mreferre/yelb-appserver | latest | yelb |
        | redis-4.0.2 | library/redis | 4.0.2 | yelb |

        Note that redis is an official image, so we have to include library/
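        Once the replications have run, you can verify by pulling one of the images back from Harbor. A sketch, assuming your Harbor is at harbor.mydomain.com and the destination project is yelb (Harbor places the replicated repository under the destination namespace):

          docker pull harbor.mydomain.com/yelb/yelb-ui:latest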

Logging into a Kubernetes cluster with an OIDC LDAP account

I confess, most of my experience with Kubernetes is with Pivotal Container Service (PKS) Enterprise.  PKS makes it rather easy to get started and I found that I took some tasks for granted.

In PKS Enterprise, one can use the pks cli not only to life-cycle clusters, but also to obtain the credentials to the cluster and automatically update the kubeconfig with the account information.  So, administrative/operations users can run the command “pks get-credentials my-cluster” to have a kubeconfig updated with the authentication tokens and parameters to connect to my-cluster.

K8s OIDC using UAA on PKS

The PKS controller includes the User Account and Authentication (UAA) component, which is used to authenticate users into PKS Enterprise.  UAA can also be easily configured to connect to an existing LDAP service – this is the desired configuration in most organizations so that user accounts exist in one place (Active Directory in my example).

So, I found myself wondering “I don’t want to provide the PKS CLI to developers, so how can they connect kubectl to the cluster?”

Assumptions:

  • PKS Enterprise on vSphere (with or without NSX-T)
  • Active Directory
  • Developer user account belongs to the k8s-devs security group in AD

Prerequisite configuration:

  1. UAA on PKS configured with UAA User Account Store: LDAP Server.  This links UAA to LDAP/Active Directory
  2. User Search Filter: userPrincipalName={0}  This means that users can login as user@domain.tld
  3. Group Search Filter: member={0} This ensures that AD groups may be used rather than specifying individual users
  4. Configure created clusters to use UAA as the OIDC provider: Enabled  This pre-configures the kubernetes API to use OpenID Connect with UAA. If not using PKS Enterprise, you’ll need to provide another OpenID Connect-Compliant endpoint (like Dex), link it to Active Directory and update the kubernetes cluster api manually to use the OpenID Authentication.
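For reference, “update the kubernetes cluster api manually” comes down to a handful of kube-apiserver flags. A minimal sketch with placeholder values (the issuer URL, CA file path and claim names here are assumptions – the claims assume UAA; adjust them for your IdP):

    kube-apiserver \
      --oidc-issuer-url=https://PKS-API:8443/oauth/token \
      --oidc-client-id=pks_cluster_client \
      --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.pem \
      --oidc-username-claim=user_name \
      --oidc-groups-claim=roles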

 

Operator: Create Role and RoleBinding:

While authentication is handled by OIDC/UAA/LDAP, authorization must be configured on the cluster to provide access to resources via RBAC.  This is done by defining a Role (or ClusterRole) that indicates what actions may be taken on what resources, and a RoleBinding which links the Role to one or more “subjects”.

  1.  Authenticate to kubernetes cluster with an administrative account (for example, using PKS cli to connect)
  2.  Create yaml file for our Role and RoleBinding:
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: developers
    rules:
    - apiGroups: ["", "extensions", "apps"]
      resources: ["deployments", "replicasets", "pods"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] 
      # You can also use ["*"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: developer-dev-binding
    subjects:
    - kind: Group
      name: k8s-devs
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: developers
      apiGroup: rbac.authorization.k8s.io
    

    In the example above, we’re creating a Role named “developers”, granting access to the core, extensions and apps API groups and several actions against deployments, replicaSets and pods. Notice that developers in this role would not have access to secrets (for example)
    The example RoleBinding binds a group named “k8s-devs” to the developers role. Notice that we have not created the k8s-devs group in Kubernetes or UAA; it exists in Active Directory

  3. Use Kubectl to apply the yaml, creating the Role and RoleBinding in the targeted namespace, as shown below
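    For example, a sketch assuming the yaml above was saved as developers-rbac.yml (a made-up file name) and the target namespace is scratch:

      kubectl apply -f developers-rbac.yml --namespace scratch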

 

Creating the kubeconfig – the hard way

To get our developer connected with kubectl, they’ll need a kubeconfig with the authentication and connection details.  The Hard way steps are:

  1. Operator obtains the cluster’s certificate authority data.  This can be done via curl or by copying the value from the existing kubeconfig.
  2. Operator creates a template kubeconfig, replacing the value specified, then sends it to the developer user
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <OBTAINED IN STEP 1>
        server: <FQDN of Master Node>
      name: PROVIDED-BY-ADMIN
    contexts:
    - context:
        cluster: PROVIDED-BY-ADMIN
        user: PROVIDED-BY-USER
      name:  PROVIDED-BY-ADMIN
    current-context: PROVIDED-BY-ADMIN
    kind: Config
    preferences: {}
    users:
    - name: PROVIDED-BY-USER
      user:
        auth-provider:
          config:
            client-id: pks_cluster_client
            client-secret: ""
            id-token: PROVIDED-BY-USER
            idp-issuer-url: https://PROVIDED-BY-ADMIN:8443/oauth/token
            refresh-token:  PROVIDED-BY-USER
          name: oidc
  3. The developer user obtains the id_token and refresh_token from UAA, via a curl command
    curl 'https://PKS-API:8443/oauth/token' -k -XPOST \
    -H 'Accept: application/json' \
    -d 'client_id=pks_cluster_client&client_secret=&grant_type=password&username=UAA-USERNAME&response_type=id_token' \
    --data-urlencode password=UAA-PASSWORD
  4. The developer user updates the kubeconfig with the id_token and refresh token in the kubeconfig
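    Putting steps 3 and 4 together, here’s a hedged sketch that captures the UAA response and writes the tokens into the kubeconfig with kubectl (it assumes UAA returns id_token and refresh_token keys in its JSON response):

      TOKENS=$(curl -sk 'https://PKS-API:8443/oauth/token' -XPOST \
        -H 'Accept: application/json' \
        -d 'client_id=pks_cluster_client&client_secret=&grant_type=password&username=UAA-USERNAME&response_type=id_token' \
        --data-urlencode password=UAA-PASSWORD)
      kubectl config set-credentials PROVIDED-BY-USER \
        --auth-provider=oidc \
        --auth-provider-arg=id-token="$(echo "$TOKENS" | jq -r .id_token)" \
        --auth-provider-arg=refresh-token="$(echo "$TOKENS" | jq -r .refresh_token)"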

 

Creating the kubeconfig – the easy way

Assuming the developer is using Mac or Linux…

  1. Install jq on developer workstation
  2. Download the get-pks-k8s-config.sh script, make it executable (chmod +x get-pks-k8s-config.sh)
  3. Execute the script (replace the params with your own)
    ./get-pks-k8s-config.sh --API=api.pks.mydomain.com \
    --CLUSTER=cl1.pks.mydomain.com \
    --USER=dev1@mydomain.com \
    --NS=scratch
    • API – FQDN to PKS Controller, for UAA
    • CLUSTER – FQDN to master node of k8s cluster
    • USER – userPrincipalName for the user
    • NS – Namespace to target; optional
  4. After entering the user’s password, the script will set the params in the kubeconfig and switch context automatically

Try it out
Our developer user should be able to “see” pods but not namespaces, for example:

dev1 can see pods but not namespaces

 

Creating the kubeconfig – the easiest way

  1. Provide the developer with the PKS CLI tool; remember, we have not added them to any group or role with PKS admin permissions.
  2. Provide the developer with the PKS API endpoint FQDN and the cluster name
  3. The developer may run this command to generate the updated kubeconfig and set the current context
    pks get-kubeconfig CLUSTERNAME -a API -u USER -k

    • CLUSTERNAME is the name of the cluster
    • API – FQDN to PKS Controller
    • USER – userPrincipalName for the user
  4. You’ll be prompted for the account password. Once entered, the tool will fetch the user-specific kubeconfig.
Use PKS CLI to get the kubeconfig

 

Getting Started with CredHub in Concourse

First, some background – I promise to keep it short.  You should never have credentials in a public github repo.  Probably not good to have them in a private repo either.  At Pivotal, the github client is configured with credalert, which complains when I try to push credentials to github.  To maintain compliance, I needed a way to update the stuff in the repo and have my credentials too.  Concourse supports a few credential managers – Vault, CredHub and a couple of AWS options.  Since CredHub is built-in to the ops-manager-deployed BOSH director, we don’t have to spin anything else up.

The simplified diagram shows how this will work.  CredHub is on the BOSH director, so it’ll need to be reachable from the Concourse web (ATC) service and anywhere the credhub cli will be used. If your BOSH director is behind a NAT, you may want to configure a DNAT, so it can be reached.

In this case, we’re using a “management/infrastructure” Operations Manager and BOSH director to deploy and manage concourse and minio.  The pipelines on concourse will be used to deploy and maintain other foundations in the environment.

Configure UAA

  1. Logon to the ops manager and navigate to status to record the IP address of the BOSH director.  If your BOSH director is behind a NAT, locate its DNAT instead.
  2. Navigate to the credentials tab.  We’re going to need the uaa_login_client_credentials password and the uaa_admin_client_credentials password.
  3. While here, save the ops manager root ca to your computer.  From the installation dashboard, click on your name in the upper right, select Settings.  Then click Advanced and Download Root CA.
  4. SSH into your ops manager:  ubuntu@<ops manager name or IP>
  5. Set uaac target

    uaac target https://<IP of BOSH director>:8443 --ca-cert /var/tempest/workspaces/default/root_ca_certificate

  6. Login to uaac – ok, this gets awkward

    uaac token owner get login -s <uaa_login_client_credentials>

    • Replace <uaa_login_client_credentials> with the value you saved
    • When prompted for a username enter admin
    • For password enter the uaa_admin_client_credentials value you saved
    • You should see “Successfully fetched token…”
  7. Create a uaac client for concourse to use with credhub

    uaac client add --name concourse-to-credhub --authorized-grant-types client_credentials --authorities credhub.read,credhub.write --access-token-validity 30 --secret MySecretPassword

    Please replace MySecretPassword with something else

  8. Create a uaac user for use with the CredHub cli

    uaac user add credhub --email credhub@whatever.com -p MySecretPassword

Try out Credhub cli

    1. Download and install the credhub cli.  On mac, you can use brew install credhub
    2. From a terminal/command line run this to point the cli to the credhub instance on the BOSH director:

      credhub api -s <IP of BOSH director>:8844 --ca-cert ./root_ca_certificate

      • Replace <IP of BOSH director> with the name or reachable IP of the director
      • root_ca_certificate is the root CA from ops manager you downloaded earlier
    3. Login to credhub:

      credhub login -u credhub -p MySecretPassword

      The user and password are those of the user we added to UAA earlier

    4. Set a test value:

      credhub set --type value --name /testval --value hello

      Here we’re setting a key (aka credential) with the name /testval to the value “hello”. Note that all the things stored in credhub start with a slash and that there are several types of credentials that can be stored, the simplest being “value”

    5. Get our value:

      credhub get --name /testval

      This will return the metadata for our key/credential
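      The output will look roughly like this (your id and timestamp will differ):

        id: <uuid>
        name: /testval
        type: value
        value: hello
        version_created_at: <timestamp>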

Configuring Concourse to use CredHub

Concourse (the ATC on the web node) must be configured to look to credhub as a credential manager. I’m using BOSH-deployed concourse, so I’ll simply update the deployment manifest with the new params. If you’re using concourse via docker-compose, you’ll want to update the yml with the additional params as described here.

For concourse deployed via BOSH and using concourse-bosh-deployment, we’ll include the operations/credhub.yml file and the additional params.  For me, this looks like:

bosh -e core deploy -d concourse concourse.yml \
-l ../versions.yml \
--vars-store cluster-creds.yml \
-o operations/static-web.yml \
-o operations/basic-auth.yml \
-o operations/scale.yml \
-o operations/privileged-http.yml \
-o operations/credhub.yml \
--var web_ip=192.168.100.205 \
--var external_url=http://concourse.ragazzilab.com \
--var network_name=INFRA \
--var web_vm_type=small.disk \
--var db_vm_type=small.disk \
--var azs=[BOSH] \
--var db_persistent_disk_type=10240 \
--var worker_vm_type=concourse.worker \
--var deployment_name=concourse \
--var local_user.username=myuser \
--var local_user.password=mypass \
--var web_instances=1 \
--var worker_instances=1 \
--var syslog_address=syslog.ragazzilab.com \
--var syslog_port=514 \
--var syslog_permitted_peer=syslog.ragazzilab.com \
--var credhub_url="https://192.168.100.200:8844" \
--var credhub_client_id=concourse-to-credhub \
--var credhub_client_secret=MySecretPassword \
--var credhub_ca_cert="$(cat root_ca_certificate)"

Test a pipeline

    1. Use credhub cli to create a value

      credhub set --type value --name /concourse/main/hello-credhub/hello --value World

      Concourse has a default pattern for looking up interpolation values. It’s /concourse/<team name>/<pipeline name>/<key>

    2. Get the test pipeline from here.

      jobs:
      - name: hello-credhub
        plan:
        - do:
          - task: hello-credhub
            config:
              platform: linux
              image_resource:
                type: docker-image
                source:
                  repository: ubuntu
              run:
                path: sh
                args:
                - -exc
                - |
                  echo "Hello $WORLD_PARAM"
              params:
                WORLD_PARAM: ((hello))

    3. Use fly to set the test pipeline

      fly -t concourse login -c http://concourse -u myuser -p mypass -n main
      fly -t concourse sp -p hello-credhub -c hello-credhub.yml

    4. Run the test pipeline in concourse. If all goes well, it should say “Hello World”

Using Helm and Dynamic PersistentVolumes with Multi-AZ PKS on vSphere

So, you’ve installed PKS and created a PKS cluster.  Excellent!  Now what?

We want to use helm charts to deploy applications.  Many of the charts use PersistentVolumes, so getting PVs set up is our first step.

There are a couple of complicating factors to be aware of when it comes to PVs in a multi-AZ/multi-vSphere-Cluster environment.  First, you probably have cluster-specific datastores – particularly if you are using Pivotal Ready Architecture and VSAN.  These datastores are not suitable for PersistentVolumes consumed by applications deployed to our Kubernetes cluster.  To work around this, we’ll need to provide some shared block storage to each host in each cluster.  Probably the simplest way to do this is with an NFS share.

Prerequisites:

Common datastore; NFS share or iSCSI

In production, you’ll want a production-quality fault-tolerant solution for NFS or iSCSI, like Dell EMC Isilon. For this proof-of-concept, I’m going to use an existing NFS server, create a volume and share it to the hosts in the three vSphere clusters where the PKS workload VMs will run.  In this case, the NFS datastore is named “sharednfs” ’cause I’m creative like that.  Make sure that your hosts have adequate permissions to the share.  Using VMFS on iSCSI is supported, just be aware that you may need to cable-up additional NICs if yours are already consumed by N-VDS and/or VSAN.

Workstation Prep

We’ll need a handful of command-line tools, so make sure your workstation has the PKS CLI and Kubectl CLI from Pivotal and you’ve downloaded and extracted Helm.

PKS Cluster
We’ll want to provision a cluster using the PKS CLI tool.  This document assumes that your cluster was provisioned successfully, but nothing else has been done to it.  For my environment, I configured the “medium” plan to use 3 Masters and 3 Workers in all three AZs, then created the cluster with the command

pks create-cluster pks1cl1 --external-hostname cl1.pks1.lab13.myenv.lab --plan "medium" --num-nodes "3"


Logged-in
Make sure you’re logged into the Kubernetes cluster. In PKS, the easiest way to do this is via the PKS cli:

pks login -a api.pks1.lab13.myenv.lab -u pksadmin -p my_password --skip-ssl-validation
pks cluster pks1cl1
pks get-credentials pks1cl1
kubectl config use-context pks1cl1
kubectl get nodes -o wide

Where “pks1cl1” is replaced by your cluster’s name, “api.pks1.lab13.myenv.lab” is replaced by the FQDN of your PKS API server, “pksadmin” is replaced by the username with admin rights to PKS and “my_password” is replaced with that account’s password.

Procedure:

  1. Create storageclass
    • Create storageclass spec yaml. Note that the file is named storageclass-nfs.yml and we’re naming the storage class itself “nfs”:
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: nfs
        annotations:
          storageclass.kubernetes.io/is-default-class: "true"
      provisioner: kubernetes.io/vsphere-volume
      parameters:
        diskformat: thin
        datastore: sharednfs
        fstype: ext3
      

    • Apply the yml with kubectl

      kubectl create -f storageclass-nfs.yml

    • Create a sample PVC (Persistent Volume Claim). Note that the file is named pvc-sample.yml, the PVC name is “pvc-sample” and it uses the “nfs” storageclass we created above. This step is not absolutely necessary, but will help confirm we can use the storage.
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: pvc-sample
        annotations:
          volume.beta.kubernetes.io/storage-class: nfs
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: nfs
      
    • Apply the yml with kubectl

      kubectl create -f pvc-sample.yml


      If you’re watching vSphere closely, you’ll see a VMDK created in the kubevols folder of the NFS datastore

    • Check that the PVC was created with

      kubectl get pvc

      and

      kubectl describe pvc pvc-sample

    • Remove sample PVC with

      kubectl delete -f pvc-sample.yml

  2. Configure Helm and Tiller
    • Create Service Account for tiller with the following yaml, saved as rbac-config.yml:
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: tiller
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRoleBinding
      metadata:
        name: tiller
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
        - kind: ServiceAccount
          name: tiller
          namespace: kube-system
      
    • Apply the service account yml with Kubectl

      kubectl create -f rbac-config.yml

    • Initialize helm and tiller with

      helm init --service-account tiller

    • Check that tiller is ready

      helm version


      Look for version numbers for both the client and server; note that it might take a few seconds for tiller in the cluster to get ready.

  3. Deploy sample helm chart
    • Update helm local chart repository. We do this so that we can be sure that helm can reach the public repo and to cache the latest information to our local repo.

      helm repo update


      If this step results in a certificate error, you may have to add the cert to the trusted certificates on the workstation.

    • Install helm chart with ingress enabled. Here, I’ve selected the Dokuwiki app. The command below will enable ingress, so we can access it via routable IP and it will use the default storageclass we configured earlier.

      helm install --name dokuwiki \
      --set ingress.enabled="true",dokuwikiUsername=admin,dokuwikiPassword=password \
      stable/dokuwiki

      Edit – April 23 2019 – Passing the credentials in here makes connecting easier later.

    • Confirm that the app was deployed
      helm list
      kubectl get pods -n default
      kubectl get services -n default


      From the get services results, make a note of the external IP address – in the example above, it’s 192.13.6.73

    • Point a browser at the external address from the previous step and marvel at your success in deploying Dokuwiki via helm to Kubernetes!
      If you want to actually login to your Dokuwiki instance, first obtain the password for the user account with this command:

      kubectl get secret -n default dokuwiki-dokuwiki \
      -o jsonpath="{.data.dokuwiki-password}" | base64 --decode

      Then login with username “user” and that password.

       

      Edit – 04/23/19 – Login with the username and password you included in the helm install command

  4. Additional info
    • View Persistent Volume Claims with

      kubectl get pvc -n default


      This will list the PVCs and the volumes in the “default” namespace. Note the volume corresponds to the name of the VMDK on the datastore.

    • Load-Balancer
      Notice that since we are leveraging the NSX-T Container Networking Interface and enabled the ingress when we installed dokuwiki, a load-balancer in NSX-T was automatically created for us to point to the application.

This took me some time to figure out; had to weed through a lot of documentation – some of which contradicted itself and quite a bit of trial-and-error. I hope this helps save someone time later!

Automating PKS Upgrades

Last night, Pivotal announced new versions of PKS and Harbor, so I thought it’s time to simplify the upgrade process. Here is a concourse pipeline that essentially aggregates the upgrade-tile pipeline so that PKS and Harbor are upgraded in one go.

What it does:

  1. Runs on a schedule – you set the time and days it may run
  2. Downloads the latest version of PKS and Harbor from PivNet – you set the major.minor version range
  3. Uploads the PKS and Harbor releases to your BOSH director
  4. Determines whether the new release is missing a stemcell, downloads it from PivNet and uploads it to BOSH director
  5. Stages the tiles/releases
  6. Applies changes

What you need:

  1. A working Concourse instance that is able to reach the Internet to pull down the binaries and repo
  2. The fly cli and credentials for your Concourse.
  3. A token from your PivNet account
  4. An instance of PKS 1.0.2 or 1.0.3 AND Harbor 1.4.x deployed on Ops Manager
  5. Credentials for your Ops Manager
  6. (optional) A token from your GitHub account

How to use the pipeline:

  1. Download params.yml and pipeline.yml from here.
  2. Edit the params.yml by replacing the values in double-parentheses with the actual value. Each line has a bit explaining what it’s expecting.  For example, ((ops_mgr_host)) becomes opsmgr.pcf1.domain.local
    • Remove the parens
    • If you have a GitHub Token, pop that value in, otherwise remove ((github_token))
    • The current pks_major_minor_version regex will get the latest 1.0.x.  If you want to pin it to a specific version, or when PKS 1.1.x is available, you can make those changes here.
    • The ops_mgr_usr and ops_mgr_pwd credentials are those you use to logon to Ops Manager itself.  Typically set when the Ops Manager OVA is deployed.
    • The schedule params should be adjusted to a convenient time to apply the upgrade.  Remember that in addition to the PKS Service being offline (it’s a singleton) during the upgrade, your Kubernetes clusters may be affected if you have the “Upgrade all Clusters” errand set to run in the PKS configuration, so schedule wisely!

  3. Open your cli and login to concourse with fly

    fly -t concourse login -c http://concourse.domain.local:8080 -u username -p password

  4. Set the new pipeline. Here, I’m naming the pipeline “PKS_Upgrade”. You’ll pass the pipeline.yml with the “-c” param and your edited params.yml with the “-l” param

    fly -t concourse sp -p PKS_Upgrade -c pipeline.yml -l params.yml

    Answer “y” to “Apply Configuration”…

  5. Unpause the pipeline so it can run in the scheduled window

    fly -t concourse up -p PKS_Upgrade

  6. Login to the Concourse web to see our shiny new pipeline!

    If you don’t want to deal with the schedule and simply want it to upgrade on-demand, use the pipeline-nosched.yml instead of pipeline.yml, just be aware that when you unpause the pipeline, it’ll start doing its thing.  YMMV, but for me, it took about 8 minutes to complete the upgrade.

Behind the scenes
It’s not immediately obvious how the pipeline does what it does. When I first started out, I found it frustrating that there just isn’t much to the pipeline itself. To that end, I tried making pipelines that were entirely self-contained. This was good in that you can read the pipeline and see everything it’s doing; plus, it can be made to run in an air-gapped environment. The downside is that there is no separation; one error in any task and you’ll have to edit the whole pipeline file.
As I learned a little more and poked around in what others were doing, it made sense to split the “tasks” out, keep them in a GitHub public repo and pull it down to run on-demand.

Pipelines generally have two main sections; resources and jobs.
Resources are objects that are used by jobs. In this case, the binary installation files, a zip of the GitHub repo and the schedule are resources.
Jobs are (essentially) made up of plans and plans have tasks.
Each task in most pipelines uses another source yml. This task.yml will indicate which image concourse should build a container from and what it should do on that container (typically, run a script). All of these task components are in the GitHub repo, so when the pipeline job runs, it clones the repo and runs the appropriate task script in a container built on an image pulled from dockerhub.
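For illustration, a task yml is typically only a few lines. A hedged sketch (the image and script path here are made up, not the actual contents of my repo):

    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ubuntu
    inputs:
    - name: pipelines-repo
    run:
      path: pipelines-repo/tasks/upgrade-tile.sh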

More info
I’ve got several pipelines in the repo.   Some of them do what they’re supposed to. 🙂 Most of them are derived from others’ work, so many thanks to Pivotal Services and Sabha Parameswaran

PKS and NSX-T: I did everything wrong

I’ve fought with PKS and NSX-T for a month or so now. I’ll admit it: I did everything wrong, several times. One thing for certain, I know how NOT to configure it. So, now that I’ve finally gotten past my configuration issues, it makes sense to share the lessons learned.

  1. Set your expectations correctly. PKS is literally a 1.0 product right now. It’s getting a lot of attention and will make fantastic strides very quickly, but for now, it can be cumbersome and confusing. The documentation is still pretty raw. Similarly, NSX-T is very young. The docs are constantly referring you to the REST API instead of the GUI – this is fine of course, but is a turn-off for many. The GUI has many weird quirks (for example, when entering a tag, you’ll have to tab off of the value field after entering a value, since it is only checked onBlur)
  2. Use Chrome Incognito.  NSX-T does not work in Firefox on Windows. It works in Chrome, but I had issues where the cache would cause problems (the web GUI would indicate that backup was not configured until I closed Chrome, cleared the cache and logged in again)
  3. Do not use an exclamation point in the NSX-T admin password. Yep, learned that the hard way. Supposedly, this is resolved in PKS 1.0.3, but I’m not convinced, as my environment did not wholly cooperate until I reset the admin password to something without an exclamation point in it
  4. Tag only one IP Pool with ncp/external. I needed to build out several foundations on this environment and wanted to keep them in discrete IP space by creating multiple “external IP Pools” and assigning each to its own foundation. Currently, the nsx-cli.sh script that accompanies PKS with NSX-T only looks for the “ncp/external” tag on IP Pools; if more than one is found, it quits. I suppose you could work around this by forking the script and passing an additional “cluster” param, but I’m certain that the NSBU is working on something similar
  5. Do not take a snapshot of the NSX Manager This applies to NSX for vSphere and NSX-T, but I have made this mistake and it was costly. If your backup solution relies on snapshots (pretty much all of them do), be sure to exclude the NSX Manager and…
  6. Configure scheduled backups of NSX Manager I found the docs for this to be rather obtuse. Spent a while trying to configure a FileZilla SFTP or even IIS-FTP server until it finally dawned on me that it really is just FTP over SSH. So, the missing detail for me was that you’ll just need a linux machine with plenty of space that the NSX Manager can connect to – over SSH – and dump files to. I started with this procedure, but found that the permissions were too restrictive.
  7. Use concourse pipelines This was an opportunity for me to really dig into concourse pipelines and embrace what can be done. One moment of frustration came when PKS 1.0.3 was released and I discovered that the parameters for vSphere authentication had changed. In PKS 1.0 through 1.0.2, there was a single set of credentials to be used by PKS to communicate with vCenter Server. As of 1.0.3, this was split into credentials for master and credentials for workers. So, the pipeline needed a tweak in order to complete the install. I ended up putting in a conditional to check the release version, so the right params are populated. If interested, my pipelines can be found at https://github.com/BrianRagazzi/concourse-pipelines
  8. Count your Load-Balancers. In NSX-T, the load-balancers can be considered a sort of empty appliance that Virtual Servers are attached to and can itself attach to a Logical Router. The load-balancers in effect require pre-allocated resources that must come from an Edge Cluster. The “small” load-balancer consumes 2 CPU and 4GB RAM and the “Large” edge VM provides 8 CPU and 16GB RAM, so a 2-node Edge Cluster can support up to FOUR active/standby load-balancers. This quickly becomes relevant when you realize that PKS creates a new load-balancer when a new K8s cluster is created. If you get errors in the diego database with the ncp job when creating your fifth k8s cluster, you might need to add a few more edge nodes to the edge cluster.
  9. Configure your NAT rules as narrow as you can. I wasted a lot of time due to mis-configured NAT rules. The log data from provisioning failures did not point to NAT mis-configuration, so wild geese were chased.  Here’s what finally worked for me:
    | Router | Priority | Action | Source | Destination | Translated | Description |
    |---|---|---|---|---|---|---|
    | Tier1 PKS Management | 512 | No NAT | [PKS Management CIDR] | [PKS Service CIDR] | Any | No NAT between management and services |
    | Tier1 PKS Management | 512 | No NAT | [PKS Service CIDR] | [PKS Management CIDR] | Any | |
    | Tier1 PKS Management | 1024 | DNAT | Any | [External IP for Ops Manager] | [Internal IP for Ops Manager] | So Ops Manager is reachable |
    | Tier1 PKS Management | 1024 | DNAT | Any | [External IP for PKS Service] | [Internal IP for PKS Service] (obtain from Status tab of PKS in Ops Manager) | So PKS Service (and UAA) is reachable |
    | Tier1 PKS Management | 1024 | SNAT | [Internal IP for PKS Service] | Any | [External IP for PKS Service] | Return traffic for PKS Service |
    | Tier1 PKS Management | 2048 | SNAT | [PKS Management CIDR] | [Infrastructure CIDR] (vCenter Server, NSX Manager, DNS Servers) | [External IP for Ops Manager] | So PKS Management can reach infrastructure |
    | Tier1 PKS Management | 2048 | SNAT | [PKS Management CIDR] | [Additional Infrastructure] (NTP in this case) | [External IP for Ops Manager] | |
    | Tier1 PKS Services | 512 | No NAT | [PKS Service CIDR] | [PKS Management CIDR] | Any | No NAT between management and services |
    | Tier1 PKS Services | 512 | No NAT | [PKS Management CIDR] | [PKS Service CIDR] | Any | |
    | Tier1 PKS Services | 1024 | SNAT | [PKS Service CIDR] | [Infrastructure CIDR] (vCenter Server, NSX Manager, DNS Servers) | [External IP] (not the same as Ops Manager and PKS Service, but in the same L3 network) | So PKS Services can reach infrastructure |
    | Tier1 PKS Services | 1024 | SNAT | [PKS Service CIDR] | [Additional Infrastructure] (NTP in this case) | [External IP] | |

Replacing the self-signed Certificate on NSX-T

Ran into a difficulty trying to use the self-signed certificate that comes pre-configured on the manager for NSX-T. In my case, Pivotal Operations Manager refused to accept the self-signed certificate.

So, for NSX-T 2.1, it looks like the procedure is:

    1. Log on to the NSX Manager and navigate to System|Trust
    2. Click CSRs tab and then “Generate CSR”, populate the certificate request details and click Save
    3. Select the new CSR and click Actions|Download CSR PEM to save the exported CSR in PEM format
    4. Submit the CSR to your CA to get it signed and save the new certificate. Be sure to save the root CA and any subordinate CA certificates too. In this example, certnew.cer is the signed NSX Manager certificate, sub-CA.cer is the subordinate CA certificate and root-CA.cer is the Root CA certificate
    5. Open the two (or three) cer files in notepad or notepad++ and concatenate them in order of leaf cert, (subordinate CA cert), root CA cert
    6. Back in NSX Manager, select the CSR and click Actions|Import Certificate for CSR. In the Window, paste in the concatenated certificates from above and click save
    7. Now you’ll have a new certificate and CA certs listed under Certificates. The GUI only shows a portion of the ID by default; click it to display the full ID and copy it to the clipboard
    8. Launch RESTClient in Firefox.
      • Click Authentication|Basic Authentication and enter the NSX Manager credentials for Username and Password, click “Okay”
      • For the URL, enter https://<NSX Manager IP or FQDN>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate ID copied in previous step>
      • Set the method to POST and click SEND button
      • Check the Headers to confirm that the status code is 200
    9. Refresh browser session to NSX Manager GUI to confirm new certificate is in use
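If you’d rather not use a browser extension, the same API call from step 8 can be made with curl – a sketch, substituting your manager address and the certificate ID you copied (curl will prompt for the admin password):

    curl -k -u admin -X POST \
    "https://<NSX Manager IP or FQDN>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate ID>"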

Notes:
I was concerned that replacing the certificate would break the components registered via the certificate thumbprint; this process does not break those things. They remain registered and trust the new certificate.

Removing NSX-T VIBs from ESXi hosts

I’d wanted to revert my environment from (an incomplete install of) NSX-T v2.0 back to NSX for vSphere v6.3.x, but found that the hosts would not complete preparation.  The logs indicated that something was “claimed by multiple non-overlay vibs”.

Error in esxupdate.log

I found that the hosts still had the NSX-T VIBs loaded, so to remove them, here’s what I did:

  1. Put the host in maintenance mode.  This is necessary to “de-activate” the VIBs that may be in use
  2. Login to the host via SSH
  3. Run

    /etc/init.d/netcpad stop
    /etc/init.d/nsx-ctxteng stop remove
    /etc/init.d/nsx-da stop remove
    /etc/init.d/nsx-datapath stop remove
    /etc/init.d/nsx-exporter stop remove
    /etc/init.d/nsx-hyperbus stop remove
    /etc/init.d/nsx-lldp stop remove
    /etc/init.d/nsx-mpa stop remove
    /etc/init.d/nsx-nestdb stop remove
    /etc/init.d/nsx-platform-client stop remove
    /etc/init.d/nsx-sfhc stop remove
    /etc/init.d/nsx-support-bundle-client stop remove
    /etc/init.d/nsxa stop remove
    /etc/init.d/nsxcli stop remove

  4. Run this all in one line; note that the order of the vibs is important

    esxcli software vib remove -n nsx-ctxteng -n nsx-hyperbus -n nsx-platform-client -n nsx-nestdb -n nsx-aggservice -n nsx-da -n nsx-esx-datapath -n nsx-exporter -n nsx-host -n nsx-lldp -n nsx-mpa -n nsx-netcpa -n nsx-python-protobuf -n nsx-sfhc -n nsx-support-bundle-client -n nsxa -n nsxcli -n nsx-common-libs -n nsx-metrics-libs -n nsx-nestdb-libs -n nsx-rpc-libs -n nsx-shared-libs -n nsx-python-gevent -n nsx-python-greenlet

  5. Reboot the host
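  Once the host is back up, you can confirm that the NSX-T VIBs are gone with:

    esxcli software vib list | grep nsx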

Configuring NSX Load-Balancer for PCF

There’s not a lot of specific information out there for this configuration.  There’s some guidance from Pivotal and some how-tos from VMware, so with a little additional detail, we should be able to figure this out.

Edit – 2/1/17 – Updated with OpenSSL configuration detail
Edit – 3/20/17 – Updated SubjectAltNames in config

Preparation

  1. SSL Certificate. You’ll need the signed public cert for your URL (certnew.cer), the associated private key (pcf.key) and the public cert of the signing CA (root64.cer).
    1. Download and install OpenSSL
    2. Create a config file for your request – paste this into a text file:

      [ req ]
      default_bits = 2048
      default_keyfile = rui.key
      distinguished_name = req_distinguished_name
      encrypt_key = no
      prompt = no
      string_mask = nombstr
      req_extensions = v3_req

      [ v3_req ]
      basicConstraints = CA:FALSE
      keyUsage = digitalSignature, keyEncipherment
      extendedKeyUsage = serverAuth, clientAuth
      subjectAltName = DNS:*.pcf.domain.com, DNS:ServerShortName, IP:ServerIPAddress, DNS:*.system.pcf.domain.com, DNS:*.apps.pcf.domain.com, DNS:*.login.system.pcf.domain.com, DNS:*.uaa.system.pcf.domain.com

      [ req_distinguished_name ]
      countryName = US
      stateOrProvinceName = State
      localityName = City
      0.organizationName = Company Name
      organizationalUnitName = PCF
      commonName = *.pcf.domain.com

    3. Replace the example values (the domain names, ServerShortName, ServerIPAddress and the company details) with those appropriate for your environment. Be sure to specify the server name and IP address as the Virtual IP and its associated DNS record. Save the file as pcf.cfg.  You’ll want to use the wildcard “base” name as the common name and the server name, as well as the *.system, *.apps, *.login.system and *.uaa.system Subject Alt Names.
    4. Use OpenSSL to create the Certificate Signing Request (CSR) for the wildcard PCF domain.

      openssl req -new -newkey rsa:2048 -nodes -keyout pcf.key -out pcf.csr -config pcf.cfg

    5. Use OpenSSL to convert the key to RSA (required for NSX to accept it)

      openssl rsa -in pcf.key -out pcfrsa.key

    6. Submit the CSR (pcf.csr) to your CA (Microsoft Certificate Services in my case), retrieve the certificate (certnew.cer) and certificate chain (certnew.p7b) base-64 encoded.
    7. Double-click certnew.p7b to open certmgr. Export the CA certificate as 64-bit encoded x509 to a file (root64.cer is the file name I use)
  2. Networks.  You’ll need to know what layer 3 networks the PCF components will use.  In my case, I set up a logical switch in NSX and assigned the gateway address to the DLR. Probably should make this a 24-bit network, so there’s room to grow, but not reserving a ridiculous number of addresses. We’re going to carve up the address space a little, so make a note of the following:
    • Gateway and other addresses you typically reserve for network devices.  (eg:  first 9 addresses 1-9)
    • Address that will be assigned to the NSX load balancer.  Just need one (eg: 10)
    • Addresses that will be used by the PCF Routers.  At least two. These will be configured as members in the NSX Load Balancer Pool.
  3. DNS, IP addresses.  PCF will use “system” and “apps” subdomains, plus whatever names you give any apps deployed.  This takes some getting used to – not your typical application.  Based on the certificate we created earlier, I recommend just creating a “pcf” subdomain.  In my case, the network domain (using AD-DNS) is ragazzilab.com and I’ve created the following:
    • pcf.ragazzilab.com subdomain
    • *.pcf.ragazzilab.com A record for the IP address I’m going to assign to the NSX Load-Balancer

NSX

Assuming NSX is already installed and configured.  Create or identify an existing NSX Edge that has an interface on the network where PCF will be / is deployed.

  1. Assign the address we noted above to the interface under Settings|Interfaces
  2. Under Settings|Certificates, add our SSL certificates
    • Click the Green Plus and select “CA Certificate”.  Paste the content of the signing CA public certificate (root64.cer) into the Certificate Contents box.  Click OK.
    • Click the Green Plus and select “Certificate”.  Paste the content of the signed public cert (certnew.cer) into the Certificate Contents box and paste the content of the RSA private key (pcfrsa.key) into the Private Key box. Click OK.
  3. Under Load Balancer, create an Application Profile. We need to ensure that NSX inserts the x-forwarded-for HTTP headers.  To do that, we need to be able to decrypt the request and therefore must provide the certificate information.  I found that Pool Side SSL had to be enabled, using the same Service and CA Certificates.
    Router Application Profile

     

  4. Create the Service Monitor.  What worked for me is a little different from what is described in the GoRouter project page. The key points are that we want to specify the useragent and look for a response of “ok” with a header of “200 OK”.

    Service Monitor for PCF Router
  5. Create the Pool.  Set it to ROUND-ROBIN using the Service Monitor you just created.  When adding the routers as members, be sure to set the port to 443, but the Monitor Port to 80.

    Router Pool
  6. Create the Virtual Server.  Specify the Application Profile and default Pool we just created.  Obviously, specify the correct IP Address.
    Virtual Server Configuration


PCF – Ops Manager

Assuming you’ve already deployed the Ops Manager OVF, use the installation dashboard to edit the configuration for Ops Manager Director.  I’m just going to highlight the relevant areas of the configuration here:

Networks.  Under “Create Networks”, be sure that the Subnet specified has the correct values.  Pay special attention to the reserved IP ranges.  These should be the addresses of the network devices and the IP address assigned to the load-balancer.  Do not include the addresses we intend to use for the routers though.  Based on the example values above, we’ll reserve the first 10 addresses.

Ops Manager Network Config

Ops Manager Director will probably use the first/lowest address in range that is not reserved.

PCF – Elastic Runtime

Next, we’ll install Elastic Runtime.  Again, I’ll highlight the relevant sections of the configuration.

  1. Domains.  In my case it’s System Domain = system.pcf.ragazzilab.com and Apps Domain = apps.pcf.ragazzilab.com
  2. Networking.
    • Set the Router IPs to the addresses (comma-separated) you noted and added to as members to the NSX load-balancer earlier.
    • Leave HAProxy IPs empty
    • Select the point-of-entry option for “external load balancer, and it can forward encrypted traffic”
    • Paste the content of the signed certificate (certnew.cer) into the Certificate PEM field.  Paste the content of the CA public certificate (root64.cer) into the same field, directly under the certificate content.
    • Paste the content of the private key (pcf.key) into the Private Key PEM field.
    • Check “Disable SSL Certificate verification for this environment”.
  3. Resource Config.  Be sure that the number of Routers is at least 2 and equal to the number of IP addresses you reserved for them.

 

Troubleshooting

Help! The Pool Status is down when the Service Monitor is enabled.

This could occur if your routers are behaving differently from mine.  Test the response by sending a request to one of the routers through curl, specifying the user agent as HTTP-Monitor/1.1:

curl -v -A "HTTP-Monitor/1.1" "http://{IP of router}"

 

Testing router with curl

The value in the yellow box should go into the “Expected” field of the Service Monitor and the value in the red box should go into the “Receive” field. Note that you should not get a 404 response; if you do, check that the user agent is set correctly.

 

Notes

This works for me and I hope it works for you.  If you have trouble or disagree, please let me know.