Configuring VMware Tanzu SQL with MySQL for Kubernetes for High Availability

As a follow-up to the getting started post, let's touch on what it takes to configure a MySQL instance for High Availability in Tanzu SQL/MySQL.

Why this is important

In Kubernetes, pods are generally treated as transient and ephemeral: they can be restarted quickly and are often stateless. This is certainly not the case with databases; we need to make sure our databases remain online and usable. MySQL itself provides a means to do High Availability with multiple instances and synchronization; we'll be leveraging this capability today.

High Availability Architecture

Blatantly ripped off from the official docs

Unlike our standalone instance, when we create an instance with HA enabled, the operator creates FIVE pods and two services for us.

Pods created for HA instance
Services created for HA instance

You'll notice that the mysql-ha LoadBalancer service uses the proxy pods as its endpoints, while the mysql-ha-members service uses the database pods themselves.
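
If you'd like to confirm that mapping yourself, a quick look at the pods, services, and endpoints (assuming the mysql-instances namespace used throughout this series) is enough:

# List the five pods and two services the operator created for the HA instance
kubectl -n mysql-instances get pods,svc

# The endpoints show which pods back each service: mysql-ha -> proxy pods, mysql-ha-members -> database pods
kubectl -n mysql-instances get endpoints mysql-ha mysql-ha-members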

Create an HA instance

In this example, I'm going to reuse the "harbor" docker-registry secret we created originally, but we'll want a new TLS certificate for this instance.

Create the TLS certificate

Just like previously, save the following as cert-ha.yaml and apply it with kubectl apply -n mysql-instances -f cert-ha.yaml to create a certificate for our instance. Adjust the names to match your environment, of course. Notice that the issuerRef.name is ca-issuer.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mysql-ha-secret
spec:
  # Secret names are always required.
  secretName: mysql-ha-secret
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
    - ragazzilab.com
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: mysql-ha.ragazzilab.com
  dnsNames:
  - mysql-ha.ragazzilab.com
  - mysql-ha
  - mysql-ha.mysql-instances.svc.cluster.local
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    # This is optional since cert-manager will default to this value however
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io
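
Before moving on, it's worth confirming that cert-manager actually issued the certificate and created the secret; a quick check looks something like this:

# The Certificate should show READY=True once it has been issued
kubectl -n mysql-instances get certificate mysql-ha-secret

# The resulting Secret holds the key material the instance will use (typically tls.crt, tls.key and ca.crt)
kubectl -n mysql-instances get secret mysql-ha-secret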

Create the instance

The only differences from the standalone instance are highAvailability.enabled: true and the name of the certificate secret.

apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQL
metadata:
  name: mysql-ha
spec:
  storageSize: 2Gi
  imagePullSecret: harbor
#### Set highAvailability.enabled:true to create three pods; one primary and two standby, plus two proxy pods
  highAvailability:
    enabled: true

#### Set the storage class name to change storage class of the PVC associated with this resource
  storageClassName: tanzu

#### Set the type of Service used to provide access to the MySQL database.
  serviceType: LoadBalancer # Defaults to ClusterIP

#### Set the name of the Secret used for TLS
  tls:
    secret:
      name: mysql-ha-secret

Apply this as usual: kubectl apply -n mysql-instances -f ./mysql-ha.yaml
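
It takes a few minutes for all five pods to come up; a rough way to keep an eye on it (the output columns vary by operator version):

# The operator reports instance state on the MySQL resource itself
kubectl -n mysql-instances get mysql mysql-ha

# Watch for mysql-ha-0/1/2 and the two proxy pods to reach Running
kubectl -n mysql-instances get pods -w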

Create a database user

The steps to create the database user in an HA instance are just like those for the standalone instance, once we determine which pod is the primary (the active, writable member). I was unable to make the one-liner method in the official docs work, so here's what I did instead; a scripted version of the same steps follows the list.

  1. Get the MySQL root password: kubectl get secret -n mysql-instances mysql-ha-credentials -o jsonpath='{.data.rootPassword}' | base64 -D
  2. Get a shell on the mysql-ha-0 pod: kubectl -n mysql-instances exec --stdin --tty pod/mysql-ha-0 -c mysql -- /bin/bash
  3. Get into the mysql cli: mysql -uroot -p<root password>
  4. Identify the Primary member: SELECT MEMBER_HOST, MEMBER_ROLE FROM performance_schema.replication_group_members;
  5. If the primary node is mysql-ha-0 (the one we're on), proceed to the next step. If it is not, go back to step 2 to get a shell on the pod that is primary.
  6. Now, we should be on the mysql cli on the primary pod/member. Just like with the standalone instance, let’s create a user:
  CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%';
  FLUSH PRIVILEGES;

Type exit twice to get out of mysql and the pod.
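
As promised above, here is a rough, untested shell sketch of the same procedure. It assumes the pod and container names shown in the steps (mysql-ha-0, container "mysql"), a Linux base64 (use -D on macOS), and that MEMBER_HOST comes back as the primary pod's in-cluster FQDN:

NS=mysql-instances

# Step 1: pull the root password from the credentials secret
ROOT_PW=$(kubectl get secret -n "$NS" mysql-ha-credentials \
  -o jsonpath='{.data.rootPassword}' | base64 -d)

# Steps 2-4: ask any member which host is currently the PRIMARY
PRIMARY_HOST=$(kubectl -n "$NS" exec pod/mysql-ha-0 -c mysql -- \
  mysql -uroot -p"$ROOT_PW" -N -e \
  "SELECT MEMBER_HOST FROM performance_schema.replication_group_members WHERE MEMBER_ROLE='PRIMARY';")

# MEMBER_HOST is typically the pod's FQDN; keep just the pod name
PRIMARY_POD=${PRIMARY_HOST%%.*}
echo "Primary member: $PRIMARY_POD"

# Steps 5-6: create the admin user on the primary, same SQL as above
kubectl -n "$NS" exec pod/"$PRIMARY_POD" -c mysql -- \
  mysql -uroot -p"$ROOT_PW" -e \
  "CREATE USER 'admin'@'%' IDENTIFIED BY 'password'; GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%'; FLUSH PRIVILEGES;"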

Ok, so now we have a running instance of MySQL and we've created a user account that can manage it (root cannot log in remotely). We can connect phpMyAdmin to the instance using the admin credentials:

Showing the three members of the instance
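
phpMyAdmin isn't required, of course; any MySQL client can reach the instance through the mysql-ha LoadBalancer service. A quick sketch (the external IP, and possibly the port, will vary in your environment):

# Find the external address handed out for the LoadBalancer service
kubectl -n mysql-instances get svc mysql-ha

# Connect with the admin account created above
mysql -h <EXTERNAL-IP> -P 3306 -u admin -p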

Adding trusted certs to nodes on TKGS 7.0 U2

A new feature added to TKGS as of 7.0 Update 2 is support for adding private SSL certificates to the “trust” on TKG cluster nodes.

This is very important as it finally provides a supported mechanism to use on-premises Harbor and other image registries.

It’s done by adding the encoded CAs to the “TkgServiceConfiguration”. The template for the TkgServiceConfiguration looks like this:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  proxy:
    httpProxy: http://<user>:<pwd>@<ip>:<port>

  trust:
    additionalTrustedCAs:
      - name: first-cert-name
        data: base64-encoded string of a PEM encoded public cert 1
      - name: second-cert-name
        data: base64-encoded string of a PEM encoded public cert 2

Notice that there are two new sections under spec; one for proxy and one for trust. This article is going to focus on trust for additional CAs.

If your registry uses a self-signed cert, you'll just encode that cert itself. If you take advantage of an Enterprise CA or similar to sign your certs, you'd encode and import the "signing", "intermediate", and/or "root" CA.

Example

Let's add the certificate for a standalone Harbor registry (not the built-in Harbor instance in TKGS; its certificate is already trusted).

Download the certificate by clicking the “Registry Certificate” link

Run base64 -i <ca file> to return the base64 encoded content:

Provide a simple name and copy and paste the encoded cert into the data value:
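
One gotcha: the data value is easiest to paste when the encoded cert is a single line. The macOS base64 shown above already behaves that way; on Linux, add -w0 (harbor-ca.crt is just an example file name):

# GNU/Linux: keep the base64 output on one line for pasting into the data value
base64 -w0 harbor-ca.crt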

Apply the TkgServiceConfiguration

After setting up your file, apply it to the Supervisor cluster:

kubectl apply -f ./TanzuServiceConfiguration.yaml
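
You can confirm the Supervisor cluster accepted the configuration (the object name below matches the template's metadata.name) with:

kubectl get tkgserviceconfiguration tkg-service-configuration -o yaml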

Notes

  • Existing TKG clusters will not automatically inherit the trust for the certificates
  • Clusters created after the TKGServiceConfiguration is applied will get the certificates
  • You can scale an existing TKG cluster to trigger a rollout with the certificates
  • You can verify the certificates exist by connecting through SSH to the nodes and locating the certs under /etc/ssl/certs:

Retrieving the Admin Password for Harbor Image Registry in Tanzu Kubernetes Grid Service

In TKGS on vSphere 7.0 through (at least) 7.0.1d, a Harbor Image Registry may be enabled for the vSphere Cluster (under Configure | Namespaces | Image Registry). This feature currently (as of 7.0.1d) requires the Pod Service, which in turn requires NSX-T integration.

As of 7.0.1d, the self-signed certificate created for this instance of Harbor is added to the trust for nodes in TKG clusters, making it easier (possible?) to use images from Harbor.

When you log in to Harbor as a regular user, you'll notice that the menu is very sparse. Only the 'admin' account can access the "Administration" menu.

To get logged in as the ‘admin’ account, we’ll need to retrieve the password from a secret for the harbor controller in the Supervisor cluster.

Steps:

  • SSH into the vCenter Server as root, then type 'shell' to get to a bash shell
  • Type ‘/usr/lib/vmware-wcp/decryptK8Pwd.py‘ to return information about the Supervisor Cluster. The results include the IP for the cluster as well as the node root password
  • While still in the SSH session on the vCenter Server, ssh into the Supervisor Cluster node by entering 'ssh root@<IP address from above>'. For the password, enter the PWD value from above.
  • Now, we have a session as root on a supervisor cluster control plane node.
  • Enter ‘kubectl get ns‘ to see a list of namespaces in the supervisor cluster. You’ll see a number of hidden, system namespaces in addition to those corresponding to the vSphere namespaces. Notice there is a namespace named “vmware-system-registry” in addition to one named “vmware-system-registry-#######”. The namespace with the number is where Harbor is installed.
  • Run ‘kubectl get secret -n vmware-system-registry-######‘ to get a list of secrets in the namespace. Locate the secret named “harbor-######-controller-registry”.
  • Run this to return the decoded admin password: kubectl get secret -n vmware-system-registry-###### harbor-######-controller-registry -o jsonpath='{.data.harborAdminPassword}' | base64 -d | base64 -d
  • In the cases I've seen so far, the password is about 16 characters long; if it's longer than that, you may not have decoded it entirely. Note that the value must be decoded twice.
  • Once you’ve saved the password, enter “exit” three times to get out of the ssh sessions.
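
For convenience, once you're on the Supervisor control plane node, the namespace and secret lookups can be strung together; a rough sketch (the same caveats in the notes below apply):

# Find the numbered registry namespace and its controller-registry secret, then double-decode the password
NS=$(kubectl get ns -o name | grep 'vmware-system-registry-' | cut -d/ -f2)
SECRET=$(kubectl -n "$NS" get secret -o name | grep 'controller-registry' | cut -d/ -f2)
kubectl -n "$NS" get secret "$SECRET" -o jsonpath='{.data.harborAdminPassword}' | base64 -d | base64 -d; echo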

Notes

  • Don’t manipulate the authentication settings
  • The process above is not supported; VMware GSS will not help you complete these steps
  • Some features may remain disabled (vulnerability scanning for example)
  • As admin, you may configure registries and replication (although it’s probably unsupported with this built-in version of Harbor for now)

Configure Tanzu Kubernetes Grid to use Active Directory

Tanzu Kubernetes Grid includes and supports packages for dex and Gangway. These are used to extend authentication to LDAP and OIDC endpoints. Recall that Kubernetes does not do user management or traditional authentication. As a K8s cluster admin, you can of course create service accounts, but those are not meant to be used by developers.

Think of dex as a transition layer: it uses 'connectors' for upstream Identity Providers (IdPs) like Active Directory for LDAP or Okta for SAML, and presents an OpenID Connect (OIDC) endpoint for Kubernetes to use.

TKG provides not only the packages mentioned above, but also a collection of YAML files and documentation for implementation. The current (as of May 12, 2020) documentation for configuring authentication is pretty general; the default values in the config files are suitable for OpenLDAP. So, I thought I'd share the specific settings for connecting dex to Active Directory.

Assumptions:

    1. TKG Management cluster is deployed
    2. Following the VMware documentation
    3. Using the TKG-provided tkg-extensions
    4. dex will be deployed to the management cluster or to a specific workload cluster

Edits to authentication/dex/vsphere/ldap/03-cm.yaml – from Docs

  1. Replace <MGMT_CLUSTER_IP> with the IP address of one of the control plane nodes of your management cluster. This is one of the control plane nodes where we're putting dex.
  2. If the LDAP server is listening on the default port 636, which is the secured configuration, replace <LDAP_HOST> with the IP or DNS address of your LDAP server. If the LDAP server is listening on any other port, replace <LDAP_HOST> with the address and port of the LDAP server, for example 192.168.10.22:389 or ldap.mydomain.com:389. Never, never, never use unencrypted LDAP. You'll need to specify port 636 unless your targeted AD controller is also a Global Catalog server, in which case you'll specify port 3269. Check with the AD team if you're unsure.
  3. If your LDAP server is configured to listen on an unsecured connection, uncomment insecureNoSSL: true. Note that such connections are not recommended as they send credentials in plain text over the network. Never, never, never use unencrypted LDAP.
  4. Update the userSearch and groupSearch parameters with your LDAP server configuration. This needs much more detail; see the AD-specific steps below.

Edits to authentication/dex/vsphere/ldap/03-cm.yaml – AD specific

  1. Obtain the root CA public certificate for your AD controller and save a base64-encoded version of it. For example, base64 -i root64.cer > rootcer.b64 will write the data from the PEM-encoded root64.cer file into a base64-encoded file named rootcer.b64.
  2. Add the base64-encoded certificate content to the rootCAData key.  Be sure to remove the leading “#”.  This is an alternative to using the rootCA key, where we’ll have to place the file on each Control Plane node
  3. Update the userSearch values as follows:
     • baseDN: default is ou=people,dc=vmware,dc=com; set it to the DN of the OU in AD under which user accounts are found, for example ou=User Accounts,DC=ragazzilab,DC=com
     • filter: default is "(objectClass=posixAccount)"; set it to "(objectClass=person)"
     • username: default is uid; set it to userPrincipalName
     • idAttr: default is uid; set it to DN (case-sensitive)
     • emailAttr: default is mail; set it to userPrincipalName
     • nameAttr: default is givenName; set it to cn
  4. Update the groupSearch values as follows:
     • baseDN: default is ou=people,dc=vmware,dc=com; set it to the DN of the OU in AD under which security groups are found, for example DC=ragazzilab,DC=com
     • filter: default is "(objectClass=posixGroup)"; set it to "(objectClass=group)"
     • userAttr: default is uid; set it to DN (case-sensitive)
     • groupAttr: default is memberUid; set it to "member:1.2.840.113556.1.4.1941:" (this is necessary to search within nested groups in AD)
     • nameAttr: default is cn; set it to cn
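
Put together, the ldap connector section of the dex ConfigMap ends up looking roughly like the sketch below. The host and the ragazzilab.com DNs are illustrative; keep the rest of the ConfigMap from tkg-extensions as it is, and note that the bind account lines are an assumption on my part since most AD environments won't allow anonymous searches:

connectors:
- type: ldap
  id: ldap
  name: LDAP
  config:
    host: dc01.ragazzilab.com:636            # example AD controller; use 3269 for a Global Catalog
    insecureNoSSL: false
    rootCAData: <base64-encoded CA cert from step 2>
    # bindDN: cn=svc-dex,ou=Service Accounts,DC=ragazzilab,DC=com   # assumed service account
    # bindPW: <service account password>
    userSearch:
      baseDN: ou=User Accounts,DC=ragazzilab,DC=com
      filter: "(objectClass=person)"
      username: userPrincipalName
      idAttr: DN
      emailAttr: userPrincipalName
      nameAttr: cn
    groupSearch:
      baseDN: DC=ragazzilab,DC=com
      filter: "(objectClass=group)"
      userAttr: DN
      groupAttr: "member:1.2.840.113556.1.4.1941:"
      nameAttr: cn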

Other important Notes
When you create the OIDC secret in the workload clusters running Gangway, the clientSecret value is base64-encoded, but the corresponding value for that workload cluster in the staticClients section of the dex ConfigMap is the decoded string. This can be confusing, since the decoded value is also randomly generated.
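
A rough illustration of that point; the namespace and the secret/key names here are just placeholders for whatever your gangway configuration actually references:

# The same randomly-generated value lives in two places, in two different forms
CLIENT_SECRET=$(openssl rand -hex 16)

# 1) In the dex ConfigMap, staticClients[].secret gets the plain (decoded) string
echo "use in dex staticClients: $CLIENT_SECRET"

# 2) In the workload cluster, the secret stores it base64-encoded;
#    kubectl handles the encoding when you use --from-literal
kubectl -n tanzu-system-auth create secret generic gangway \
  --from-literal=clientSecret="$CLIENT_SECRET"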