Getting Started with VMware Tanzu SQL with MySQL for Kubernetes

Let’s deploy Tanzu SQL with MySQL on Kubernetes and use phpMyAdmin to interact with our
database secured with TLS

VMware Tanzu SQL with MySQL for Kubernetes is quite a mouthful. For this post, I’ll refer to the product as Tanzu SQL/MySQL. We’re going to deploy it onto an existing Tanzu Kubernetes Grid cluster.

Objectives:

  • Deploy Tanzu SQL with MySQL on Kubernetes
  • Use phpMyAdmin to interact with our databases
  • Secure database instances with TLS

Cluster Setup

Tanzu SQL/MySQL can run on any conformant Kubernetes cluster; if you already have one running, you can skip ahead. If, like me, you want to provision a new TKG cluster for Tanzu SQL/MySQL, you’ll want settings like this:

  • K8s version 1.18, 1.19, or 1.20.7 (see the Notes below regarding 1.20.2)
  • Additional volume on /var/lib/containerd for the images
  • For a test cluster, three best-effort-small control-plane nodes and two best-effort-medium worker nodes are sufficient to start, YMMV.
  • Install metrics-server and add appropriate PSPs (a sketch follows this list)
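
For that last item, here’s a minimal sketch; the metrics-server manifest URL is the upstream default and the psp:vmware-system-privileged ClusterRole is what TKGS ships, so adjust both if your environment differs:

# Install metrics-server from the upstream manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Permissive PSP binding for a test cluster (TKGS provides the psp:vmware-system-privileged ClusterRole)
kubectl create clusterrolebinding default-tkg-admin-privileged-binding \
  --clusterrole=psp:vmware-system-privileged --group=system:authenticated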

Get the images and chart

You’ll need to log in to Pivotal Network (pivnet) and registry.pivotal.io and accept the EULA for VMware Tanzu SQL with MySQL for Kubernetes.

At a command line, run docker login registry.pivotal.io and provide your credentials so that docker can pull the images down from VMware. Log in to your local container registry as well – you’ll need permission to push images into your project.

In the following commands, replace “{local repo}” with the FQDN for your local registry and “{project}” with the project name in that repo that you can push images to.

docker pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance:1.0.0
docker pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator:1.0.0
docker tag registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance:1.0.0 {local repo}/{project}/tanzu-mysql-instance:1.0.0
docker tag registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator:1.0.0 {local repo}/{project}/tanzu-mysql-operator:1.0.0
docker push {local repo}/{project}/tanzu-mysql-instance:1.0.0
docker push {local repo}/{project}/tanzu-mysql-operator:1.0.0

Retrieve the helm chart:

export HELM_EXPERIMENTAL_OCI=1
helm chart pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator-chart:1.0.0
helm chart export registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator-chart:1.0.0

In the tanzu-sql-with-mysql-operator folder created by the helm export, copy values.yaml to values-override.yaml. Edit the keys with the correct values (we haven’t created the harbor secret yet, but we’ll give it the name you provide here). Here’s an example:


imagePullSecret: harbor
operatorImage: {local repo}/{project}/tanzu-mysql-operator:1.0.0
instanceImage: {local repo}/{project}/tanzu-mysql-instance:1.0.0
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

Deploy Operator

We’ll want to create a namespace, create a docker-registry secret (named harbor in the example below), and then install the chart.

kubectl create namespace tanzu-mysql
kubectl --namespace tanzu-mysql create secret docker-registry harbor --docker-server=https://{local repo} --docker-username=MYUSERNAME --docker-password=MYPASSWORD
helm install --namespace tanzu-mysql --values=./tanzu-sql-with-mysql-operator/values-override.yaml tanzu-mysql-operator ./tanzu-sql-with-mysql-operator/

Let’s check that the operator pod is running: kubectl get po -n tanzu-mysql

Before Creating an Instance…

We’ll need a namespace for our MySQL instances, a docker-registry secret in that namespace so we can pull the images from our local repo, and cert-manager to issue TLS certificates (we’ll get to phpMyAdmin later). These commands create the namespace and the secret and install cert-manager:

kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.2 --set installCRDs=true
kubectl create namespace mysql-instances
kubectl --namespace mysql-instances create secret docker-registry harbor --docker-server=https://<local repo> --docker-username=<username> --docker-password=<password>

Working with cert-manager

Cert-manager uses issuers to create certificates from certificate requests. A variety of issuers are supported, but we need the CA certificate included in the resulting certificate secret – something not all issuers do. For example, the self-signed and ACME issuers are not suitable, as they do not appear to include the CA certificate in the cert secret. Luckily, the CA issuer works fine and can use a self-signed issuer as its own signer. Save the following as cabootstrap.yaml to create a self-signed ClusterIssuer, a root certificate, and a CA Issuer, then apply it with kubectl apply -n mysql-instances -f cabootstrap.yaml

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
spec:
  isCA: true
  commonName: my-selfsigned-ca
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: root-secret
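
Before requesting the instance certificate, you can optionally confirm that cert-manager has everything ready; these are plain kubectl checks and each should show READY as True:

kubectl get clusterissuer selfsigned-issuer
kubectl -n mysql-instances get certificate my-selfsigned-ca
kubectl -n mysql-instances get issuer ca-issuer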

Save the following as cert.yaml and apply it with kubectl apply -n mysql-instances -f cert.yaml to create a certificate for our instance. Adjust the names to match your environment, of course. Notice that the issuerRef.name is ca-issuer.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mysql-tls-secret
spec:
  # Secret names are always required.
  secretName: mysql-tls-secret
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
    - ragazzilab.com
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: mysql-tls.mydomain.local
  dnsNames:
  - mysql-tls.mydomain.local
  - mysql-tls
  - mysql-tls.mysql-instances.svc.cluster.local
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    # This is optional since cert-manager will default to this value however
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io

Confirm that the corresponding secret contains three keys (ca.crt, tls.crt, and tls.key) by running kubectl describe secret -n mysql-instances mysql-tls-secret

Create an instance and add a user

Here is an example yaml for a MySQL instance. It creates an instance named mysql-tls that uses the docker-registry secret named harbor we created earlier, references the certificate secret mysql-tls-secret we created above, and requests a LoadBalancer service so we can access it from outside the cluster.

apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQL
metadata:
  name: mysql-tls
spec:
  storageSize: 2Gi
  imagePullSecret: harbor

#### Set the storage class name to change storage class of the PVC associated with this resource
  storageClassName: tanzu

#### Set the type of Service used to provide access to the MySQL database.
  serviceType: LoadBalancer # Defaults to ClusterIP

### Set the name of the Secret used for TLS
  tls:
    secret:
      name: mysql-tls-secret

Apply this yaml to the mysql-instances namespace to create the instance: kubectl apply -n mysql-instances -f ./mysqlexample.yaml

Watch for two containers in the pod to be ready

Watch for the mysql-tls-0 pod to be running with 2 containers. When the instance is created, the operator also creates a secret containing the root password. Retrieve the root password with this command: kubectl get secret -n mysql-instances mysql-tls-credentials -o jsonpath='{.data.rootPassword}' | base64 -D (that’s the macOS flag; use base64 -d on Linux)
Retrieve the load-balancer address for the MySQL instance with this command: kubectl get svc -n mysql-instances mysql-tls

LoadBalancer address for our instance

Login to Pod and run mysql commands

Logging into the pod, then into mysql

Run this to get into a command prompt on the mysql pod: kubectl -n mysql-instances exec --stdin --tty pod/mysql-tls-0 -c mysql -- /bin/bash
Once in the pod and at a prompt, run this to get into the mysql cli as root: mysql -uroot -p<root password>
Once at the mysql prompt, run this to create a user named “admin” with a password set to “password” (PLEASE use a different password!):

  CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%';
  FLUSH PRIVILEGES;

Type exit twice to get out of mysql and the pod.

OK, so now we have a running instance of MySQL and we’ve created a user account that can manage it (root cannot log in remotely).
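
If you’d like to verify the TLS configuration from outside the cluster, here’s a quick sketch, assuming a mysql client (5.7.11 or later) is installed locally and the load-balancer address above is reachable; the jsonpath pulls the CA out of the cert-manager secret:

# Extract the CA certificate from the secret (base64 -D on macOS)
kubectl -n mysql-instances get secret mysql-tls-secret -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# Connect over TLS as the new admin user; VERIFY_CA checks the CA but not the hostname,
# which is what we want when connecting by the load-balancer IP
mysql -h <loadbalancer IP> -u admin -p --ssl-ca=ca.crt --ssl-mode=VERIFY_CA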

Deploy, Configure and use phpMyAdmin

There are several ways to do this, but I’m going to go with kubeapps to deploy phpMyAdmin. Run this to install kubeapps with a loadbalancer front-end:

helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create namespace kubeapps
helm install kubeapps --namespace kubeapps bitnami/kubeapps --set frontend.service.type=LoadBalancer

Find the External IP address for kubeapps and point a browser at it: kubectl get svc -n kubeapps kubeapps. Get the token from your .kube/config file to paste into the token field in kubeapps and click submit. Once in kubeapps, be sure to select the kubeapps namespace – you should see kubeapps itself under Applications.
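
If you’d rather not dig through the file by hand, something like this can print the token from your kubeconfig (assuming your login plugin stored a token for the first user entry; adjust the index if you have several):

kubectl config view --raw -o jsonpath='{.users[0].user.token}'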

logged into kubeapps in the kubeapps namespace

Click “Catalog” and type “phpmyadmin” into the search field. Click on the phpmyadmin box that results. On the next page, describing phpmyadmin, click the Blue deploy button.

Now you should be looking at the configuration yaml for phpmyadmin. First, set the Name at the top to something meaningful, like phpmyadmin. Then scroll down to the service section (around line 256 in this chart version); you should see the service type set to ClusterIP – replace ClusterIP with LoadBalancer.

Set the name and service type

Then scroll the rest of the way down, click the blue “Deploy X.Y.Z” button, and hang tight. After it deploys, the Access URLs will show the IP address for phpMyAdmin.

Access URLs for phpmyadmin after deployment

Click the Access URL to get to the Login page for phpMyAdmin and supply the IP Address of the mysql instance as well as the admin username and password we created above, then click Go.

Login to instance IP with the account we made earlier

Now you should be able to manage the databases and objects in the mysql instance!

phpmyadmin connected to our instance!

Notes

  • Kubernetes v1.20. There are filesystem permissions set on Tanzu Kubernetes Grid image 1.20.2 that prevent the MySQL instance pods from running. On TKG or vSphere with Tanzu, use v1.20.7 instead.
  • You don’t have to use cert-manager if you have another source for TLS certificates; just put the leaf cert, private key, and CA cert into the secret referenced by the MySQL instance yaml (see the sketch after these notes).
  • It looks like you can reuse the TLS cert for multiple databases; just keep in mind that if you connect using a name/FQDN that is not in the cert’s dnsNames, you may get a cert error.
  • This example uses Tanzu Kubernetes Grid Service in vSphere with Tanzu on vSphere 7 Update 2 using NSX-ALB.
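
For the second note above, here’s a minimal sketch of creating that secret by hand, assuming the operator only needs the same three keys cert-manager produces (the file names are placeholders for your own cert, key, and CA):

kubectl -n mysql-instances create secret generic mysql-tls-secret \
  --from-file=tls.crt=./server.crt \
  --from-file=tls.key=./server.key \
  --from-file=ca.crt=./ca.crt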

Adding trusted certs to nodes on TKGS 7.0 U2

A new feature added to TKGS as of 7.0 Update 2 is support for adding private SSL certificates to the “trust” on TKG cluster nodes.

This is very important as it finally provides a supported mechanism to use on-premises Harbor and other image registries.

It’s done by adding the encoded CAs to the “TkgServiceConfiguration”. The template for the TkgServiceConfiguration looks like this:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  proxy:
    httpProxy: http://<user>:<pwd>@<ip>:<port>

  trust:
    additionalTrustedCAs:
      - name: first-cert-name
        data: base64-encoded string of a PEM encoded public cert 1
      - name: second-cert-name
        data: base64-encoded string of a PEM encoded public cert 2

Notice that there are two new sections under spec; one for proxy and one for trust. This article is going to focus on trust for additional CAs.

If your registry uses a self-signed cert, you’ll just encode that cert itself. If you take advantage of an Enterprise CA or similar to sign your certs, you’d encode and import the “signing”, “intermediate” and/or “root” CA.

Example

Let’s add the certificate for a standalone Harbor (not the built-in Harbor instance in TKGS – its certificate is already trusted).

Download the certificate by clicking the “Registry Certificate” link

Run base64 -i <ca file> to return the base64 encoded content:

Provide a simple name and copy and paste the encoded cert into the data value:
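
Put together, the trust section of the TkgServiceConfiguration ends up looking something like this; the name harbor-ca is arbitrary and the data string is truncated here:

  trust:
    additionalTrustedCAs:
      - name: harbor-ca
        data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...   # output of base64 -i <ca file>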

Apply the TkgServiceConfiguration

After setting up your file, apply it to the Supervisor cluster:

kubectl apply -f ./TanzuServiceConfiguration.yaml

Notes

  • Existing TKG clusters will not automatically inherit the trust for the certificates
  • Clusters created after the TKGServiceConfiguration is applied will get the certificates
  • You can scale an existing TKG cluster to trigger a rollout with the certificates (see the example after these notes)
  • You can verify the certificates exist by connecting through SSH to the nodes and locating the certs under /etc/ssl/certs
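
One way to trigger that rollout is to bump the worker count on the TanzuKubernetesCluster; a sketch assuming a v1alpha1 cluster named my-cluster in namespace my-namespace that currently has two workers:

kubectl patch tanzukubernetescluster my-cluster -n my-namespace \
  --type merge -p '{"spec":{"topology":{"workers":{"count":3}}}}'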

Retrieving the Admin Password for Harbor Image Registry in Tanzu Kubernetes Grid Service

In TKGS on vSphere 7.0 through (at least) 7.0.1d, a Harbor Image Registry may be enabled for the vSphere Cluster (under Configure | Namespaces | Image Registry). This feature currently (as of 7.0.1d) requires the Pod Service, which in turn requires NSX-T integration.

As of 7.0.1d, the self-signed certificate created for this instance of Harbor is added to the trust for nodes in TKG clusters, making it easier (possible?) to use images from Harbor.

When you log in to Harbor as a regular user, you’ll notice that the menu is very sparse. Only the ‘admin’ account can access the “Administration” menu.

To get logged in as the ‘admin’ account, we’ll need to retrieve the password from a secret for the harbor controller in the Supervisor cluster.

Steps:

  • SSH into the vCenter Server as root, type ‘shell’ to get to bash shell
  • Type ‘/usr/lib/vmware-wcp/decryptK8Pwd.py‘ to return information about the Supervisor Cluster. The results include the IP for the cluster as well as the node root password
  • While still in the SSH session on the vCenter Server, ssh into the Supervisor Cluster node by entering ‘ssh root@<IP address from above>’. For the password, enter the PWD value from above.
  • Now, we have a session as root on a supervisor cluster control plane node.
  • Enter ‘kubectl get ns‘ to see a list of namespaces in the supervisor cluster. You’ll see a number of hidden, system namespaces in addition to those corresponding to the vSphere namespaces. Notice there is a namespace named “vmware-system-registry” in addition to one named “vmware-system-registry-#######”. The namespace with the number is where Harbor is installed.
  • Run ‘kubectl get secret -n vmware-system-registry-######‘ to get a list of secrets in the namespace. Locate the secret named “harbor-######-controller-registry”.
  • Run this to return the decoded admin password: kubectl get secret -n vmware-system-registry-###### harbor-######-controller-registry -o jsonpath='{.data.harborAdminPassword}' | base64 -d | base64 -d
  • In the cases I’ve seen so far, the password is about 16 characters long; if what you get back is longer than that, you may not have decoded it entirely. Note that the value must be decoded twice.
  • Once you’ve saved the password, enter “exit” three times to get out of the ssh sessions.

Notes

  • Don’t manipulate the authentication settings
  • The process above is not supported; VMware GSS will not help you complete these steps
  • Some features may remain disabled (vulnerability scanning for example)
  • As admin, you may configure registries and replication (although it’s probably unsupported with this built-in version of Harbor for now)

Configure Tanzu Kubernetes Grid to use Active Directory

Tanzu Kubernetes Grid includes and supports packages for dex and Gangway.  These are used to extend authentication to LDAP and OIDC endpoints.  Recall that Kubernetes does not do user-management or traditional authentication.  As a K8s cluster admin, you can create service accounts of course, but those are not meant to be used by developers.

Think of dex as a transition layer: it uses ‘connectors’ for upstream Identity Providers (IdPs) like Active Directory for LDAP or Okta for SAML, and presents an OpenID Connect (OIDC) endpoint for K8s to use.

TKG provides not only the packages mentioned above, but also a collection of yaml files and documentation for implementation.  The current documentation (as of May 12, 2020) for configuring authentication is pretty general; the default values in the config files are suited to OpenLDAP.  So, I thought I’d share the specific settings for connecting dex to Active Directory.

Assumptions:

    1. TKG Management cluster is deployed
    2. Following the VMware documentation
    3. Using the TKG-provided tkg-extensions
    4. dex will be deployed to management cluster or to a specific workload cluster

Edits to authentication/dex/vsphere/ldap/03-cm.yaml – from Docs

  1. Replace <MGMT_CLUSTER_IP> with the IP address of one of the control plane nodes of your management cluster.  This is one of the control plane nodes where we’re putting dex
  2. If the LDAP server is listening on the default port 636, which is the secured configuration, replace <LDAP_HOST> with the IP or DNS address of your LDAP server. If the LDAP server is listening on any other port, replace <LDAP_HOST> with the address and port of the LDAP server, for example 192.168.10.22:389 or ldap.mydomain.com:389.  Never, never, never use unencrypted LDAP.  You’ll need to specify port 636 unless your targeted AD controller is also a Global Catalog server in which case you’ll specify port 3269.  Check with the AD team if you’re unsure.
  3. If your LDAP server is configured to listen on an unsecured connection, uncomment insecureNoSSL: true. Note that such connections are not recommended as they send credentials in plain text over the network. Never, never, never use unencrypted LDAP.
  4. Update the userSearch and groupSearch parameters with your LDAP server configuration.  This needs much more detail – see the steps below

Edits to authentication/dex/vsphere/ldap/03-cm.yaml – AD specific

  1. Obtain the root CA public certificate for your AD controller. Save a base64-encoded version of the certificate: for example, base64 -i root64.cer > rootcer.b64 will write the data from the PEM-encoded root64.cer file into a base64-encoded file named rootcer.b64
  2. Add the base64-encoded certificate content to the rootCAData key.  Be sure to remove the leading “#”.  This is an alternative to using the rootCA key, where we’ll have to place the file on each Control Plane node
  3. Update the userSearch values as follows:
    • baseDN – default: ou=people,dc=vmware,dc=com – set to the DN of the OU in AD under which user accounts are found, e.g. ou=User Accounts,DC=ragazzilab,DC=com
    • filter – default: “(objectClass=posixAccount)” – set to “(objectClass=person)”
    • username – default: uid – set to userPrincipalName
    • idAttr – default: uid – set to DN (case-sensitive)
    • emailAttr – default: mail – set to userPrincipalName
    • nameAttr – default: givenName – set to cn
  4. Update the groupSearch values as follows (a consolidated example follows this list):
    • baseDN – default: ou=people,dc=vmware,dc=com – set to the DN of the OU in AD under which security groups are found, e.g. DC=ragazzilab,DC=com
    • filter – default: “(objectClass=posixGroup)” – set to “(objectClass=group)”
    • userAttr – default: uid – set to DN (case-sensitive)
    • groupAttr – default: memberUid – set to “member:1.2.840.113556.1.4.1941:” (this is necessary to search within nested groups in AD)
    • nameAttr – default: cn – set to cn
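
Pulled together, the ldap connector portion of the dex ConfigMap ends up looking something like the sketch below; the host, bind account, and DNs are placeholders based on the ragazzilab.com examples above (most AD environments disallow anonymous binds, so a bind account is assumed):

connectors:
- type: ldap
  id: ldap
  name: Active Directory
  config:
    host: dc01.ragazzilab.com:636
    insecureNoSSL: false
    rootCAData: <base64-encoded CA certificate>
    # Service account used to search AD (placeholder)
    bindDN: CN=svc-dex,OU=Service Accounts,DC=ragazzilab,DC=com
    bindPW: <service account password>
    userSearch:
      baseDN: OU=User Accounts,DC=ragazzilab,DC=com
      filter: "(objectClass=person)"
      username: userPrincipalName
      idAttr: DN
      emailAttr: userPrincipalName
      nameAttr: cn
    groupSearch:
      baseDN: DC=ragazzilab,DC=com
      filter: "(objectClass=group)"
      userAttr: DN
      groupAttr: "member:1.2.840.113556.1.4.1941:"
      nameAttr: cn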

Other important Notes
When you create the oidc secret in the workload clusters running Gangway, the clientSecret value is base64-encoded, but the corresponding secret for that workload cluster in the staticClients section of the dex ConfigMap is the plain, decoded value. This can be confusing since the decoded value is also randomly generated.
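
A small illustration of the relationship (openssl just generates a random value here; the cluster name and redirect URI are placeholders):

# Shared client secret, generated once
CLIENT_SECRET=$(openssl rand -hex 16)
# The Gangway oidc secret wants the base64-encoded form:
echo -n "$CLIENT_SECRET" | base64
# The dex ConfigMap staticClients entry takes the plain value:
#   staticClients:
#   - id: my-workload-cluster
#     redirectURIs: ['https://gangway.my-workload-cluster.mydomain.local/callback']
#     name: 'my-workload-cluster'
#     secret: <plain value of CLIENT_SECRET, not base64-encoded>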