Configuring Backup in Tanzu SQL with MySQL for Kubernetes

Backup & Restore

Prerequisite: A reachable S3 endpoint. It can be local or remote, but the pods must be able to resolve its name or IP. Create or select an existing bucket for your database backups. In this case, I have a minio instance running on-prem with a bucket named backup-mysql.
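If you need to create the bucket and you're using minio, the mc client can do it. A minimal sketch, assuming an mc alias named myminio has already been configured for your endpoint:

mc mb myminio/backup-mysql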

Create a secret for the S3 endpoint credentials. This account will need to be able to write to the database backup bucket. Here’s an example:

---
apiVersion: v1
kind: Secret
metadata:
  name: minio-creds
stringData:
  # S3 Credentials
  accessKeyId: "MYACCESSKEY"
  secretAccessKey: "MYSECRETKEY"
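Apply the secret in the namespace where your MySQL instances will live (mysql-instances in my case). The filename below is just what I saved the yaml above as:

kubectl apply -n mysql-instances -f minio-creds.yaml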

Create a MySQLBackupLocation. In the example below, we’re not using SSL with the minio endpoint, so I’m explicitly using port 80. More examples and details are in the official docs. I like to keep the backups organized, so I’ll create a backup location for each instance and specify a bucketPath for each.

---
apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQLBackupLocation
metadata:
  name: backuplocation-mysql-ha
spec:
  storage:
    # For S3 or Minio:
    s3:
      bucket: "backup-mysql-ha"
      bucketPath: "/mysql-ha/"
      # region: "us-east-1"
      endpoint:  "http://minio.ragazzilab.com:80" # optional, defaults to AWS
      forcePathStyle: true
      secret:
        name: minio-creds
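Apply the backup location and confirm the operator accepted it. The filename here is an assumption; use whatever you saved the yaml above as:

kubectl apply -n mysql-instances -f backuplocation-mysql-ha.yaml
kubectl get mysqlbackuplocation -n mysql-instances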

Test with a one-off backup. Create and apply a yaml like the following to request a backup without a schedule; this example requests a one-off backup of the mysql-ha instance to its corresponding backup location:

---
apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQLBackup
metadata:
  name: backup-mysql-ha-1off
spec:
  location:
    name: backuplocation-mysql-ha
  instance:
    name: mysql-ha

We can get the MySQLBackups to see that the backup has completed successfully:
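From the CLI, the check looks something like this (assuming the instances live in the mysql-instances namespace):

kubectl get mysqlbackup -n mysql-instances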

Create a backup Schedule

Now that we’ve confirmed that the backup location and credentials work as expected, we should add a backup schedule. Here’s an example:

---
apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQLBackupSchedule
metadata:
  name: mysql-ha-daily
spec:
  backupTemplate:
    spec:
      location:
        name: backuplocation-mysql-ha
      instance:
        name:  mysql-ha
  schedule: "@daily"

Apply this with kubectl apply -n mysql-instances -f backupschedule-mysql-ha-daily.yaml

I found that (unlike Velero) a backup does not begin immediately when the MySQLBackupSchedule is applied. At the scheduled time, however, a pod for the backup schedule will be created to run the backup job. This pod will remain intact to run subsequent backup jobs.

Backup Pods and created Backup objects

Lastly, regarding backups, keep in mind that the backup data on the S3 endpoint never expires; the backups will remain there until removed manually. This may be important if you have limited capacity.

Restore/Recover

From the docs:

MySQLRestores always restores to a new MySQL instance to avoid overwriting any data on an existing MySQL instance. The MySQL instance is created automatically when the restore is triggered. Tanzu MySQL for Kubernetes does not allow you to restore a backup to an existing MySQL instance. Although you can perform this manually by copying the MySQL data from the backup artifact onto an existing MySQL instance, VMware strongly discourages you from doing this because you might overwrite existing data on the MySQL instance.

So, we should not expect to restore directly to a running database instance. If we need to recover, we’ll create a new instance and restore the backup to it.

To create a restore, we’ll need the name of the MySQLBackup object to restore from and a name for the new database instance to create from that backup. We’ll put that into a yaml like the one below. Notice that we provide a spec for the new instance; I wanted a LoadBalancer for it, although we could instead repoint the existing LoadBalancer to the new proxy nodes (for HA) or the new database node (for standalone).

---
apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQLRestore
metadata:
  name: restore-ha
spec:
  backup:
    name: mysql-ha-daily-20210708-000005
  instanceTemplate:
    metadata:
      name: restored-mysql-database
    spec:
      storageSize: 2Gi
      imagePullSecret: harbor
      serviceType: LoadBalancer
      highAvailability:
        enabled: true

Apply the yaml to create the restore: kubectl apply -n mysql-instances -f ./restore-ha.yaml. You should see a new database instance pending and a MySQLRestore object running:

Job is running and instance is pending
Restore job succeeded and there is a new mysql instance
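To follow along from the CLI, something like this should show the restore job and the new instance (assuming the mysql-instances namespace):

kubectl get mysqlrestore,mysql -n mysql-instances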

Now, the choice is yours: copy data from the restored database back to the original, point the applications at the new database, or point the loadbalancer at the new database.

If you choose to repoint the existing load-balancer to the new database, here’s an example how to do that:

kubectl patch service -n mysql-instances mysql-ha -p '{"spec":{"selector":{"app.kubernetes.io/instance": "restored-mysql-database"}}}'

Configuring VMware Tanzu SQL with MySQL for Kubernetes for High Availability

As a follow-up to the getting started post, let’s touch on what it takes to configure a MySQL instance for High Availability in Tanzu SQL/MySQL.

Why this is important

In kubernetes, pods are generally treated as transient and ephemeral; they can be restarted quickly and are often stateless. This is certainly not the case with databases. We need to make sure our databases remain online and usable. MySQL itself provides a means to achieve High Availability with multiple instances and synchronization; we’ll be leveraging this capability today.

High Availability Architecture

Blatantly ripped off from the official docs

Unlike our stand-alone instance, when we create an instance with HA enabled, the operator creates FIVE pods and two services for us.

Pods created for HA instance
Services created for HA instance

You’ll notice that the mysql-ha LoadBalancer uses the proxy pods as its endpoints and the mysql-ha-members uses the database pods themselves.
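A quick way to confirm which pods back each service is to compare their endpoints; a sketch, assuming the instance lives in the mysql-instances namespace:

kubectl get endpoints -n mysql-instances mysql-ha mysql-ha-members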

Create an HA instance

In this example, I’m going to reuse the “harbor” docker-registry secret we created originally, but we’ll want a new tls certificate for this instance.

Create the TLS certificate

Just like previously, save the following as cert-ha.yaml and apply it with kubectl apply -n mysql-instances -f cert-ha.yaml to create a certificate for our instance. Adjust the names to match your environment of course. Notice the issuerRef.name is ca-issuer.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mysql-ha-secret
spec:
  # Secret names are always required.
  secretName: mysql-ha-secret
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
    - ragazzilab.com
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: mysql-ha.ragazzilab.com
  dnsNames:
  - mysql-ha.ragazzilab.com
  - mysql-ha
  - mysql-ha.mysql-instances.svc.cluster.local
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    # This is optional since cert-manager will default to this value however
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io

Create the instance

The only differences from the standalone instance are highAvailability.enabled: true and the name of the certificate secret.

apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQL
metadata:
  name: mysql-ha
spec:
  storageSize: 2Gi
  imagePullSecret: harbor
#### Set highAvailability.enabled:true to create three pods; one primary and two standby, plus two proxy pods
  highAvailability:
    enabled: true

#### Set the storage class name to change storage class of the PVC associated with this resource
  storageClassName: tanzu

#### Set the type of Service used to provide access to the MySQL database.
  serviceType: LoadBalancer # Defaults to ClusterIP

### Set the name of the Secret used for TLS
  tls:
    secret:
      name: mysql-ha-secret

Apply this as usual: kubectl apply -n mysql-instances -f ./mysql-ha.yaml
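To watch the pods come up, something like this works; wait until the three database pods and the two proxy pods are all Running:

kubectl get pods -n mysql-instances -w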

Create a database user

The steps to create the database user in an HA instance are just like those for the standalone instance once we determine which Pod is the primary/active and writable one. I was unable to make the one-liner method in the official docs work, so here’s what I did instead.

  1. Get the MySQL root password: kubectl get secret -n mysql-instances mysql-ha-credentials -o jsonpath='{.data.rootPassword}' | base64 -D
  2. Get a shell on the mysql-ha-0 pod: kubectl -n mysql-instances exec --stdin --tty pod/mysql-ha-0 -c mysql -- /bin/bash
  3. Get into the mysql cli: mysql -uroot -p<root password>
  4. Identify the Primary member: SELECT MEMBER_HOST, MEMBER_ROLE FROM performance_schema.replication_group_members;
  5. If the primary node is mysql-ha-0 (the one we’re on), proceed to the next step. If it is not, go back to step 2 to get a shell on the pod that is primary.
  6. Now, we should be on the mysql cli on the primary pod/member. Just like with the standalone instance, let’s create a user:
  CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%';
  FLUSH PRIVILEGES;

Type exit twice to get out of mysql and the pod.

Ok, so now, we have a running instance of mysql and we’ve created a user account that can manage it (we cannot log in remotely as root). We can connect phpMyAdmin to the instance using the admin credentials:

Showing the three members of the instance

Getting Started with VMware Tanzu SQL with MySQL for Kubernetes

Let’s deploy Tanzu SQL with MySQL on Kubernetes and use phpMyAdmin to interact with our database secured with TLS.

VMware Tanzu SQL with MySQL for Kubernetes is quite a mouthful. For this post, I’ll refer to the product as Tanzu SQL/MySQL. We’re going to deploy it onto an existing Tanzu Kubernetes Grid cluster.

Objectives:

  • Deploy Tanzu SQL with MySQL on Kubernetes
  • Use phpMyAdmin to interact with our databases
  • Secure database instances with TLS

Cluster Setup

Tanzu SQL/MySQL can run on any conformant kubernetes cluster; if you already have one running, you can skip ahead. If, like me, you want to provision a new TKG cluster for Tanzu SQL/MySQL, you’ll want settings like this:

  • K8s version 1.18, 1.19, or 1.20.7
  • Additional volume on /var/lib/containerd for the images
  • For a test cluster, best-effort-small control-plane nodes (3) and best-effort-medium worker nodes (2) are sufficient to start, YMMV.
  • Install metrics-server and add appropriate PSPs (see the example binding after this list)
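On vSphere with Tanzu / TKG clusters, a common way to satisfy the PSP requirement on a test cluster is to bind all authenticated users to the built-in privileged policy. This is a sketch that assumes the psp:vmware-system-privileged ClusterRole exists on your cluster; tighten it for anything beyond a lab:

kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated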

Get the images and chart

You’ll need to log in to pivnet and registry.pivotal.io and accept the EULA for VMware Tanzu SQL with MySQL for Kubernetes.

At a command-line, run docker login registry.pivotal.io, then provide your credentials. This is so that docker can pull down the images from VMware. Log in to your local container registry as well – you’ll need permissions to push images into your project.

In the following commands, replace “{local repo}” with the FQDN for your local registry and “{project}” with the project name in that repo that you can push images to.

docker pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance:1.0.0
docker pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator:1.0.0
docker tag registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance:1.0.0 {local repo}/{project}/tanzu-mysql-instance:1.0.0
docker tag registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator:1.0.0 {local repo}/{project}/tanzu-mysql-operator:1.0.0
docker push {local repo}/{project}/tanzu-mysql-instance:1.0.0
docker push {local repo}/{project}/tanzu-mysql-operator:1.0.0

Retrieve the helm chart:

export HELM_EXPERIMENTAL_OCI=1
helm chart pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator-chart:1.0.0
helm chart export registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator-chart:1.0.0

In the tanzu-sql-with-mysql-operator folder created by the helm export, copy values.yaml to values-override.yaml. Edit the keys with the correct values (we haven’t created the harbor secret yet, but we’ll use the name you provide here when we create it). Here’s an example:


imagePullSecret: harbor
operatorImage: "{local repo}/{project}/tanzu-mysql-operator:1.0.0"
instanceImage: "{local repo}/{project}/tanzu-mysql-instance:1.0.0"
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

Deploy Operator

We’ll want to create a namespace and a docker-registry secret (named harbor in the example below), and then install the chart.

kubectl create namespace tanzu-mysql
kubectl --namespace tanzu-mysql create secret docker-registry harbor --docker-server=https://{local repo} --docker-username=MYUSERNAME --docker-password=MYPASSWORD
helm install --namespace tanzu-mysql --values=./tanzu-sql-with-mysql-operator/values-override.yaml tanzu-mysql-operator ./tanzu-sql-with-mysql-operator/

Let’s check that the pods are running with kubectl get po -n tanzu-mysql

Before Creating an Instance…

We’ll need a namespace to put our mysql instances in, a secret in that namespace in order to pull the images from our local repo, a way to create TLS certificates, and phpMyAdmin. These commands will create the namespaces, create the docker-registry secret, and install cert-manager:

kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.2 --set installCRDs=true
kubectl create namespace mysql-instances
kubectl --namespace mysql-instances create secret docker-registry harbor --docker-server=https://<local repo> --docker-username=<username> --docker-password=<password>
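Before moving on, it’s worth confirming that cert-manager came up cleanly:

kubectl get pods -n cert-manager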

Working with cert-manager

Cert-manager uses issuers to create certificates from cert-requests. There are a variety of issuers supported, but we must have the ca certificate included in the resulting certificate secret – something not all issuers do. For example, self-signed and ACME are not suitable as they do not appear to include the ca certificate in the cert secret. Luckily, the CA issuer works fine and can use a self-signed issuer as its own signer. Save the following as cabootstrap.yaml to create a self-signed issuer, root cert and a CA issuer, and apply it with kubectl apply -n mysql-instances -f cabootstrap.yaml

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
spec:
  isCA: true
  commonName: my-selfsigned-ca
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: root-secret
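To confirm the bootstrap worked, check that the issuers exist and the root certificate shows Ready (the ClusterIssuer is cluster-scoped, the rest live in mysql-instances):

kubectl get clusterissuer selfsigned-issuer
kubectl get issuer,certificate -n mysql-instances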

Save the following as cert.yaml and apply it with kubectl apply -n mysql-instances -f cert.yaml to create a certificate for our instance. Adjust the names to match your environment of course. Notice the issuerRef.name is ca-issuer.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mysql-tls-secret
spec:
  # Secret names are always required.
  secretName: mysql-tls-secret
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
    - ragazzilab.com
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: mysql-tls.mydomain.local
  dnsNames:
  - mysql-tls.mydomain.local
  - mysql-tls
  - mysql-tls.mysql-instances.svc.cluster.local
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    # This is optional since cert-manager will default to this value however
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io

Confirm that the corresponding secret contains three keys (ca.crt, tls.crt, tls.key) by using kubectl describe secret -n mysql-instances mysql-tls-secret

Create an instance and add a user

Here is an example yaml for a MySQL instance. This will create an instance named mysql-tls that uses the docker-registry secret named harbor we created earlier and the certificate secret named mysql-tls-secret we created above, and it uses a LoadBalancer so we can access it from outside of the cluster.

apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQL
metadata:
  name: mysql-tls
spec:
  storageSize: 2Gi
  imagePullSecret: harbor

#### Set the storage class name to change storage class of the PVC associated with this resource
  storageClassName: tanzu

#### Set the type of Service used to provide access to the MySQL database.
  serviceType: LoadBalancer # Defaults to ClusterIP

### Set the name of the Secret used for TLS
  tls:
    secret:
      name: mysql-tls-secret

Apply this yaml to the mysql-instances namespace to create the instance: kubectl apply -n mysql-instances -f ./mysqlexample.yaml

Watch for two containers in the pod to be ready

Watch for the mysql-tls-0 pod to be running with 2 containers. When the instance is created, the operator also creates a secret containing the root password. Retrieve the root password with this command: kubectl get secret -n mysql-instances mysql-tls-credentials -o jsonpath='{.data.rootPassword}' | base64 -D
Retrieve the load-balancer address for the MySQL instance with this command: kubectl get svc -n mysql-instances mysql-tls

LoadBalancer address for our instance

Login to Pod and run mysql commands

Logging into the pod, then into mysql

Run this to get into a command prompt on the mysql pod: kubectl -n mysql-instances exec --stdin --tty pod/mysql-tls-0 -c mysql -- /bin/bash
Once in the pod and at a prompt, run this to get into the mysql cli as root: mysql -uroot -p<root password>
Once at the mysql prompt, run this to create a user named “admin” with a password set to “password” (PLEASE use a different password!)

  CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%';
  FLUSH PRIVILEGES;

Type exit twice to get out of mysql and the pod.

Ok, so now, we have a running instance of mysql and we’ve created a user account that can manage it (we cannot log in remotely as root).
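If you want to sanity-check remote access before setting up phpMyAdmin, the stock mysql client works against the LoadBalancer address retrieved earlier (the placeholder below is whatever EXTERNAL-IP your service shows):

mysql -h <loadbalancer address> -uadmin -p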

Deploy, Configure and use phpMyAdmin

There are several ways to do this, but I’m going to go with kubeapps to deploy phpMyAdmin. Run this to install kubeapps with a loadbalancer front-end:

helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create namespace kubeapps
helm install kubeapps --namespace kubeapps bitnami/kubeapps --set frontend.service.type=LoadBalancer

Find the External IP address for kubeapps and point a browser at it: kubectl get svc -n kubeapps kubeapps. Get the token from your .kube/config file to paste into the token field in kubeapps and click submit. Once in kubeapps, be sure to select the kubeapps namespace – you should see kubeapps itself under Applications.
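If you’d rather not hunt through the file by hand, a jsonpath query can pull the token out of the kubeconfig. The user name below is a placeholder; match it to the user entry in your .kube/config:

kubectl config view --raw -o jsonpath='{.users[?(@.name=="my-cluster-user")].user.token}'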

logged into kubeapps in the kubeapps namespace

Click “Catalog” and type “phpmyadmin” into the search field. Click on the phpmyadmin box that results. On the next page, describing phpmyadmin, click the Blue deploy button.

Now, you should be looking at a configuration yaml for phpmyadmin. First, set the Name up top to something meaningful, like phpmyadmin, then scroll down to around line 256, where you should see the service type currently set to ClusterIP; replace ClusterIP with LoadBalancer.

Set the name and service type

Then scroll the rest of the way to click the blue “Deploy X.Y.Z” button and hang tight. After it deploys, the Access URLs will show the IP address for phpMyAdmin.

Access URLs for phpmyadmin after deployment

Click the Access URL to get to the Login page for phpMyAdmin and supply the IP Address of the mysql instance as well as the admin username and password we created above, then click Go.

Login to instance IP with the account we made earlier

Now you should be able to manage the databases and objects in the mysql instance!

phpmyadmin connected to our instance!

Notes

  • Kubernetes v1.20. There are filesystem permissions set on Tanzu Kubernetes Grid image 1.20.2 that prevent the MySQL instance pods from running. On TKG or vSphere with Tanzu, use v1.20.7 instead.
  • You don’t have to use cert-manager if you have another source for TLS certificates; just put the leaf cert, private key and ca cert into the secret referenced by the mysql instance yaml.
  • Looks like you can reuse the TLS cert for multiple databases; just keep in mind that if you connect using a name/fqdn that is not in the cert’s dnsNames, you may get a cert error.
  • This example uses Tanzu Kubernetes Grid Service in vSphere with Tanzu on vSphere 7 Update 2 using NSX-ALB.