Prerequisite: A reachable S3 endpoint. It can be local or remote, but the pods must be able to resolve its name or IP address. Create or select an existing bucket for your database backups. In this case, I have a minio instance running on-prem with a bucket named backup-mysql.
Create a secret for the S3 endpoint credentials. This account will need to be able to write to the database backup bucket. Here’s an example:
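(This is a sketch: the accessKeyId/secretAccessKey key names follow the 1.0 product docs, and the secret name and values are placeholders to adapt to your environment.)

apiVersion: v1
kind: Secret
metadata:
  name: backuplocation-creds
stringData:
  # S3/minio credentials with write access to the backup bucket
  accessKeyId: "minio-access-key"
  secretAccessKey: "minio-secret-key"

Apply it to the namespace holding your instances: kubectl apply -n mysql-instances -f backuplocation-creds.yaml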
Create a MySQLBackupLocation. In the example below, we’re not using SSL with the minio endpoint, so I’m explicitly using port 80. More examples and details are found here. I like to keep the backups organized, so I’ll create a backup location for each instance and specify a bucketPath for each.
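Here’s a sketch of a backup location for the mysql-ha instance; the spec.storage.s3 field names follow the 1.0 product docs, and the endpoint, region and bucketPath values are placeholders for my environment.

apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQLBackupLocation
metadata:
  name: backuplocation-mysql-ha
spec:
  storage:
    s3:
      # non-SSL minio endpoint, explicitly on port 80
      endpoint: "http://minio.mydomain.local:80"
      forceSSL: false
      bucket: "backup-mysql"
      bucketPath: "mysql-ha/"
      region: "us-east-1"
      secret:
        name: backuplocation-creds

Apply it with kubectl apply -n mysql-instances -f backuplocation-mysql-ha.yaml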
Test with a one-off backup. Create and apply a yaml like the following to request a backup without a schedule; this example backs up the mysql-ha instance to its corresponding backup location:
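(Again a sketch: the MySQLBackup field names follow the 1.0 product docs, and the backup and location names are the hypothetical ones used above.)

apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQLBackup
metadata:
  name: backup-mysql-ha-manual
spec:
  location:
    name: backuplocation-mysql-ha
  instance:
    name: mysql-ha

Apply it with kubectl apply -n mysql-instances -f backup-mysql-ha-manual.yaml and confirm the backup completes.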
For recurring backups, create a MySQLBackupSchedule like the sketch below, then apply it: kubectl apply -n mysql-instances -f backupschedule-mysql-ha-daily.yaml
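(A sketch only: the backupTemplate structure follows the 1.0 product docs, the cron expression runs the backup daily at midnight, and the location/instance names are the ones used above.)

apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQLBackupSchedule
metadata:
  name: backupschedule-mysql-ha-daily
spec:
  backupTemplate:
    spec:
      location:
        name: backuplocation-mysql-ha
      instance:
        name: mysql-ha
  schedule: "0 0 * * *" # daily at midnight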
I found that (unlike Velero), when applying the MySQLBackupSchedule, a backup does not immediately begin. At the scheduled time however, a pod for the backup schedule will be created to run the backup job. This pod will remain intact to run subsequent backup jobs.
Backup Pods and created Backup objects
Lastly, regarding backups, keep in mind that the backup data on the S3 endpoint never expires; the backups will remain there until removed manually. This may be important if you have limited capacity.
Restore/Recover
From the docs:
MySQLRestores always restores to a new MySQL instance to avoid overwriting any data on an existing MySQL instance. The MySQL instance is created automatically when the restore is triggered. Tanzu MySQL for Kubernetes does not allow you to restore a backup to an existing MySQL instance. Although you can perform this manually by copying the MySQL data from the backup artifact onto an existing MySQL instance, VMware strongly discourages you from doing this because you might overwrite existing data on the MySQL instance.
So, we should not expect to restore directly to a running database instance. If we need to recover, we’ll create a new instance and restore the backup to it.
To create a restore, we’ll need the name of the MySQLBackup object to restore from and a name for the new database instance to create from that backup as part of the restore. We’ll put that into a yaml like the one below. Notice that we provide a spec for the new instance; I wanted a loadbalancer for it, although we are also able to repoint the existing loadbalancer to the new proxy nodes (for HA) or the new database node (for standalone).
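(A sketch: the MySQLRestore/instanceTemplate structure follows the 1.0 product docs, the backup name is the hypothetical one-off backup from above, and the new instance name matches the selector used in the patch example later.)

apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQLRestore
metadata:
  name: restore-ha
spec:
  backup:
    name: backup-mysql-ha-manual
  instanceTemplate:
    metadata:
      name: restored-mysql-database
    spec:
      storageSize: 2Gi
      imagePullSecret: harbor
      highAvailability:
        enabled: true
      storageClassName: tanzu
      serviceType: LoadBalancer
      tls:
        secret:
          name: mysql-ha-secret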
Apply the yaml to create the restore: kubectl apply -n mysql-instances -f ./restore-ha.yaml. You should see a new database instance pending and a MySQLRestore object running:
Job is running and instance is pending
Restore job succeeded and there is a new mysql instance
Now, the choice is yours: copy data from the restored database back to the original, point the applications at the new database, or point the loadbalancer at the new database.
If you choose to repoint the existing load-balancer to the new database, here’s an example of how to do that:
kubectl patch service -n mysql-instances mysql-ha -p '{"spec":{"selector":{"app.kubernetes.io/instance": "restored-mysql-database"}}}'
As a follow-up to the getting started post, let’s touch on what it takes to configure a MySQL instance for High Availability in Tanzu SQL/MySQL.
Why this is important
In kubernetes, pods are generally treated as transient and ephemeral: they can be restarted quickly and are often stateless. This is certainly not the case with databases. We need to make sure our databases remain online and usable. MySQL itself provides a means to do High Availability with multiple instances and synchronization; we’ll be leveraging this capability today.
High Availability Architecture
Blatantly ripped off from the official docs
Unlike our stand-alone instance, when we create an instance with HA enabled, the operator creates five pods and two services for us.
Pods created for HA instance
Services created for HA instance
You’ll notice that the mysql-ha LoadBalancer uses the proxy pods as its endpoints and the mysql-ha-members service uses the database pods themselves.
Create an HA instance
In this example, I’m going to reuse the “harbor” docker-registry secret we created originally, but we’ll want a new tls certificate for this instance.
Create the TLS certificate
Just like previously, save the following as cert-ha.yaml and apply it with kubectl apply -n mysql-instances -f cert-ha.yaml to create a certificate for our instance. Adjust the names to match your environment, of course. Notice that the issuerRef.name is ca-issuer.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mysql-ha-secret
spec:
  # Secret names are always required.
  secretName: mysql-ha-secret
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
      - ragazzilab.com
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: mysql-ha.ragazzilab.com
  dnsNames:
    - mysql-ha.ragazzilab.com
    - mysql-ha
    - mysql-ha.mysql-instances.svc.cluster.local
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    # This is optional since cert-manager will default to this value however
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io
Create the instance
The only differences from the standalone instance are highAvailability.enabled: true and the name of the certificate secret:
apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQL
metadata:
  name: mysql-ha
spec:
  storageSize: 2Gi
  imagePullSecret: harbor
  #### Set highAvailability.enabled:true to create three pods; one primary and two standby, plus two proxy pods
  highAvailability:
    enabled: true
  #### Set the storage class name to change storage class of the PVC associated with this resource
  storageClassName: tanzu
  #### Set the type of Service used to provide access to the MySQL database.
  serviceType: LoadBalancer # Defaults to ClusterIP
  ### Set the name of the Secret used for TLS
  tls:
    secret:
      name: mysql-ha-secret
Apply this as usual: kubectl apply -n mysql-instances -f ./mysql-ha.yaml
Create a database user
The steps to create the database user in an HA instance are just like those for the standalone instance once we determine which Pod is the primary/active and writable one. I was unable to make the one-liner method in the official docs work, so here’s what I did instead.
1. Get the MySQL root password: kubectl get secret -n mysql-instances mysql-ha-credentials -o jsonpath='{.data.rootPassword}' | base64 -D
2. Get a shell on the mysql-ha-0 pod: kubectl -n mysql-instances exec --stdin --tty pod/mysql-ha-0 -c mysql -- /bin/bash
3. Get into the mysql cli: mysql -uroot -p<root password>
4. Identify the primary member: SELECT MEMBER_HOST, MEMBER_ROLE FROM performance_schema.replication_group_members;
5. If the primary node is mysql-ha-0 (the one we’re on), proceed to the next step. If it is not, go back to step 2 to get a shell on the pod that is primary.
Now, we should be on the mysql cli on the primary pod/member. Just like with the standalone instance, let’s create a user:
CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON * . * TO 'admin'@'%';
FLUSH PRIVILEGES;
Type exit twice to get out of mysql and the pod.
Ok, so now, we have a running instance of mysql and we’ve created a user account that can manage it (cannot login remotely as root). We can connect phpMyAdmin to the instance using the admin credentials:
Let’s deploy Tanzu SQL with MySQL on Kubernetes and use phpMyAdmin to interact with our database, secured with TLS.
VMware Tanzu SQL with MySQL for Kubernetes is quite a mouthful. For this post, I’ll refer to the product as Tanzu SQL/MySQL. We’re going to deploy it onto an existing Tanzu Kubernetes Grid cluster.
Objectives:
Deploy Tanzu SQL with MySQL on Kubernetes
Use phpMyAdmin to interact with our databases
Secure database instances with TLS
Cluster Setup
Tanzu SQL/MySQL can run on any conformant kubernetes cluster; if you already have one running, you can skip ahead. If, like me, you want to provision a new TKG cluster for Tanzu SQL/MySQL, you’ll want settings like this:
K8s version 1.18 or 1.19 or 1.20.7
Additional volume on /var/lib/containerd for the images
For a test cluster, best-effort-small control-plane nodes (3) and best-effort-medium worker nodes (2) are sufficient to start, YMMV.
Install metrics-server and add appropriate PSPs
Get the images and chart
You’ll need to log in to pivnet and registry.pivotal.io and accept the EULA for VMware Tanzu SQL with MySQL for Kubernetes.
At a command-line, run docker login registry.pivotal.io and provide your credentials. This is so that docker can pull down the images from VMware. Log in to your local container registry as well – you’ll need permissions to push images into your project.
In the following commands, replace “{local repo}” with the FQDN for your local registry and “{project}” with the project name in that repo that you can push images to.
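Here’s a sketch of pulling and relocating the chart and images. I’m assuming Helm 3’s experimental OCI commands (helm chart pull/export, which create the folder referenced below) and a registry.pivotal.io/tanzu-mysql-for-kubernetes project path – use the exact chart and image paths shown on the product’s download page for your release.

export HELM_EXPERIMENTAL_OCI=1

# pull and export the operator chart (creates the tanzu-sql-with-mysql-operator folder)
helm chart pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator-chart:1.0.0
helm chart export registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator-chart:1.0.0

# pull the operator and instance images, retag them and push them to the local registry
docker pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator:1.0.0
docker pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance:1.0.0
docker tag registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator:1.0.0 {local repo}/{project}/tanzu-mysql-operator:1.0.0
docker tag registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance:1.0.0 {local repo}/{project}/tanzu-mysql-instance:1.0.0
docker push {local repo}/{project}/tanzu-mysql-operator:1.0.0
docker push {local repo}/{project}/tanzu-mysql-instance:1.0.0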
In the tanzu-sql-with-mysql-operator folder created by the helm export, copy values.yaml to values-override.yaml. Edit the keys with the correct values (we haven’t created the harbor secret yet, but we’ll name it the value you provide here). Here’s an example:
imagePullSecret: harbor
operatorImage: "{local repo}/{project}/tanzu-mysql-operator:1.0.0"
instanceImage: "{local repo}/{project}/tanzu-mysql-instance:1.0.0"
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
Deploy Operator
We’ll want to create a namespace, a docker-registry secret (named harbor in the example below) and then install the chart.
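These commands are a sketch, assuming the tanzu-mysql namespace (checked in the next step), a Harbor registry login and an arbitrary Helm release name; adjust the registry server and credentials for your environment.

kubectl create namespace tanzu-mysql

# docker-registry secret; the name must match imagePullSecret in values-override.yaml
kubectl create secret docker-registry harbor \
  --namespace tanzu-mysql \
  --docker-server={local repo} \
  --docker-username=<registry username> \
  --docker-password=<registry password>

# install the operator chart exported earlier, with our overridden values
helm install tanzu-mysql-operator ./tanzu-sql-with-mysql-operator \
  --namespace tanzu-mysql \
  --values ./tanzu-sql-with-mysql-operator/values-override.yaml \
  --wait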
Let’s check that the pods are running by running kubectl get po -n tanzu-mysql
Before Creating an Instance…
We’ll need to create a namespace to put our mysql instances in, a secret in that namespace in order to pull the images from our local repo, a way to create TLS certificates, and phpMyAdmin. These commands will create the namespace, create the docker-registry secret and install cert-manager:
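(A sketch: the cert-manager version pinned below is an assumption – any reasonably current release should work – and the registry server and credentials are placeholders.)

kubectl create namespace mysql-instances

kubectl create secret docker-registry harbor \
  --namespace mysql-instances \
  --docker-server={local repo} \
  --docker-username=<registry username> \
  --docker-password=<registry password>

# install cert-manager
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml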
Cert-manager uses issuers to create certificates from cert-requests. There are a variety of issuers supported, but we must have the ca certificate included in the resulting certificate secret – something not all issuers do. For example, self-signed and ACME are not suitable as they do not appear to include the ca certificate in the cert secret. Luckily, the CA issuer works fine and can use a self-signed issuer as its own signer. Save the following as cabootstrap.yaml to create a self-signed issuer, a root cert and a CA issuer, and apply it with kubectl apply -n mysql-instances -f cabootstrap.yaml
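(This follows the CA-bootstrapping pattern from the cert-manager docs; the ca-issuer name is the one referenced by the certificates below, while the self-signed issuer and root secret names are placeholders.)

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-ca
spec:
  isCA: true
  commonName: selfsigned-ca
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: root-secret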
Save the following as cert.yaml and apply it with kubectl apply -n mysql-instances -f cert.yaml to create a certificate for our instance. Adjust the names to match your environment, of course. Notice that the issuerRef.name is ca-issuer.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mysql-tls-secret
spec:
  # Secret names are always required.
  secretName: mysql-tls-secret
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
      - ragazzilab.com
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: mysql-tls.mydomain.local
  dnsNames:
    - mysql-tls.mydomain.local
    - mysql-tls
    - mysql-tls.mysql-instances.svc.cluster.local
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    # This is optional since cert-manager will default to this value however
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io
Confirm that the corresponding secret contains three keys – ca.crt, tls.crt and tls.key – by using kubectl describe secret -n mysql-instances mysql-tls-secret
Create an instance and add a user
Here is an example yaml for a MySQL instance. This will create an instance named mysql-tls, using the docker-registry secret named harbor that we created earlier and the certificate secret named mysql-tls-secret that we created above, with a LoadBalancer so we can access it from outside of the cluster.
apiVersion: with.sql.tanzu.vmware.com/v1
kind: MySQL
metadata:
  name: mysql-tls
spec:
  storageSize: 2Gi
  imagePullSecret: harbor
  #### Set the storage class name to change storage class of the PVC associated with this resource
  storageClassName: tanzu
  #### Set the type of Service used to provide access to the MySQL database.
  serviceType: LoadBalancer # Defaults to ClusterIP
  ### Set the name of the Secret used for TLS
  tls:
    secret:
      name: mysql-tls-secret
Apply this yaml to the mysql-instances namespace to create the instance: kubectl apply -n mysql-instances -f ./mysqlexample.yaml
Watch for two containers in the pod to be ready
Watch for the mysql-tls-0 pod to be running with 2 containers. When the instance is created, the operator also creates a secret containing the root password.
Retrieve the root password with this command: kubectl get secret -n mysql-instances mysql-tls-credentials -o jsonpath='{.data.rootPassword}' | base64 -D
Retrieve the load-balancer address for the MySQL instance with this command: kubectl get svc -n mysql-instances mysql-tls
LoadBalancer address for our instance
Login to Pod and run mysql commands
Logging into the pod, then into mysql
Run this to get into a command prompt on the mysql pod: kubectl -n mysql-instances exec --stdin --tty pod/mysql-tls-0 -c mysql -- /bin/bash
Once in the pod and at a prompt, run this to get into the mysql cli as root: mysql -uroot -p<root password>
Once at the mysql prompt, run this to create a user named “admin” with a password set to “password” (PLEASE use a different password!)
CREATE USER 'admin'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON * . * TO 'admin'@'%';
FLUSH PRIVILEGES;
Type exit twice to get out of mysql and the pod.
Ok, so now, we have a running instance of mysql and we’ve created a user account that can manage it (cannot login remotely as root).
Deploy, Configure and use phpMyAdmin
There are several ways to do this, but I’m going to go with kubeapps to deploy phpMyAdmin. Run this to install kubeapps with a loadbalancer front-end:
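(A sketch using the Bitnami kubeapps chart; the namespace, release name and the frontend.service.type value are assumptions to adapt to your environment.)

kubectl create namespace kubeapps
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install kubeapps --namespace kubeapps bitnami/kubeapps \
  --set frontend.service.type=LoadBalancer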
Find the External IP address for kubeapps and point a browser at it: kubectl get svc -n kubeapps kubeapps. Get the token from your .kube/config file to paste into the token field in kubeapps and click submit. Once in kubeapps, be sure to select the kubeapps namespace – you should see kubeapps itself under Applications.
logged into kubeapps in the kubeapps namespace
Click “Catalog” and type “phpmyadmin” into the search field. Click on the phpmyadmin box that results. On the next page, describing phpmyadmin, click the Blue deploy button.
Now, you should be looking at a configuration yaml for phpmyadmin. First, set the Name up top to something meaningful, like phpmyadmin, then scroll down to line 256, where you should see the service type currently set to ClusterIP; replace ClusterIP with LoadBalancer.
Set the name and service type
Then scroll the rest of the way down, click the blue “Deploy X.Y.Z” button and hang tight. After it deploys, the Access URLs will show the IP address for phpMyAdmin.
Access URLs for phpmyadmin after deployment
Click the Access URL to get to the Login page for phpMyAdmin and supply the IP Address of the mysql instance as well as the admin username and password we created above, then click Go.
Login to instance IP with the account we made earlier
Now you should be able to manage the databases and objects in the mysql instance!
phpmyadmin connected to our instance!
Notes
Kubernetes v1.20. There are filesystem permissions set on Tanzu Kubernetes Grid image 1.20.2 that prevent the MySQL instance pods from running. On TKG or vSphere with Tanzu, use v1.20.7 instead.
You don’t have to use cert-manager if you have another source for TLS certificates, just put the leaf cert, private key and ca cert into the secret referenced by the mysql instance yaml.
Looks like you can reuse the TLS cert for multiple databases; just keep in mind that if you connect using a name/FQDN that is not in the cert’s dnsNames, you may see a certificate error.
This example uses Tanzu Kubernetes Grid Service in vSphere with Tanzu on vSphere 7 Update 2 using NSX-ALB.