When new versions of the VoltDB software are released, they are accompanied by new versions of the Helm charts that support them. By default, when you "install" a "release" of VoltDB with Helm, you get the latest version of the VoltDB software available at that time. Your release stays on its initial version of VoltDB as long as you do not update the charts and VoltDB Operator in use.
You can upgrade an existing database instance to a more recent version using a combination of kubectl and helm commands to update the charts, the Operator, and the VoltDB software. The steps to upgrade the VoltDB software in Kubernetes are:
Update your copy of the VoltDB repository.
Update the custom resource definition (CRD) for the VoltDB Operator.
Upgrade the VoltDB Operator and software.
The following sections explain how to perform each step of this process, including a full example of the entire process in Example 5.1, “Process for Upgrading the VoltDB Software”. However, when upgrading an XDCR cluster, there is an additional step required to ensure the cluster's schema is maintained during the upgrade process. Section 5.3.5, “Updating VoltDB for XDCR Clusters” explains the extra step necessary for XDCR clusters.
To use the helm upgrade command to upgrade the VoltDB software, the starting version of VoltDB must be 10.1 or higher. See the VoltDB Release Notes for instructions when using Helm to upgrade earlier versions of VoltDB.
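If you are not sure which version a cluster is currently running, one way to check (assuming you have sqlcmd access to the cluster, for example through kubectl port forwarding as shown later in this section) is to query the @SystemInformation system procedure:
$ echo "exec @SystemInformation overview" | sqlcmd | grep VERSION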
The first step when upgrading VoltDB is to make sure your local copy of the VoltDB Helm repository is up to date. You do this using the helm repo update command:
$ helm repo update
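If you want to confirm which chart versions are now available in the updated repository, you can optionally list them with the standard helm search command (the exact versions shown will vary):
$ helm search repo voltdb/voltdb --versions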
The second step is to update the custom resource definition (CRD) for the VoltDB Operator. This allows the Operator to be upgraded to the latest version.
To update the CRD, you must first save a copy of the latest chart, then extract the CRD from the resulting tar file. The helm pull command saves the chart as a gzipped tar file and the tar command lets you extract the CRD. For example:
$ helm pull voltdb/voltdb
$ ls *.tgz
voltdb-3.1.0.tgz
$ tar --strip-components=2 -xzf voltdb-3.1.0.tgz \
      voltdb/crds/voltdb.com_voltdbclusters_crd.yaml
Note that the file name of the resulting tar file includes the chart version number. Once you have extracted the CRD as a YAML file, you can use it to replace the CRD in Kubernetes:
$ kubectl replace -f voltdb.com_voltdbclusters_crd.yaml
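If you want to verify that the CRD was replaced, a quick check is to list the VoltDB CRDs (a minimal sketch; the exact CRD name may differ in your installation):
$ kubectl get crd | grep voltdb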
Once you update the CRD, you are ready to upgrade VoltDB. You do this using the helm upgrade command and specifying the new software version you wish to use on the command line. What happens when you issue the helm upgrade command depends on whether you are performing a standard software upgrade or an in-service upgrade.
For a standard software upgrade, you simply issue the helm upgrade command specifying the software version in the global.voltdbVersion property. For example:
$ helm upgrade mydb voltdb/voltdb --reuse-values \
--set global.voltdbVersion=13.2.1
When you issue the helm upgrade command, the operator saves a final snapshot, shuts down the cluster, restarts the cluster with the new version and restores the snapshot. For example, Example 5.1, “Process for Upgrading the VoltDB Software” summarizes all of the commands used to update a database release to VoltDB version 13.2.1.
Example 5.1. Process for Upgrading the VoltDB Software
$ # Update the local copy of the charts
$ helm repo update
$
$ # Extract and replace the CRD
$ helm pull voltdb/voltdb
$ ls *.tgz
voltdb-3.1.0.tgz
$ tar --strip-components=2 -xzf voltdb-3.1.0.tgz \
      voltdb/crds/voltdb.com_voltdbclusters_crd.yaml
$ kubectl replace -f voltdb.com_voltdbclusters_crd.yaml
$
$ # Upgrade the Operator and VoltDB software
$ helm upgrade mydb voltdb/voltdb --reuse-values \
      --set global.voltdbVersion=13.2.1
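While a standard upgrade runs, you can watch the cluster pods terminate and restart with the new version using a standard kubectl command (the pod names follow the mydb-voltdb-cluster-N pattern shown elsewhere in this section):
$ kubectl get pods -w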
Standard upgrades are convenient and can upgrade across multiple versions of the VoltDB software. However, they do require downtime while the cluster is shut down and restarted. In-service upgrades avoid the need for downtime by upgrading the cluster nodes one at a time, while the database remains active and processing transactions.
To use in-service upgrades, you must have an appropriate software license (in-service upgrades are a separately licensed feature), the cluster must be K-safe (that is, have a K-safety factor of one or more), and the difference between the current software version and the version you are upgrading to must fall within the limits of in-service upgrades. The following sections describe:
What versions can be upgraded using an in-service upgrade
How to perform the in-service upgrade
How to monitor the upgrade process
How to roll back an in-service upgrade if the upgrade fails
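As a quick sanity check of the K-safety requirement, you can inspect the release's current settings with helm get values. This is only a sketch — the property path for the K-safety factor (cluster.config.deployment.cluster.kfactor) is an assumption about your chart configuration and may differ:
$ helm get values mydb --all | grep -i kfactor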
There are limits to which software versions can use in-service upgrades. The following rules describe which releases can be upgraded with an in-service upgrade and which cannot.
You can upgrade between any two patch releases. That is, any two releases where only the third and final number of the version identifier changes. For example, upgrading from 13.1.1 to 13.1.4.
You can also use in-service upgrades to upgrade between two consecutive minor releases. That is, where the second number in the version identifier differs. For example, you can upgrade from V13.2.0 to V13.3.0. You can also upgrade between any patch releases within those minor releases. For example, upgrading from V13.2.3 to V13.3.0.
You cannot use in-service upgrades to upgrade more than one minor version at a time. In other words, you can upgrade from V13.2.0 to V13.3.0 but you cannot perform an in-service upgrade from V13.2.0 to V13.4.0. To transition across multiple minor releases your options are to perform consecutive in-service upgrades (for example, from V13.2.0 to V13.3.0, then from V13.3.0 to V13.4.0; see the sketch following these rules) or to perform a regular upgrade where all cluster nodes are upgraded at one time.
You cannot use in-service upgrades between major versions of VoltDB. That is, where the first number in the version identifier is different. For example, you must perform a full cluster upgrade when migrating from V13.x.x to V14.0.0 or later.
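For example, a transition from V13.2.0 to V13.4.0 could be sketched as two consecutive in-service upgrades, using the enableInServiceUpgrade property described in the next section (the version numbers are illustrative, and the first upgrade must finish before you start the second):
$ helm upgrade mydb voltdb/voltdb --reuse-values \
      --set cluster.clusterSpec.enableInServiceUpgrade=true \
      --set global.voltdbVersion=13.3.0
$ # ...wait for the first upgrade to complete, then:
$ helm upgrade mydb voltdb/voltdb --reuse-values \
      --set global.voltdbVersion=13.4.0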
If your cluster meets the requirements, you can use the in-service upgrade process to automate the software update and eliminate the downtime associated with standard upgrades. The procedure for performing an in-service upgrade is:
Set the property cluster.clusterSpec.enableInServiceUpgrade to true to allow the upgrade.
Set the property global.voltdbVersion to the software version you want to upgrade to.
For example, the following command performs an in-service upgrade from V13.1.2 to V13.2.0:
$ helm upgrade mydb voltdb/voltdb --reuse-values \
--set cluster.clusterSpec.enableInServiceUpgrade=true \
--set global.voltdbVersion=13.2.0
Once the upgrade is complete, it is a good idea to reset the enableInServiceUpgrade property to false so as not to accidentally trigger an upgrade during normal operations.
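For example, a minimal command to reset the property after the upgrade completes:
$ helm upgrade mydb voltdb/voltdb --reuse-values \
      --set cluster.clusterSpec.enableInServiceUpgrade=false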
Once you initiate an in-service upgrade, the process proceeds by itself until completion. At a high level you can monitor the current status of the upgrade using the @SystemInformation system procedure with the OVERVIEW selector and looking for the VERSION keyword. For example, in the following command output, the first column is the host ID and the last column is the currently installed software version for that host. Once all hosts report using the upgraded software version, the upgrade is complete.
$ echo "exec @SystemInformation overview" | sqlcmd | grep VERSION 2 VERSION 13.1.2 1 VERSION 13.1.2 0 VERSION 13.1.3
During the upgrade, the Volt Operator reports various stages of the process as events to Kubernetes. So you can monitor the progression of the upgrade in more detail using the kubectl get events command. For example, the following is an abbreviated listing of events you might see during an in-service upgrade. (The messages often contain additional information concerning the pods or the software versions being upgraded from and to.)
$ kubectl get events -w
11m Normal RollingUpgrade mydb-voltdb-cluster Gracefully terminating pod 2
11m Normal RollingUpgrade mydb-voltdb-cluster Gracefully terminated pod 2
11m Normal RollingUpgrade mydb-voltdb-cluster Recycling Gracefully terminated pod mydb-voltdb-cluster-2
9m43s Normal RollingUpgrade mydb-voltdb-cluster Recycled pod 2 has rejoined the cluster
9m42s Normal RollingUpgrade mydb-voltdb-cluster Pod mydb-voltdb-cluster-2 is now READY
9m35s Normal RollingUpgrade mydb-voltdb-cluster Gracefully terminating pod 1
[ . . . ]
Once the upgrade is finished, the Operator reports this as well:
5m10s Normal RollingUpgrade mydb-voltdb-cluster RollingUpgrade Done.
The in-service upgrade process is automatic on Kubernetes — once you initiate the upgrade, the Volt Operator handles all of the activities until the upgrade is complete. However, if the upgrade fails for any reason — for example, if a node fails to rejoin the cluster — you can roll back the upgrade, returning the cluster to its original software version.
The Volt Operator detects an error during the upgrade whenever the VoltDB server process fails. The failure is reported as an appropriate series of events to Kubernetes:
12m Warning RollingUpgrade mydb-voltdb-cluster Rolling Upgrade failed upgrading from... to...
12m Normal RollingUpgrade mydb-voltdb-cluster Please update the clusterSpec image back to...
In addition to monitoring the events, you may wish to use the kubectl commands get events, get pods, and logs to determine exactly why the node is failing.
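For example, the following commands are one way to gather that information (the pod name is illustrative; if a pod runs more than one container, add -c to select the VoltDB container):
$ kubectl get pods
$ kubectl describe pod mydb-voltdb-cluster-2
$ kubectl logs mydb-voltdb-cluster-2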
The next step is to cancel the upgrade by initiating a rollback, which you do by resetting the image tag to the original version number. Invoking the rollback is a manual task. However, once the rollback is initiated, the Operator automates the process of returning the cluster to its original state. Consider the previous example where you are upgrading from V13.1.2 to V13.2.0. Let us assume three nodes had upgraded but a fourth was refusing to join the cluster. You could initiate a rollback by resetting the global.voltdbVersion property to V13.1.2:
$ helm upgrade mydb voltdb/voltdb --reuse-values \
      --set global.voltdbVersion=13.1.2
Once you initiate the rollback, the Volt Operator stops the node currently being upgraded and restarts it using the original software version. After that process completes, the Operator goes through the nodes that had already been upgraded, one at a time, downgrading them back to the original software. Once all nodes are reset and have rejoined the cluster, the rollback is complete.
Note that an in-service rollback can only occur if you initiate the rollback during the upgrade process. Once the in-service upgrade is complete and all nodes are running the new software version, resetting the image tag will force the cluster to perform a standard software downgrade, shutting down the cluster as a whole and restarting with the earlier version.
When upgrading an XDCR cluster, there is one extra step you must pay attention to. Normally, during the upgrade, VoltDB saves and restores a snapshot between versions, so all data and schema information is maintained. When upgrading an XDCR cluster, however, the data and schema are deleted, since the cluster will need to reload the data from another cluster in the XDCR relationship once the upgrade is complete.
Loading the data is automatic. But loading the schema depends on the schema being stored properly before the upgrade begins.
If the schema was loaded through the YAML properties cluster.config.schemas and cluster.config.classes originally and has not changed, the schema and classes will be restored automatically. However, if the schema was loaded manually or has been changed since it was originally loaded, you must make sure a current copy of the schema and classes is available after the upgrade. There are two ways to do this.
For both methods, the first step is to save a copy of the schema and the classes. You can do this using the voltdb get schema and voltdb get classes commands. For example, using Kubernetes port forwarding you can save a copy of the schema and class JAR file to your local working directory:
$ kubectl port-forward mydb-voltdb-cluster-0 21212 &
$ voltdb get schema -o myschema.sql
$ voltdb get classes -o myclasses.jar
Once you have copies of the current schema and class files, you can either set them as the default schema and classes for your database release before you upgrade the software or you can set them in the same command as you upgrade the software. For example, the following commands set the default schema and classes first, then upgrade the Operator and server software. Alternately, you could put the two --set-file arguments and the --set argument in a single command.
$ helm upgrade mydb voltdb/voltdb --reuse-values \
      --set-file cluster.config.schemas.mysql=myschema.sql \
      --set-file cluster.config.classes.myjar=myclasses.jar
$ helm upgrade mydb voltdb/voltdb --reuse-values \
      --set global.voltdbVersion=12.3.1
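After the upgrade completes and the cluster has reloaded its data, you can optionally confirm that the schema and classes were restored by retrieving them again, as described earlier (a sketch using port forwarding to one of the pods; the output file names are arbitrary):
$ kubectl port-forward mydb-voltdb-cluster-0 21212 &
$ voltdb get schema -o upgraded-schema.sql
$ voltdb get classes -o upgraded-classes.jar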