Finally, as new versions of VoltDB become available, you will want to upgrade the VoltDB software on your database cluster. The simplest approach for upgrading a VoltDB cluster is to pause the database, save the data, shut down the cluster, upgrade the software on all servers, then restart the database and restore the data.
However, this method involves downtime while the software is being updated. An alternative is to use passive database replication (DR) to copy the active database contents to a new cluster, then switch the application clients to point to the new server. The advantage of this process is that the only downtime the business application sees is the time needed to promote the new cluster and redirect the clients.
The following sections describe both approaches:
To upgrade the VoltDB software on a single database cluster, you must first shut down the database, then upgrade all servers in the cluster before restarting the database. The steps to perform this procedure are:
Place the database in admin mode (voltadmin pause).
Perform a manual snapshot of the database (voltadmin save --blocking).
Shut down the database (voltadmin shutdown).
Upgrade VoltDB on all cluster nodes.
Start a new database using the voltdb create --force option, starting in admin mode (as specified in the deployment file).
Restore the snapshot created in Step #2 (voltadmin restore).
Return the database to normal operations (voltadmin resume).
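Taken together, the steps above might look like the following sketch from the command line. The host name, snapshot directory, and snapshot nickname are placeholders for your own environment, and the exact command options may vary by VoltDB version; this assumes the deployment file specifies starting in admin mode.

```shell
# Sketch of a single-cluster software upgrade; oldsvr1, the snapshot
# directory, and the nickname "upgradesnap" are illustrative placeholders.
voltadmin pause --host=oldsvr1                       # 1. enter admin mode
voltadmin save --blocking --host=oldsvr1 \
    /tmp/voltdb/backup upgradesnap                   # 2. blocking snapshot
voltadmin shutdown --host=oldsvr1                    # 3. stop the database

# 4. upgrade the VoltDB software on every node in the cluster, then...

# 5. start a new, empty database (admin mode set in the deployment file)
voltdb create --force --deployment=deployment.xml --host=oldsvr1

voltadmin restore --host=oldsvr1 \
    /tmp/voltdb/backup upgradesnap                   # 6. restore the snapshot
voltadmin resume --host=oldsvr1                      # 7. resume operations
```

Because these commands operate against a live cluster, they are shown here only as an ordered outline of the procedure, not as a script to run verbatim.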
Note that you must use restore after a software upgrade; you cannot use command logs to recover across software versions. However, you can use database replication (DR) between clusters running two different versions, as described in the following section.
When upgrading the VoltDB software in a production environment, it is possible to minimize the disruption to client applications by upgrading across two clusters using database replication (DR). To use this process you need a second database cluster to act as the DR replica and you must have a unique cluster ID assigned to the current database.
The basic process for upgrading the VoltDB software using DR is to:
Install the new VoltDB software on the secondary cluster
Use passive DR to synchronize the database contents from the current cluster to the new cluster
Pause the current database and promote the new cluster, switching the application clients to the new upgraded database
The following sections describe in detail the prerequisites for using this process, the steps to follow, and — in case there are any issues with the updated database — the process for falling back to the previous software version.
The prerequisites for using DR to upgrade VoltDB are:
A second cluster with the same configuration (that is, the same number of servers and sites per host) as the current database cluster.
The current database cluster must have a unique cluster ID assigned in its deployment file.
The cluster ID is assigned in the <dr> section of the deployment file and must be set when the cluster starts; it cannot be added or altered while the database is running. So if you are considering using this process for upgrading your production systems, be sure to add a <dr> tag to the deployment file and assign a unique cluster ID when starting the database, even if you do not plan on using DR for normal operations.
For example, you would add the following element to the deployment file when starting your primary database cluster to assign it the unique ID of 3:

<dr id="3"/>
An important constraint to be aware of when using this process is that you must not make any schema changes during the upgrade process. This includes the period after the upgrade while you verify the application's proper operation on the new software version. If any changes are made to the schema, you may not be able to readily fall back to the previous version.
The procedure for upgrading the VoltDB software on a running database using DR is the following. In the examples,
we assume the existing database is running on a cluster with the nodes oldsvr1 and oldsvr2, and the new cluster includes the servers newsvr1 and newsvr2. We will assign the clusters the unique IDs 3 and 4, respectively.
Install the new VoltDB software on the secondary cluster.
Start the second cluster as a replica of the current database cluster.
Once the new software is installed, create a new database on the secondary server using the voltdb create --replica command and including the necessary DR configuration to create a replica of the current database. For example, the deployment file on the new cluster might look like this:
<dr id="4">
    <connection source="oldsvr1,oldsvr2"/>
</dr>
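With that deployment file in place (saved, for example, as deployment.xml, an assumed file name), the replica could be started with a command along these lines; the exact options may vary by VoltDB version:

```shell
# Start the new cluster as a replica of the existing database.
# deployment.xml and newsvr1 are placeholder names.
voltdb create --force --replica \
    --deployment=deployment.xml --host=newsvr1
```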
Once the second cluster starts, apply the schema from the current database to the second cluster. When the schemas on the two databases match, replication begins.
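The schema can be applied with sqlcmd. Here, schema.sql is an assumed file containing the DDL exported from the current database:

```shell
# Load the existing database's DDL onto the new replica cluster
# so the schemas match and replication can begin.
sqlcmd --servers=newsvr1 < schema.sql
```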
Wait for replication to stabilize.
During replication, the original database will send a snapshot of the current content to the new replica, then send binary logs of all subsequent transactions. You want to wait until the snapshot is finished and the ongoing DR is processing normally before proceeding.
First monitor the DR statistics on the new cluster. The DR consumer state changes to "RECEIVE" once the snapshot is complete. You can check this in the Monitor tab of the VoltDB Management Center or from the command line by using sqlcmd to call the @Statistics system procedure, like so:
$ sqlcmd --servers=newsvr1
1> exec @Statistics drconsumer 0;
Once the new cluster reports the consumer state as "RECEIVE", you can monitor the rate of replication on the existing database cluster using the DR producer statistics. Again, you can view these statistics in the Monitor tab of the VoltDB Management Center or by calling @Statistics using sqlcmd:
$ sqlcmd --servers=oldsvr1
1> exec @Statistics drproducer 0;
What you are looking for on the producer side is low DR latency, ideally under a second, because the latency determines how long the cluster takes to quiesce when you pause it and, subsequently, how long the client applications are stalled waiting for the new cluster to be promoted. You determine the latency by subtracting the last ACKed timestamp from the last queued timestamp in the producer statistics; the difference between these values gives you the latency in microseconds. When the latency reaches a stable, low value, you are ready to proceed.
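As a worked example of that arithmetic, suppose the producer statistics report the following last-queued and last-ACKed timestamps (the values below are made up for illustration, in microseconds since the epoch):

```shell
# Hypothetical timestamp values copied from the drproducer statistics.
last_queued=1500000001250000
last_acked=1500000000900000

# The difference is the DR latency in microseconds.
latency_us=$(( last_queued - last_acked ))
echo "DR latency: ${latency_us} microseconds"   # 350000 us, i.e. 0.35 seconds
```

A latency on this order (well under a second) would indicate the replica has caught up and it is safe to proceed.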
Pause the current database.
The next step is to pause the current database. You do this using the voltadmin pause --wait command:
$ voltadmin pause --host=oldsvr1 --wait
The --wait flag tells voltadmin to wait until all DR and export queues are flushed to their downstream targets before returning control to the shell prompt. This guarantees that all transactions have reached the new replica cluster.
If DR or export are blocked for any reason, such as a network outage or the target server being unavailable, the voltadmin pause --wait command will continue to wait and periodically report on which queues are still busy. If the queues do not progress, you will want to fix the underlying problem before proceeding to ensure you do not lose any data.
Promote the new database.
Once the current database is fully paused, you can promote the new database, using the voltadmin promote command:
$ voltadmin promote --host=newsvr1
At this point, your database is up and running on the new VoltDB software version.
Redirect client applications to the new database.
To restore connectivity to your client applications, redirect them from the old cluster to the new cluster by creating connections to the new cluster servers newsvr1, newsvr2, and so on.
Shut down the original cluster.
At this point, you can shut down the old database cluster.
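For example, assuming the node oldsvr1 is still reachable:

```shell
# Shut down the old cluster now that clients point at the new one.
voltadmin shutdown --host=oldsvr1
```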
Verify proper operation of the database and client applications.
The last step is to verify that your applications are operating properly against the new VoltDB software. Use the VoltDB Management Center to monitor database transactions and performance and verify transactions are completing at the expected rate and volume.
Your upgrade is now complete. If, at any point, you decide there is an issue with your application or your database, it is possible to fall back to the previous version of VoltDB as long as you have not made any changes to the underlying database schema. The next section explains how to fall back when necessary.
In extreme cases, you may find there is an issue with your application and the latest version of VoltDB. Of course, you normally would discover this during testing prior to a production upgrade. However, if that is not the case and an incompatibility or other conflict is discovered after the upgrade is completed, it is possible to fall back to a previous version of VoltDB. The basic process for falling back is the following:
If any problems arise before Step #6 (redirecting the clients) is completed, simply shutdown the new replica and resume the old database using the voltadmin resume command:
$ voltadmin shutdown --host=newsvr1
$ voltadmin resume --host=oldsvr1
If issues are found after Step #6, the fall back procedure is essentially to repeat the upgrade procedure described in “The DR Upgrade Process”, except reversing the roles of the clusters and replicating the data from the new cluster to the old cluster. That is:
Update the deployment file on the new cluster to enable DR as a master by removing the <connection> element:

<dr id="4"/>
Shut down the original database and edit its deployment file to make it a replica of the new cluster:
<dr id="3">
    <connection source="newsvr1,newsvr2"/>
</dr>
Start the old cluster using the voltdb create --force --replica command.
Follow steps 3 through 8 of “The DR Upgrade Process”, reversing the roles of the new and old clusters.