Once you define the configuration of your cluster, you start a VoltDB database by starting the VoltDB server process on each node of the cluster. You start the server process by invoking VoltDB and specifying:
A startup action (see Section 6.5, “Stopping and Restarting a VoltDB Database” for details)
The location of the application catalog
The hostname or IP address of the host node in the cluster
The location of the deployment file
The host can be any node in the cluster and plays a special role during startup; it hosts the application catalog and manages the cluster initiation process. Once startup is complete, the host's special role ends and it becomes a peer of all the other nodes. It is important that all nodes in the cluster can resolve the hostname or IP address of the host node you specify.
For example, the following voltdb command starts the cluster with the create startup action, specifying the location of the catalog and the deployment files, and naming voltsvr1 as the host node:
$ voltdb create mycatalog.jar \
     --deployment=deployment.xml \
     --host=voltsvr1
You can also use the shortened forms of the argument flags:
$ voltdb create mycatalog.jar \
     -d deployment.xml \
     -H voltsvr1
If you are using the VoltDB Enterprise Edition, you must also provide a license file. The license is only required by the host node when starting the cluster. To simplify startup, VoltDB looks for the license as a file named license.xml in three locations, in the following order:
The current working directory
The directory where the VoltDB image files are installed (usually in the /voltdb subfolder of the installation directory)
The current user's home directory
If you store the license file in any of these locations, you do not have to identify it explicitly on the command line. Otherwise, you can use the -l flag to specify the license file location. For example:
$ voltdb create mycatalog.jar \
     -d deployment.xml \
     -H voltsvr1 \
     -l /usr/share/voltdb-license.xml
When you are developing an application (where your cluster consists of a single node using localhost), this one command is sufficient to start the database. However, when starting a cluster, you must:
Copy the runtime catalog to the host node.
Copy the deployment file to all nodes of the cluster.
Log in and start the server process using the preceding command on each node.
The deployment file must be identical on all nodes for the cluster to start.
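The copy-and-start steps above can be scripted from a single machine. The following sketch assumes a hypothetical three-node cluster (voltsvr1 through voltsvr3, with voltsvr1 as the host) and hypothetical file locations; by default it only prints the scp and ssh commands it would run, so you can review them before executing for real.

```shell
#!/bin/sh
# Sketch only: hostnames and file paths are hypothetical examples.
HOST=voltsvr1
NODES="voltsvr1 voltsvr2 voltsvr3"
RUN=${RUN-echo}    # dry run by default; invoke with RUN= (empty) to execute

# Copy the catalog to the host node and the deployment file to every node.
$RUN scp mycatalog.jar "$HOST:mycatalog.jar"
for node in $NODES; do
  $RUN scp deployment.xml "$node:deployment.xml"
done

# Start the server process on each node, naming the same host on every one.
for node in $NODES; do
  $RUN ssh "$node" "voltdb create mycatalog.jar -d deployment.xml -H $HOST"
done
```

Because the deployment file must be identical everywhere, copying it from one place in a loop like this also guards against the nodes drifting out of sync.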
Manually logging on to each node of the cluster every time you want to start the database can be tedious. There are several ways you can simplify the startup process:
Shared network drive — By creating a network drive and mounting it (using NFS) on all nodes of the cluster, you can distribute the runtime catalog and deployment file (and the VoltDB software) by copying the files once to a single location.
Remote access — When starting the database, you can specify the location of either the runtime catalog or the deployment file as a URL rather than a file path (for example, http://myserver.com/mycatalog.jar). This way you can publish the catalog and deployment file once to a web server and start all nodes of the cluster from those copies.
Remote shell scripts — Rather than manually logging on to each cluster node, you can use secure shell (ssh) to execute shell commands remotely. By creating an ssh script (with the appropriate permissions) you can copy the files and/or start the database on each node in the cluster from a single script.
VoltDB Enterprise Manager — The VoltDB Enterprise Edition includes a web-based management console, called the VoltDB Enterprise Manager, that helps you manage the configuration, initialization, and performance monitoring of VoltDB databases. The Enterprise Manager automates the startup process for you. See the VoltDB Management Guide for details.
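As an illustration of the remote access option above, if you have published the catalog and deployment file to a web server (myserver.com here is a hypothetical address), the same command can then be run unchanged on every node:

```shell
$ voltdb create http://myserver.com/mycatalog.jar \
     -d http://myserver.com/deployment.xml \
     -H voltsvr1
```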
When you start a VoltDB database, the VoltDB server process performs the following actions:
If you are starting the database on the node identified as the host node, it waits for initialization messages from the remaining nodes.
If you are starting the database on a non-host node, it sends an initialization message to the host indicating that it is ready.
Once all the nodes have sent initialization messages, the host sends out a message to the other nodes that the cluster is complete. The host then distributes the application catalog to all nodes.
At this point, the cluster is complete and the database is ready to receive requests from client applications. Several points to note:
Once the startup procedure is complete, the host's role is over and it becomes a peer like every other node in the cluster. It performs no further special functions.
The database is not operational until the correct number of nodes (as specified in the deployment file) have connected.
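For reference, the expected number of nodes is set by the hostcount attribute of the cluster element in the deployment file. A minimal deployment file for a three-node cluster might look like the following sketch (the sitesperhost and kfactor values are illustrative):

```xml
<?xml version="1.0"?>
<deployment>
   <!-- The cluster waits for hostcount nodes before becoming operational. -->
   <cluster hostcount="3" sitesperhost="2" kfactor="1" />
</deployment>
```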