The two major differences between creating a VoltDB database cluster in Kubernetes and starting a cluster using traditional servers are:
With Helm there is a single command (install) that performs both the initialization and the startup of the database.
You specify the database configuration with properties rather than as an XML file.
In fact, all of the configuration — including the configuration of the virtual servers (or pods), the server processes, and the database — is accomplished using Helm properties. The following sections provide examples of some of the most common configuration settings when using Kubernetes. Appendix A, VoltDB Helm Properties gives a full list of all of the properties that are available for customization.
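For example, assuming your configuration properties are collected in a file named myconfig.yaml and your license is in license.xml (both file names are illustrative), a single helm install command initializes and starts the cluster:

$ helm install mydb voltdb/voltdb \
   --values myconfig.yaml \
   --set-file cluster.config.licenseXMLFile=license.xml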
Many of the configuration options that are performed through hardware configuration, system commands or environment variables on traditional server platforms are now available through Helm properties. Most of these settings are listed in Section A.3, “Kubernetes Cluster Startup Options”.
Hardware settings, such as the number of processors and memory size, are defined as Kubernetes resource requests and limits through the Helm cluster.clusterSpec.resources property. Under resources, you can specify any of the YAML properties Kubernetes expects when configuring the containers within a pod. For example:
cluster:
  clusterSpec:
    resources:
      requests:
        cpu: 500m
        memory: 1000Mi
      limits:
        cpu: 500m
        memory: 1000Mi

System settings that control process limits normally defined through environment variables can be set
with the cluster.clusterSpec.env properties. For example, the following YAML increases the Java
maximum heap size and disables the collection of JVM statistics:
cluster:
  clusterSpec:
    env:
      VOLTDB_HEAPMAX: 3072
      VOLTDB_OPTS: -XX:+PerfDisableSharedMem

One system setting that is not configurable through Kubernetes or Helm is whether the base platform has Transparent Huge Pages (THP) enabled. This depends on the memory management settings of the actual hardware on which Kubernetes is hosted. Having THP enabled can cause problems for memory-intensive applications like VoltDB, and it is strongly recommended that THP be disabled before starting your cluster. (See the section on Transparent Huge Pages in the VoltDB Administrator's Guide for an explanation of why this is an issue.)
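If you administer the underlying hosts yourself, THP can usually be disabled through the standard Linux sysfs settings, as in the following sketch (your distribution may require a boot parameter or tuned profile instead):

$ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
$ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag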
If you are not managing the Kubernetes environment yourself or cannot get your provider to modify their environment, you will need to override VoltDB's warning about THP on startup by setting the cluster.clusterSpec.additionalStartArgs property to include the VoltDB start argument that disables the check for THP. For example:
cluster:
  clusterSpec:
    additionalStartArgs:
      - "--ignore=thp"

In addition to configuring the environment VoltDB runs in, there are many different characteristics of the database itself you can control. These include mapping network interfaces and ports, selecting and configuring database features, and identifying the database schema, class files, and security settings.
The network settings are defined through the cluster.serviceSpec properties, where you can choose which ports to expose and the type of networking service (cluster.serviceSpec.type) used to expose them. For example, the following YAML file disables exposure of the admin port and assigns the externalized client port to 31313:
cluster:
  serviceSpec:
    type: NodePort
    adminPortEnabled: false
    clientPortEnabled: true
    clientNodePort: 31313
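With a NodePort service, the client port is then reachable at the assigned port number on the external address of any Kubernetes node. For example, you could connect from outside the cluster with sqlcmd (the node address shown is only a placeholder):

$ sqlcmd --servers=10.20.30.40 --port=31313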
The majority of the database configuration options for VoltDB are traditionally defined in an XML configuration file. When using Kubernetes, these options are declared using YAML and Helm properties.

In general, the Helm properties follow the same structure as the XML configuration, beginning with cluster.config. So, for example, where the number of sites per host is defined in XML as:
<deployment>
   <cluster sitesperhost="{n}"/>
</deployment>

it is defined in Kubernetes as:
cluster:
  config:
    deployment:
      cluster:
        sitesperhost: {n}
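Because the YAML hierarchy maps directly onto dot-separated Helm property names, any of these settings can also be supplied on the helm command line with --set. For example, the following sketch assumes eight sites per host:

$ helm install mydb voltdb/voltdb \
   --values myconfig.yaml \
   --set cluster.config.deployment.cluster.sitesperhost=8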
The following sections give examples of defining common database configuration options using both XML and YAML. See Section A.6, “VoltDB Database Configuration Options” for a complete list of the Helm properties available for configuring the database.

Command logging provides durability of the database content across failures. You can control the level of
durability as well as the length of time required to recover the database by configuring the type of command logging and
size of the logs themselves. In Kubernetes this is done with the cluster.config.deployment.commandlog
properties. The following examples show the equivalent configuration in both XML and YAML:
XML Configuration File:

<commandlog enabled="true"
            synchronous="true"
            logsize="3072">
   <frequency time="300"
              transactions="1000"/>
</commandlog>

YAML Configuration File:

cluster:
  config:
    deployment:
      commandlog:
        enabled: true
        synchronous: true
        logsize: 3072
        frequency:
          time: 300
          transactions: 1000
Export simplifies the integration of the VoltDB database with external databases and systems. You use the export
configuration to define external "targets" the database can write to. In Kubernetes you define export targets using the
cluster.config.deployment.export.configurations property. Note that the
configurations property can accept multiple configuration definitions. In YAML, you specify a list by
prefixing each list element with a hyphen, even if there is only one element. The following examples show the equivalent
configuration in both XML and YAML for configuring a file export connector:
XML Configuration File:

<export>
   <configuration target="eventlog"
                  type="file">
      <property name="type">csv</property>
      <property name="nonce">eventlog</property>
   </configuration>
</export>

YAML Configuration File:

cluster:
  config:
    deployment:
      export:
        configurations:
          - target: eventlog
            type: file
            properties:
              type: csv
              nonce: eventlog
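If the database needs more than one export target, you simply add more entries to the list. The following sketch extends the previous example with a second, purely illustrative file target named alertlog:

cluster:
  config:
    deployment:
      export:
        configurations:
          - target: eventlog
            type: file
            properties:
              type: csv
              nonce: eventlog
          - target: alertlog
            type: file
            properties:
              type: csv
              nonce: alertlog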
There are a number of options for securing a VoltDB database, including basic usernames and passwords as well as industry-standard network solutions such as Kerberos and TLS/SSL. Basic security is enabled in the configuration with the cluster.config.deployment.security.enabled property. You must also use the cluster.config.deployment.users property and its children to define the actual usernames, passwords, and assigned roles. Again, the users property expects a list of sub-elements, so you must prefix each set of properties with a hyphen.
Finally, if you do enable basic security, you must also tell the VoltDB operator which account to use when
accessing the database. To do that, you define the cluster.config.auth properties, as shown below,
which must specify an account with the built-in administrator role. The following examples
show the equivalent configurations in both XML and YAML, including the assignment of an account to the VoltDB
Operator:
XML Configuration File:

<security enabled="true"/>
<users>
   <user name="admin"
         password="superman"
         roles="administrator"/>
   <user name="mitty"
         password="thurber"
         roles="user"/>
</users>

YAML Configuration File:

cluster:
  config:
    deployment:
      security:
        enabled: true
      users:
        - name: admin
          password: superman
          roles: administrator
        - name: mitty
          password: thurber
          roles: user
    auth:
      username: admin
      password: superman
Another important aspect of security is securing and authenticating the ports used to access the database. The most common way to do this is by enabling TLS/SSL to encrypt data and authenticate the servers using user-created certificates. The process for creating the private keystore and truststore in Java is described in the section on "Configuring TLS/SSL on the VoltDB Server" in the Using VoltDB guide. This process is the same whether you are running the cluster directly on servers or in Kubernetes.
The one difference when enabling TLS/SSL for the cluster in Kubernetes is that you must also configure the operator with an appropriate truststore, in PEM format. The easiest way to do this is to configure the operator using the same truststore and password you use for the cluster itself. First, you will need to convert the truststore to PEM format using the Java keytool:
keytool -export \
-alias my.key -rfc \
-file mytrust.pem \
-keystore mykey.jks \
-storepass topsecret \
-keypass topsecret

Once you have your keystore, truststore, and truststore in PEM format, you can configure the cluster and operator with the appropriate SSL properties. The following examples show the equivalent configurations in both XML and YAML (minus the actual truststore and keystore files).
XML Configuration File:

<ssl enabled="true"
     external="true"
     internal="true">
   <keystore path="mykey.jks"
             password="topsecret"/>
   <truststore path="mytrust.jks"
               password="topsecret"/>
</ssl>

YAML Configuration File:

cluster:
  config:
    deployment:
      ssl:
        enabled: true
        external: true
        internal: true
        keystore:
          password: topsecret
        truststore:
          password: topsecret
  clusterSpec:
    ssl:
      insecure: false
Using the preceding YAML file (calling it ssl.yaml), we can complete the SSL configuration by
specifying the truststore and keystore files on the helm command line with the
--set-file argument:
helm install mydb voltdb/voltdb \
--values myconfig.yaml \
--values ssl.yaml \
--set-file cluster.config.deployment.ssl.keystore.file=mykey.jks \
--set-file cluster.config.deployment.ssl.truststore.file=mytrust.jks \
--set-file cluster.clusterSpec.ssl.certificateFile=mytrust.pem

Three important notes concerning TLS/SSL configuration:
If you enable SSL for the cluster's external interface and ports and you also want to enable Prometheus metrics, you must provide an appropriate SSL truststore and password for the metrics agent. See Section 6.1, “Using Prometheus to Monitor VoltDB” for more information on configuring the Prometheus agent in Kubernetes.
If you do not require validation of the TLS certificate by the operator, you can avoid setting the truststore
PEM for the operator and, instead, set the cluster.clusterSpec.ssl.insecure property to
true.
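For example, the following YAML (or the equivalent --set cluster.clusterSpec.ssl.insecure=true on the command line) disables certificate validation by the operator:

cluster:
  clusterSpec:
    ssl:
      insecure: true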
If you enable SSL for the cluster, you must repeat the specification of the truststore and keystore files
every time you update the configuration. Using the --reuse-values argument on the helm
upgrade command is not sufficient.
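For example, a subsequent configuration change might look like the following sketch, where updated.yaml is a hypothetical file containing the properties being changed and the certificate files are the same ones used above:

$ helm upgrade mydb voltdb/voltdb --reuse-values \
   --values updated.yaml \
   --set-file cluster.config.deployment.ssl.keystore.file=mykey.jks \
   --set-file cluster.config.deployment.ssl.truststore.file=mytrust.jks \
   --set-file cluster.clusterSpec.ssl.certificateFile=mytrust.pem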
Finally, TLS/SSL certificates have an expiration date. It is important you replace the certificate before it
expires (if cluster.clusterSpec.ssl.insecure is false, which is the default). If not, the operator
will lose the ability to communicate with the cluster pods. See Section 5.3, “Updating TLS Security Certificates” for instructions on
updating the TLS/SSL certificates in Kubernetes.
VoltDB uses Log4J for logging messages while the database is running. The chapter on "Logging and Analyzing Activity in a VoltDB Database" in the VoltDB Administrator's Guide describes some of the ways you can customize the logging to meet your needs, including changing the logging level or adding appenders. Logging is also available in the Kubernetes environment and is configured using a Log4J XML configuration file. However, the default configuration, and how you set the configuration when starting or updating the database in Kubernetes, differs from what is described in the Administrator's Guide.
Before you attempt to customize the logging, you should familiarize yourself with the default settings. The easiest
way to do this is to extract a copy of the default configuration from the Docker image you will be using. The following
commands create a docker container without actually starting the image, extract the configuration file to a local file
(k8s-log4j.xml in the example), then delete the container.
$ ID=$(docker create voltdb/voltdb-enterprise)
$ docker cp ${ID}:/opt/voltdb/tools/kubernetes/server-log4j.xml k8s-log4j.xml
$ docker rm $ID
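Having extracted a local copy, you can edit it to adjust the logging behavior, for example by raising the level of an individual logger. A minimal sketch of such a change (the logger name is illustrative; use the names that appear in the default file):

<logger name="SNAPSHOT">
   <level value="DEBUG"/>
</logger>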
Once you have extracted the default configuration and made the changes you want, you are ready to specify your new configuration on the Helm command to start the database. You do this by setting the cluster.config.log4jcfgFile property. For example:
$ helm install mydb voltdb/voltdb \
   --values myconfig.yaml \
   --set cluster.clusterSpec.replicas=5 \
   --set-file cluster.config.licenseXMLFile=license.xml \
   --set-file cluster.config.log4jcfgFile=my-log4j.xml

Similarly, you can update the logging configuration on a running cluster by using the --set-file
argument on the Helm upgrade command:
$ helm upgrade mydb voltdb/voltdb --reuse-values \
--set-file cluster.config.log4jcfgFile=my-log4j.xml