6.2. Choosing How to Establish a Network Mesh


VoltDB Home » Documentation » VoltDB Kubernetes Administrator's Guide


For XDCR to work, each cluster must be able to identify and connect to the nodes of the other cluster. Establishing the XDCR relationship occurs in two distinct phases:

  1. Network Discovery — First, the clusters connect over the replication port (port 5555, by default). This initial connection confirms that the configurations are compatible, that the schemas of the two clusters match for all DR tables, and that existing data is present in, at most, one of the clusters.

  2. Replication — Once the clusters agree on the schema, each cluster sends a list of node IP addresses and ports to the other cluster and multiple connections are made, node-to-node, between the two clusters. If there is existing data, a synchronization snapshot is sent between the clusters and then replication begins.

For the network discovery phase, each cluster must have a clearly identifiable network address that the other cluster can specify as part of its XDCR configuration. For the replication phase, each node in the cluster must have an externally reachable network address that the cluster can advertise during the discovery phase and that the other cluster then uses to make the connections needed for replication.
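As a concrete sketch, the clearly identifiable address for the network discovery phase is what you assign as the connection source in the Helm properties under cluster.config.deployment.dr. In the following fragment the host name is a placeholder, not a real endpoint; you would substitute whatever persistent external address (such as a load balancer) your environment provides:

```yaml
# Sketch of the discovery configuration for one cluster. The host
# name below is a placeholder; replace it with the other cluster's
# externally reachable replication endpoint.
cluster:
  config:
    deployment:
      dr:
        id: 1          # this cluster's unique DR ID
        role: xdcr
        connection:
          source: "paris.example.org:5555"   # remote cluster, default replication port
```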

Since, by default, the ports on a Kubernetes pod are not externally accessible, you must use additional services to make the VoltDB nodes accessible. Three such options are:

  • Kubernetes Load Balancers — One way to establish a network mesh is to use the built-in load balancer service within Kubernetes. Load balancers provide a defined, persistent external interface for internal pods. The advantage of using load balancers is that they are a native component of Kubernetes and are easy to configure. The disadvantage is that, in a hosted environment, load balancers tend to be far more expensive than regular pods; unless you are managing your own infrastructure, creating a separate load balancer for each node in the cluster to handle the replication phase can be prohibitively expensive.

  • Kubernetes Node Ports — An alternative to load balancers is using node ports. Node ports, like load balancers, are native services of Kubernetes and provide an externally accessible interface for the internal pods. However, unlike load balancers, whose addresses are persistent over time, node ports take on the addresses of the underlying Kubernetes nodes and therefore can change as those nodes are recycled. As a result, node ports are not appropriate for the Network Discovery phase. They can, however, be a cheaper alternative to load balancers for the replication phase, since the cluster can advertise the current set of node port addresses as pods come and go.

  • Network Mesh Services — These additional services, such as Consul, create a network mesh between Kubernetes clusters and regions. They essentially act as a virtual private network (VPN) within Kubernetes so the VoltDB clusters can interoperate as if they were local to each other. The advantage of using network mesh services is that configuring the VoltDB clusters is simpler, since all of the network topology is handled separately. The drawback is that this requires yet another service to set up, and the configuration of these services can be quite complex, requiring a deep understanding of — and access to — the networking layer in Kubernetes.
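To illustrate how the choice of mesh surfaces in the configuration, the fragment below sketches a serviceSpec that exposes the cluster through a Kubernetes LoadBalancer service with a provider-specific annotation. The annotation key is an AWS example, and the exact sub-properties of cluster.serviceSpec vary by chart version, so treat this as a sketch and check the chart's values reference:

```yaml
# Sketch only: expose the cluster externally via a LoadBalancer
# service. Switching type to NodePort is the cheaper alternative
# described above for the replication phase.
cluster:
  serviceSpec:
    type: LoadBalancer
    annotations:
      # AWS example annotation requesting an internal load balancer
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```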

Which networking solution you use is up to you. You can even mix and match the alternatives — using, for example, a single load balancer per cluster for the Network Discovery phase and individual node ports for each VoltDB cluster node during the replication phase.

You define the type of network mesh to use and how to connect using YAML properties when you configure your clusters. In general, the Helm properties starting with cluster.config.deployment.dr, such as id and role, are generic properties common to all XDCR implementations. Helm properties starting with cluster.serviceSpec define the type of network mesh to use and annotations specific to the network type.
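For example, a pair of XDCR clusters might be deployed with values files along these lines (a sketch only, with placeholder host names): each cluster gets a unique id, the same xdcr role, and a connection source pointing at the other cluster.

```yaml
# values-east.yaml (sketch): unique DR ID, shared xdcr role, and a
# connection source naming the other cluster. Host names are placeholders.
cluster:
  config:
    deployment:
      dr:
        id: 1
        role: xdcr
        connection:
          source: "west.example.org:5555"
---
# values-west.yaml (sketch): the mirror-image configuration.
cluster:
  config:
    deployment:
      dr:
        id: 2
        role: xdcr
        connection:
          source: "east.example.org:5555"
```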

The following sections explain how to configure XDCR using Helm properties, with individual sections discussing the differences necessary for various networking options, including: