Kubernetes load balancers are an alternative for making VoltDB clusters accessible outside the Kubernetes cluster or region they are in. In this case, you are not using the load balancers for their traditional role of balancing the load between multiple pods. Instead, the load balancers are used solely to provide externally accessible IP addresses.
There are two approaches to using load balancers. The first approach is to assign a load balancer for each node of the cluster. Since the nodes are externally reachable through persistent IP addresses on their corresponding load balancer, the load balancers can be used for both the network discovery and replication phases. The second approach is to use only one load balancer for the entire cluster to provide network discovery, and use virtual network peering, available from your hosting provider, for replication.
Many hosting platforms, such as Google Cloud or AWS, provide proprietary mechanisms for performing network peering between regions or data centers. Each of these solutions has its own unique setup and configuration, separate from the configuration of VoltDB and the VoltDB Operator. As a result, using a network peering service is not as simple as using load balancers for replication. However, peering services can be significantly more cost-effective when paired with a single load balancer for network discovery.
There is also the choice of assigning the IP addresses for the load balancers dynamically, or having them selected from a range of static addresses. Dynamic assignment is simpler, since you do not need to arrange with your hosting provider for pre-assigned IPs or hostnames. However, dynamic addresses also mean you do not know what the addresses are until the cluster starts. This means the remote XDCR cluster cannot assign the source property until after the cluster starts with its associated load balancers and you can determine the IP addresses assigned to them.
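For example, once the cluster and its load balancers are running, you can read the assigned addresses from the EXTERNAL-IP column of the services Kubernetes created for the cluster. The namespace in this sketch is a placeholder for wherever your release is installed:

kubectl get services --namespace mydb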
First you must assign the DR id and role as Helm properties. If the remote cluster is using static addresses, you can specify one of its nodes as the source, as in the following example. If you are using dynamic load balancers, leave the source property blank and use the helm upgrade --set command once the clusters are running to assign a resulting node address for the remote cluster.
cluster:
  config:
    deployment:
      dr:
        id: 1
        role: xdcr
        connection:
          enabled: true
          source: "chicago-dc-2" # Remote cluster
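For a dynamically addressed remote cluster, the later assignment might look like the following. This is a sketch: the release name chicago and the chart name voltdb/voltdb are placeholders, and 12.34.56.78 stands in for whichever external address you determined for the remote cluster's load balancer:

helm upgrade chicago voltdb/voltdb --reuse-values \
  --set cluster.config.deployment.dr.connection.source="12.34.56.78"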
Then in the cluster.serviceSpec section, you enable perpod by setting its type to LoadBalancer. You will also want to set the dr.enabled property to true so the per-pod load balancers are used for network discovery as well as replication.
For dynamically assigned addresses, set the publicIPFromService property to true:
cluster:
  serviceSpec:
    perpod:
      type: LoadBalancer
      publicIPFromService: true
    dr:
      enabled: true
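These settings can be applied when the cluster is first created; for instance, assuming the chart is available as voltdb/voltdb and the properties above are saved in a values file (both names here are placeholders):

helm install chicago voltdb/voltdb --values dr-values.yaml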
For static IP addresses, use the staticIPs property to specify the addresses to assign when creating the load balancers and, again, set dr.enabled to true.
cluster:
  serviceSpec:
    perpod:
      enabled: true
      type: LoadBalancer
      staticIPs:
        - 12.34.56.78
        - 12.34.56.79
        - 12.34.56.80
    dr:
      enabled: true
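Note that static addresses must be arranged with your hosting provider before they can be listed in staticIPs. As a provider-specific illustration only, on Google Cloud you might reserve regional addresses like this (the names and region are placeholders):

gcloud compute addresses create voltdb-dr-0 voltdb-dr-1 voltdb-dr-2 --region=us-central1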
To reduce the number of resources needed to connect XDCR clusters in different regions, you can use a single load balancer for network discovery and virtual network peering services from your hosting provider to connect the two clusters during replication. How you set up and configure your network peering is specific to each provider; see your provider's documentation for additional information. This section describes how to set up a single Kubernetes load balancer for network discovery once you have your network peering established.
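As an illustration only, on Google Cloud a VPC peering between the two clusters' networks might be created with a command like the following, where every name is a placeholder (AWS offers the equivalent through VPC peering connections):

gcloud compute networks peerings create voltdb-peering \
  --network=network-dc-1 \
  --peer-project=project-dc-2 \
  --peer-network=network-dc-2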
First you must assign the DR id and role as Helm properties and, if known in advance, the source for the remote cluster:
cluster:
  config:
    deployment:
      dr:
        id: 1
        role: xdcr
        connection:
          enabled: true
          source: "chicago-dc-2" # Remote cluster
Then in the cluster.serviceSpec section, you enable the dr service (rather than perpod) and set its type to LoadBalancer. You may also need to provide additional annotations that help configure the service. These annotations are specific to the host environment you are using. So, for example, the following configuration provides annotations for AWS and Google Cloud:
cluster:
  serviceSpec:
    dr:
      enabled: true
      type: LoadBalancer
      annotations:
        # Google Cloud
        networking.gke.io/load-balancer-type: "Internal"
        networking.gke.io/internal-load-balancer-allow-global-access: "true"
        # AWS
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
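Once the chart is installed, you can confirm that the dr service received an address from its internal load balancer by listing the services in the cluster's namespace and checking the EXTERNAL-IP column (the namespace here is again a placeholder):

kubectl get services --namespace mydb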