Once you have prepared and loaded your pipeline template, you are ready to wrap up the final details before running the pipeline. This includes specifying the runtime value for any placeholders you use in the pipeline definition. For example, if you are using a Kafka topic as a source and Kafka or Volt Active Data as the sink, you will need to identify the servers, topics, and/or table names to use.
In the preceding code examples, the pipeline definition used the placeholders voltdb.host, voltdb.port, and voltdb.procedure for Volt Active Data assets. To fill in these placeholders, you edit the YAML properties file for your pipeline. If you used the quick start sample as a template, you can rename one of the YAML files (in src/main/resources/) with a meaningful name and edit it to fill in the appropriate values for the placeholders. You put these placeholder assignments in the streaming.javaProperties property. For example:
streaming:
  javaProperties: >
    -Dvoltdb.host=volt.acme.org
    -Dvoltdb.port=21212
    -Dvoltdb.procedure=MYDATA.insert
You use the same process for assigning values to any Kafka or application-specific placeholders your pipeline definition uses. For example:
streaming:
  javaProperties: >
    -Dvoltdb.host=volt.acme.org
    -Dvoltdb.port=21212
    -Dvoltdb.procedure=MYDATA.insert
    -Dkafka.bootstrap.servers=kafka.acme.org
    -Dkafka.topic=mydata
    -Dkafka.consumer.group=42
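For a one-off change, you can also override the property on the helm command line rather than editing the file, since streaming.javaProperties is an ordinary chart value. This is a sketch, assuming the same placeholder names as above:

$ helm install mydatapipe voltdb/voltsp \
   --set streaming.javaProperties="-Dvoltdb.host=volt.acme.org -Dvoltdb.port=21212 -Dvoltdb.procedure=MYDATA.insert"

Editing the YAML file is usually preferable, though, because it keeps all of the pipeline's runtime values in one place under version control.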
Now you are ready to run your pipeline. Use the helm install command to start the pipeline, specifying voltdb/voltsp as the chart and your edited YAML as the properties file. (If this is your first time running a pipeline, it is a good idea to issue a helm repo update command first to make sure you have access to the latest charts.) You will also need to include your Volt license file:
$ export MY_DOCKER_REPO=johnqpublic/projects
$ export MY_VOLT_LICENSE=$HOME/licenses/volt-license.xml
$ helm install mydatapipe voltdb/voltsp \
   --set-file streaming.licenseXMLFile=${MY_VOLT_LICENSE} \
   --set image.repository=${MY_DOCKER_REPO} \
   --set image.tag=mypipe--latest \
   --values test/src/main/resources/mydatapipe.yaml
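Before inspecting individual pods, you can confirm that Helm deployed the release itself successfully. For example, using the release name from the install command above:

$ helm list
$ helm status mydatapipe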
Once you start the pipeline, you can use kubectl get pods to verify that the processes have started. If there are any issues, you can use kubectl logs {pod-id} to get details on what is happening.
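For example, the following lists the pods and then fetches the log for one of them. The pod name shown here is illustrative; substitute the actual name reported by kubectl get pods:

$ kubectl get pods
$ kubectl logs mydatapipe-0    # replace with the actual pod name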