
Profiling and Troubleshooting on k8s

Scraping metrics with Prometheus

VoltSp ships with a Management Console that can also be installed using a standalone helm chart: helm install stream-management-console voltdb/management-console -n NAMESPACE. This setup makes the following assumptions:

- all pods that have to be scraped are in the same namespace
- Prometheus will scrape pods that have a pod monitor and the label app.kubernetes.io/name=volt-streams
- Prometheus will scrape pods that have a service monitor and the label app.kubernetes.io/monitor-by=volt-streams
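
If the Prometheus Operator CRDs are in use, a pod monitor carrying that label might look roughly like the following sketch; the resource name and the metrics port name are assumptions, not values taken from the chart:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: volt-streams-pods             # assumption: any name works
  labels:
    app.kubernetes.io/name: volt-streams
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: volt-streams
  podMetricsEndpoints:
    - port: metrics                   # assumption: the name of the pod's metrics port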

If you need Prometheus to scrape pods from other namespaces within the same cluster, you have to specify custom scrape rules. Check the scrape_configs section of the Kubernetes ConfigMap named prometheus-server-conf.
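
As a rough illustration only, a custom scrape job targeting pods in another namespace could look like the sketch below; the job name, the namespace, and the keep rule are assumptions and should be adapted to your setup:

scrape_configs:
  - job_name: volt-streams-other-namespace
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - OTHER-NAMESPACE          # assumption: the namespace to scrape
    relabel_configs:
      # keep only pods carrying the volt-streams label mentioned above
      - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
        action: keep
        regex: volt-streams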

Enabling profiling

VoltSp depends on the Pyroscope profiler running within the same cluster. Grafana can display various details of the collected profiling data.

To enable Pyroscope, make sure to set

pyroscope:
  enabled: true
when installing via helm install stream-management-console voltdb/management-console -n NAMESPACE. To reach Grafana, run kubectl -n NAMESPACE get svc and open the public IP of the Grafana service, then check Connections -> Data sources -> grafana-pyroscope-datasource. It should be set to http://stream-management-console-pyroscope:4040, because you installed the Management Console under the stream-management-console release name.
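
If the Grafana service is not exposed with a public IP, a local port-forward is an alternative; the service name and port below are assumptions based on the stream-management-console release name and should be taken from the kubectl get svc output:

kubectl -n NAMESPACE get svc
kubectl -n NAMESPACE port-forward svc/stream-management-console-grafana 3000:80
# then open http://localhost:3000 in a browser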

Now, alter the installation properties of the pod that you want to profile by adding:

monitoring:
  prometheus:
    enabled: true

  profiler:
    enabled: true
    pyroscopeUrl: http://stream-management-console-pyroscope.NAMESPACE.svc.cluster.local:4040
    # one of ALLOC, CTIMER, WALL
    event: "ALLOC"
    extraArguments:

Now the pod will send profiling data back to the server, and it will be available in the Grafana Explore tab. Make sure to choose the right data source.

Adding useful binaries to an image

The VoltSp base image is built on top of registry.access.redhat.com/ubi9-minimal:9.6, which is very limited and is missing most useful tools. The best practice here is to create your own image that extends Volt's, for example:

# deps stage: a full UBI image used only to copy tools and their shared libraries from
FROM registry.access.redhat.com/ubi9:9.6 AS deps
RUN dnf install -y procps-ng tar && dnf clean all

FROM voltdb/volt-streams-dev:LATEST-VERSION

RUN mkdir -p /volt-apps/
COPY target/lib/* /volt-apps/
COPY target/app-jar.jar /volt-apps/

COPY LOCAL-PATH-TO/jattach /volt-apps/jattach
COPY --from=deps /usr/bin/ps /usr/bin/
COPY --from=deps /usr/bin/tar /usr/bin/
COPY --from=deps /usr/lib64/libprocps.so* /usr/lib64/
COPY --from=deps /usr/lib64/libsystemd.so* /usr/lib64/
COPY --from=deps /usr/lib64/libgcrypt.so* /usr/lib64/

jattach is useful for inspecting allocations and taking a heap dump; it can be downloaded from https://github.com/jattach/jattach/releases/download/v2.2/jattach-linux-x64.tgz
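
For example, it can be fetched and unpacked next to the Dockerfile before the image build; this assumes the archive contains the jattach binary at its top level, and LOCAL-PATH-TO is the directory referenced by the COPY instruction above:

curl -L -o /tmp/jattach.tgz https://github.com/jattach/jattach/releases/download/v2.2/jattach-linux-x64.tgz
tar -xzf /tmp/jattach.tgz -C LOCAL-PATH-TO/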

Run commands on the pod

Run kubectl -n NAMESPACE get pod to list all pods within the namespace, then open a shell in the pod with kubectl -n NAMESPACE exec -it pod/POD_NAME -- /bin/bash. Find the PID of the Java process (the files in /tmp/hsperfdata_root/ are named after the JVM PIDs) and take a heap dump:

bash-5.1# ls /tmp/hsperfdata_root/
33
bash-5.1# /volt-apps/jattach 33 dumpheap /tmp/heap.bin

On your local host, run kubectl cp -n stream POD_NAME:/tmp/heap.bin /tmp/heap.bin

More about jattach: https://docs.datahub.com/docs/how/jattach-guide

Attaching to a JVM on a pod

To do so, change the JVM options:

JMX_OPTS="-Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.port=9010
    -Dcom.sun.management.jmxremote.rmi.port=9010
    -Dcom.sun.management.jmxremote.local.only=false
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false
    -Djava.rmi.server.hostname=localhost"

helm install consumer voltdb/volt-stream-chart --wait -n $NS_STM -f yaml/consumer.yaml \
--set replicaCount=1 \
--set image.tag="..." \
--set podEnv.JAVA_OPTS="-XX:InitialRAMPercentage=80.0 -XX:MinRAMPercentage=80.0 -XX:MaxRAMPercentage=80.0 --add-opens java.base/sun.nio.ch=ALL-UNNAMED -XX:+AlwaysPreTouch $JMX_OPTS" \
...

After the consumer is up, forward the port to your local machine with kubectl -n $NS_STM port-forward POD_NAME 9010:9010 and run VisualVM, then connect to the remote process via File -> Add JMX Connection and set service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi. Also make sure that you have the right plugins installed, for example Extension, JConsole, Threads Inspector.