Running NVIDIA DeepStream 6.2 on a local K3s node — Part I

Varun · 4 min read · Feb 19, 2024

Hey folks, this guide will help you set up Kubernetes (K3s), walk through a simple YAML file, and run NVIDIA DeepStream’s sample application on the GPU inside a K3s pod. There’s also a section to help you understand how ConfigMaps work in the cluster.

Set up K3s and install the NVIDIA device plugin

curl -sfL https://get.k3s.io | sh -
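If the install succeeded, the k3s service should now be active. On systemd-based distros you can confirm with:

sudo systemctl status k3s --no-pager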

kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0/nvidia-device-plugin.yml
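The device plugin runs as a DaemonSet in the kube-system namespace. A quick way to confirm it came up (the label below matches the upstream v0.13.0 manifest; adjust it if your version differs):

kubectl get pods -n kube-system -l name=nvidia-device-plugin-ds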

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: gpu-operator
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nvidiagpu
  namespace: gpu-operator
spec:
  chart: gpu-operator
  repo: https://helm.ngc.nvidia.com/nvidia
EOF

service k3s restart
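The GPU operator can take a few minutes to roll out its pods; you can watch them come up with:

kubectl get pods -n gpu-operator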

Check that the GPU is advertised as a node resource (requires jq)

sudo kubectl get node -o json | jq '.items[0].status.capacity'
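If everything is wired up correctly, the capacity listing should include an nvidia.com/gpu entry. The values below are illustrative, but the shape should match:

{
  "cpu": "8",
  "ephemeral-storage": "479610864Ki",
  "memory": "16318208Ki",
  "nvidia.com/gpu": "1",
  "pods": "110"
}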

Create a directory to store your YAML files and other resources (good practice)

mkdir -p ~/Documents/DS_K3s

Create a pod manifest; let’s call it ds-demo.yaml. (Despite the deployment-style name, the kind here is Pod, which keeps the demo simple.)

apiVersion: v1
kind: Pod
metadata:
  name: demo-ds-deployment
  labels:
    app: demo-app
spec:
  hostNetwork: true
  restartPolicy: Always
  runtimeClassName: nvidia
  containers:
  - name: demo-stream
    image: nvcr.io/nvidia/deepstream:6.2-samples
    imagePullPolicy: IfNotPresent
    tty: true
    securityContext:
      privileged: true
    env:
    - name: DISPLAY
      value: "$DISPLAY" # NOTE: Kubernetes does not expand $DISPLAY; hardcode your display (e.g. ":0") if you need X11 output
    - name: NVIDIA_VISIBLE_DEVICES
      value: all
    - name: NVIDIA_DRIVER_CAPABILITIES
      value: all
    resources:
      limits:
        nvidia.com/gpu: 1
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 3600; done;" ]
    volumeMounts:
    - mountPath: /tmp/.X11-unix/
      name: x11
    - mountPath: /opt/demo.txt
      subPath: config.txt
      name: demo-configmap
  # Volume definitions backing the mounts above: the X11 socket from the
  # host, and the ConfigMap we create further down in this guide.
  volumes:
  - name: x11
    hostPath:
      path: /tmp/.X11-unix/
  - name: demo-configmap
    configMap:
      name: demo-configmap
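Before applying it, you can sanity-check the manifest client-side (this validates structure only, not cluster state):

kubectl apply -f ds-demo.yaml --dry-run=client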

Check status of your k3s node

kubectl get nodes

Create a ConfigMap from a file; it will be mounted into the container. For the sake of this guide, I’ll create a demo.txt file, which is nothing but a DeepStream 6.2 sample config that we’ll use to run our pod later.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=4
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink/nv3dsink (Jetson only) 3=File
type=1
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=400000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
buffer-pool-size=4
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tracker]
enable=1
# For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=config_tracker_IOU.yml
# ll-config-file=config_tracker_NvSORT.yml
ll-config-file=config_tracker_NvDCF_perf.yml
# ll-config-file=config_tracker_NvDCF_accuracy.yml
# ll-config-file=config_tracker_NvDeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[secondary-gie0]
enable=1
model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
gpu-id=0
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_vehicletypes.txt

[secondary-gie1]
enable=1
model-engine-file=../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
batch-size=16
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carcolor.txt

[secondary-gie2]
enable=1
model-engine-file=../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
batch-size=16
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carmake.txt

[tests]
file-loop=0

Save the config above as demo.txt, then create the ConfigMap from it (adjust the path to wherever you saved the file):

kubectl create configmap demo-configmap --from-file=config.txt="$HOME/Documents/DS_K3s/demo.txt"
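You can confirm the ConfigMap holds the file with:

kubectl describe configmap demo-configmap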

Create a pod

kubectl apply -f ds-demo.yaml

Check status of your pod

kubectl get pods
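The first pull of the DeepStream image is several gigabytes, so it can take a while. Once it’s done, the pod should reach Running, with output along these lines:

NAME                 READY   STATUS    RESTARTS   AGE
demo-ds-deployment   1/1     Running   0          2m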

NOTE: If your pod is stuck in CrashLoopBackOff or any other error state, get more information using the following command

kubectl describe pod <pod_name>

Exec into the pod to verify the mounted ConfigMap file

kubectl exec -it demo-ds-deployment -- bash
cat /opt/demo.txt

To edit the ConfigMap, you can use the following command, which opens it in your default text editor

kubectl edit cm demo-configmap
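One caveat: because the pod mounts the file via subPath, edits to the ConfigMap will not propagate into the running container (Kubernetes does not refresh subPath mounts). To pick up changes, recreate the pod:

kubectl delete pod demo-ds-deployment
kubectl apply -f ds-demo.yaml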

Run the sample app

kubectl exec -ti demo-ds-deployment -- bash -c "cd /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps && deepstream-app -c /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt"

You can also run the sample app using the mounted demo.txt config file. One caveat: relative paths inside the config (e.g. uri=file://../../streams/sample_1080p_h264.mp4) are resolved relative to the config file’s location, so with the config at /opt/demo.txt you may need to switch them to absolute paths such as file:///opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_1080p_h264.mp4. The changed command would be as follows

kubectl exec -ti demo-ds-deployment -- bash -c "cd /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps && deepstream-app -c /opt/demo.txt"
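Since sink0 is a fakesink, there’s no video window, but with enable-perf-measurement=1 the app prints periodic per-source FPS lines. Illustrative output (actual numbers depend on your GPU):

**PERF:  FPS 0 (Avg)	FPS 1 (Avg)	FPS 2 (Avg)	FPS 3 (Avg)
**PERF:  30.01 (30.01)	30.00 (30.00)	30.01 (30.01)	30.00 (30.00)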

That’s all for now! Stay tuned for Part II.
