OpenTalk is an open-source video conferencing platform developed in Germany. This guide explains step by step how to deploy OpenTalk on a Kubernetes cluster. The tutorial assumes that a Keycloak instance is already running (e.g. sso.foo.com).
Prerequisites
- Kubernetes Cluster
- nginx-ingress controller
- cert-manager with Let’s Encrypt
- Ceph RBD storage (or any other StorageClass)
- Keycloak instance (e.g. sso.foo.com)
- A domain (e.g. foo.com)
Architecture Overview
The OpenTalk stack consists of the following services:
| Service | Description |
|---|---|
| PostgreSQL | Database for the controller |
| Redis | Cache |
| RabbitMQ | Message broker between controller and Janus |
| MinIO | Object storage for recordings and uploads |
| Janus Gateway | WebRTC media server |
| Controller | OpenTalk backend API |
| Web Frontend | OpenTalk React frontend |
DNS Records
You need two DNS records pointing to your cluster:
- opentalk.foo.com → Frontend
- controller.opentalk.foo.com → Backend API
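Before continuing, you can verify both records, e.g. with dig (hostnames shown are the example domain):
dig +short opentalk.foo.com
dig +short controller.opentalk.foo.com
Both should return the public IP of your ingress.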
Firewall
Janus requires UDP ports for WebRTC media streams. I pin Janus to a dedicated worker node and open the UDP range there. For ufw, use:
sudo ufw allow 20000:40000/udp
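You can confirm the rule is active with:
sudo ufw status | grep 20000:40000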
Keycloak Setup
Before deploying OpenTalk, configure Keycloak:
- Create a new Realm: opentalk
- Create a confidential Client for the controller:
  - Client ID: opentalk
  - Client authentication: ON
  - Valid redirect URIs: https://opentalk.foo.com/*
  - Copy the Client Secret from the Credentials tab
- Create a public Client for the frontend:
  - Client ID: Frontend
  - Client authentication: OFF
  - Valid redirect URIs: https://opentalk.foo.com/auth/callback
  - Web origins: https://opentalk.foo.com
- Create at least one test user under Users
Note: Modern Keycloak versions (17+) no longer use the /auth path prefix. The correct base URL is https://sso.foo.com (without /auth).
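If you prefer scripting over clicking through the admin console, the same setup can be sketched with Keycloak's kcadm.sh CLI. This is a sketch assuming a standard Keycloak 17+ installation; the CLI path and the <ADMIN-PASSWORD> placeholder are assumptions:
# Log in to the master realm first
/opt/keycloak/bin/kcadm.sh config credentials --server https://sso.foo.com \
  --realm master --user admin --password <ADMIN-PASSWORD>
# Create the realm
/opt/keycloak/bin/kcadm.sh create realms -s realm=opentalk -s enabled=true
# Confidential client for the controller (copy the generated secret afterwards)
/opt/keycloak/bin/kcadm.sh create clients -r opentalk \
  -s clientId=opentalk -s publicClient=false \
  -s 'redirectUris=["https://opentalk.foo.com/*"]'
# Public client for the frontend
/opt/keycloak/bin/kcadm.sh create clients -r opentalk \
  -s clientId=Frontend -s publicClient=true \
  -s 'redirectUris=["https://opentalk.foo.com/auth/callback"]' \
  -s 'webOrigins=["https://opentalk.foo.com"]'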
Deployment
All YAML files go into apps/opentalk/:
opentalk/
├── 000-namespace.yaml
├── 010-redis.yaml
├── 015-postgres.yaml
├── 020-rabbitmq.yaml
├── 030-minio.yaml
├── 040-janus.yaml
├── 050-controller.yaml
├── 060-web-frontend.yaml
└── README.md
Apply everything with:
kubectl apply -f apps/opentalk/
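Then watch the pods come up:
kubectl -n opentalk get pods -w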
000-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: opentalk
015-postgres.yaml
Replace <CEPH-CLUSTER-ID> and <POSTGRES-PASSWORD> with your values. Create the RBD image first:
rbd create kubernetes/opentalk-postgres --size 5120
apiVersion: v1
kind: PersistentVolume
metadata:
name: opentalk-postgres
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 5Gi
csi:
driver: rbd.csi.ceph.com
fsType: ext4
nodeStageSecretRef:
name: csi-rbd-secret-coba
namespace: ceph-system
volumeAttributes:
clusterID: "<CEPH-CLUSTER-ID>"
pool: "kubernetes"
staticVolume: "true"
imageFeatures: "layering"
volumeHandle: "opentalk-postgres"
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: opentalk-postgres
namespace: opentalk
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
volumeMode: Filesystem
volumeName: opentalk-postgres
---
apiVersion: v1
kind: Secret
metadata:
name: opentalk-postgres-secret
namespace: opentalk
type: Opaque
stringData:
POSTGRES_DB: opentalk
POSTGRES_USER: opentalk
POSTGRES_PASSWORD: "<POSTGRES-PASSWORD>"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
namespace: opentalk
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:15-alpine
envFrom:
- secretRef:
name: opentalk-postgres-secret
ports:
- containerPort: 5432
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
volumes:
- name: data
persistentVolumeClaim:
claimName: opentalk-postgres
---
apiVersion: v1
kind: Service
metadata:
name: postgres
namespace: opentalk
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
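Once the pod is running, a quick connectivity check (this relies on the official image's trust authentication for local connections):
kubectl -n opentalk exec deploy/postgres -- psql -U opentalk -d opentalk -c 'SELECT 1'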
010-redis.yaml
Redis is used as a cache only — no persistent storage needed.
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: opentalk
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7-alpine
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: opentalk
spec:
selector:
app: redis
ports:
- port: 6379
targetPort: 6379
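A quick sanity check once the pod is up:
kubectl -n opentalk exec deploy/redis -- redis-cli ping
The expected answer is PONG.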
020-rabbitmq.yaml
Replace <RABBITMQ-PASSWORD> with your value. Important: RabbitMQ does not need a persistent volume. The exchange state is ephemeral and is recreated on each start.
apiVersion: v1
kind: Secret
metadata:
name: opentalk-rabbitmq-secret
namespace: opentalk
type: Opaque
stringData:
RABBITMQ_DEFAULT_USER: opentalk
RABBITMQ_DEFAULT_PASS: "<RABBITMQ-PASSWORD>"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rabbitmq
namespace: opentalk
spec:
replicas: 1
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
containers:
- name: rabbitmq
image: rabbitmq:3.12-alpine
envFrom:
- secretRef:
name: opentalk-rabbitmq-secret
ports:
- containerPort: 5672
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
namespace: opentalk
spec:
selector:
app: rabbitmq
ports:
- port: 5672
targetPort: 5672
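Later, once Janus and the controller have connected, you can verify the exchange types, which matters for the JANUS_EXCHANGE_TYPE pitfall described below:
kubectl -n opentalk exec deploy/rabbitmq -- rabbitmqctl list_exchanges name type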
030-minio.yaml
Replace <CEPH-CLUSTER-ID> and <MINIO-PASSWORD> with your values. Create the RBD image first:
rbd create kubernetes/opentalk-minio --size 10240
After the first deployment, create the bucket manually (see First Time Setup below).
apiVersion: v1
kind: PersistentVolume
metadata:
name: opentalk-minio
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
csi:
driver: rbd.csi.ceph.com
fsType: ext4
nodeStageSecretRef:
name: csi-rbd-secret-coba
namespace: ceph-system
volumeAttributes:
clusterID: "<CEPH-CLUSTER-ID>"
pool: "kubernetes"
staticVolume: "true"
imageFeatures: "layering"
volumeHandle: "opentalk-minio"
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: opentalk-minio
namespace: opentalk
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
volumeMode: Filesystem
volumeName: opentalk-minio
---
apiVersion: v1
kind: Secret
metadata:
name: opentalk-minio-secret
namespace: opentalk
type: Opaque
stringData:
MINIO_ROOT_USER: opentalk
MINIO_ROOT_PASSWORD: "<MINIO-PASSWORD>"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: minio
namespace: opentalk
spec:
replicas: 1
selector:
matchLabels:
app: minio
template:
metadata:
labels:
app: minio
spec:
containers:
- name: minio
image: minio/minio:RELEASE.2023-07-21T21-12-44Z
args:
- server
- /data
envFrom:
- secretRef:
name: opentalk-minio-secret
ports:
- containerPort: 9000
volumeMounts:
- name: data
mountPath: /data
readinessProbe:
httpGet:
path: /minio/health/ready
port: 9000
initialDelaySeconds: 10
periodSeconds: 10
volumes:
- name: data
persistentVolumeClaim:
claimName: opentalk-minio
---
apiVersion: v1
kind: Service
metadata:
name: minio
namespace: opentalk
spec:
selector:
app: minio
ports:
- port: 9000
targetPort: 9000
040-janus.yaml
Replace <JANUS-ADMIN-SECRET> and <RABBITMQ-PASSWORD> with your values. Replace 176.9.0.25 with the public IP of your Janus worker node. The pod is pinned via nodeSelector so that Janus always runs on the same node where the UDP ports are open.
Key learnings:
- hostNetwork: true is required so the UDP ports are directly reachable on the host
- JANUS_EXCHANGE_TYPE=topic is mandatory; without it, the RabbitMQ exchange type conflicts with the controller
apiVersion: v1
kind: ConfigMap
metadata:
name: janus-config
namespace: opentalk
data:
janus.jcfg: |
general: {
log_timestamps = true
log_colors = false
admin_secret = "<JANUS-ADMIN-SECRET>"
}
nat: {
nat_1_1_mapping = "176.9.0.25"
rtp_port_range = "20000-40000"
}
media: {
rtp_port_range = "20000-40000"
}
transports: {
disable = "libjanus_pfunix.so"
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: janus
namespace: opentalk
spec:
replicas: 1
selector:
matchLabels:
app: janus
template:
metadata:
labels:
app: janus
spec:
nodeSelector:
kubernetes.io/hostname: worker-4-uxmal
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: janus
image: registry.opencode.de/opentalk/janus-gateway:v0.13.4
env:
- name: RABBITMQ_HOST
value: "rabbitmq.opentalk.svc.cluster.local"
- name: RABBITMQ_PORT
value: "5672"
- name: RABBITMQ_USERNAME
valueFrom:
secretKeyRef:
name: opentalk-rabbitmq-secret
key: RABBITMQ_DEFAULT_USER
- name: RABBITMQ_PASSWORD
valueFrom:
secretKeyRef:
name: opentalk-rabbitmq-secret
key: RABBITMQ_DEFAULT_PASS
- name: RABBITMQ_VHOST
value: "/"
- name: JANUS_EXCHANGE_TYPE
value: "topic"
ports:
- containerPort: 8188
protocol: TCP
- containerPort: 8189
protocol: TCP
volumeMounts:
- name: config
mountPath: /etc/janus/janus.jcfg
subPath: janus.jcfg
readinessProbe:
tcpSocket:
port: 8188
initialDelaySeconds: 10
periodSeconds: 10
volumes:
- name: config
configMap:
name: janus-config
---
apiVersion: v1
kind: Service
metadata:
name: janus
namespace: opentalk
spec:
selector:
app: janus
ports:
- name: ws
port: 8188
targetPort: 8188
protocol: TCP
- name: admin
port: 8189
targetPort: 8189
protocol: TCP
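Because Janus runs with hostNetwork: true, you can check directly on the worker node that the WebSocket (8188) and admin (8189) ports are bound:
sudo ss -tlnp | grep -E '8188|8189'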
050-controller.yaml
Replace all placeholder values with your actual credentials. Important: The config file must be mounted at /controller/config.toml; this is where the OpenTalk controller binary looks for its configuration.
Key learnings:
- The Keycloak base URL must NOT include /auth (Keycloak 17+)
- The room server section uses [room_server] with a connections array
- The health endpoint is not /health but requires authentication; use a TCP readiness probe instead
apiVersion: v1
kind: ConfigMap
metadata:
name: opentalk-controller-config
namespace: opentalk
data:
config.toml: |
[http]
port = 11311
    cors.allowed_origin = ["https://opentalk.foo.com"]
[database]
url = "postgres://opentalk:<POSTGRES-PASSWORD>@postgres.opentalk.svc.cluster.local/opentalk"
[keycloak]
    base_url = "https://sso.foo.com"
realm = "opentalk"
client_id = "opentalk"
client_secret = "<KEYCLOAK-CLIENT-SECRET>"
[rabbit_mq]
url = "amqp://opentalk:<RABBITMQ-PASSWORD>@rabbitmq.opentalk.svc.cluster.local/%2F"
min_connections = 10
max_channels_per_connection = 100
[redis]
url = "redis://redis.opentalk.svc.cluster.local/"
[minio]
uri = "http://minio.opentalk.svc.cluster.local:9000"
bucket = "opentalk"
access_key = "opentalk"
secret_key = "<MINIO-PASSWORD>"
[room_server]
connections = [
{ url = "ws://<JANUS-NODE-IP>:8188", admin_url = "ws://<JANUS-NODE-IP>:8189", admin_secret = "<JANUS-ADMIN-SECRET>" }
]
[authz]
type = "none"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller
namespace: opentalk
spec:
replicas: 1
selector:
matchLabels:
app: controller
template:
metadata:
labels:
app: controller
spec:
containers:
- name: controller
image: registry.opencode.de/opentalk/controller:v0.18.0
ports:
- containerPort: 11311
volumeMounts:
- name: config
mountPath: /controller/config.toml
subPath: config.toml
readinessProbe:
tcpSocket:
port: 11311
initialDelaySeconds: 15
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 11311
initialDelaySeconds: 30
periodSeconds: 20
volumes:
- name: config
configMap:
name: opentalk-controller-config
---
apiVersion: v1
kind: Service
metadata:
name: controller
namespace: opentalk
spec:
selector:
app: controller
ports:
- port: 11311
targetPort: 11311
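One way to confirm that the controller reached the database is to list the tables created by its migrations; a non-empty list means the config and credentials work:
kubectl -n opentalk exec deploy/postgres -- psql -U opentalk -d opentalk -c '\dt'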
060-web-frontend.yaml
Key learnings:
- CONTROLLER_HOST must NOT include https://; the frontend adds it automatically
- The correct Keycloak client ID is Frontend (hardcoded in the container entrypoint)
- Use OIDC_ISSUER, not KEYCLOAK_HOST
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-frontend
namespace: opentalk
spec:
replicas: 1
selector:
matchLabels:
app: web-frontend
template:
metadata:
labels:
app: web-frontend
spec:
containers:
- name: web-frontend
image: registry.opencode.de/opentalk/web-frontend:v1.19.0
ports:
- containerPort: 80
env:
- name: CONTROLLER_HOST
value: "controller.opentalk.foo.com"
- name: BASE_URL
value: "https://opentalk.foo.com"
- name: OIDC_ISSUER
value: "https://sso.foo.com/realms/opentalk"
---
apiVersion: v1
kind: Service
metadata:
name: web-frontend
namespace: opentalk
spec:
selector:
app: web-frontend
ports:
- port: 80
targetPort: 80
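A quick in-cluster smoke test of the frontend service (curl-test is a throwaway pod name):
kubectl -n opentalk run curl-test -it --rm --restart=Never \
  --image=curlimages/curl -- -sI http://web-frontend.opentalk.svc.cluster.local/
The response headers should show an HTTP 200.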
ingress.yaml
Replace letsencrypt-prod with your cert-manager ClusterIssuer name if different.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: opentalk-frontend
namespace: opentalk
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
ingressClassName: nginx
tls:
- hosts:
- opentalk.foo.com
secretName: opentalk-tls
rules:
- host: opentalk.foo.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-frontend
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: opentalk-controller
namespace: opentalk
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-buffering: "off"
spec:
ingressClassName: nginx
tls:
- hosts:
- controller.opentalk.foo.com
secretName: opentalk-controller-tls
rules:
- host: controller.opentalk.foo.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: controller
port:
number: 11311
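After applying the Ingress resources, cert-manager should issue both certificates. You can watch the progress with:
kubectl -n opentalk get certificate
Both Certificate resources should eventually report READY as True.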
First Time Setup
After the first deployment, create the MinIO bucket (only needed once — the bucket persists on the Ceph volume):
kubectl run -it --rm minio-bucket \
--image=minio/mc \
--restart=Never -n opentalk \
--overrides='{"spec":{"containers":[{"name":"minio-bucket","image":"minio/mc","command":["/bin/sh","-c","mc alias set local http://minio.opentalk.svc.cluster.local:9000 opentalk <MINIO-PASSWORD> && mc mb local/opentalk"]}]}}'
Redeployment
Since RabbitMQ has no persistent volume, redeployment is straightforward:
kubectl delete -f apps/opentalk/
kubectl apply -f apps/opentalk/
The MinIO bucket and all PostgreSQL data survive redeployment because they are stored on persistent Ceph volumes with Retain policy.
Lessons Learned
These were the trickiest parts of this deployment:
- Janus exchange type: JANUS_EXCHANGE_TYPE=topic is mandatory. Without it, the RabbitMQ exchange is created as fanout and the controller cannot connect.
- RabbitMQ persistence: Do not use a persistent volume for RabbitMQ. The exchange state must be recreated fresh on each start.
- Controller config path: The config file must be at /controller/config.toml, not /etc/opentalk/controller.toml.
- Keycloak URL: Modern Keycloak (17+) dropped the /auth prefix. Use https://sso.foo.com, not https://sso.foo.com/auth.
- Frontend env vars: CONTROLLER_HOST without https://, OIDC_ISSUER instead of KEYCLOAK_HOST, and the Keycloak client ID is Frontend.
- Janus WebRTC: Requires hostNetwork: true and open UDP ports 20000-40000 on the node.
