Complete guide for deploying hyper2kvm in Docker, Podman, and Kubernetes environments with the Worker Job Protocol v1.
hyper2kvm provides specialized container images for different deployment scenarios:
| Image Stage | Purpose | Use Case | Privileged Required |
|---|---|---|---|
| `cli` | One-shot migrations | Convert single VMDK → qcow2 | No (for conversion only) |
| `daemon` | File watching | Auto-process dropped VMDKs | Yes (if using NBD/LVM) |
| `batch` | Parallel processing | Bulk migrations from manifest | Depends on operations |
| `tui` | Interactive UI | Terminal-based workflow | Depends on operations |
| `worker` | Job orchestration | Worker Job Protocol daemon | Yes (for privileged ops) |
| `production` | Full-featured | Backwards compatible | Yes (full capabilities) |
Model 1: Safe Container (Conversion Only)
┌─────────────────┐
│ Safe Container │ No privileged mode
│ (cli/batch) │ qemu-img only
│ │
│ Input: VMDK │
│ Output: qcow2 │
└─────────────────┘
Model 2: Privileged Worker (Full Operations)
┌─────────────────────┐
│ Privileged Worker │ --privileged or extensive capabilities
│ (worker/daemon) │ NBD, LVM, mount, chroot
│ │
│ - Format conversion│
│ - Disk inspection │
│ - Offline repairs │
│ - Initramfs regen │
└─────────────────────┘
Build all specialized images:
# Build CLI image (minimal, conversion-only)
docker build --target cli -t hyper2kvm:cli .
# Build daemon image (long-running file watcher)
docker build --target daemon -t hyper2kvm:daemon .
# Build worker image (Worker Job Protocol)
docker build --target worker -t hyper2kvm:worker .
# Build production image (full-featured)
docker build --target production -t hyper2kvm:prod .
Approximate sizes after build:

- `cli`: ~750 MB (minimal runtime)
- `daemon`: ~850 MB (adds watchdog)
- `worker`: ~900 MB (adds protocol dependencies)
- `production`: ~1.2 GB (full dependencies)

The repository includes docker-compose.yml with pre-configured services.
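If you need to adapt the compose file, a minimal conversion-only service might look like the sketch below. The volume paths and service name are assumptions; the repository's own docker-compose.yml is authoritative.

```yaml
# Sketch: conversion-only cli service (no privileges needed).
# Adjust the host-side volume paths to your environment.
services:
  cli:
    image: hyper2kvm:cli
    volumes:
      - /data/vmware:/data/vmware:ro
      - /data/output:/data/output:rw
```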
# Edit docker-compose.yml to set your paths
docker-compose run --rm cli \
h2kvmctl --cmd local \
--vmdk /data/vmware/test.vmdk \
--output-dir /data/output \
--to-output converted.qcow2 \
--out-format qcow2 \
--compress
# Start daemon watching /mnt/incoming
docker-compose up -d daemon
# View logs
docker-compose logs -f daemon
# Drop VMDK into /mnt/incoming and watch it process
cp test.vmdk /mnt/incoming/
# Check output
ls -lh /mnt/output/
# Start worker daemon
docker-compose up -d worker
# Submit a job
docker-compose exec worker \
python3 -m hyper2kvm.worker.cli run /data/job-spec.json --follow
# Check worker capabilities
docker-compose exec worker \
python3 -m hyper2kvm.worker.cli capabilities
docker run --rm \
-v /path/to/vmdk:/data/input:ro \
-v /path/to/output:/data/output:rw \
hyper2kvm:cli \
h2kvmctl --cmd local \
--vmdk /data/input/test.vmdk \
--output-dir /data/output \
--to-output converted.qcow2 \
--out-format qcow2 \
--compress
docker run --rm --privileged \
-v /dev:/dev \
-v /lib/modules:/lib/modules:ro \
-v /data/input:/data/input:ro \
-v /data/output:/data/output:rw \
-e HYPER2KVM_MODE=worker \
hyper2kvm:worker \
python3 -m hyper2kvm.worker.cli run /data/job-spec.json
Podman offers better security with rootless containers and native systemd integration.
podman run --rm \
-v /path/to/vmdk:/data/input:ro,z \
-v /path/to/output:/data/output:rw,z \
hyper2kvm:cli \
h2kvmctl --cmd local \
--vmdk /data/input/test.vmdk \
--output-dir /data/output \
--to-output converted.qcow2
Note the `:z` suffix, which relabels the volume content for SELinux.
sudo podman run --rm --privileged \
-v /dev:/dev \
-v /lib/modules:/lib/modules:ro \
-v /data/input:/data/input:ro,z \
-v /data/output:/data/output:rw,z \
-e HYPER2KVM_MODE=worker \
hyper2kvm:worker
Generate systemd unit:
podman create \
--name hyper2kvm-daemon \
--privileged \
-v /dev:/dev \
-v /data/incoming:/data/incoming:rw,z \
-v /data/output:/data/output:rw,z \
-e HYPER2KVM_MODE=daemon \
hyper2kvm:daemon
podman generate systemd --name hyper2kvm-daemon > /etc/systemd/system/hyper2kvm-daemon.service
systemctl daemon-reload
systemctl enable --now hyper2kvm-daemon
Complete Kubernetes deployment using the Worker Job Protocol.
Label worker nodes:
kubectl label nodes worker-01 hyper2kvm.io/worker-enabled=true
Verify the NBD module is available (modinfo also finds compressed .ko.xz/.ko.zst modules):
ssh worker-01 'modinfo nbd'
Create PVCs for input/output:
# See k8s/base/storageclasses.yaml for examples
kubectl apply -f k8s/base/storageclasses.yaml
# Create namespace and RBAC
kubectl apply -f k8s/base/namespace.yaml
kubectl apply -f k8s/worker/rbac.yaml
kubectl apply -f k8s/worker/configmap.yaml
kubectl apply -f k8s/worker/daemonset.yaml
# Verify workers running
kubectl get pods -n hyper2kvm-workers -l app=hyper2kvm-worker
# Create job spec ConfigMap
kubectl create configmap hyper2kvm-job-001 \
--from-file=job-spec.json=k8s/worker/examples/convert-job.json \
-n hyper2kvm-workers
# Deploy job
sed 's/JOBID/001/g' k8s/worker/job-template.yaml | kubectl apply -f -
# Follow progress
kubectl logs -n hyper2kvm-workers -f job/hyper2kvm-migration-001
See k8s/worker/README.md for complete Kubernetes deployment documentation.
The Worker Job Protocol provides production-grade job orchestration.
1. Create JobSpec JSON
↓
2. Submit via CLI/API/Kubernetes
↓
3. Job validated and queued
↓
4. Worker matched by capabilities
↓
5. Job executed with progress events
↓
6. Results stored, events streamed
Create job spec:
{
  "version": "1.0",
  "job_id": "convert-vm-001",
  "operation": "convert",
  "image": {
    "path": "/data/input/production-vm.vmdk",
    "format": "vmdk"
  },
  "parameters": {
    "output_format": "qcow2",
    "compress": true
  },
  "artifacts": {
    "output_path": "/data/output"
  },
  "audit": {
    "requested_by": "ops-team",
    "ticket": "MIGR-1234"
  }
}
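Before submitting, it can be useful to sanity-check a spec file for the required top-level fields. The sketch below is illustrative only, not part of the hyper2kvm API; the field list is taken from the example spec above and is an assumption, not the authoritative JobSpec schema.

```python
import json

# Top-level fields used by the example spec above. Treat this list as
# an assumption; "artifacts" and "audit" are treated as optional here.
REQUIRED_FIELDS = ("version", "job_id", "operation", "image", "parameters")

def check_job_spec(path):
    """Return a list of missing top-level fields in a JobSpec file."""
    with open(path) as f:
        spec = json.load(f)
    return [field for field in REQUIRED_FIELDS if field not in spec]

# Example: an incomplete spec is flagged before submission
spec = {"version": "1.0", "job_id": "convert-vm-001", "operation": "convert"}
with open("/tmp/job-spec.json", "w") as f:
    json.dump(spec, f)
print(check_job_spec("/tmp/job-spec.json"))  # → ['image', 'parameters']
```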
Submit to worker:
# Docker
docker exec hyper2kvm-worker \
python3 -m hyper2kvm.worker.cli run /data/convert-job.json --follow
# Kubernetes
kubectl exec -n hyper2kvm-workers hyper2kvm-worker-xxxxx -- \
python3 -m hyper2kvm.worker.cli run /data/convert-job.json --follow
# View job status
python3 -m hyper2kvm.worker.cli status convert-vm-001
# Stream events
python3 -m hyper2kvm.worker.cli events convert-vm-001 --follow
# List all jobs
python3 -m hyper2kvm.worker.cli list
Capability requirements by operation (several need no privileges at all):
| Operation | Required Capabilities |
|---|---|
| Format conversion | None (qemu-img is safe) |
| Disk inspection | None (read-only qemu-img info) |
| NBD attachment | CAP_SYS_ADMIN, /dev/nbd* access |
| LVM activation | CAP_SYS_ADMIN, device-mapper access |
| Mount operations | CAP_SYS_ADMIN, CAP_MKNOD |
| Chroot | CAP_SYS_CHROOT |
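To confirm which of these capabilities a container actually holds, you can decode the effective-capability bitmask (`CapEff`) from /proc/self/status. The bit numbers below come from the kernel's linux/capability.h; this helper is a standalone sketch, not part of hyper2kvm.

```python
# Capability bit numbers from linux/capability.h.
CAPS = {"CAP_SYS_CHROOT": 18, "CAP_SYS_ADMIN": 21, "CAP_MKNOD": 27}

def decode_capeff(capeff_hex):
    """Return the subset of CAPS present in a CapEff hex bitmask."""
    mask = int(capeff_hex, 16)
    return sorted(name for name, bit in CAPS.items() if mask & (1 << bit))

def current_caps():
    """Decode CapEff for the current process (Linux only)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("CapEff:"):
                return decode_capeff(line.split()[1])
    return []

# A fully privileged container typically reports CapEff: 000001ffffffffff
print(decode_capeff("000001ffffffffff"))
# → ['CAP_MKNOD', 'CAP_SYS_ADMIN', 'CAP_SYS_CHROOT']
```

Run `current_caps()` inside the container to verify that `--cap-add`/`--cap-drop` produced the capability set you intended.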
Docker/Podman:
# Drop unnecessary capabilities
docker run --cap-drop=ALL \
--cap-add=SYS_ADMIN \
--cap-add=MKNOD \
--cap-add=SYS_CHROOT \
--device=/dev/nbd0 \
--device=/dev/nbd1 \
hyper2kvm:worker
Kubernetes:
securityContext:
  privileged: false
  capabilities:
    drop:
      - ALL
    add:
      - SYS_ADMIN
      - MKNOD
      - SYS_CHROOT
Restrict container egress:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hyper2kvm-worker-policy
spec:
  podSelector:
    matchLabels:
      app: hyper2kvm-worker
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53 # DNS only
Enable audit logging for privileged operations:
env:
  - name: HYPER2KVM_AUDIT_LOG
    value: "/var/log/hyper2kvm/audit.log"
Symptom: error: /dev/nbd0: No such file or directory
Solution:
# Load NBD module on host
modprobe nbd max_part=16 nbds_max=16
# Verify
lsmod | grep nbd
# In Kubernetes, an init container loads it automatically
Symptom: permission denied while trying to connect to NBD
Cause: Container not running with sufficient privileges.
Solution:
# Docker: Add --privileged
docker run --privileged ...
# Kubernetes: Verify privileged in securityContext
kubectl get pod -o yaml | grep privileged
Symptom: Image size > 2 GB
Solution: Use specialized stages:
# Use cli stage for conversion-only (750 MB)
docker build --target cli -t hyper2kvm:cli .
# Avoid production stage unless needed (1.2 GB)
Symptom: FileNotFoundError: /data/input/test.vmdk
Solution: Check volume mounts:
# Docker
docker run -v /host/path:/container/path:ro ...
# Kubernetes
kubectl describe pod | grep -A 5 Mounts
Symptom: Conversion is much slower than expected.
Cause: Using network storage for temp files.
Solution:
# Docker: Mount a local SSD for the output/temp path
docker run -v /mnt/nvme/temp:/data/output ...
Symptom: Container killed by OOM
Solution: Increase memory limits:
resources:
  limits:
    memory: "32Gi" # For large VMDKs
Symptom: Migration incomplete after pod restart.
Current State: Jobs in progress are lost.
Workaround: Increase terminationGracePeriodSeconds to allow job completion.
Future Enhancement: Checkpoint/resume support (planned).
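The workaround above can be sketched in the pod spec; the value shown is an assumption, so size it to your longest expected job.

```yaml
spec:
  # Give an in-flight migration time to finish before SIGKILL
  terminationGracePeriodSeconds: 3600
```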
Recommended CPU settings:
resources:
  requests:
    cpu: "4" # Minimum for good performance
  limits:
    cpu: "16" # Allow bursting for compression
| VMDK Size | Recommended Memory |
|---|---|
| < 100 GB | 4 GB |
| 100-500 GB | 8 GB |
| 500 GB - 1 TB | 16 GB |
| > 1 TB | 32 GB |
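The table above can be encoded as a quick sizing helper. Thresholds are taken directly from the table (boundary handling at exactly 500 GB and 1 TB is an assumption, since the table ranges overlap at the edges); this is an illustrative sketch, not a hyper2kvm utility.

```python
def recommended_memory_gb(vmdk_size_gb):
    """Map a VMDK size in GB to the recommended memory from the table above."""
    if vmdk_size_gb < 100:
        return 4
    if vmdk_size_gb <= 500:
        return 8
    if vmdk_size_gb <= 1000:
        return 16
    return 32

print(recommended_memory_gb(250))  # → 8
```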
Questions? Open an issue at https://github.com/ssahani/hyper2kvm/issues