Hyper2KVM on Kubernetes + CentOS 8 - Quick Start

Deploy Hyper2KVM on Kubernetes with CentOS 8 worker nodes in under 10 minutes.


TL;DR - Three-Command Deployment

# 1. Prepare nodes
./scripts/deploy-k8s-centos8.sh prepare

# 2. Deploy Hyper2KVM
./scripts/deploy-k8s-centos8.sh deploy

# 3. Run test
./scripts/deploy-k8s-centos8.sh test

Prerequisites

- A Kubernetes cluster (1.24-1.26 tested) with kubectl configured
- CentOS 8 worker nodes with hardware virtualization enabled (/dev/kvm present)
- A StorageClass for persistent volumes (for example, nfs-client)
- Root SSH access to the worker nodes

Step-by-Step Deployment

Step 1: Prepare CentOS 8 Worker Nodes

Run the preparation script to generate node setup commands:

./scripts/deploy-k8s-centos8.sh prepare

This creates /tmp/prepare-node.sh. Copy and run it on each worker node:

# On your local machine
scp /tmp/prepare-node.sh root@worker-node-1:/tmp/
scp /tmp/prepare-node.sh root@worker-node-2:/tmp/

# On each worker node
ssh root@worker-node-1
bash /tmp/prepare-node.sh
exit

ssh root@worker-node-2
bash /tmp/prepare-node.sh
exit

What this does:

Label the nodes:

kubectl label node worker-node-1 hyper2kvm=enabled
kubectl label node worker-node-2 hyper2kvm=enabled

# Verify
kubectl get nodes -L hyper2kvm

Step 2: Deploy Hyper2KVM

Deploy the core components:

# Basic deployment with defaults
./scripts/deploy-k8s-centos8.sh deploy

# Or with custom storage settings
STORAGE_CLASS=nfs-client \
VMWARE_STORAGE_SIZE=500Gi \
KVM_STORAGE_SIZE=1Ti \
./scripts/deploy-k8s-centos8.sh deploy

What this creates:

- The hyper2kvm-system namespace
- PersistentVolumeClaims vmware-storage (source VMDKs) and kvm-storage (converted QCOW2s)
- The hyper2kvm-worker ServiceAccount used by migration jobs

Verify deployment:

./scripts/deploy-k8s-centos8.sh status

# Or manually:
kubectl get all -n hyper2kvm-system
kubectl get pvc -n hyper2kvm-system

Step 3: Upload VMDKs to Storage

Copy your VMware VMDKs to the source storage PVC:

# Start a temporary pod with storage mounted
kubectl run -it --rm upload-vmdk \
  --image=alpine \
  --overrides='{
    "spec": {
      "containers": [{
        "name": "upload-vmdk",
        "image": "alpine",
        "stdin": true,
        "tty": true,
        "volumeMounts": [{
          "name": "vmware-storage",
          "mountPath": "/mnt/vmware"
        }]
      }],
      "volumes": [{
        "name": "vmware-storage",
        "persistentVolumeClaim": {
          "claimName": "vmware-storage"
        }
      }]
    }
  }' \
  --namespace=hyper2kvm-system

# Inside the pod, you can use wget (built into alpine's busybox),
# scp, or mount NFS. For example:
wget -O /mnt/vmware/test-vm.vmdk http://your-server/test-vm.vmdk

# Or, from another terminal on your local machine while the pod runs:
kubectl cp local-vm.vmdk hyper2kvm-system/upload-vmdk:/mnt/vmware/test-vm.vmdk

# Exit when done
exit

Alternative: Direct NFS mount:

If using NFS storage, you can copy directly:

# Mount NFS on your local machine
sudo mount -t nfs nfs-server:/export/hyper2kvm /mnt/nfs

# Copy VMDKs
sudo cp /vmware/vms/*.vmdk /mnt/nfs/vmware/

# Unmount
sudo umount /mnt/nfs
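Interrupted copies can silently truncate a multi-gigabyte VMDK. One way to verify a transfer is to compare SHA-256 checksums on both ends; a minimal Python sketch (the path below is a placeholder):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so multi-GB VMDKs fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Run on the source machine and again inside the storage pod,
    # then compare the two digests.
    print(sha256_of("/mnt/vmware/test-vm.vmdk"))
```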

Step 4: Run Your First Migration

Create a migration job:

cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate-test-vm
  namespace: hyper2kvm-system
spec:
  template:
    metadata:
      labels:
        app: hyper2kvm
    spec:
      serviceAccountName: hyper2kvm-worker
      restartPolicy: Never
      nodeSelector:
        hyper2kvm: enabled
      containers:
      - name: hyper2kvm
        image: ghcr.io/ssahani/hyper2kvm:latest
        command:
          - h2kvmctl
          - --cmd
          - local
          - --vmdk
          - /mnt/vmware/test-vm.vmdk
          - --output-dir
          - /mnt/kvm
          - --to-output
          - test-vm.qcow2
          - --fstab-mode
          - stabilize-all
          - --regen-initramfs
          - --compress
        volumeMounts:
        - name: vmware-storage
          mountPath: /mnt/vmware
        - name: kvm-storage
          mountPath: /mnt/kvm
        resources:
          requests:
            memory: "4Gi"
            cpu: "2"
          limits:
            memory: "8Gi"
            cpu: "4"
        securityContext:
          privileged: true
          capabilities:
            add:
              - SYS_ADMIN
              - MKNOD
      volumes:
      - name: vmware-storage
        persistentVolumeClaim:
          claimName: vmware-storage
      - name: kvm-storage
        persistentVolumeClaim:
          claimName: kvm-storage
EOF
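Each per-VM Job differs from the one above in only three values: the Job name, the --vmdk path, and the --to-output filename. A minimal Python sketch that derives all three from a VMDK filename (the basename-derived naming convention is an assumption):

```python
import os

def migration_args(vmdk_path: str) -> dict:
    """Derive the per-VM values for the Job manifest above.

    The in-pod path is always under /mnt/vmware, regardless of where
    the VMDK lived before upload.
    """
    base = os.path.splitext(os.path.basename(vmdk_path))[0]
    return {
        "job_name": f"migrate-{base}",
        "vmdk": f"/mnt/vmware/{base}.vmdk",
        "to_output": f"{base}.qcow2",
    }
```

Substitute the three values into the Job YAML above for each VM you migrate.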

Monitor the migration:

# Watch job status
kubectl get jobs -n hyper2kvm-system -w

# View logs
kubectl logs -n hyper2kvm-system -f job/migrate-test-vm

# Check completion
kubectl wait --for=condition=complete job/migrate-test-vm -n hyper2kvm-system --timeout=3600s
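The same check can be done programmatically: fetch the Job with `kubectl get job migrate-test-vm -n hyper2kvm-system -o json` and inspect `.status.conditions`. A minimal sketch of the interpretation logic (field names come from the batch/v1 Job API):

```python
def job_state(status: dict) -> str:
    """Map a batch/v1 Job .status dict to complete / failed / running.

    A Job is done when a Complete or Failed condition has status "True";
    anything else means it is still running (or pending).
    """
    for cond in status.get("conditions", []):
        if cond.get("status") == "True":
            if cond.get("type") == "Complete":
                return "complete"
            if cond.get("type") == "Failed":
                return "failed"
    return "running"
```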

Step 5: Retrieve Migrated VM

Download the migrated QCOW2 file:

# Start a pod to access storage
kubectl run -it --rm download-qcow2 \
  --image=alpine \
  --overrides='{
    "spec": {
      "containers": [{
        "name": "download-qcow2",
        "image": "alpine",
        "stdin": true,
        "tty": true,
        "volumeMounts": [{
          "name": "kvm-storage",
          "mountPath": "/mnt/kvm"
        }]
      }],
      "volumes": [{
        "name": "kvm-storage",
        "persistentVolumeClaim": {
          "claimName": "kvm-storage"
        }
      }]
    }
  }' \
  --namespace=hyper2kvm-system

# Inside the pod: list migrated files
ls -lh /mnt/kvm/

# From another terminal, while the pod is still running
# (--rm deletes the pod the moment you exit its shell):
kubectl cp hyper2kvm-system/download-qcow2:/mnt/kvm/test-vm.qcow2 ./test-vm.qcow2

# Then exit the pod
exit
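As a quick sanity check on the downloaded image: every QCOW2 file begins with the magic bytes QFI\xfb followed by a 4-byte big-endian version number (2 or 3). A minimal Python sketch:

```python
import struct

def qcow2_version(path: str) -> int:
    """Return the QCOW2 version (2 or 3), or raise if the header is wrong.

    Only the first 8 bytes of the header are inspected: 4 magic bytes
    and a big-endian uint32 version field.
    """
    with open(path, "rb") as f:
        magic, version = struct.unpack(">4sI", f.read(8))
    if magic != b"QFI\xfb":
        raise ValueError(f"{path}: not a QCOW2 image (magic {magic!r})")
    return version
```

For a deeper check (cluster consistency, refcounts), run `qemu-img check test-vm.qcow2` on a machine with qemu-img installed.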

Batch Migration Example

Migrate multiple VMs with a single batch manifest:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: batch-manifest
  namespace: hyper2kvm-system
data:
  manifest.json: |
    {
      "migrations": [
        {
          "vmdk": "/mnt/vmware/web-01.vmdk",
          "to_output": "web-01.qcow2"
        },
        {
          "vmdk": "/mnt/vmware/web-02.vmdk",
          "to_output": "web-02.qcow2"
        },
        {
          "vmdk": "/mnt/vmware/db-01.vmdk",
          "to_output": "db-01.qcow2"
        }
      ]
    }
---
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-migration
  namespace: hyper2kvm-system
spec:
  # a single pod processes the entire batch manifest; with
  # completions: 3 each pod would re-run the full manifest
  template:
    spec:
      serviceAccountName: hyper2kvm-worker
      restartPolicy: Never
      nodeSelector:
        hyper2kvm: enabled
      containers:
      - name: hyper2kvm
        image: ghcr.io/ssahani/hyper2kvm:latest
        command:
          - h2kvmctl
          - --cmd
          - local
          - --batch-manifest
          - /config/manifest.json
          - --output-dir
          - /mnt/kvm
          - --fstab-mode
          - stabilize-all
          - --regen-initramfs
        volumeMounts:
        - name: vmware-storage
          mountPath: /mnt/vmware
        - name: kvm-storage
          mountPath: /mnt/kvm
        - name: batch-config
          mountPath: /config
        resources:
          requests:
            memory: "4Gi"
            cpu: "2"
          limits:
            memory: "8Gi"
            cpu: "4"
        securityContext:
          privileged: true
          capabilities:
            add:
              - SYS_ADMIN
              - MKNOD
      volumes:
      - name: vmware-storage
        persistentVolumeClaim:
          claimName: vmware-storage
      - name: kvm-storage
        persistentVolumeClaim:
          claimName: kvm-storage
      - name: batch-config
        configMap:
          name: batch-manifest
EOF

# Monitor
kubectl get jobs -n hyper2kvm-system -w
kubectl logs -n hyper2kvm-system -l app=hyper2kvm
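Writing the manifest by hand gets tedious beyond a few VMs. A minimal Python sketch that builds the same JSON structure from a list of VMDK paths (the basename-derived output naming is an assumption, matching the entries above):

```python
import json
import os

def build_manifest(vmdk_paths: list[str]) -> str:
    """Build a batch manifest like the ConfigMap above: one migration
    entry per VMDK, with the output named after the VMDK basename."""
    migrations = [
        {
            "vmdk": path,
            "to_output": os.path.splitext(os.path.basename(path))[0] + ".qcow2",
        }
        for path in sorted(vmdk_paths)
    ]
    return json.dumps({"migrations": migrations}, indent=2)
```

Feed the result into the ConfigMap's manifest.json key, for example via `kubectl create configmap batch-manifest --from-file=manifest.json`.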

Common Operations

View All Migrations

# List all jobs
kubectl get jobs -n hyper2kvm-system

# Check job status
kubectl get jobs -n hyper2kvm-system -o wide

# View job details
kubectl describe job migrate-test-vm -n hyper2kvm-system

Debug Failed Migration

# Get pod name
POD=$(kubectl get pods -n hyper2kvm-system -l job-name=migrate-test-vm -o jsonpath='{.items[0].metadata.name}')

# View logs
kubectl logs -n hyper2kvm-system $POD

# Get events
kubectl get events -n hyper2kvm-system --field-selector involvedObject.name=$POD

# Shell into pod (if still running)
kubectl exec -it -n hyper2kvm-system $POD -- /bin/bash

Clean Up Old Jobs

# Delete completed jobs
kubectl delete jobs -n hyper2kvm-system --field-selector status.successful=1

# Delete failed jobs (status.failed is not a supported Job field
# selector, so select the names with jsonpath instead)
kubectl delete jobs -n hyper2kvm-system \
  $(kubectl get jobs -n hyper2kvm-system \
    -o jsonpath='{.items[?(@.status.failed>0)].metadata.name}')

# Delete all jobs
kubectl delete jobs -n hyper2kvm-system --all

Troubleshooting

Job Stays in Pending

Check:

kubectl describe pod -n hyper2kvm-system <pod-name>

Common causes:

- Worker nodes missing the hyper2kvm=enabled label
- PVCs stuck in Pending (no matching StorageClass)
- Insufficient CPU or memory on the labeled nodes

Fix:

# Check node labels
kubectl get nodes -L hyper2kvm

# Check PVC status
kubectl get pvc -n hyper2kvm-system

# Check node resources
kubectl top nodes

Permission Denied on /dev/kvm

On worker nodes:

# Quick fix (note: chmod does not persist across reboots)
sudo chmod 666 /dev/kvm
ls -l /dev/kvm
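To pinpoint which requirement fails, the checks can be run one at a time; a minimal Python sketch (the path parameter exists only so the logic can be exercised outside a worker node):

```python
import os
import stat

def kvm_access(path: str = "/dev/kvm") -> str:
    """Report why the KVM device is unusable: missing, wrong type,
    or not readable/writable by the current user."""
    if not os.path.exists(path):
        return "missing (is the kvm module loaded? try: modprobe kvm)"
    if not stat.S_ISCHR(os.stat(path).st_mode):
        return "not a character device"
    if not os.access(path, os.R_OK | os.W_OK):
        return "no read/write access for this user"
    return "ok"
```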

SELinux Issues

On worker nodes:

# Check for denials
sudo ausearch -m avc -ts recent

# Temporary fix (development only)
sudo setenforce 0

Cleanup

Remove all Hyper2KVM resources:

# Using script
./scripts/deploy-k8s-centos8.sh cleanup

# Or manually
kubectl delete namespace hyper2kvm-system

Next Steps


Support


Last Updated: February 2026
Tested On: CentOS 8.5, CentOS Stream 8, Kubernetes 1.24-1.26