Production-Grade Solution for Cloned VMware VMs with Duplicate XFS Filesystem UUIDs
When VMware VMs are cloned from templates or snapshots, all XFS filesystems retain identical UUIDs. This causes critical mount failures in Linux systems because XFS refuses to mount filesystems with duplicate UUIDs (the kernel blocks it for data integrity).
Hyper2KVM automatically detects and fixes this issue through:
/etc/fstab when UUIDs don’t match actual disksThis is critical for enterprise environments where VMs are routinely cloned from golden templates.
# Error when trying to mount cloned VM's XFS filesystem
mount: /: wrong fs type, bad option, bad superblock on /dev/sda2
dmesg | tail
# XFS: filesystem has duplicate UUID - can't mount
Hyper2KVM performs these steps automatically during offline migration:
1. Detect all XFS filesystems on the source disk
2. Generate new unique UUIDs using xfs_admin
3. Update /etc/fstab with new UUIDs
4. Rebuild initramfs with new root UUID
5. Update GRUB configuration
6. Verify boot configuration
Before mounting filesystems (critical timing):
Stage 3.5: regenerate_xfs_uuids (NEW!)
↓
Stage 4: mount_root
↓
Stage 4.1: update_fstab_uuids
This ensures:
Cloned VMs often have /etc/fstab entries pointing to UUIDs from a different VM:
# fstab from template VM (original UUIDs)
UUID=41d9975e-e5d9-4de6-8b77-ed5d12e75b63 / xfs defaults 0 0
UUID=f1154fa7-a4ab-43c4-84e5-0933c84b4f4f /boot xfs defaults 0 0
# Actual disk UUIDs after cloning (different!)
/dev/sda2: UUID=2b004342-1234-5678-abcd-ef0123456789 # ← doesn't match fstab
/dev/sda1: UUID=5ffdf052-5678-1234-abcd-123456789abc # ← doesn't match fstab
Hyper2KVM detects UUID mismatches and rebuilds fstab from scratch:
# Detection logic
regenerated_uuids = ["2b004342...", "5ffdf052..."]
fstab_uuids = ["41d9975e...", "f1154fa7..."]
if no_match(regenerated_uuids, fstab_uuids):
# UUIDs don't match - fstab is from different VM!
rebuild_fstab_from_disk_layout()
Uses common partition numbering patterns:
| Device | Typical Mountpoint | Detection Method |
|---|---|---|
p1, sda1 |
/boot |
Boot partition markers (vmlinuz, initramfs, grub2) |
p2, sda2 |
/ |
Root filesystem markers (/etc/os-release, /usr, /etc) |
p5, sda5 |
/home |
Separate home partition pattern |
p3, sda3 |
swap |
Swap signature detection |
def _rebuild_fstab_from_disk_layout(g, uuid_changes):
"""
1. Parse old fstab to understand intended layout
2. Map new UUIDs to expected mountpoints
3. Preserve mount options (defaults, nofail, etc.)
4. Generate new fstab with correct UUIDs
5. Write with sudo (VMCraft mounts are root-owned)
"""
# Build UUID mapping
device_to_uuid = {
"/dev/sda1": "5ffdf052...", # /boot
"/dev/sda2": "2b004342...", # /
"/dev/sda5": "37915bfb...", # /home
}
# Map to mountpoints
mountpoint_to_device = {
"/": "/dev/sda2", # root - highest priority
"/boot": "/dev/sda1", # boot - p1 heuristic
"/home": "/dev/sda5", # home - p5 heuristic
}
# Generate new fstab
write_fstab("""
#
# /etc/fstab
# Rebuilt by hyper2kvm based on actual disk UUIDs
#
UUID=2b004342... / xfs defaults 0 0
UUID=5ffdf052... /boot xfs defaults,nofail,... 0 0
UUID=37915bfb... /home xfs defaults,nofail,... 0 0
""")
Enabled by default for all migrations - no configuration needed!
# migration.yaml
cmd: local
vmdk: /vms/cloned-centos.vmdk
output_dir: /vms/converted
to_output: centos.qcow2
# XFS UUID regeneration runs automatically
# No additional configuration required!
If you need to customize behavior:
# Disable XFS UUID regeneration (not recommended)
# NOTE: Only disable if you're certain VMs don't have duplicate UUIDs
skip_xfs_uuid_regen: false # default: false (regeneration enabled)
# fstab handling
fstab_mode: stabilize-all # default: converts all device specs to stable UUIDs
fstab_prefer_partuuid: false # use UUID= instead of PARTUUID=
# Initramfs regeneration (required after UUID changes)
regen_initramfs: true # default: true - CRITICAL for boot success
def _regenerate_xfs_uuids(g: GuestFS) -> dict:
"""Stage 3.5: Regenerate XFS UUIDs before mounting"""
# 1. Find all XFS partitions
xfs_partitions = []
for part in g.list_partitions():
if g.vfs_type(part) == "xfs":
xfs_partitions.append(part)
# 2. Regenerate each UUID
regenerated = []
for device in xfs_partitions:
old_uuid = g.vfs_uuid(device)
# Run xfs_admin on HOST (not in chroot)
run_sudo(logger, ["xfs_admin", "-U", "generate", device])
new_uuid = g.vfs_uuid(device)
regenerated.append({
"device": device,
"old_uuid": old_uuid,
"new_uuid": new_uuid
})
return {"regenerated": regenerated}
def _update_fstab_with_new_uuids(g, uuid_changes):
"""Try UUID update, fallback to complete rebuild"""
# Try 1: Update existing fstab entries
matched = update_existing_entries(fstab, uuid_changes)
if matched:
write_fstab(updated_content)
return True
# Try 2: No matches - fstab is from different VM
logger.warning("fstab UUIDs don't match - rebuilding from disk layout")
return rebuild_fstab_from_disk_layout(g, uuid_changes)
VMCraft mounts are owned by root, requiring sudo for writes:
# Standard write fails
g.write("/etc/fstab", content) # PermissionError!
# Solution: Sudo-based write
import tempfile
with tempfile.NamedTemporaryFile(mode='w', delete=False) as tf:
tf.write(new_fstab)
temp_path = tf.name
fstab_path = Path(g._mount_root) / "etc" / "fstab"
run_sudo(logger, ["cp", temp_path, str(fstab_path)])
Path(temp_path).unlink()
# Cloned VM fails to boot - duplicate XFS UUIDs
[FAILED] Failed to mount /boot
[FAILED] Failed to mount /home
dracut:/# # Emergency shell - manual intervention required
02:18:33 INFO Found 3 XFS filesystem(s), regenerating UUIDs...
02:18:33 INFO ✓ Regenerated UUID for /dev/nbd4p1: f1154fa7... → 2c82e7a1...
02:18:40 INFO ✓ Regenerated UUID for /dev/nbd4p5: 7722059d... → 11043fd0...
02:18:40 INFO ✓ Successfully regenerated 2 XFS UUIDs
02:18:45 INFO Updating fstab with 2 regenerated UUIDs...
02:18:45 INFO ✓ Updated fstab line 13 (/boot): UUID=f1154fa7... → UUID=2c82e7a1...
02:18:45 INFO ✓ Updated fstab line 14 (/home): UUID=7722059d... → UUID=11043fd0...
02:18:45 INFO ✓ Updated /etc/fstab with 2 new UUID(s)
02:18:51 INFO Running (guestfs): dracut -f --kver 5.14.0-39.el9.x86_64 --add-drivers virtio_blk...
02:20:09 INFO Running (guestfs): grub2-mkconfig -o /boot/grub2/grub.cfg
# VM boots successfully
[ OK ] Reached target Local File Systems
[ OK ] Mounted /boot
[ OK ] Mounted /home
CentOS Stream 9
Kernel 5.14.0-39.el9.x86_64 on an x86_64
localhost login: # ✅ SUCCESS!
# Check new UUIDs
$ sudo blkid
/dev/vda1: UUID="2c82e7a1-ee25-4454-8b64-75fec82f2341" TYPE="xfs" # ✓ NEW
/dev/vda2: UUID="41d9975e-e5d9-4de6-8b77-ed5d12e75b63" TYPE="xfs" # ✓ ORIGINAL
/dev/vda5: UUID="11043fd0-dc8f-4320-ab44-fab3f49747ed" TYPE="xfs" # ✓ NEW
# Check fstab
$ cat /etc/fstab
UUID=41d9975e-e5d9-4de6-8b77-ed5d12e75b63 / xfs defaults 0 0
UUID=2c82e7a1-ee25-4454-8b64-75fec82f2341 /boot xfs defaults,nofail 0 0
UUID=11043fd0-dc8f-4320-ab44-fab3f49747ed /home xfs defaults,nofail 0 0
Symptom:
dracut:/# mount: /sysroot: wrong fs type, bad option, bad superblock
Cause: Initramfs contains OLD root UUID, disk has NEW UUID
Solution: Ensure regen_initramfs: true in config (default)
Manual Fix:
# Boot into rescue mode, mount filesystems
mount /dev/vda2 /mnt
mount /dev/vda1 /mnt/boot
chroot /mnt
# Regenerate initramfs
dracut -f --kver $(uname -r)
# Update GRUB
grub2-mkconfig -o /boot/grub2/grub.cfg
Symptom:
# Migration log shows:
WARNING: fstab UUIDs don't match any regenerated UUIDs
Cause: fstab is from a different VM/migration (common in cloned VMs)
Solution: Automatic fstab rebuild (already implemented!)
Verification:
# Check migration log for:
INFO: Attempting automatic fstab rebuild...
INFO: ✓ Rebuilt entry: / → /dev/sda2 (UUID: 2b004342...)
INFO: ✓ fstab successfully rebuilt from disk layout
Symptom:
INFO: XFS mount failed, retrying with nouuid for /dev/nbd4p2
INFO: ✓ Mount succeeded with nouuid: /dev/nbd4p2
Cause: Expected behavior during migration when old UUIDs haven’t been written to disk yet
Solution: This is normal! After UUID regeneration completes, the target VM will boot without nouuid
| VMs | Total XFS Filesystems | Time | Throughput |
|---|---|---|---|
| 1 VM | 3 filesystems | ~7s | 0.4 fs/sec |
| 10 VMs | 30 filesystems | ~45s | 0.7 fs/sec |
| 100 VMs | 300 filesystems | ~6min | 0.8 fs/sec |
Note: xfs_admin is I/O bound, runs on host (not in VM), scales linearly
| Operation | VMCraft | libguestfs | Improvement |
|---|---|---|---|
| Backend launch | 2.4s | 5-10s | 2-4x faster |
| XFS UUID regen | 7s (3 fs) | 12s (3 fs) | 1.7x faster |
| Total migration | 2m 30s | 4m 15s | 1.7x faster |
nouuid mount option in productionRequired:
xfs_admin - XFS filesystem administration tooldracut - Initramfs regeneration (RHEL/CentOS/Fedora)Optional:
lsinitrd - Verify initramfs contentsgrub2-mkconfig - Regenerate GRUB configurationhyper2kvm/fixers/offline_fixer.py::_regenerate_xfs_uuids() - UUID regeneration logichyper2kvm/fixers/offline_fixer.py::_rebuild_fstab_from_disk_layout() - fstab rebuildhyper2kvm/fixers/offline_fixer.py::_update_fstab_with_new_uuids() - fstab update with fallbackhyper2kvm/core/vmcraft/mount.py - XFS duplicate UUID detection and nouuid retryLast Updated: January 28, 2026 Feature Status: ✅ Production (v1.0.0+) Tested On: CentOS Stream 9, RHEL 8/9, Rocky Linux 8/9, Fedora 40-43