Migrating an Ubuntu Server Instance from a VMware Guest (or Physical Host) to KVM / Proxmox


Actual commands and hints:

Backup your current initrd:

First become root user:

sudo su -

Then fix up InitRD:

cd /boot
cp initrd.img-$(uname -r) BACKUP_initrd.img-$(uname -r)

Force a new initrd with virtio_blk support

vi /etc/initramfs-tools/modules

Add these two lines to the file, to force support for virtio hard drives / controller:

virtio_pci
virtio_blk

Force refresh of initrd image:

update-initramfs -u -k all
  • note the syntax / command used here is different for Ubuntu than for RHEL/CentOS
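The edit-and-rebuild steps above can be sketched as a small script. This is a hedged sketch: TARGET defaults to a throwaway temp file so it is harmless to dry-run; on the real system point it at /etc/initramfs-tools/modules and run the commented update-initramfs line as root.

```shell
# Append the virtio modules to the initramfs module list, idempotently.
# TARGET defaults to a temp file so this sketch is safe to dry-run;
# on the real system use TARGET=/etc/initramfs-tools/modules (as root).
TARGET="${TARGET:-$(mktemp)}"
for mod in virtio_pci virtio_blk; do
    grep -qxF "$mod" "$TARGET" || printf '%s\n' "$mod" >> "$TARGET"
done

# Then rebuild every installed initrd (Ubuntu/Debian syntax):
#   update-initramfs -u -k all
# and confirm the modules actually made it in:
#   lsinitramfs /boot/initrd.img-$(uname -r) | grep virtio
```

Running it twice is safe: the grep guard keeps the module list free of duplicate entries.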

Uninstall VMWare Tools:

  • Clearly this is only relevant if your source is a VMware guest with VMware Tools installed. I've found (on Windows guests at least) that it is 'painful' to uninstall VMware Tools once the VM has already been migrated to a non-VMware platform, so I recommend removing VMware Tools while your source system is still up and running on its original VMware host. Some hints for this process:
Follow steps per URL: http://www.vmware.com/support/ws5/doc/ws_newguest_tools_linux.html

Note that I can't see any hints there for Debian/Ubuntu. Great stuff!
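A sketch of the removal itself, hedged: which command applies depends on how Tools were installed on the donor VM (the classic tarball installer ships its own vmware-uninstall-tools.pl uninstaller; more recent Ubuntu releases tend to use the packaged open-vm-tools instead).

```shell
# Detect and remove whichever VMware tools flavor is present (run as root
# on the donor VM, while it is still on the VMware host).
removed=""
if command -v vmware-uninstall-tools.pl >/dev/null 2>&1; then
    # Tarball-installed VMware Tools ship their own uninstaller.
    vmware-uninstall-tools.pl && removed="VMware Tools (tarball)"
elif dpkg -s open-vm-tools >/dev/null 2>&1; then
    # Distro-packaged tools are removed like any other package.
    apt-get purge -y open-vm-tools && removed="open-vm-tools"
fi
echo "removed: ${removed:-nothing found, nothing to do}"
```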

Uninstall Virtualbox guest additions

  • Clearly this is relevant only if your source VM was a virtualbox VM which had guest additions installed

While still in a shell session as root user:

apt-get remove virtualbox-guest-utils
apt-get autoremove

Clonezilla hints:

  • I run it in 'beginner' mode so it uses defaults where possible; this works fine for me
  • I used the "Images" mode rather than "Devices" - ie, I backed up from my donor system to an intermediate 'storage image file', then restored from that to the target KVM VM.
  • I also subsequently used the "DISK" mode rather than "PARTITION", as the easiest way to grab all slices and disk info in one fell swoop. The partition-based approach works too, but is more fiddly: you have to image each slice on your disk one at a time, which turns out to be unnecessary effort.
  • I am sure other transfer methods are fine too, so do what works for you. (For example, from a VMware host you can scp the VMDK out to your ProxVE / KVM server, then run qemu-img to convert it to a RAW image file, etc. I just find SSH/SCP performance out of VMware is normally horrific, so I went the Clonezilla route in an attempt to optimize transfer speed. There are workarounds for this, such as FastSCP from Veeam, but it is a bit of a drama finding the smaller, older installer for it that is not a bloated monster app. On my systems Veeam was 2x faster than vanilla SCP out of the same VMware ESX host to the same ProxVE target over the same boring gig-ether infrastructure.) Again, to reiterate: use whatever transfer method you like. You are just moving blocks from A to B.
  • In theory I had hoped clonezilla would be faster for moving 'empty blocks' but I'm not sure this is always the case. YMMV.
  • note clonezilla does a few nice things, like resize disk slices where required to finesse things at the end / and also looks for hard-coded MAC address in your NIC config and comments out those lines if required - post migration. It also gracefully handles LVM and MD (software raid) I believe.
  • if you get a kernel panic after the new KVM VM starts booting, but it fails to get into the init boot process, it likely means you forgot to get virtio_pci / virtio_blk support into your initrd image. You can recover from this by (A) changing the KVM VM to use an IDE disk, (B) booting up the system, (C) fixing up your virtio support, (D) powering down and flipping back to the VirtIO disk bus, and (E) powering up to see if you are good.
  • note that virtio network (virtio_net) support is not something you have to force in - it is most likely already present in your Linux distro (it was for CentOS 4.x and 5.x for certain) and is not required to boot, so it should 'just work'.
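The scp-plus-qemu-img alternative mentioned in the transfer-method bullet above looks roughly like this. All paths and filenames are hypothetical, and the sketch only prints the conversion command unless you set DRY_RUN=0 on the Proxmox host.

```shell
# Hypothetical paths: a VMDK copied off the ESX host, converted to a raw
# image that a Proxmox/KVM VM can attach. qemu-img ships with Proxmox.
# First, copy the disk off the VMware host, eg:
#   scp root@esx-host:/vmfs/volumes/datastore1/ubuntu-vm/ubuntu-vm-flat.vmdk "$SRC"
SRC="/var/lib/vz/images/100/ubuntu-donor.vmdk"
DST="/var/lib/vz/images/100/vm-100-disk-0.raw"
CMD="qemu-img convert -p -f vmdk -O raw $SRC $DST"

if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $CMD"    # dry-run by default; this is only a sketch
else
    $CMD
fi
```

The -p flag just shows conversion progress; -f / -O name the source and output formats explicitly rather than relying on auto-detection.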

General context reminder:

  • In case it isn't obvious, remember that none of this is required if you are doing a clean install of a new VM into KVM with the VirtIO bus. It is only required when moving a previously set-up Linux host into such an environment.