Migration to KVM / Proxmox from a VMware Guest (or Physical) Server Instance for RHEL / CentOS

These are some notes I prepared after doing this process for a client in early September 2012. Oddly enough, I've migrated plenty of Windows hosts to KVM / Proxmox but have not migrated many Linux hosts - most Linux VM hosts I've worked with start out as clean installs, which is a different business entirely.

Rationale for this stuff:

  • When you have a Linux host that was installed on real physical hardware, or on virtual hardware which lacks Linux VirtIO_BLK 'hardware', its initrd won't contain the virtio drivers, so the instance can't boot from this sort of disk (there is a quick check for this just after this list).
  • This complicates migrating the Linux host to a KVM environment with a VirtIO_BLK virtual HDD (which provides better disk performance than a virtual IDE disk, the 'easy way').
  • The steps below are simple and effective, and seem to work well.
  • Note these steps were done specifically on servers running CentOS, but I think this process is almost identical for various other Linux distros as well. Your mileage may vary, etc.
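
If you want a quick sanity check before starting, you can confirm whether the running kernel even ships virtio as loadable modules. This is just a minimal sketch; module availability varies by distro and kernel version, so treat empty output as a prompt to investigate rather than a hard failure:

# list any virtio modules shipped with the running kernel
find /lib/modules/$(uname -r) -name 'virtio*'

# or inspect the block driver module directly
modinfo virtio_blk

If the modules exist on disk but are not in your initrd, the mkinitrd step further down is exactly what fixes that.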

Concisely, steps taken:

  1. Plan for downtime. Schedule as required. The 'slow' part in this process is copying the disk image over the network. Vanilla gig-ether yielded 'image write' performance of about 30 minutes for a ~30 GB HDD and 'image restore' performance of about 15 minutes. Larger HDDs will take longer to clone via the network. Faster networks may offer better speed. YMMV.
  2. Get console access on the host that is being migrated to KVM; consider running 'yum update' to keep things 'good and current' before getting started.
  3. If running inside VMWare and you have VMWare tools installed, uninstall VMWare tools. Now. Don't forget this step!
  4. If required (i.e. if 'yum update' pulled in a newer kernel), reboot into the new kernel so that the initrd you rebuild below matches the running kernel
  5. If you wish, take a backup of the current initrd
  6. Force build of a new initrd which has KVM VirtIO_BLK support.
  7. If you are prudent, consider rebooting to be sure it still works with this new initrd. Otherwise, rush ahead! :-)
  8. Prep your storage target for the image of the system
  9. Boot your VM using Clonezilla live CD, backup your host to the storage target. Wait patiently while blocks are copied.
  10. Meanwhile, prep your new KVM VM with appropriate specs (appropriate guest OS / kernel type; appropriate HDD capacity on a VirtIO bus; a NIC of VirtIO type most likely; appropriate CPU and RAM allocations) - see the 'qm create' sketch after this list. Boot this VM from a Clonezilla LiveCD and get it ready to do a restore.
  11. Once your donor system is imaged, restore to the new VM. Wait patiently. (But this step is faster than the 'image write' fortunately.)
  12. Reboot clonezilla once done, power up your KVM VM. It should now actually boot properly from the system image you have poured in.
  13. Confirm function, power off the donor system, happy days.
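
For step 10, just as a rough sketch of what the Proxmox 'qm' CLI side could look like (the VM ID 105, name, storage 'local', bridge vmbr0, disk size, and ISO filename below are all made-up values - adjust for your environment, or simply do the same thing through the web GUI):

# create the target VM with a VirtIO disk and a VirtIO NIC
qm create 105 --name migrated-centos --memory 2048 --sockets 1 --cores 2 \
  --ostype l26 --virtio0 local:32 --net0 virtio,bridge=vmbr0

# attach the Clonezilla ISO so the VM can boot into the restore environment
qm set 105 --cdrom local:iso/clonezilla-live.iso

Size the virtio0 disk at least as large as the donor's disk - Clonezilla generally wants the restore target to be the same size or larger.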

Actual commands and hints:

Backup your current initrd:

cd /boot
cp initrd-$(uname -r).img BACKUP_initrd-$(uname -r).img

Force a new initrd with VirtIO_BLK support:

mkinitrd --with virtio_pci --with virtio_blk -f /boot/initrd-$(uname -r).img $(uname -r)

Note that the dashes and syntax above do actually matter. I found some incorrect versions of this command on other Google-hit websites and they simply fail to work, because the commands are wrong. Copy and paste is your friend here - fewer typos! :-)
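
If you want to double-check that the drivers actually landed in the new image before rebooting, the initrd here is a gzipped cpio archive (true on CentOS 5 at least; older releases may use a different initrd format), so a quick listing is possible - the path below assumes the image built by the mkinitrd command above:

# list initrd contents and look for the virtio modules
zcat /boot/initrd-$(uname -r).img | cpio -it | grep -i virtio

You should see the virtio_pci and virtio_blk module files in the output.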

Uninstall VMware Tools:

  • Clearly only relevant if inside VMware, and with VMware Tools installed. I've found on Windows hosts it is 'painful' to try to uninstall VMware Tools once the VM has already been migrated to a non-VMware platform. So I recommend removing VMware Tools while your source system is still up and running on its original VMware host. Some hints for this process:
as per URL: http://www.vmware.com/support/ws5/doc/ws_newguest_tools_linux.html

From a tar install:
vmware-uninstall-tools.pl

From an RPM install:
rpm -e VMwareTools

In my case the RPM removal was appropriate and it worked perfectly. 

Note you may need to finesse the name of the VMware Tools RPM; if present, it will likely be visible via a command such as:

rpm -aq | grep -i vmware

Clonezilla hints:

  • I run it in 'beginner' mode so it uses defaults where possible; this works fine for me
  • I used "Images" mode rather than "Devices" mode - i.e. I was backing up from my donor system to an intermediate 'storage image file', then restoring from this to the target KVM VM.
  • I also subsequently used "DISK" mode rather than "PARTITION" mode, as the easiest way to grab all slices and disk info in one fell swoop. I have done the 'partition based' approach as well and it does work, but it is more fiddly - you need to image each slice on your disk one at a time, which turns out to be unnecessary effort.
  • I am sure other transfer methods are fine too, so do what works for you. (i.e. from a VMware host you can scp out the VMDK to your ProxVE / KVM server, then run qemu-img to convert the thing to a RAW image file - see the qemu-img example after this list. I just find SSH-SCP performance out of VMware is normally horrific, so I went the Clonezilla route in an attempt to optimize transfer speeds.) It is true there are workarounds for this like FastSCP from Veeam, but it is a bit of drama finding the smaller, older installer for it which is not a bloated monster app. On my systems I found Veeam was 2x faster than vanilla SCP out of the same VMware ESX host to the same ProxVE target via the same boring gig-ether infrastructure. Again, to reiterate: use whatever transfer method you like. You are just moving blocks from A to B.
  • In theory I had hoped clonezilla would be faster for moving 'empty blocks' but I'm not sure this is always the case. YMMV.
  • Note Clonezilla does a few nice things, like resizing disk slices where required to finesse things at the end; it also looks for hard-coded MAC addresses in your NIC config and comments out those lines if required, post-migration. It also gracefully handles LVM and MD (software RAID), I believe.
  • If you get a kernel panic after the new KVM VM starts booting, but it fails to get into the init boot process, it likely means you forgot to get virtio_pci / virtio_blk support into your initrd image. You can recover from this by (A) changing the KVM VM to use an IDE disk, (B) booting up the system, (C) fixing up your virtio support, (D) powering down and flipping back to the VirtIO HDD bus, (E) powering up and seeing if you are good.
  • Note that virtio NIC (virtio_net) support is not something you have to force into the initrd - it is present already in your Linux distro (most likely; it was for CentOS 4.x and 5.x for certain) and is not required to boot, so it should 'just work'.
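
For the record, if you take the 'copy out the VMDK and convert it' route mentioned above instead of Clonezilla, the conversion step on the Proxmox side is a one-liner with qemu-img (file names here are hypothetical; point the output wherever your VM's disk storage lives, or dd the raw file onto the VM's LVM volume afterwards):

# convert the copied VMDK into a raw image the KVM VM can use
qemu-img convert -f vmdk -O raw donor-server.vmdk vm-105-disk-1.raw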

General context reminder:

  • In case it isn't obvious, remember that none of this is required if you are doing a clean install of a new VM into KVM with the VirtIO bus. It is only required when moving a previously set up Linux host into such an environment.