I spent a few days comparing various hypervisors under the same workload and on the same hardware. This is a very specific workload, and results might be different when testing other workloads.
I wanted to share it here because many of us run very modest hardware, and getting the most out of it is probably something others are interested in, too. I also wanted to share it because someone might spot a flaw in the configurations I ran, which could boost things further.
If you do not want to go to the post or read all of that, the very quick summary is that XCP-ng was the quickest and KVM the slowest. There is also a summary at the bottom of the post with some graphs if that interests you. For everyone else who reads the whole post, I hope it gives some useful insights for your self-hosting endeavours.
Unfortunately I’m not very familiar with Cloudstack or Proxmox; we’ve always worked with KVM using virt-manager and Cockpit.
Our usual method is to remove the default hard drive, reattach the qcow2 file as a SCSI device, and then modify the SCSI controller that gets created to enable queuing. I'm sure at some point I should learn to do all this through the command line, but it has never really been necessary.
The relevant sections look like this in one of our prod VMs:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/XXX.qcow2' index='1'/>
  <backingStore/>
  <target dev='sdb' bus='scsi'/>
  <alias name='scsi0-0-0-1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>

<controller type='scsi' index='0' model='virtio-scsi'>
  <driver queues='6'/>
  <alias name='scsi0'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
The <driver queues='X'/> line is the part you have to add. The number should equal the number of cores assigned to the VM.
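If you ever want to do this from the command line, something like the following should be roughly equivalent. The VM name and image path are placeholders, and I normally go through the GUI, so treat it as a sketch rather than our exact workflow:

# Open the domain XML in $EDITOR and add the <disk>/<controller> sections shown above
virsh edit myvm

# Or attach an existing qcow2 image as a SCSI disk in the persistent config
virsh attach-disk myvm /var/lib/libvirt/images/myvm.qcow2 sdb \
    --targetbus scsi --driver qemu --subdriver qcow2 --config

# The queues attribute still has to be added on the controller via virsh edit;
# afterwards, check the result:
virsh dumpxml myvm | grep -A 3 virtio-scsi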
See the following for more on tuning KVM:
Thank you very much. I spent another two hours yesterday reading up on that and creating other VMs and templates, but I have not yet managed to attach the boot disk to a SCSI controller and make it boot. I would really have liked to see whether this change brings it on par with Proxmox (I now wonder what the Proxmox defaults are), but even then it would still be much slower than Hyper-V or XCP-ng. If I find time, I will look into this again.
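From what I have read so far, the missing piece is probably an explicit boot entry on the disk itself, i.e. a <boot order='1'/> element inside the <disk> section (with no <boot dev='...'/> entries left in the <os> section, since libvirt does not allow mixing the two). Untested on my side, but roughly like this:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/XXX.qcow2'/>
  <target dev='sda' bus='scsi'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>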
I’d suggest testing with a plain Debian or Fedora install: just enable KVM, install virt-manager, and create the environment that way.
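Roughly like this; package and group names are from memory, so double-check them for your release:

# Debian/Ubuntu
sudo apt install qemu-system libvirt-daemon-system virt-manager

# Fedora
sudo dnf install @virtualization virt-manager

# let your user manage VMs without root, then start the daemon
sudo usermod -aG libvirt $USER
sudo systemctl enable --now libvirtd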