I spent a few days comparing various hypervisors under the same workload and on the same hardware. This is a very specific workload, and results might differ when testing other workloads.

I wanted to share it here because many of us run very modest hardware, and getting the most out of it is probably something others are interested in, too. I also wanted to share it because maybe someone spots a flaw in the configurations I ran which, once fixed, might boost things further.

If you do not want to go to the post / read all of that, the very quick summary is that XCP-ng was the quickest and KVM the slowest. There is also a summary at the bottom of the post with some graphs, if that interests you. For everyone else who reads the whole post, I hope it gives some useful insights for your self-hosting endeavours.

  • snowfalldreamland@lemmy.ml · 1 day ago

    I am only messing with KVM/libvirt for fun, so I have no professional experience with this, but wouldn’t you want to use virtio disks for best performance?

    • buedi@feddit.org (OP) · 1 day ago
      I am not working professionally in that field either. To answer your question: of course I would use whatever gives me the best performance. Why it is set up like this is beyond my knowledge. What you basically do in Apache CloudStack when you do not have a template yet is: you upload an ISO, and during this process you have to tell ACS what it is (Windows Server 2022, Ubuntu 24, etc.). From my understanding, those pre-defined OS types you can select and “attach” to an ISO seem to include the specifics used when you create a new instance (VM) in ACS, and that appears to set the controller to SATA. Why? I do not know. I tried to pick another OS type (I think it was called Windows SCSI), but in the end it was still a VM with the disks bound to the SATA controller, despite the VM having an additional SCSI controller that was not attached to anything.
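
      To see what CloudStack actually handed to libvirt, one could dump the instance definition on the KVM host and check the disk bus. Here is a minimal sketch using libvirt-python; the instance name is hypothetical (CloudStack usually names instances something like i-<account>-<id>-VM), so adjust it to whatever `virsh list --all` shows on the host.

      ```python
      # Sketch: list the disks of a CloudStack-created instance and the bus
      # they ended up on. Assumes libvirt-python is installed on the KVM host.
      import libvirt
      import xml.etree.ElementTree as ET

      conn = libvirt.open("qemu:///system")      # connect to the local KVM host
      dom = conn.lookupByName("i-2-10-VM")       # hypothetical instance name
      root = ET.fromstring(dom.XMLDesc())        # parse the domain XML

      for disk in root.findall("./devices/disk"):
          target = disk.find("target")
          if target is not None:
              # bus reads "sata" for the problematic setup, "virtio" or "scsi" otherwise
              print(target.get("dev"), target.get("bus"))

      conn.close()
      ```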

      This can probably be fixed on the command line, but I was not able to figure it out yesterday when I had a bit of spare time to tinker with it again. I would like to see whether this makes a big difference in that specific workload.
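
      As a rough idea of what that command-line fix could look like, the sketch below rewrites the disk bus from sata to virtio via libvirt-python and redefines the domain. It assumes the VM is shut off, the guest has virtio drivers installed, and the instance name is again hypothetical; CloudStack may simply regenerate the SATA definition on the next stop/start or redeploy, so treat this as a test rather than a permanent fix.

      ```python
      # Sketch: switch an instance's disks from the SATA bus to virtio and
      # persist the edited definition. Test-only; CloudStack may overwrite it.
      import libvirt
      import xml.etree.ElementTree as ET

      conn = libvirt.open("qemu:///system")
      dom = conn.lookupByName("i-2-10-VM")       # hypothetical instance name
      root = ET.fromstring(dom.XMLDesc())

      for disk in root.findall("./devices/disk"):
          target = disk.find("target")
          if target is not None and target.get("bus") == "sata":
              target.set("bus", "virtio")
              # rename the target device to match the new bus, e.g. sda -> vda
              target.set("dev", (target.get("dev") or "sda").replace("sd", "vd"))
              addr = disk.find("address")
              if addr is not None:
                  disk.remove(addr)              # let libvirt assign a new address

      conn.defineXML(ET.tostring(root, encoding="unicode"))  # persist the change
      conn.close()
      ```

      The same edit could also be made interactively with `virsh edit <domain>` on the host; the Python route just makes it repeatable across test runs.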