I spent a few days comparing various hypervisors under the same workload on the same hardware. This is a very specific workload, and results might differ with other workloads.

I wanted to share it here because many of us run very modest hardware, and getting the most out of it is probably something others are interested in too. I am also sharing it in case someone spots a flaw in the configurations I ran that could boost performance.

If you do not want to read all of that, the very quick summary is that XCP-ng was the quickest and KVM the slowest. There is also a summary at the bottom of the post with some graphs, if that interests you. For everyone who reads the whole post, I hope it gives some useful insights for your self-hosting endeavours.

  • buedi@feddit.orgOP · 2 days ago (edited)

    That’s a very good question. The test system is running Apache CloudStack with KVM at the moment, and I have yet to figure out how to see which disk / controller mode the VM is using. I will dig a bit to see if I can find out. If it turns out not to be SCSI, it would be interesting to re-run the tests.

    Edit: I ran ‘virsh dumpxml <vmname>’ and the disk part looks like this:

      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none'/>
          <source file='/mnt/0b89f7ac-67a7-3790-9f49-ad66af4319c5/8d68ee83-940d-4b68-8b28-3cc952b45cb6' index='2'/>
          <backingStore/>
          <target dev='sda' bus='sata'/>
          <serial>8d68ee83940d4b688b28</serial>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
    

    It is SATA… now I need to figure out how to change that configuration ;-)
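
    If I read the libvirt docs right, on a plain libvirt host this can be changed with ‘virsh edit <vmname>’ while the VM is shut off. A minimal sketch of the same disk switched over to virtio would look like this (assuming the guest has virtio drivers, and with the caveat that CloudStack may regenerate this XML when it starts the instance, so hand edits might not stick):

      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2' cache='none'/>
        <!-- source stays the same as in the dump above -->
        <source file='/mnt/0b89f7ac-67a7-3790-9f49-ad66af4319c5/8d68ee83-940d-4b68-8b28-3cc952b45cb6'/>
        <!-- bus='virtio' instead of 'sata'; the device then shows up as vdX in the guest -->
        <target dev='vda' bus='virtio'/>
      </disk>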

    • snowfalldreamland@lemmy.ml · 1 day ago

      I am only messing with KVM/libvirt for fun, so I have no professional experience with this, but wouldn’t you want to use virtio disks for best performance?

      • buedi@feddit.orgOP · 1 day ago

        I don’t work professionally in that field either. To answer your question: of course I would use whatever gives me the best performance. Why it is set like this is beyond my knowledge. What you basically do in Apache CloudStack when you do not have a template yet is: you upload an ISO, and in that process you have to tell ACS what it is (Windows Server 2022, Ubuntu 24, etc.). From my understanding, the pre-defined OS types you can select and “attach” to an ISO seem to include the specifics used when you create a new instance (VM) in ACS, and that seems to set the controller to SATA. Why? I do not know. I tried picking another OS type (I think it was called Windows SCSI), but the result was still a VM with its disks bound to the SATA controller, despite the VM having an additional SCSI controller that was not attached to anything.

        This can probably be fixed on the command line, but I was not able to figure it out yesterday when I had a bit of spare time to tinker with it again. I would like to see if this makes a big difference in that specific workload.
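
        A quick way to at least see what a given OS-type choice actually produces is to deploy a throwaway instance and grep its XML (the instance name below is just an example of the ACS naming scheme):

          virsh dumpxml i-2-15-VM | grep -E '<disk|<target|<controller'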

    • Voroxpete@sh.itjust.works · 2 days ago (edited)

      Unfortunately I’m not very familiar with CloudStack or Proxmox; we’ve always worked with KVM using virt-manager and Cockpit.

      Our usual method is to remove the default hard drive, reattach the qcow file as a SCSI device, and then modify the SCSI controller that gets created to enable queuing. I’m sure at some point I should learn to do all this through the command line, but it’s never really been necessary.

      The relevant sections look like this in one of our prod VMs:

      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source file='/var/lib/libvirt/images/XXX.qcow2' index='1'/>
        <backingStore/>
        <target dev='sdb' bus='scsi'/>
        <alias name='scsi0-0-0-1'/>
        <address type='drive' controller='0' bus='0' target='0' unit='1'/>
      </disk>

      <controller type='scsi' index='0' model='virtio-scsi'>
        <driver queues='6'/>
        <alias name='scsi0'/>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </controller>

      The driver queues=‘X’ line is the part you have to add. The number should equal the number of cores assigned to the VM.
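
      After restarting the VM you can sanity-check from inside the guest that multiqueue actually took effect; on a reasonably recent kernel, the block device should show one entry per queue (device name will vary):

        ls /sys/block/sdb/mq/
        # 0  1  2  3  4  5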

      The libvirt and QEMU documentation have more on tuning KVM.

      • buedi@feddit.orgOP · 1 day ago

        Thank you very much. I spent another two hours yesterday reading up on that and creating other VMs and templates, but I have not yet been able to attach the boot disk to a SCSI controller and make it boot. I would really like to see if this change brings KVM on par with Proxmox (I wonder now what the Proxmox defaults are), but even then, it would still be much slower than Hyper-V or XCP-ng. If I find time, I will look into this again.
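
        From what I have read so far, the usual reason a boot disk refuses to boot after being moved to a virtio/SCSI controller is that the guest OS never loaded a driver for that controller. The workaround people describe is to first attach a small temporary second disk to the new controller, boot, let the guest set up the driver, and only then move the boot disk over. Roughly like this, if I understood it correctly (the temp disk path is just a placeholder):

          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <!-- hypothetical temporary disk, only there so the guest loads the driver -->
            <source file='/var/lib/libvirt/images/temp-probe.qcow2'/>
            <target dev='sdb' bus='scsi'/>
          </disk>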

        • Voroxpete@sh.itjust.works · 1 day ago

          I’d suggest testing with a plain Debian or Fedora install: just enable KVM, install virt-manager, and create the environment that way.
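
          If you go that route, a single virt-install command should give you a VM with a virtio-scsi boot disk from the start; something like this (name, sizes and ISO path are placeholders):

            virt-install \
              --name disktest \
              --memory 8192 --vcpus 6 \
              --disk path=/var/lib/libvirt/images/disktest.qcow2,size=40,format=qcow2,bus=scsi,cache=none \
              --controller type=scsi,model=virtio-scsi \
              --cdrom /path/to/debian-12.iso \
              --os-variant debian12

          Then you can run the same benchmark there and see how much of the gap is just the disk bus.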