>> So, what was your test environment? How big was the difference?

That's strange, there are technical differences between virtio-scsi && virtio-scsi-single.
With virtio-scsi-single you have one virtio-scsi controller per disk. For iothread, you should see a difference with multiple disks in one VM. This needs virtio-scsi-single, because the iothread is mapped to the controller, not to the disk.

----- Original message -----
From: "Andreas Steinel" <a.stei...@gmail.com>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 14 October 2016 11:13:34
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

Hi Mir,

On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen <m...@datanom.net> wrote:
> I use virtio-scsi-single exclusively because of the huge performance
> gain in comparison to virtio-scsi so I can concur to that.

I just benchmarked it on a full-SSD ZFS system of mine and got the reverse results. I used 4 cores, 512 MB RAM (fio 2.1.11, qd32, direct, libaio) and this varies:

Test                  | sequential 8K | randread 4K | randrw 4K 50/50
----------------------+---------------+-------------+----------------
virtio-scsi           |           53k |         57k |             11k
virtio-scsi-single    |           35k |         41k |             11k
virtio-scsi IO/Thread |           29k |         43k |             11k
virtio-scsi-single IO |           29k |         44k |             11k

So, what was your test environment? How big was the difference?

Best,
LnxBil

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
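The controller-per-disk setup described above can be sketched on a PVE host; a minimal illustration, where the VMID 100, the storage name "local-zfs", and the guest device /dev/sdb are all placeholders I chose for the example, not values from the thread:

```shell
# Give each disk its own virtio-scsi controller, so an iothread can be
# attached per controller (VMID 100 and storage "local-zfs" are placeholders):
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-1,iothread=1

# Inside the guest, a fio run roughly matching the quoted parameters
# (queue depth 32, direct I/O, libaio engine; /dev/sdb is a placeholder):
fio --name=randread4k --filename=/dev/sdb \
    --ioengine=libaio --direct=1 --iodepth=32 \
    --rw=randread --bs=4k --runtime=60 --time_based
```

This is a sketch of the benchmark setup, not the exact job files used for the numbers above; repeat the fio run with --rw=randrw --rwmixread=50 for the mixed case.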