Hi all,

I started a blueprint [1] and spec [2] for enabling the use of native AIO mode for disk devices. The idea is to enable it for storage backends/setups where I/O performance benefits from native AIO mode and where no downsides are known with respect to stability or data integrity.
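For reference, the setting in question is the io attribute on the disk <driver> element in the libvirt domain XML. Below is a minimal sketch of what the generated guest XML could look like (the file path and surrounding elements are illustrative only); note that qemu generally requires O_DIRECT caching (cache='none' or 'directsync') to use aio=native:

    <disk type='file' device='disk'>
      <!-- io='native' selects Linux native AIO;
           io='threads' is the userspace thread-pool alternative -->
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/var/lib/nova/instances/example/disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>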
As there is a wide range of storage backends and setups, I'm looking for input on specific backends that are known to benefit from native AIO mode (or where known problems exist). These are the comments so far (copied from the spec):

* Native AIO mode is a bad idea if the storage is not fully pre-allocated, e.g. for qcow2 images that grow on demand or for sparse LVM storage.
* The AIO mode has no effect when using the in-qemu network clients (any disks that use <disk type='network'>); it is only relevant when using the in-kernel network drivers.

Cases where native AIO mode is beneficial:

* Raw images and pre-allocated images in qcow2 format
* Cinder volumes located on iSCSI, NFS or FC devices
* Quobyte (reported by Silvan Kaiser)

A rough sketch of a selection heuristic based on these points is appended below.

Input on the minimum libvirt/qemu versions with which native AIO mode should be used would also be very helpful.

Thanks and regards,
Alex
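P.S. To make the discussion concrete, here is a hypothetical sketch (in Python; the function and parameter names are my own, not from the spec or from Nova) of the selection heuristic the points above suggest:

    def pick_disk_io_mode(image_format, is_preallocated, is_network_disk):
        """Return 'native' or 'threads' for the libvirt <driver io=...> attribute."""
        if is_network_disk:
            # In-qemu network clients (<disk type='network'>) ignore the
            # AIO mode entirely, so keep the conservative default.
            return 'threads'
        if image_format == 'raw':
            return 'native'
        if image_format == 'qcow2' and is_preallocated:
            # Only fully pre-allocated qcow2 images are a known-good case;
            # images that grow on demand should stay on the thread pool.
            return 'native'
        return 'threads'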