wido opened a new issue #4883:
URL: https://github.com/apache/cloudstack/issues/4883


   ##### ISSUE TYPE
    * Feature Idea
   
   ##### COMPONENT NAME
   ~~~
   KVM
   ~~~
   
   ##### CLOUDSTACK VERSION
   ~~~
   master
   ~~~
   
   ##### SUMMARY
   ~~~
   io_uring is a new kernel asynchronous I/O processing mechanism proposed as a much faster alternative to conventional Linux AIO. Patches were merged in Linux 5.1 and delivered the promised performance boost. We decided to integrate it into QEMU to make virtualized storage devices work more efficiently. Let's take a look at how io_uring works in QEMU.
   ~~~
   
   Source:
   - https://www.phoronix.com/scan.php?page=news_item&px=KVM-IO-uring-Passthrough-LF2020
   - https://archive.fosdem.org/2020/schedule/event/vai_io_uring_in_qemu/
   
   By using io_uring with QEMU we can see a massive I/O performance improvement within virtual machines running from local and/or NFS storage.
   
   In order to use this we need (see the version-check sketch after this list):
   
   - QEMU >= 5.0
   - libvirt >= 6.3.0
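
   As a rough illustration of that requirement, a minimal version gate could look like the sketch below. This assumes the standard org.libvirt Java bindings (which the KVM agent already uses), where versions are encoded as major * 1,000,000 + minor * 1,000 + release:

   ~~~
   import org.libvirt.Connect;
   import org.libvirt.LibvirtException;

   public class IoUringCheck {
       // libvirt encodes versions as major * 1,000,000 + minor * 1,000 + release
       private static final long MIN_QEMU = 5_000_000L;    // QEMU 5.0.0
       private static final long MIN_LIBVIRT = 6_003_000L; // libvirt 6.3.0

       /** True when the host's QEMU and libvirt are new enough for io_uring. */
       public static boolean ioUringSupported(Connect conn) throws LibvirtException {
           long qemu = conn.getVersion();          // hypervisor (QEMU) version
           long libvirt = conn.getLibVirVersion(); // libvirt library version
           return qemu >= MIN_QEMU && libvirt >= MIN_LIBVIRT;
       }
   }
   ~~~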
   
   We then need to add io='io_uring' to the disk's driver element in the libvirt XML definition, which would then look like:
   
   ~~~
       <disk type='file' device='disk'>
         <driver name='qemu' type='qcow2' cache='writeback' discard='unmap' io='io_uring'/>
         <source file='/var/lib/libvirt/images/a0f677c6-b065-4795-b468-2b5974496c75'/>
         <backingStore/>
         <target dev='sda' bus='scsi'/>
         <serial>a0f677c6b0654795b468</serial>
         <alias name='scsi0-0-0-0'/>
         <address type='drive' controller='0' bus='0' target='0' unit='0'/>
       </disk>
   ~~~
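
   In CloudStack's KVM agent this roughly means extending the disk definition XML it generates. The helper below is only an illustrative sketch (the class and method names are hypothetical, not the agent's actual disk definition API), emitting the driver element with io='io_uring' only when the host qualifies:

   ~~~
   /** Hypothetical helper: builds the driver element, adding io='io_uring' when supported. */
   public class DiskDriverXml {
       public static String driverElement(boolean ioUringSupported) {
           StringBuilder xml = new StringBuilder();
           xml.append("<driver name='qemu' type='qcow2' cache='writeback' discard='unmap'");
           if (ioUringSupported) {
               xml.append(" io='io_uring'"); // only valid with QEMU >= 5.0 and libvirt >= 6.3.0
           }
           xml.append("/>");
           return xml.toString();
       }
   }
   ~~~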
   
   With io_uring, I/O performance can improve drastically and approach bare-metal levels.
   
   Currently no stable Ubuntu or CentOS release ships a QEMU version that supports io_uring, but we can already merge this into master so that we can use it as soon as such packages are released.

