Also, there has been some work on vhost-blk, as well as vhost-scsi integrated with LIO, so this work seems like more of an offshoot than the general direction of KVM performance work.
On Sat, Sep 7, 2013 at 1:41 AM, Erico Augusto Cavalcanti Guedes <e...@cin.ufpe.br> wrote:

> Hi,
>
> 2013/9/5 Ruben S. Montero <rsmont...@opennebula.org>
>
>> Hi
>>
>> I haven't tried this myself either, but if there is such an improvement
>> we could add a hint in the KVM driver documentation.
>>
>> I have a concern about the first sentence of the post:
>>
>> "Data plane is suitable for LVM or raw image file configurations where
>> live migration and advanced block features are not needed."
>>
>> So, are we losing live migration then? Have you tried it?
>
> No, I haven't tried it. Nevertheless, this information refers to qemu-1.4.
> I'm using qemu-1.6, compiled by myself and linked to the kvm command (I
> didn't install the qemu-kvm package on my Debian 7.1 node). Maybe looking
> for this information in the qemu version history can give us some news.
>
>> On Thu, Sep 5, 2013 at 12:02 PM, Vladislav Gorbunov <vadi...@gmail.com> wrote:
>>
>>> It works with:
>>> DISK = [ driver = "raw", cache = "none", io = "native" ]
>>> and on RedHat/CentOS kvm only.
>
> Vladislav, can you please send the kvm process line of your node?
>
>>> 2013/9/1 Valentin Bud <valentin....@gmail.com>:
>>> > Hi Erico,
>>> >
>>> > This is the first time I hear about virtio-blk-data-plane. Thank you
>>> > for the info; it looks like this feature brings notable I/O improvements.
>>> >
>>> > You can try to use the RAW Section [1] to pass special attributes to
>>> > the underlying hypervisor.
>>> > I have found a blog post [2] which describes a method to enable
>>> > virtio-blk-data-plane using the libvirt XML. The RAW section DATA gets
>>> > passed to libvirt in XML format.
>>> > I think the following could work:
>>> >
>>> > RAW = [
>>> >   TYPE = "kvm",
>>> >   DATA = "<qemu:commandline><qemu:arg value='-set'/><qemu:arg
>>> >     value='device.virtio-disk0.scsi=off'/></qemu:commandline>
>>> >     <!-- config-wce=off is not needed in RHEL 6.4 -->
>>> >     <qemu:commandline><qemu:arg value='-set'/><qemu:arg
>>> >     value='device.virtio-disk0.config-wce=off'/></qemu:commandline>
>>> >     <qemu:commandline><qemu:arg value='-set'/><qemu:arg
>>> >     value='device.virtio-disk0.x-data-plane=on'/></qemu:commandline>"
>>> > ]
>>> >
>>> > I don't have a test machine around, and I would like to hear back from
>>> > you whether it works or not.
>
> It results in the following kvm process (observe the last line):
>
> kvm -S -M pc-i440fx-1.6 -cpu qemu32 -enable-kvm -m 256 -smp
> 1,sockets=1,cores=1,threads=1 -name one-34 -uuid
> 632931d1-6195-4dfc-c01e-9fed9b19dd84 -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-34.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -boot c
> -drive
> file=/srv/cloud/one/var//datastores/0/34/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
> -device
> virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
> -drive
> file=/srv/cloud/one/var//datastores/0/34/disk.1,if=none,media=cdrom,id=drive-ide0-0-0,readonly=on,format=raw
> -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
> -netdev tap,fd=22,id=hostnet0
> -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:c0:a8:0f:6e,bus=pci.0,addr=0x3
> -usb -vnc 0.0.0.0:34 -vga cirrus
> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
> -set device.virtio-disk0.scsi=off -set device.virtio-disk0.config-wce=off
> -set device.virtio-disk0.x-data-plane=on
>
> The virtual machine reaches the runn (RUNNING) state, but invariably
> there is a "Boot failed: could not read the boot disk" error.
> Some possibilities:
> - A conflict between the vd device prefix and the GRUB configuration?
> - Is it mandatory to use an IDE disk for my raw virtual machine image?
>
> I'm looking for a solution to the "Boot failed: could not read the boot
> disk" error. Any idea?
>
> Thanks,
>
> Erico.
>
>>> > [1]: http://opennebula.org/documentation:rel4.2:template#raw_section
>>> > [2]: http://blog.vmsplice.net/2013/03/new-in-qemu-14-high-performance-virtio.html
>>> >
>>> > Health and Goodwill,
>>> >
>>> > On Sat, Aug 31, 2013 at 11:01 PM, Erico Augusto Cavalcanti Guedes
>>> > <e...@cin.ufpe.br> wrote:
>>> >>
>>> >> Hello,
>>> >>
>>> >> In [1], page 10, section 2.3 - KVM Configuration: "To achieve the best
>>> >> possible I/O rates for the KVM guest, the virtio-blk-data-plane
>>> >> feature was enabled for each LUN (a disk or partition) that was passed
>>> >> from the host to the guest. To enable virtio-blk-data-plane for a LUN
>>> >> being passed to the guest, the x-data-plane=on option was added for
>>> >> that LUN in the qemu-kvm command line used to set up the guest. For
>>> >> example:
>>> >> /usr/libexec/qemu-kvm -drive
>>> >> if=none,id=drive0,cache=none,aio=native,format=raw,file=<disk or partition>
>>> >> -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on
>>> >> "
>>> >> I'll be grateful if you can help me with the following question:
>>> >> How can I customize the -device virtio-blk-pci parameter during
>>> >> OpenNebula VM initialization to insert x-data-plane=on into it?
>>> >>
>>> >> My VM template:
>>> >> CONTEXT=[NETWORK="YES",SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]"]
>>> >> CPU="1"
>>> >> DISK=[AIO="native",BUS="virtio",CACHE="none",DEV_PREFIX="vd",FORMAT="raw",IMAGE_ID="1"]
>>> >> GRAPHICS=[LISTEN="0.0.0.0",TYPE="VNC"]
>>> >> MEMORY="256"
>>> >> NIC=[NETWORK_ID="0"]
>>> >> OS=[ARCH="i686",BOOT="hd"]
>>> >>
>>> >> KVM process on the node:
>>> >> /usr/bin/kvm -S -M pc-i440fx-1.6 -cpu qemu32 -enable-kvm -m 256 -smp
>>> >> 1,sockets=1,cores=1,threads=1 -name one-27
>>> >> -uuid c014337c-5255-e983-862e-b744f889aa49 -no-user-config -nodefaults
>>> >> -chardev
>>> >> socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-27.monitor,server,nowait
>>> >> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
>>> >> -no-shutdown -boot c
>>> >> -drive
>>> >> file=/srv/cloud/one/var//datastores/0/27/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none
>>> >> -device
>>> >> virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
>>> >> -drive
>>> >> file=/srv/cloud/one/var//datastores/0/27/disk.1,if=none,media=cdrom,id=drive-ide0-0-0,readonly=on,format=raw
>>> >> -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
>>> >> -netdev tap,fd=22,id=hostnet0
>>> >> -device
>>> >> virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:c0:a8:0f:6e,bus=pci.0,addr=0x3
>>> >> -usb -vnc 0.0.0.0:27 -vga cirrus -device
>>> >> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>>> >>
>>> >> I'm running ONE 4.2 on Debian 7.1 x86_64, kernel 3.2.0-4-amd64, with a
>>> >> customized qemu-1.6 (compiled by myself to enable virtio-blk-data-plane),
>>> >> with Debian 7.1 i386 VMs.
>>> >>
>>> >> Thanks in advance,
>>> >>
>>> >> Erico.
>>> >>
>>> >> [1] ftp://public.dhe.ibm.com/linux/pdfs/KVM_Virtualized_IO_Performance_Paper_v2.pdf
>>> >
>>> > --
>>> > Valentin Bud
>>> > http://databus.pro | valen...@databus.pro
>>
>> --
>> Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013
>> --
>> Ruben S. Montero, PhD
>> Project co-Lead and Chief Architect
>> OpenNebula - The Open Source Solution for Data Center Virtualization
>> www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
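
For reference, the pieces reported in this thread can be combined into a single OpenNebula template fragment. This is a sketch assembled from the messages above, not a tested configuration: IMAGE_ID="1" and the virtio-disk0 device alias come from Erico's template and will differ on other setups, config-wce=off is reportedly unnecessary on RHEL 6.4, and per the qemu-1.4 documentation quoted above, x-data-plane=on requires a raw disk with cache=none and gives up live migration and advanced block features:

```
# Raw virtio disk with native AIO and no host caching, as required
# by virtio-blk-data-plane (IMAGE_ID is specific to Erico's setup).
DISK = [ IMAGE_ID = "1", DEV_PREFIX = "vd", BUS = "virtio",
         FORMAT = "raw", CACHE = "none", AIO = "native" ]

# Extra qemu arguments passed through libvirt; "virtio-disk0" is the
# device id libvirt assigns to the first virtio disk.
RAW = [
  TYPE = "kvm",
  DATA = "<qemu:commandline><qemu:arg value='-set'/><qemu:arg value='device.virtio-disk0.scsi=off'/></qemu:commandline><qemu:commandline><qemu:arg value='-set'/><qemu:arg value='device.virtio-disk0.config-wce=off'/></qemu:commandline><qemu:commandline><qemu:arg value='-set'/><qemu:arg value='device.virtio-disk0.x-data-plane=on'/></qemu:commandline>"
]
```

Whether the extra arguments took effect can be checked on the node by looking for the three `-set device.virtio-disk0.*` options at the end of the resulting kvm process line, as shown in the thread.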