Does QEMU support hardware initiators? iSER?
No, this is only for the case where you're doing pure software-based
iSCSI client connections. If we're relying on local hardware, that's
a different story.

We regularly fix issues with iSCSI attaches in the release cycles of
OpenStack, because it's all done in Python using existing Linux packages.
This is a great example of the benefit that an in-QEMU client gives us. The
Linux iSCSI client tools have proved very unreliable in use by OpenStack.
This is a reflection of the architectural approach: we have individual
resources needed by distinct VMs, but we're having to manage them as a
host-wide resource, and that creates unnecessary complexity for us and has
a poor effect on our reliability overall.
I've been doing more digging and research into this, and it seems that
Canonical removed libiscsi support from QEMU due to security problems
in the 14.04 LTS release cycle.

Trying to fire up a new VM manually with QEMU, attaching an iSCSI disk via
the documented mechanism, ends up with QEMU complaining that it can't
open the disk: 'Unknown protocol'.

qemu-system-x86_64 -drive file=iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0 -iscsi initiator-name=iqn.walt-qemu-initiator

qemu-system-x86_64: -drive file=iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0: could not open disk image iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0: Unknown protocol
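For what it's worth, 'Unknown protocol' here just means the iscsi block driver isn't compiled into that QEMU build at all. A minimal sketch of a quick check (my own, assuming the 'Supported formats:' line printed by qemu-img --help also lists protocol drivers such as iscsi on builds of this era):

import subprocess

def qemu_supports_iscsi(qemu_img='qemu-img'):
    # Ignore the exit status: some builds return non-zero for --help.
    proc = subprocess.Popen([qemu_img, '--help'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    for line in out.decode('utf-8', 'replace').splitlines():
        if line.startswith('Supported formats:'):
            # e.g. "Supported formats: ... iscsi nbd qcow2 raw rbd ..."
            return 'iscsi' in line.split(':', 1)[1].split()
    return False

if __name__ == '__main__':
    print('iscsi block driver compiled in: %s' % qemu_supports_iscsi())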

There was a bug filed against QEMU back in 2014 that was marked as Won't Fix due to security issues:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1271573

That looks like it has since been fixed here:
https://bugs.launchpad.net/ubuntu/+source/libiscsi/+bug/1271653
but that's only Xenial (16.04) support and won't be in the 14.04 tree.


I have also confirmed that nova.virt.libvirt.volume.net.LibvirtNetVolumeDriver
fails for iSCSI for exactly the same reason against Nova master.

I modified nova/virt/libvirt/driver.py to point the iscsi volume type at LibvirtNetVolumeDriver and tried to attach an iSCSI volume. It failed, and the libvirtd log showed the same 'Unknown protocol' error.
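For reference, the change I mean is roughly the following one-line swap (an illustrative sketch, assuming the libvirt_volume_drivers list in nova/virt/libvirt/driver.py on master at the time; the exact entries may differ in your tree):

# nova/virt/libvirt/driver.py (illustrative excerpt, not a real patch)
libvirt_volume_drivers = [
    # default: host-side attach through os-brick/iscsiadm
    # 'iscsi=nova.virt.libvirt.volume.iscsi.LibvirtISCSIVolumeDriver',
    # test: hand the target straight to QEMU's built-in initiator
    'iscsi=nova.virt.libvirt.volume.net.LibvirtNetVolumeDriver',
    # ... remaining entries unchanged ...
]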

The n-cpu.log entry:
2016-06-24 08:09:21.555 8891 DEBUG nova.virt.libvirt.guest [req-46954106-c728-43ba-b40a-5b91a1639610 admin admin] attach device xml: <disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source protocol="iscsi" name="iqn.2000-05.com.3pardata:20810002ac00383d/0">
    <host name="10.52.1.11" port="3260"/>
  </source>
  <target bus="virtio" dev="vdb"/>
  <serial>a1d0c85e-d6e6-424f-9ca7-76ecd0ce45fb</serial>
</disk>
 attach_device /opt/stack/nova/nova/virt/libvirt/guest.py:251
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [req-46954106-c728-43ba-b40a-5b91a1639610 admin admin] [instance: 74092b75-dc20-47e5-9127-c63367d05b29] Failed to attach volume at mountpoint: /dev/vdb
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29] Traceback (most recent call last):
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1160, in attach_volume
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]     guest.attach_device(conf, persistent=True, live=live)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 252, in attach_device
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]     rv = execute(f, *args, **kwargs)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]     six.reraise(c, e, tb)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]     rv = meth(*args, **kwargs)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]   File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 517, in attachDeviceFlags
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 74092b75-dc20-47e5-9127-c63367d05b29] libvirtError: operation failed: open disk image file failed


The */var/log/libvirtd.log* entry:

2016-06-24 15:09:21.572+0000: 21000: debug : qemuMonitorIOProcess:396 : QEMU_MONITOR_IO_PROCESS: mon=0x7fd4f000c920 buf={"return": "could not open disk image iscsi://10.52.1.11:3260/iqn.2000-05.com.3pardata%3A20810002ac00383d/0: Unknown protocol\r\n", "id": "libvirt-18"}^M
 len=153




So the argument that the Linux iSCSI client tools have proven unreliable also holds true for libiscsi.
This really isn't a win.



As a side note here:
I am working on a performance report to test the performance of bare-metal
iSCSI vs. a host attach passed through to virsh (like we do in Nova today)
vs. QEMU's built-in iSCSI support. My preliminary results show that libiscsi
and the host attach passed through to virsh deliver about the same relative
I/O performance, but both are at about 50% of bare-metal iSCSI.
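A minimal sketch of the kind of fio data point this comparison is built on (my own illustration; the workload, options and device path are assumptions, not the report's actual setup):

import json
import subprocess

def fio_randread_iops(device, runtime=60):
    # Direct 4k random reads against the block device; returns read IOPS.
    cmd = [
        'fio', '--name=randread', '--filename=%s' % device,
        '--rw=randread', '--bs=4k', '--iodepth=32', '--numjobs=1',
        '--ioengine=libaio', '--direct=1',
        '--time_based', '--runtime=%d' % runtime,
        '--output-format=json',
    ]
    data = json.loads(subprocess.check_output(cmd).decode('utf-8'))
    return data['jobs'][0]['read']['iops']

if __name__ == '__main__':
    # Placeholder device path; run once on bare metal and once inside the
    # guest for each attach method, then compare the numbers.
    print('randread: %.0f IOPS' % fio_randread_iops('/dev/sdX'))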

Walt

How often are QEMU releases done and upgraded on customer deployments vs.
Python packages (os-brick)?
We're removing the entire layer of instability by removing the need to
deal with any command-line tools, and thus greatly simplifying our
setup on compute nodes. No matter what we might do in os-brick, it'll
never give us a simple or reliable system; we're just papering over
the flaws by doing things like blindly re-trying iSCSI commands upon
failure.
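To make that last point concrete, the pattern being criticised looks roughly like this (a simplified illustration of the retry-on-failure idiom, not actual os-brick code):

import subprocess
import time

def login_with_retries(portal, iqn, attempts=3, delay=2):
    # Blindly retry the iscsiadm login a few times and hope whatever
    # transient host-side failure caused it has cleared.
    cmd = ['iscsiadm', '-m', 'node', '-T', iqn, '-p', portal, '--login']
    for attempt in range(1, attempts + 1):
        try:
            return subprocess.check_output(cmd)
        except subprocess.CalledProcessError:
            if attempt == attempts:
                raise
            time.sleep(delay)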

Regards,
Daniel
