On Thu, Jun 23, 2016 at 12:07:43PM -0600, Chris Friesen wrote:
> On 06/23/2016 10:09 AM, Walter A. Boring IV wrote:
> >
> > volumes connected to QEMU instances eventually become directly connected?
> >
> >> Our long term goal is that 100% of all network storage will be connected
> >> to directly by QEMU. We already have the ability to partially do this
> >> with iSCSI, but it is lacking support for multipath. As & when that gap
> >> is addressed though, we'll stop using the host OS for any iSCSI stuff.
> >>
> >> So if you're requiring access to host iSCSI volumes, it'll work in the
> >> short-medium term, but in the medium-long term we're not going to use
> >> that so plan accordingly.
> >
> > What is the benefit of this largely monolithic approach? It seems that
> > moving everything into QEMU is diametrically opposed to the unix model
> > itself and is just a re-implementation of what already exists in the
> > linux world outside of QEMU.
> >
> > Does QEMU support hardware initiators? iSER?
> >
> > We regularly fix issues with iSCSI attaches in the release cycles of
> > OpenStack, because it's all done in python using existing linux packages.
> > How often are QEMU releases done and upgraded on customer deployments
> > vs. python packages (os-brick)?
> >
> > I don't see a compelling reason for re-implementing the wheel, and it
> > seems like a major step backwards.
>
> This is an interesting point.
>
> Unless there's a significant performance benefit to connecting directly
> from qemu, it seems to me that we would want to leverage the existing work
> done by the kernel and other "standard" iSCSI initiators.
>
> Chris
I'm curious about this as well. Is this for a performance gain? If so, do we
have any metrics showing that the gain is significant enough to warrant
making a change like this?

The host OS is still going to be involved. AFAIK, this just cuts the software
iSCSI initiator out of the picture. So we would be moving from a piece of
software dedicated to one specific function to a different piece of software
whose main reason for existence has nothing to do with IO path management.

I'm not saying I'm completely opposed to this. If there is a good reason for
doing it then it could be worth it. But so far I haven't seen anything
explaining why this would be better than what we have today.
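
Just to make the comparison concrete (a rough sketch only; the portal
address, IQN and LUN below are made-up example values): today the host's
open-iscsi initiator logs into the target and QEMU is handed a host block
device, something along the lines of

  -drive file=/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.2016-06.com.example:vol1-lun-1,format=raw,if=virtio

whereas with QEMU's built-in libiscsi initiator, QEMU would open the target
itself, something like

  -drive file=iscsi://10.0.0.1/iqn.2016-06.com.example:vol1/1,format=raw,if=virtio

Either way there is still an iSCSI initiator in the IO path; the question is
whether it lives in the host's kernel/open-iscsi stack or inside the QEMU
process.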