On 06/23/2016 10:09 AM, Walter A. Boring IV wrote:
volumes connected to QEMU instances eventually become directly connected?
Our long-term goal is that 100% of network storage will be connected
directly by QEMU. We already have the ability to partially do this with
iSCSI, but it is lacking support for multipath. As & when that gap is
addressed, though, we'll stop using the host OS for any iSCSI stuff.
So if you require access to host iSCSI volumes, it'll work in the
short-to-medium term, but in the medium-to-long term we're not going to
use that, so plan accordingly.
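For context, a rough sketch of what the direct attach looks like today: QEMU's libiscsi-based block driver accepts an iscsi:// URL and logs into the target itself, with no host-side /dev node involved. The portal address, IQN, and LUN below are placeholders, not a real deployment:

```shell
# Illustrative only: portal, IQN, and LUN number are made up.
# QEMU's built-in initiator (libiscsi) handles the login itself,
# bypassing the host's open-iscsi stack entirely.
qemu-system-x86_64 \
    -drive file=iscsi://192.0.2.10:3260/iqn.2016-06.org.example:volume-1/0,format=raw,if=virtio
```

This is the path that currently lacks multipath support, versus the host-OS path where dm-multipath sits below the /dev/disk/by-path device handed to QEMU.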
What is the benefit of this largely monolithic approach? It seems that
moving everything into QEMU is diametrically opposed to the Unix model
itself, and is just a re-implementation of what already exists in the
Linux world outside of QEMU.
Does QEMU support hardware initiators? iSER?
We regularly fix issues with iSCSI attaches within OpenStack release
cycles, because it's all done in Python using existing Linux packages.
How often are QEMU releases done and upgraded on customer deployments
vs. Python packages (os-brick)?
I don't see a compelling reason to reinvent the wheel here,
and it seems like a major step backwards.
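To make the comparison concrete, here is a hedged sketch of the host-OS path that os-brick provides today. The target details are invented placeholders, and the call requires a real iSCSI target plus root privileges, so the attach itself is only outlined:

```python
# Sketch of the os-brick attach path referred to above.
# All target details are illustrative placeholders, not a real deployment.

def make_connection_properties():
    # The shape of the connection info a Cinder iSCSI driver hands to os-brick.
    return {
        "target_portal": "192.0.2.10:3260",               # placeholder portal
        "target_iqn": "iqn.2016-06.org.example:volume-1",  # placeholder IQN
        "target_lun": 1,
    }

def attach_with_os_brick(connection_properties):
    # Lazy import: os-brick is an external OpenStack library, and this
    # function needs a reachable target and root privileges to actually run.
    from os_brick.initiator import connector
    conn = connector.InitiatorConnector.factory(
        "ISCSI", root_helper="sudo", use_multipath=True)
    # Logs into the target via the host's open-iscsi stack (iscsiadm)
    # and returns a dict containing the host block device path,
    # which Nova then hands to QEMU as an ordinary local disk.
    return conn.connect_volume(connection_properties)

if __name__ == "__main__":
    props = make_connection_properties()
    print(sorted(props))
```

Because this all lives in Python, a fix to the login or multipath logic ships with the next os-brick release rather than waiting on a QEMU upgrade on every compute node.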
This is an interesting point.
Unless there's a significant performance benefit to connecting directly from
QEMU, it seems to me that we would want to leverage the existing work done by
the kernel and other "standard" iSCSI initiators.
Chris
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev