This seems interesting.
I am interested in pursuing this further and helping contribute to
vdsm-lsm integration. lsm is still in the early stages, but I feel this
is the right time to start influencing it so that vdsm integration can
be smooth. My interest mainly lies in how external storage arrays can be
integrated into oVirt/VDSM, helping oVirt exploit the array offload
features as part of the virtualization stack.
I didn't find any oVirt wiki page on this topic, though there is an old
mailing list thread on vdsm-lsm integration, which when read brings up
more issues to discuss :)
How do the storage repo engine and the possible vdsm services framework
(I learnt about these in a brief chat with Saggi some time back) play a
role here?
Maybe Saggi could elaborate here.
Can "Provisioning Storage" itself be a high-level service, with gluster
and lsm exposing storage services which vdsm can enumerate and report to
the oVirt GUI? Is that the idea?
I'm not sure "Provisioning Storage" is a clear enough definition, as it could
cover a lot of possibly unrelated things; I'd need to understand better what
you mean to be able to comment properly.
Well, going ahead I was envisioning oVirt as being able to both
provision and consume storage.
Provisioning would come through vdsm-libstoragemgmt (lsm) integration:
an oVirt user should be able to carve out LUNs and control which host(s)
of an oVirt cluster can see them, all via libstoragemgmt interfaces.
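To make the intended flow concrete, here is a minimal sketch using an
in-memory mock array. All names here (MockArray, create_lun,
grant_access) are hypothetical stand-ins for illustration; the real
calls would go through libstoragemgmt's API, whose exact interface may
differ.

```python
# Sketch of the flow above: carve out a LUN on the array, then make it
# visible to the cluster's hosts (LUN masking/mapping). MockArray is a
# stand-in, not libstoragemgmt's real API.

class MockArray:
    def __init__(self):
        self.luns = {}     # LUN name -> size in bytes
        self.masking = {}  # LUN name -> set of host initiators

    def create_lun(self, name, size_bytes):
        """Carve out a LUN on the array."""
        self.luns[name] = size_bytes
        self.masking[name] = set()
        return name

    def grant_access(self, lun_name, host_initiator):
        """Associate the LUN's visibility with a host."""
        self.masking[lun_name].add(host_initiator)


array = MockArray()
lun = array.create_lun("ovirt_data_1", 100 * 2**30)  # 100 GiB
for host in ("iqn.1994-05.com.example:host1",
             "iqn.1994-05.com.example:host2"):
    array.grant_access(lun, host)
```

In a real integration, vdsm would drive these two steps on behalf of the
oVirt GUI, with libstoragemgmt abstracting the vendor-specific array
calls.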
With gluster being integrated into vdsm, oVirt users will soon be able
to provision and manage gluster volumes, which also falls under
"provisioning storage". Hence I was wondering whether there would be a
new tab in oVirt for "provisioning storage", where both gluster (in the
near future) and external arrays/LUNs (via vdsm-lsm integration) could
be provisioned.
Is there any wiki page on this topic that lists the todos on this
front, which I can start looking at?
Unfortunately there is not, as we haven't sat down to plan it in depth, but
you're more than welcome to start one.
Generally, the idea is as follows:
Currently vdsm has storage virtualization capabilities, i.e. we've implemented a
form of thin provisioning, we provide snapshots using qcow, etc., without relying
on the hardware. Using lsm we could have feature negotiation and, whenever we
can offload, do it.
e.g. we could know whether a storage array supports thin cloning or
thick cloning, whether a LUN supports thin provisioning, etc.
In the last example (thin provisioning), when we create a VG on top of a thin-p
LUN we should create all disk images (LVs) 'preallocated' and avoid vdsm's thin
provisioning implementation (as it is not needed).
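The thin-provisioning case above boils down to a simple policy decision.
A sketch of that decision follows; the capability names are made up for
illustration and are not libstoragemgmt's actual constants.

```python
# If the backing LUN is already thin-provisioned on the array, create
# LVs 'preallocated' and skip vdsm's own thin-provisioning layer;
# otherwise fall back to vdsm's software thin provisioning.
# Capability names are illustrative, not real lsm constants.

def allocation_policy(array_caps):
    if "THIN_PROVISIONED_LUN" in array_caps:
        return "preallocated"  # the array already allocates lazily
    return "vdsm_thin"         # vdsm's software thin provisioning

print(allocation_policy({"THIN_PROVISIONED_LUN", "THICK_CLONE"}))  # preallocated
print(allocation_policy({"THICK_CLONE"}))                          # vdsm_thin
```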
I was thinking libstoragemgmt's 'query capability' (or a similar)
interface should help vdsm learn the array's capabilities.
I agree that if the backing LUN is already thin provisioned, then vdsm
should not add its own thin provisioning on top of it. Such use cases
and needs from the vdsm perspective need to be thought through, and they
should eventually influence the libstoragemgmt interfaces.
However, we'd need a mechanism at the domain level to 'disable' some of the
capabilities, so that, for example, if we know that on a specific array
snapshots are limited or perform poorly (worse than qcow), we'd be able
to fall back to vdsm's software implementation.
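A minimal sketch of such a domain-level override (all names
hypothetical): the effective capability set is whatever the array
advertises minus whatever the domain configuration disables, and any
feature missing from that set falls back to the software path.

```python
# Domain-level capability override: start from what the array
# advertises, subtract what the domain config disables, and choose
# hardware offload only for what remains. Names are illustrative.

def effective_caps(array_caps, domain_disabled):
    return set(array_caps) - set(domain_disabled)

def snapshot_backend(array_caps, domain_disabled):
    caps = effective_caps(array_caps, domain_disabled)
    return "array_offload" if "SNAPSHOT" in caps else "qcow_software"

# The array supports snapshots, but this domain disables them (e.g.
# because they perform worse than qcow on this array):
print(snapshot_backend({"SNAPSHOT", "THIN_CLONE"}, {"SNAPSHOT"}))  # qcow_software
```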
I was thinking that it's for the user to decide; I'm not sure we can
auto-detect and automate this. But I feel this falls under the 'advanced
use case' category :)
We can always think about this later, right?
vdsm-devel mailing list