Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-05-09 Thread Ayal Baron


- Original Message -
> 
> >> This seems interesting.
> >>
> >> I am interested in pursuing this further and helping contribute to
> >> the vdsm-lsm integration. lsm is still in the early stages, but I
> >> feel it's the right time to start influencing it so that vdsm
> >> integration can be smooth. My interest mainly lies in how external
> >> storage arrays can be integrated into oVirt/VDSM and how oVirt can
> >> exploit the array offload features as part of the virtualization
> >> stack.
> >>
> >> I didn't find any oVirt wiki page on this topic, though there is an
> >> old mailing list thread on vdsm-lsm integration, which when read
> >> brings up more issues to discuss :)
> >> How do the storage repo engine and the possible vdsm services
> >> framework (I learnt about these in my brief chat with Saggi some
> >> time back) play a role here?
> > Maybe Saggi could elaborate here.
> >
> >> Can "Provisioning Storage" itself be like a high level service,
> >> with
> >> gluster and lsm  exposing storage services, which vdsm can
> >> enumerate
> >> and
> >> send to oVirt GUI, is that the idea ?
> > I'm not sure "Provisioning Storage" is a clear enough definition,
> > as it could cover a lot of possibly unrelated things, but I'd need
> > to understand more what you mean to really be able to comment
> > properly.
> >
> 
> Well, I was envisioning oVirt as being able to both provision and
> consume storage going ahead.
> Provisioning through vdsm-libstoragemgmt (lsm) integration: an oVirt
> user should be able to carve out LUNs and associate the LUNs'
> visibility with the host(s) of an oVirt cluster, all via
> libstoragemgmt interfaces.
> 
> With gluster being integrated into vdsm, an oVirt user will soon be
> able to provision and manage gluster volumes, which also falls under
> "provisioning storage", hence I was wondering if there would be a new
> tab in oVirt for "provisioning storage", where gluster (in the near
> future) and external array/LUNs (via vdsm-lsm integration) can be
> provisioned.


OK, now that I understand a little more, in general I agree.
First, upstream oVirt already has the ability to provision Gluster (albeit still 
in a limited way), and we will definitely need more provisioning capabilities, 
including, for example, setting up LIO on a host and exposing LUNs that would be 
available to other hosts/VMs (for one, live storage migration without shared 
disks would need this).
Specifically wrt the "Provisioning Storage" tab, that's more of a design question: 
there are going to be many services we will need to provision, not all of them 
specifically around storage, and I'm not sure we'd want a new tab for every 
type.


> 
> 
> >> Is there any wiki page on this topic which lists the todos on this
> >> front, which I can start looking at ?
> > Unfortunately there is not as we haven't sat down to plan it in
> > depth, but you're more than welcome to start it.
> >
> > Generally, the idea is as follows:
> > Currently vdsm has storage virtualization capabilities, i.e. we've
> > implemented a form of thin-provisioning, we provide snapshots
> > using qcow etc, without relying on the hardware.  Using lsm we
> > could have feature negotiation and whenever we can offload, do it.
> > e.g. we could know if a storage array supports thin cloning, if it
> > supports thick cloning, if a LUN supports thin provisioning etc.
> > In the last example (thin provisioning) when we create a VG on top
> > of a thin-p LUN we should create all disk image (LVs)
> > 'preallocated' and avoid vdsm's thin provisioning implementation
> > (as it is not needed).
> >
> 
> I was thinking libstoragemgmt 'query capability' or similar interface
> should help vdsm know the array capabilities.

That is correct.

> I agree that if the backing LUN is already thin-provisioned, then vdsm
> should not add its own thin provisioning over it. So such use cases &
> needs from the vdsm perspective need to be thought about, and
> eventually they should influence the libstoragemgmt interfaces

I don't see how it would influence the lsm interfaces.

> 
> > However, we'd need a mechanism at domain level to 'disable' some of
> > the capabilities, so for example if we know that on a specific
> > array snapshots are limited or provide poor performance (worse
> > than qcow) or whatever, we'd be able to fall back to vdsm's
> > software implementation.
> >
> 
> I was thinking that it's for the user to decide; not sure if we can
> auto-detect and automate this. But I feel this falls under the
> 'advanced use case' category :)
> We can always think about this later, right?

Correct, the mechanism is in order to allow the user to decide.

> 
> 


Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-05-07 Thread Deepak C Shetty



This seems interesting.

I am interested in pursuing this further and helping contribute to the
vdsm-lsm integration. lsm is still in the early stages, but I feel it's
the right time to start influencing it so that vdsm integration can be
smooth. My interest mainly lies in how external storage arrays can be
integrated into oVirt/VDSM and how oVirt can exploit the array offload
features as part of the virtualization stack.

I didn't find any oVirt wiki page on this topic, though there is an old
mailing list thread on vdsm-lsm integration, which when read brings up
more issues to discuss :)
How do the storage repo engine and the possible vdsm services framework (I
learnt about these in my brief chat with Saggi some time back) play a
role here?

Maybe Saggi could elaborate here.


Can "Provisioning Storage" itself be like a high level service, with
gluster and lsm  exposing storage services, which vdsm can enumerate
and
send to oVirt GUI, is that the idea ?

I'm not sure "Provisioning Storage" is a clear enough definition, as it could 
cover a lot of possibly unrelated things, but I'd need to understand more what you mean 
to really be able to comment properly.



Well, I was envisioning oVirt as being able to both provision and
consume storage going ahead.
Provisioning through vdsm-libstoragemgmt (lsm) integration: an oVirt
user should be able to carve out LUNs and associate the LUNs'
visibility with the host(s) of an oVirt cluster, all via
libstoragemgmt interfaces.


With gluster being integrated into vdsm, an oVirt user will soon be
able to provision and manage gluster volumes, which also falls under
"provisioning storage", hence I was wondering if there would be a new
tab in oVirt for "provisioning storage", where gluster (in the near
future) and external array/LUNs (via vdsm-lsm integration) can be
provisioned.



Is there any wiki page on this topic which lists the todos on this
front, which I can start looking at ?

Unfortunately there is not as we haven't sat down to plan it in depth, but 
you're more than welcome to start it.

Generally, the idea is as follows:
Currently vdsm has storage virtualization capabilities, i.e. we've implemented a 
form of thin provisioning, we provide snapshots using qcow etc., without relying 
on the hardware.  Using lsm we could have feature negotiation and, whenever we 
can offload, do it,
e.g. we could know if a storage array supports thin cloning, if it supports 
thick cloning, if a LUN supports thin provisioning, etc.
In the last example (thin provisioning), when we create a VG on top of a thin-p 
LUN we should create all disk images (LVs) 'preallocated' and avoid vdsm's thin 
provisioning implementation (as it is not needed).
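[A minimal sketch of the decision described above, assuming a capability listing
is available from the array; the capability names and data shapes here are
invented for illustration, not the real libstoragemgmt binding.]

# Illustrative sketch only: the capability name below is an assumption.
# The point is the policy decision vdsm would make once it knows what
# the array offers.

def choose_allocation_policy(array_caps, lun_is_thin):
    """Return how vdsm should allocate disk images (LVs) on a given LUN.

    array_caps  -- set of capability names reported for the array (assumed)
    lun_is_thin -- whether the backing LUN is thin-provisioned on the array
    """
    if 'THIN_PROVISIONING' in array_caps and lun_is_thin:
        # The array already thin-provisions the LUN, so create LVs
        # 'preallocated' and skip vdsm's own thin-provisioning layer.
        return 'preallocated'
    # Otherwise fall back to vdsm's software implementation (sparse + qcow).
    return 'sparse'

# Example: an array that reports thin provisioning for the LUN backing the VG.
print(choose_allocation_policy({'THIN_PROVISIONING', 'VOLUME_CLONE'}, True))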



I was thinking libstoragemgmt's 'query capability' or a similar interface
should help vdsm know the array capabilities.
I agree that if the backing LUN is already thin-provisioned, then vdsm
should not add its own thin provisioning over it. So such use cases &
needs from the vdsm perspective need to be thought about, and eventually
they should influence the libstoragemgmt interfaces.



However, we'd need a mechanism at domain level to 'disable' some of the 
capabilities, so for example if we know that on a specific array snapshots are 
limited or provide poor performance (worse than qcow) or whatever, we'd be able 
to fall back to vdsm's software implementation.



I was thinking that it's for the user to decide; not sure if we can
auto-detect and automate this. But I feel this falls under the 'advanced
use case' category :)

We can always think about this later, right?



Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-05-03 Thread Ayal Baron


- Original Message -
> On 04/24/2012 07:37 PM, Ayal Baron wrote:
> >
> > - Original Message -
> >> On 04/24/2012 02:07 AM, Ayal Baron wrote:
> >>> - Original Message -
>  On 04/22/2012 12:28 PM, Ayal Baron wrote:
> >>> This way we'd have a 2 stage process:
> >>> 1. setupStorage (generic)
> >> I was looking up on the VDSM archives and there are talks of
> >> using
> >> libstoragemgmt (lsm)
> > Funny, we started using that acronym for Live Storage Migration
> > :)
> >
> >> under VDSM. I was wondering if the setupStorage would be
> >> something
> >> where
> >> lsm would
> >> be used to do the work, it seems fit for purpose here.
> >>
> >>
> > I don't think this is the libstoragemgmt mandate.
> >
> > libstoragemgmt is:
> > "A library that will provide a vendor agnostic open source
> > storage
> > application programming interface (API) for storage arrays."
> >
> > i.e. it is there to abstract storage array specifics from the
> > user.
> > It will be used by things like LVM etc, not the other way
> > around.
> >
> > setupStorage would use libstoragemgmt wherever appropriate of
> > course.
> >
> > But as the libstoragemgmt maintainer, Tony (cc'd) can correct
> > me
> > if
> > I'm wrong here.
> >
> >
>  I was looking at setupStorage as Provisioning + Setting up.
>  I know one of the basic goals of lsm is provision the storage to
>  the
>  host
>  and preparing the storage for consumption is higher layers work.
> 
>  With that, i think then its becomes a 3 stage process, from
>  oVirt/VDSM
>  pov...
>  1) Provision Storage (using lsm if applicable, based on whether
>  external
>  storage is connected)
>  2) Setup Storage (prepare the provisioned LUNs for usage)
>  3) createSD/createGlusterVolume/...  (plugin specific)
> 
>  Since we are talking about Storage management using VDSM, i was
>  interested in understanding the plans, strategy of how VDSM +
>  lsm
>  will integrate ?
> >>> There are various ways of approaching this.
> >>> 1. Given proper storage you could just provision new LUNs
> >>> whenever
> >>> you need a new virtual disk and utilize storage side thin
> >>> provisioning and snapshots for most of your needs.
> >>> When you have such storage you don't really need steps 2 and 3
> >>> above.  Your storage is your virtual images repository.
> >>> Although quite simple and powerful, very few arrays are capable
> >>> of
> >>> growing to a very large number of objects (luns + snapshots +
> >>> whatever) today, so I don't see this being the most common use
> >>> case any time soon.
> >> This is not clear to me. This only talks about provisioning but
> >> not
> >> consuming.
> >> 2 and 3 above are required from a consumability perspective. The
> >> LUNs
> >> will have
> >> to prepared and used by LVM (pv, vg, lv, metadata) for VDSM to
> >> host a
> >> storage domain.
> > There are several ways of managing the repo in such a scenario,
> > just an example is to provision a LUN where vdsm would manage
> > metadata (listing of images, relations between snapshots, logical
> > sizes of images, etc) and every image is another LUN that we would
> > provision, so there would be no need for LVM in such a scenario.
> >
> 
> Sorry, but not clear to me. If vdsm is configured for file based
> storage
> domain, it would expect the LUN to have a fs, and vdsm would create
> the
> fs storage domain over it. If vdsm is configured for block based
> storage
> domain, it would end up using the LUN as a pv, over which the VG/LV
> would sit (hence the need for lvm) and form the block storage domain,
> unless you are talking of vdsm using raw LUNs which is not supported
> today ?

Today vdsm cannot provision LUNs, only consume them, so the full scenario is of 
course not supported today.
However, we do support exposing LUNs directly to VMs so the only delta is that 
we do not manage the LUNs as a repository.

> 
> >>
> >>> 2. Provision LUNs (either statically or dynamically using lsm)
> >>> once, preferably thinly provisioned. Then setupStorage (storage
> >>> domain over VG / gLuster / other) and use lsm for creating
> >>> snapshots/clones on the fly
> >>> In my opinion this will be more prevalent to begin with.
> >>>
> >>> With lsm we will (hopefully) have a way of enumerating storage
> >>> side
> >>> capabilities so then when we create a repository (gluster / sd /
> >>> ...) we'd be able to determine on the fly what capabilities it
> >>> has
> >>> and determine if to use these or to use virtualized capabilities
> >>> (e.g. in the virt case when you need to create a snapshot use
> >>> qcowX).
> >>>
> >>> In oVirt, once you've defined a storage domain and it exposes a
> >>> set
> >>> of capabilities, user should be able to override (e.g. even
> >>> though
> >>> storage supports snapshots,

Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-05-03 Thread Deepak C Shetty

On 04/25/2012 04:52 PM, Deepak C Shetty wrote:


This seems interesting.

I am interested in pursuing this further and helping contribute to the
vdsm-lsm integration. lsm is still in the early stages, but I feel it's
the right time to start influencing it so that vdsm integration can be
smooth. My interest mainly lies in how external storage arrays can be
integrated into oVirt/VDSM and how oVirt can exploit the array offload
features as part of the virtualization stack.


I didn't find any oVirt wiki page on this topic, though there is an old
mailing list thread on vdsm-lsm integration, which when read brings up
more issues to discuss :)
How do the storage repo engine and the possible vdsm services framework (I
learnt about these in my brief chat with Saggi some time back) play a
role here?


Can "Provisioning Storage" itself be like a high level service, with 
gluster and lsm  exposing storage services, which vdsm can enumerate 
and send to oVirt GUI, is that the idea ?
Is there any wiki page on this topic which lists the todos on this 
front, which I can start looking at ?




Hello Ayal,
Looking for your opinion here.

thanx,
deepak



Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-25 Thread Deepak C Shetty

On 04/24/2012 07:37 PM, Ayal Baron wrote:


- Original Message -

On 04/24/2012 02:07 AM, Ayal Baron wrote:

- Original Message -

On 04/22/2012 12:28 PM, Ayal Baron wrote:

This way we'd have a 2 stage process:
1. setupStorage (generic)

I was looking up on the VDSM archives and there are talks of
using
libstoragemgmt (lsm)

Funny, we started using that acronym for Live Storage Migration
:)


under VDSM. I was wondering if the setupStorage would be
something
where
lsm would
be used to do the work, it seems fit for purpose here.



I don't think this is the libstoragemgmt mandate.

libstoragemgmt is:
"A library that will provide a vendor agnostic open source
storage
application programming interface (API) for storage arrays."

i.e. it is there to abstract storage array specifics from the
user.
It will be used by things like LVM etc, not the other way around.

setupStorage would use libstoragemgmt wherever appropriate of
course.

But as the libstoragemgmt maintainer, Tony (cc'd) can correct me
if
I'm wrong here.



I was looking at setupStorage as Provisioning + Setting up.
I know one of the basic goals of lsm is to provision the storage to the
host, and preparing the storage for consumption is the higher layers'
work.

With that, I think it then becomes a 3-stage process, from an
oVirt/VDSM
pov...
1) Provision Storage (using lsm if applicable, based on whether
external
storage is connected)
2) Setup Storage (prepare the provisioned LUNs for usage)
3) createSD/createGlusterVolume/...  (plugin specific)

Since we are talking about storage management using VDSM, I was
interested in understanding the plans/strategy for how VDSM + lsm
will integrate.

There are various ways of approaching this.
1. Given proper storage you could just provision new LUNs whenever
you need a new virtual disk and utilize storage side thin
provisioning and snapshots for most of your needs.
When you have such storage you don't really need steps 2 and 3
above.  Your storage is your virtual images repository.
Although quite simple and powerful, very few arrays are capable of
growing to a very large number of objects (luns + snapshots +
whatever) today, so I don't see this being the most common use
case any time soon.

This is not clear to me. This only talks about provisioning but not
consuming.
2 and 3 above are required from a consumability perspective. The LUNs
will have to be prepared and used by LVM (pv, vg, lv, metadata) for VDSM
to host a storage domain.

There are several ways of managing the repo in such a scenario, just an example 
is to provision a LUN where vdsm would manage metadata (listing of images, 
relations between snapshots, logical sizes of images, etc) and every image is 
another LUN that we would provision, so there would be no need for LVM in such 
a scenario.



Sorry, but it's not clear to me. If vdsm is configured for a file based storage 
domain, it would expect the LUN to have a fs, and vdsm would create the 
fs storage domain over it. If vdsm is configured for a block based storage 
domain, it would end up using the LUN as a pv, over which the VG/LV 
would sit (hence the need for lvm) and form the block storage domain, 
unless you are talking about vdsm using raw LUNs, which is not supported today?





2. Provision LUNs (either statically or dynamically using lsm)
once, preferably thinly provisioned. Then setupStorage (storage
domain over VG / gLuster / other) and use lsm for creating
snapshots/clones on the fly
In my opinion this will be more prevalent to begin with.

With lsm we will (hopefully) have a way of enumerating storage side
capabilities so then when we create a repository (gluster / sd /
...) we'd be able to determine on the fly what capabilities it has
and determine if to use these or to use virtualized capabilities
(e.g. in the virt case when you need to create a snapshot use
qcowX).

In oVirt, once you've defined a storage domain and it exposes a set
of capabilities, user should be able to override (e.g. even though
storage supports snapshots, I want to use qcow as this storage can
only create 255 snapshots per volume and I need more than that).

I'm assuming that we will not have any way of knowing the limits
per machine.

Does that make sense?


Agreed on #2. Thinking deeper...

1) Provisioning Storage

Provisioning storage using lsm would require new VDSM verbs to be added,
which can create / show the LUNs to the oVirt user, and the user can then
select which LUN(s) to use for setupStorage.

A 'create LUN' verb doesn't exist today, but 'show LUNs' does.

Currently the (simplified) flow is:
1. connect to storage (when relevant)
2. get listing of devices
3. create a storage domain on selected devices


Provisioning LUNs will probably exploit the lsm capabilities and
provide
the options to the user to create the LUNs using the available array
features.

With GlusterFS also providing some of the array capabilities (stripe,
replicate etc), the user might want to provision a GlusterFS volume (with
whatever capabilities gluster offers) to host storage upon, especially
if the storage is coming from not-so-reliable commodity hw storage.

Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-24 Thread Ayal Baron


- Original Message -
> On 04/24/2012 02:07 AM, Ayal Baron wrote:
> >
> > - Original Message -
> >> On 04/22/2012 12:28 PM, Ayal Baron wrote:
> > This way we'd have a 2 stage process:
> > 1. setupStorage (generic)
>  I was looking up on the VDSM archives and there are talks of
>  using
>  libstoragemgmt (lsm)
> >>> Funny, we started using that acronym for Live Storage Migration
> >>> :)
> >>>
>  under VDSM. I was wondering if the setupStorage would be
>  something
>  where
>  lsm would
>  be used to do the work, it seems fit for purpose here.
> 
> 
> >>> I don't think this is the libstoragemgmt mandate.
> >>>
> >>> libstoragemgmt is:
> >>> "A library that will provide a vendor agnostic open source
> >>> storage
> >>> application programming interface (API) for storage arrays."
> >>>
> >>> i.e. it is there to abstract storage array specifics from the
> >>> user.
> >>> It will be used by things like LVM etc, not the other way around.
> >>>
> >>> setupStorage would use libstoragemgmt wherever appropriate of
> >>> course.
> >>>
> >>> But as the libstoragemgmt maintainer, Tony (cc'd) can correct me
> >>> if
> >>> I'm wrong here.
> >>>
> >>>
> >> I was looking at setupStorage as Provisioning + Setting up.
> >> I know one of the basic goals of lsm is provision the storage to
> >> the
> >> host
> >> and preparing the storage for consumption is higher layers work.
> >>
> >> With that, i think then its becomes a 3 stage process, from
> >> oVirt/VDSM
> >> pov...
> >> 1) Provision Storage (using lsm if applicable, based on whether
> >> external
> >> storage is connected)
> >> 2) Setup Storage (prepare the provisioned LUNs for usage)
> >> 3) createSD/createGlusterVolume/...  (plugin specific)
> >>
> >> Since we are talking about Storage management using VDSM, i was
> >> interested in understanding the plans, strategy of how VDSM + lsm
> >> will integrate ?
> >
> > There are various ways of approaching this.
> > 1. Given proper storage you could just provision new LUNs whenever
> > you need a new virtual disk and utilize storage side thin
> > provisioning and snapshots for most of your needs.
> > When you have such storage you don't really need steps 2 and 3
> > above.  Your storage is your virtual images repository.
> > Although quite simple and powerful, very few arrays are capable of
> > growing to a very large number of objects (luns + snapshots +
> > whatever) today, so I don't see this being the most common use
> > case any time soon.
> 
> This is not clear to me. This only talks about provisioning but not
> consuming.
> 2 and 3 above are required from a consumability perspective. The LUNs
> will have
> to prepared and used by LVM (pv, vg, lv, metadata) for VDSM to host a
> storage domain.

There are several ways of managing the repo in such a scenario, just an example 
is to provision a LUN where vdsm would manage metadata (listing of images, 
relations between snapshots, logical sizes of images, etc) and every image is 
another LUN that we would provision, so there would be no need for LVM in such 
a scenario.

> 
> 
> > 2. Provision LUNs (either statically or dynamically using lsm)
> > once, preferably thinly provisioned. Then setupStorage (storage
> > domain over VG / gLuster / other) and use lsm for creating
> > snapshots/clones on the fly
> > In my opinion this will be more prevalent to begin with.
> >
> > With lsm we will (hopefully) have a way of enumerating storage side
> > capabilities so then when we create a repository (gluster / sd /
> > ...) we'd be able to determine on the fly what capabilities it has
> > and determine if to use these or to use virtualized capabilities
> > (e.g. in the virt case when you need to create a snapshot use
> > qcowX).
> >
> > In oVirt, once you've defined a storage domain and it exposes a set
> > of capabilities, user should be able to override (e.g. even though
> > storage supports snapshots, I want to use qcow as this storage can
> > only create 255 snapshots per volume and I need more than that).
> >
> > I'm assuming that we will not have any way of knowing the limits
> > per machine.
> >
> > Does that make sense?
> >
> 
> Agree to #2. Thinking deeper
> 
> 1) Provisioning Storage
> 
> Provisioning storage using lsm would require new VDSM verbs to be
> added,
> which can create / show the LUNs to the oVirt user and user can then
> select which LUN(s) to use for setupStorage.

A 'create LUN' verb doesn't exist today, but 'show LUNs' does.

Currently the (simplified) flow is:
1. connect to storage (when relevant)
2. get listing of devices
3. create a storage domain on selected devices
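As a rough sketch, the same flow from a management client's point of view might
look like the following. The verb names mirror vdsm's storage API of the time,
but the import path and argument lists are simplified assumptions, not exact
signatures.

# Hedged sketch of the current three-step flow; argument lists are assumed.
from vdsm import vdscli          # assumed client helper for talking to vdsmd

ISCSI = 3                        # placeholder storage-type constant
conn = vdscli.connect()

# 1. connect to storage (when relevant, e.g. log in to an iSCSI portal)
conn.connectStorageServer(ISCSI, 'pool-uuid',
                          [{'connection': '10.0.0.5',
                            'iqn': 'iqn.2012-01.com.example:target0'}])

# 2. get a listing of devices (LUNs) now visible on the host
devices = conn.getDeviceList(ISCSI)

# 3. create a storage domain on the selected device(s)
conn.createStorageDomain(ISCSI, 'sd-uuid', 'my-domain',
                         devices[0]['GUID'], 1, 0)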

> 
> Provisioning LUNs will probably exploit the lsm capabilities and
> provide
> the options to the user to create the LUNs using the available array
> features.
> 
> With GlusterFS also providing some of the array capabilities (stripe,
> replicate etc), user might want to provision GlusterFS volume (with
> whatever capabilities gluster offers) to host storage upon, especially
> if the storage is coming from not-so-reliable commodity hw storage.

Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-24 Thread Deepak C Shetty

On 04/24/2012 02:07 AM, Ayal Baron wrote:


- Original Message -

On 04/22/2012 12:28 PM, Ayal Baron wrote:

This way we'd have a 2 stage process:
1. setupStorage (generic)

I was looking up on the VDSM archives and there are talks of using
libstoragemgmt (lsm)

Funny, we started using that acronym for Live Storage Migration :)


under VDSM. I was wondering if the setupStorage would be something
where
lsm would
be used to do the work, it seems fit for purpose here.



I don't think this is the libstoragemgmt mandate.

libstoragemgmt is:
"A library that will provide a vendor agnostic open source storage
application programming interface (API) for storage arrays."

i.e. it is there to abstract storage array specifics from the user.
It will be used by things like LVM etc, not the other way around.

setupStorage would use libstoragemgmt wherever appropriate of
course.

But as the libstoragemgmt maintainer, Tony (cc'd) can correct me if
I'm wrong here.



I was looking at setupStorage as Provisioning + Setting up.
I know one of the basic goals of lsm is to provision the storage to the
host, and preparing the storage for consumption is the higher layers'
work.

With that, I think it then becomes a 3-stage process, from an
oVirt/VDSM
pov...
1) Provision Storage (using lsm if applicable, based on whether
external
storage is connected)
2) Setup Storage (prepare the provisioned LUNs for usage)
3) createSD/createGlusterVolume/...  (plugin specific)

Since we are talking about storage management using VDSM, I was
interested in understanding the plans/strategy for how VDSM + lsm
will integrate.


There are various ways of approaching this.
1. Given proper storage you could just provision new LUNs whenever you need a 
new virtual disk and utilize storage side thin provisioning and snapshots for 
most of your needs.
When you have such storage you don't really need steps 2 and 3 above.  Your 
storage is your virtual images repository.
Although quite simple and powerful, very few arrays are capable of growing to a 
very large number of objects (luns + snapshots + whatever) today, so I don't 
see this being the most common use case any time soon.


This is not clear to me. This only talks about provisioning but not consuming.
2 and 3 above are required from a consumability perspective. The LUNs will have
to be prepared and used by LVM (pv, vg, lv, metadata) for VDSM to host a storage 
domain.



2. Provision LUNs (either statically or dynamically using lsm) once, preferably 
thinly provisioned. Then setupStorage (storage domain over VG / gLuster / 
other) and use lsm for creating snapshots/clones on the fly
In my opinion this will be more prevalent to begin with.

With lsm we will (hopefully) have a way of enumerating storage side 
capabilities so then when we create a repository (gluster / sd / ...) we'd be 
able to determine on the fly what capabilities it has and determine if to use 
these or to use virtualized capabilities (e.g. in the virt case when you need 
to create a snapshot use qcowX).

In oVirt, once you've defined a storage domain and it exposes a set of 
capabilities, user should be able to override (e.g. even though storage 
supports snapshots, I want to use qcow as this storage can only create 255 
snapshots per volume and I need more than that).

I'm assuming that we will not have any way of knowing the limits per machine.

Does that make sense?



Agreed on #2. Thinking deeper...

1) Provisioning Storage

Provisioning storage using lsm would require new VDSM verbs to be added,
which can create / show the LUNs to the oVirt user, and the user can then
select which LUN(s) to use for setupStorage.

Provisioning LUNs will probably exploit the lsm capabilities and provide
the options to the user to create the LUNs using the available array
features.
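As an illustration only, such a verb might take a shape like the sketch below;
the verb itself and the lsm call it wraps are hypothetical names, not existing
API in either project.

# Hypothetical sketch of a new vdsm verb for carving a LUN through
# libstoragemgmt.  Neither 'createLUN' nor 'volume_create' is meant here as
# real, current API; they only show the shape such an integration could take.

def createLUN(lsm_client, pool_id, name, size_bytes, thin=True):
    """Carve a LUN on the external array and return what the GUI needs."""
    provisioning = 'THIN' if thin else 'FULL'
    vol = lsm_client.volume_create(pool_id, name, size_bytes, provisioning)
    return {'id': vol.id, 'name': vol.name, 'size': vol.size_bytes}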

With GlusterFS also providing some of the array capabilities (stripe,
replicate etc), the user might want to provision a GlusterFS volume (with
whatever capabilities gluster offers) to host storage upon, especially
if the storage is coming from not-so-reliable commodity hw storage.

I feel this also has to be considered as part of provisioning and should
come before the setupStorage step.

IMHO, there should be a "Storage Provisioning" tab in oVirt which will
allow the user to ...

1a) Carve LUNs from external Storage array.

1b) Provision storage as GlusterFS volume. User can select the LUNs
carved (from #1a) as bricks for GlusterFS volume, if need be.

1c) Use local host free disk space.

Somewhere here there should be an option (as applicable) for the user to
select whether to exploit storage array features or host virt
capabilities for, say, snapshots, in cases where both are applicable.

2) Setup Storage

Here the user would create a VDSM file or block based storage domain,
based on the storage provisioned from the "Storage Provisioning" tab.
I believe this is where VDSM will add its metadata to the provisioned
storage to make it a storage domain.

IMHO for image operations lik

Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-23 Thread Ayal Baron


- Original Message -
> On 04/22/2012 12:28 PM, Ayal Baron wrote:
> >
> >>> This way we'd have a 2 stage process:
> >>> 1. setupStorage (generic)
> >> I was looking up on the VDSM archives and there are talks of using
> >> libstoragemgmt (lsm)
> > Funny, we started using that acronym for Live Storage Migration :)
> >
> >> under VDSM. I was wondering if the setupStorage would be something
> >> where
> >> lsm would
> >> be used to do the work, it seems fit for purpose here.
> >>
> >>
> > I don't think this is the libstoragemgmt mandate.
> >
> > libstoragemgmt is:
> > "A library that will provide a vendor agnostic open source storage
> > application programming interface (API) for storage arrays."
> >
> > i.e. it is there to abstract storage array specifics from the user.
> > It will be used by things like LVM etc, not the other way around.
> >
> > setupStorage would use libstoragemgmt wherever appropriate of
> > course.
> >
> > But as the libstoragemgmt maintainer, Tony (cc'd) can correct me if
> > I'm wrong here.
> >
> >
> 
> I was looking at setupStorage as Provisioning + Setting up.
> I know one of the basic goals of lsm is provision the storage to the
> host
> and preparing the storage for consumption is higher layers work.
> 
> With that, i think then its becomes a 3 stage process, from
> oVirt/VDSM
> pov...
> 1) Provision Storage (using lsm if applicable, based on whether
> external
> storage is connected)
> 2) Setup Storage (prepare the provisioned LUNs for usage)
> 3) createSD/createGlusterVolume/...  (plugin specific)
> 
> Since we are talking about Storage management using VDSM, i was
> interested in understanding the plans, strategy of how VDSM + lsm
> will integrate ?


There are various ways of approaching this.
1. Given proper storage you could just provision new LUNs whenever you need a 
new virtual disk and utilize storage side thin provisioning and snapshots for 
most of your needs.
When you have such storage you don't really need steps 2 and 3 above.  Your 
storage is your virtual images repository.
Although quite simple and powerful, very few arrays are capable of growing to a 
very large number of objects (luns + snapshots + whatever) today, so I don't 
see this being the most common use case any time soon.

2. Provision LUNs (either statically or dynamically using lsm) once, preferably 
thinly provisioned. Then setupStorage (storage domain over VG / gLuster / 
other) and use lsm for creating snapshots/clones on the fly
In my opinion this will be more prevalent to begin with.

With lsm we will (hopefully) have a way of enumerating storage side 
capabilities, so when we create a repository (gluster / sd / ...) we'd be 
able to determine on the fly what capabilities it has and decide whether to use 
these or to use virtualized capabilities (e.g. in the virt case, when you need 
to create a snapshot, use qcowX).

In oVirt, once you've defined a storage domain and it exposes a set of 
capabilities, the user should be able to override them (e.g. even though the 
storage supports snapshots, I want to use qcow because this storage can only 
create 255 snapshots per volume and I need more than that).

I'm assuming that we will not have any way of knowing the limits per machine.

Does that make sense?
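[A tiny sketch of that override mechanism, with made-up capability and
configuration names; purely illustrative, not an existing vdsm interface.]

# Illustrative only: choosing between array-offloaded snapshots and vdsm's
# qcow implementation for a domain.  The capability/config names are invented.

def snapshot_backend(domain_caps, user_override=None):
    """Pick the snapshot implementation for a storage domain.

    user_override lets the admin force 'qcow' (or 'array') even when the
    array advertises snapshot support, e.g. because of a per-volume snapshot
    limit that vdsm has no way to detect on its own.
    """
    if user_override in ('qcow', 'array'):
        return user_override
    return 'array' if domain_caps.get('snapshots') else 'qcow'

# The 255-snapshot case above: storage supports snapshots, user overrides.
print(snapshot_backend({'snapshots': True}, user_override='qcow'))   # -> qcow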

> 
> thanx,
> deepak
> 
> 
> 
> 


Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-23 Thread Tony Asleson
On 04/22/2012 01:58 AM, Ayal Baron wrote:
> On 04/20/2012 02:23 PM, Deepak C Shetty wrote:
>> under VDSM. I was wondering if the setupStorage would be something
>> where lsm would be used to do the work, it seems fit for purpose
>> here.
>> 
>> 
> 
> I don't think this is the libstoragemgmt mandate.
> 
> libstoragemgmt is: "A library that will provide a vendor agnostic
> open source storage application programming interface (API) for
> storage arrays."
> 
> i.e. it is there to abstract storage array specifics from the user. 
> It will be used by things like LVM etc, not the other way around.

Yes, this is the current plan for libStorageMgmt.  To provide the
missing management path to third party storage arrays in a consistent
manner.

I'm starting to think that the project name is causing some confusion.
Some of the ambiguity was intentional as eventually the library will
hopefully allow users to manage other pieces of the storage puzzle like
FC switches, but perhaps it is too general.

I believe users would like tools/libraries to manage and simplify the
whole area of storage and
http://sourceforge.net/p/storagemanager/home/Home/ is working towards that.

-Tony


Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-23 Thread Deepak C Shetty

On 04/22/2012 12:28 PM, Ayal Baron wrote:



This way we'd have a 2 stage process:
1. setupStorage (generic)

I was looking up on the VDSM archives and there are talks of using
libstoragemgmt (lsm)

Funny, we started using that acronym for Live Storage Migration :)


under VDSM. I was wondering if the setupStorage would be something
where
lsm would
be used to do the work, it seems fit for purpose here.



I don't think this is the libstoragemgmt mandate.

libstoragemgmt is:
"A library that will provide a vendor agnostic open source storage application 
programming interface (API) for storage arrays."

i.e. it is there to abstract storage array specifics from the user.
It will be used by things like LVM etc, not the other way around.

setupStorage would use libstoragemgmt wherever appropriate of course.

But as the libstoragemgmt maintainer, Tony (cc'd) can correct me if I'm wrong 
here.




I was looking at setupStorage as Provisioning + Setting up.
I know one of the basic goals of lsm is to provision the storage to the host,
and preparing the storage for consumption is the higher layers' work.

With that, I think it then becomes a 3-stage process, from an oVirt/VDSM 
pov...
1) Provision Storage (using lsm if applicable, based on whether external 
storage is connected)

2) Setup Storage (prepare the provisioned LUNs for usage)
3) createSD/createGlusterVolume/...  (plugin specific)

Since we are talking about storage management using VDSM, I was
interested in understanding the plans/strategy for how VDSM + lsm
will integrate.

thanx,
deepak





Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-21 Thread Ayal Baron


- Original Message -
> 
> >>> So although I believe that when we create a gluster volume or an
> >>> ovirt storage domain then indeed we shouldn't need a lot of low
> >>> level commands, but it would appear to me that not allowing for
> >>> more control when needed is not going to work and that there are
> >>> enough use cases which do not involve a gluster volume nor a
> >>> storage domain to warrant this to be generic.
> >> I'm not against more control; I'm against uncontrollable API such
> >> as
> >> runThisLvmCommandAsRoot()
> > I can't argue with this.
> > I think what we're missing here though is something similar to
> > setupNetworks which would solve the problem.  Not have 100 verbs
> > (createPartition, createFS, createVG, createLV,  setupRaid,...)
> > but rather have setupStorage (better name suggestions are welcome)
> > which would get the list of objects to use and the final
> > configuration to setup.
> >
> > This way we'd have a 2 stage process:
> > 1. setupStorage (generic)
> 
> I was looking up on the VDSM archives and there are talks of using
> libstoragemgmt (lsm)

Funny, we started using that acronym for Live Storage Migration :)

> under VDSM. I was wondering if the setupStorage would be something
> where
> lsm would
> be used to do the work, it seems fit for purpose here.
> 
> 

I don't think this is the libstoragemgmt mandate.

libstoragemgmt is:
"A library that will provide a vendor agnostic open source storage application 
programming interface (API) for storage arrays."

i.e. it is there to abstract storage array specifics from the user.
It will be used by things like LVM etc, not the other way around.

setupStorage would use libstoragemgmt wherever appropriate of course.

But as the libstoragemgmt maintainer, Tony (cc'd) can correct me if I'm wrong 
here.


Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-20 Thread Deepak C Shetty



So although I believe that when we create a gluster volume or an
ovirt storage domain then indeed we shouldn't need a lot of low
level commands, it would appear to me that not allowing for
more control when needed is not going to work and that there are
enough use cases which do not involve a gluster volume nor a
storage domain to warrant making this generic.

I'm not against more control; I'm against uncontrollable API such as
runThisLvmCommandAsRoot()

I can't argue with this.
I think what we're missing here though is something similar to setupNetworks 
which would solve the problem.  Not have 100 verbs (createPartition, createFS, 
createVG, createLV,  setupRaid,...) but rather have setupStorage (better name 
suggestions are welcome) which would get the list of objects to use and the 
final configuration to setup.

This way we'd have a 2 stage process:
1. setupStorage (generic)


I was looking through the VDSM archives and there is talk of using
libstoragemgmt (lsm) under VDSM. I was wondering if setupStorage would
be something where lsm would be used to do the work; it seems fit for
purpose here.
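For illustration, a declarative setupStorage request (in the spirit of
setupNetworks, as suggested above) might look roughly like this; the verb and
the schema are entirely hypothetical.

# Hypothetical setupStorage payload, modelled loosely on setupNetworks: the
# caller describes the devices and the final layout it wants, and vdsm works
# out the partitioning/RAID/LVM/filesystem steps.  Nothing here is an
# existing vdsm verb or schema.

setup_request = {
    'devices': ['/dev/sdb', '/dev/sdc'],
    'raid':    {'name': 'md0', 'level': 1, 'members': ['/dev/sdb', '/dev/sdc']},
    'vg':      {'name': 'bricks_vg', 'pvs': ['md0']},
    'lvs':     [{'name': 'brick1', 'size': '500G', 'fs': 'xfs',
                 'mountpoint': '/bricks/brick1'}],
}

# conn.setupStorage(setup_request)  # hypothetical single verb, like setupNetworks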



Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-19 Thread Ayal Baron


- Original Message -
> On Wed, Apr 18, 2012 at 09:06:36AM -0400, Ayal Baron wrote:
> > 
> > 
> > - Original Message -
> > > On Tue, Apr 17, 2012 at 03:38:25PM +0530, Shireesh Anjal wrote:
> > > > Hi all,
> > > > 
> > > > As part of adding Gluster support in ovirt, we need to
> > > > introduce
> > > > some Storage Device management capabilities (on the host).
> > > > Since
> > > > these are quite generic and not specific to Gluster as such, we
> > > > think it might be useful to add it as a core vdsm and oVirt
> > > > feature.
> > > > At a high level, this involves following:
> > > > 
> > > >  - A "Storage Devices" sub-tab on "Host" entity, displaying
> > > > information about all the storage devices*
> > > >  - Listing of different types of storage devices of a host
> > > > - Regular Disks and Partitions*
> > > > - LVM*
> > > > - Software RAID*
> > > >  - Various actions related to device configuration
> > > > - Partition disks*
> > > > - Format and mount disks / partitions*
> > > > - Create, resize and delete LVM Volume Groups (VGs)
> > > > - Create, resize, delete, format and mount LVM Logical
> > > > Volumes
> > > > (LVs)
> > > > - Create, resize, delete, partition, format and mount
> > > > Software
> > > > RAID devices
> > > >  - Edit properties of the devices
> > > >  - UI can be modeled similar to the system-config-lvm tool
> > > > 
> > > > The items marked with (*) in above list are urgently required
> > > > for
> > > > the Gluster feature, and will be developed first.
> > > > 
> > > > Comments / inputs welcome.
> > > 
> > > This seems like a big undertaking, and I would like to understand
> > > the
> > > complete use case of this. Is it intended to create the block
> > > storage
> > > devices on top of which a Gluster volume will be created?
> > 
> > Yes, but not only.
> > It could also be used to create the file system on top of which you
> > create a local storage domain (just an example, there are many
> > others, more listed below).
> > 
> > > 
> > > I must tell that we had a bad experience with exposing low level
> > > commands over the Vdsm API: A Vdsm storage domain is a VG with
> > > some
> > > metadata on top. We used to have two API calls for creating a
> > > storage
> > > domain: one to create the VG and one to add the metadata and make
> > > it
> > > an
> > > SD. But it is pretty hard to handle all the error cases remotely.
> > > It
> > > proved more useful to have one atomic command for the whole
> > > sequence.
> > > 
> > > I suspect that this would be the case here, too. I'm not sure if
> > > using
> > > Vdsm as an ssh-replacement for transporting lvm/md/fdisk commands
> > > is
> > > the
> > > best approach.
> > 
> > I agree, we should either provide added value or figure out a way
> > where we don't need to simply add a verb every time the underlying
> > APIs added something.
> > 
> > > 
> > > It may be better to have a single verb for creating Gluster
> > > volume
> > > out
> > > of block storage devices. Something like: "take these disks,
> > > partition
> > > them, build a raid, cover with a vg, carve some PVs and make each
> > > of
> > > them a Gluster volume".
> > > 
> > > Obviously, it is not simple to define a good language to describe
> > > a
> > > general architecture of a Gluster volume. But it would have to be
> > > done
> > > somewhere - if not in Vdsm then in Engine; and I suspect it would
> > > be
> > > better done on the local host, not beyond a fragile network link.
> > > 
> > > Please note that currently, Vdsm makes a lot of effort not to
> > > touch
> > > LVM
> > > metadata of existing VGs on regular "HSM" hosts. All such
> > > operations
> > > are
> > > done on the engine-selected "SPM" host. When implementing this,
> > > we
> > > must
> > > bear in mind these safeguards and think whether we want to break
> > > them.
> > 
> > I'm not sure I see how this is relevant, we allow creating a VG on
> > any host today and that isn't going to change...
> 
> We have one painful exception, that alone is no reason to add more.
> Note
> that currently, Engine uses the would-be-spm for vg creation. In the
> gluster use case, any host is expected to do this on async timing. It
> might be required, but it's not warm and fuzzy.

Engine does not use the SPM for VG creation; it uses whatever host the user 
wishes to use (there is a drop-down to choose / you can pass the host in the API).
The default host in the drop-down is the SPM, but it's optional.
In addition, in the next major version we'll have SDM and then there will be no 
good default.
This also means that the same host would be allowed to manipulate 

> 
> > 
> > In general, we know that we already need to support using a LUN
> > even if it has partitions on it (with force or something).
> > 
> > We know that we have requirements for more control over the VG that
> > we create e.g. support striping, control over max LV size (limited
> > by pv extent size today) etc.
> > 
> > We also know that users would like a way not only to use a local dir
> > for a storage domain but also create the directory + fs?

Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-18 Thread Dan Kenigsberg
On Wed, Apr 18, 2012 at 09:06:36AM -0400, Ayal Baron wrote:
> 
> 
> - Original Message -
> > On Tue, Apr 17, 2012 at 03:38:25PM +0530, Shireesh Anjal wrote:
> > > Hi all,
> > > 
> > > As part of adding Gluster support in ovirt, we need to introduce
> > > some Storage Device management capabilities (on the host). Since
> > > these are quite generic and not specific to Gluster as such, we
> > > think it might be useful to add it as a core vdsm and oVirt
> > > feature.
> > > At a high level, this involves following:
> > > 
> > >  - A "Storage Devices" sub-tab on "Host" entity, displaying
> > > information about all the storage devices*
> > >  - Listing of different types of storage devices of a host
> > > - Regular Disks and Partitions*
> > > - LVM*
> > > - Software RAID*
> > >  - Various actions related to device configuration
> > > - Partition disks*
> > > - Format and mount disks / partitions*
> > > - Create, resize and delete LVM Volume Groups (VGs)
> > > - Create, resize, delete, format and mount LVM Logical Volumes
> > > (LVs)
> > > - Create, resize, delete, partition, format and mount Software
> > > RAID devices
> > >  - Edit properties of the devices
> > >  - UI can be modeled similar to the system-config-lvm tool
> > > 
> > > The items marked with (*) in above list are urgently required for
> > > the Gluster feature, and will be developed first.
> > > 
> > > Comments / inputs welcome.
> > 
> > This seems like a big undertaking, and I would like to understand the
> > complete use case of this. Is it intended to create the block storage
> > devices on top of which a Gluster volume will be created?
> 
> Yes, but not only.
> It could also be used to create the file system on top of which you create a 
> local storage domain (just an example, there are many others, more listed 
> below).
> 
> > 
> > I must tell that we had a bad experience with exposing low level
> > commands over the Vdsm API: A Vdsm storage domain is a VG with some
> > metadata on top. We used to have two API calls for creating a storage
> > domain: one to create the VG and one to add the metadata and make it
> > an
> > SD. But it is pretty hard to handle all the error cases remotely. It
> > proved more useful to have one atomic command for the whole sequence.
> > 
> > I suspect that this would be the case here, too. I'm not sure if
> > using
> > Vdsm as an ssh-replacement for transporting lvm/md/fdisk commands is
> > the
> > best approach.
> 
> I agree, we should either provide added value or figure out a way where we 
> don't need to simply add a verb every time the underlying APIs added 
> something.
> 
> > 
> > It may be better to have a single verb for creating Gluster volume
> > out
> > of block storage devices. Something like: "take these disks,
> > partition
> > them, build a raid, cover with a vg, carve some PVs and make each of
> > them a Gluster volume".
> > 
> > Obviously, it is not simple to define a good language to describe a
> > general architecture of a Gluster volume. But it would have to be
> > done
> > somewhere - if not in Vdsm then in Engine; and I suspect it would be
> > better done on the local host, not beyond a fragile network link.
> > 
> > Please note that currently, Vdsm makes a lot of effort not to touch
> > LVM
> > metadata of existing VGs on regular "HSM" hosts. All such operations
> > are
> > done on the engine-selected "SPM" host. When implementing this, we
> > must
> > bear in mind these safeguards and think whether we want to break
> > them.
> 
> I'm not sure I see how this is relevant, we allow creating a VG on any host 
> today and that isn't going to change...

We have one painful exception, that alone is no reason to add more. Note
that currently, Engine uses the would-be-spm for vg creation. In the
gluster use case, any host is expected to do this on async timing. It
might be required, but it's not warm and fuzzy.

> 
> In general, we know that we already need to support using a LUN even if it 
> has partitions on it (with force or something).
> 
> We know that we have requirements for more control over the VG that we create 
> e.g. support striping, control over max LV size (limited by pv extent size 
> today) etc.
> 
> We also know that users would like a way not only to use a local dir for a 
> storage domain but also create the directory + fs?

These three examples are storage domain flavors.

> 
> We know that in the gLuster use case we would like the ability to setup samba 
> over the gluster volume as well as iSCSI probably.

Now I do not see the relevance. Configuring gluster and how it exposes
its volume is something other than preparing block storage for gluster
bricks.

> 
> So although I believe that when we create a gluster volume or an ovirt 
> storage domain then indeed we shouldn't need a lot of low level commands, 
> it would appear to me that not allowing for more control when needed is not 
> going to work and that there are enough use cases which do not involve a 
> gluster volume nor a storage domain to warrant making this generic.

Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-04-18 Thread Dan Kenigsberg
(Note that vdsm-devel is on fedorahosted.org. vdsm-de...@ovirt.org was
created by mistake, and I believe we agreed to drop it.)

On Tue, Apr 17, 2012 at 03:38:25PM +0530, Shireesh Anjal wrote:
> Hi all,
> 
> As part of adding Gluster support in ovirt, we need to introduce
> some Storage Device management capabilities (on the host). Since
> these are quite generic and not specific to Gluster as such, we
> think it might be useful to add it as a core vdsm and oVirt feature.
> At a high level, this involves following:
> 
>  - A "Storage Devices" sub-tab on "Host" entity, displaying
> information about all the storage devices*
>  - Listing of different types of storage devices of a host
> - Regular Disks and Partitions*
> - LVM*
> - Software RAID*
>  - Various actions related to device configuration
> - Partition disks*
> - Format and mount disks / partitions*
> - Create, resize and delete LVM Volume Groups (VGs)
> - Create, resize, delete, format and mount LVM Logical Volumes (LVs)
> - Create, resize, delete, partition, format and mount Software
> RAID devices
>  - Edit properties of the devices
>  - UI can be modeled similar to the system-config-lvm tool
> 
> The items marked with (*) in above list are urgently required for
> the Gluster feature, and will be developed first.
> 
> Comments / inputs welcome.

This seems like a big undertaking, and I would like to understand the
complete use case of this. Is it intended to create the block storage
devices on top of which a Gluster volume will be created?

I must tell that we had a bad experience with exposing low level
commands over the Vdsm API: A Vdsm storage domain is a VG with some
metadata on top. We used to have two API calls for creating a storage
domain: one to create the VG and one to add the metadata and make it an
SD. But it is pretty hard to handle all the error cases remotely. It
proved more useful to have one atomic command for the whole sequence.

I suspect that this would be the case here, too. I'm not sure if using
Vdsm as an ssh-replacement for transporting lvm/md/fdisk commands is the
best approach.
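[To illustrate the point about the two-call vs. atomic experience described
above; this is not real vdsm code, and the verbs and signatures are stand-ins.]

# Stand-in sketch: why one atomic verb was easier to get right than two
# remote calls.

def create_sd_two_calls(conn, sd_uuid, devices):
    # Caller must undo a half-created domain if the second remote call (or
    # the link itself) fails -- and that cleanup call can fail too.
    vg = conn.createVG(sd_uuid, devices)           # call 1: create the VG
    try:
        conn.createStorageDomain(sd_uuid, vg)      # call 2: add SD metadata
    except Exception:
        conn.removeVG(vg)                          # remote cleanup, best effort
        raise

def create_sd_atomic(conn, sd_uuid, devices):
    # vdsm performs both steps locally and rolls back on the host on error,
    # so the caller sees either a usable domain or a clean failure.
    conn.createStorageDomain(sd_uuid, devices)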

It may be better to have a single verb for creating a Gluster volume out
of block storage devices. Something like: "take these disks, partition
them, build a raid, cover with a vg, carve some PVs and make each of
them a Gluster volume".

Obviously, it is not simple to define a good language to describe a
general architecture of a Gluster volume. But it would have to be done
somewhere - if not in Vdsm then in Engine; and I suspect it would be
better done on the local host, not beyond a fragile network link.

Please note that currently, Vdsm makes a lot of effort not to touch LVM
metadata of existing VGs on regular "HSM" hosts. All such operations are
done on the engine-selected "SPM" host. When implementing this, we must
bear in mind these safeguards and think whether we want to break them.

Regards,
Dan.