Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-24 Thread Duncan Thomas
On 17 April 2014 18:01, Deepak Shetty dpkshe...@gmail.com wrote:

 On Fri, Apr 11, 2014 at 7:29 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 The scenario I *don't* want to see is:
 1) Admin imports a few hundred volumes into the cloud
 2) Some significant time goes by
 3) Cloud is being decommissioned / the storage is being transferred /
 etc., so the admin runs unmanage on all cinder volumes on that storage
 4) The volumes get renamed or not, based on whether they happened to
 come into cinder via manage or volume create

 *That* I would consider broken.

 What exactly is broken here? Sorry, but I didn't get it!

What is broken is that volumes that came into cinder via manage and
volumes that came into cinder via volume create behave differently
when unmanaged.

Since manage and unmanage would generally be run with a very long
separation in time, there is no reason for the admin to expect some
volumes to be renamed and some not when they do the unmanage.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-17 Thread Deepak Shetty
On Fri, Apr 11, 2014 at 7:29 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 On 11 April 2014 14:21, Deepak Shetty dpkshe...@gmail.com wrote:
  My argument was mostly from the perspective that unmanage should do its
  best to revert the volume to its original state (mainly the name).

  Like you said, once it's given to cinder, it's not an external volume
  anymore; similarly, once it's taken out of cinder, it's an external
  volume again, and it's logical for the admin/user to expect the volume
  to have its original name.

  Thinking of scenarios... (I may be wrong here)

  An admin submits a few storage array LUNs (for which he has set up a
  mirroring relationship in the storage array) as volumes to cinder using
  manage_existing, uses the volumes as part of OpenStack, and there
  are 2 cases here:
  1) cinder renames the volume, which causes his backend mirroring
  relationship to be broken
  2) He disconnects the mirror relationship while submitting the volume to
  cinder, and when he unmanages it, expects the mirror to work

  Will this break if cinder renames the volume?


 Both of those are unreasonable expectations, and I would entirely
 expect both of them to break. Once you give cinder a volume, you no
 longer have *any* control over what happens to that volume. Mirroring
 relationships, volume names, etc. *all* become completely under
 cinder's control. Expecting *anything* to go back to the way it was
 before cinder got hold of the volume is completely wrong.


While I agree with your point about cinder taking full control of its
volumes, I still feel that providing the ability to use the backend array
features along with manage_existing should be welcomed by all. Especially
given the price of these arrays, it's good to design things in OpenStack
that aid in using the array features if the setup/env/admin wishes to do
so, so that we fully exploit the investment made in purchasing the
storage arrays :)




 The scenario I *don't* want to see is:
 1) Admin imports a few hundred volumes into the cloud
 2) Some significant time goes by
 3) Cloud is being decommissioned / the storage is being transferred /
 etc., so the admin runs unmanage on all cinder volumes on that storage
 4) The volumes get renamed or not, based on whether they happened to
 come into cinder via manage or volume create

 *That* I would consider broken.


What exactly is broken here? Sorry, but I didn't get it!

thanx,
deepak


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-11 Thread Deepak Shetty
My argument was mostly from the perspective that unmanage should do its
best to revert the volume to its original state (mainly the name).

Like you said, once it's given to cinder, it's not an external volume
anymore; similarly, once it's taken out of cinder, it's an external
volume again, and it's logical for the admin/user to expect the volume
to have its original name.

Thinking of scenarios... (I may be wrong here)

An admin submits a few storage array LUNs (for which he has set up a
mirroring relationship in the storage array) as volumes to cinder using
manage_existing, uses the volumes as part of OpenStack, and there
are 2 cases here:
1) cinder renames the volume, which causes his backend mirroring
relationship to be broken
2) He disconnects the mirror relationship while submitting the volume to
cinder, and when he unmanages it, expects the mirror to work

Will this break if cinder renames the volume?



On Thu, Apr 10, 2014 at 10:50 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 On 10 April 2014 09:02, Deepak Shetty dpkshe...@gmail.com wrote:

  Ok, agreed. But then when the admin unmanages it, we should rename it
  back to the name that it originally had before it was managed by
  cinder. At least that's what the admin can hope to expect; since he is
  undoing the manage_existing step, he expects his file name to be
  present as it was before he managed it with cinder.

 I'd question this assertion. Once you've given a volume to cinder, it
 is not an external volume any more, it is cinder's. Unmanage of any
 volume should be consistent, regardless of whether it got into cinder
 via a volume create or a 'cinder manage' command. It is far worse to
 have unmanage be inconsistent at some point in the distant future than
 it is for the storage admin to do some extra work in the short term if
 he is experimenting with managing / unmanaging volumes.

 As was discussed at the summit, manage / unmanage is *not* designed to
 be a routine operation. If you're unmanaging volumes regularly then
 you're not using the interface as intended, and we need to discuss
 your use-case, not bake weird and inconsistent behaviour into the
 current interface.

 So, under what circumstances do you expect that the current behaviour
 causes a significant problem?



Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-11 Thread Duncan Thomas
On 11 April 2014 14:21, Deepak Shetty dpkshe...@gmail.com wrote:
 My argument was mostly from the perspective that unmanage should do its
 best to revert the volume to its original state (mainly the name).

 Like you said, once it's given to cinder, it's not an external volume
 anymore; similarly, once it's taken out of cinder, it's an external
 volume again, and it's logical for the admin/user to expect the volume
 to have its original name.

 Thinking of scenarios... (I may be wrong here)

 An admin submits a few storage array LUNs (for which he has set up a
 mirroring relationship in the storage array) as volumes to cinder using
 manage_existing, uses the volumes as part of OpenStack, and there
 are 2 cases here:
 1) cinder renames the volume, which causes his backend mirroring
 relationship to be broken
 2) He disconnects the mirror relationship while submitting the volume to
 cinder, and when he unmanages it, expects the mirror to work

 Will this break if cinder renames the volume?


Both of those are unreasonable expectations, and I would entirely
expect both of them to break. Once you give cinder a volume, you no
longer have *any* control over what happens to that volume. Mirroring
relationships, volume names, etc. *all* become completely under
cinder's control. Expecting *anything* to go back to the way it was
before cinder got hold of the volume is completely wrong.

The scenario I *don't* want to see is:
1) Admin imports a few hundred volumes into the cloud
2) Some significant time goes by
3) Cloud is being decommissioned / the storage is being transferred /
etc., so the admin runs unmanage on all cinder volumes on that storage
4) The volumes get renamed or not, based on whether they happened to
come into cinder via manage or volume create

*That* I would consider broken.

I'll say it again, to make my position totally clear - once you've run
cinder manage, you can have no further expectations on a volume.
Cinder might rename it, migrate it, compress it, change the on-disk
format of it, etc. Cinder will not, and should not, remember
*anything* about the volume before it was managed. You give the volume
to cinder, and it becomes just another cinder volume, nothing special
about it at all.

Anything else is *not* covered by the manage / unmanage commands, and
needs to be discussed, with clear, reasoned use-cases. We do not want
people using this interface with any other expectations, because even
things that happen to work now might get changed in the future,
without warning.



Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-10 Thread Deepak Shetty
On Wed, Apr 9, 2014 at 9:39 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 On 9 April 2014 08:35, Deepak Shetty dpkshe...@gmail.com wrote:

  Alternatively, does this mean we need to make name_id a generic field
  (not an ID) and then use something like uuidutils.is_uuid_like() to
  determine if it's a UUID or non-UUID, and then the backend will map it
  accordingly?

 Definitely not, overloading fields is horrible. If we are going to do
 a mapping, create a new, explicit field for it.

  Lastly, I said the storage admin will lose track of it because he would
  have named it my_vol, and when he asks cinder to manage it using
  my_cinder_vol it's not expected that you would rename the volume's name
  on the backend :)
  I mean, it would be good if we could implement manage_existing without
  renaming, as then it would seem less disruptive :)

 I think this leads to a bad kind of thinking. Once you've given a
 volume to cinder, the storage admin shouldn't be /trying/ to keep
 track of it. It is a cinder volume now, and cinder can and should do
 whatever it feels appropriate with that volume (rename it, migrate it
 to a new backend, etc etc etc)


Ok, agreed. But then when the admin unmanages it, we should rename it
back to the name that it originally had before it was managed by cinder.
At least that's what the admin can hope to expect; since he is undoing
the manage_existing step, he expects his file name to be present as it
was before he managed it with cinder.

We can always store the original name of the volume in a new field in
admin_metadata, say managed_name, and let cinder do whatever it wants
(incl. rename) when it manages it.

There are 2 advantages to this:
1) The admin can always look at the admin_metadata to know which original
name maps to which cinder name. This also helps to figure out, of all the
volumes managed by cinder, which were the ones that actually got in
through manage_existing, i.e. were _not_ actually created by cinder in
the first place.

2) During unmanage, use the managed_name to rename the file back to its
original name.
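A minimal sketch of the bookkeeping proposed above, assuming a hypothetical rename-capable backend (the helper names, the FakeBackend class, and the 'source-name' key are illustrative, not the actual Cinder driver interface):

```python
# Sketch: remember the backend object's original name in admin_metadata
# when managing, and restore it on unmanage. FakeBackend is a stand-in
# for a storage backend that maps names to objects.

def manage_existing(volume, existing_ref, backend):
    # Record the pre-manage name before cinder renames the object.
    volume['admin_metadata']['managed_name'] = existing_ref['source-name']
    backend.rename(existing_ref['source-name'], 'volume-%s' % volume['id'])

def unmanage(volume, backend):
    original = volume['admin_metadata'].get('managed_name')
    if original:
        # Volume came in via manage_existing: restore its original name.
        backend.rename('volume-%s' % volume['id'], original)
    # Volumes created by cinder simply keep their cinder name.

class FakeBackend:
    """Stand-in backend: a dict mapping object names to objects."""
    def __init__(self, names):
        self.objects = {n: object() for n in names}
    def rename(self, old, new):
        self.objects[new] = self.objects.pop(old)
```

With this, managing 'my_vol' renames it to the cinder template name, and unmanaging it restores 'my_vol' on the backend.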

thanx,
deepak





Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-10 Thread Asselin, Ramy
I agree 'unmanage' should try to 'undo' as much as possible.
In this way, 'manage' a second time will also work with the exact same
command arguments as it did the first time.
Ramy

From: Deepak Shetty [mailto:dpkshe...@gmail.com]
Sent: Thursday, April 10, 2014 1:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage





Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-10 Thread Duncan Thomas
On 10 April 2014 09:02, Deepak Shetty dpkshe...@gmail.com wrote:

 Ok, agreed. But then when the admin unmanages it, we should rename it
 back to the name that it originally had before it was managed by
 cinder. At least that's what the admin can hope to expect; since he is
 undoing the manage_existing step, he expects his file name to be
 present as it was before he managed it with cinder.

I'd question this assertion. Once you've given a volume to cinder, it
is not an external volume any more, it is cinder's. Unmanage of any
volume should be consistent, regardless of whether it got into cinder
via a volume create or a 'cinder manage' command. It is far worse to
have unmanage be inconsistent at some point in the distant future than
it is for the storage admin to do some extra work in the short term if
he is experimenting with managing / unmanaging volumes.

As was discussed at the summit, manage / unmanage is *not* designed to
be a routine operation. If you're unmanaging volumes regularly then
you're not using the interface as intended, and we need to discuss
your use-case, not bake weird and inconsistent behaviour into the
current interface.

So, under what circumstances do you expect that the current behaviour
causes a significant problem?



Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-09 Thread Avishay Traeger
On Wed, Apr 9, 2014 at 8:35 AM, Deepak Shetty dpkshe...@gmail.com wrote:




 On Tue, Apr 8, 2014 at 6:24 PM, Avishay Traeger avis...@stratoscale.com wrote:

 On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi List,
 I had a few Qs on the implementation of the manage_existing and unmanage
 API extensions.

 1) For the LVM case, it renames the lv. Isn't it better to use name_id
 (the one used during cinder migrate to keep the id the same for a
 different backend name/id) to map the cinder name/id to the backend
 name/id, and thus avoid renaming the backend storage? Renaming isn't
 good since it changes the original name of the storage object, and hence
 the storage admin may lose track. The Storwize driver uses the UID and
 changes vdisk_name on the backend array, which isn't good either. Is
 renaming a must, and if yes, why?


  'name_id' is an ID, like c8b3d8e2-2410-4362-b24b-548a13fa850b.
  In migration, both the original and new volumes use the same template
  for volume names, just with a different ID, so name_id works well for
  that. When importing a volume that wasn't created by Cinder, chances
  are it won't conform to this template, and so name_id won't work (i.e.,
  I can call the volume 'my_very_important_db_volume', and name_id can't
  help with that). When importing, the admin should give the volume a
  proper name and description, and won't lose track of it - it is now
  being managed by Cinder.


 Avishay,
 thanks for your reply.. it did help. Just one more Q though...

  (i.e., I can call the volume 'my_very_important_db_volume', and
  name_id can't help with that).
 This is the name of the volume, but isn't it common for most arrays to
 provide a name and an ID (which is again a UUID) for a volume on the
 backend? So name_id can still point to the UID, which has the name
 'my_very_important_db_volume'.
 In fact, in storwize you are using vdisk_id itself and changing the
 vdisk_name to match what the user gave.. and vdisk_id is a UUID and
 matches the name_id format.


Not exactly, it's a number (like '5'), not a UUID like
c8b3d8e2-2410-4362-b24b-548a13fa850b


 Alternatively, does this mean we need to make name_id a generic field
 (not an ID) and then use something like uuidutils.is_uuid_like() to
 determine if it's a UUID or non-UUID, and then the backend will map it
 accordingly?

 Lastly, I said the storage admin will lose track of it because he would
 have named it my_vol, and when he asks cinder to manage it using
 my_cinder_vol it's not expected that you would rename the volume's name
 on the backend :)
 I mean, it would be good if we could implement manage_existing without
 renaming, as then it would seem less disruptive :)


I think there are a few trade-offs here - making it less disruptive in
this sense makes it more disruptive to:
1. Managing the storage over its lifetime. If we assume that the admin
will stick with Cinder for managing their volumes, and if they need to
find the volume on the storage, it should be done uniformly (i.e., go to
the backend and find the volume named 'volume-%s' % name_id).
2. The code, where a change of this kind could make things messy.
Basically, the rename approach has a little bit of complexity overhead
when you do manage_existing, but from then on it's just like any other
volume. Otherwise, it's always a special case in different code paths,
which could be tricky.

If you still feel that rename is wrong and that there is a better approach,
I encourage you to try, and post code if it works.  I don't mind being
proved wrong. :)

Thanks,
Avishay


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-09 Thread Geraint North
I personally don't like the rename approach (and I implemented it!). 
However, as Avishay says, we don't have that many options.

One thing that we could do is start to use the admin_metadata associated
with a volume to store a reference to the volume other than the name
(which is the UUID).  However, this requires that individual drivers
change to support it - e.g. the Storwize driver could choose to store
the vdisk ID/UUID in admin_metadata, and use it whenever it needed to
perform an operation on a volume.  Similarly, the LVM driver could do the
same, and use that in preference to assuming that the LV was named from
volume['name'] if it existed, but these are going to be fairly
significant changes.
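As a rough illustration of this suggestion (the admin_metadata key, helper name, and fallback behaviour below are illustrative assumptions, not the actual Storwize or LVM driver code), a driver could resolve the backend object like this:

```python
# Sketch: resolve the identifier a driver uses when talking to the
# backend, preferring a reference stored in admin_metadata at manage
# time, and falling back to the usual name derived from the volume UUID.

def backend_ref(volume):
    """Return the backend-side identifier for this cinder volume."""
    ref = volume.get('admin_metadata', {}).get('backend_ref')
    if ref is not None:
        return ref            # e.g. a vdisk ID/UID stored at manage time
    return volume['name']     # default: 'volume-<uuid>' style name
```

A volume created by cinder has no stored reference and resolves to its normal name; an imported one resolves to whatever the driver recorded.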

Thanks,
Geraint.

Geraint North
Storage Virtualization Architect and Master Inventor, Cloud Systems 
Software.
IBM Manchester Lab, UK.




Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-09 Thread Duncan Thomas
On 9 April 2014 08:35, Deepak Shetty dpkshe...@gmail.com wrote:

 Alternatively, does this mean we need to make name_id a generic field
 (not an ID) and then use something like uuidutils.is_uuid_like() to
 determine if it's a UUID or non-UUID, and then the backend will map it
 accordingly?

 Lastly, I said the storage admin will lose track of it because he would
 have named it my_vol, and when he asks cinder to manage it using
 my_cinder_vol it's not expected that you would rename the volume's name
 on the backend :)
 I mean, it would be good if we could implement manage_existing without
 renaming, as then it would seem less disruptive :)

I think this leads to a bad kind of thinking. Once you've given a
volume to cinder, the storage admin shouldn't be /trying/ to keep
track of it. It is a cinder volume now, and cinder can and should do
whatever it feels appropriate with that volume (rename it, migrate it
to a new backend, etc etc etc)



[openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-08 Thread Deepak Shetty
Hi List,
I had a few Qs on the implementation of the manage_existing and unmanage
API extensions.

1) For the LVM case, it renames the lv. Isn't it better to use name_id
(the one used during cinder migrate to keep the id the same for a
different backend name/id) to map the cinder name/id to the backend
name/id, and thus avoid renaming the backend storage? Renaming isn't
good since it changes the original name of the storage object, and hence
the storage admin may lose track. The Storwize driver uses the UID and
changes vdisk_name on the backend array, which isn't good either. Is
renaming a must, and if yes, why?

2) How about providing a force-rename option? If force = yes, use
rename; otherwise, name_id?

3) During unmanage, it would be good if we could revert the name back
(in case it was renamed as part of manage), so that we leave the storage
object as it was before it was managed by cinder.
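The two alternatives in question 1 can be contrasted with a small sketch (illustrative only; the real LVM driver builds and runs its commands differently, and the 'backend_name' field is a hypothetical explicit mapping, not an existing Cinder column):

```python
# (a) Rename the backend object to cinder's naming template, roughly:
#     lvrename <vg> <existing_lv_name> volume-<cinder_id>
def manage_by_rename(vg, existing_name, cinder_id):
    return ['lvrename', vg, existing_name, 'volume-%s' % cinder_id]

# (b) Leave the LV alone and record a mapping from the cinder volume to
#     the existing backend name (a hypothetical explicit field):
def manage_by_mapping(volume, existing_name):
    volume['backend_name'] = existing_name   # no rename happens
    return volume
```

Approach (a) is what the thread says the LVM driver actually does; approach (b) is the name_id-style mapping being asked about.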


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-08 Thread Avishay Traeger
On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi List,
 I had a few Qs on the implementation of the manage_existing and unmanage
 API extensions.

 1) For the LVM case, it renames the lv. Isn't it better to use name_id
 (the one used during cinder migrate to keep the id the same for a
 different backend name/id) to map the cinder name/id to the backend
 name/id, and thus avoid renaming the backend storage? Renaming isn't
 good since it changes the original name of the storage object, and hence
 the storage admin may lose track. The Storwize driver uses the UID and
 changes vdisk_name on the backend array, which isn't good either. Is
 renaming a must, and if yes, why?


'name_id' is an ID, like c8b3d8e2-2410-4362-b24b-548a13fa850b.
In migration, both the original and new volumes use the same template for
volume names, just with a different ID, so name_id works well for that.
When importing a volume that wasn't created by Cinder, chances are it
won't conform to this template, and so name_id won't work (i.e., I can
call the volume 'my_very_important_db_volume', and name_id can't help
with that). When importing, the admin should give the volume a proper
name and description, and won't lose track of it - it is now being
managed by Cinder.
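The template point can be made concrete. Roughly, cinder derives a volume's backend name from its id or name_id (the default template is of the form 'volume-%s'; the helper below is a sketch of that behaviour, not the actual model code):

```python
# Sketch of how a cinder volume's backend name is derived: name_id, when
# set (e.g. after migration), substitutes for the volume's own id in the
# same 'volume-%s' template.
def volume_name(volume_id, name_id=None):
    return 'volume-%s' % (name_id or volume_id)

# Migration works because both the old and new backend names fit this
# template; an imported LV named 'my_very_important_db_volume' fits no
# possible value of name_id, which is why name_id can't represent it.
```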


 2) How about providing a force-rename option? If force = yes, use
 rename; otherwise, name_id?


As I mentioned, name_id won't work.  You would need some DB changes to
accept ANY volume name, and it can get messy.


 3) During unmanage, it would be good if we could revert the name back
 (in case it was renamed as part of manage), so that we leave the storage
 object as it was before it was managed by cinder.


I don't see any compelling reason to do this.

Thanks,
Avishay


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-08 Thread Deepak Shetty
On Tue, Apr 8, 2014 at 6:24 PM, Avishay Traeger avis...@stratoscale.com wrote:

 On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi List,
 I had a few Qs on the implementation of the manage_existing and unmanage
 API extensions.

 1) For the LVM case, it renames the lv. Isn't it better to use name_id
 (the one used during cinder migrate to keep the id the same for a
 different backend name/id) to map the cinder name/id to the backend
 name/id, and thus avoid renaming the backend storage? Renaming isn't
 good since it changes the original name of the storage object, and hence
 the storage admin may lose track. The Storwize driver uses the UID and
 changes vdisk_name on the backend array, which isn't good either. Is
 renaming a must, and if yes, why?


 'name_id' is an ID, like c8b3d8e2-2410-4362-b24b-548a13fa850b.
 In migration, both the original and new volumes use the same template
 for volume names, just with a different ID, so name_id works well for
 that. When importing a volume that wasn't created by Cinder, chances
 are it won't conform to this template, and so name_id won't work (i.e.,
 I can call the volume 'my_very_important_db_volume', and name_id can't
 help with that). When importing, the admin should give the volume a
 proper name and description, and won't lose track of it - it is now
 being managed by Cinder.


Avishay,
thanks for your reply.. it did help. Just one more Q though...

 (i.e., I can call the volume 'my_very_important_db_volume', and name_id
 can't help with that).
This is the name of the volume, but isn't it common for most arrays to
provide a name and an ID (which is again a UUID) for a volume on the
backend? So name_id can still point to the UID, which has the name
'my_very_important_db_volume'.
In fact, in storwize you are using vdisk_id itself and changing the
vdisk_name to match what the user gave.. and vdisk_id is a UUID and
matches the name_id format.

Alternatively, does this mean we need to make name_id a generic field
(not an ID) and then use something like uuidutils.is_uuid_like() to
determine if it's a UUID or non-UUID, and then the backend will map it
accordingly?

Lastly, I said the storage admin will lose track of it because he would
have named it my_vol, and when he asks cinder to manage it using
my_cinder_vol it's not expected that you would rename the volume's name
on the backend :)
I mean, it would be good if we could implement manage_existing without
renaming, as then it would seem less disruptive :)
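The uuidutils.is_uuid_like() check mentioned above can be approximated with only the standard library (this is a sketch of the idea, not the oslo implementation: parse the string as a UUID and compare its canonical form):

```python
import uuid

# Approximation of a uuid-like check: a string is "uuid-like" here if it
# already is the canonical lowercase, dash-separated UUID form.
def is_uuid_like(val):
    try:
        return str(uuid.UUID(val)) == val.lower()
    except (TypeError, ValueError, AttributeError):
        return False
```

A backend could use such a check to decide whether a stored identifier is a cinder-style UUID or an arbitrary backend name.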

thanx,
deepak