Re: [openstack-dev] [cinder] Why not allow deleting volume from a CG ?

2015-02-09 Thread Nilesh P Bhosale
Adding an ability to add/remove existing volumes to/from a CG looks fine.
But it does not help the use case where one would want to directly delete
a volume from a CG.
Why do we force the user to first remove a volume from the CG and then
delete it?
As a CG goes along with replication, and backends create a separate pool
per CG, removing a volume from a CG just to be able to delete it in the
next step may be an unnecessarily expensive operation.

I think we can allow deleting a volume directly from a CG with something
like a '--force' option, so that the user consciously makes that decision.

In fact, I think whatever decision the user takes, even deleting a normal
volume, is treated as a conscious decision.
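
For illustration, the existing check in cinder/volume/api.py could be
relaxed along these lines (just a sketch, assuming a new 'force' flag were
plumbed through the delete API; this is not existing cinder code):

    if volume['consistencygroup_id'] is not None and not force:
        # Without an explicit force, keep today's behavior and refuse
        # to delete a volume that is still part of a CG.
        msg = _("Volume cannot be deleted while in a consistency "
                "group; use --force to delete it anyway.")
        raise exception.InvalidVolume(reason=msg)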

Thanks,
Nilesh



From:   yang, xing xing.y...@emc.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   02/07/2015 01:54 AM
Subject:Re: [openstack-dev] [cinder] Why not allow deleting volume 
from a CG ?



As Mike said, allowing deletion of a single volume from a CG is error
prone.  A user could delete a single volume without knowing that it is
part of a CG.  The new Modify CG feature for Kilo allows you to remove a
volume from a CG, and you can then delete it as a separate operation.
When a user removes a volume from a CG, at least he/she is making a
conscious decision, knowing that the volume is currently part of the CG.

Thanks,
Xing


-Original Message-
From: Mike Perez [mailto:thin...@gmail.com] 
Sent: Friday, February 06, 2015 1:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Why not allow deleting volume from a 
CG ?

On 15:51 Fri 06 Feb, Nilesh P Bhosale wrote:
<snip>
> I understand this is as per design, but curious to understand the logic
> behind this.
<snip>
> Why not allow deletion of volumes from the CG, at least when there are
> no dependent snapshots?

From the review [1], this is because allowing a volume that's part of a
consistency group to be deleted is error prone for both the user and the
storage backend. It assumes the storage backend will register that the
volume is no longer part of the consistency group. It also assumes the
user is keeping track of what's part of a consistency group.

> With the current implementation, the only way to delete the volume is
> to delete the complete CG, deleting all the volumes in it, which I feel
> is not right.

The plan in Kilo is to allow adding/removing volumes from a consistency
group [2][3]. The user now has to explicitly remove the volume from a
consistency group, which in my opinion is better than doing it implicitly
with delete.
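
With that in place, the workflow would look roughly like this
(hypothetical CLI syntax; the exact command names depend on how [2][3]
land):

$ cinder consisgroup-update myCG --remove-volumes vol1
$ cinder delete vol1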

I'm open to rediscussing this issue with vendors and making sure things in
the backend get cleaned up properly, but I think this solution helps
prevent the issue for both users and backends.

[1] - https://review.openstack.org/#/c/149095/
[2] - 
https://blueprints.launchpad.net/cinder/+spec/consistency-groups-kilo-update

[3] - https://review.openstack.org/#/c/144561/

--
Mike Perez


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Manila pythonclient compatibility with Juno release

2015-02-08 Thread Nilesh P Bhosale
Hi All,

We want to run python-manilaclient along with the clients for other
OpenStack services coming from the OpenStack Juno release.
But since https://github.com/openstack/python-manilaclient has only the
master branch and no branch for Juno, deploying it on the same node breaks
the other clients due to conflicting dependencies (specified in
requirements.txt).
So, can someone point me to the Git revision/tag that can be used as a
Juno release tag?
I went through the Git log, but I am not sure which revision lines up with
the Juno release schedule (code freeze/release date).
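
For reference, once the right revision is known, it could be pinned along
these lines (the revision below is a placeholder; the actual SHA/tag is
exactly what I am asking for):

$ pip install git+https://github.com/openstack/python-manilaclient@<juno-revision>#egg=python-manilaclient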

Need help in this regard.

Thanks,
Nilesh Bhosale
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Why not allow deleting volume from a CG ?

2015-02-06 Thread Nilesh P Bhosale
Hi All,

I see the following error while deleting a volume that is part of a
consistency group:
$ [admin]cinder delete vol1
Delete for volume vol1 failed: Bad Request (HTTP 400) (Request-ID: 
req-7c958443-edb2-434f-82a2-4254ab357e99)
ERROR: Unable to delete any of specified volumes.

And when I tried to debug this, I found the following at
https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L310:

    if volume['consistencygroup_id'] is not None:
        msg = _("Volume cannot be deleted while in a consistency "
                "group.")
        LOG.info(_LI('Unable to delete volume: %s, '
                     'volume is currently part of a '
                     'consistency group.'), volume['id'])
        raise exception.InvalidVolume(reason=msg)

I understand this is as per design, but I am curious to understand the
logic behind this.
Why not allow deletion of volumes from the CG, at least when there are no
dependent snapshots?
With the current implementation, the only way to delete the volume is to
delete the complete CG, deleting all the volumes in it, which I feel is
not right.

Am I missing anything? Please help me understand.

Thanks,
Nilesh Bhosale

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] glusterfs: Looking for reviews for my patch

2014-06-01 Thread Nilesh P Bhosale
Hi Deepak,

With your proposed change, as per my understanding, whenever the cinder
service is restarted the glusterfs volume driver will ensure that the
gluster mounts are unmounted and remounted, so that any new mount options
added to the shares config file take effect after the service restart.

I appreciate this thought, but at the same time I have a basic question.
Although the cinder driver sits in the control path, actions like
unmounting and remounting a filesystem affect storage that may be in use
by nova instances (some of the volumes on this mount might be attached to
live VMs, which would be using them for their IO). Are you sure your
changes won't affect any live IO from the nova instances?
I just want to make sure that your change does not interrupt IO from the 
active VMs on the volumes attached to them, which come from the gluster 
mounts.
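
For example, I would expect some guard along these lines before the driver
remounts a share (purely a hypothetical sketch, not code from your patch):

    import subprocess

    def mount_in_use(mount_point):
        # 'fuser -m' exits 0 when some process (e.g. a qemu process
        # backing a live VM) is still using the mounted filesystem.
        return subprocess.call(['fuser', '-m', mount_point]) == 0

    # Only unmount/remount shares that no process is actively using.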

Please help clarify.

Thanks,
Nilesh



From:   Deepak Shetty dpkshe...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   05/27/2014 11:11 AM
Subject:[openstack-dev] [Cinder] glusterfs: Looking for reviews 
for my patch



I am looking for reviews of my patch so that I can close on this soon:
https://review.openstack.org/#/c/86888/

Appreciate your time.

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Efficient image cloning implementation in NetApp nfs drivers // make this part of base NFS driver

2014-04-14 Thread Nilesh P Bhosale
Hi All,

I was going through the following blueprint, which NetApp proposed and
implemented in its driver (NetAppNFSDriver -
cinder/volume/drivers/netapp/nfs.py) a while back (change):
https://blueprints.launchpad.net/cinder/+spec/netapp-cinder-nfs-image-cloning

It looks like quite an interesting and valuable feature for end customers.
Can we make it part of the base NfsDriver (cinder/volume/drivers/nfs.py),
so that customers using the base NFS driver can benefit, and other drivers
inheriting from this base NFS driver (e.g. IBMNAS_NFSDriver,
NexentaNfsDriver) can benefit as well?
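
Roughly, I imagine the base driver implementing the standard clone_image()
driver hook along these lines (just a sketch of the idea; the class and
method details here are illustrative, not the actual NetApp change):

    class NfsDriver(RemoteFsDriver):
        def clone_image(self, volume, image_location, image_id,
                        image_meta):
            # Sketch: keep a local cache of glance images on the NFS
            # share and create new volumes as copy-on-write copies of
            # the cached image file, instead of re-downloading the image
            # from glance for every new volume.  Backends with native
            # snapshot/clone support can override this to offload the
            # copy entirely.
            ...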

Please let me know your opinion.
I can start a blueprint for the Juno release.

Thanks,
Nilesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev