Hello,

With Kilo I noticed that, with the LVM backend, these two parameters are only 
honored when they are placed in the backend-specific LVM section. Like this:

[lvm-local]
iscsi_helper=lioadm
volume_group=cinder-volumes-local
iscsi_ip_address=X.X.X.X
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
iscsi_protocol=iscsi
volume_backend_name=lvm-local
!
!
!
volume_clear=zero
volume_clear_size=300

Otherwise they are ignored, and deleting a volume was taking a long time. 
I do not use rbd, but I suspect the options have to go into the backend section 
the same way there as well.
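
For reference, the backend section is only read when its name is listed in 
enabled_backends, so the [DEFAULT] side looks something like this (assuming a 
single backend called lvm-local, as above):

[DEFAULT]
# only sections listed here are loaded as backends; per-backend options
# such as volume_clear left in [DEFAULT] were not picked up
enabled_backends=lvm-local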

Thank you

Serguei

Serguei Bezverkhi,
TECHNICAL LEADER, SERVICES
Global SP Services
[email protected]
Phone: +1 416 306 7312
Mobile: +1 514 234 7374

CCIE (R&S,SP,Sec) - #9527

Cisco.com <http://www.cisco.com/>






From: David Wahlstrom [mailto:[email protected]]
Sent: Wednesday, November 04, 2015 12:53 PM
To: OpenStack Operators <[email protected]>
Subject: Re: [Openstack-operators] cinder volume_clear=zero makes sense with 
rbd ?

Looking at the code in master (and ignoring tests), the only drivers I see 
referencing volume_clear are the LVM and block-device drivers:

$ git grep -l volume_clear
driver.py
drivers/block_device.py
drivers/lvm.py
utils.py

So other drivers (netapp, smb, gluster, and of course Ceph/RBD) simply ignore 
this option (or more accurately, don't take any action).
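
As a quick sanity check (run from the same cinder/volume directory as the grep 
above), searching the RBD driver directly comes back empty:

$ git grep volume_clear -- drivers/rbd.py
$ echo $?
1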


On Wed, Nov 4, 2015 at 8:52 AM, Chris Friesen 
<[email protected]> wrote:
On 11/04/2015 08:46 AM, Saverio Proto wrote:
Hello there,

I am using cinder with rbd, and most volumes are created from glance
images that live on rbd as well.
Thanks to Ceph's copy-on-write cloning, these volumes are CoW and only
blocks that differ from the original parent image are actually written.
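
For example, a cloned volume shows its parent image (the pool names below are
just the usual cinder/glance defaults, yours may differ):

$ rbd info volumes/volume-<uuid> | grep parent
        parent: images/<glance-image-id>@snap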

Today I am debugging why deleting cinder volumes has become very slow
in my production system. The problem seems to happen only at scale;
I can't reproduce it on my small test cluster.

I read through the cinder.conf reference, and I found this default value
=>   volume_clear=zero.

Is this parameter evaluated when cinder works with rbd?

I don't think that's actually used with rbd, since, as you say, Ceph uses CoW 
internally.

I believe it's also ignored if you use LVM with thin provisioning.
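
If you want thin provisioning, that's the lvm_type option in the backend 
section (it defaults to "default", i.e. thick LVs). Reusing the [lvm-local] 
example from earlier in the thread, something like:

[lvm-local]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group=cinder-volumes-local
lvm_type=thin

Since unmapped blocks in a thin pool read back as zeros, there is nothing to 
wipe on delete.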

Chris





--
David W.
Unix, because every barista in Seattle has an MCSE.
_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
