So here is my best guess: could it be that I am missing this patch?
https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53
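
As a side note, the "13" in the "rbd default features = 13" line quoted
below is just the sum of the feature bits I want: layering (1) +
exclusive-lock (4) + object-map (8). A tiny snippet with the python rbd
bindings to double-check the arithmetic (constant names as in the hammer
bindings, as far as I know):

    import rbd

    # 13 = layering (1) + exclusive-lock (4) + object-map (8)
    features = (rbd.RBD_FEATURE_LAYERING
                | rbd.RBD_FEATURE_EXCLUSIVE_LOCK
                | rbd.RBD_FEATURE_OBJECT_MAP)
    print('rbd default features = %d' % features)  # prints 13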
proto@controller:~$ apt-cache policy python-cinder
python-cinder:
  Installed: 1:2014.2.3-0ubuntu1.1~cloud0
  Candidate: 1:2014.2.3-0ubuntu1.1~cloud0

Thanks

Saverio

2015-11-12 16:25 GMT+01:00 Saverio Proto <[email protected]>:
> Hello there,
>
> I am investigating why my cinder is so slow at deleting volumes.
>
> You might remember my email from a few days ago with the subject:
> "cinder volume_clear=zero makes sense with rbd ?"
>
> It turns out that volume_clear has nothing to do with the rbd driver.
>
> cinder was not guilty: it was really ceph rbd itself that was slow to
> delete big volumes.
>
> I was able to reproduce the slowness just using the rbd client.
>
> I was also able to fix the slowness just using the rbd client :)
>
> This is fixed in the ceph hammer release, which introduces a new feature:
>
> http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/
>
> With the object map feature enabled, rbd is now super fast at deleting
> large volumes.
>
> However, now I am in trouble with cinder. It looks like my cinder-api
> (running juno here) ignores the changes in my ceph.conf file.
>
> cat cinder.conf | grep rbd
>
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_user=cinder
> rbd_max_clone_depth=5
> rbd_ceph_conf=/etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot=False
> rbd_pool=volumes
> rbd_secret_uuid=secret
>
> But when I create a volume with cinder, the options in ceph.conf are ignored:
>
> cat /etc/ceph/ceph.conf | grep rbd
> rbd default format = 2
> rbd default features = 13
>
> But the volume:
>
> rbd image 'volume-78ca9968-77e8-4b68-9744-03b25b8068b1':
>         size 102400 MB in 25600 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.533f4356fe034
>         format: 2
>         features: layering
>         flags:
>
> So my first question is:
>
> does anyone use cinder with the rbd driver and the object map feature
> enabled? Does it work for anyone?
>
> thank you
>
> Saverio
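
PS: for completeness, here is the rough sanity check I have been running
from outside cinder, with the same python rados/rbd bindings the driver
uses. It only reads things (the config value and the features of the
volume quoted above); the user, pool and volume names are the ones from
my setup, so adjust as needed, and treat it as an untested sketch:

    import rados
    import rbd

    # Same ceph.conf and client id that my cinder.conf points at.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='cinder')
    cluster.connect()
    try:
        # Is "rbd default features = 13" visible to the client library at all?
        print('rbd_default_features = %s'
              % cluster.conf_get('rbd_default_features'))

        ioctx = cluster.open_ioctx('volumes')
        try:
            image = rbd.Image(ioctx,
                              'volume-78ca9968-77e8-4b68-9744-03b25b8068b1')
            try:
                # 1 means layering only; 13 would mean the defaults were used.
                print('features = %d' % image.features())
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

If the first print shows 13 but the image still reports only layering,
then ceph.conf is being read fine and the features are being overridden
at image create time, which is why I suspect the cinder patch above is
what I am missing.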
