Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-12-08 Thread Saverio Proto
Hello there,

yesterday I finally found a fast way to backport the rbd driver patch to
the Juno glance_store.

I found this repository with the right patch I was looking for:
https://github.com/vumrao/glance_store.git (branch rbd_default_features)

I reworked the patch on top of stable/juno:
https://github.com/zioproto/glance_store/commit/564129f865e10e7fcd5378a0914847323139f901

and built my Ubuntu packages from it.
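
For reference, a rough sketch of how I build such packages (package and
directory names below are illustrative, not the exact ones I used):

apt-get source python-glance-store      # fetch the Ubuntu source package
cd glance-store-*/                      # enter the unpacked source tree
patch -p1 < rbd_default_features.patch  # apply the reworked commit as a patch
dpkg-buildpackage -us -uc -b            # build unsigned binary .deb packages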

Now everything works. I am testing the deb packages in my staging
cluster: both cinder and glance honor the ceph.conf default features, and
all volumes and images are created in the Ceph backend with the object
map enabled.
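
To verify, you can inspect a newly created volume (using the volume name
from earlier in this thread as an example):

rbd -p volumes info volume-78ca9968-77e8-4b68-9744-03b25b8068b1 | grep features
# with "rbd default features = 13" this should print:
# features: layering, exclusive-lock, object-map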

If anyone is running Juno and wants to enable this feature, we have
published packages here:
http://ubuntu.mirror.cloud.switch.ch/engines/packages/

Saverio



Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-26 Thread Saverio Proto
Hello,

I think it is worth updating the list on this issue, because a lot of
operators are running Juno and might want to enable the object map
feature in their rbd backend.

Our cinder backport seems to work great.

However, most of our volumes are CoW clones of glance images, and glance
uses the rbd backend as well.

This means that if the glance images do not have the rbd object map
feature, the cinder volumes cloned from them end up with the "object map
invalid" flag.
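
You can spot affected volumes with rbd info and, on Infernalis or later,
rebuild the map in place with the commands Josh mentioned (the image name
here is just an example):

rbd info volumes/volume-78ca9968-77e8-4b68-9744-03b25b8068b1 | grep flags
# flags: object map invalid
rbd object-map rebuild volumes/volume-78ca9968-77e8-4b68-9744-03b25b8068b1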

So we are now working on backporting this feature of the rbd driver to
glance as well.

Saverio





Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-24 Thread Saverio Proto
Hello there,

We finally managed to backport the patch to Juno:
https://github.com/zioproto/cinder/tree/backport-ceph-object-map

We are testing this version; everything is good so far.

This requires the following in your ceph.conf:
rbd default format = 2
rbd default features = 13
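
If I read the rbd feature bits correctly, the value 13 breaks down as:

layering (1) + exclusive-lock (4) + object-map (8) = 13

(the object map requires exclusive-lock, which is why both bits are set).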

If anyone is willing to test this on their Juno setup, I can also share
.deb packages for Ubuntu.

Saverio






Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-16 Thread Saverio Proto
Thanks,

I tried to backport this patch to Juno, but it is not that trivial for
me. I have two failing tests, one about volume cloning and one about
creating a volume without layering.

https://github.com/zioproto/cinder/commit/0d26cae585f54c7bda5ba5b423d8d9ddc87e0b34
https://github.com/zioproto/cinder/commits/backport-ceph-object-map

I guess I will stop trying to backport this patch and wait for our
OpenStack installation to be upgraded to Kilo to get the feature.

If anyone has ever backported this feature to Juno, it would be nice to
know, so that I can use the patch to generate deb packages.

thanks

Saverio




Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-12 Thread Saverio Proto
So here is my best guess: could it be that I am missing this patch?

https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53
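
A rough way to check whether the installed driver has it (the path is
where the Ubuntu cloud archive puts the driver, and the grep assumes the
patch refers to the option by name):

grep -n rbd_default_features /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py
# no output suggests the patch is missing from the installed version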

proto@controller:~$ apt-cache policy python-cinder
python-cinder:
  Installed: 1:2014.2.3-0ubuntu1.1~cloud0
  Candidate: 1:2014.2.3-0ubuntu1.1~cloud0


Thanks

Saverio



2015-11-12 16:25 GMT+01:00 Saverio Proto :
> Hello there,
>
> I am investigating why my cinder is slow at deleting volumes.
>
> You might remember my email from a few days ago with the subject
> "cinder volume_clear=zero makes sense with rbd ?"
>
> It turns out that volume_clear has nothing to do with the rbd driver.
>
> cinder was not the culprit; Ceph rbd itself was slow at deleting big
> volumes.
>
> I was able to reproduce the slowness just using the rbd client.
>
> I was also able to fix the slowness just using the rbd client :)
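>
> What I did was something like this (sizes are just illustrative):
>
> rbd -p volumes create --image-format 2 --size 102400 testvolume
> time rbd -p volumes rm testvolume   # slow without the object map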
>
> This is fixed in the Ceph Hammer release, which introduces a new feature:
>
> http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/
>
> With the object map feature enabled, rbd is now super fast at deleting large volumes.
>
> However, now I am in trouble with cinder. It looks like my cinder-api
> (running Juno here) ignores the changes in my ceph.conf file.
>
> cat cinder.conf | grep rbd
>
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_user=cinder
> rbd_max_clone_depth=5
> rbd_ceph_conf=/etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot=False
> rbd_pool=volumes
> rbd_secret_uuid=secret
>
> But when I create a volume with cinder, the options in ceph.conf are ignored:
>
> cat /etc/ceph/ceph.conf | grep rbd
> rbd default format = 2
> rbd default features = 13
>
> But the volume:
>
> rbd image 'volume-78ca9968-77e8-4b68-9744-03b25b8068b1':
> size 102400 MB in 25600 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.533f4356fe034
> format: 2
> features: layering
> flags:
>
>
> So my first question is:
>
> Does anyone use cinder with the rbd driver and the object map feature
> enabled? Does it work for anyone?
>
> thank you
>
> Saverio



Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-11-12 Thread Josh Durgin

On 11/12/2015 07:41 AM, Saverio Proto wrote:

> So here is my best guess: could it be that I am missing this patch?
>
> https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53


Exactly, you need that patch for cinder to use rbd_default_features
from ceph.conf instead of its own default of only layering.

In Infernalis and later versions of Ceph you can also add the object map
to existing rbd images via the 'rbd feature enable' and 'rbd object-map
rebuild' commands.
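
For example, for the volume from your output (the object map requires
exclusive-lock, so enable that first):

rbd feature enable volumes/volume-78ca9968-77e8-4b68-9744-03b25b8068b1 exclusive-lock
rbd feature enable volumes/volume-78ca9968-77e8-4b68-9744-03b25b8068b1 object-map
rbd object-map rebuild volumes/volume-78ca9968-77e8-4b68-9744-03b25b8068b1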

Josh





