Re: [ceph-users] rgw meta pool

2016-09-10 Thread Pavan Rallabhandi
Thanks, Casey, for the reply; more on the tracker.

Thanks!

On 9/9/16, 11:32 PM, "ceph-users on behalf of Casey Bodley" wrote:

Hi,

My (limited) understanding of this metadata heap pool is that it's an 
archive of metadata entries and their versions. According to Yehuda, 
this was intended to support recovery operations by reverting specific 
metadata objects to a previous version. But nothing has been implemented 
so far, and I'm not aware of any plans to do so. So these objects are 
being created, but never read or deleted.

This was discussed in the rgw standup this morning, and we agreed that 
this archival should be made optional (and default to off), most likely 
by assigning an empty pool name to the zone's 'metadata_heap' field. 
I've created a ticket at http://tracker.ceph.com/issues/17256 to track 
this issue.

Casey


On 09/09/2016 11:01 AM, Warren Wang - ISD wrote:
> A little extra context here. Currently the metadata pool looks like it is
> on track to exceed the number of objects in the data pool, over time. In a
> brand new cluster, we're already up to almost 2 million in each pool.
>
>   NAME                      ID  USED   %USED  MAX AVAIL  OBJECTS
>   default.rgw.buckets.data  17  3092G  0.86   345T       2013585
>   default.rgw.meta          25  743M   0      172T       1975937
>
> We're concerned this will be unmanageable over time.
>
> Warren Wang
>
>
> On 9/9/16, 10:54 AM, "ceph-users on behalf of Pavan Rallabhandi"
> <prallabha...@walmartlabs.com> wrote:
>
>> Any help on this is much appreciated; I am considering fixing this, given
>> it's a confirmed issue, unless I am missing something obvious.
>>
>> Thanks,
>> -Pavan.
>>
>> On 9/8/16, 5:04 PM, "ceph-users on behalf of Pavan Rallabhandi"
>> <prallabha...@walmartlabs.com> wrote:
>>
>> Trying it one more time on the users list.
>> 
>> In our clusters running Jewel 10.2.2, I see the default.rgw.meta pool
>> running into a large number of objects, potentially in the same range as
>> the number of objects in the data pool.
>> 
>> I understand that the immutable metadata entries are now stored in
>> this heap pool, but I couldn't work out why the metadata objects are
>> left in this pool even after the actual bucket/object/user deletions.
>> 
>> put_entry() seems to promptly store the same entries in the heap pool
>> (https://github.com/ceph/ceph/blob/master/src/rgw/rgw_metadata.cc#L880),
>> but I never see them reaped. Are they left there for some reason?
>> 
>> Thanks,
>> -Pavan.
>> 
>> 



Re: [ceph-users] rgw meta pool

2016-09-09 Thread Casey Bodley

Hi,

My (limited) understanding of this metadata heap pool is that it's an 
archive of metadata entries and their versions. According to Yehuda, 
this was intended to support recovery operations by reverting specific 
metadata objects to a previous version. But nothing has been implemented 
so far, and I'm not aware of any plans to do so. So these objects are 
being created, but never read or deleted.


This was discussed in the rgw standup this morning, and we agreed that 
this archival should be made optional (and default to off), most likely 
by assigning an empty pool name to the zone's 'metadata_heap' field. 
I've created a ticket at http://tracker.ceph.com/issues/17256 to track 
this issue.
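
A rough sketch of what flipping that off switch might look like from the
admin side, once an empty pool name is accepted (the zone name 'default',
the JSON handling, and the need to restart radosgw afterwards are all
assumptions here, not a tested procedure):

    #!/usr/bin/env python3
    # Hypothetical sketch: blank the zone's 'metadata_heap' field so rgw stops
    # archiving metadata entries. Assumes a Jewel-era zone layout and that an
    # empty pool name is honoured once http://tracker.ceph.com/issues/17256 lands.
    import json
    import subprocess
    import tempfile

    ZONE = 'default'  # assumption: adjust for your deployment

    # Fetch the current zone configuration as JSON.
    raw = subprocess.check_output(['radosgw-admin', 'zone', 'get', '--rgw-zone', ZONE])
    zone = json.loads(raw.decode('utf-8'))
    print('current metadata_heap:', repr(zone.get('metadata_heap')))

    # An empty pool name is the proposed "archiving disabled" value.
    zone['metadata_heap'] = ''

    # Write the modified zone back, then restart/reload the radosgw instances.
    with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
        json.dump(zone, f)
        infile = f.name
    subprocess.check_call(['radosgw-admin', 'zone', 'set', '--rgw-zone', ZONE,
                           '--infile', infile])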


Casey


On 09/09/2016 11:01 AM, Warren Wang - ISD wrote:

A little extra context here. Currently the metadata pool looks like it is
on track to exceed the number of objects in the data pool, over time. In a
brand new cluster, we're already up to almost 2 million in each pool.

  NAME                      ID  USED   %USED  MAX AVAIL  OBJECTS
  default.rgw.buckets.data  17  3092G  0.86   345T       2013585
  default.rgw.meta          25  743M   0      172T       1975937

We're concerned this will be unmanageable over time.

Warren Wang


On 9/9/16, 10:54 AM, "ceph-users on behalf of Pavan Rallabhandi" wrote:


Any help on this is much appreciated; I am considering fixing this, given
it's a confirmed issue, unless I am missing something obvious.

Thanks,
-Pavan.

On 9/8/16, 5:04 PM, "ceph-users on behalf of Pavan Rallabhandi" wrote:

Trying it one more time on the users list.

In our clusters running Jewel 10.2.2, I see the default.rgw.meta pool
running into a large number of objects, potentially in the same range as
the number of objects in the data pool.

I understand that the immutable metadata entries are now stored in this
heap pool, but I couldn't work out why the metadata objects are left in
this pool even after the actual bucket/object/user deletions.

put_entry() seems to promptly store the same entries in the heap pool
(https://github.com/ceph/ceph/blob/master/src/rgw/rgw_metadata.cc#L880),
but I never see them reaped. Are they left there for some reason?

Thanks,
-Pavan.





Re: [ceph-users] rgw meta pool

2016-09-09 Thread Warren Wang - ISD
A little extra context here. Currently the metadata pool looks like it is
on track to exceed the number of objects in the data pool, over time. In a
brand new cluster, we're already up to almost 2 million in each pool.

  NAME                      ID  USED   %USED  MAX AVAIL  OBJECTS
  default.rgw.buckets.data  17  3092G  0.86   345T       2013585
  default.rgw.meta          25  743M   0      172T       1975937

We're concerned this will be unmanageable over time.
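
A small sketch of one way to keep an eye on that growth over time. It
assumes the Jewel-era 'ceph df --format json' layout (per-pool 'stats'
containing an 'objects' count), which may differ between releases, and the
pool names are simply the two listed above:

    #!/usr/bin/env python3
    # Sketch: sample per-pool object counts so the meta/data ratio can be
    # tracked over time. Assumes 'ceph df --format json' exposes pools with a
    # 'stats' dict containing 'objects' (true around Jewel; may vary by release).
    import json
    import subprocess
    import time

    POOLS = ('default.rgw.buckets.data', 'default.rgw.meta')

    while True:
        report = json.loads(subprocess.check_output(
            ['ceph', 'df', '--format', 'json']).decode('utf-8'))
        counts = {p['name']: p['stats'].get('objects')
                  for p in report.get('pools', []) if p['name'] in POOLS}
        print(time.strftime('%Y-%m-%d %H:%M:%S'), counts)
        time.sleep(3600)  # sample hourly; adjust as needed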

Warren Wang


On 9/9/16, 10:54 AM, "ceph-users on behalf of Pavan Rallabhandi" wrote:

>Any help on this is much appreciated; I am considering fixing this, given
>it's a confirmed issue, unless I am missing something obvious.
>
>Thanks,
>-Pavan.
>
>On 9/8/16, 5:04 PM, "ceph-users on behalf of Pavan Rallabhandi"
><prallabha...@walmartlabs.com> wrote:
>
>Trying it one more time on the users list.
>
>In our clusters running Jewel 10.2.2, I see the default.rgw.meta pool
>running into a large number of objects, potentially in the same range as
>the number of objects in the data pool.
>
>I understand that the immutable metadata entries are now stored in
>this heap pool, but I couldn't work out why the metadata objects are
>left in this pool even after the actual bucket/object/user deletions.
>
>put_entry() seems to promptly store the same entries in the heap pool
>(https://github.com/ceph/ceph/blob/master/src/rgw/rgw_metadata.cc#L880),
>but I never see them reaped. Are they left there for some reason?
>
>Thanks,
>-Pavan.
>
>



Re: [ceph-users] rgw meta pool

2016-09-09 Thread Pavan Rallabhandi
Any help on this is much appreciated; I am considering fixing this, given it's
a confirmed issue, unless I am missing something obvious.

Thanks,
-Pavan.

On 9/8/16, 5:04 PM, "ceph-users on behalf of Pavan Rallabhandi" wrote:

Trying it one more time on the users list.

In our clusters running Jewel 10.2.2, I see the default.rgw.meta pool running
into a large number of objects, potentially in the same range as the number
of objects contained in the data pool.

I understand that the immutable metadata entries are now stored in this heap
pool, but I couldn't work out why the metadata objects are left in this pool
even after the actual bucket/object/user deletions.

put_entry() seems to promptly store the same entries in the heap pool
(https://github.com/ceph/ceph/blob/master/src/rgw/rgw_metadata.cc#L880), but I
never see them reaped. Are they left there for some reason?

Thanks,
-Pavan.




[ceph-users] rgw meta pool

2016-09-08 Thread Pavan Rallabhandi
Trying it one more time on the users list.

In our clusters running Jewel 10.2.2, I see the default.rgw.meta pool running
into a large number of objects, potentially in the same range as the number
of objects contained in the data pool.

I understand that the immutable metadata entries are now stored in this heap
pool, but I couldn't work out why the metadata objects are left in this pool
even after the actual bucket/object/user deletions.

put_entry() seems to promptly store the same entries in the heap pool
(https://github.com/ceph/ceph/blob/master/src/rgw/rgw_metadata.cc#L880), but I
never see them reaped. Are they left there for some reason?
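
A quick sketch of how one might spot-check that after deleting a test user
or bucket; the pool name and the plain substring match against heap object
names are assumptions, not a supported interface:

    #!/usr/bin/env python3
    # Sketch: after deleting a test user/bucket, list the metadata heap pool
    # and see whether entries naming it are still present. The pool name and
    # the substring match on object names are assumptions, not a stable API.
    import subprocess
    import sys

    needle = sys.argv[1] if len(sys.argv) > 1 else 'testuser'  # hypothetical name

    objects = subprocess.check_output(
        ['rados', '-p', 'default.rgw.meta', 'ls']).decode('utf-8').splitlines()
    matches = [name for name in objects if needle in name]

    print('%d heap objects still mention %r' % (len(matches), needle))
    for name in matches[:20]:  # print a small sample
        print('  ' + name)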

Thanks,
-Pavan.

