Re: [ceph-users] default.rgw.log contains large omap object

2019-10-14 Thread Troy Ablan
Yep, that's on me.  I did enable it in the config originally, and at the 
time I thought it might be useful, but I wasn't aware of the sharding 
caveat, given that most of our traffic happens under a single rgw user.


I think I know what I need to do to fix it now though.
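For anyone landing on this thread later, a minimal sketch of what trimming might look like. The 90-day cutoff is an arbitrary assumption (not a value from this thread), and the command is printed for review rather than executed, since `usage trim` deletes data:

```shell
# Sketch: build a trim command for usage-log entries older than ~90 days.
# The cutoff is an assumption; adjust to your retention needs.
# GNU date syntax (Linux).
END_DATE=$(date -d '90 days ago' '+%Y-%m-%d')
CMD="radosgw-admin usage trim --end-date=$END_DATE"

# Print for review instead of executing; trimming is irreversible.
echo "$CMD"
```

If the data isn't needed at all, setting "rgw enable usage log = false" (as discussed below) and trimming everything is the simpler route.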

Thanks again!

-Troy

On 10/14/19 3:23 PM, Paul Emmerich wrote:

Yeah, the number of shards is configurable ("rgw usage num shards"? or
something).

Are you sure you aren't using it? This feature is not enabled by
default, someone had to explicitly set "rgw enable usage log" for you
to run into this problem.


Paul


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] default.rgw.log contains large omap object

2019-10-14 Thread Paul Emmerich
Yeah, the number of shards is configurable ("rgw usage num shards"? or
something).

Are you sure you aren't using it? This feature is not enabled by
default, someone had to explicitly set "rgw enable usage log" for you
to run into this problem.
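For reference, the settings in play would look something like this in ceph.conf. The shard-count option names below are my best reading of the RGW options (not names confirmed in this thread) and should be checked against your release's documentation:

```ini
[client.rgw]
; The usage log is off by default; someone set this to true here.
rgw enable usage log = false

; If keeping the log, more shards spread the omap load.
; Option names believed correct for Mimic; verify before use.
rgw usage max shards = 32
rgw usage max user shards = 1
```

Raising the per-user shard count is what should matter in Troy's case, since with the default of 1 a single busy rgw user's records all land in one shard.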


Paul
-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Tue, Oct 15, 2019 at 12:15 AM Troy Ablan wrote:
>
> Paul,
>
> Apparently never.  Appears to (potentially) have every request from the
> beginning of time (late last year, in my case).  In our use case, we
> don't really need this data (not multi-tenant), so I might simply clear it.
>
> But in the case where this were an extremely high transaction cluster
> where we did care about historical data for longer, can this be spread
> or sharded across more than just this small handful of objects?
>
> Thanks for the insight.
>
> -Troy
>
>
> On 10/14/19 3:01 PM, Paul Emmerich wrote:
> > Looks like the usage log (radosgw-admin usage show), how often do you trim 
> > it?
> >
> >


Re: [ceph-users] default.rgw.log contains large omap object

2019-10-14 Thread Troy Ablan

Paul,

Apparently never.  Appears to (potentially) have every request from the 
beginning of time (late last year, in my case).  In our use case, we 
don't really need this data (not multi-tenant), so I might simply clear it.


But in the case where this were an extremely high transaction cluster 
where we did care about historical data for longer, can this be spread 
or sharded across more than just this small handful of objects?


Thanks for the insight.

-Troy


On 10/14/19 3:01 PM, Paul Emmerich wrote:

Looks like the usage log (radosgw-admin usage show), how often do you trim it?





Re: [ceph-users] default.rgw.log contains large omap object

2019-10-14 Thread Paul Emmerich
Looks like the usage log (radosgw-admin usage show), how often do you trim it?


-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Oct 14, 2019 at 11:55 PM Troy Ablan wrote:
>
> Hi folks,
>
> Mimic cluster here, RGW pool with only default zone.  I have a
> persistent error here
>
> LARGE_OMAP_OBJECTS 1 large omap objects
>
>  1 large objects found in pool 'default.rgw.log'
>
>  Search the cluster log for 'Large omap object found' for more
> details.
>
> I think I've narrowed it down to this namespace:
>
> -[~:#]- rados -p default.rgw.log --namespace usage ls
> usage.17
> usage.19
> usage.24
>
> -[~:#]- rados -p default.rgw.log --namespace usage listomapkeys usage.17
> | wc -l
> 21284
>
> -[~:#]- rados -p default.rgw.log --namespace usage listomapkeys usage.19
> | wc -l
> 1355968
>
> -[~:#]- rados -p default.rgw.log --namespace usage listomapkeys usage.24
> | wc -l
> 0
>
>
> Now, this last one takes a long time to return -- minutes, even with a 0
> response, and listomapvals indeed returns a very large amount of data.
> I'm doing `wc -c` on listomapvals but this hasn't returned at the time
> of writing this message.  Is there anything I can do about this?
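The per-object counts above can be gathered in one pass. A small sketch, using the pool and namespace names from this thread, wrapped in a hypothetical helper function:

```shell
# Count omap keys for every object in a pool/namespace in one pass.
# Pool and namespace names below are the ones from this thread.
count_usage_omap() {
  pool=$1; ns=$2
  rados -p "$pool" --namespace "$ns" ls | while read -r obj; do
    # $(( )) normalizes any leading whitespace in wc output
    n=$(( $(rados -p "$pool" --namespace "$ns" listomapkeys "$obj" | wc -l) ))
    printf '%s %d\n' "$obj" "$n"
  done
}

# Example invocation (needs a live cluster, so left commented out):
# count_usage_omap default.rgw.log usage
```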
>
> Whenever PGs on the default.rgw.log are recovering or backfilling, my
> RGW cluster appears to block writes for almost two hours, and I think it
> points to this object, or at least this pool.
>
> I've been having trouble finding any documentation about how this log
> pool is used by RGW.  I have a feeling updating this object happens on
> every write to the cluster.  How would I remove this bottleneck?  Can I?
>
> Thanks
>
> -Troy