Re: [ceph-users] Large OMAP Object

2019-11-20 Thread Paul Emmerich
dhils...@performair.com wrote: > Wido; Ok, yes, I have tracked it down to the index for one of our buckets. I missed the ID in the ceph df output ...

Re: [ceph-users] Large OMAP Object

2019-11-20 Thread Nathan Fish
dhils...@performair.com wrote: > Wido; Ok, yes, I have tracked it down to the index for one of our buckets. I missed the ID in the ceph df output ...

Re: [ceph-users] Large OMAP Object

2019-11-20 Thread DHilsbos
Wido; Ok, yes, I have tracked it down to the index for one of our buckets. I missed the ID in the ceph df output previously. Next time I'll wait to read replies ...

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread DHilsbos

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread DHilsbos
Paul Emmerich wrote: > Note that the size limit changed from 2M keys to 200k keys recently (14.2.3 or 14.2.2 ...)
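
For context, the threshold Paul refers to is an OSD-side option; a minimal sketch of checking the value in effect, assuming a Nautilus-era cluster with the centralized config database and assuming osd_deep_scrub_large_omap_object_key_threshold is the option in play:

$ ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
# Per Paul's note, expect 200000 on 14.2.3+ and 2000000 on older releases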

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread Paul Emmerich

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread Wido den Hollander

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread DHilsbos
Wido den Hollander wrote: > Did you check /var/log/ceph/ceph.log on one of the Monitors to see which pool and object the large object is in? ...

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread Wido den Hollander
Did you check /var/log/ceph/ceph.log on one of the Monitors to see which pool and object the large object is in? Wido. On 11/15/19 12:23 AM, dhils...@performair.com wrote: > All; We had a warning about a large OMAP object pop up in one of our clusters overnight. The cluster is configured ...
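
A minimal sketch of that log check, assuming the default cluster log location and the usual "Large omap object found" wording of the warning (adjust the path if your logs live elsewhere):

$ grep -i 'large omap object' /var/log/ceph/ceph.log
# The matching lines name the pool, PG and object, e.g. a bucket index
# object such as .dir.<bucket-instance-id>.<shard> in the RGW index pool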

Re: [ceph-users] Large OMAP Object

2019-11-14 Thread JC Lopez
Hi, this probably comes from your RGW, which is a big consumer/producer of OMAP for bucket indexes. Have a look at this previous post and just adapt the pool name to match the one where it's detected: https://www.spinics.net/lists/ceph-users/msg51681.html Regards, JC > On Nov 14, 2019, at ...
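
The approach in the linked post boils down to counting OMAP keys per index object; a rough sketch along those lines, assuming the default index pool name default.rgw.buckets.index (substitute whichever pool the warning points at):

for OBJ in $(rados -p default.rgw.buckets.index ls); do
  echo -n "$OBJ: "
  rados -p default.rgw.buckets.index listomapkeys "$OBJ" | wc -l
done | sort -t: -k2 -n | tail
# The objects with the highest counts are the .dir.<bucket-instance-id> shards
# of the offending bucket(s)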

[ceph-users] Large OMAP Object

2019-11-14 Thread DHilsbos
All; We had a warning about a large OMAP object pop up in one of our clusters overnight. The cluster is configured for CephFS, but nothing mounts a CephFS at this time. The cluster mostly uses RGW. I've checked the cluster log, the MON log, and the MGR log on one of the mons, with no ...
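
Before digging through individual daemon logs, the health detail output usually narrows this down to a pool; a quick sketch (the exact wording of the output varies by release):

$ ceph health detail
# The LARGE_OMAP_OBJECTS section lists the affected pool(s) and points at the
# cluster log entry that names the exact object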

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-12 Thread Wido den Hollander
On 6/11/19 9:48 PM, J. Eric Ivancich wrote: > Hi Wido, interleaving below. On 6/11/19 3:10 AM, Wido den Hollander wrote: >> I thought it was resolved, but it isn't. I counted all the OMAP values for the GC objects and I got back: gc.0: 0, gc.11: 0, gc.14: 0, ...

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-11 Thread J. Eric Ivancich
Hi Wido, interleaving below. On 6/11/19 3:10 AM, Wido den Hollander wrote: > I thought it was resolved, but it isn't. I counted all the OMAP values for the GC objects and I got back: gc.0: 0, gc.11: 0, gc.14: 0, gc.15: 0, gc.16: 0, gc.18: 0, gc.19: 0, gc.1: 0, gc.20: 0, ...

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-11 Thread Wido den Hollander
On 6/4/19 8:00 PM, J. Eric Ivancich wrote: > On 6/4/19 7:37 AM, Wido den Hollander wrote: >> I've set up a temporary machine next to the 13.2.5 cluster with the 13.2.6 packages from Shaman. On that machine I'm running: $ radosgw-admin gc process That seems to work as intended! ...

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-04 Thread J. Eric Ivancich
On 6/4/19 7:37 AM, Wido den Hollander wrote: > I've set up a temporary machine next to the 13.2.5 cluster with the 13.2.6 packages from Shaman. On that machine I'm running: $ radosgw-admin gc process That seems to work as intended! So the PR seems to have fixed it. Should be ...

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-04 Thread Wido den Hollander
On 5/30/19 2:45 PM, Wido den Hollander wrote: > On 5/29/19 11:22 PM, J. Eric Ivancich wrote: >> Hi Wido. When you run `radosgw-admin gc list`, I assume you are *not* using the "--include-all" flag, right? If you're not using that flag, then everything listed should be expired ...

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-05-30 Thread Wido den Hollander
On 5/29/19 11:22 PM, J. Eric Ivancich wrote: > Hi Wido. When you run `radosgw-admin gc list`, I assume you are *not* using the "--include-all" flag, right? If you're not using that flag, then everything listed should be expired and be ready for clean-up. If after running ...

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-05-29 Thread J. Eric Ivancich
Hi Wido. When you run `radosgw-admin gc list`, I assume you are *not* using the "--include-all" flag, right? If you're not using that flag, then everything listed should be expired and be ready for clean-up. If after running `radosgw-admin gc process` the same entries appear in `radosgw-admin gc list` ...
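
A quick way to sanity-check whether a GC pass actually drains those entries, sketched under the assumption that each entry in the `gc list` JSON output carries a "tag" field:

$ radosgw-admin gc list | grep -c '"tag"'
$ radosgw-admin gc process
$ radosgw-admin gc list | grep -c '"tag"'
# If the count does not drop between the two listings, GC is not consuming
# the expired entries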

[ceph-users] Large OMAP object in RGW GC pool

2019-05-29 Thread Wido den Hollander
Hi, I've got a Ceph cluster with this status: health: HEALTH_WARN 3 large omap objects. After looking into it I see that the issue comes from objects in the '.rgw.gc' pool. Investigating it I found that the gc.* objects have a lot of OMAP keys: for OBJ in $(rados -p .rgw.gc ...
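
The counting loop is cut off in the preview; a sketch of what such a per-object key count might look like, assuming the GC pool really is named '.rgw.gc' as above (gc.0 ... gc.31 are the default 32 GC shard objects):

for OBJ in $(rados -p .rgw.gc ls); do
  # Print each GC shard object followed by the number of OMAP keys it holds
  echo "$OBJ: $(rados -p .rgw.gc listomapkeys "$OBJ" | wc -l)"
done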

Re: [ceph-users] large omap object in usage_log_pool

2019-05-27 Thread shubjero
Thanks Casey. This helped me understand the purpose of this pool. I trimmed the usage logs, which reduced the number of keys stored in that index significantly, and I may even disable the usage log entirely, as I don't believe we use it for anything. On Fri, May 24, 2019 at 3:51 PM Casey Bodley ...
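
For anyone hitting the same warning, the trim and the disable step might look roughly like this (a sketch; the end date is illustrative, and check `radosgw-admin usage trim --help` for the exact flags your release supports):

# Trim usage log entries older than a cut-off date
$ radosgw-admin usage trim --end-date=2019-01-01
# To stop collecting usage data entirely, disable it in ceph.conf on the RGW
# hosts and restart radosgw:
#   [client.rgw.<name>]          <name> is a placeholder for your RGW instance
#   rgw_enable_usage_log = false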

Re: [ceph-users] large omap object in usage_log_pool

2019-05-24 Thread Casey Bodley
On 5/24/19 1:15 PM, shubjero wrote: Thanks for chiming in Konstantin! Wouldn't setting this value to 0 disable the sharding? Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/ rgw override bucket index max shards, Description: Represents the number of shards for the bucket index ...

Re: [ceph-users] large omap object in usage_log_pool

2019-05-24 Thread shubjero
Thanks for chiming in Konstantin! Wouldn't setting this value to 0 disable the sharding? Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/ rgw override bucket index max shards, Description: Represents the number of shards for the bucket index object, a value of zero indicates there ...

Re: [ceph-users] large omap object in usage_log_pool

2019-05-23 Thread Konstantin Shalygin
> in the config: ```"rgw_override_bucket_index_max_shards": "8"```. Should this be increased? Should be decreased to the default `0`, I think. Modern Ceph releases resolve large omaps automatically via bucket dynamic resharding: ...
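
The JSON blob is cut off in the preview (it looks like the option description for dynamic resharding); a sketch of how one might confirm resharding is enabled and keeping shard sizes in check on a Mimic-era cluster (the daemon name client.rgw.<name> is a placeholder):

# Check that dynamic resharding is on for a running RGW instance
$ ceph daemon client.rgw.<name> config get rgw_dynamic_resharding
# Buckets currently queued for resharding, and per-shard fill status
$ radosgw-admin reshard list
$ radosgw-admin bucket limit check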

[ceph-users] large omap object in usage_log_pool

2019-05-23 Thread shubjero
Hi there, we have an old cluster that was built on Giant, which we have maintained and upgraded over time, and we are now running Mimic 13.2.5. The other day we received a HEALTH_WARN about 1 large omap object in the pool '.usage', which is our usage_log_pool defined in our radosgw zone. I am trying to ...

Re: [ceph-users] large omap object

2018-06-14 Thread Gregory Farnum
There may be a mismatch between the auto-resharding and the omap warning code. Looks like you already have 349 shards, with 13 of them warning on size! You can increase a config value to shut that error up, but you may want to get somebody from RGW to look at how you've managed to exceed those ...
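
Gregory doesn't name the option in the preview; the value he most likely means (an assumption on my part) is the deep-scrub key threshold, which on a 13.2.x cluster could be raised like this:

# Raise the per-object OMAP key count at which deep scrub flags a large object.
# The value is illustrative; this only silences the warning, the index objects
# stay as large as before.
$ ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2500000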

[ceph-users] large omap object

2018-06-13 Thread stephan schultchen
Hello, I am running a Ceph 13.2.0 cluster exclusively for radosgw / S3, and I only have one big bucket. The cluster is currently in warning state: cluster: id: d605c463-9f1c-4d91-a390-a28eedb21650 health: HEALTH_WARN 13 large omap objects. I tried to google it, but I ...
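
With a single very large bucket on 13.2.0, a reasonable first check is how full each index shard is and whether resharding is keeping up; a sketch (the bucket name is a placeholder):

# Report buckets whose index shards exceed the configured objects-per-shard limit
$ radosgw-admin bucket limit check
# Current shard count and object count for the big bucket
$ radosgw-admin bucket stats --bucket=<bucket-name>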