Re: [ceph-users] Dynamic bucket index resharding bug? - rgw.none showing unreal number of objects

2019-11-22 Thread J. Eric Ivancich
pipermail/ceph-users-ceph.com/2018-October/030791.html > https://tracker.ceph.com/issues/37942 > > For now we are cancelling these resharding jobs since they seem to be > causing performance issues with the cluster, but this is an untenable > solution. Does anyone know what is causing t
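For anyone hitting the same problem, a rough sketch of inspecting and cancelling in-flight reshard jobs (the bucket name below is a placeholder; verify flag availability against your release):

  # show pending/active dynamic resharding jobs
  $ radosgw-admin reshard list
  # check where a particular bucket's reshard stands
  $ radosgw-admin reshard status --bucket=mybucket
  # cancel the job for that bucket
  $ radosgw-admin reshard cancel --bucket=mybucket
  # the rgw.none category discussed in this thread appears in bucket stats
  $ radosgw-admin bucket stats --bucket=mybucket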

Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-21 Thread J. Eric Ivancich
you using lifecycle? And garbage collection is another background task. And just to be clear -- sometimes all 3 of your rados gateways are simultaneously in this state? But the call graph would be incredibly helpful. Thank you, Eric -- J. Eric Ivancich he/him/his Red Hat
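One way to capture the call graph being asked for, assuming perf is installed on the gateway host and the process is named radosgw:

  # sample the busy radosgw process for 30 seconds, recording call graphs
  $ perf record -g -p $(pidof radosgw) -- sleep 30
  # summarize where the CPU time is going
  $ perf report --stdio | head -50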

Re: [ceph-users] Adventures with large RGW buckets [EXT]

2019-08-02 Thread J. Eric Ivancich
ome help from Canonical, who I think are working on > patches). > There was a recently merged PR that addressed bucket deletion with missing shadow objects: https://tracker.ceph.com/issues/40590 Thank you for reporting your experience w/ rgw, Eric -- J. Eric Ivancich
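For reference, deletion of a bucket together with its objects is usually driven with something like the following (bucket name is a placeholder; how missing shadow objects are handled depends on whether the fix referenced above is in your release):

  # remove the bucket and its objects; --bypass-gc deletes tail/shadow
  # objects inline instead of deferring them to garbage collection
  $ radosgw-admin bucket rm --bucket=mybucket --purge-objects --bypass-gc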

Re: [ceph-users] Cannot delete bucket

2019-07-01 Thread J. Eric Ivancich
: http://tracker.ceph.com/issues/40526 Eric -- J. Eric Ivancich he/him/his Red Hat Storage Ann Arbor, Michigan, USA

Re: [ceph-users] Cannot delete bucket

2019-06-25 Thread J. Eric Ivancich
/github.com/ceph/ceph/pull/28724 Eric -- J. Eric Ivancich he/him/his Red Hat Storage Ann Arbor, Michigan, USA

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-11 Thread J. Eric Ivancich
04 > gc.26: 42 > gc.28: 111292 > gc.17: 111314 > gc.12: 111534 > gc.31: 111956 Casey Bodley mentioned to me that he's seen similar behavior to what you're describing when RGWs are upgraded but not all OSDs are upgraded as well. Is it possible that the OSDs hosting gc.13, gc.10, and so
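A quick way to check for mixed versions and to see how large a given gc shard's omap has grown (pool name and namespace below are the usual defaults; adjust for your deployment):

  # confirm all daemons are running the same release
  $ ceph versions
  # count omap entries on a suspect gc shard (default log pool, gc namespace)
  $ rados -p default.rgw.log --namespace gc listomapkeys gc.13 | wc -l
  # find which OSDs host that shard object
  $ ceph osd map default.rgw.log gc.13 gc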

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-04 Thread J. Eric Ivancich
On 6/4/19 7:37 AM, Wido den Hollander wrote: > I've set up a temporary machine next to the 13.2.5 cluster with the > 13.2.6 packages from Shaman. > > On that machine I'm running: > > $ radosgw-admin gc process > > That seems to work as intended! So the PR seems to have fixed it. > > Should be

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-05-29 Thread J. Eric Ivancich
Hi Wido, When you run `radosgw-admin gc list`, I assume you are *not* using the "--include-all" flag, right? If you're not using that flag, then everything listed should be expired and be ready for clean-up. If after running `radosgw-admin gc process` the same entries appear in `radosgw-admin gc
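A minimal sequence to compare the two listings and force a collection pass, assuming a recent radosgw-admin:

  # expired entries only -- these should be removed by the next gc cycle
  $ radosgw-admin gc list | head
  # all entries, including ones not yet past their expiration time
  $ radosgw-admin gc list --include-all | head
  # run a garbage-collection pass now
  $ radosgw-admin gc process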

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker different.

2019-05-15 Thread J. Eric Ivancich
Hi Manuel, My response is interleaved below. On 5/8/19 3:17 PM, EDH - Manuel Rios Fernandez wrote: > Eric, > > Yes we do : > > time s3cmd ls s3://[BUCKET]/ --no-ssl and we get near 2min 30 secs for list > the bucket. We're adding an --allow-unordered option to `radosgw-admin bucket list`.
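Once that option is available in your release (it may not be in 13.2.5), an unordered listing would look roughly like this; the bucket name is a placeholder:

  # ordered listing (default) must merge every index shard lexically
  $ time radosgw-admin bucket list --bucket=mybucket --max-entries=1000 > /dev/null
  # unordered listing returns entries shard by shard and skips the merge
  $ time radosgw-admin bucket list --bucket=mybucket --max-entries=1000 --allow-unordered > /dev/null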

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker different.

2019-05-09 Thread J. Eric Ivancich
> show more 0B. Is it correct? I am having difficulty understanding that sentence. Would you be so kind as to rewrite it? I don’t want to create confusion by guessing. Eric > Thanks for your response. > > > -Original Message- > From: J. Eric Ivancich > Sent: Wednesday

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker different.

2019-05-08 Thread J. Eric Ivancich
Hi Manuel, My response is interleaved. On 5/7/19 7:32 PM, EDH - Manuel Rios Fernandez wrote: > Hi Eric, > > This looks like something the software developer must do, not something than > Storage provider must allow no? True -- so you're using `radosgw-admin bucket list --bucket=XYZ` to list

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker different.

2019-05-07 Thread J. Eric Ivancich
On 5/7/19 11:24 AM, EDH - Manuel Rios Fernandez wrote: > Hi Casey > > ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic > (stable) > > Reshard is something than don’t allow us customer to list index? > > Regards Listing of buckets containing a large number of objects is notoriously

Re: [ceph-users] How to config mclock_client queue?

2019-03-26 Thread J. Eric Ivancich
So I do not think the mclock_client queue works the way you’re hoping it does. For categorization purposes it joins the operation class and the client identifier, with the intent that operations will be scheduled more evenly across clients (i.e., it won’t favor one client over another). However, it
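For context, the queue is selected per OSD; a minimal ceph.conf sketch (option names as of mimic -- verify against `ceph daemon osd.N config show` on your version):

  [osd]
  # use the mclock queue that partitions by (operation class, client)
  osd op queue = mclock_client
  # send more operation types through the mclock queue rather than the
  # strict high-priority path
  osd op queue cut off = high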

Re: [ceph-users] Omap issues - metadata creating too many

2019-01-03 Thread J. Eric Ivancich
tadata list bucket.instance | jq -r '.[]' | sort > >   > > Give that a try and see if you see the same problem.  It seems that once > you remove the old bucket instances the omap dbs don't reduce in size > until you compact them. > >   > > Bryan > >   > > *From
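Compaction can be triggered on a running OSD without restarting it; a sketch (the OSD id is a placeholder):

  # ask the OSD to compact its key/value store (where omap lives)
  $ ceph tell osd.12 compact
  # or via the admin socket on the OSD's host
  $ ceph daemon osd.12 compact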

Re: [ceph-users] Removing orphaned radosgw bucket indexes from pool

2018-12-18 Thread J. Eric Ivancich
On 11/29/18 6:58 PM, Bryan Stillwell wrote: > Wido, > > I've been looking into this large omap objects problem on a couple of our > clusters today and came across your script during my research. > > The script has been running for a few hours now and I'm already over 100,000 > 'orphaned'

Re: [ceph-users] Omap issues - metadata creating too many

2018-12-18 Thread J. Eric Ivancich
On 12/17/18 9:18 AM, Josef Zelenka wrote: > Hi everyone, i'm running a Luminous 12.2.5 cluster with 6 hosts on > ubuntu 16.04 - 12 HDDs for data each, plus 2 SSD metadata OSDs(three > nodes have an additional SSD i added to have more space to rebalance the > metadata). CUrrently, the cluster is

Re: [ceph-users] inexplicably slow bucket listing at top level

2018-11-05 Thread J. Eric Ivancich
I did make an inquiry and someone here does have some experience w/ the mc command -- minio client. We're curious how "ls -r" is implemented under mc. Does it need to get a full listing and then do some path parsing to produce nice output? If so, it may be playing a role in the delay as well.

Re: [ceph-users] inexplicably slow bucket listing at top level

2018-11-05 Thread J. Eric Ivancich
The numbers you're reporting strike me as surprising as well. Which version are you running? In case you're not aware, listing a bucket is not a very efficient operation, given that the listing is required to return objects in lexical order. They are distributed across the shards via a
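To see how many index shards a bucket has and how full they are -- which is what drives the cost of an ordered listing -- something like the following (bucket name is a placeholder):

  # shard count, object count, and fill status for each bucket's index
  $ radosgw-admin bucket limit check
  # per-bucket usage stats, including the rgw.main/rgw.none categories
  $ radosgw-admin bucket stats --bucket=mybucket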