Re: [ceph-users] Removing orphaned radosgw bucket indexes from pool

2018-12-18 Thread J. Eric Ivancich
> ... "mtime": "2018-11-29 20:21:53.733824Z", "num_shards": 7,
> go-test-dashboard:default.891941432.359004: "mtime": "2018-11-29 20:22:09.201965Z", "num_shards": 46,
>
> The num_shards is typically around 46, but looking at all 288 instances of that bucket index, it has varied between 3 and 62 shards. ...
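
A minimal sketch of how such a per-instance listing can be reproduced, assuming Luminous-era radosgw-admin JSON output (a top-level "mtime" field plus data.bucket_info.num_shards) and using the bucket name from the thread as an example:

#!/usr/bin/env python3
# Sketch: print mtime and num_shards for every bucket.instance metadata
# entry belonging to one bucket. The field layout (top-level "mtime",
# data.bucket_info.num_shards) is assumed from Luminous-era output.
import json
import subprocess

BUCKET = "go-test-dashboard"  # example bucket from the thread

def radosgw_admin(*args):
    return json.loads(subprocess.check_output(("radosgw-admin",) + args))

# "metadata list bucket.instance" returns keys like "<bucket>:<instance_id>"
for entry in radosgw_admin("metadata", "list", "bucket.instance"):
    if not entry.startswith(BUCKET + ":"):
        continue
    md = radosgw_admin("metadata", "get", "bucket.instance:" + entry)
    num_shards = md["data"]["bucket_info"].get("num_shards")
    print("%s: mtime %s, num_shards %s" % (entry, md["mtime"], num_shards))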

Re: [ceph-users] Removing orphaned radosgw bucket indexes from pool

2018-11-29 Thread Bryan Stillwell
The num_shards is typically around 46, but looking at all 288 instances of that bucket index, it has varied between 3 and 62 shards. Have you figured anything more out about this since you posted this originally two weeks ago? Thanks, Bryan

From: ceph-users on behalf of Wido den Hollander
Date: Thursday, November 15, 2018 ...
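
One way to narrow down which of those 288 instances is still live is to compare each instance id against the bucket's current id; everything else is a candidate leftover index. A rough sketch under the assumption that "radosgw-admin bucket stats" reports "bucket" and "id" fields per bucket (verify on a test cluster before acting on the output):

#!/usr/bin/env python3
# Sketch: list bucket.instance entries whose instance id no longer matches
# the bucket's current id, i.e. candidates for stale/orphaned indexes.
# The output field names ("bucket", "id") are assumptions; double-check.
import json
import subprocess

def radosgw_admin(*args):
    return json.loads(subprocess.check_output(("radosgw-admin",) + args))

# current instance id per bucket, from "bucket stats" run over all buckets
current = {b["bucket"]: b["id"] for b in radosgw_admin("bucket", "stats")}

for entry in radosgw_admin("metadata", "list", "bucket.instance"):
    bucket, _, instance_id = entry.rpartition(":")
    if current.get(bucket) != instance_id:
        print("possibly stale:", entry)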

[ceph-users] Removing orphaned radosgw bucket indexes from pool

2018-11-15 Thread Wido den Hollander
Hi, Recently we've seen multiple messages on the mailing lists about people seeing HEALTH_WARN due to large OMAP objects on their cluster. This is because, starting with 12.2.6, OSDs warn about this. I've got multiple people asking me the same questions and I've done some digging ...
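
The check behind that warning boils down to counting omap keys per object in the RGW index pool. A small sketch of doing that by hand; the pool name is an assumption for a default setup, and the counts should be compared against your osd_deep_scrub_large_omap_object_key_threshold:

#!/usr/bin/env python3
# Sketch: count omap keys per object in an RGW index pool so the objects
# behind a "large omap objects" HEALTH_WARN can be identified. Bucket index
# shards are named ".dir.<marker>[.<shard>]". The pool name is an assumption.
import subprocess

POOL = "default.rgw.buckets.index"  # adjust for your cluster

objects = subprocess.check_output(["rados", "-p", POOL, "ls"]).decode().split()
counts = []
for obj in objects:
    keys = subprocess.check_output(["rados", "-p", POOL, "listomapkeys", obj])
    counts.append((len(keys.splitlines()), obj))

# largest first; compare against osd_deep_scrub_large_omap_object_key_threshold
for n, obj in sorted(counts, reverse=True)[:20]:
    print(n, obj)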