On 6/11/19 9:48 PM, J. Eric Ivancich wrote:
Hi Wido,
Interleaving below
On 6/11/19 3:10 AM, Wido den Hollander wrote:
>
> I thought it was resolved, but it isn't.
>
> I counted all the OMAP values for the GC objects and I got back:
>
> gc.0: 0
> gc.11: 0
> gc.14: 0
> gc.15: 0
> gc.16: 0
> gc.18: 0
> gc.19: 0
> gc.1: 0
> gc.20: 0
>
On 6/4/19 8:00 PM, J. Eric Ivancich wrote:
On 6/4/19 7:37 AM, Wido den Hollander wrote:
> I've set up a temporary machine next to the 13.2.5 cluster with the
> 13.2.6 packages from Shaman.
>
> On that machine I'm running:
>
> $ radosgw-admin gc process
>
> That seems to work as intended! So the PR seems to have fixed it.
>
> Should be
On 5/29/19 11:22 PM, J. Eric Ivancich wrote:
Hi Wido,
When you run `radosgw-admin gc list`, I assume you are *not* using the
"--include-all" flag, right? If you're not using that flag, then
everything listed should be expired and be ready for clean-up. If after
running `radosgw-admin gc process` the same entries appear in
`radosgw-admin gc
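(The quoted message is cut off above. For reference, a minimal sketch of the invocations being discussed, using only the subcommands and the flag named in this thread; the comments are paraphrases of the explanation above:

$ radosgw-admin gc list                 # only entries whose expiration has passed, i.e. ready for clean-up
$ radosgw-admin gc list --include-all   # every entry, expired or not
$ radosgw-admin gc process              # attempt clean-up of the expired entries
)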
Hi,
I've got a Ceph cluster with this status:
    health: HEALTH_WARN
            3 large omap objects
After looking into it I see that the issue comes from objects in the
'.rgw.gc' pool.
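(A sketch of how the warning can typically be traced back to specific objects, assuming a standard Mimic-era setup where the monitors write the cluster log to /var/log/ceph/ceph.log; the exact log wording can vary by release:

$ ceph health detail
$ grep -i 'large omap object found' /var/log/ceph/ceph.log

The cluster log line identifies the object, and therefore the pool, that triggered the warning.)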
Investigating it I found that the gc.* objects have a lot of OMAP keys:
for OBJ in $(rados -p .rgw.gc
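(The loop is cut off above. A minimal sketch of the kind of per-object count it describes, using standard rados subcommands; the pool name comes from the message, the loop body is an assumption:

for OBJ in $(rados -p .rgw.gc ls); do
  # print each GC shard object followed by its OMAP key count
  echo -n "$OBJ: "
  rados -p .rgw.gc listomapkeys "$OBJ" | wc -l
done
)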