Hi,
I've got a Ceph cluster with this status:
health: HEALTH_WARN
3 large omap objects
After looking into it I see that the issue comes from objects in the
'.rgw.gc' pool: the gc.* objects there have a lot of OMAP keys, which I
counted with:
for OBJ in $(rados -p .rgw.gc ls); do
    echo "$OBJ"
    rados -p .rgw.gc listomapkeys "$OBJ" | wc -l
done
I then found out that on average these objects have about 100k OMAP
keys each, but two stand out with about 3M OMAP keys.
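To spot those outliers directly, the loop above can be extended to rank the gc.* objects by key count. This is just a sketch assuming the same rados CLI invocations as above:

```shell
# Sketch: rank the objects in a pool by OMAP key count, largest first,
# so the handful of oversized gc.* objects show up at the top.
rank_gc_omap() {
    pool="$1"
    for obj in $(rados -p "$pool" ls); do
        printf '%8d %s\n' "$(rados -p "$pool" listomapkeys "$obj" | wc -l)" "$obj"
    done | sort -rn
}

# Example invocation on the gc pool:
# rank_gc_omap .rgw.gc
```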
I can list the GC entries with 'radosgw-admin gc list', which yields a
JSON document a couple of MB in size.
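For what it's worth, a short script can summarize that JSON per gc tag. A minimal sketch, assuming the gc list output is a JSON array of entries each carrying a "tag" and an "objs" list (the sample data below is made up for illustration, not real cluster output):

```python
import json

# Hypothetical sample of what `radosgw-admin gc list` emits; on a real
# cluster you would feed the actual command output in instead.
sample = json.dumps([
    {"tag": "gc-tag-1", "time": "2019-02-25 12:00:00", "objs": [
        {"pool": "default.rgw.buckets.data", "oid": "obj-a"},
        {"pool": "default.rgw.buckets.data", "oid": "obj-b"},
    ]},
    {"tag": "gc-tag-2", "time": "2019-02-25 12:05:00", "objs": [
        {"pool": "default.rgw.buckets.data", "oid": "obj-c"},
    ]},
])

def count_gc_objs(gc_list_json):
    """Return {tag: number of objects still pending GC} from gc list JSON."""
    entries = json.loads(gc_list_json)
    return {e["tag"]: len(e.get("objs", [])) for e in entries}

counts = count_gc_objs(sample)
print(counts)  # prints {'gc-tag-1': 2, 'gc-tag-2': 1}
```

On a real cluster you'd pipe 'radosgw-admin gc list' into something like this to see which tags account for the bulk of the pending entries.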
I ran:
$ radosgw-admin gc process
That runs for hours and then finishes, but the large number of OMAP
keys remains.
Running Mimic 13.2.5 on this cluster.
Has anybody seen this before?
Wido
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com