Hi,
After deleting the orphaned bucket index shard objects, the `large omap
objects` warning disappeared temporarily, but it reappeared the next day.
```
$ ceph health detail
HEALTH_WARN 20 large omap objects
[WRN] LARGE_OMAP_OBJECTS: 20 large omap objects
    20 large objects found in pool 'ceph-poc-object-store-ssd-index.rgw.buckets.index'
    Search the cluster log for 'Large omap object found' for more details.
```
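As the output suggests, the specific objects that were flagged are recorded in
the cluster log. One way to search it, assuming the log is readable at the
default path on a mon host (adjust for your deployment):
```
# Search the cluster log on a mon host for the deep-scrub findings.
# /var/log/ceph/ceph.log is the default location; yours may differ.
$ grep 'Large omap object found' /var/log/ceph/ceph.log
```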
I counted the omap keys of each index object with the rados command, and every
count was zero. This seems correct, as only empty buckets remain.
```
$ POOL=ceph-poc-object-store-ssd-index.rgw.buckets.index
$ OBJS=$(rados ls --pool $POOL)
$ for OBJ in $OBJS; do echo $OBJ; rados -p $POOL listomapkeys $OBJ | wc -l; done
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.9
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.3
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.5
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.10
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.6
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.9
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.8
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.4
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.7
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.2
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.4
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.0
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.7
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.1
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.10
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.3
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.0
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.8
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.2
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.1
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.5
0
.dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.6
0
```
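For reference, one way to perform the orphan check mentioned at the top is to
compare the `.dir.*` objects in the index pool against the bucket instances
RGW still tracks. This is only a sketch: it assumes `jq` is installed, and the
instance-ID parsing is an assumption based on the object names shown above.
```
# Bucket instances RGW still knows about, one "<bucket>:<instance-id>" per entry.
$ radosgw-admin metadata list bucket.instance | jq -r '.[]' \
    | awk -F: '{print $NF}' | sort -u > /tmp/valid-instances

# Index shards are named ".dir.<instance-id>.<shard>"; report any shard whose
# instance ID is no longer tracked. The sed pattern assumes a numeric shard suffix.
$ rados ls --pool $POOL | while read OBJ; do
    ID=$(echo "$OBJ" | sed -e 's/^\.dir\.//' -e 's/\.[0-9]*$//')
    grep -qx "$ID" /tmp/valid-instances || echo "orphan: $OBJ"
  done
```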
However, the OSDs backing the index pool still report omap usage (the OMAP
column below), which looks odd given that every index object has zero omap
keys. Is this condition expected? Would you have any idea how to resolve it?
```
$ ceph osd df | grep ssd
ID  CLASS  WEIGHT   REWEIGHT  SIZE   RAW USE  DATA    OMAP     META     AVAIL     %USE  VAR    PGS  STATUS
<...snip...>
14  ssd    1.00000  1.00000   1 TiB  18 GiB   18 GiB  100 MiB  373 MiB  1006 GiB  1.80  90.08   98  up
23  ssd    1.00000  1.00000   1 TiB  11 GiB   11 GiB   55 MiB  629 MiB  1013 GiB  1.09  54.58  102  up
21  ssd    1.00000  1.00000   1 TiB  17 GiB   17 GiB   96 MiB  380 MiB  1007 GiB  1.69  84.57  100  up
22  ssd    1.00000  1.00000   1 TiB  12 GiB   12 GiB   18 MiB  395 MiB  1012 GiB  1.18  58.82  100  up
19  ssd    1.00000  1.00000   1 TiB  14 GiB   13 GiB   83 MiB  310 MiB  1010 GiB  1.33  66.59   93  up
20  ssd    1.00000  1.00000   1 TiB  16 GiB   15 GiB   93 MiB  612 MiB  1008 GiB  1.56  77.79  107  up
```
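To correlate that omap usage with the index pool, the pool's placement groups
and the OSDs they map to can be listed; a sketch:
```
# List the index pool's PGs together with their acting OSD sets.
$ ceph pg ls-by-pool ceph-poc-object-store-ssd-index.rgw.buckets.index
```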
Thanks,
Yuji