All;
We run two Nautilus clusters with RADOSGW replication (14.2.11 --> 14.2.16).
Initially our bucket grew very quickly, as I was loading old data into it, and
we quickly ran into Large OMAP Object warnings.
I have since done a couple of manual reshards, which have fixed the warning on
the primary cluster. I have never been able to get rid of the issue on the
replica cluster.
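For reference, the manual reshards were along these lines (the bucket name and
shard count shown here are illustrative, not necessarily exactly what I ran
each time):
radosgw-admin bucket reshard --bucket=nextcloud --num-shards=64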
A prior conversation on this list led me to this command:
radosgw-admin reshard stale-instances list --yes-i-really-mean-it
The results look like this:
[
"nextcloud-ra:f91aeff8-a365-47b4-a1c8-928cd66134e8.185262.1",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.6",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.2",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.5",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.4",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.3",
"nextcloud:f91aeff8-a365-47b4-a1c8-928cd66134e8.53761.1",
"3520ae821f974340afd018110c1065b8/OS
Development:f91aeff8-a365-47b4-a1c8-928cd66134e8.4298264.1",
"10dfdfadb7374ea1ba37bee1435d87ad/volumebackups:f91aeff8-a365-47b4-a1c8-928cd66134e8.4298264.2",
"WorkOrder:f91aeff8-a365-47b4-a1c8-928cd66134e8.44130.1"
]
I find this particularly interesting, as the nextcloud-ra, <swift>/OS Development,
<swift>/volumebackups, and WorkOrder buckets no longer exist.
When I run:
for obj in $(rados -p 300.rgw.buckets.index ls | grep f91aeff8-a365-47b4-a1c8-928cd66134e8.3512190.1); do
    printf "%-60s %7d\n" "$obj" $(rados -p 300.rgw.buckets.index listomapkeys "$obj" | wc -l)
done
I get the expected 64 shard entries, each with key counts around 20000 +/- 1000.
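(For context, a bucket's current instance ID can be checked with something like
the following; the bucket name here is just an example:
radosgw-admin bucket stats --bucket=nextcloud | grep '"id"'
)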
Are the stale instances listed above OK to delete? If so, how do I go about
doing so?
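I believe the removal counterpart would be something like the following
(assuming its syntax mirrors the list subcommand):
radosgw-admin reshard stale-instances rm --yes-i-really-mean-it
but since this is a multisite setup, I'm hesitant to run it without
confirmation that it's safe here.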
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
[email protected]
www.PerformAir.com