Hi,
I'm running reef / cephadm. A user wanted two buckets removed (one
empty, the other containing a large number (~half a million) of small
objects). In the primary zone I ran:
radosgw-admin bucket rm --bucket=name --bypass-gc --purge-objects
[which I've previously used just fine on single-site clusters for
removing buckets with very many objects]
That returned OK for both buckets (quickly for the empty one, after
about 20 minutes for the many-object one). 24 hours later, the empty
bucket is gone from both zones, but the many-object one is still
present in the secondary zone.
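For reference, the below is from radosgw-admin bucket sync status
--bucket=gitlab-artifacts, run on the secondary: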
source bucket :gitlab-artifacts[64f0dd71-48bf-45aa-9741-69a51c083556.75705.1])
incremental sync on 11 shards
bucket is behind on 11 shards
behind shards: [0,1,2,3,4,5,6,7,8,9,10]
...which I think shows that no shards have caught up, even a day
later? Similarly, the overall sync status shows we're behind on 1
metadata shard and 11 data shards. There are no reported errors in
sync.
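(I'm going by radosgw-admin sync error list being empty on both sides;
I assume that's the right place to look.)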
Is there a way to see if this is actually making headway? And/or to get
rid of the bucket in the secondary zone?
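The best I've come up with for watching progress is just polling the
same status and hoping the behind-shards list shrinks, e.g. (rough,
and assuming that output is the right thing to poll):

watch -n 60 'radosgw-admin bucket sync status --bucket=gitlab-artifacts'

...but that only says which shards are behind, not how far behind.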
For future reference, would I have been better off stopping sync for
the bucket and then running bucket rm in each zone separately?
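i.e. something like this (untested, so I may have the sequence wrong):

radosgw-admin bucket sync disable --bucket=name
[wait for the bucket's sync to quiesce, then in each zone:]
radosgw-admin bucket rm --bucket=name --bypass-gc --purge-objects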
FWIW, this is a small (3-4 OSD hosts per site) and slow (HDDs, with
NVMe for block.db) cluster.
Thanks,
Matthew