Hi,

It sounds like the .rgw.bucket.index pool has grown, possibly due to a
problem with dynamic bucket resharding.
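
To see what is actually consuming the space, it may be worth listing the
index pool directly (the pool name below is only a placeholder, substitute
whatever your index pool is actually called):

rados df
rados -p <index_pool> ls | head

If I remember right, the index objects are named .dir.<bucket_id> or
.dir.<bucket_id>.<shard_number>, which at least tells you which bucket
instances the objects in the pool belong to.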

I wonder if the stale/old/unused bucket indexes need to be purged using
something like the below [1]

radosgw-admin bi purge --bucket=<bucket_name> --bucket-id=<old_bucket_id>

I'm not sure how you would find the old_bucket_id, however.
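
From memory (untested, so treat this as a sketch rather than a known
procedure), you could perhaps compare the bucket's current id against the
bucket instances held in the metadata:

radosgw-admin bucket stats --bucket=<bucket_name>
radosgw-admin metadata list bucket.instance

The stats output should include the bucket's current "id"; any instance
listed for that bucket under a different id would presumably be an old one
left behind by resharding.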

Thanks

[1]
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/object_gateway_guide_for_ubuntu/administration_cli


On Wed, Jun 20, 2018 at 12:34 PM, Tom W <to...@ukfast.co.uk> wrote:

> Hi all,
>
>
>
> We have recently upgraded from Jewel (10.2.10) to Luminous (12.2.5), and
> after this we decided to update our tunables configuration to the optimal
> profile, having previously been on Firefly. During this process we have
> noticed the OSDs (bluestore) behind the RGW index and GC pools rapidly
> filling. We estimated the index to consume around 30G of space and the GC
> pool a negligible amount, but they are now filling all 4 OSDs per host,
> each of which is a 2TB SSD.
>
>
>
> Does anyone have any experience with this, or know how to determine why
> this sudden growth has occurred during the recovery after the tunables
> update?
>
>
>
> We have disabled resharding activity due to this issue
> (https://tracker.ceph.com/issues/24551), and our gc queue only contains a
> few items at present.
>
>
>
> Kind Regards,
>
>
>
> Tom
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
