It looks like "limit check" counts _all_ objects, but the resharder only
uses stats from rgw.main. Is that correct?

If so, why? Shouldn't the auto-resharding act on the total number of
objects in the bucket?

I have around 5 million objects in rgw.none and 3 million in rgw.main;
limit check reports 8-9 million objects.
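
(For anyone who wants to compare on their own cluster: the per-category
split is visible in bucket stats; the bucket name here is just the one
from this thread:

$ radosgw-admin bucket stats --bucket=k1-snapshots

and then look at the "usage" section, which breaks num_objects down per
category, e.g. "rgw.none" and "rgw.main".)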

(ceph version 19.2.3)
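
Next things I plan to poke at, in case it helps anyone: a sketch using
stock radosgw-admin commands. The target shard count is only a guess
from num_objects / rgw_max_objs_per_shard (5,766,564 / 50,000 = ~116 if
only rgw.main counts; ~8,900,000 / 50,000 = ~178 if the limit-check
total applies), rounded up to a prime:

$ radosgw-admin reshard list                          # is the bucket queued at all?
$ radosgw-admin reshard status --bucket=k1-snapshots  # per-shard reshard state
$ radosgw-admin reshard process                       # run the reshard queue right away
$ radosgw-admin bucket reshard --bucket=k1-snapshots --num-shards=179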


On Mon, 8 Dec 2025 at 10:10, Johan Thomsen via ceph-users <
[email protected]> wrote:

> Hi
>
> For some reason, the auto-resharding is not doing anything, even though it
> reports fill status over 100%. See below:
>
> $ radosgw-admin bucket limit check --rgw-zonegroup=k1-objectstore --rgw-zone=k1-objectstore
>
> 2025-12-08T08:45:57.143+0000 7f8921528900 0 ERROR: current period 43e92ddb-d0d2-4fbc-b18b-dc87126a7664 does not contain zone id 1da0f602-9397-4997-8e76-3e3425468746
> 2025-12-08T08:45:57.166+0000 7f8921528900 0 period (43e92ddb-d0d2-4fbc-b18b-dc87126a7664) does not have zone 1da0f602-9397-4997-8e76-3e3425468746 configured
>
> [
>     {
>         "user_id": "k1-ceph-user",
>         "buckets": [
>             {
>                 "bucket": "k1-snapshots",
>                 "tenant": "",
>                 "num_objects": 5766564,
>                 "num_shards": 97,
>                 "objects_per_shard": 59449,
>                 "fill_status": "OVER 114%"
>             }
>         ]
>     }
> ]
>
> Here's a dump of related config:
>
> rgw_dynamic_resharding true
> rgw_max_dynamic_shards 1999
> rgw_max_objs_per_shard 50000
> rgw_md_log_max_shards 64
> rgw_objexp_hints_num_shards 127
> rgw_override_bucket_index_max_shards 1031
> rgw_reshard_batch_size 64
> rgw_reshard_bucket_lock_duration 360
> rgw_reshard_max_aio 128
> rgw_reshard_num_logs 16
> rgw_reshard_thread_interval 600
> rgw_safe_max_objects_per_shard 52000
> rgw_shard_warning_threshold 90.000000
> rgw_usage_max_shards 32
>
> Does anyone have an idea why it is not auto-resharding down to at most
> 52k objects per shard?