Also found what the 2nd problem was:
When there are pools using the default replicated_ruleset while multiple
rulesets with different device classes exist, the autoscaler does not
produce any output.
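
To check whether a cluster matches this condition, something like the
following should show it (replicated_hdd is a made-up rule name here;
replicated_ruleset and .rgw.buckets.data are the ones from my clusters
below):

~# ceph osd crush rule ls
replicated_ruleset
replicated_hdd
~# ceph osd pool get .rgw.buckets.data crush_rule
crush_rule: replicated_ruleset
~# ceph osd crush rule dump replicated_hdd | grep item_name
        "item_name": "default~hdd"

A class-restricted rule takes the root with the device class appended
(default~hdd), while the default rule takes the plain root, so the two
overlap.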

Should I open a bug for that?
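
If it is the same root cause, I would guess that pointing every pool at an
explicit device-class rule instead of the default one brings the
autoscaler output back, e.g.:

~# ceph osd pool set .rgw.buckets.data crush_rule replicated_hdd

(again with replicated_hdd standing in for a real class-specific rule),
but I have not verified that.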

On Wed, Oct 4, 2023 at 14:36 Boris Behrens <b...@kervyn.de> wrote:

> Found the bug for the TOO_MANY_PGS warning: https://tracker.ceph.com/issues/62986
> But I am still not sure why I don't have any output on that one cluster.
>
> On Wed, Oct 4, 2023 at 14:08 Boris Behrens <b...@kervyn.de> wrote:
>
>> Hi,
>> I've just upgraded our object storage clusters to the latest Pacific
>> version (16.2.14) and the autoscaler is acting weird.
>> On one cluster it just shows nothing:
>> ~# ceph osd pool autoscale-status
>> ~#
>>
>> On the other clusters it shows this when the autoscaler is set to warn:
>> ~# ceph health detail
>> ...
>> [WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups
>>     Pool .rgw.buckets.data has 1024 placement groups, should have 1024
>>     Pool device_health_metrics has 1 placement groups, should have 1
>>
>> Version 16.2.13 seems to behave normally.
>> Is this a known bug?


-- 
The "UTF-8 problems" self-help group will exceptionally meet in the big
hall this time.