I usually set it to warn, so I don't forget to check from time to time :)
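
For anyone who wants to do the same, this is roughly what I run (just a sketch; "rbd" is a placeholder pool name, substitute your own pools):

~# ceph osd pool set rbd pg_autoscale_mode warn
~# ceph osd pool autoscale-status

The first command switches the pool to warn mode, the second prints the autoscaler's current recommendations so you can review them from time to time.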

On Thu, 5 Oct 2023 at 12:24, Eugen Block <ebl...@nde.ag> wrote:

> Hi,
>
> I strongly agree with Joachim, I usually disable the autoscaler in
> production environments. But the devs would probably appreciate bug
> reports to improve it.
>
> Zitat von Boris Behrens <b...@kervyn.de>:
>
> > Hi,
> > I've just upgraded our object storage clusters to the latest Pacific
> > version (16.2.14) and the autoscaler is acting weird.
> > On one cluster it just shows nothing:
> > ~# ceph osd pool autoscale-status
> > ~#
> >
> > On the other clusters it shows this when it is set to warn:
> > ~# ceph health detail
> > ...
> > [WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups
> >     Pool .rgw.buckets.data has 1024 placement groups, should have 1024
> >     Pool device_health_metrics has 1 placement groups, should have 1
> >
> > Version 16.2.13 seems to act normally.
> > Is this a known bug?

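For the archives: disabling it entirely, as Eugen describes for production, is the same per-pool command with "off", and if I remember correctly the default mode for newly created pools can be changed as well (again a sketch, "rbd" is just a placeholder):

~# ceph osd pool set rbd pg_autoscale_mode off
~# ceph config set global osd_pool_default_pg_autoscale_mode off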

-- 
This time the self-help group "UTF-8-Probleme" will, as an exception, meet
in the big hall.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
