The balancer does a pretty good job. It's the PG autoscaler that has bitten
us frequently enough that we always ensure it is disabled for all pools.
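
For anyone who wants to do the same, this is roughly what we run (<pool> is a
placeholder, and the global default knob assumes a reasonably recent release):

    # Turn the autoscaler off for an existing pool
    ceph osd pool set <pool> pg_autoscale_mode off

    # Make "off" the default for newly created pools
    ceph config set global osd_pool_default_pg_autoscale_mode off

    # Confirm the per-pool autoscaler state
    ceph osd pool autoscale-status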

David

On Mon, Nov 1, 2021 at 2:08 PM Alexander Closs <acl...@csail.mit.edu> wrote:

> I can add another two positive data points for the balancer: my personal and
> work clusters are both balancing happily.
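>
> In case it's useful, turning it on is roughly this (upmap mode requires all
> clients to be Luminous or newer):
>
>     ceph osd set-require-min-compat-client luminous
>     ceph balancer mode upmap
>     ceph balancer on
>     ceph balancer status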
>
> Good luck :)
> -Alex
>
> On 11/1/21, 3:05 PM, "Josh Baergen" <jbaer...@digitalocean.com> wrote:
>
>     Well, those who have negative reviews are often the most vocal. :)
>     We've had few, if any, problems with the balancer in our own use of
>     it.
>
>     Josh
>
>     On Mon, Nov 1, 2021 at 12:58 PM Szabo, Istvan (Agoda)
>     <istvan.sz...@agoda.com> wrote:
>     >
>     > Yeah, I'm just following the autoscaler for now; it suggested 128. I might
> enable the balancer later, but the negative feedback about it makes me a bit
> wary.
>     >
>     > Istvan Szabo
>     > Senior Infrastructure Engineer
>     > ---------------------------------------------------
>     > Agoda Services Co., Ltd.
>     > e: istvan.sz...@agoda.com
>     > ---------------------------------------------------
>     >
>     > On 2021. Nov 1., at 19:29, Josh Baergen <jbaer...@digitalocean.com> wrote:
>     >
>     >
>     > To expand on the comments below, "max avail" takes into account usage
>     > imbalance between OSDs. There's a pretty significant imbalance in this
>     > cluster, and Ceph assumes that the imbalance will continue, so it
>     > indicates that there's not much room left in the pool. Rebalancing
>     > that pool will make a big difference in terms of top-OSD fullness and
>     > the "max avail" metric.
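>     >
>     > A quick way to see the imbalance, and what the balancer thinks of the
>     > current distribution:
>     >
>     >     ceph osd df        # per-OSD utilization; check the VAR/STDDEV figures
>     >     ceph balancer eval # prints a cluster score (lower is better)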
>     >
>     > Josh
>     >
>     > On Mon, Nov 1, 2021 at 12:25 PM Alexander Closs <acl...@csail.mit.edu> wrote:
>     >
>     >
>     > Max available = the free space actually usable right now given current OSD
>     > usage; it doesn't include already-used space.
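>     >
>     > Very roughly, for a replicated pool (the real calculation also weights by
>     > CRUSH and the full ratio):
>     >
>     >     max avail ≈ (free space on the most-loaded OSD, scaled up by that
>     >                  OSD's share of the pool) / replica count
>     >
>     > so one nearly-full OSD drags max avail down for the whole pool.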
>     >
>     >
>     > -Alex
>     >
>     > MIT CSAIL
>     >
>     >
>     > On 11/1/21, 2:18 PM, "Szabo, Istvan (Agoda)" <istvan.sz...@agoda.com> wrote:
>     >
>     >
>     >    It says max available: 115 TB while current use is 104 TB. What I don't
>     >    understand is where the max available comes from, because no object or
>     >    size limit is set on the pool:
>     >
>     >
>     >    quotas for pool 'sin.rgw.buckets.data':
>     >
>     >      max objects: N/A
>     >
>     >      max bytes  : N/A
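>     >
>     >    (that's the output of: ceph osd pool get-quota sin.rgw.buckets.data)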
>     >
>     >
>     >    Istvan Szabo
>     >    Senior Infrastructure Engineer
>     >    ---------------------------------------------------
>     >    Agoda Services Co., Ltd.
>     >    e: istvan.sz...@agoda.com
>     >    ---------------------------------------------------
>     >
>     >
>     >    On 2021. Nov 1., at 18:48, Etienne Menguy <etienne.men...@croit.io> wrote:
>     >
>     >
>     >    sin.rgw.buckets.data  24  128  104 TiB  104 TiB  0 B  1.30G  156 TiB  156 TiB  0 B  47.51  115 TiB  N/A  N/A  1.30G  0 B  0 B
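>     >
>     >    (If that row matches a recent ceph df detail layout, it reads roughly
>     >    as: ID 24, 128 PGs, 104 TiB stored, 1.30G objects, 156 TiB used,
>     >    47.51 %USED, 115 TiB max avail.)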
>     >
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
