[ceph-users] Re: Advice on enabling autoscaler

2022-02-09 Thread Maarten van Ingen
Hi, Thanks for the suggestions so far. We enabled the balancer first to make sure the PG distribution is more even. After a few OSD additions/replacements and data growth it was no longer optimal. We enabled upmap mode, as this was suggested to be better than the default setting. To limit simultaneous
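The balancer steps described above can be sketched as follows. This is a minimal sketch using the standard Ceph balancer commands; it assumes all clients are Luminous or newer (a prerequisite for upmap mode) and is not the exact sequence Maarten ran:

```shell
# Upmap mode requires all clients to speak at least the Luminous protocol
ceph osd set-require-min-compat-client luminous

# Switch the balancer to upmap mode and enable it
ceph balancer mode upmap
ceph balancer on

# Inspect what the balancer is doing or planning
ceph balancer status
```

The `ceph balancer status` output shows whether the module is active and which optimization plan, if any, is in flight.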

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Mark Nelson
On 2/7/22 12:34 PM, Alexander E. Patrakov wrote: Mon, 7 Feb 2022 at 17:30, Robert Sander: And keep in mind that when PGs are increased you may also need to increase the number of OSDs, as one OSD should carry a max of around 200 PGs. But I do not know if that is still the case with

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Alexander E. Patrakov
Mon, 7 Feb 2022 at 17:30, Robert Sander: > And keep in mind that when PGs are increased you may also need to > increase the number of OSDs, as one OSD should carry a max of around 200 > PGs. But I do not know if that is still the case with current Ceph versions. This is just the default
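As a rough sanity check of that ~200-PG-per-OSD guideline, the expected PG replicas per OSD can be estimated from a pool's `pg_num`, its replication size, and the OSD count under the relevant CRUSH root. The numbers below are illustrative assumptions, not figures from this cluster:

```shell
# Each pool contributes pg_num * size PG replicas, spread across the OSDs.
pg_num=4096      # hypothetical pool pg_num
size=3           # replication factor
num_osds=100     # OSDs under the relevant CRUSH root

pg_per_osd=$(( pg_num * size / num_osds ))
echo "${pg_per_osd}"
```

With these example values the result is about 122 PGs per OSD, comfortably under the guideline; summing the same product over all pools sharing a root gives the total load per OSD.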

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Dan van der Ster
> On 02/07/2022 1:51 PM Maarten van Ingen wrote: > One more thing -- how many PGs do you have per OSD right now for the nvme and > hdd roots? > Can you share the output of `ceph osd df tree` ? > > >> This is only 1347 lines of text, you sure you want that :-) On a summary > >> for HDD we have
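Rather than pasting the full 1347-line `ceph osd df tree` output, the per-OSD PG counts can be summarized from the JSON form. This is a sketch assuming `jq` is available and the `pgs` field name used by recent Ceph releases:

```shell
# Summarize min/max/average PGs per OSD instead of the full table
ceph osd df -f json | jq '[.nodes[].pgs] | {min: min, max: max, avg: (add / length)}'
```

A large spread between min and max here is exactly what the balancer is meant to close.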

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Maarten van Ingen
st look at > the above output. > Before taking any steps I was wondering what the best course of action is. As > it's just a few pools affected, doing a manual increase would be an option > for me as well, if recommended. > > As you can see one pool is basically lacking PGs while

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Maarten van Ingen
Hi Robert, Am 07.02.22 um 13:15 schrieb Maarten van Ingen: > As it's just a few pools affected, doing a manual increase would be an > option for me as well, if recommended. > > As you can see one pool is basically lacking PGs while the others are mostly > increasing due to the much higher
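A manual increase for the under-provisioned pool, as discussed above, would look roughly like this. The pool name and target `pg_num` are placeholders, not values from this thread; since Nautilus, `pgp_num` follows `pg_num` automatically:

```shell
# Raise pg_num on a single pool; <pool> and 1024 are placeholders
ceph osd pool set <pool> pg_num 1024

# Confirm the new value and watch the resulting splits/backfill
ceph osd pool get <pool> pg_num
ceph -s
```

Increasing in moderate steps keeps the backfill load bounded, which matches the cautious approach suggested in this thread.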

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Dan van der Ster
would be an option > for me as well, if recommended. > > As you can see one pool is basically lacking PGs while the others are mostly > increasing due to the much higher target_bytes compared to the current usage. > > > From: Dan van der Ster > Sent: Mon

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Robert Sander
Am 07.02.22 um 13:15 schrieb Maarten van Ingen: As it's just a few pools affected, doing a manual increase would be an option for me as well, if recommended. As you can see one pool is basically lacking PGs while the others are mostly increasing due to the much higher target_bytes compared

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Maarten van Ingen
compared to the current usage. From: Dan van der Ster Sent: Monday, 7 February 2022 12:53 To: Maarten van Ingen; ceph-users Subject: Re: [ceph-users] Advice on enabling autoscaler Dear Maarten, For a cluster that size, I would not immediately

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Dan van der Ster
Dear Maarten, For a cluster that size, I would not immediately enable the autoscaler but first enable it in "warn" mode to sanity check what it would plan to do: # ceph osd pool set <pool> pg_autoscale_mode warn Please share the output of "ceph osd pool autoscale-status" so we can help guide what
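Dan's warn-mode suggestion can be sketched as below. The loop over all pools and the cluster-wide default are assumptions about how one might apply it broadly; the thread itself only shows the per-pool form:

```shell
# Put the autoscaler in warn mode on every existing pool
for pool in $(ceph osd pool ls); do
    ceph osd pool set "${pool}" pg_autoscale_mode warn
done

# Optionally make warn the default for pools created later
ceph config set global osd_pool_default_pg_autoscale_mode warn

# Review what the autoscaler would do before letting it act
ceph osd pool autoscale-status
```

In warn mode the autoscaler only raises a health warning when a pool's `pg_num` deviates from its recommendation, so the `autoscale-status` output can be reviewed safely before switching any pool to `on`.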