On 4/15/19 1:13 PM, Alfredo Daniel Rezinovsky wrote:
>
> On 15/4/19 06:54, Jasper Spaans wrote:
>> On 14/04/2019 17:05, Alfredo Daniel Rezinovsky wrote:
>>> autoscale-status reports some of my PG_NUMs are way too big
>>>
>>> I have 256 and need 32
>>>
>>> POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
>>> rbd   1214G               3.0   56490G        0.0645                256     32          warn
You might have some clients with an older version?
Or you may need to run: ceph osd require-osd-release ***?
Best,
Feng
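A hedged sketch of the check Feng is suggesting. The release name "nautilus" below is an assumption for illustration (the PG autoscaler first shipped in Nautilus); substitute whatever release your cluster is actually running:

```shell
# Show the minimum OSD release the cluster currently requires;
# it appears in the dump output as "require_osd_release".
ceph osd dump | grep require_osd_release

# If that is older than the release you are running, raise it.
# "nautilus" is an illustrative assumption, not taken from the thread.
ceph osd require-osd-release nautilus
```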
On Sun, Apr 14, 2019 at 1:31 PM Alfredo Daniel Rezinovsky
wrote:
>
> autoscale-status reports some of my PG_NUMs are way too big
>
> I have 256 and need 32
>
> POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
> rbd   1214G               3.0   56490G        0.0645                256     32          warn
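One note on the AUTOSCALE column in the quoted output: in "warn" mode the autoscaler only reports the suggested NEW PG_NUM, it does not apply it. A minimal sketch of letting it act, assuming a Nautilus-or-later cluster and the rbd pool shown above:

```shell
# Make sure the autoscaler manager module is enabled.
ceph mgr module enable pg_autoscaler

# Switch the pool from "warn" (report only) to "on" (apply changes),
# so the autoscaler adjusts pg_num toward NEW PG_NUM itself.
ceph osd pool set rbd pg_autoscale_mode on
```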
On 07/22/2014 04:11 PM, Chad Seys wrote:
Hi All,
Is it possible to decrease pg_num? I was able to decrease pgp_num, but when
I try to decrease pg_num I get an error:
No. PG splitting is supported, so you can increase pg_num, but merging PGs is
not supported yet.
# ceph osd pool set tibs pg_num
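For context on the answer above: at that time pg_num could only grow. A hedged sketch of the supported direction, using the pool name "tibs" from the quoted command and an illustrative target of 128 (not taken from the thread):

```shell
# Increasing pg_num (PG splitting) is supported; powers of two are typical.
ceph osd pool set tibs pg_num 128

# pgp_num must then be raised to match pg_num for placement to rebalance.
ceph osd pool set tibs pgp_num 128
```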