Re: [ceph-users] Decreasing pg_num

2019-04-15 Thread Wido den Hollander


On 4/15/19 1:13 PM, Alfredo Daniel Rezinovsky wrote:
> 
> On 15/4/19 06:54, Jasper Spaans wrote:
>> On 14/04/2019 17:05, Alfredo Daniel Rezinovsky wrote:
>>> autoscale-status reports some of my PG_NUMs are way too big
>>>
>>> I have 256 and need 32
>>>
>>> POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
>>> rbd   1214G               3.0   56490G        0.0645                256     32          warn
>>>
>>> If I try to decrease the pg_num I get:
>>>
>>> # ceph osd pool set rbd pg_num  32
>>> Error EPERM: nautilus OSDs are required to decrease pg_num
>>>
>>> But all my osds are nautilus
>> This is somewhat hidden in the upgrade docs, and I missed it the first
>> time as well - did you run
>>
>> ceph osd require-osd-release nautilus
>>
>> ?
>>
> That was it. It worked, with very few misplaced objects.
> 
> Should I also decrease pgp_num?
> 

Yes, both should be decreased to the same value.

Wido
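
For reference, a minimal sketch of the sequence discussed here, using the rbd
pool and the target of 32 PGs taken from the autoscale-status output above:

# ceph osd pool set rbd pg_num 32
# ceph osd pool set rbd pgp_num 32
# ceph osd pool get rbd pg_num
# ceph osd pool get rbd pgp_num

The two get calls just confirm that both values ended up at 32; the actual PG
merging then proceeds gradually in the background.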



Re: [ceph-users] Decreasing pg_num

2019-04-15 Thread Alfredo Daniel Rezinovsky


On 15/4/19 06:54, Jasper Spaans wrote:

On 14/04/2019 17:05, Alfredo Daniel Rezinovsky wrote:

autoscale-status reports some of my PG_NUMs are way too big

I have 256 and need 32

POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
rbd   1214G               3.0   56490G        0.0645                256     32          warn

If I try to decrease the pg_num I get:

# ceph osd pool set rbd pg_num  32
Error EPERM: nautilus OSDs are required to decrease pg_num

But all my osds are nautilus

This is somewhat hidden in the upgrade docs, and I missed it the first
time as well - did you run

ceph osd require-osd-release nautilus

?


That was it. It worked, with very few misplaced objects.

Should I also decrease pgp_num?
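
As a side note, the resulting data movement can be followed with the plain
cluster status command (nothing thread-specific assumed here):

# ceph -s

The data section of its output reports the number and percentage of misplaced
objects while the cluster rebalances.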



Re: [ceph-users] Decreasing pg_num

2019-04-15 Thread Jasper Spaans
On 14/04/2019 17:05, Alfredo Daniel Rezinovsky wrote:
> autoscale-status reports some of my PG_NUMs are way too big
>
> I have 256 and need 32
>
> POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
> rbd   1214G               3.0   56490G        0.0645                256     32          warn
>
> If I try to decrease the pg_num I get:
>
> # ceph osd pool set rbd pg_num  32
> Error EPERM: nautilus OSDs are required to decrease pg_num
>
> But all my osds are nautilus

This is somewhat hidden in the upgrade docs, and I missed it the first
time as well - did you run

ceph osd require-osd-release nautilus

?
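
A quick way to check whether that flag has already been set (a sketch; the
exact output line may differ slightly between releases):

# ceph osd dump | grep require_osd_release
require_osd_release nautilus

If it still reports mimic or older, running the command above is what allows
pg_num to be decreased.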




Re: [ceph-users] Decreasing pg_num

2019-04-14 Thread Feng Zhang
You might have some clients with an older version?

Or do you need to run: ceph osd require-osd-release ***?

Best,

Feng
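
Two quick checks along those lines (a sketch; both are standard commands):

# ceph versions
# ceph features

ceph versions lists which release each mon, mgr and OSD daemon is running, and
ceph features shows the release/feature level of the connected clients.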


On Sun, Apr 14, 2019 at 1:31 PM Alfredo Daniel Rezinovsky wrote:
>
> autoscale-status reports some of my PG_NUMs are way too big
>
> I have 256 and need 32
>
> POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
> rbd   1214G               3.0   56490G        0.0645                256     32          warn
>
> If I try to decrease the pg_num I get:
>
> # ceph osd pool set rbd pg_num  32
> Error EPERM: nautilus OSDs are required to decrease pg_num
>
> But all my osds are nautilus
>
> ceph tell osd.* version
> osd.0: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.1: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.2: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.3: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.4: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.5: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.6: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.7: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.8: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.9: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.10: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
> osd.11: {
>  "version": "ceph version 14.2.0
> (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
> }
>
> Should I leave the pg_num values as they are, or is there a way to reduce them?
>


[ceph-users] Decreasing pg_num

2019-04-14 Thread Alfredo Daniel Rezinovsky

autoscale-status reports some of my PG_NUMs are way too big

I have 256 and need 32

POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
rbd   1214G               3.0   56490G        0.0645                256     32          warn


If I try to decrease the pg_num I get:

# ceph osd pool set rbd pg_num  32
Error EPERM: nautilus OSDs are required to decrease pg_num

But all my osds are nautilus

ceph tell osd.* version
osd.0: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.1: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.2: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.3: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.4: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.5: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.6: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.7: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.8: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.9: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.10: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}
osd.11: {
    "version": "ceph version 14.2.0 
(3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"

}

Should I leave the pg_num values as they are, or is there a way to reduce them?
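
For what it's worth, on Nautilus the autoscaler can also be asked to apply its
suggestion itself instead of only warning (a sketch, using the rbd pool from
the output above):

# ceph osd pool set rbd pg_autoscale_mode on
# ceph osd pool autoscale-status

With the mode set to warn, as in the listing above, it only reports the
recommended pg_num; with it set to on, the mgr adjusts pg_num toward the
recommendation itself.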



Re: [ceph-users] decreasing pg_num?

2014-07-22 Thread Wido den Hollander

On 07/22/2014 04:11 PM, Chad Seys wrote:

Hi All,
   Is it possible to decrease pg_num?  I was able to decrease pgp_num, but when
I try to decrease pg_num I get an error:



No. PG splitting, i.e. increasing pg_num, is supported, but merging PGs is
not supported yet.



# ceph osd pool set tibs pg_num 1024
specified pg_num 1024 <= current 2048

Thanks!
C.




--
Wido den Hollander
Ceph consultant and trainer
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com