If you are running Luminous or newer, you can simply enable the balancer module 
[1].

[1]
http://docs.ceph.com/docs/luminous/mgr/balancer/
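
For reference, a minimal sketch of turning the balancer on (upmap mode assumes all clients are Luminous or newer; crush-compat works with older clients):

```
ceph mgr module enable balancer
ceph osd set-require-min-compat-client luminous   # only needed for upmap mode
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```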


________________________________
From: ceph-users <[email protected]> on behalf of Robert 
LeBlanc <[email protected]>
Sent: Tuesday, June 25, 2019 5:22 PM
To: [email protected]
Cc: ceph-users
Subject: Re: [ceph-users] rebalancing ceph cluster

The placement of PGs is pseudo-random across the cluster and takes into account any CRUSH 
rules, which may also skew the distribution. Having more PGs gives CRUSH more 
options for placing data, but the result still may not be even. The recommendation is 
to have between 100 and 150 PGs per OSD, and you are pretty close. If you aren't 
planning to add any more pools, then splitting the PGs for pools that hold a 
lot of data can help.
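
A sketch of what splitting might look like, assuming a hypothetical pool name and target count; on Luminous/Mimic pgp_num has to be raised as well (Nautilus and later handle that automatically):

```
# check the current per-OSD PG counts and utilization
ceph osd df tree

# split PGs for a data-heavy pool (pool name and count are illustrative)
ceph osd pool set mypool pg_num 2048
ceph osd pool set mypool pgp_num 2048
```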

To get things more balanced, you can reweight the highly utilized OSDs 
down to cause CRUSH to migrate some PGs off of them. That doesn't mean the PGs will get 
moved to the least-utilized OSDs (they might wind up on another one that is 
pretty full), so it may take several iterations to get things balanced. Just 
be sure that if you reweighted an OSD down and its usage is now much lower than the 
others, you reweight it back up to attract some PGs back to it.

```ceph osd reweight {osd-num} {weight}```
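
For example, to nudge PGs off the most-utilized OSD in the output below (osd.7 at ~77% used), something like the following would be a first step; the OSD id and the 0.9 are just illustrative, and it usually takes a few rounds with `ceph osd df` checks in between:

```
ceph osd reweight 7 0.9   # id and weight here are illustrative starting values
```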
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Mon, Jun 24, 2019 at 2:25 AM [email protected] 
<[email protected]> wrote:
Hello everyone,

We have some OSDs in our Ceph cluster.
Some OSDs' usage is more than 77%, while other OSDs' usage is 39% on the same 
host.

I wonder why the OSDs' usage differs so much (the difference is large), and how can I 
fix it?

ID  CLASS   WEIGHT    REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS TYPE NAME
 -2          93.26010        - 93.3TiB 52.3TiB 41.0TiB 56.04 0.98   -     host serverA
…...
 33 HDD  9.09511  1.00000 9.10TiB 3.55TiB 5.54TiB 39.08 0.68  66         osd.4
 45 HDD   7.27675  1.00000 7.28TiB 5.64TiB 1.64TiB 77.53 1.36  81         osd.7
…...

 -5          79.99017        - 80.0TiB 47.7TiB 32.3TiB 59.62 1.04   -     host serverB
  1 HDD   9.09511  1.00000 9.10TiB 4.79TiB 4.31TiB 52.63 0.92  87         osd.1
  6 HDD   9.09511  1.00000 9.10TiB 6.62TiB 2.48TiB 72.75 1.27  99         osd.6
 …...

Thank you
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
