ceph osd df will give you more information: the variance and PG count for
each OSD
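
For example, something along these lines (column layout quoted from memory,
it differs slightly between releases):

  ceph osd df
  # the VAR column shows each OSD's utilization relative to the cluster
  # average, and PGS shows how many PGs are mapped to that OSD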

Ceph does not place data on a per-object basis, but on a per-PG basis

The data distribution is therefore not perfect
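
You can see the per-PG placement for a single object with something like
the following; <pool> and <object-name> are placeholders you would fill in:

  ceph osd map <pool> <object-name>
  # prints the PG the object hashes to and the set of OSDs serving that
  # PG - every object in the same PG lands on the same OSDs
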
You may increase your pg_num, and/or use the mgr balancer module
(http://docs.ceph.com/docs/mimic/mgr/balancer/)
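
A rough sketch of both options, assuming the pool is really named "rbd"
and that 256 PGs is a sensible target for your cluster (check the
PG-per-OSD limits first):

  ceph osd pool set rbd pg_num 256
  ceph osd pool set rbd pgp_num 256

  # or let the mgr balancer even things out (upmap mode needs
  # luminous or newer clients):
  ceph mgr module enable balancer
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on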


On 09/02/2018 01:28 PM, Marc Roos wrote:
> 
> If I have only one rbd SSD pool, replicated 3x, and 4 SSD OSDs, why are 
> these objects so unevenly spread across the four OSDs? Should they not 
> all have 162G?
> 
> 
> [@c01 ]# ceph osd status 2>&1
> +----+------+-------+-------+--------+---------+--------+---------+-----------+
> | id | host |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
> +----+------+-------+-------+--------+---------+--------+---------+-----------+
> | 19 | c01  |  133G |  313G |    0   |     0   |    0   |     0   | exists,up |
> | 20 | c02  |  158G |  288G |    0   |     0   |    0   |     0   | exists,up |
> | 21 | c03  |  208G |  238G |    0   |     0   |    0   |     0   | exists,up |
> | 30 | c04  |  149G |  297G |    0   |     0   |    0   |     0   | exists,up |
> +----+------+-------+-------+--------+---------+--------+---------+-----------+
> 
> All objects in the rbd pool are 4MB, are they not? It should be easy to 
> spread them evenly; what am I missing here?
> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
