Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-03 Thread Marc Roos
finished.

-Original Message-
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: Sunday 2 September 2018 15:53
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

Well, you have more than one pool here
pg_num = 8, size = 3 -> 24
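For reference, a rough sketch of the arithmetic behind that "pg_num = 8, size = 3 -> 24" remark; the pool name "rbd" below is only an assumption for illustration, not taken from the thread:

  [@c01 ]# ceph osd pool get rbd pg_num
  pg_num: 8
  [@c01 ]# ceph osd pool get rbd size
  size: 3

8 PGs x 3 replicas = 24 PG copies to place. Spread over 4 OSDs that is only about 6 PGs each, and because Ceph balances per PG rather than per object, a single unusually large PG can visibly skew one OSD's usage.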

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Jack
> Jack [mailto:c...@jack.fr.eu.org]
> Sent: Sunday 2 September 2018 14:06
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across
> 4 osd's
>
> ceph osd df will get you more information: variation & pg number for
> each

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Marc Roos
-Original Message-
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: Sunday 2 September 2018 14:06
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

ceph osd df will get you more information: variation & pg number for each OSD.
Ceph does not spread objects on a per-object basis, but on a per-PG basis.

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Jack
ceph osd df will get you more information: variation & pg number for each OSD.
Ceph does not spread objects on a per-object basis, but on a per-PG basis.
The data distribution is thus not perfect.
You may increase your pg_num, and/or use the mgr balancer module.
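A minimal sketch of both suggestions, assuming the pool is named "rbd" and picking 64 as an example target pg_num (neither value comes from this thread). ceph osd df shows per-OSD utilisation, variance (VAR) and PG count (PGS); pgp_num has to follow pg_num for data to actually move; the last three commands switch on the mgr balancer in upmap mode:

  [@c01 ]# ceph osd df
  [@c01 ]# ceph osd pool set rbd pg_num 64
  [@c01 ]# ceph osd pool set rbd pgp_num 64
  [@c01 ]# ceph mgr module enable balancer
  [@c01 ]# ceph balancer mode upmap
  [@c01 ]# ceph balancer on

Upmap mode requires all clients to be Luminous-aware (ceph osd set-require-min-compat-client luminous); otherwise crush-compat mode is the fallback.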

[ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Marc Roos
If I have only one rbd ssd pool, 3x replicated, and 4 ssd OSDs, why are these objects so unevenly spread across the four OSDs? Should they all not have 162G?

[@c01 ]# ceph osd status 2>&1
| id | host | used |
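That 162G figure corresponds to a perfectly even split: if the pool holds roughly 216G of data, then 216G x 3 replicas / 4 OSDs = 162G per OSD (the 216G here is only back-of-the-envelope, derived from the 162G above). Because placement happens per PG rather than per object, the real per-OSD usage scatters around that value; the deviation shows up in the VAR column of ceph osd df.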