finished.
-----Original Message-----
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: Sunday, 2 September 2018 15:53
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's
Well, you have more than one pool here
pg_num = 8, size = 3 -> 24 PG copies
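For context, a quick way to check which pools are involved (assuming the
standard Ceph CLI): every replicated pool contributes pg_num * size PG
copies for CRUSH to place on the OSDs.

  ceph osd pool ls detail    # lists each pool with its replicated size and pg_num
  ceph df                    # shows per-pool usage, to see which pools actually hold data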
-----Original Message-----
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: Sunday, 2 September 2018 14:06
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's
ceph osd df will get you more information: the variation & PG count for
each OSD
Ceph does not spread data on a per-object basis, but on a per-PG basis
The data distribution is thus not perfect
You may increase your pg_num, and/or use the mgr balancer module
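As a rough sketch of both suggestions (the pool name "rbd" and the target
of 64 PGs are only assumptions, pick a power of two that fits your
cluster; on releases before Nautilus, pgp_num has to be raised along with
pg_num, and upmap mode requires all clients to be Luminous or newer):

  ceph osd pool set rbd pg_num 64
  ceph osd pool set rbd pgp_num 64
  ceph mgr module enable balancer
  ceph balancer mode upmap
  ceph balancer on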
If I have only one rbd ssd pool, 3x replicated, and 4 ssd osd's, why are
these objects so unevenly spread across the four osd's? Should they not
all have 162G?
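(For a sense of scale, if the pool really has pg_num = 8 as discussed
above: 8 PGs x 3 replicas = 24 PG copies over 4 OSDs, i.e. 6 PG copies
per OSD on average, and each PG holds roughly 1/8 of the pool's data, so
a single PG copy more or less on an OSD already shifts its usage by about
1/6, roughly 17%, relative to the average; a perfectly even 162G per OSD
is not expected at that pg_num.)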
[@c01 ]# ceph osd status 2>&1
[table output truncated: only the column headers id | host | used survive]