I should be able to answer this question for you if you can supply the
output of the following commands. The first prints each of your pool names
along with how many PGs are in that pool; the second shows per-pool usage.
My guess is that you don't have a power-of-2 number of PGs in your pool.
Alternatively, you might have multiple pools, and the PGs from the various
pools are simply different sizes.
ceph osd lspools | tr ',' '\n' | awk '/^[0-9]/ {print $2}' |
  while read pool; do
    echo "$pool: $(ceph osd pool get "$pool" pg_num | cut -d' ' -f2)"
  done
ceph df
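
As an aside, newer releases can skip the pipeline: ceph osd pool ls detail
prints one line per pool with its pg_num included (I believe 12.2.x has
this, but the pipeline above works everywhere).

# alternative: one line per pool, pg_num included on each line
ceph osd pool ls detail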
For me, the output looks like this:
rbd: 64
cephfs_metadata: 64
cephfs_data: 256
rbd-ssd: 32
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    46053G     26751G       19301G         41.91
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd-replica          4       897G     11.36         7006G      263000
    cephfs_metadata      6       141M      0.05          268G       11945
    cephfs_data          7     10746G     43.41        14012G     2795782
    rbd-replica-ssd      9       241G     47.30          268G       75061
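
For context on why I'm asking about powers of 2: Ceph maps an object's
hash onto a PG with ceph_stable_mod(), and when pg_num isn't a power of 2,
the PGs that haven't been split yet cover twice the hash range of the ones
that have, so they end up roughly twice as big. Here's a quick bash sketch
of that folding logic (pg_num=12 is just a made-up example; the arithmetic
is paraphrased from Ceph's ceph_stable_mod):

# ceph_stable_mod(x, b, bmask): with b=pg_num=12 and bmask=15, hash
# values whose low bits land in 12..15 fold back onto PGs 4..7, so
# those four PGs cover twice the hash space of the other eight.
b=12; bmask=15
for x in $(seq 0 15); do
  if (( (x & bmask) < b )); then
    pg=$(( x & bmask ))
  else
    pg=$(( x & (bmask >> 1) ))
  fi
  echo $pg
done | sort -n | uniq -c   # count = relative share of the hash space

The multiple-pools effect shows up in my own ceph df above: cephfs_data
works out to roughly 42G per PG (10746G across 256 PGs) while
cephfs_metadata is around 2M per PG, so two OSDs holding the same number
of PGs can still sit at very different utilization depending on which
pools those PGs came from.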
On Sun, Jun 24, 2018 at 9:48 PM shadow_lin <[email protected]> wrote:
> Hi List,
> The environment is:
> Ceph 12.2.4
> Balancer module on and in upmap mode
> Failure domain is per host, 2 OSDs per host
> EC k=4 m=2
> PG distribution is almost even before and after the rebalancing.
>
>
> After marking out one of the OSDs, I noticed a lot of the data was moving
> onto the other OSD on the same host.
>
> The ceph osd df result is (osd.20 and osd.21 are on the same host, and
> osd.20 was marked out):
>
> ID CLASS WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
> 19  hdd  9.09560  1.00000 9313G 7079G 2233G 76.01 1.00 135
> 21  hdd  9.09560  1.00000 9313G 8123G 1190G 87.21 1.15 135
> 22  hdd  9.09560  1.00000 9313G 7026G 2287G 75.44 1.00 133
> 23  hdd  9.09560  1.00000 9313G 7026G 2286G 75.45 1.00 134
>
> I am using RBD only, so the objects should all be 4M. I don't understand
> why osd.21 got significantly more data than the other OSDs while holding
> the same number of PGs.
> Is this behavior expected, did I misconfigure something, or is it some
> kind of bug?
>
> Thanks
>
>
> 2018-06-25
> shadow_lin
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com