Ok, I had missed this step:
ceph osd pool set rbd pgp_num 160
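To double-check that the change took effect (pg_num and pgp_num both need to be 160 before the new PGs can finish splitting), I believe these two queries are enough:

# ceph osd pool get rbd pg_num
# ceph osd pool get rbd pgp_num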
Now I have:
ceph status
cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
health HEALTH_ERR
9 pgs are stuck inactive for more than 300 seconds
9 pgs stuck inactive
9 pgs stuck unclean
monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
osdmap e21: 6 osds: 6 up, 6 in
flags sortbitwise,require_jewel_osds
pgmap v50: 160 pgs, 1 pools, 0 bytes data, 0 objects
30925 MB used, 22194 GB / 22225 GB avail
143 active+clean
17 activating
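To follow the remaining 17 activating / 9 stuck PGs until everything is active+clean again, I think these are the standard commands (they watch the cluster log and list the stuck PGs):

# ceph -w
# ceph health detail
# ceph pg dump_stuck inactive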
2017-06-14 16:56 GMT+02:00 Stéphane Klein <[email protected]>:
> 2017-06-14 16:40 GMT+02:00 David Turner <[email protected]>:
>
>> Once those PGs have finished creating and the cluster is back to normal
>>
>
> How can I see the cluster migration progress?
>
> Now I have:
>
> # ceph status
> cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
> health HEALTH_WARN
> pool rbd pg_num 160 > pgp_num 64
> monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
> election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
> osdmap e19: 6 osds: 6 up, 6 in
> flags sortbitwise,require_jewel_osds
> pgmap v45: 160 pgs, 1 pools, 0 bytes data, 0 objects
> 30923 MB used, 22194 GB / 22225 GB avail
> 160 active+clean
>
>
--
Stéphane Klein <[email protected]>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane