"143 active+clean
17 activating"
Wait until all of the PGs finish activating and you should be good. Let's
revisit your 160 PGs, though. If you had 128 PGs and 8TB of data in your
pool, then each PG would be about 62.5GB in size. Because you set it to 160
instead of a base-2 number, your PGs will not all be the same size: going
from 128 to 160 splits 32 of the original PGs in half, leaving you with 96
full-size PGs and 64 half-size PGs.
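If you want to see this on your own cluster, per-PG sizes are visible from the standard CLI (assuming the pool is named rbd, as elsewhere in this thread):

ceph pg ls-by-pool rbd   # lists every PG in the pool; the BYTES column shows how much data each PG holds

With a non-base-2 pg_num, two distinct size groups show up in that column.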
And now:

ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_OK
     monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
            election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
     osdmap e21: 6
You increased your pg_num and it finished creating them "160
active+clean". Now you need to increase your pgp_num to match the 160 and
you should be good to go.
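For the record, the exact command (it appears further down the thread as well) is:

ceph osd pool set rbd pgp_num 160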
Ok, I missed:
ceph osd pool set rbd pgp_num 160
Now I have:
ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_ERR
            9 pgs are stuck inactive for more than 300 seconds
            9 pgs stuck inactive
            9 pgs stuck unclean
     monmap e1: 2 mons
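If those 9 PGs stay stuck, the standard CLI can list exactly which ones they are (nothing here is specific to this cluster):

ceph pg dump_stuck inactive   # which PGs are stuck inactive
ceph pg dump_stuck unclean    # which PGs are stuck unclean
ceph health detail            # per-PG detail for anything unhealthy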
2017-06-14 16:40 GMT+02:00 David Turner:
> Once those PG's have finished creating and the cluster is back to normal
>
How can I see the cluster migration progress?
Now I have:
# ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_WARN
            pool rbd pg_num 160
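On the "how can I see progress" question, the usual tools are plain ceph CLI; none of this is specific to this setup:

ceph -w                        # live stream of health and PG state changes
ceph pg stat                   # one-line summary of PG states
ceph osd pool get rbd pg_num   # confirm whether pg_num and pgp_num match yet
ceph osd pool get rbd pgp_num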
A few things to note: it is recommended that your PG count, per pool, be a
base-2 value. Also, the number of PGs per OSD is an aggregate across all of
your pools. If you're planning to add 3 more pools for cephfs and other
things, then you really want to be mindful of how many PGs each pool gets so
that the per-OSD total stays within the target range.
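As a rough worked example with the numbers from this thread (6 OSDs, replica size 2, a target of ~100 PGs per OSD; the four-pool split is hypothetical):

# total PG budget across ALL pools:
#   (100 PGs per OSD * 6 OSDs) / 2 replicas = 300
# split across 4 pools that is 75 PGs each; rounding down to a base-2
# value gives 64 per pool, i.e. 256 PGs total and ~85 PGs per OSD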
Hi,
see comments below.
JC

On Jun 14, 2017, at 07:23, Stéphane Klein wrote:
Hi,

I have this parameter in my Ansible configuration:

pool_default_pg_num: 300 # (100 * 6) / 2 = 300

But I have this error:

# ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_ERR
            73 pgs are stuck inactive for more than 300 seconds
            22 pgs d
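For anyone landing on this thread with the same HEALTH_ERR, two quick sanity checks on PG load (standard ceph CLI):

ceph osd df                # the PGS column shows how many PGs each OSD carries
ceph osd dump | grep pool  # pg_num and pgp_num in effect for every pool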