Hello, everyone!
I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
deploy <http://docs.ceph.com/docs/master/start/quick-ceph-deploy/>' page,
with the following setup:
- 1 x admin / deploy node;
- 3 x OSD and MON nodes;
- each OSD node has 2 x 8 GB HDDs;
The setup was made using VirtualBox images, on Ubuntu 14.04.2.
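For reference, the deploy steps I ran follow the quick-deploy page roughly as below (hostnames match my nodes; the /dev/sdb and /dev/sdc device names are from memory and may differ on your setup):

```shell
# From the admin/deploy node, following the quick-deploy guide:
ceph-deploy new osd-001 osd-002 osd-003         # generate initial ceph.conf with 3 MONs
ceph-deploy install osd-001 osd-002 osd-003     # install Ceph packages on all nodes
ceph-deploy mon create-initial                  # create MONs and gather keys
# Each OSD node has two HDDs (device names assumed):
ceph-deploy osd create osd-001:sdb osd-001:sdc
ceph-deploy osd create osd-002:sdb osd-002:sdc
ceph-deploy osd create osd-003:sdb osd-003:sdc
ceph-deploy admin osd-001 osd-002 osd-003       # push admin keyring for the 'ceph' CLI
```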
After performing all the steps, the 'ceph health' output lists the cluster
in the HEALTH_WARN state, with the following details:
HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck unclean;
64 pgs stuck undersized; 64 pgs undersized; too few pgs per osd (10 < min
20)
The output of 'ceph -s':
cluster b483bc59-c95e-44b1-8f8d-86d3feffcfab
health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs
stuck unclean; 64 pgs stuck undersized; 64 pgs undersized; too few pgs per
osd (10 < min 20)
monmap e1: 3 mons at {osd-003=
192.168.122.23:6789/0,osd-002=192.168.122.22:6789/0,osd-001=192.168.122.21:6789/0},
election epoch 6, quorum 0,1,2 osd-001,osd-002,osd-003
osdmap e20: 6 osds: 6 up, 6 in
pgmap v36: 64 pgs, 1 pools, 0 bytes data, 0 objects
199 MB used, 18166 MB / 18365 MB avail
64 active+undersized+degraded
I have tried to increase the pg_num and pgp_num to 512, as advised here
<http://ceph.com/docs/master/rados/operations/placement-groups/#a-preselection-of-pg-num>,
but Ceph refused to do that, with the following error:
Error E2BIG: specified pg_num 512 is too large (creating 384 new PGs on ~6
OSDs exceeds per-OSD max of 32)
After changing pg_num and pgp_num to 256 instead, as advised here
<http://ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups>,
the warning changed to:
health HEALTH_WARN 256 pgs degraded; 256 pgs stuck unclean; 256 pgs
undersized
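For completeness, these are roughly the commands I used to change the PG counts (I'm assuming the default 'rbd' pool here, since it is the only pool in the cluster):

```shell
# Raise pg_num first, then pgp_num to match:
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
```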
What is the issue behind these warnings, and what do I need to do to fix it?
I'm a newcomer to the Ceph world, so please don't shoot me if this issue
has been answered / discussed countless times before :) I have searched the
web and the mailing list archives, but I couldn't find a working solution.
Any help is highly appreciated. Thank you!
Regards,
Bogdan
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com