On Wed, 16 Sep 2015, Stefan Eriksson wrote:
I have a completely new cluster for testing. It consists of three servers, each acting as both a monitor and an OSD host, with one disk each. The issue is that ceph status shows:

    health HEALTH_WARN
        clock skew detected on mon.ceph01-osd03
        64 pgs degraded
        64 pgs stale
        64 pgs stuck degraded
        64 pgs stuck inactive
        64 pgs stuck stale
        64 pgs stuck unclean
        64 pgs stuck undersized
        64 pgs undersized
        too few PGs per OSD (21 < min 30)
        Monitor clock skew detected
    monmap e1: 3 mons at {ceph01-osd01=192.1.41.51:6789/0,ceph01-osd02=192.1.41.52:6789/0,ceph01-osd03=192.1.41.53:6789/0}
        election epoch 82, quorum 0,1,2 ceph01-osd01,ceph01-osd02,ceph01-osd03
    osdmap e36: 3 osds: 3 up, 3 in
    pgmap v85: 64 pgs, 1 pools, 0 bytes data, 0 objects
        101352 kB used, 8365 GB / 8365 GB avail
        64 stale+undersized+degraded+peered
To start, you can add more PGs and set up NTPd on your servers.

/Jonas

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
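[Editor's note] The "add more PGs" advice matches the "too few PGs per OSD (21 < min 30)" warning above. A common rule of thumb is to target roughly 100 PGs per OSD, divide by the pool's replica count, and round up to the next power of two. A minimal sketch of that calculation (the helper name and the 100-per-OSD target are illustrative assumptions, not part of Ceph):

```python
def recommended_pg_num(num_osds, replicas=3, target_per_osd=100):
    """Rule-of-thumb PG count: ~target_per_osd PGs per OSD, divided by
    the replica count, rounded up to the next power of two.
    (Hypothetical helper for illustration; not a Ceph API.)"""
    raw = num_osds * target_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# The cluster in this thread: 3 OSDs, default pool size 3.
print(recommended_pg_num(3))  # -> 128
```

With the result, the pool could then be grown with something like `ceph osd pool set <pool> pg_num 128` followed by the matching `pgp_num` change; the pool name depends on your setup, and PG counts can only be increased, not decreased, on these Ceph releases, so it pays to compute the target first.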
