Hi Ajitha,

For one, it looks like you don't have enough OSDs for the number of replicas you have specified in the config file. What is the value of your 'osd pool default size' in ceph.conf? If it's "3", for example, then you need to have at least 3 hosts with 1 OSD each (with the default CRUSH rules, IIRC). Alternatively, you could reduce the replication level. You can see how to do that here: http://ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas
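
For example, assuming the default 'rbd' pool is your only pool (just a guess
on my part; 'ceph osd lspools' will tell you for sure), something like this
should bring the replica count down to match your two OSDs:

    # show the current replica count of each pool
    ceph osd dump | grep 'replicated size'

    # keep 2 copies of each object, and allow I/O with only 1
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1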

The other warning ('mon.MON low disk space') means the disk on your monitor VM is nearly full.
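
You can verify that on the monitor VM with something like this (the monitor
stores its data under /var/lib/ceph/mon by default, and IIRC the warning
fires once the filesystem holding it drops below 30% free, per the
'mon data avail warn' setting):

    # check free space on the filesystem holding the monitor's data
    df -h /var/lib/ceph/mon

Freeing up space (or growing the VM's disk) should clear that warning.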

Hope that helps!

Cheers,
Lincoln

On 1/6/2015 5:07 AM, Ajitha Robert wrote:
Hi all,

I have installed ceph using the ceph-deploy utility. I have created three
VMs: one for the monitor+MDS, and the other two for OSDs. The ceph admin
node is a separate machine.


The status and health of ceph are shown below. Can you please suggest what I
can infer from the status? I am a beginner to this.

*ceph status*

   cluster 3a946c74-b16d-41bd-a5fe-41efa96f0ee9
      health HEALTH_WARN 46 pgs degraded; 18 pgs incomplete; 64 pgs stale;
46 pgs stuck degraded; 18 pgs stuck inactive; 64 pgs stuck stale; 64 pgs
stuck unclean; 46 pgs stuck undersized; 46 pgs undersized; mon.MON low disk
space
      monmap e1: 1 mons at {MON=10.184.39.66:6789/0}, election epoch 1,
quorum 0 MON
      osdmap e19: 5 osds: 2 up, 2 in
       pgmap v33: 64 pgs, 1 pools, 0 bytes data, 0 objects
             10304 MB used, 65947 MB / 76252 MB avail
                   18 stale+incomplete
                   46 stale+active+undersized+degraded


*ceph health*

HEALTH_WARN 46 pgs degraded; 18 pgs incomplete; 64 pgs stale; 46 pgs stuck
degraded; 18 pgs stuck inactive; 64 pgs stuck stale; 64 pgs stuck unclean;
46 pgs stuck undersized; 46 pgs undersized; mon.MON low disk space

*ceph -w*
     cluster 3a946c74-b16d-41bd-a5fe-41efa96f0ee9
      health HEALTH_WARN 46 pgs degraded; 18 pgs incomplete; 64 pgs stale;
46 pgs stuck degraded; 18 pgs stuck inactive; 64 pgs stuck stale; 64 pgs
stuck unclean; 46 pgs stuck undersized; 46 pgs undersized; mon.MON low disk
space
      monmap e1: 1 mons at {MON=10.184.39.66:6789/0}, election epoch 1,
quorum 0 MON
      osdmap e19: 5 osds: 2 up, 2 in
       pgmap v31: 64 pgs, 1 pools, 0 bytes data, 0 objects
             10305 MB used, 65947 MB / 76252 MB avail
                   18 stale+incomplete
                   46 stale+active+undersized+degraded

2015-01-05 20:38:53.159998 mon.0 [INF] from='client.? 10.184.39.66:0/1011909'
entity='client.bootstrap-mds' cmd='[{"prefix": "auth get-or-create",
"entity": "mds.MON", "caps": ["osd", "allow rwx", "mds", "allow", "mon",
"allow profile mds"]}]': finished


2015-01-05 20:41:42.003690 mon.0 [INF] pgmap v32: 64 pgs: 18
stale+incomplete, 46 stale+active+undersized+degraded; 0 bytes data, 10304
MB used, 65947 MB / 76252 MB avail
2015-01-05 20:41:50.100784 mon.0 [INF] pgmap v33: 64 pgs: 18
stale+incomplete, 46 stale+active+undersized+degraded; 0 bytes data, 10304
MB used, 65947 MB / 76252 MB avail





*Regards,*
*Ajitha R*



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
