On Wednesday, May 7, 2014 at 20:28, *sm1Ly wrote:
> 
> [sm1ly@salt1 ceph]$ sudo ceph -s
>     cluster 0b2c9c20-985a-4a39-af8e-ef2325234744
>      health HEALTH_WARN 19 pgs degraded; 192 pgs stuck unclean; recovery 
> 21/42 objects degraded (50.000%); too few pgs per osd (16 < min 20)
> 

You might need to adjust the default number of PGs per pool and recreate the pools.
http://ceph.com/docs/master/rados/operations/placement-groups/
http://ceph.com/docs/master/rados/operations/pools/#createpool
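For instance (pool name below is a placeholder, substitute your own), raising the PG count on an existing pool looks roughly like this:

    sudo ceph osd pool set <pool> pg_num 256
    sudo ceph osd pool set <pool> pgp_num 256

Or pass the PG count up front when creating a pool:

    sudo ceph osd pool create <pool> 256 256

256 is only an example value; the placement-groups link above has the actual sizing guidance, but with 12 OSDs it would comfortably clear the 20-PGs-per-OSD minimum the warning is about.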

>      monmap e1: 3 mons at 
> {mon1=10.60.0.110:6789/0,mon2=10.60.0.111:6789/0,mon3=10.60.0.112:6789/0 
> (http://10.60.0.110:6789/0,mon2=10.60.0.111:6789/0,mon3=10.60.0.112:6789/0)}, 
> election epoch 6, quorum 0,1,2 mon1,mon2,mon3
>      mdsmap e6: 1/1/1 up {0=mds1=up:active}, 2 up:standby
>      osdmap e61: 12 osds: 12 up, 12 in
>       pgmap v103: 192 pgs, 3 pools, 9470 bytes data, 21 objects
> 


