Hi,
perhaps your filesystem is too full?
df -k
du -hs /var/lib/ceph/mon/ceph-st3/store.db
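If you want to script that check, here is a minimal sketch (the store path below is the default for mon.st3 -- an assumption, adjust it to your layout, and pick whatever threshold suits you):

```shell
# Warn when the filesystem holding the mon store is nearly full.
# Default store path for mon.st3 assumed; pass another path as $1 to override.
check_mon_fs() {
    store=${1:-/var/lib/ceph/mon/ceph-st3}
    # -P forces POSIX one-line-per-filesystem output; column 5 is "Use%".
    used=$(df -P "$store" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$used" -ge 90 ]; then
        echo "WARNING: mon filesystem ${used}% full"
    else
        echo "mon filesystem ${used}% full"
    fi
}
```
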
What output/error message do you get if you start the mon in the foreground?
ceph-mon -i st3 -d -c /etc/ceph/ceph.conf
Udo
On 15.02.2014 09:30, Vadim Vatlin wrote:
> Hello
> Could you help me please
>
> ceph status
> cluster 4da1f6d8-ca10-4bfa-bff7-c3c1cdb3f888
> health HEALTH_WARN 229 pgs peering; 102 pgs stuck inactive; 236 pgs
> stuck unclean; 1 mons down, quorum 0,1 st1,st2
> monmap e3: 3 mons at
> {st1=109.233.57.226:6789/0,st2=91.224.140.229:6789/0,st3=176.9.250.166:6789/0},
> election epoch 72432, quorum 0,1 st1,st2
> osdmap e714: 3 osds: 3 up, 3 in
> pgmap v1824: 292 pgs, 4 pools, 135 bytes data, 2 objects
> 137 MB used, 284 GB / 284 GB avail
> 7 active
> 56 active+clean
> 188 peering
> 41 remapped+peering
>
> I try to restart st3 monitor
> service ceph -a restart mon.st3
>
> ps aux | grep ceph
> root 9642 1.7 19.8 785988 202260 ? S<sl 12:16 0:11
> /usr/bin/ceph-osd -i 2 --pid-file /var/run/ceph/osd.2.pid -c
> /etc/ceph/ceph.conf
> root 21375 5.0 3.5 212996 35852 pts/0 Sl 12:27 0:00
> /usr/bin/ceph-mon -i st3 --pid-file /var/run/ceph/mon.st3.pid -c
> /etc/ceph/ceph.conf
> root 21393 0.5 0.5 51308 6060 pts/0 S 12:27 0:00 python
> /usr/sbin/ceph-create-keys -i st3
>
> The ceph-create-keys process is stuck and never finishes.
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com