Hi all,
just like ceph osd create, is there a similar command to obtain the
ID of the next new monitor? (i.e. there are 2 mons in the cluster at the
moment, the next one will be the third)
Or is there any other good way to look up the current number of monitors?
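For reference, one generic way to check this (assuming the admin keyring is
available on the node) is to look at the monmap itself:

# ceph mon stat
# ceph mon dump

ceph mon stat prints a one-line summary of the monmap (epoch, monitor names
and quorum), while ceph mon dump lists every monitor with its rank and
address, so the number of entries is the current number of monitors. Note
that, unlike OSDs, monitors are identified by a name (usually the hostname)
rather than by a sequentially allocated numeric ID.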
Thanks,
Jan
Hi all,
I am able to manually deploy a new ceph cluster by successfully
bootstrapping the first monitor:
# ceph -s
cluster 926daa03-5e59-4ae1-a0bd-401a227e74c7
health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
monmap e1: 1 mons at
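For context, the standard manual procedure for adding further monitors from
this point (a sketch of the documented add-a-monitor steps, with <mon-id> and
<ip> as placeholders) is roughly:

# ceph auth get mon. -o /tmp/mon.keyring
# ceph mon getmap -o /tmp/monmap
# ceph-mon -i <mon-id> --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
# ceph-mon -i <mon-id> --public-addr <ip>:6789

The first two commands fetch the existing mon. keyring and the current monmap,
ceph-mon --mkfs builds the new monitor's data directory from them, and
starting the daemon with its public address lets it join the existing monitors.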
Hi all,
approaching ceph today for the first time, so apologies for the basic
questions, I promise I will do all my homework :-)
Following the Storage Cluster Quick Start documentation, I soon got stuck
with the issue below while creating the first mon:
ceph-admin # ceph-deploy mon create
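For comparison, the quick start sequence documented upstream is roughly the
following, with ceph-node1 standing in for the actual monitor host:

ceph-admin # ceph-deploy new ceph-node1
ceph-admin # ceph-deploy mon create ceph-node1
ceph-admin # ceph-deploy gatherkeys ceph-node1

ceph-deploy new writes the initial ceph.conf and monitor keyring, mon create
deploys and starts the monitor on the listed host, and gatherkeys pulls back
the admin and bootstrap keyrings once the monitor has formed a quorum.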
Hi Karan,
On 12/05/2013 10:31 AM, Karan Singh wrote:
Hello Jan
I faced similar kinds of errors and they are really annoying. I tried this
and it worked for me.
Glad to know I am not alone :-), though this does not sound like a very
robust procedure...
1. Your ceph-node1 is now a monitor
Hi Joao,
On 12/05/2013 04:29 PM, Joao Eduardo Luis wrote:
On 12/05/2013 09:16 AM, Jan Kalcic wrote:
It seems ceph-mon does not exit with success, in fact:
ceph-node1 # sudo /usr/bin/ceph-mon -i ceph-node1 --pid-file /var/run/ceph/mon.ceph-node1.pid -c /etc/ceph/ceph.conf -d
2013-12-05 10:06
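A simple way to confirm whether the daemon really exits with a failure (just
a sketch, reusing the same command) is to check its exit status, optionally
raising the monitor debug level for more detail:

ceph-node1 # sudo /usr/bin/ceph-mon -i ceph-node1 -c /etc/ceph/ceph.conf -d --debug-mon 20
ceph-node1 # echo $?

Running with -d keeps the monitor in the foreground and logs to stderr, so a
non-zero exit status together with the last lines of output should show where
it gives up.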