Hello German,

Can you check the following and let us know?

1. After you execute "service ceph start", are the services getting started? What is the output of "service ceph status"?
2. What does "ceph status" say?
3. Check on ceph-node02 what is currently mounted (see the example commands below).
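
For example, something along these lines (adjust for your environment; plain "ceph" commands may hang while the monitors are without quorum):

# 1. start the daemons and check whether they stay up
service ceph start
service ceph status

# 2. overall cluster state
ceph status

# 3. on ceph-node02, check what is currently mounted
mount | grep ceph
df -h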

Many Thanks 
Karan Singh 


----- Original Message -----

From: "German Anders" <gand...@despegar.com> 
To: ceph-users@lists.ceph.com 
Sent: Friday, 13 December, 2013 7:11:41 PM 
Subject: [ceph-users] Ceph not responding after trying to add a new MON 

Hi to all,
I have a situation where I can't run any "ceph" command on the cluster. Initially
the cluster had only one MON daemon, with three OSD daemons running. Everything
was OK, but someone from the team tried to add a new MON daemon, and now when I
try to start the ceph service I get this error message (I've tried it on
every node):

root@ceph-node02:/tmp/ceph-node02# service ceph start 
=== mon.ceph-node02 === 
Starting Ceph mon.ceph-node02 on ceph-node02... 
failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i ceph-node02 --pid-file 
/var/run/ceph/mon.ceph-node02.pid -c /etc/ceph/ceph.conf ' 
Starting ceph-create-keys on ceph-node02... 
INFO:ceph-disk:Activating /dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.dbf17a68-e94e-4dc7-bcc4-60263e4b0a7c
INFO:ceph-disk:ceph osd.0 already mounted in position; unmounting ours. 

root@ceph-node01:/var/log/ceph# service ceph start 
=== mon.ceph-node01 === 
Starting Ceph mon.ceph-node01 on ceph-node01... 
failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i ceph-node01 --pid-file 
/var/run/ceph/mon.ceph-node01.pid -c /etc/ceph/ceph.conf ' 
Starting ceph-create-keys on ceph-node01... 
INFO:ceph-disk:Activating /dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.fcf613c6-ae4a-4a44-b890-6d77dac3818b
INFO:ceph-disk:ceph osd.2 already mounted in position; unmounting ours. 
root@ceph-node01:/var/log/ceph# 

root@ceph-node03:~# service ceph start 
INFO:ceph-disk:Activating /dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.7ba458b7-bd58-4373-b4b7-a0b1cffec548
INFO:ceph-disk:ceph osd.1 already mounted in position; unmounting ours. 
root@ceph-node03:~# 
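
If it is useful, I can also run the failing monitor in the foreground to try to capture the actual error, something like this (same binary and config file the init script uses):

# run the monitor attached to the terminal, logging to stderr
/usr/bin/ceph-mon -i ceph-node01 -d -c /etc/ceph/ceph.conf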

The initial monitor was "ceph-node01". 

Here's the /etc/ceph/ceph.conf file from the three nodes: 

[global] 
fsid = cd60ab37-23bd-4c17-9470-404cb3b31112 
mon_initial_members = ceph-node01 
mon_host = ceph-node01 
auth_supported = cephx 
osd_journal_size = 1024 
filestore_xattr_use_omap = true 

[mon.ceph-node01] 
host = ceph-node01 
mon addr = 10.111.82.242:6789 

[mon.ceph-node02] 
host = ceph-node02 
mon aggr = 10.111.82.245:6789 
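
For reference, my understanding from the documentation is that with a second monitor the [global] section would normally list both monitors, roughly like this (only a sketch using the same hostnames/IPs, not what is currently deployed):

[global]
mon_initial_members = ceph-node01, ceph-node02
mon_host = 10.111.82.242, 10.111.82.245

[mon.ceph-node02]
host = ceph-node02
mon addr = 10.111.82.245:6789

I'm not sure whether that difference is related to the problem, though.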


Could someone point me in the right direction to solve this issue?

Thanks in advance, 

Best regards, 


German Anders 








_______________________________________________ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
