Hi, I need help deploying Jewel OSDs on CentOS 7.

Following the guide, I have the OSD daemons running, but according to
`ceph -s` all of them are down: 15/15 in osds are down
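
(In case it matters, this is roughly how I'm checking -- the unit name and
OSD id below are just examples from my setup:)

        # on the OSD host: systemd thinks the daemon is running
        systemctl status ceph-osd@0
        # from any node: the cluster still reports every OSD down
        ceph osd tree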

There are no errors in /var/log/ceph/ceph-osd.1.log; it just stopped at
these lines and never made any progress:
2016-05-09 01:32:03.187802 7f35acb4a700  0 osd.0 100 crush map has features 2200130813952, adjusting msgr requires for clients
2016-05-09 01:32:03.187841 7f35acb4a700  0 osd.0 100 crush map has features 2200130813952 was 2199057080833, adjusting msgr requires for mons
2016-05-09 01:32:03.187859 7f35acb4a700  0 osd.0 100 crush map has features 2200130813952, adjusting msgr requires for osds
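
I can turn up the logging on one OSD if that would help; I assume something
like this via the admin socket is the right way while the daemon is running:

        # on the OSD host, raise OSD and messenger debug levels for osd.0
        ceph daemon osd.0 config set debug_osd 20
        ceph daemon osd.0 config set debug_ms 1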

`ceph health detail` shows:
osd.0 is down since epoch 0, last address :/0

Why is the address :/0? Am I configuring something wrong? I've followed the
OSD troubleshooting guide, but with no luck.
The network seems fine: the ports are reachable via telnet, and I can run
`ceph -s` on the OSD machine.
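
By "telnet-able" I mean checks along these lines (the monitor IP is from my
ceph.conf; 6789 is the default mon port and 6800-7300 the default OSD port
range, as far as I understand; OSD_HOST_IP is a placeholder):

        # from the OSD host to a monitor
        telnet 10.3.1.94 6789
        # from a monitor to the OSD host
        telnet OSD_HOST_IP 6800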

ceph.conf:
[global]
        fsid = fad5f8d4-f5f6-425d-b035-a018614c0664

        mon osd full ratio = .75
        mon osd nearfull ratio = .65

        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx
        mon initial members = mon_vm_1,mon_vm_2,mon_vm_3
        mon host = 10.3.1.94,10.3.1.95,10.3.1.96

[mon.a]
        host = mon_vm_1
        mon addr = 10.3.1.94

[mon.b]
        host = mon_vm_2
        mon addr = 10.3.1.95

[mon.c]
        host = mon_vm_3
        mon addr = 10.3.1.96

[osd]
        osd journal size = 10240
        osd pool default size = 3
        osd pool default min size = 2
        osd pool default pg num = 512
        osd pool default pgp num = 512
        osd crush chooseleaf type = 1
        osd journal = /ceph_journal/$id
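
Since cephx is enabled, the next thing I plan to double-check is that the
keys and the journal path look sane (the /var/lib/ceph path is just the
default data directory on my Jewel install):

        # keys the monitors have registered for the OSDs
        ceph auth list
        # keyring and ownership on the OSD host itself
        cat /var/lib/ceph/osd/ceph-0/keyring
        ls -l /var/lib/ceph/osd/ceph-0/ /ceph_journal/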