On Sat, 26 Jul 2014, 10 minus wrote:
> Hi,
>
> I just set up a test Ceph installation on three CentOS 6.5 nodes.
> Two of the nodes host OSDs and the third acts as the mon.
>
> Please note I'm using LVM, so I had to set up the OSDs using the manual
> install guide.
>
> --snip--
> ceph -s
> cluster 2929fa80-0841-4cb6-a133-90b2098fc802
> health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean;
> noup,nodown,noout flag(s) set
> monmap e2: 3 mons at {ceph0=10.0.12.220:6789/0,ceph1=10.0.12.221:6789/0,ceph2=10.0.12.222:6789/0},
> election epoch 46, quorum 0,1,2 ceph0,ceph1,ceph2
> osdmap e21: 2 osds: 0 up, 0 in
> flags noup,nodown,noout
^^^^
Do 'ceph osd unset noup' and they should start up. You likely also want
to clear nodown and noout as well.
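For example (assuming the admin keyring is available on the mon host),
clearing all three flags looks like this:

  # clear the flags that are preventing the OSDs from being marked up
  ceph osd unset noup
  ceph osd unset nodown
  ceph osd unset noout

Once noup is cleared, 'ceph -s' should show the OSDs come up and the PGs
move out of the creating state.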
sage
> pgmap v22: 192 pgs, 3 pools, 0 bytes data, 0 objects
> 0 kB used, 0 kB / 0 kB avail
> 192 creating
> --snip--
>
> osd tree
>
> --snip--
> ceph osd tree
> # id    weight  type name          up/down  reweight
> -1      2       root default
> -3      1         host ceph1
> 0       1           osd.0          down     0
> -2      1         host ceph2
> 1       1           osd.1          down     0
> --snip--
>
> --snip--
> ceph daemon osd.0 status
> { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
> "osd_fsid": "1ad28bde-c23c-44ba-a3b7-0aaaafd3372e",
> "whoami": 0,
> "state": "booting",
> "oldest_map": 1,
> "newest_map": 21,
> "num_pgs": 0}
>
> --snip--
>
> --snip--
> ceph daemon osd.1 status
> { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
> "osd_fsid": "becc3252-6977-47d6-87af-7b1337e591d8",
> "whoami": 1,
> "state": "booting",
> "oldest_map": 1,
> "newest_map": 21,
> "num_pgs": 0}
> --snip--
>
> # CPUs are idling
>
> # Does anybody know what is wrong?
>
> Thanks in advance
>