Timofey Koolin <timofey@...> writes:
>
> I have a test cluster with 3 nodes:
> 1 - osd.0 mon.a mds.a
> 2 - osd.1
> 3 - empty
>
> I create osd.2:
> node1# ceph osd create
>
>
> node3# mkdir /var/lib/ceph/osd/ceph-2
> node3# mkfs.xfs /dev/sdb
> node3# mount /dev/sdb /var/lib/ceph/osd/ceph-2
> node3# ceph-osd -i 2 --mkfs --mkkey
>
>
> copy the keyring from node 3 to node 1 as root/keyring
> node1# ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i keyring
> node1# ceph osd crush set 2 1 root=default rack=unknownrack host=s3
>
>
> node3# service ceph start
>
> node1# ceph -s
>
> health HEALTH_OK
> monmap e1: 1 mons at {a=x.x.x.x:6789/0}, election epoch 1, quorum 0 a
>
> osdmap e135: 3 osds: 2 up, 2 in
> pgmap v6454: 576 pgs: 576 active+clean; 179 MB data, 2568 MB used, 137 GB / 139 GB avail
> mdsmap e4: 1/1/1 up {0=a=up:active}
>
>
>
>
Hi Timofey,
I was having some problems adding OSDs myself and found that the documentation
was incorrect. It has since been corrected; please see
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/?highlight=osd#adding-an-osd-manual
Please note that the output of "ceph osd create" tells you what the new osd
number should be. Also, I found "ceph osd tree" helpful in determining how
things are laid out.
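For reference, the manual-add flow can be sketched roughly as below. This is only a sketch based on the commands in the thread, not a definitive procedure: the disk (/dev/sdb), host (s3), rack (unknownrack), and CRUSH weight (1) are cluster-specific values copied from Timofey's message, and the important change is capturing the id that "ceph osd create" prints rather than assuming it.

```shell
# Sketch of the manual OSD-add flow (run on the new node unless noted).
# /dev/sdb, host=s3, rack=unknownrack and weight 1 are taken from the
# thread and are assumptions for your cluster; adjust as needed.

# Allocate the id; the command prints the number the new OSD must use.
OSD_ID=$(ceph osd create)

# Prepare and mount the data directory for that id.
mkdir -p /var/lib/ceph/osd/ceph-${OSD_ID}
mkfs.xfs /dev/sdb
mount /dev/sdb /var/lib/ceph/osd/ceph-${OSD_ID}

# Create the OSD filesystem and its authentication key.
ceph-osd -i ${OSD_ID} --mkfs --mkkey

# Register the key (on a monitor node) and place the OSD in the CRUSH map.
ceph auth add osd.${OSD_ID} osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-${OSD_ID}/keyring
ceph osd crush set ${OSD_ID} 1 root=default rack=unknownrack host=s3

# Start the daemon.
service ceph start osd.${OSD_ID}
```

After the daemon comes up, "ceph osd tree" should show the new OSD under the host you specified, which is a quick way to check the CRUSH placement took effect.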
Joe
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com