I'm having a similar issue.

I'm following http://ceph.com/docs/master/install/manual-deployment/ to a T.

I have OSDs on the same host deployed with the short form, and they work
fine. I am trying to deploy some more via the long form (because I want
them to appear in a different location in the CRUSH map). Everything
through step 10 (i.e. ceph osd crush add {id-or-name} {weight}
[{bucket-type}={bucket-name} ...]) works just fine. When I go to step 11 (sudo
/etc/init.d/ceph start osd.{osd-num}) I get:
/etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines
mon.hobbit01 osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13
osd.8 osd.12 osd.6 osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines
mon.hobbit01 osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13
osd.8 osd.12 osd.6 osd.11 osd.5 osd.4 osd.0)
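
My guess is that the init script wants either a stanza like this in
/etc/ceph/ceph.conf (the hostname below is taken from the error output
above; adjust the id and host for your cluster):

[osd.16]
    host = hobbit01

or a "sysvinit" marker file in the OSD's data directory, which seems to be
how the short-form OSDs get listed under "/var/lib/ceph defines", e.g.:

# touch /var/lib/ceph/osd/ceph-16/sysvinit

but I have not confirmed either.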



On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden <trho...@gmail.com> wrote:

> Also, did you successfully start your monitor(s), and define/create the
> OSDs within the Ceph cluster itself?
>
> There are several steps to creating a Ceph cluster manually.  I'm unsure
> if you have done the steps to actually create and register the OSDs with
> the cluster.
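>
> For reference, the long-form create/register steps from the manual
> deployment guide look roughly like this (the id, weight, and hostname
> below are placeholders, and paths assume the default cluster name "ceph"):
>
> # ceph osd create
> # mkdir /var/lib/ceph/osd/ceph-{osd-num}
> # ceph-osd -i {osd-num} --mkfs --mkkey
> # ceph auth add osd.{osd-num} osd 'allow *' mon 'allow profile osd' \
>       -i /var/lib/ceph/osd/ceph-{osd-num}/keyring
> # ceph osd crush add osd.{osd-num} {weight} host={hostname}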
>
>  - Travis
>
> On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master <keks...@gmail.com> wrote:
>
>> Check firewall rules and selinux. It sometimes is a pain in the ... :)
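>> For example, on CentOS 7 the quick checks would be something like:
>> # getenforce                   # SELinux mode
>> # firewall-cmd --list-all      # mons use 6789/tcp, OSDs 6800-7300 by default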
>> On 25 Feb 2015 at 01:46, "Barclay Jameson" <almightybe...@gmail.com> wrote:
>>
>>> I have tried to install Ceph using ceph-deploy, but sgdisk seems to
>>> have too many issues, so I did a manual install. After running
>>> mkfs.btrfs on the disks and journals and mounting them, I tried to
>>> start the OSDs, which failed. The first error was:
>>> # /etc/init.d/ceph start osd.0
>>> /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
>>> /var/lib/ceph defines )
>>>
>>> I then manually added the OSDs to the conf file, with the following as
>>> an example:
>>> [osd.0]
>>>     osd_host = node01
>>>
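>>> (For comparison, the manual deployment docs use a plain "host" key in
>>> this stanza, and I am not certain the init script recognizes "osd_host":
>>>
>>> [osd.0]
>>>     host = node01
>>>
>>> which might explain the silent start below.)
>>>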
>>> Now when I run the command:
>>> # /etc/init.d/ceph start osd.0
>>>
>>> There is no error or output from the command, and in fact when I do a
>>> ceph -s, no OSDs are listed as being up.
>>> Running ps aux | grep -i ceph or ps aux | grep -i osd shows there are
>>> no OSDs running.
>>> I have also checked with htop to see if any processes are running, and
>>> none are shown.
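>>>
>>> (One way to get more detail here, I believe, would be to run the daemon
>>> in the foreground, assuming a default setup:
>>> # ceph-osd -i 0 -f
>>> or to check /var/log/ceph/ceph-osd.0.log for why it exited.)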
>>>
>>> I had this working on SL6.5 with Firefly, but Giant on CentOS 7 has
>>> been nothing but a giant pain.
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
