I was attempting to add an OSD to my Ceph cluster (running Jewel 10.2.5), 
using ceph-deploy (1.5.35), on Ubuntu.

I have two OSDs on this node and am attempting to add a third.

I created the first two OSDs with on-disk journals, then later moved their 
journals to partitions on the NVMe system disk (Intel P3600). I had carved out 
three 8 GB partitions on the NVMe disk for journaling, originally using only 
two, with one left over for one more OSD when the time came.
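
For anyone reproducing this: an 8 GB journal partition like mine could be 
carved with sgdisk roughly like so (a sketch, not my exact commands; note the 
"ceph journal" typecode, which is the standard partition GUID the Ceph udev 
rules key on, and which turns out to matter later):
> sgdisk --new=5:0:+8G \
>        --typecode=5:45b0969e-9b03-4f30-b4c6-b4b80ceff106 \
>        --change-name=5:"ceph journal" /dev/nvme0n1
> partprobe /dev/nvme0n1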

> [2017-01-03 12:03:53,667][node25][DEBUG ] /dev/sda :
> [2017-01-03 12:03:53,667][node25][DEBUG ]  /dev/sda2 ceph journal
> [2017-01-03 12:03:53,668][node25][DEBUG ]  /dev/sda1 ceph data, active, 
> cluster ceph, osd.1, journal /dev/nvme0n1p2
> [2017-01-03 12:03:53,668][node25][DEBUG ] /dev/sdb :
> [2017-01-03 12:03:53,668][node25][DEBUG ]  /dev/sdb2 ceph journal
> [2017-01-03 12:03:53,668][node25][DEBUG ]  /dev/sdb1 ceph data, active, 
> cluster ceph, osd.9, journal /dev/nvme0n1p4

When attempting to add the new OSD (disk /dev/sdc, journal /dev/nvme0n1p5), I 
first zapped /dev/sdc, then attempted to prepare the OSD using:
> ceph-deploy --username root osd prepare node25:sdc:/dev/nvme0n1p5

When ceph-deploy finished, I had one OSD down and one OSD out.
> [2017-01-03 12:08:34,229][node25][INFO  ] checking OSD status...
> [2017-01-03 12:08:34,229][node25][DEBUG ] find the location of an executable
> [2017-01-03 12:08:34,232][node25][INFO  ] Running command: /usr/bin/ceph 
> --cluster=ceph osd stat --format=json
> [2017-01-03 12:08:34,397][node25][WARNING] there is 1 OSD down
> [2017-01-03 12:08:34,397][node25][WARNING] there is 1 OSD out
> [2017-01-03 12:08:34,398][ceph_deploy.osd][DEBUG ] Host node25 is now ready 
> for osd use.

However, when I tried to activate the OSD, it failed.
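
The activate step was, as best I recall (I don't have the exact command in my 
scrollback), roughly:
> ceph-deploy --username root osd activate node25:/dev/sdc1:/dev/nvme0n1p5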

For the life of me, I can't find the logs from that activation attempt, sadly.
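
(In case someone can point me at the right place: I'd expect the activation 
output on Ubuntu to land in the per-OSD log or syslog, i.e. something like the 
following, but nothing useful turned up:)
> ls -l /var/log/ceph/            # per-OSD logs, e.g. ceph-osd.<id>.log
> grep ceph-disk /var/log/syslog  # udev/ceph-disk activation messages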

After zapping /dev/sdc again, I then tried create instead of prepare + activate:
> ceph-deploy --username root osd create node25:sdc:/dev/nvme0n1p5


> [2017-01-03 12:10:26,362][node25][INFO  ] checking OSD status...
> [2017-01-03 12:10:26,363][node25][DEBUG ] find the location of an executable
> [2017-01-03 12:10:26,365][node25][INFO  ] Running command: /usr/bin/ceph 
> --cluster=ceph osd stat --format=json
> [2017-01-03 12:10:26,630][node25][WARNING] there is 1 OSD down
> [2017-01-03 12:10:26,631][ceph_deploy.osd][DEBUG ] Host node25 is now ready 
> for osd use.

This did bring the OSD into the map, but down, and I was unable to bring it up.

From the OSD log below, it appears that mkfs failed while creating the journal 
in the temporary mount directory, and so it punted before creating the final 
OSD directory in /var/lib/ceph/osd/:

> 2017-01-03 12:08:30.680705 7fedbed08800  0 set uid:gid to 64045:64045 
> (ceph:ceph)
> 2017-01-03 12:08:30.680732 7fedbed08800  0 ceph version 10.2.5 
> (c461ee19ecbc0c5c330aca20f7392c9a00730367), process ceph-osd, pid 10894
> 2017-01-03 12:08:30.684279 7fedbed08800  1 
> filestore(/var/lib/ceph/tmp/mnt.00hwsQ) mkfs in /var/lib/ceph/tmp/mnt.00hwsQ
> 2017-01-03 12:08:30.684311 7fedbed08800  1 
> filestore(/var/lib/ceph/tmp/mnt.00hwsQ) mkfs fsid is already set to 
> f0bfe11f-b89d-44dd-88ca-1d62251a03e9
> 2017-01-03 12:08:30.684336 7fedbed08800  1 
> filestore(/var/lib/ceph/tmp/mnt.00hwsQ) write_version_stamp 4
> 2017-01-03 12:08:30.685565 7fedbed08800  0 
> filestore(/var/lib/ceph/tmp/mnt.00hwsQ) backend xfs (magic 0x58465342)
> 2017-01-03 12:08:30.687479 7fedbed08800  1 
> filestore(/var/lib/ceph/tmp/mnt.00hwsQ) leveldb db exists/created
> 2017-01-03 12:08:30.687775 7fedbed08800 -1 
> filestore(/var/lib/ceph/tmp/mnt.00hwsQ) mkjournal error creating journal on 
> /var/lib/ceph/tmp/mnt.00hwsQ/journal: (13) Permission denied
> 2017-01-03 12:08:30.687801 7fedbed08800 -1 OSD::mkfs: ObjectStore::mkfs 
> failed with error -13
> 2017-01-03 12:08:30.687859 7fedbed08800 -1 ** ERROR: error creating 
> empty object store in /var/lib/ceph/tmp/mnt.00hwsQ: (13) Permission denied
> 2017-01-03 12:08:31.563884 7f6541787800  0 set uid:gid to 64045:64045 
> (ceph:ceph)
> 2017-01-03 12:08:31.563919 7f6541787800  0 ceph version 10.2.5 
> (c461ee19ecbc0c5c330aca20f7392c9a00730367), process ceph-osd, pid 10977
> 2017-01-03 12:08:31.567261 7f6541787800  1 
> filestore(/var/lib/ceph/tmp/mnt.kRftwx) mkfs in /var/lib/ceph/tmp/mnt.kRftwx
> 2017-01-03 12:08:31.567294 7f6541787800  1 
> filestore(/var/lib/ceph/tmp/mnt.kRftwx) mkfs fsid is already set to 
> f0bfe11f-b89d-44dd-88ca-1d62251a03e9
> 2017-01-03 12:08:31.567298 7f6541787800  1 
> filestore(/var/lib/ceph/tmp/mnt.kRftwx) write_version_stamp 4
> 2017-01-03 12:08:31.567561 7f6541787800  0 
> filestore(/var/lib/ceph/tmp/mnt.kRftwx) backend xfs (magic 0x58465342)
> 2017-01-03 12:08:31.594423 7f6541787800  1 
> filestore(/var/lib/ceph/tmp/mnt.kRftwx) leveldb db exists/created
> 2017-01-03 12:08:31.594553 7f6541787800 -1 
> filestore(/var/lib/ceph/tmp/mnt.kRftwx) mkjournal error creating journal on 
> /var/lib/ceph/tmp/mnt.kRftwx/journal: (13) Permission denied
> 2017-01-03 12:08:31.594572 7f6541787800 -1 OSD::mkfs: ObjectStore::mkfs 
> failed with error -13
> 2017-01-03 12:08:31.594620 7f6541787800 -1 ** ERROR: error creating 
> empty object store in /var/lib/ceph/tmp/mnt.kRftwx: (13) Permission denied
> 2017-01-03 12:10:22.891473 7f123dec1800  0 set uid:gid to 64045:64045 
> (ceph:ceph)
> 2017-01-03 12:10:22.891507 7f123dec1800  0 ceph version 10.2.5 
> (c461ee19ecbc0c5c330aca20f7392c9a00730367), process ceph-osd, pid 19651
> 2017-01-03 12:10:22.894023 7f123dec1800  1 
> filestore(/var/lib/ceph/tmp/mnt.zA19L0) mkfs in /var/lib/ceph/tmp/mnt.zA19L0
> 2017-01-03 12:10:22.894055 7f123dec1800  1 
> filestore(/var/lib/ceph/tmp/mnt.zA19L0) mkfs fsid is already set to 
> 644058d7-e1b0-4abe-92e2-43b17d75148e
> 2017-01-03 12:10:22.894059 7f123dec1800  1 
> filestore(/var/lib/ceph/tmp/mnt.zA19L0) write_version_stamp 4
> 2017-01-03 12:10:22.895334 7f123dec1800  0 
> filestore(/var/lib/ceph/tmp/mnt.zA19L0) backend xfs (magic 0x58465342)
> 2017-01-03 12:10:22.897835 7f123dec1800  1 
> filestore(/var/lib/ceph/tmp/mnt.zA19L0) leveldb db exists/created
> 2017-01-03 12:10:22.897997 7f123dec1800 -1 
> filestore(/var/lib/ceph/tmp/mnt.zA19L0) mkjournal error creating journal on 
> /var/lib/ceph/tmp/mnt.zA19L0/journal: (13) Permission denied
> 2017-01-03 12:10:22.898043 7f123dec1800 -1 OSD::mkfs: ObjectStore::mkfs 
> failed with error -13
> 2017-01-03 12:10:22.898122 7f123dec1800 -1 ** ERROR: error creating 
> empty object store in /var/lib/ceph/tmp/mnt.zA19L0: (13) Permission denied
> 2017-01-03 12:10:23.514546 7f1d821f2800  0 set uid:gid to 64045:64045 
> (ceph:ceph)
> 2017-01-03 12:10:23.514577 7f1d821f2800  0 ceph version 10.2.5 
> (c461ee19ecbc0c5c330aca20f7392c9a00730367), process ceph-osd, pid 19754
> 2017-01-03 12:10:23.517465 7f1d821f2800  1 
> filestore(/var/lib/ceph/tmp/mnt.WaQmjK) mkfs in /var/lib/ceph/tmp/mnt.WaQmjK
> 2017-01-03 12:10:23.517494 7f1d821f2800  1 
> filestore(/var/lib/ceph/tmp/mnt.WaQmjK) mkfs fsid is already set to 
> 644058d7-e1b0-4abe-92e2-43b17d75148e
> 2017-01-03 12:10:23.517499 7f1d821f2800  1 
> filestore(/var/lib/ceph/tmp/mnt.WaQmjK) write_version_stamp 4
> 2017-01-03 12:10:23.517678 7f1d821f2800  0 
> filestore(/var/lib/ceph/tmp/mnt.WaQmjK) backend xfs (magic 0x58465342)
> 2017-01-03 12:10:23.519898 7f1d821f2800  1 
> filestore(/var/lib/ceph/tmp/mnt.WaQmjK) leveldb db exists/created
> 2017-01-03 12:10:23.520035 7f1d821f2800 -1 
> filestore(/var/lib/ceph/tmp/mnt.WaQmjK) mkjournal error creating journal on 
> /var/lib/ceph/tmp/mnt.WaQmjK/journal: (13) Permission denied
> 2017-01-03 12:10:23.520049 7f1d821f2800 -1 OSD::mkfs: ObjectStore::mkfs 
> failed with error -13
> 2017-01-03 12:10:23.520100 7f1d821f2800 -1 ** ERROR: error creating 
> empty object store in /var/lib/ceph/tmp/mnt.WaQmjK: (13) Permission denied
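
For what it's worth, my working theory is that this is an ownership problem on 
the journal partition: in Jewel the OSD runs as ceph:ceph (uid/gid 64045, per 
the log above), and the udev rules only chown a journal partition to ceph:ceph 
when it carries the Ceph journal typecode. If that's right, something like the 
following should confirm and fix it (the chown alone does not survive a 
reboot, hence the typecode):
> ls -l /dev/nvme0n1p5    # root:disk here would explain the EACCES
> chown ceph:ceph /dev/nvme0n1p5
> sgdisk --typecode=5:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
> partprobe /dev/nvme0n1  # re-trigger udev so the ownership rule applies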

I ended up creating the OSDs with on-disk journals, then going back and moving 
the journals to the NVMe partitions as intended. I was hoping to do this all 
in one fell swoop, though, so I would appreciate pointers on anything I may be 
doing incorrectly with ceph-deploy and the external journal location. I am 
adding a handful of OSDs soon and would like to do it correctly from the start.
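
For the record, the move-the-journal-afterwards workaround looked roughly like 
this per OSD (a sketch from memory; osd.10 and p5 are placeholders for the 
real IDs):
> systemctl stop ceph-osd@10
> ceph-osd -i 10 --flush-journal       # drain and close the old on-disk journal
> rm /var/lib/ceph/osd/ceph-10/journal
> ln -s /dev/nvme0n1p5 /var/lib/ceph/osd/ceph-10/journal
> chown -h ceph:ceph /var/lib/ceph/osd/ceph-10/journal
> ceph-osd -i 10 --mkjournal           # initialize the journal on the NVMe partition
> systemctl start ceph-osd@10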

Thanks,

Reed