Hey there... resurrecting an old, apparently unanswered question. I had
issues with this, nobody online had any answers, and I stumbled into the
solution by accident. So I hope this helps someone.

> Hello,
>
> I have been trying to deploy bluestore OSDs in a test cluster of 2x OSDs
> and 3x mon (xen1,2,3) on Ubuntu Xenial and Jewel 10.2.1.
>
> Activating the OSDs gives an error in systemd as follows. the culprit is
> the command "ceph-osd --get-device-fsid" which fails to get fsid.
...
> root@xen2:/# /usr/bin/ceph-osd --get-device-fsid /dev/sdb2
> 2016-06-02 19:03:50.960521 7f203b2928c0 -1 bluestore(/dev/sdb2)
> _read_bdev_label unable to decode label at offset 62:
> buffer::malformed_input: void
> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) unknown
> encoding version > 1
> 2016-06-02 19:03:50.963348 7f203b2928c0 -1 journal read_header error
> decoding journal header
> failed to get device fsid for /dev/sdb2: (22) Invalid argument


To fix that, you have to run `ceph-osd ... --mkfs ...` (e.g. `ceph-osd
--cluster "${cluster}" -i "${osd_number}" --mkfs --mkkey --osd-uuid
"${osd_uuid}"`) on the OSD data dir, which requires that either a
symlink in the data dir or an entry in ceph.conf tells ceph-osd where
the block device is. Before that point, the block device just contains
old garbage from whatever was on it before, not a bluestore block
header, which is why `--get-device-fsid` fails.
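For anyone who wants the concrete steps, here is a minimal sketch of the
sequence, assuming cluster name "ceph", OSD id 0, and /dev/sdb2 as the
block device (all placeholders; the `bluestore block path` config option
mentioned in the comment is from memory, so check it against your
release):

```
cluster=ceph
osd_number=0
osd_uuid=$(uuidgen)
osd_data=/var/lib/ceph/osd/${cluster}-${osd_number}

mkdir -p "${osd_data}"
# Tell ceph-osd where the block device lives, via a symlink in the data
# dir; setting "bluestore block path" in ceph.conf should work as well.
ln -s /dev/sdb2 "${osd_data}/block"

# --mkfs writes the bluestore label onto the device; until this runs,
# the device holds whatever garbage was on it before.
ceph-osd --cluster "${cluster}" -i "${osd_number}" --mkfs --mkkey \
         --osd-uuid "${osd_uuid}"

# The fsid query from the original error should now succeed:
ceph-osd --get-device-fsid /dev/sdb2
```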

I accidentally found the cause/solution for this by running the manual
filestore OSD creation procedure while leaving "osd objectstore =
bluestore" in ceph.conf by mistake. With no symlink in the data dir,
that created the bluestore "block" as a plain file rather than a block
device; it worked for some reason, but only allocated half the space of
the OSD and ran slower. A rough reproduction is sketched below.
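For reference, a hypothetical reproduction of that accident (ids and
paths are placeholders, and the fixed-default-size behavior is my
understanding of how bluestore's mkfs handles a missing block symlink,
not something stated in the original thread):

```
# ceph.conf still says "osd objectstore = bluestore", but no block
# symlink is created in the data dir.
mkdir -p /var/lib/ceph/osd/ceph-0
ceph-osd --cluster ceph -i 0 --mkfs --mkkey

# With nothing pointing at a real device, mkfs creates "block" as a
# regular file of a fixed default size, so the OSD never sees the
# whole disk.
ls -lh /var/lib/ceph/osd/ceph-0/block
```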

And I tested it on the latest Kraken release, on Ubuntu 14.04 with
kernel 4.4 from xenial.