Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-03-15 Thread ceph
Hi Rainer,

Try something like

dd if=/dev/zero of=/dev/sdX bs=4096

to wipe/zap any information on the disk.
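
If a full dd over a 4 TB disk takes too long, clearing the first few
hundred MiB is usually enough, since the partition table and the LVM and
BlueStore labels live at the start of the device. A rough sketch only;
the device name is an example, double-check it before running:

dd if=/dev/zero of=/dev/sdX bs=1M count=200 oflag=direct
sgdisk --zap-all /dev/sdX   # also clears GPT/MBR structures (gdisk package)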

HTH
Mehmet

On 14 February 2019 13:57:51 CET, Rainer Krienke wrote:
>Hi,
>
>I am quite new to Ceph and am just trying to set up a Ceph cluster.
>Initially I used ceph-deploy for this, but when I tried to create a
>BlueStore OSD, ceph-deploy failed. Next I tried the direct way on one
>of the OSD nodes, using ceph-volume to create the OSD, but this also
>fails. Below you can see what ceph-volume says.
>
>I ensured that there was no leftover LVM VG or LV on disk sdg before I
>started the OSD creation for this disk. The very same error also
>happens on other disks, not just /dev/sdg. All the disks are 4 TB in
>size, the Linux system is Ubuntu 18.04, and Ceph is installed in
>version 13.2.4-1bionic from this repo:
>https://download.ceph.com/debian-mimic.
>
>There is a VG with two LVs on the system for the Ubuntu system itself,
>which is installed on two separate disks configured as software RAID1
>with LVM on top of the RAID. But I cannot imagine that this would do
>any harm to Ceph's OSD creation.
>
>Does anyone have an idea what might be wrong?
>
>Thanks for hints
>Rainer
>
>root@ceph1:~# wipefs -fa /dev/sdg
>root@ceph1:~# ceph-volume lvm prepare --bluestore --data /dev/sdg
>Running command: /usr/bin/ceph-authtool --gen-print-key
>Running command: /usr/bin/ceph --cluster ceph --name
>client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
>-i - osd new 14d041d6-0beb-4056-8df2-3920e2febce0
>Running command: /sbin/vgcreate --force --yes
>ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b /dev/sdg
> stdout: Physical volume "/dev/sdg" successfully created.
> stdout: Volume group "ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b"
>successfully created
>Running command: /sbin/lvcreate --yes -l 100%FREE -n
>osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
>ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b
>stdout: Logical volume "osd-block-14d041d6-0beb-4056-8df2-3920e2febce0"
>created.
>Running command: /usr/bin/ceph-authtool --gen-print-key
>Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
>--> Absolute path not found for executable: restorecon
>--> Ensure $PATH environment variable contains common executable
>locations
>Running command: /bin/chown -h ceph:ceph
>/dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
>Running command: /bin/chown -R ceph:ceph /dev/dm-8
>Running command: /bin/ln -s
>/dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
>/var/lib/ceph/osd/ceph-0/block
>Running command: /usr/bin/ceph --cluster ceph --name
>client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
>mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
> stderr: got monmap epoch 1
>Running command: /usr/bin/ceph-authtool
>/var/lib/ceph/osd/ceph-0/keyring
>--create-keyring --name osd.0 --add-key
>AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ==
> stdout: creating /var/lib/ceph/osd/ceph-0/keyring
>added entity osd.0 auth auth(auid = 18446744073709551615
>key=AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ== with 0 caps)
>Running command: /bin/chown -R ceph:ceph
>/var/lib/ceph/osd/ceph-0/keyring
>Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
>Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
>bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap
>--keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid
>14d041d6-0beb-4056-8df2-3920e2febce0 --setuser ceph --setgroup ceph
> stderr: 2019-02-14 13:45:54.788 7f3fcecb3240 -1
>bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
> stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In
>function 'virtual int KernelDevice::read(uint64_t, uint64_t,
>ceph::bufferlist*, IOContext*, bool)' thread 7f3fcecb3240 time
>2019-02-14 13:45:54.841130
> stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
>FAILED assert((uint64_t)r == len)
> stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
>mimic (stable)
> stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int,
>char const*)+0x102) [0x7f3fc60d33e2]
> stderr: 2: (()+0x26d5a7) [0x7f3fc60d35a7]
> stderr: 3: (KernelDevice::read(unsigned long, unsigned long,
>ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
> stderr: 4: (BlueFS::_read(BlueFS::FileReader*,
>BlueFS::FileReaderBuffer*, unsigned long, unsigned long,
>ceph::buffer::list*, char*)+0x435) [0x5613713065c5]
> stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x56137130c434]
> stderr: 6: (BlueFS::mount()+0x1f1) [0x561371310c81]
> stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x56137123704d]
> stderr: 8: (BlueStore::mkfs()+0x805) [0x561371267fe5]
> stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*,
>std::__cxx11::basic_string<char, std::char_traits<char>,
>std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x561370e10480]
> stderr: 10: (main()+0x4222) [0x561370cf7462]
> stderr: 

Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-18 Thread Alfredo Deza
On Mon, Feb 18, 2019 at 2:46 AM Rainer Krienke wrote:
>
> Hello,
>
> thanks for your answer, but zapping the disk did not make any
> difference. I still get the same error.  Looking at the debug output I
> found this error message that is probably the root of all trouble:
>
> # ceph-volume lvm prepare --bluestore --data /dev/sdg
> 
> stderr: 2019-02-18 08:29:25.544 7fdaa50ed240 -1
> bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid

This "unparsable uuid" line is (unfortunately) expected from
bluestore, and will show up when the OSD is being created for the
first time.

The error messaging was improved a bit (see
https://tracker.ceph.com/issues/22285 and PR
https://github.com/ceph/ceph/pull/20090 )

>
> I found the bug report below, which seems to be exactly the problem I have:
> http://tracker.ceph.com/issues/15386

This doesn't look like the same thing; you are hitting an assert:

 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In
function 'virtual int KernelDevice::read(uint64_t, uint64_t,
ceph::bufferlist*, IOContext*, bool)' thread 7f3fcecb3240 time
2019-02-14 13:45:54.841130
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
FAILED assert((uint64_t)r == len)

That looks like a valid issue to me; you might want to go and create a
new ticket at

https://tracker.ceph.com/projects/bluestore/issues/new
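
Before filing, it may help to check whether the underlying device is
returning read errors, since that assert fires when the kernel hands back
fewer bytes than were requested. A rough sketch of what I would look at
(the device name is just an example):

dmesg | grep -iE 'sdg|blk_update_request|I/O error'
smartctl -a /dev/sdg            # needs smartmontools
blockdev --getsize64 /dev/sdg   # size the kernel reports for the device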


>
> However, there seems to be no solution up to now.
>
> Does anyone have more information on how to get around this problem?
>
> Thanks
> Rainer
>
> On 15.02.19 at 18:12, David Turner wrote:
> > I have found that running a zap before all prepare/create commands with
> > ceph-volume helps things run more smoothly. Zap is specifically there to
> > clear everything from a disk so that it is ready to be used as an OSD.
> > Your wipefs command is still fine, but I would then lvm zap the disk
> > before continuing. I would run the commands like this [1]. I also prefer
> > the single command lvm create over lvm prepare plus lvm activate. Try
> > that out and see if you still run into problems creating the BlueStore
> > filesystem.
> >
> > [1] ceph-volume lvm zap /dev/sdg
> > ceph-volume lvm prepare --bluestore --data /dev/sdg
> >
> > On Thu, Feb 14, 2019 at 10:25 AM Rainer Krienke wrote:
> >
> > Hi,
> >
> > I am quite new to Ceph and am just trying to set up a Ceph cluster.
> > Initially I used ceph-deploy for this, but when I tried to create a
> > BlueStore OSD, ceph-deploy failed. Next I tried the direct way on one
> > of the OSD nodes, using ceph-volume to create the OSD, but this also
> > fails. Below you can see what ceph-volume says.
> >
> > I ensured that there was no leftover LVM VG or LV on disk sdg before I
> > started the OSD creation for this disk. The very same error also
> > happens on other disks, not just /dev/sdg. All the disks are 4 TB in
> > size, the Linux system is Ubuntu 18.04, and Ceph is installed in
> > version 13.2.4-1bionic from this repo:
> > https://download.ceph.com/debian-mimic.
> >
> > There is a VG with two LVs on the system for the Ubuntu system itself,
> > which is installed on two separate disks configured as software RAID1
> > with LVM on top of the RAID. But I cannot imagine that this would do
> > any harm to Ceph's OSD creation.
> >
> > Does anyone have an idea what might be wrong?
> >
> > Thanks for hints
> > Rainer
> >
> > root@ceph1:~# wipefs -fa /dev/sdg
> > root@ceph1:~# ceph-volume lvm prepare --bluestore --data /dev/sdg
> > Running command: /usr/bin/ceph-authtool --gen-print-key
> > Running command: /usr/bin/ceph --cluster ceph --name
> > client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> > -i - osd new 14d041d6-0beb-4056-8df2-3920e2febce0
> > Running command: /sbin/vgcreate --force --yes
> > ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b /dev/sdg
> >  stdout: Physical volume "/dev/sdg" successfully created.
> >  stdout: Volume group "ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b"
> > successfully created
> > Running command: /sbin/lvcreate --yes -l 100%FREE -n
> > osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> > ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b
> >  stdout: Logical volume "osd-block-14d041d6-0beb-4056-8df2-3920e2febce0"
> > created.
> > Running command: /usr/bin/ceph-authtool --gen-print-key
> > Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> > --> Absolute path not found for executable: restorecon
> > --> Ensure $PATH environment variable contains common executable
> > locations
> > Running command: /bin/chown -h ceph:ceph
> > 
> > /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> > Running command: /bin/chown -R ceph:ceph /dev/dm-8
> > Running command: /bin/ln -s
> > 
> > 

Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-17 Thread Rainer Krienke
Hello,

thanks for your answer, but zapping the disk did not make any
difference. I still get the same error.  Looking at the debug output I
found this error message that is probably the root of all trouble:

# ceph-volume lvm prepare --bluestore --data /dev/sdg

stderr: 2019-02-18 08:29:25.544 7fdaa50ed240 -1
bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid

I found the bug report below, which seems to be exactly the problem I have:
http://tracker.ceph.com/issues/15386

However, there seems to be no solution up to now.

Does anyone have more information on how to get around this problem?

Thanks
Rainer

On 15.02.19 at 18:12, David Turner wrote:
> I have found that running a zap before all prepare/create commands with
> ceph-volume helps things run more smoothly. Zap is specifically there to
> clear everything from a disk so that it is ready to be used as an OSD.
> Your wipefs command is still fine, but I would then lvm zap the disk
> before continuing. I would run the commands like this [1]. I also prefer
> the single command lvm create over lvm prepare plus lvm activate. Try
> that out and see if you still run into problems creating the BlueStore
> filesystem.
> 
> [1] ceph-volume lvm zap /dev/sdg
> ceph-volume lvm prepare --bluestore --data /dev/sdg
> 
> On Thu, Feb 14, 2019 at 10:25 AM Rainer Krienke wrote:
> 
> Hi,
> 
> I am quite new to Ceph and am just trying to set up a Ceph cluster.
> Initially I used ceph-deploy for this, but when I tried to create a
> BlueStore OSD, ceph-deploy failed. Next I tried the direct way on one
> of the OSD nodes, using ceph-volume to create the OSD, but this also
> fails. Below you can see what ceph-volume says.
>
> I ensured that there was no leftover LVM VG or LV on disk sdg before I
> started the OSD creation for this disk. The very same error also
> happens on other disks, not just /dev/sdg. All the disks are 4 TB in
> size, the Linux system is Ubuntu 18.04, and Ceph is installed in
> version 13.2.4-1bionic from this repo:
> https://download.ceph.com/debian-mimic.
>
> There is a VG with two LVs on the system for the Ubuntu system itself,
> which is installed on two separate disks configured as software RAID1
> with LVM on top of the RAID. But I cannot imagine that this would do
> any harm to Ceph's OSD creation.
> 
> Does anyone have an idea what might be wrong?
> 
> Thanks for hints
> Rainer
> 
> root@ceph1:~# wipefs -fa /dev/sdg
> root@ceph1:~# ceph-volume lvm prepare --bluestore --data /dev/sdg
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> -i - osd new 14d041d6-0beb-4056-8df2-3920e2febce0
> Running command: /sbin/vgcreate --force --yes
> ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b /dev/sdg
>  stdout: Physical volume "/dev/sdg" successfully created.
>  stdout: Volume group "ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b"
> successfully created
> Running command: /sbin/lvcreate --yes -l 100%FREE -n
> osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b
>  stdout: Logical volume "osd-block-14d041d6-0beb-4056-8df2-3920e2febce0"
> created.
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> --> Absolute path not found for executable: restorecon
> --> Ensure $PATH environment variable contains common executable
> locations
> Running command: /bin/chown -h ceph:ceph
> 
> /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> Running command: /bin/chown -R ceph:ceph /dev/dm-8
> Running command: /bin/ln -s
> 
> /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> /var/lib/ceph/osd/ceph-0/block
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
>  stderr: got monmap epoch 1
> Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
> --create-keyring --name osd.0 --add-key
> AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ==
>  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
> added entity osd.0 auth auth(auid = 18446744073709551615
> key=AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ== with 0 caps)
> Running command: /bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-0/keyring
> Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
> Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
> bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap
> 

Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-15 Thread David Turner
I have found that running a zap before all prepare/create commands with
ceph-volume helps things run more smoothly. Zap is specifically there to
clear everything from a disk so that it is ready to be used as an OSD.
Your wipefs command is still fine, but I would then lvm zap the disk
before continuing. I would run the commands like this [1]. I also prefer
the single command lvm create over lvm prepare plus lvm activate. Try
that out and see if you still run into problems creating the BlueStore
filesystem.

[1] ceph-volume lvm zap /dev/sdg
ceph-volume lvm prepare --bluestore --data /dev/sdg
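
Or, using the single-command form I mentioned (a sketch; lvm create runs
prepare and activate in one step):

ceph-volume lvm zap /dev/sdg
ceph-volume lvm create --bluestore --data /dev/sdg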

On Thu, Feb 14, 2019 at 10:25 AM Rainer Krienke wrote:

> Hi,
>
> I am quite new to Ceph and am just trying to set up a Ceph cluster.
> Initially I used ceph-deploy for this, but when I tried to create a
> BlueStore OSD, ceph-deploy failed. Next I tried the direct way on one
> of the OSD nodes, using ceph-volume to create the OSD, but this also
> fails. Below you can see what ceph-volume says.
>
> I ensured that there was no leftover LVM VG or LV on disk sdg before I
> started the OSD creation for this disk. The very same error also
> happens on other disks, not just /dev/sdg. All the disks are 4 TB in
> size, the Linux system is Ubuntu 18.04, and Ceph is installed in
> version 13.2.4-1bionic from this repo:
> https://download.ceph.com/debian-mimic.
>
> There is a VG with two LVs on the system for the Ubuntu system itself,
> which is installed on two separate disks configured as software RAID1
> with LVM on top of the RAID. But I cannot imagine that this would do
> any harm to Ceph's OSD creation.
>
> Does anyone have an idea what might be wrong?
>
> Thanks for hints
> Rainer
>
> root@ceph1:~# wipefs -fa /dev/sdg
> root@ceph1:~# ceph-volume lvm prepare --bluestore --data /dev/sdg
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> -i - osd new 14d041d6-0beb-4056-8df2-3920e2febce0
> Running command: /sbin/vgcreate --force --yes
> ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b /dev/sdg
>  stdout: Physical volume "/dev/sdg" successfully created.
>  stdout: Volume group "ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b"
> successfully created
> Running command: /sbin/lvcreate --yes -l 100%FREE -n
> osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b
>  stdout: Logical volume "osd-block-14d041d6-0beb-4056-8df2-3920e2febce0"
> created.
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> --> Absolute path not found for executable: restorecon
> --> Ensure $PATH environment variable contains common executable locations
> Running command: /bin/chown -h ceph:ceph
>
> /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> Running command: /bin/chown -R ceph:ceph /dev/dm-8
> Running command: /bin/ln -s
>
> /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> /var/lib/ceph/osd/ceph-0/block
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
>  stderr: got monmap epoch 1
> Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
> --create-keyring --name osd.0 --add-key
> AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ==
>  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
> added entity osd.0 auth auth(auid = 18446744073709551615
> key=AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ== with 0 caps)
> Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
> Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
> Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
> bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap
> --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid
> 14d041d6-0beb-4056-8df2-3920e2febce0 --setuser ceph --setgroup ceph
>  stderr: 2019-02-14 13:45:54.788 7f3fcecb3240 -1
> bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
>  stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In
> function 'virtual int KernelDevice::read(uint64_t, uint64_t,
> ceph::bufferlist*, IOContext*, bool)' thread 7f3fcecb3240 time
> 2019-02-14 13:45:54.841130
>  stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
> FAILED assert((uint64_t)r == len)
>  stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
> mimic (stable)
>  stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int,
> char const*)+0x102) [0x7f3fc60d33e2]
>  stderr: 2: (()+0x26d5a7) [0x7f3fc60d35a7]
>  stderr: 3: (KernelDevice::read(unsigned long, unsigned long,
> ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
>  stderr: 4: 

[ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-14 Thread Rainer Krienke
Hi,

I am quite new to Ceph and am just trying to set up a Ceph cluster.
Initially I used ceph-deploy for this, but when I tried to create a
BlueStore OSD, ceph-deploy failed. Next I tried the direct way on one
of the OSD nodes, using ceph-volume to create the OSD, but this also
fails. Below you can see what ceph-volume says.

I ensured that there was no leftover LVM VG or LV on disk sdg before I
started the OSD creation for this disk. The very same error also
happens on other disks, not just /dev/sdg. All the disks are 4 TB in
size, the Linux system is Ubuntu 18.04, and Ceph is installed in
version 13.2.4-1bionic from this repo:
https://download.ceph.com/debian-mimic.

There is a VG with two LVs on the system for the Ubuntu system itself,
which is installed on two separate disks configured as software RAID1
with LVM on top of the RAID. But I cannot imagine that this would do
any harm to Ceph's OSD creation.

Does anyone have an idea what might be wrong?

Thanks for hints
Rainer

root@ceph1:~# wipefs -fa /dev/sdg
root@ceph1:~# ceph-volume lvm prepare --bluestore --data /dev/sdg
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
-i - osd new 14d041d6-0beb-4056-8df2-3920e2febce0
Running command: /sbin/vgcreate --force --yes
ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b /dev/sdg
 stdout: Physical volume "/dev/sdg" successfully created.
 stdout: Volume group "ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b"
successfully created
Running command: /sbin/lvcreate --yes -l 100%FREE -n
osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b
 stdout: Logical volume "osd-block-14d041d6-0beb-4056-8df2-3920e2febce0"
created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -h ceph:ceph
/dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
Running command: /bin/chown -R ceph:ceph /dev/dm-8
Running command: /bin/ln -s
/dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
/var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
--create-keyring --name osd.0 --add-key
AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth auth(auid = 18446744073709551615
key=AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ== with 0 caps)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap
--keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid
14d041d6-0beb-4056-8df2-3920e2febce0 --setuser ceph --setgroup ceph
 stderr: 2019-02-14 13:45:54.788 7f3fcecb3240 -1
bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: In
function 'virtual int KernelDevice::read(uint64_t, uint64_t,
ceph::bufferlist*, IOContext*, bool)' thread 7f3fcecb3240 time
2019-02-14 13:45:54.841130
 stderr: /build/ceph-13.2.4/src/os/bluestore/KernelDevice.cc: 821:
FAILED assert((uint64_t)r == len)
 stderr: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int,
char const*)+0x102) [0x7f3fc60d33e2]
 stderr: 2: (()+0x26d5a7) [0x7f3fc60d35a7]
 stderr: 3: (KernelDevice::read(unsigned long, unsigned long,
ceph::buffer::list*, IOContext*, bool)+0x4a7) [0x561371346817]
 stderr: 4: (BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, unsigned long, unsigned long,
ceph::buffer::list*, char*)+0x435) [0x5613713065c5]
 stderr: 5: (BlueFS::_replay(bool, bool)+0x214) [0x56137130c434]
 stderr: 6: (BlueFS::mount()+0x1f1) [0x561371310c81]
 stderr: 7: (BlueStore::_open_db(bool, bool)+0x17cd) [0x56137123704d]
 stderr: 8: (BlueStore::mkfs()+0x805) [0x561371267fe5]
 stderr: 9: (OSD::mkfs(CephContext*, ObjectStore*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, uuid_d, int)+0x1b0) [0x561370e10480]
 stderr: 10: (main()+0x4222) [0x561370cf7462]
 stderr: 11: (__libc_start_main()+0xe7) [0x7f3fc3695b97]
 stderr: 12: (_start()+0x2a) [0x561370dc095a]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>`
is needed to interpret this.
 stderr: 2019-02-14 13:45:54.840 7f3fcecb3240 -1