** Package changed: systemd (Ubuntu) => ceph (Ubuntu)
--
https://bugs.launchpad.net/bugs/1828617
Title:
Hosts randomly 'losing' disks, breaking ceph-osd service enumeration
Thanks for all the details.
I need to confirm this, but I think the block.db and block.wal symlinks
are created as a result of 'ceph-volume lvm prepare --bluestore --data
<device> --block.wal <device> --block.db <device>'.
That's coded in the ceph-osd charm around here:
https://opendev.org/openstack/charm-ceph-
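For reference, a minimal sketch of what that invocation and its side
effects look like; the device paths below are hypothetical, not taken
from the affected hosts:

    # Illustrative prepare call; device paths are assumptions
    ceph-volume lvm prepare --bluestore \
        --data /dev/sdb \
        --block.wal /dev/nvme0n1p1 \
        --block.db /dev/nvme0n1p2

    # ceph-volume records the WAL/DB locations as symlinks in the OSD
    # data directory, e.g.:
    #   /var/lib/ceph/osd/ceph-11/block.wal -> /dev/nvme0n1p1
    #   /var/lib/ceph/osd/ceph-11/block.db  -> /dev/nvme0n1p2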
udevadm info -e >/tmp/1828617-2.out
~# ls -l /var/lib/ceph/osd/ceph*
-rw------- 1 ceph ceph 69 May 21 08:44 /var/lib/ceph/osd/ceph.client.osd-upgrade.keyring
/var/lib/ceph/osd/ceph-11:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block ->
journalctl --no-pager -lu systemd-udevd.service >/tmp/1828617-1.out
Hostname obfuscated
lsblk:
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0
Andrey, I don't know if you saw James' comment, as yours may have
coincided with it, but if you can get the ceph-osd package version that
would be helpful. Thanks!
--
Yes, it is the latest; the cluster is being re-deployed as part of the
Bootstack handover.
Corey,
The bug you point to fixes the ordering of ceph and udev. Here, however,
it seems udev can't create any devices, because they don't exist at the
moment udev runs - when the host boots and settles down - there
Please can you confirm which version of the ceph-osd package you have
installed; older versions rely on a charm-shipped udev ruleset, rather
than it being provided by the packaging.
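For anyone following along, the installed version can be pulled with a
standard dpkg query (nothing bug-specific here):

    # Show the installed version of the ceph-osd package
    dpkg-query -W -f='${Package} ${Version}\n' ceph-osd
    # or, equivalently:
    dpkg -l ceph-osd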
--
This feels similar to
https://bugs.launchpad.net/charm-ceph-osd/+bug/1812925. First question:
are you running with the latest stable charms, which have the fix for
that bug?
--
The ceph-osd package provides udev rules which should switch the owner
of all Ceph-related LVM VGs to ceph:ceph.
# OSD LVM layout example
# VG prefix: ceph-
# LV prefix: osd-
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="disk", \
  ENV{DM_LV_NAME}=="osd-*", \
  ENV{DM_VG_NAME}=="ceph-*", \
  OWNER="ceph", GROUP="ceph", MODE="660"
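A quick way to check whether that rule fired for a given device
(standard udevadm usage; the dm-0 node name is illustrative):

    # Inspect the udev properties of the device-mapper node backing an OSD
    udevadm info --query=property /dev/dm-0 | grep -E 'DM_(VG|LV)_NAME'
    # Dry-run the rules against it to see what would match
    udevadm test /sys/class/block/dm-0 2>&1 | grep -i ceph
    # And confirm the resulting ownership
    ls -l /dev/dm-0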
The by-dname udev rules are created by MAAS/curtin as part of the server
install, I think.
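For illustration, a by-dname rule in the shape curtin typically writes
it; the serial number, rule filename, and dname here are hypothetical:

    # /etc/udev/rules.d/osd-disk-1.rules (written by curtin; values illustrative)
    SUBSYSTEM=="block", ACTION=="add|change", ENV{DEVTYPE}=="disk", \
        ENV{ID_SERIAL}=="XYZ123_SERIAL", SYMLINK+="disk/by-dname/osd-disk-1"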
--
Steve,
It is MAAS that creates these udev rules. We requested this feature to be
implemented in order to be able to use persistent names in further service
configuration (using templating). We couldn't go with /dev/sdX names, as
they may change after a reboot, and can't use WWN names as they
> LVM module is supposed to create PVs from devices using the links in
> /dev/disk/by-dname/
> folder that are created by udev.
Created by udev how? disk/by-dname is not part of the hierarchy that is
populated by the standard udev rules, nor is it created by lvm2. Is
there something in the
Just one update: if I change the ownership of the created symlink
(chown -h), the OSD will actually start.
After rebooting, however, I found that the links I had made had gone
again and the whole process needed repeating in order to start the OSD.
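In other words, a temporary workaround along these lines (the OSD id 11
is taken from the listing above; the symlink paths are assumptions):

    # Re-own the WAL/DB symlinks themselves (-h acts on the link, not its target)
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-11/block.db
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-11/block.wal
    systemctl start ceph-osd@11
    # Note: as described above, this does not survive a reboot.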
--
I'm seeing this in a slightly different manner, on Bionic/Queens.
We have our LVs encrypted (thanks, Vault), and rebooting a host fairly
consistently results in at least one OSD not returning. The LVs appear
in the list; however, the difference between a working and a non-working
OSD is the lack of
** Tags added: canonical-bootstack
--
Title:
Hosts randomly 'losing' disks, breaking ceph-osd service enumeration
This manifests itself as the following, as reported by lsblk(1). Note
the missing Ceph LVM volume on the 6th NVMe disk:
$ cat sos_commands/block/lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda
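A quick way to spot such a missing OSD volume on an affected host
(standard tooling, nothing bug-specific):

    # List block devices with their type; Ceph LVs show up as TYPE 'lvm'
    lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
    # Cross-check against what ceph-volume believes should exist
    ceph-volume lvm list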
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: systemd (Ubuntu)
Status: New => Confirmed