I very much agree with xnox (it's a duplicate) and with Dan (there is
nothing for curtin to do).

curtin-generated dname rules rely on the /dev/bcache/by-uuid/*
symlinks, which are currently broken per
https://bugs.launchpad.net/ubuntu/+source/linux-signed/+bug/1861941;
at this time that points to an issue in udev itself (the kernel emits
all of the correct uevents we expect).
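
For anyone triaging an affected system, a quick way to confirm whether
the links are missing (device names and UUIDs below are placeholders,
not taken from this report):

  ls -l /dev/bcache/by-uuid/               # should list <uuid> -> ../../bcacheN links
  ls -l /dev/disk/by-dname/                # curtin's dname symlinks
  readlink -f /dev/disk/by-dname/bcache2   # should resolve to a real /dev/bcacheN node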

And as James' workaround shows, it's *not* always happening: a rescan
can "restore" the links, but that's not 100% reliable.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1878752

Title:
  vgcreate fails on /dev/disk/by-dname block devices

Status in OpenStack ceph-osd charm:
  New
Status in curtin package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  New

Bug description:
  Ubuntu Focal, OpenStack Charmers Next Charms.

  juju run-action --wait ceph-osd/0 add-disk osd-devices=/dev/disk/by-dname/bcache2

  unit-ceph-osd-0:
    UnitId: ceph-osd/0
    id: "5"
    message: exit status 1
    results:
      ReturnCode: 1
      Stderr: |
        partx: /dev/disk/by-dname/bcache2: failed to read partition table
          Failed to find physical volume "/dev/bcache1".
          Failed to find physical volume "/dev/bcache1".
          Device /dev/disk/by-dname/bcache2 not found.
        Traceback (most recent call last):
          File "/var/lib/juju/agents/unit-ceph-osd-0/charm/actions/add-disk", line 79, in <module>
            request = add_device(request=request,
          File "/var/lib/juju/agents/unit-ceph-osd-0/charm/actions/add-disk", line 34, in add_device
            charms_ceph.utils.osdize(device_path, hookenv.config('osd-format'),
          File "lib/charms_ceph/utils.py", line 1497, in osdize
            osdize_dev(dev, osd_format, osd_journal,
          File "lib/charms_ceph/utils.py", line 1570, in osdize_dev
            cmd = _ceph_volume(dev,
          File "lib/charms_ceph/utils.py", line 1705, in _ceph_volume
            cmd.append(_allocate_logical_volume(dev=dev,
          File "lib/charms_ceph/utils.py", line 1965, in _allocate_logical_volume
            lvm.create_lvm_volume_group(vg_name, pv_dev)
          File "hooks/charmhelpers/contrib/storage/linux/lvm.py", line 104, in create_lvm_volume_group
            check_call(['vgcreate', volume_group, block_device])
          File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
            raise CalledProcessError(retcode, cmd)
        subprocess.CalledProcessError: Command '['vgcreate', 'ceph-911bc34b-4634-4ebd-a055-876b978d0b0a', '/dev/disk/by-dname/bcache2']' returned non-zero exit status 5.
      Stdout: |2
          Physical volume "/dev/disk/by-dname/bcache2" successfully created.
    status: failed
    timing:
      completed: 2020-05-15 06:04:41 +0000 UTC
      enqueued: 2020-05-15 06:04:30 +0000 UTC
      started: 2020-05-15 06:04:39 +0000 UTC

  The same action on the /dev/bcacheX device succeeds, which looks like
  some sort of behaviour break in Ubuntu.
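
  For anyone reproducing this outside of Juju, the failing call reduces to
  the vgcreate invocation from the traceback above (the VG name here is
  shortened for illustration, and /dev/bcacheN stands for whatever node the
  dname symlink resolves to):

    vgcreate ceph-test /dev/disk/by-dname/bcache2   # fails with exit status 5
    vgcreate ceph-test /dev/bcacheN                 # succeeds against the real node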

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-osd/+bug/1878752/+subscriptions
