[Expired for linux (Ubuntu) because there has been no activity for 60
days.]
** Changed in: linux (Ubuntu)
Status: Incomplete => Expired
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1871874
Title:
lvremove occasionally fails on nodes with multiple volumes and curtin
does not catch the failure
Status in curtin package in Ubuntu:
Expired
Status in linux package in Ubuntu:
Expired
Bug description:
For example:
Wiping lvm logical volume: /dev/ceph-db-wal-dev-sdc/ceph-db-dev-sdi
wiping 1M on /dev/ceph-db-wal-dev-sdc/ceph-db-dev-sdi at offsets [0, -1048576]
using "lvremove" on ceph-db-wal-dev-sdc/ceph-db-dev-sdi
Running command ['lvremove', '--force', '--force',
'ceph-db-wal-dev-sdc/ceph-db-dev-sdi'] with allowed return codes [0]
(capture=False)
device-mapper: remove ioctl on (253:14) failed: Device or resource busy
Logical volume "ceph-db-dev-sdi" successfully removed
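The log above shows why curtin misses this: the device-mapper remove ioctl fails, but lvremove itself still exits 0, so a check on the allowed return codes alone cannot catch it. A minimal sketch of stderr-scanning detection (hypothetical helper, not curtin's actual code):

```python
import subprocess

def dm_remove_failed(output):
    """Detect the silent device-mapper failure in lvremove output."""
    return "remove ioctl" in output and "failed" in output

def lvremove_checked(vg_lv):
    """Run lvremove and raise if the dm remove ioctl errored,
    even when lvremove exits 0.  Hypothetical helper, not curtin code."""
    proc = subprocess.run(
        ["lvremove", "--force", "--force", vg_lv],
        capture_output=True, text=True,
    )
    combined = proc.stdout + proc.stderr
    if proc.returncode != 0 or dm_remove_failed(combined):
        raise RuntimeError(f"lvremove did not cleanly remove {vg_lv}:\n{combined}")
```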
On a node with 10 disks configured as follows:
/dev/sda2 /
/dev/sda1 /boot
/dev/sda3 /var/log
/dev/sda5 /var/crash
/dev/sda6 /var/lib/openstack-helm
/dev/sda7 /var
/dev/sdj1 /srv
sdb and sdc are used for BlueStore WAL and DB
sdd, sde, sdf: ceph OSDs, using sdb
sdg, sdh, sdi: ceph OSDs, using sdc
Across multiple servers this happens occasionally, with various disks
affected. It looks like this may be a race condition, possibly in lvm,
as curtin is wiping multiple volumes just before lvm fails.
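If the "Device or resource busy" error is a transient holder (e.g. udev still probing the dm node), retrying after the udev queue settles may let the remove complete. A workaround sketch, not a confirmed fix; `retry` is a hypothetical helper, not part of curtin:

```shell
#!/bin/sh
# retry N cmd args...: run cmd up to N times, settling udev between attempts.
retry() {
    attempts="$1"; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0                    # command succeeded
        udevadm settle 2>/dev/null || true  # wait for pending udev events
        sleep 1
        i=$((i + 1))
    done
    return 1                                # still failing after all attempts
}

# e.g. retry 5 lvremove --force --force ceph-db-wal-dev-sdc/ceph-db-dev-sdi
```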
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+subscriptions
--
Mailing list: https://launchpad.net/~kernel-packages
Post to : [email protected]
Unsubscribe : https://launchpad.net/~kernel-packages
More help : https://help.launchpad.net/ListHelp