[Expired for curtin (Ubuntu) because there has been no activity for 60
days.]
** Changed in: curtin (Ubuntu)
Status: Incomplete => Expired
[Expired for linux (Ubuntu) because there has been no activity for 60
days.]
** Changed in: linux (Ubuntu)
Status: Incomplete => Expired
> This is in an integration lab, so these hosts (including MAAS) are stopped,
> MAAS is reinstalled, and the systems are redeployed without any release
> or option to wipe during a MAAS release.
> Then MAAS deploys Bionic on these hosts thinking they are completely new
> systems, but in reality they still carry the previous deployment's data.
Ryan,
From the logs, the concern is this "Device or resource busy" message:
Running command ['lvremove', '--force', '--force', 'vgk/sdklv'] with allowed return codes [0] (capture=False)
device-mapper: remove ioctl on (253:5) failed: Device or resource busy
Logical volume "sdklv"
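For reference, a quick way to see what is still holding the dm node open at that point (a sketch; the 253:5 major:minor comes from the log above, everything else is generic):
dmsetup info -c | grep sdklv        # open_count > 0 means something still has the node open
ls /sys/dev/block/253:5/holders/    # kernel-side holders of the dm device
udevadm settle                      # wait for in-flight udev events before retrying lvremove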
>
> Ryan,
> We believe this is a bug as we expect curtin to wipe the disks. In this
> case it's failing to wipe the disks and occasionally that causes issues
> with our automation deploying ceph on those disks.
I'm still confused about what actual error you believe is happening.
Note
** Attachment added: "curtin-install-cfg.yaml"
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+attachment/5351450/+files/curtin-install-cfg.yaml
** Attachment added: "curtin-install.log"
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+attachment/5351448/+files/curtin-install.log
Ryan,
We believe this is a bug, as we expect curtin to wipe the disks. In this
case it's failing to wipe the disks, and occasionally that causes issues with
our automation deploying ceph on those disks. This may be more of an issue
with LVM and a race condition when trying to wipe all of the volumes in
quick succession.
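If it is a udev race, a shell-level approximation of a mitigation (a sketch, not curtin's actual code; vgk is the volume group from the logs) would be to let udev drain its event queue before each remove:
for lv in $(lvs --noheadings -o lv_path vgk); do
  udevadm settle                    # let udev close any transient opens first
  lvremove --force --force "$lv"
done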
During a clear-holders operation we do not need to catch any failure;
we're attempting to destroy the devices in question. The destruction of
a device is explicitly requested in the config via a wipe: value[1]
present on one or more devices that are members of the LV.
1.
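For context, a wipe request in a curtin storage config looks roughly like this (a hypothetical fragment modelled on the attached curtin-install-cfg.yaml; the id, path, and wipe mode are illustrative):
storage:
  config:
    - id: disk-sdk
      type: disk
      path: /dev/sdk
      ptable: gpt
      wipe: superblock    # explicitly asks curtin to destroy existing metadata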
** Also affects: linux (Ubuntu)
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1871874
Title:
lvremove occasionally fails on nodes with multiple volumes
I was able to reproduce this with a VM deployed by MAAS. I created a VM
and added 26 disks to it using virsh (NOTE: I use ZFS volumes for my
disks):
for i in {a..z}; do sudo zfs create -s -V 30G rpool/libvirt/maas-node-20$i; done
for i in {a..z}; do virsh attach-disk maas-node-20 /dev/zvol/rpool/libvirt/maas-node-20$i sd$i; done
# LVM(8)
DIAGNOSTICS
All tools return a status code of zero on success or non-zero on failure.
The non-zero codes distinguish only between the broad categories of
unrecognised commands, problems processing the command line arguments and
any other failures. As LVM remains under active development, the code used
in a specific case occasionally changes between releases.
The above was for focal ^
In Xenial:
#define ECMD_PROCESSED 1
#define ENO_SUCH_CMD 2
#define EINVALID_CMD_LINE 3
#define ECMD_FAILED 5
In Bionic:
#define ECMD_PROCESSED 1
#define ENO_SUCH_CMD 2
#define EINVALID_CMD_LINE 3
#define ECMD_FAILED 5
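So the device-mapper "busy" line goes to stderr, but what matters to curtin is the exit status. A quick way to check which case you are hitting (a sketch; vgk/sdklv is the LV from the logs above):
lvremove --force --force vgk/sdklv
echo "lvremove exit status: $?"     # 0 = removed despite the warning; 5 (ECMD_FAILED) = it really failed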