Package: lvm2
Version: 2.03.11-2.1
Severity: wishlist
Hi,
On one machine, I have two VGs (aya+raid1 and aya+raid6).
While a pvmove was in progress in aya+raid6, I tried
to remove an unused PV from aya+raid1.
vgreduce worked correctly, and pvremove also worked
correctly, but since the PV was an mdadm RAID1 volume,
I did not succeed in stopping the array, because pvmove
kept an open file descriptor on it:
# vgreduce aya+raid1 /dev/md/aya-raid1-B
Removed "/dev/md123" from volume group "aya+raid1"
# pvremove /dev/md/aya-raid1-B
Labels on physical volume "/dev/md/aya-raid1-B" successfully wiped.
# mdadm -S /dev/md/aya-raid1-B
mdadm: failed to stop array /dev/md/aya-raid1-B: Device or resource busy
Perhaps a running process, mounted filesystem or active volume group?
# lsof /dev/md/aya-raid1-B
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/2001/gvfs
Output information may be incomplete.
lsof: WARNING: can't stat() fuse.portal file system /run/user/2001/doc
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
pvmove 688244 root 64r BLK 9,123 0t0 573 /dev/md/../md123
Looking at the file descriptors of pvmove, we can see:
# ls -l /proc/688244/fd
total 0
lrwx------ 1 root root 64 Jan 19 20:19 0 -> /dev/pts/25
lrwx------ 1 root root 64 Jan 19 20:19 1 -> /dev/pts/25
lrwx------ 1 root root 64 Jan 19 20:19 2 -> /dev/pts/25
lrwx------ 1 root root 64 Jan 19 20:19 3 -> 'socket:[15831408]'
lr-x------ 1 root root 64 Jan 19 22:42 42 -> /dev/sde1
lrwx------ 1 root root 64 Jan 19 22:42 5 -> /dev/md119
lr-x------ 1 root root 64 Jan 19 22:42 59 -> /dev/md118
lr-x------ 1 root root 64 Jan 19 22:42 62 -> /dev/md121
lr-x------ 1 root root 64 Jan 19 22:42 63 -> /dev/md122
lr-x------ 1 root root 64 Jan 19 22:42 64 -> /dev/md123
lr-x------ 1 root root 64 Jan 19 22:42 65 -> /dev/md124
lr-x------ 1 root root 64 Jan 19 22:42 66 -> /dev/md125
lrwx------ 1 root root 64 Jan 19 22:42 7 -> /dev/md120
lrwx------ 1 root root 64 Jan 19 22:42 8 -> /dev/md126
lrwx------ 1 root root 64 Jan 19 22:42 9 -> /dev/md127
Starting from the fifth entry (sde1 and the following ones),
these are all the PVs present on this system.
The four opened read-write (i.e. md119, md120, md126 and md127)
are the ones in the VG where the pvmove is running (even
though only md127 and md120 are actually involved in the pvmove).
All the others are PVs from the other VG, not involved in
the running pvmove. They are opened read-only, but even
that prevents doing certain things with them: if the PV is
a disk partition, it cannot be removed (the kernel reports
that it cannot reload the new partition table); if it is an
mdadm RAID device, as here, the array cannot be stopped.
It seems to me that there is no reason to keep open
file descriptors on the PVs of other VGs while running
a pvmove. It would be even better if only the PVs involved
in the affected VG were held open (though there may be a
reason for the current behaviour).
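The read-only vs read-write distinction above can be read mechanically from the symlink permissions in /proc/<pid>/fd ('lr-x' = opened read-only, 'lrwx' = opened read-write). A small self-contained sketch, using the current shell and ordinary files in place of pvmove and the real PVs (fd numbers 8 and 9 and the /tmp path are purely illustrative):

```shell
# Open one fd read-only and one read-write in this shell; these stand in
# for pvmove's 'lr-x' descriptors (PVs of the other VG) and its 'lrwx'
# descriptors (PVs of the VG being moved).
exec 8< /dev/null
exec 9<> "/tmp/fd-demo.$$"

# The symlink mode in /proc/<pid>/fd encodes the open mode of each fd.
ro=$(ls -l /proc/$$/fd | awk '$1 ~ /^lr-x/ {print $9}')
rw=$(ls -l /proc/$$/fd | awk '$1 ~ /^lrwx/ {print $9}')
echo "read-only fds:  $ro"
echo "read-write fds: $rw"

# Clean up the demonstration fds and file.
exec 8<&- 9<&-
rm -f "/tmp/fd-demo.$$"
```

On the real system the same listing, run against pvmove's PID, reproduces the classification shown above.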
Note: the workaround is easy: just wait for the pvmove to finish.
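For completeness, a scripted version of that workaround (just a sketch: pvmove is matched by process name, the polling interval is arbitrary, and the device path is the one from this report):

```shell
# Wait until no pvmove process remains, then it is safe to stop the array.
# /dev/md/aya-raid1-B is the device from the report above.
DEV=/dev/md/aya-raid1-B
while pgrep -x pvmove >/dev/null; do
    sleep 10
done
echo "no pvmove running; it should now be safe to run: mdadm -S $DEV"
```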
Regards
Vincent
-- System Information:
Debian Release: 11.6
APT prefers stable-security
APT policy: (990, 'stable-security'), (990, 'stable'), (500,
'stable-updates'), (500, 'oldstable-updates'), (500, 'oldstable'), (200,
'unstable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 6.0.0-0.deb11.6-amd64 (SMP w/32 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=C.UTF-8, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/bash
Init: systemd (via /run/systemd/system)
Versions of packages lvm2 depends on:
ii dmeventd 2:1.02.175-2.1
ii dmsetup 2:1.02.175-2.1
ii init-system-helpers 1.60
ii libaio1 0.3.112-9
ii libblkid1 2.36.1-8+deb11u1
ii libc6 2.31-13+deb11u5
ii libdevmapper-event1.02.1 2:1.02.175-2.1
ii libedit2 3.1-20191231-2+b1
ii libselinux1 3.1-3
ii libsystemd0 247.3-7+deb11u1
ii libudev1 247.3-7+deb11u1
ii lsb-base 11.1.0
Versions of packages lvm2 recommends:
ii thin-provisioning-tools 0.9.0-1
lvm2 suggests no packages.