Re: pvscan in udev rules - buster vs bullseye

2022-11-25 Thomas Lange
> On Wed, 7 Sep 2022 09:21:46, Markus Rexhepi-Lindberg said:

> Hi,
> I was affected by the same issue and the suggested patch fixed it.
> Thank you Andrew.

> Can we add this to FAI? I assume it can be added to fai-make-nfsroot.
The next FAI version (6.0) will include this.
See this commit; the syntax error in it has already been fixed:
https://github.com/faiproject/fai/commit/64f85a332864b64b449043199c499f6e43cf5f42


My plan is to release FAI 6.0 before the end of this year.
-- 
regards Thomas


Re: pvscan in udev rules - buster vs bullseye

2022-09-07 Markus Rexhepi-Lindberg
Hi,

I was affected by the same issue and the suggested patch fixed it.
Thank you Andrew.

Can we add this to FAI? I assume it can be added to fai-make-nfsroot.

Cheers!

--
Markus


pvscan in udev rules - buster vs bullseye

2022-09-06 andrew bezella
hello -

i had been experiencing a problem trying to use a bullseye netboot to
reinstall a server's os.  the same configuration worked with a buster
netboot.

when a logical volume exists on a metadevice, that lv was being
activated soon after the completion of `mdadm --assemble --scan
--config=/tmp/fai/mdadm-from-examine.conf`.  this caused the subsequent
`mdadm -W --stop` loop to fail when it reached that md:
(CMD) mdadm -W --stop /dev/md5 1> /tmp/RhKvizyXZk 2> /tmp/DrTvcNhaf6
Executing: mdadm -W --stop /dev/md5
(STDERR) mdadm: Cannot get exclusive access to /dev/md5:Perhaps a
running process, mounted filesystem or active volume group?
mdadm -W --stop /dev/md5 had exit code 1
(STDERR) mdadm: Cannot get exclusive access to /dev/md5:Perhaps a
running process, mounted filesystem or active volume group?
Command had non-zero exit code
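
to confirm that it is the auto-activated lv holding the array, something
along these lines shows the stack (a sketch; the vg name is a placeholder
and the output will vary by layout):

  lsblk /dev/md5          # shows the lv sitting on top of the md
  dmsetup info -c         # lists active device-mapper (lvm) targets
  vgchange -an <vgname>   # deactivating the vg releases /dev/md5 for --stop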

eventually i found the udev rule that triggers the difference.  to
revert to the older behavior i copied /lib/udev/rules.d/69-lvm-metad.rules
into /etc/udev/rules.d and applied the following patch:
--- /srv/fai/nfsroot/bullseye-amd64/etc/udev/rules.d/69-lvm-metad.rules.orig 2021-02-22 13:39:14.0 -0800
+++ /srv/fai/nfsroot/bullseye-amd64/etc/udev/rules.d/69-lvm-metad.rules 2022-09-01 19:22:52.426117170 -0700
@@ -75,8 +75,7 @@

 ENV{SYSTEMD_READY}="1"

-TEST!="/run/systemd/system", GOTO="direct_pvscan"
-TEST=="/run/systemd/system", GOTO="systemd_background"
+GOTO="systemd_background"

 LABEL="systemd_background"
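
applied against the nfsroot, the manual workaround amounts to something
like this (a sketch; the patch file name here is made up):

  cd /srv/fai/nfsroot/bullseye-amd64
  # a rule file in etc/udev/rules.d overrides the one in lib/udev/rules.d
  cp lib/udev/rules.d/69-lvm-metad.rules etc/udev/rules.d/
  patch etc/udev/rules.d/69-lvm-metad.rules < 69-lvm-metad.rules.patch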

further down in the rules file it is noted that the direct_pvscan mode
is unused and should be removed.  but since there is no systemd in
fai's bullseye nfsroot, direct_pvscan is currently what gets selected.
in buster the method for invoking pvscan is apparently selected at
build time and defaults to systemd_background.  if/when FAI migrates to
systemd this may raise its head again.
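
the TEST rules removed above perform the usual systemd-detection check;
the shell equivalent (illustrative only) is:

  # /run/systemd/system exists only when systemd is running as init;
  # it is absent in fai's bullseye nfsroot, hence direct_pvscan
  [ -d /run/systemd/system ] && echo systemd_background || echo direct_pvscan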

hope this helps if someone else has a similar issue.

andy

-- 
andrew bezella 
internet archive
# config file for setup-storage
#
# disabling both -c (mount-count-dependent) and -i (time-dependent) checking
#

disk_config disk1 disklabel:gpt-bios align-at:1M

primary -   32G -   -
primary -   32G -   -
primary -   8G  -   -

disk_config disk2 sameas:disk1

disk_config raid fstabkey:uuid

raid1   /       disk1.1,disk2.1 ext4    defaults,errors=remount-ro      mdcreateopts="--metadata=1.2 --assume-clean --bitmap=internal" createopts="-G 256 -L root" tuneopts="-c 0 -i 0"
raid1   swap    disk1.2,disk2.2 swap    sw      mdcreateopts="--metadata=1.2 --assume-clean --bitmap=internal"
raid1   /tmp    disk1.3,disk2.3 ext4    defaults,nosuid,nodev,noatime   mdcreateopts="--metadata=1.2 --assume-clean --bitmap=internal" createopts="-G 256 -L tmp" tuneopts="-c 0 -i 0"
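
# note (a sketch, not fai's verbatim behavior): setup-storage passes
# tuneopts to tune2fs after creating each ext4 filesystem, so the
# tuneopts above amount to roughly:
#   tune2fs -c 0 -i 0 /dev/md0    # md device name illustrative
# which disables both mount-count- and interval-triggered fsck, per the
# comment at the top of this file.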