Hi
As indicated in direct conversation, the changes in 2.02.126-3 seem
to avoid the problem for me, both on lvm2-only and mdadm+lvm2 systems
using initramfs-tools.
Regards
Stefan Lippers-Hollmann
On Fri, Jul 31, 2015 at 08:08:38AM +0200, Stefan Lippers-Hollmann wrote:
It took many reboots (50), but here is a reproduction with the
official Debian kernel - gzipped logs attached.
Okay, thank you. However, it just shows that udev never processes the
add event for sda2, so it never runs pvscan.
Hi
On 2015-08-01, Stefan Lippers-Hollmann wrote:
On 2015-07-31, Michael Biebl wrote:
[...]
Bastian built lvm2 on amd64 on a non-systemd system, it seems. This
results in /lib/udev/rules.d/69-lvm-metad.rules looking like this:
...
ENV{SYSTEMD_READY}=1
RUN+=/sbin/lvm pvscan
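For comparison, when lvm2 is built against systemd, the same rules file does
not RUN pvscan directly but hands it off to a templated unit, roughly like
this (quoted from memory of the upstream rule, so treat the exact line as an
assumption):

  ACTION=="add|change", ENV{SYSTEMD_WANTS}+="lvm2-pvscan@$major:$minor.service"

That way udevd never has to host the long-running scan itself.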
On Mon, 27 Jul 2015, Michael Biebl wrote:
Not sure if that is happening here. But fixing [2] and making sure
pvscan is run via /bin/systemd-run looks like something that should be
done in any case.
Michael
[2] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=783182
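A rule that wraps the scan in systemd-run, as suggested above, would look
roughly like this (the pvscan arguments here are an assumption for
illustration, not the final Debian patch):

  RUN+="/bin/systemd-run /sbin/lvm pvscan --cache --activate ay $major:$minor"

systemd-run starts the scan as a transient unit, so udevd's limits on
long-running RUN children no longer apply to it.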
Hi
On 2015-07-31, Michael Biebl wrote:
On Fri, 31 Jul 2015 08:08:38 +0200 Stefan Lippers-Hollmann
s@gmx.de wrote:
Hi
On 2015-07-31, Stefan Lippers-Hollmann wrote:
On 2015-07-31, Stefan Lippers-Hollmann wrote:
On 2015-07-25, Bastian Blank wrote:
[...]
The attached bootlog
On 2015-07-31 10:54, Michael Biebl wrote:
If I replace /lib/udev/rules.d/69-lvm-metad.rules with the attached
file, my problems with LVM on top of RAID1 are gone.
Grr, nvm. While testing, I actually had use_lvmetad disabled. Still
getting failures, even with the modified
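For anyone following along, use_lvmetad is the /etc/lvm/lvm.conf switch that
makes these udev-triggered pvscan --cache calls matter in the first place; a
minimal excerpt:

  # /etc/lvm/lvm.conf
  global {
      use_lvmetad = 1    # 1 = LVM metadata served by lvmetad, populated via pvscan --cache
  }

With use_lvmetad = 0 the lvmetad/pvscan path is bypassed entirely, which is
why the earlier test was not conclusive.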
Hi
On 2015-07-31, Stefan Lippers-Hollmann wrote:
On 2015-07-31, Stefan Lippers-Hollmann wrote:
On 2015-07-25, Bastian Blank wrote:
[...]
The attached bootlog (serial console udev.log-priority=7) has
unfortunately not been recorded with an official Debian kernel, but
I've been able to
On Fri, 31 Jul 2015 08:08:38 +0200 Stefan Lippers-Hollmann
s@gmx.de wrote:
Hi
On 2015-07-31, Stefan Lippers-Hollmann wrote:
On 2015-07-31, Stefan Lippers-Hollmann wrote:
On 2015-07-25, Bastian Blank wrote:
[...]
The attached bootlog (serial console udev.log-priority=7) has
Hi
On 2015-07-25, Bastian Blank wrote:
On Tue, Jul 21, 2015 at 09:11:57PM +0200, Bastian Blank wrote:
So the next step could be debugging udev and see what it calls and when.
Please provide the complete udev db (udevadm info -e) and udev debugging
attached (in the broken state) as
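The requested data can be collected roughly like this (the output file name
is just an example; use the log level asked for above):

  udevadm info -e > /tmp/udev-db.txt    # complete udev database, taken in the broken state

and then reboot with udev debug output enabled on the kernel command line,
e.g. udev.log-priority=8.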
Hi
On 2015-07-31, Stefan Lippers-Hollmann wrote:
[...]
challenger:~# pvs
  PV         VG            Fmt  Attr PSize   PFree
  /dev/sda2  vg-challenger lvm2 a--  831,49g 251,49g
challenger:~# vgs
  VG            #PV #LV #SN Attr   VSize   VFree
  vg-challenger   1   4   0 wz--n- 831,49g
Hi
On 2015-07-31, Stefan Lippers-Hollmann wrote:
On 2015-07-25, Bastian Blank wrote:
output (udev.log-priority=8 at the kernel command line) from a failed
boot.
[...]
Loading, please wait...
invalid udev.log
[    2.343952] random: systemd-udevd urandom read with 4 bits of entropy
Hi
Just confirming that there's no change with src:lvm2 2.02.126-1; the
problem is still present.
Regards
Stefan Lippers-Hollmann
On Mon, Jul 27, 2015 at 05:40:39PM +0200, Michael Biebl wrote:
udev under systemd doesn't allow long-running processes that background
themselves to be started from udev rules; such processes are killed by udevd [4].
Not sure if that is happening here. But fixing [2] and making sure
pvscan is run via
On 2015-07-28 08:41, Bastian Blank wrote:
On Mon, Jul 27, 2015 at 05:40:39PM +0200, Michael Biebl wrote:
udev under systemd doesn't allow long-running processes that background
themselves to be started from udev rules; such processes are killed by udevd [4].
Not sure if that is happening here. But
On 07/25/2015 09:34 PM, Bastian Blank wrote:
Hi Peter
Currently I think that all these problems are related to missing or
broken pvscan --cache calls.
I found one problematic case regarding coldplug; I believe Red Hat no
longer uses this code path. In none of my tests does the artificial
On 07/27/2015 04:12 PM, Peter Rajnoha wrote:
It's the OPTIONS+=db_persist that needs to be used in the initramfs
for MD devices. This marks the udev db records related to the device with
a sticky bit, which is then recognized by the udev code, and the udev
db state is not cleaned up in that case:
For
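An initramfs rule of the kind meant here would have roughly this shape
(illustrative only, not the actual rule shipped for mdadm):

  SUBSYSTEM=="block", KERNEL=="md*", ACTION=="add|change", OPTIONS+="db_persist"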
On 2015-07-27 17:40, Michael Biebl wrote:
On 2015-07-27 07:56, Bastian Blank wrote:
On Sun, Jul 26, 2015 at 12:24:43AM +0200, Michael Biebl wrote:
Fwiw, I could easily and reliably reproduce this problem in a VM with
LVM (guided setup with separate /, /home, /tmp, /var) on top of mdadm
On 2015-07-27 07:56, Bastian Blank wrote:
On Sun, Jul 26, 2015 at 12:24:43AM +0200, Michael Biebl wrote:
Fwiw, I could easily and reliably reproduce this problem in a VM with
LVM (guided setup with separate /, /home, /tmp, /var) on top of mdadm
RAID1 with a minimal standard installation.
Just noticed this option is not yet documented!
I've filed a report for the udev folks to mention this in the man page
and describe it a bit, since it's quite important and yet it's hidden
functionality if not documented:
https://bugzilla.redhat.com/show_bug.cgi?id=1247210
On 07/27/2015 03:57 PM, Peter Rajnoha wrote:
That's how it was supposed to work. I can imagine the problematic
part here may be the transfer of the udev database state from initramfs
to root fs - there is a special way that udev uses to mark devices
so that the udev db state is kept from
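One way to check whether that marking survived the switch to the real root
fs (the device number 9:0 is just an example for an MD array):

  ls -l /run/udev/data/b9:0    # a sticky bit ('t') in the mode means db_persist took effect

Records without the sticky bit can be removed when the udev database is
cleaned up during the hand-over from the initramfs.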
Hi
Just for testing, I've tried using dracut as the provider for
linux-initramfs-tool instead of initramfs-tools. The results were
positive: around 30 successful reboots. Going back to initramfs-tools
exposed the original problem right away again.
I don't use any special initramfs-tools
On Sat, Jul 25, 2015 at 04:15:58PM -0400, Rick Thomas wrote:
OK. We have a tentative diagnosis. That's good. Is there something I can
do to verify for sure that this is what's actually happening and give us a
clue as to what we need to do to fix it?
In
On Sun, Jul 26, 2015 at 12:24:43AM +0200, Michael Biebl wrote:
Fwiw, I could easily and reliably reproduce this problem in a VM with
LVM (guided setup with separate /, /home, /tmp, /var) on top of mdadm
RAID1 with a minimal standard installation.
There are at least two distinct problems. The
On Sat, 25 Jul 2015 14:27:03 +0200 Bastian Blank wa...@debian.org wrote:
On Tue, Jul 21, 2015 at 09:11:57PM +0200, Bastian Blank wrote:
So the next step could be debugging udev and see what it calls and when.
Please provide the complete udev db (udevadm info -e) and udev debugging
output
On Tue, Jul 21, 2015 at 07:05:42PM -0700, Rick Thomas wrote:
I created a virtual machine with VMWare running on my Mac. It has a virtual
DVD-drive (loaded with the Jessie 8.1.0 amd64 install image) and three
virtual disk drives. One virtual disk is a small (1 GB) drive to hold /boot.
The
Hi Peter
Currently I think that all these problems are related to missing or
broken pvscan --cache calls.
I found one problematic case regarding coldplug; I believe Red Hat no
longer uses this code path. In none of my tests does the artificial add
event trigger pvscan as it should. The udev
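The "artificial add event" is the coldplug replay done at boot, something
along these lines (the exact options used by the init scripts are an
assumption):

  udevadm trigger --action=add --subsystem-match=block
  udevadm settle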
On Jul 25, 2015, at 3:21 PM, Bastian Blank wrote:
On Tue, Jul 21, 2015 at 07:05:42PM -0700, Rick Thomas wrote:
I created a virtual machine with VMWare running on my Mac. It has a virtual
DVD-drive (loaded with the Jessie 8.1.0 amd64 install image) and three
virtual disk drives. One
On Tue, Jul 21, 2015 at 09:11:57PM +0200, Bastian Blank wrote:
So the next step could be debugging udev and see what it calls and when.
Please provide the complete udev db (udevadm info -e) and udev debugging
output (udev.log-priority=8 at the kernel command line) from a failed
boot.
As this
On Jul 21, 2015, at 12:11 PM, Bastian Blank wa...@debian.org wrote:
However I'm still unable to reproduce the problem
without a sledgehammer.
I reproduced the problem in a tiny test system as follows:
I created a virtual machine with VMWare running on my Mac. It has a virtual
DVD-drive
On Tue, Jul 21, 2015 at 08:37:16PM +0200, Bastian Blank wrote:
Yeah. pvscan should be run by udev for each new device. For some
reason this either doesn't work, breaks in the middle, or who knows
what happens.
Okay, at least I can prove that removing pvscan breaks everything with
similar effects.
On Mon, Jul 20, 2015 at 11:11:43AM +1200, Ben Caradoc-Davies wrote:
Booting succeeds for me with / and /home on separate LVs in a single
crypto-luks PV (see lsblk below) with lvm2 2.02.122-2 amd64. However, after
updating to the latest lvm2, pvscan, pvs, vgs, and lvs all hang
indefinitely
Hi
On 2015-07-20, Ben Caradoc-Davies wrote:
On Mon, 20 Jul 2015 01:16:12 +0200 Stefan Lippers-Hollmann
s@gmx.de wrote:
Interestingly enough, systems using an SSD for the system
mountpoints usually succeed in booting most of the time
Thanks for this observation, Stefan. My successful boots
Hi
On 2015-07-19, Bastian Blank wrote:
On Thu, Jul 09, 2015 at 05:16:57AM +0200, Stefan Lippers-Hollmann wrote:
Upgrading src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting due to
new systemd unit dependency failures regarding lvmetad when mounting
non-rootfs logical volumes.
Booting succeeds for me with / and /home on separate LVs in a single
crypto-luks PV (see lsblk below) with lvm2 2.02.122-2 amd64. However,
after updating to the latest lvm2, pvscan, pvs, vgs, and lvs all
hang indefinitely until I manually run pvscan --cache. They worked
fine with 2.02.111-2.2,
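A quick way to confirm that it is lvmetad's stale cache causing the hang
(unit names are the standard lvm2 ones):

  systemctl status lvm2-lvmetad.service lvm2-lvmetad.socket
  pvscan --cache    # repopulate lvmetad; pvs/vgs/lvs respond again afterwards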
On Mon, 20 Jul 2015 01:16:12 +0200 Stefan Lippers-Hollmann
s@gmx.de wrote:
Interestingly enough, systems using an SSD for the system
mountpoints usually succeed in booting most of the time
Thanks for this observation, Stefan. My successful boots are indeed on a
system using an SSD. I have not
On Thu, Jul 09, 2015 at 05:16:57AM +0200, Stefan Lippers-Hollmann wrote:
Upgrading src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting due to
new systemd unit dependency failures regarding lvmetad when mounting
non-rootfs logical volumes. Jumping to the emergency shell and invoking
Package: lvm2
Version: 2.02.122-1
Hi there,
I've got the same bug and I've had to downgrade the lvm2 packages too.
Note: My partitions are encrypted.
Regards,
Marcelo
--- System information. ---
Architecture: amd64
Kernel: Linux 4.0.0-2-amd64
Debian Release: stretch/sid
Package: lvm2
Version: 2.02.122-1
Severity: serious
Hi
Upgrading src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting due to
new systemd unit dependency failures regarding lvmetad when mounting
non-rootfs logical volumes. Jumping to the emergency shell and invoking
vgchange -ay and mount
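For the record, the manual recovery referred to here amounts to roughly the
following from the emergency shell (reconstructed from the report, so details
may vary):

  vgchange -ay    # activate all volume groups
  mount -a        # mount the remaining filesystems from /etc/fstab
  exit            # continue booting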