Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-08-11 Thread Stefan Lippers-Hollmann
Hi

As indicated in direct conversation, the changes in 2.02.126-3 seem
to avoid the problem for me, both on lvm2-only and mdadm+lvm2 systems
using initramfs-tools.

Regards
Stefan Lippers-Hollmann


pgp27SI_U2zmC.pgp
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-08-10 Thread Bastian Blank
On Fri, Jul 31, 2015 at 08:08:38AM +0200, Stefan Lippers-Hollmann wrote:
 It took many reboots (50), but here is a reproduction with the
 official Debian kernel - gzipped logs attached.

Okay, thank you.  However it just shows that udev never processes the
add event for sda2, so never runs pvscan at all.

Bastian

-- 
Another dream that failed.  There's nothing sadder.
-- Kirk, This side of Paradise, stardate 3417.3


-- 
To UNSUBSCRIBE, email to debian-bugs-rc-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-08-02 Thread Stefan Lippers-Hollmann
Hi

On 2015-08-01, Stefan Lippers-Hollmann wrote:
 On 2015-07-31, Michael Biebl wrote:
[...]
  Bastian built lvm2 on amd64 on a non-systemd system, it seems. This
  results in /lib/udev/rules.d/69-lvm-metad.rules looking like this:
  ...
  ENV{SYSTEMD_READY}="1"
  RUN+="/sbin/lvm pvscan --background --cache --activate ay --major $major --minor $minor", ENV{LVM_SCANNED}="1"
  ...
  
  If you build lvm2 on a systemd system, those rules look like:
  ...
  ENV{SYSTEMD_READY}="1"
  ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/bin/systemd-run /sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"
  ENV{SYSTEMD_ALIAS}="/dev/block/$major:$minor"
  ENV{ID_MODEL}="LVM PV $env{ID_FS_UUID_ENC} on /dev/$name"
  ENV{SYSTEMD_WANTS}="lvm2-pvscan@$major:$minor.service"
  
  
  If I replace /lib/udev/rules.d/69-lvm-metad.rules with the attached
  file, my problems with LVM on top of RAID1 are gone. Can you copy the
  attached file to /etc/udev/rules.d/ and test if that fixes your problem?

Just an update for the situation with lvm2 2.02.126-2:
- all affected systems are running the amd64 architecture
- all systems are up to date Debian unstable/main

using initramfs-tools 0.120:
- most systems are broken with lvm2 2.02.126-2, to varying degrees.
  The problem is apparently timing-sensitive; systems using an SSD
  for the system paths (with their dedicated volume group) are less
  likely to fail booting, but occasionally they still do break.
- doing a local bin-NMU of lvm2 2.02.126-2, in order to update
  /lib/udev/rules.d/69-lvm-metad.rules with the changes pointed out
  by Michael Biebl, helps me on all non-mdadm (lvm2-only) systems.
  Not a single failed boot on these systems so far.
- lvm2 (2.02.126-2) on top of mdadm (RAID1) fails reliably for me,
  regardless of whether I use the bin-NMU'd 69-lvm-metad.rules or
  stay on the plain lvm2 2.02.126-2; I'm aware of #793631 and just
  mention it because the update to lvm2 2.02.126-2 doesn't appear to
  make a difference.

using dracut 040+1-1:
- all lvm-only systems are booting fine, no local bin-NMU needed.
- the mdadm(RAID1)+lvm2 system is also booting reliably, no local 
  bin-NMU needed.
- no issues found with the current lvm2 and dracut (but I obviously
  don't need any special initramfs hooks/scripts)

Regards
Stefan Lippers-Hollmann


pgp69W1HNJxku.pgp
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-08-01 Thread Cristian Ionescu-Idbohrn
On Mon, 27 Jul 2015, Michael Biebl wrote:

 Not sure if that is happening here. But fixing [2] and making sure
 pvscan is run via /bin/systemd-run looks like it should be done in
 any case.

 Michael


 [2] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=783182

Just a minor point here.

I have 2 systems running lvm over raid0.

On one of them (non-systemd init unstable box, but with systemd and
udev 223-2 installed) there's no /bin/systemd-run:

# which systemd-run
/usr/bin/systemd-run

Hardly possible to use _before_ /usr (separate partition) is mounted.
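A minimal shell sketch of the point above: a udev rule hard-codes one path, but the binary may live in either /bin or /usr/bin depending on the system, and /usr/bin is useless before a separate /usr is mounted (the helper name is mine, for illustration):

```shell
#!/bin/sh
# Hypothetical helper: resolve a tool across the two locations a udev rule
# might hard-code, instead of assuming one fixed path.
find_tool() {
    for p in "/bin/$1" "/usr/bin/$1"; do
        [ -x "$p" ] && { printf '%s\n' "$p"; return 0; }
    done
    return 1
}
find_tool sh    # prints /bin/sh on most systems
```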

On another system (stretch (104 upgraded, 0 newly installed, 0 to
remove and 141 not upgraded), uptime: 13:09:13 up 840 days, 23:13, 67
  ^^^
users), the systemd package is not even installed.  udev 222-2 and
lvm2 2.02.111-2.2 are installed.

systemd-run is referred to in /lib/udev/rules.d/69-lvm-metad.rules:

ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/bin/systemd-run /sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"


Cheers,

-- 
Cristian





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-31 Thread Stefan Lippers-Hollmann
Hi

On 2015-07-31, Michael Biebl wrote:
 On Fri, 31 Jul 2015 08:08:38 +0200 Stefan Lippers-Hollmann
 s@gmx.de wrote:
  Hi
  
  On 2015-07-31, Stefan Lippers-Hollmann wrote:
   On 2015-07-31, Stefan Lippers-Hollmann wrote:
On 2015-07-25, Bastian Blank wrote:
  [...]
   The attached bootlog (serial console & udev.log-priority=7) has
   unfortunately not been recorded with an official Debian kernel, but
   I've been able to reproduce it with 4.0.0-2-amd64 as well. Just that I
   missed increasing the scrollback buffer in time and wasn't able to 
   fetch a full bootlog then - and, regardless of the kernel in use, 
   reproducing takes quite many reboots (too many for now) with full 
   logging enabled.
  
  It took many reboots (50), but here is a reproduction with the
  official Debian kernel - gzipped logs attached.
 
 Stefan, you are running amd64, right?

Yes, all affected systems are running unstable/amd64.

While I still use 3 non-64-bit-capable i386 systems, I haven't powered
them up often enough to be 100% sure about their status in this regard.

 Bastian built lvm2 on amd64 on a non-systemd system, it seems. This
 results in /lib/udev/rules.d/69-lvm-metad.rules looking like this:
 ...
 ENV{SYSTEMD_READY}="1"
 RUN+="/sbin/lvm pvscan --background --cache --activate ay --major $major --minor $minor", ENV{LVM_SCANNED}="1"
 ...
 
 If you build lvm2 on a systemd system, those rules look like:
 ...
 ENV{SYSTEMD_READY}="1"
 ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/bin/systemd-run /sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"
 ENV{SYSTEMD_ALIAS}="/dev/block/$major:$minor"
 ENV{ID_MODEL}="LVM PV $env{ID_FS_UUID_ENC} on /dev/$name"
 ENV{SYSTEMD_WANTS}="lvm2-pvscan@$major:$minor.service"
 
 
 If I replace /lib/udev/rules.d/69-lvm-metad.rules with the attached
 file, my problems with LVM on top of RAID1 are gone. Can you copy the
 attached file to /etc/udev/rules.d/ and test if that fixes your problem?
[...]

I've done a local bin-NMU (in a systemd-using chroot, so I ended up
with exactly the same /lib/udev/rules.d/69-lvm-metad.rules you got),
as that was easier to deploy and test locally - and it indeed seems
to fix the problem. Both the nforce4 system and the ivy-bridge system
used for reporting this bug have gone through 20 successful reboots
each, and all other affected systems I've tested seem to be fixed as
well (none of them has mdadm installed; I haven't been able to
test the single system using mdadm+lvm2 so far).

Thanks a lot
Stefan Lippers-Hollmann


pgplCcSnfP0pG.pgp
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-31 Thread Michael Biebl
Am 31.07.2015 um 10:54 schrieb Michael Biebl:

 If I replace /lib/udev/rules.d/69-lvm-metad.rules with the attached
 file, my problems with LVM on top of RAID1 are gone. 

Grr, nvm. While testing, I actually had use_lvmetad disabled. Still
getting failures, even with the modified 69-lvm-metad.rules. So this was
a red herring.

That said, it still looks like this specific rule should have a runtime,
not a compile-time, switch.
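One possible shape for such a runtime switch - a sketch, not the eventual fix: it keys on the presence of /run/systemd/system, the conventional udev-rules test for a booted systemd:

```
# Hypothetical runtime branch for 69-lvm-metad.rules: on systemd systems,
# hand pvscan to systemd-run; otherwise background it as before.
TEST=="/run/systemd/system", RUN+="/bin/systemd-run /sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"
RUN+="/sbin/lvm pvscan --background --cache --activate ay --major $major --minor $minor"
```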


-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?



signature.asc
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-31 Thread Stefan Lippers-Hollmann
Hi

On 2015-07-31, Stefan Lippers-Hollmann wrote:
 On 2015-07-31, Stefan Lippers-Hollmann wrote:
  On 2015-07-25, Bastian Blank wrote:
[...]
 The attached bootlog (serial console & udev.log-priority=7) has
 unfortunately not been recorded with an official Debian kernel, but
 I've been able to reproduce it with 4.0.0-2-amd64 as well. Just that I
 missed increasing the scrollback buffer in time and wasn't able to 
 fetch a full bootlog then - and, regardless of the kernel in use, 
 reproducing takes quite many reboots (too many for now) with full 
 logging enabled.

It took many reboots (50), but here is a reproduction with the
official Debian kernel - gzipped logs attached.

Regards
Stefan Lippers-Hollmann



boot-serialcon.log.gz
Description: application/gzip


dmesg.log.gz
Description: application/gzip


journalctl.log.gz
Description: application/gzip


udevadm-info-post-fix.log.gz
Description: application/gzip


udevadm-info-pre-fix.log.gz
Description: application/gzip


pgp3POJUb5_nF.pgp
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-31 Thread Michael Biebl

On Fri, 31 Jul 2015 08:08:38 +0200 Stefan Lippers-Hollmann
s@gmx.de wrote:
 Hi
 
 On 2015-07-31, Stefan Lippers-Hollmann wrote:
  On 2015-07-31, Stefan Lippers-Hollmann wrote:
   On 2015-07-25, Bastian Blank wrote:
 [...]
  The attached bootlog (serial console & udev.log-priority=7) has
  unfortunately not been recorded with an official Debian kernel, but
  I've been able to reproduce it with 4.0.0-2-amd64 as well. Just that I
  missed increasing the scrollback buffer in time and wasn't able to 
  fetch a full bootlog then - and, regardless of the kernel in use, 
  reproducing takes quite many reboots (too many for now) with full 
  logging enabled.
 
 It took many reboots (50), but here is a reproduction with the
 official Debian kernel - gzipped logs attached.

Stefan, you are running amd64, right?

Bastian built lvm2 on amd64 on a non-systemd system, it seems. This
results in /lib/udev/rules.d/69-lvm-metad.rules looking like this:
...
ENV{SYSTEMD_READY}="1"
RUN+="/sbin/lvm pvscan --background --cache --activate ay --major $major --minor $minor", ENV{LVM_SCANNED}="1"
...

If you build lvm2 on a systemd system, those rules look like:
...
ENV{SYSTEMD_READY}="1"
ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/bin/systemd-run /sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"
ENV{SYSTEMD_ALIAS}="/dev/block/$major:$minor"
ENV{ID_MODEL}="LVM PV $env{ID_FS_UUID_ENC} on /dev/$name"
ENV{SYSTEMD_WANTS}="lvm2-pvscan@$major:$minor.service"


If I replace /lib/udev/rules.d/69-lvm-metad.rules with the attached
file, my problems with LVM on top of RAID1 are gone. Can you copy the
attached file to /etc/udev/rules.d/ and test if that fixes your problem?

We likely need a runtime check, not a compile-time check, in
69-lvm-metad.rules to decide which rules to run.



-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?
# Copyright (C) 2012 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.

# Udev rules for LVM.
#
# Scan all block devices having a PV label for LVM metadata.
# Store this information in LVMetaD (the LVM metadata daemon) and maintain LVM
# metadata state for improved performance by avoiding further scans while
# running subsequent LVM commands or while using lvm2app library.
# Also, notify LVMetaD about any relevant block device removal.
#
# This rule is essential for having the information in LVMetaD up-to-date.
# It also requires blkid to be called on block devices before so only devices
# used as LVM PVs are processed (ID_FS_TYPE=LVM2_member or LVM1_member).

SUBSYSTEM!="block", GOTO="lvm_end"


ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}=="1", GOTO="lvm_end"

# If the PV label got lost, inform lvmetad immediately.
# Detect the lost PV label by comparing previous ID_FS_TYPE value with current one.
ENV{.ID_FS_TYPE_NEW}="$env{ID_FS_TYPE}"
IMPORT{db}="ID_FS_TYPE"
ENV{ID_FS_TYPE}=="LVM2_member|LVM1_member", ENV{.ID_FS_TYPE_NEW}!="LVM2_member|LVM1_member", ENV{LVM_PV_GONE}="1"
ENV{ID_FS_TYPE}="$env{.ID_FS_TYPE_NEW}"
ENV{LVM_PV_GONE}=="1", GOTO="lvm_scan"

# Only process devices already marked as a PV - this requires blkid to be called before.
ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
ENV{DM_MULTIPATH_DEVICE_PATH}=="1", GOTO="lvm_end"

# Inform lvmetad about any PV that is gone.
ACTION=="remove", GOTO="lvm_scan"

# Create /dev/disk/by-id/lvm-pv-uuid-<PV_UUID> symlink for each PV
ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-id/lvm-pv-uuid-$env{ID_FS_UUID_ENC}"

# If the PV is a special device listed below, scan only if the device is
# properly activated. These devices are not usable after an ADD event,
# but they require an extra setup and they are ready after a CHANGE event.
# Also support coldplugging with ADD event but only if the device is already
# properly activated.
# This logic should be eventually moved to rules where those particular
# devices are processed primarily (MD and loop).

# DM device:
KERNEL!="dm-[0-9]*", GOTO="next"
ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", ENV{DM_ACTIVATION}=="1", GOTO="lvm_scan"
GOTO="lvm_end"

# MD device:
LABEL="next"
KERNEL!="md[0-9]*", GOTO="next"
IMPORT{db}="LVM_MD_PV_ACTIVATED"
ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"
ACTION=="change", ENV{LVM_MD_PV_ACTIVATED}!="1", TEST=="md/array_state", ENV{LVM_MD_PV_ACTIVATED}="1", GOTO="lvm_scan"
ACTION=="add", KERNEL=="md[0-9]*p[0-9]*", GOTO="lvm_scan"
ENV{LVM_MD_PV_ACTIVATED}!="1", ENV{SYSTEMD_READY}="0"
GOTO="lvm_end"

# Loop device:
LABEL="next"
KERNEL!="loop[0-9]*", GOTO="next"
ACTION=="add", ENV{LVM_LOOP_PV_ACTIVATED}=="1", GOTO="lvm_scan"
ACTION=="change", ENV{LVM_LOOP_PV_ACTIVATED}!="1", TEST=="loop/backing_file", ENV{LVM_LOOP_PV_ACTIVATED}="1", GOTO="lvm_scan"
ENV{LVM_LOOP_PV_ACTIVATED}!="1", ENV{SYSTEMD_READY}="0"
GOTO="lvm_end"

# If the PV is not a special device listed above, scan only after device addition (ADD event)
LABEL="next"
ACTION!="add", GOTO="lvm_end"

LABEL="lvm_scan"

# The table below summarises the situations in which we reach the LABEL="lvm_scan".
# Marked by X, X* means only if the special dev is properly set up.
# The 

Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-30 Thread Stefan Lippers-Hollmann
Hi

On 2015-07-25, Bastian Blank wrote:
 On Tue, Jul 21, 2015 at 09:11:57PM +0200, Bastian Blank wrote:
  So the next step could be debugging udev and see what it calls and when.
 
 Please provide the complete udev db (udevadm info -e) and udev debugging

attached (in the broken state) as nforce4.log.gz (gzipped).

 output (udev.log-priority=8 at the kernel command line) from a failed
 boot.

I've finally found a system where I can grab this information via 
serial console (using the serial console makes it less likely to 
trigger, but it still happens):

### this is different hardware than the one used for the previous reports ###

Loading Linux 4.0.0-2-amd64 ...
Loading initial ramdisk ...
[0.00] Initializing cgroup subsys cpuset
[0.00] Initializing cgroup subsys cpu
[0.00] Initializing cgroup subsys cpuacct
[0.00] Linux version 4.0.0-2-amd64 (debian-ker...@lists.debian.org) 
(gcc version 4.9.3 (Debian 4.9.3-2) ) #1 SMP Debian 4.0.8-2 (2015-07-22)
[0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-4.0.0-2-amd64 
root=/dev/mapper/vg--challenger-debian64 ro console=tty0 console=ttyS0,115200n8 
udev.log-priority=8
[0.00] e820: BIOS-provided physical RAM map:
[0.00] BIOS-e820: [mem 0x-0x0009f3ff] usable
[0.00] BIOS-e820: [mem 0x0009f400-0x0009] reserved
[0.00] BIOS-e820: [mem 0x000f-0x000f] reserved
[0.00] BIOS-e820: [mem 0x0010-0xd7fe] usable
[0.00] BIOS-e820: [mem 0xd7ff-0xd7ff2fff] ACPI NVS
[0.00] BIOS-e820: [mem 0xd7ff3000-0xd7ff] ACPI data
[0.00] BIOS-e820: [mem 0xe000-0xefff] reserved
[0.00] BIOS-e820: [mem 0xfec0-0x] reserved
[0.00] BIOS-e820: [mem 0x0001-0x000127ff] usable
[0.00] NX (Execute Disable) protection: active
[0.00] SMBIOS 2.3 present.
[0.00] AGP: No AGP bridge found
[0.00] e820: last_pfn = 0x128000 max_arch_pfn = 0x4
[0.00] PAT configuration [0-7]: WB  WC  UC- UC  WB  WC  UC- UC
[0.00] e820: last_pfn = 0xd7ff0 max_arch_pfn = 0x4
[0.00] found SMP MP-table at [mem 0x000f52f0-0x000f52ff] mapped at 
[880f52f0]
[0.00] init_memory_mapping: [mem 0x-0x000f]
[0.00] init_memory_mapping: [mem 0x127e0-0x127ff]
[0.00] init_memory_mapping: [mem 0x12000-0x127df]
[0.00] init_memory_mapping: [mem 0x1-0x11fff]
[0.00] init_memory_mapping: [mem 0x0010-0xd7fe]
[0.00] RAMDISK: [mem 0x35d6-0x36ea7fff]
[0.00] ACPI: Early table checksum verification disabled
[0.00] ACPI: RSDP 0x000F91D0 14 (v00 Nvidia)
[0.00] ACPI: RSDT 0xD7FF3040 34 (v01 Nvidia AWRDACPI 
42302E31 AWRD )
[0.00] ACPI: FACP 0xD7FF30C0 74 (v01 Nvidia AWRDACPI 
42302E31 AWRD )
[0.00] ACPI: DSDT 0xD7FF3180 0062AC (v01 NVIDIA AWRDACPI 
1000 MSFT 010E)
[0.00] ACPI: FACS 0xD7FF 40
[0.00] ACPI: SSDT 0xD7FF9540 0001CA (v01 PTLTD  POWERNOW 
0001  LTP 0001)
[0.00] ACPI: MCFG 0xD7FF9780 3C (v01 Nvidia AWRDACPI 
42302E31 AWRD )
[0.00] ACPI: APIC 0xD7FF9480 72 (v01 Nvidia AWRDACPI 
42302E31 AWRD )
[0.00] Scanning NUMA topology in Northbridge 24
[0.00] No NUMA configuration found
[0.00] Faking a node at [mem 0x-0x000127ff]
[0.00] NODE_DATA(0) allocated [mem 0x127ff8000-0x127ffbfff]
[0.00] Zone ranges:
[0.00]   DMA  [mem 0x1000-0x00ff]
[0.00]   DMA32[mem 0x0100-0x]
[0.00]   Normal   [mem 0x0001-0x000127ff]
[0.00] Movable zone start for each node
[0.00] Early memory node ranges
[0.00]   node   0: [mem 0x1000-0x0009efff]
[0.00]   node   0: [mem 0x0010-0xd7fe]
[0.00]   node   0: [mem 0x0001-0x000127ff]
[0.00] Initmem setup node 0 [mem 0x1000-0x000127ff]
[0.00] Nvidia board detected. Ignoring ACPI timer override.
[0.00] If you got timer trouble try acpi_use_timer_override
[0.00] ACPI: PM-Timer IO Port: 0x4008
[0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[0.00] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[0.00] ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
[0.00] ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
[0.00] ACPI: IOAPIC (id[0x02] address[0xfec0] gsi_base[0])
[0.00] IOAPIC[0]: apic_id 2, version 17, address 0xfec0, GSI 0-23
[0.00] ACPI: INT_SRC_OVR 

Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-30 Thread Stefan Lippers-Hollmann
Hi

On 2015-07-31, Stefan Lippers-Hollmann wrote:
[...]
 challenger:~# pvs
   PV VGFmt  Attr PSize   PFree  
   /dev/sda2  vg-challenger lvm2 a--  831,49g 251,49g
 challenger:~# vgs
   VG#PV #LV #SN Attr   VSize   VFree  
   vg-challenger   1   4   0 wz--n- 831,49g 251,49g
 challenger:~# lvs
   LV   VGAttr   LSize   Pool Origin Data%  Meta%  Move 
 Log Cpy%Sync Convert
   debian64 vg-challenger -wi-ao  10,00g   
  
   home vg-challenger -wi--- 310,00g   
  
   storage  vg-challenger -wi--- 250,00g   
  
   var  vg-challenger -wi---  10,00g   
  
 challenger:~# lsblk
 NAMEMAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
 fd0   2:01 4K  0 disk 
 sda   8:00 931,5G  0 disk 
 ├─sda18:10   100G  0 part 
 └─sda28:20 831,5G  0 part 
   └─vg--challenger-debian64 254:0010G  0 lvm  /
 sr0  11:01  1024M  0 rom  
 sr1  11:11  1024M  0 rom
[...]

and now the same from the, previously failed, boot that has been
'encouraged' to find the missing logical volumes via:

 challenger:~# vgchange -ay
   4 logical volume(s) in volume group vg-challenger now active
 [  525.672908] EXT4-fs (dm-3): barriers disabled)
 [  525.731022] EXT4-fs (dm-1): barriers disabled
 [  525.733624] EXT4-fs (dm-3): mounted filesystem with ordered data mode. 
 Opts: barrier=0
 [  525.779844] EXT4-fs (dm-1): mounted filesystem with ordered data mode. 
 Opts: barrier=0
 [  525.783631] EXT4-fs (dm-2): barriers disabled
 [  525.808851] EXT4-fs (dm-2): mounted filesystem with ordered data mode. 
 Opts: barrier=0
 
 challenger:~# mount -a
 challenger:~# exit
[...]

challenger:~# pvs
  PV VGFmt  Attr PSize   PFree  
  /dev/sda2  vg-challenger lvm2 a--  831,49g 251,49g
challenger:~# vgs
  VG#PV #LV #SN Attr   VSize   VFree  
  vg-challenger   1   4   0 wz--n- 831,49g 251,49g
challenger:~# lvs
  LV   VGAttr   LSize   Pool Origin Data%  Meta%  Move Log 
Cpy%Sync Convert
  debian64 vg-challenger -wi-ao  10,00g 
   
  home vg-challenger -wi-ao 310,00g 
   
  storage  vg-challenger -wi-ao 250,00g 
   
  var  vg-challenger -wi-ao  10,00g 
   
challenger:~# lsblk
NAMEMAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0   2:01 4K  0 disk 
sda   8:00 931,5G  0 disk 
├─sda18:10   100G  0 part 
└─sda28:20 831,5G  0 part 
  ├─vg--challenger-debian64 254:0010G  0 lvm  /
  ├─vg--challenger-var  254:1010G  0 lvm  /var
  ├─vg--challenger-home 254:20   310G  0 lvm  /home
  └─vg--challenger-storage  254:30   250G  0 lvm  /srv/storage
sr0  11:01  1024M  0 rom  
sr1  11:11  1024M  0 rom

challenger:~# cat /proc/mounts 
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=495884,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=810532k,mode=755 0 0
/dev/dm-0 / ext4 rw,noatime,nobarrier,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup 
rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup 
rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup 
rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/perf_event cgroup 
rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc 

Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-30 Thread Stefan Lippers-Hollmann
Hi

On 2015-07-31, Stefan Lippers-Hollmann wrote:
 On 2015-07-25, Bastian Blank wrote:
  output (udev.log-priority=8 at the kernel command line) from a failed
  boot.
[...]
 Loading, please wait...
 invalid udev.log-priority ignored: 8
 [2.343952] random: systemd-udevd urandom read with 4 bits of entropy available
[...]

Well, obviously (or rather not quite that obviously), the maximum log 
level is 7.

systemd-223/src/libudev/libudev-util.c:
int util_log_priority(const char *priority)
{
[...]
if (prio >= 0 && prio <= 7)
return prio;
else
return -ERANGE;
[...]
}
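The same range check, re-expressed as a small shell function (my own paraphrase of the C above, for quick experimentation):

```shell
#!/bin/sh
# Accept only the syslog priorities 0..7, mirroring util_log_priority();
# anything else (like the 8 passed on the kernel command line) is rejected.
valid_log_priority() {
    case "$1" in
        [0-7]) return 0 ;;
        *)     return 1 ;;
    esac
}
valid_log_priority 7 && echo "7 accepted"
valid_log_priority 8 || echo "8 rejected"
```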

However it seems to be even harder to reproduce with udev.log-priority=7
set. While it triggers in roughly 85% of all reboots on this system 
without serial console and special logging parameters, it takes quite a 
few reboots to reproduce with serial console and udev.log-priority=7.

The attached bootlog (serial console & udev.log-priority=7) has
unfortunately not been recorded with an official Debian kernel, but
I've been able to reproduce it with 4.0.0-2-amd64 as well. Just that I
missed increasing the scrollback buffer in time and wasn't able to 
fetch a full bootlog then - and, regardless of the kernel in use, 
reproducing takes quite many reboots (too many for now) with full 
logging enabled.

Regards
Stefan Lippers-Hollmann


boot.log.gz
Description: application/gzip


pgp8ub8TfHObx.pgp
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-29 Thread Stefan Lippers-Hollmann
Hi

Just confirming that there's no change with src:lvm2 2.02.126-1, the
problem is still present.

Regards
Stefan Lippers-Hollmann


pgpf6OM5kuNIg.pgp
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-28 Thread Bastian Blank
On Mon, Jul 27, 2015 at 05:40:39PM +0200, Michael Biebl wrote:
 udev under systemd doesn't allow long-running processes which background
 themselves to be started from udev rules; such processes are killed by
 udevd [4]. Not sure if that is happening here. But fixing [2] and making
 sure pvscan is run via /bin/systemd-run looks like it should be done in
 any case.

The timeout for each event in udevd is 180 seconds and it should write
an error to the log. Even a carefully placed sleep 60 does not break
booting, however it takes a long time.

Bastian

-- 
Totally illogical, there was no chance.
-- Spock, The Galileo Seven, stardate 2822.3





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-28 Thread Michael Biebl
Am 28.07.2015 um 08:41 schrieb Bastian Blank:
 On Mon, Jul 27, 2015 at 05:40:39PM +0200, Michael Biebl wrote:
 udev under systemd doesn't allow long-running processes which background
 themselves to be started from udev rules; such processes are killed by
 udevd [4]. Not sure if that is happening here. But fixing [2] and making
 sure pvscan is run via /bin/systemd-run looks like it should be done in
 any case.
 
 The timeout for each event in udevd is 180 seconds and it should write
 an error to the log. Even a carefully placed sleep 60 does not break
 booting, however it takes a long time.

Have you tried running sleep in the background, detached from the
controlling terminal, like e.g.:

#!/bin/sh
(
sleep 10
) > /dev/null 2> /dev/null &



-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?



signature.asc
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Peter Rajnoha
On 07/25/2015 09:34 PM, Bastian Blank wrote:
 Hi Peter
 
 Currently I think that all these problems are related to missing or
 broken pvscan --cache calls.
 
 I found one problematic case regarding coldplug; I believe Red Hat no
 longer uses this code path.  In none of my tests does the artificial add
 event trigger pvscan as it should.  The udev rules test for
 LVM_MD_PV_ACTIVATED, which is never set in this case.

The MD device here is very similar to a DM device in the way it is
activated - the MD device is created first (the ADD event) and then
initialized (the CHANGE event).

So we're expecting the CHANGE event with appearing md/array_state sysfs
attribute to declare the MD as initialized (and hence marked with
LVM_MD_PV_ACTIVATED=1).

When this MD activation/initialization happens in the initramfs, the udev
database state needs to be transferred over from the initramfs to the
root fs for the MD device.

We're always doing IMPORT{db} for the LVM_MD_PV_ACTIVATED variable
so the rules can check whether the MD device is ready to use or not.

When switching to the root fs and when the coldplug is done, an
ADD event is generated for the MD device - when we have an ADD event
and at the same time we have LVM_MD_PV_ACTIVATED=1, we know this is an
artificial event (the coldplug one) and we do jump to the pvscan
in that case.

That's how it was supposed to work. I can imagine the problematic
part here may be the transfer of the udev database state from initramfs
to root fs - there is a special way that udev uses to mark devices
so that the udev db state is kept from initramfs - I need to recall
that/check that because I don't remember that method right now...

-- 
Peter





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Peter Rajnoha
On 07/27/2015 04:12 PM, Peter Rajnoha wrote:
 It's the OPTIONS+="db_persist" that needs to be used in the initramfs
 for MD devices. This marks the udev db records related to the device
 with a sticky bit, which is then recognized by the udev code, and the
 udev db state is not cleaned up in that case:

For example, dracut (the initramfs environment also used on RH systems)
has these rules to handle MD devices (it has the OPTIONS+="db_persist"):

https://github.com/haraldh/dracut/blob/master/modules.d/90mdraid/59-persistent-storage-md.rules

If you already use this in Debian and it doesn't work, it must be
a regression in some version of udev as I've already gone through
this with Harald Hoyer and Kay Sievers who take care of udev.

Simply, this is the correct sequence that should be used:

initramfs:
 - udev running in initramfs
 
 - mark records with OPTIONS+="db_persist" for devices that require it
   (currently that's MD and DM)
 - udev in initramfs stopped
 - udev database copied from initramfs to root fs

--- switch to root fs ---

 - udev running in root fs
 - udevadm info --cleanup-db (but will keep the records marked from
initramfs with the db_persist flag)
 - udevadm trigger --action=add for the coldplug
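The root-fs half of that sequence can be scripted; sketched below as a dry run (commands echoed rather than executed, since they need a real udev and root privileges):

```shell
#!/bin/sh
# Echo the root-fs steps from the sequence above instead of running them;
# replace the run() helper with direct invocation on a test system.
run() { echo "+ $*"; }
run udevadm info --cleanup-db        # keeps entries marked db_persist in initramfs
run udevadm trigger --action=add     # coldplug: replay add events for all devices
```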

-- 
Peter





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Michael Biebl
Am 27.07.2015 um 17:40 schrieb Michael Biebl:
 Am 27.07.2015 um 07:56 schrieb Bastian Blank:
 On Sun, Jul 26, 2015 at 12:24:43AM +0200, Michael Biebl wrote:
 Fwiw, I could easily and reliably reproduce this problem in a VM with
 LVM (guided setup with separate /, /home, /tmp, /var) on top of mdadm
 RAID1 with a minimal standard installation.

 There are at least two distinct problems.  The cause for the
 reproducible problem with MD is known.  No cause is known for
 the more random blockage.
 
 It looks like [1] is another duplicate of this bug.

Bad quoting on my part. I wanted to say that [1] is another duplicate
of the LVM-on-MD problem, not the other one.

 [1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774082


-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?



signature.asc
Description: OpenPGP digital signature


Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Michael Biebl
Am 27.07.2015 um 07:56 schrieb Bastian Blank:
 On Sun, Jul 26, 2015 at 12:24:43AM +0200, Michael Biebl wrote:
 Fwiw, I could easily and reliably reproduce this problem in a VM with
 LVM (guided setup with separate /, /home, /tmp, /var) on top of mdadm
 RAID1 with a minimal standard installation.
 
 There are at least two distinct problems.  The cause for the
 reproducible problem with MD is known.  No cause is known for
 the more random blockage.

It looks like [1] is another duplicate of this bug.

 I see you already got the information you requested from Stefan, I can
 provide further diagnostics as well, if you want me to.
 
 If you have a more or less reproducible _non_-MD case, then I could use
 this information.

I tried to make lvmetad work a while ago and ran into [2] and [3].
Looking at /lib/udev/rules.d/69-lvm-metad.rules and rules/Makefile.in of
the current package, it looks like lvm2 was not compiled with
UDEV_SYSTEMD_BACKGROUND_JOBS = yes. The 69-lvm-metad.rules file on amd64 has
RUN+="/sbin/lvm pvscan --background --cache --activate ay --major $major --minor $minor", ENV{LVM_SCANNED}="1"


udev under systemd doesn't allow long-running processes that background
themselves to be started from udev rules; such processes are killed by
udevd [4]. Not sure if that is happening here, but fixing [2] and making
sure pvscan is run via /bin/systemd-run looks like it should be done in
any case.
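
For reference, the two shapes this rule takes depending on the build
switch look roughly like this (a sketch with the double quotes, which
the list archive strips, restored; the systemd-side property names are
the ones quoted elsewhere in this thread):

```
# Without UDEV_SYSTEMD_BACKGROUND_JOBS: pvscan forks into the background
# from inside the udev RUN key; udevd under systemd may kill such
# long-running children when the event finishes.
RUN+="/sbin/lvm pvscan --background --cache --activate ay --major $major --minor $minor", ENV{LVM_SCANNED}="1"

# With UDEV_SYSTEMD_BACKGROUND_JOBS=yes: the scan is handed off to
# systemd as a unit instead of running inside the udev event.
ENV{SYSTEMD_ALIAS}="/dev/block/$major:$minor"
ENV{SYSTEMD_WANTS}="lvm2-pvscan@$major:$minor.service"
```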

Michael



[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774082
[2] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=783182
[3] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=783120
[4] http://www.freedesktop.org/software/systemd/man/udev.html#RUN{type}
-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Peter Rajnoha
Just noticed this option is not yet documented!

I've filed a report for the udev maintainers to mention this in the
man page and describe it a bit, since it's quite important and yet
remains hidden functionality if left undocumented:

https://bugzilla.redhat.com/show_bug.cgi?id=1247210





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Peter Rajnoha
On 07/27/2015 03:57 PM, Peter Rajnoha wrote:
 That's how it was supposed to work. I can imagine the problematic
 part here may be the transfer of the udev database state from initramfs
 to root fs - there is a special way that udev uses to mark devices
 so that the udev db state is kept from initramfs - I need to recall
 that/check that because I don't remember that method right now...
 

It's the OPTIONS+="db_persist" option that needs to be used in the
initramfs for MD devices. This marks the udev db records for the device
with a sticky bit, which is recognized by the udev code so that the db
state is not cleaned up in that case:

https://github.com/systemd/systemd/blob/master/src/udev/udevadm-info.c#L220

(in udevadm info --cleanup-db, records marked with the sticky bit persist)

So once this udev db state is properly handed over from the initramfs to
the root fs, the rules in 69-dm-lvm-metad.rules should work (they use
IMPORT{db}="LVM_MD_PV_ACTIVATED" to retrieve the state from previous runs,
which should then fire pvscan properly on coldplug):

  IMPORT{db}="LVM_MD_PV_ACTIVATED"
  ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"
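
A minimal sketch of what such an initramfs-side rule could look like
(hypothetical fragment, not the actual rule shipped by Debian's mdadm;
the match keys are illustrative):

```
# Hypothetical initramfs rule: mark assembled MD arrays so their udev db
# records survive `udevadm info --cleanup-db` after the switch to the
# root fs, letting IMPORT{db} retrieve LVM_MD_PV_ACTIVATED later.
SUBSYSTEM=="block", KERNEL=="md[0-9]*", OPTIONS+="db_persist"
```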
-- 
Peter





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Stefan Lippers-Hollmann
Hi

Just for testing, I've tried using dracut as provider for 
linux-initramfs-tool instead of initramfs-tools. The results were
positive, around 30 successful reboots - going back to initramfs-tools
exposed the original problem right away again.

I don't use any special initramfs-tools configuration or strange
hooks/scripts:

$ dpkg -S /etc/initramfs-tools/
initramfs-tools: /etc/initramfs-tools

$ dpkg -S /usr/share/initramfs-tools/
kmod, udev, initramfs-tools, ntfs-3g, dmsetup, lvm2, intel-microcode, fuse, 
busybox: /usr/share/initramfs-tools

$ debsums -as initramfs-tools kmod udev ntfs-3g dmsetup lvm2 intel-microcode 
fuse busybox
$

Regards
Stefan Lippers-Hollmann





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Bastian Blank
On Sat, Jul 25, 2015 at 04:15:58PM -0400, Rick Thomas wrote:
 OK.  We have a tentative diagnosis.  That's good.  Is there something I can 
 do to verify for sure that this is what's actually happening and give us a 
 clue as to what we need to do to fix it?

In /lib/udev/rules.d/63-md-raid-arrays.rules, replace the existing blkid
call with:
| IMPORT{builtin}="blkid"

In /lib/udev/rules.d/69-dm-lvm-metad.rules (69-dm-lvm-metad.rules.in in
the source), use this diff:
--- a/udev/69-dm-lvm-metad.rules.in
+++ b/udev/69-dm-lvm-metad.rules.in
@@ -55,7 +55,7 @@ LABEL="next"
 KERNEL!="md[0-9]*", GOTO="next"
 IMPORT{db}="LVM_MD_PV_ACTIVATED"
 ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"
-ACTION=="change", ENV{LVM_MD_PV_ACTIVATED}!="1", TEST=="md/array_state", ENV{LVM_MD_PV_ACTIVATED}="1", GOTO="lvm_scan"
+ENV{LVM_MD_PV_ACTIVATED}!="1", TEST=="md/array_state", ENV{LVM_MD_PV_ACTIVATED}="1", GOTO="lvm_scan"
 ACTION=="add", KERNEL=="md[0-9]*p[0-9]*", GOTO="lvm_scan"
 ENV{LVM_MD_PV_ACTIVATED}!="1", ENV{SYSTEMD_READY}="0"
 GOTO="lvm_end"
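
One way to check on a rebooted system whether the flag actually made it
into the udev db (a sketch: /dev/md0 is an example device name, and the
property appears in the usual `E: KEY=value` udevadm output):

```shell
# Report whether udev recorded LVM_MD_PV_ACTIVATED for the MD device.
dev=/dev/md0    # example; adjust to your array
if command -v udevadm >/dev/null 2>&1 && [ -b "$dev" ]; then
  result=$(udevadm info "$dev" | grep -c 'LVM_MD_PV_ACTIVATED=1')
else
  result="skipped: udevadm or $dev not available here"
fi
echo "$result"
```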

I'll add a workaround for the missing blkid call in the next upload, as
I don't want to tie this to the mdadm fix.

Bastian

-- 
No one wants war.
-- Kirk, Errand of Mercy, stardate 3201.7





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-26 Thread Bastian Blank
On Sun, Jul 26, 2015 at 12:24:43AM +0200, Michael Biebl wrote:
 Fwiw, I could easily and reliably reproduce this problem in a VM with
 LVM (guided setup with separate /, /home, /tmp, /var) on top of mdadm
 RAID1 with a minimal standard installation.

There are at least two distinct problems.  The cause for the
reproducible problem with MD is known.  No cause is known for
the more random blockage.

 I see you already got the information you requested from Stefan, I can
 provide further diagnostics as well, if you want me to.

If you have a more or less reproducible _non_-MD case, then I could use
this information.

Bastian

-- 
Peace was the way.
-- Kirk, The City on the Edge of Forever, stardate unknown





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-25 Thread Michael Biebl
On Sat, 25 Jul 2015 14:27:03 +0200 Bastian Blank wa...@debian.org wrote:
 On Tue, Jul 21, 2015 at 09:11:57PM +0200, Bastian Blank wrote:
  So the next step could be debugging udev and see what it calls and when.
 
 Please provide the complete udev db (udevadm info -e) and udev debugging
 output (udev.log-priority=8 at the kernel command line) from a failed
 boot.
 
 As this bug only bites a small number of systems (I myself found none, I
 was only able to produce similar effects by breaking udev rules), I
 intend to downgrade this bug for now.


Fwiw, I could easily and reliably reproduce this problem in a VM with
LVM (guided setup with separate /, /home, /tmp, /var) on top of mdadm
RAID1 with a minimal standard installation.

So I fear this might actually bite quite a few people and I would
suggest keeping this bug RC for the time being.

I see you already got the information you requested from Stefan, I can
provide further diagnostics as well, if you want me to.

Regards,
Michael

-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-25 Thread Bastian Blank
On Tue, Jul 21, 2015 at 07:05:42PM -0700, Rick Thomas wrote:
 I created a virtual machine with VMWare running on my Mac.  It has a virtual 
 DVD-drive (loaded with the Jessie 8.1.0 amd64 install image) and three 
 virtual disk drives.  One virtual disk is a small (1 GB) drive to hold /boot. 
  The other two (4GB each) to be configured at installation time as a software 
 RAID0 housing a single LVM2 physical volume with three logical volumes for 
 root, home, and swap.

Okay, detection of lvm on md has two problems:
- the udev rules in mdadm break detection of lvm, and
- the lvm rules break coldplug.

Bastian

-- 
There's a way out of any cage.
-- Captain Christopher Pike, The Menagerie (The Cage),
   stardate unknown.





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-25 Thread Bastian Blank
Hi Peter

Currently I think that all this problems are related to missing or
broken pvscan --cache calls.

I found one problematic case regarding coldplug; I believe Red Hat no
longer uses this code path.  In none of my tests does the artificial add
event trigger pvscan as it should.  The udev rules test for
LVM_MD_PV_ACTIVATED, which is never set in this case.  My quick fix is
to ignore whether the event is actually a change event.

Bastian

On Sat, Jul 25, 2015 at 09:21:47PM +0200, Bastian Blank wrote:
 Okay, detection of lvm on md have two problems:
 - udev rules in mdadm breaks detection of lvm and
 - lvm rules break coldplug.

-- 
You're dead, Jim.
-- McCoy, Amok Time, stardate 3372.7





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-25 Thread Rick Thomas

On Jul 25, 2015, at 3:21 PM, Bastian Blank wrote:

 On Tue, Jul 21, 2015 at 07:05:42PM -0700, Rick Thomas wrote:
 I created a virtual machine with VMWare running on my Mac.  It has a virtual 
 DVD-drive (loaded with the Jessie 8.1.0 amd64 install image) and three 
 virtual disk drives.  One virtual disk is a small (1 GB) drive to hold 
 /boot.  The other two (4GB each) to be configured at installation time as a 
 software RAID0 housing a single LVM2 physical volume with three logical 
 volumes for root, home, and swap.
 
 Okay, detection of lvm on md have two problems:
 - udev rules in mdadm breaks detection of lvm and
 - lvm rules break coldplug.
 
 Bastian

OK.  We have a tentative diagnosis.  That's good.  Is there something I can do 
to verify for sure that this is what's actually happening and give us a clue as 
to what we need to do to fix it?

I'll do the udev stuff you requested in your previous email (I'm traveling 
right now, but I'll get to it after I return home -- the middle of next week)  
Is that enough to complete the diagnosis, or are there other tests we 
can/should do?

Enjoy!
Rick




Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-25 Thread Bastian Blank
On Tue, Jul 21, 2015 at 09:11:57PM +0200, Bastian Blank wrote:
 So the next step could be debugging udev and see what it calls and when.

Please provide the complete udev db (udevadm info -e) and udev debugging
output (udev.log-priority=8 at the kernel command line) from a failed
boot.

As this bug only bites a small number of systems (I myself found none, I
was only able to produce similar effects by breaking udev rules), I
intend to downgrade this bug for now.

Bastian

-- 
It is necessary to have purpose.
-- Alice #1, I, Mudd, stardate 4513.3





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-21 Thread Rick Thomas

On Jul 21, 2015, at 12:11 PM, Bastian Blank wa...@debian.org wrote:

  However I'm still unable to reproduce the problem
 without a sledgehammer.

I reproduced the problem in a tiny test system as follows:

I created a virtual machine with VMWare running on my Mac.  It has a virtual 
DVD-drive (loaded with the Jessie 8.1.0 amd64 install image) and three virtual 
disk drives.  One virtual disk is a small (1 GB) drive to hold /boot.  The 
other two (4GB each) to be configured at installation time as a software RAID0 
housing a single LVM2 physical volume with three logical volumes for root, 
home, and swap.

When installed with Jessie, everything works fine.

Then I did full-upgrade to Testing/Stretch.  Everything still works fine.

Then I did full-upgrade to Unstable/Sid, and it broke.

When I disabled use_lvmetad in /etc/lvm/lvm.conf and ran “update-initramfs -u”,
things went back to working.
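
The lvm.conf edit described here can be sketched like this (shown
against a throwaway copy so it is safe to run anywhere; on a real system
you would edit /etc/lvm/lvm.conf itself and then run update-initramfs -u
as root):

```shell
# Demonstrate the workaround on a stand-in copy of lvm.conf.
conf=$(mktemp)
printf 'use_lvmetad = 1\n' > "$conf"   # stand-in for the real /etc/lvm/lvm.conf
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' "$conf"
grep use_lvmetad "$conf"
# on the real system, follow up with: update-initramfs -u
```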

I don’t expect the choice of VMWare as a platform has anything to do with this 
problem, so you can probably duplicate this procedure with a different VM 
platform…

The output of lsblk looks like this:

 NAME                 MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
 fd0                    2:0    1    4K  0 disk
 sda                    8:0    0    1G  0 disk
 `-sda1                 8:1    0 1022M  0 part  /boot
 sdb                    8:16   0    4G  0 disk
 `-sdb1                 8:17   0    4G  0 part
   `-md0                9:0    0    8G  0 raid0
     |-stretch-root   253:0    0  3.7G  0 lvm   /
     |-stretch-swap   253:1    0  1.9G  0 lvm   [SWAP]
     `-stretch-home   253:2    0  2.4G  0 lvm   /home
 sdc                    8:32   0    4G  0 disk
 `-sdc1                 8:33   0    4G  0 part
   `-md0                9:0    0    8G  0 raid0
     |-stretch-root   253:0    0  3.7G  0 lvm   /
     |-stretch-swap   253:1    0  1.9G  0 lvm   [SWAP]
     `-stretch-home   253:2    0  2.4G  0 lvm   /home
 sr0                   11:0    1 1024M  0 rom



If it matters, the VM has two virtual CPUs and 2 GB of virtual RAM.

Hope it helps!
Rick




Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-21 Thread Bastian Blank
On Tue, Jul 21, 2015 at 08:37:16PM +0200, Bastian Blank wrote:
 Yeah.  pvscan should be run by udev for each new device.  For some
 reason this either don't work, breaks in the middle or no idea what
 happens.

Okay, at least I can prove that removing pvscan breaks everything with
similar effects.  However I'm still unable to reproduce the problem
without a sledgehammer.

So the next step could be debugging udev and see what it calls and when.

Bastian

-- 
Those who hate and fight must stop themselves -- otherwise it is not stopped.
-- Spock, Day of the Dove, stardate unknown





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-21 Thread Bastian Blank
On Mon, Jul 20, 2015 at 11:11:43AM +1200, Ben Caradoc-Davies wrote:
 Booting succeeds for me with / and /home on separate LVs in a single
 crypto-luks PV (see lsblk below) with lvm2 2.02.122-2 amd64. However, after
 updating to the latest lvm2, pvscan, pvs, vgs, and lvs all hang
 indefinitely until I manually run pvscan --cache. They worked fine with
 2.02.111-2.2, probably because lvmetad was not enabled.

Thanks for the information, this is finaly a clue what is going on.

 output of strace pvs while hung:

Is there a pvscan or similar task running?  This is the only time where
the state info is set this way.

 Then man lvmetad led me to pvscan --cache.

Yeah.  pvscan should be run by udev for each new device.  For some
reason this either doesn't work, breaks in the middle, or something
else entirely is happening.
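
To see which programs udev would run for a given device without
rebooting, `udevadm test` can dry-run the rules (a sketch; /block/sda is
an example devpath from this thread, and the grep just narrows the noisy
output):

```shell
# Dry-run udev rule processing for a block device and show which RUN
# keys would fire for it (pvscan should appear for an LVM PV).
if command -v udevadm >/dev/null 2>&1 && [ -d /sys/block/sda ]; then
  out=$(udevadm test --action=add /block/sda 2>&1 | grep -iE 'run|pvscan' || true)
  [ -n "$out" ] || out="no RUN keys matched"
else
  out="skipped: udevadm or /sys/block/sda not available here"
fi
echo "$out"
```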

Bastian

-- 
Oblivion together does not frighten me, beloved.
-- Thalassa (in Anne Mulhall's body), Return to Tomorrow,
   stardate 4770.3.





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-20 Thread Stefan Lippers-Hollmann
Hi

On 2015-07-20, Ben Caradoc-Davies wrote:
 On Mon, 20 Jul 2015 01:16:12 +0200 Stefan Lippers-Hollmann 
 s@gmx.de wrote:
  Interesting enough, systems using a SSD for the system
 mountpoints usually succeed booting most of the time
 
 Thanks for this observation, Stefan. My successful boots are indeed on a 
 system using an SSD. I have not yet had a failed boot with lvm2 2.02.122-2.

Actually, I have to partially withdraw that earlier conclusion: today
I did encounter one failure on a system with all logical volumes making
up the system paths on an SSD. The system in question had been booting
fine with lvm2 2.02.122-2 roughly a dozen times before, and I haven't
been able to reproduce the failure since, but it does strengthen the
hunch of a timing-related issue.

Regards
Stefan Lippers-Hollmann




Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-19 Thread Stefan Lippers-Hollmann
Hi

On 2015-07-19, Bastian Blank wrote:
 On Thu, Jul 09, 2015 at 05:16:57AM +0200, Stefan Lippers-Hollmann wrote:
  Upgrading src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting due to
  a new systemd unit dependency failures regarding lvmetad when mounting 
  non-rootfs logical volumes. Jumping to the emergency shell and invoking
  vgchange -ay and mount -a allows booting to finish.
 
 Please provide all information from the system regarding the storage.
 This includes:
 - /etc/fstab

already provided in the original submission:

# cat /etc/fstab
# /etc/fstab: filesystem table.
#
# filesystem                 mountpoint    type  options                                          dump  pass
/dev/vg-redstone/debian64    /             ext4  defaults,noatime,barrier=0                       1  1
LABEL=UEFI                   /boot/efi     vfat  auto,user,exec,nodev,nosuid,noatime              1  2

/dev/vg-redstone/swap        none          swap  sw                                               0  0

/dev/vg-redstone/var         /var          ext4  auto,user,exec,dev,noatime,barrier=0             1  2
/var/tmp                     /tmp          none  bind                                             0  0
/dev/vg-redstone/home        /home         ext4  auto,user,exec,nodev,nosuid,noatime,barrier=0    1  2
/dev/vg-redstone/storage     /srv/storage  ext4  auto,user,noexec,nodev,nosuid,noatime,barrier=0  1  2

LABEL=seagate                /srv/seagate  ext4  auto,user,noexec,nodev,nosuid,noatime            1  2

 - /etc/lvm/lvm.conf

/etc/lvm/lvm.conf is unchanged from the package default of lvm2 
2.02.122-2, but I've attached it (gzipped) nevertheless.

$ md5sum -b /etc/lvm/lvm.conf 
de7411c6a935b065dd7dcde6208c364f */etc/lvm/lvm.conf

 - pvs, vgs, lvs

# pvs
  PV VG  Fmt  Attr PSize   PFree  
  /dev/sdb2  vg-redstone lvm2 a--  800,00g 446,00g

# vgs
  VG  #PV #LV #SN Attr   VSize   VFree  
  vg-redstone   1   5   0 wz--n- 800,00g 446,00g

# lvs
  LV       VG          Attr   LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  debian64 vg-redstone -wi-ao  10,00g
  home     vg-redstone -wi-ao  30,00g
  storage  vg-redstone -wi-ao 300,00g
  swap     vg-redstone -wi-ao   4,00g
  var      vg-redstone -wi-ao  10,00g

 - lsblk

# lsblk
NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda 8:00   2,7T  0 disk 
├─sda1  8:10   299M  0 part 
└─sda2  8:20   2,7T  0 part /srv/seagate
sdb 8:16   0 931,5G  0 disk 
├─sdb1  8:17   0   300M  0 part /boot/efi
└─sdb2  8:18   0   800G  0 part 
  ├─vg--redstone-debian64 254:0010G  0 lvm  /
  ├─vg--redstone-var  254:1010G  0 lvm  /var
  ├─vg--redstone-home 254:2030G  0 lvm  /home
  ├─vg--redstone-swap 254:30 4G  0 lvm  [SWAP]
  └─vg--redstone-storage  254:40   300G  0 lvm  /srv/storage

 - systemctl status in broken state

Taken, and stored in a temporary file, from within the emergency shell.

$ zcat systemctl-status.log.gz
● redstone
State: maintenance
 Jobs: 0 queued
   Failed: 1 units
Since: Mo 2015-07-20 00:04:38 CEST; 2min 18s ago
   CGroup: /
   ├─1 /sbin/init
   └─system.slice
 ├─lvm2-lvmetad.service
 │ └─200 /sbin/lvmetad -f
 ├─emergency.service
 │ ├─573 /bin/sh -c /sbin/sulogin; /bin/systemctl --job-mode=fail 
--no-block default
 │ ├─576 bash
 │ └─588 systemctl status
 ├─systemd-journald.service
 │ └─199 /lib/systemd/systemd-journald
 ├─systemd-networkd.service
 │ └─369 /lib/systemd/systemd-networkd
 └─systemd-udevd.service
   └─203 /lib/systemd/systemd-udevd

  Kernel: Linux 4.1.0-1.slh.3-aptosid-amd64 (SMP w/2 CPU cores; PREEMPT)
 
 Ah, this is no Debian system at all.

It is a Debian system; I'm just usually running the kernel [1] I'm
developing and working on - but even though I picked the wrong one
when submitting the bug, the problem can be reproduced easily
using Debian's linux-image-4.0.0-2-amd64:

All logs above have been gathered using:

$ dpkg -l | grep -e linux-image-amd64 -e linux-image-4.0.0-2-amd64 -e 
2.02.122-2 -e 2:1.02.99-2
ii  dmeventd  2:1.02.99-2   
  amd64Linux Kernel Device Mapper event daemon
ii  dmsetup   2:1.02.99-2   
  

Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-19 Thread Ben Caradoc-Davies
Booting succeeds for me with / and /home on separate LVs in a single 
crypto-luks PV (see lsblk below) with lvm2 2.02.122-2 amd64. However, 
after updating to the latest lvm2, pvscan, pvs, vgs, and lvs all 
hang indefinitely until I manually run pvscan --cache. They worked 
fine with 2.02.111-2.2, probably because lvmetad was not enabled.


- lvm.conf is unmodified
- lvm2-monitor.service, avahi-daemon.service, and avahi-daemon.socket 
are masked

- Fully dist-upgraded sid/amd64

Nothing helpful in the logs (journalctl or /var/log).

Before pvscan --cache:

output of strace pvs while hung:
[...]
write(3, "request=\"pv_list\"\ntoken = \"filter"..., 36) = 36
write(3, "\n##\n", 4)                   = 4
read(3, "response = \"token_mismatch\"\nexpe"..., 32) = 32
read(3, "cted = \"update in progress\"\nrece"..., 1024) = 147
[...]

output of strace vgs while hung (lvs is similar):
[...]
write(3, "request=\"vg_list\"\ntoken = \"filter"..., 36) = 36
write(3, "\n##\n", 4)                   = 4
read(3, "response = \"token_mismatch\"\nexpe"..., 32) = 32
read(3, "cted = \"update in progress\"\nrece"..., 1024) = 147
[...]

Output of strace -e read=3 pvs while hung:
[...]
write(3, "request=\"pv_list\"\ntoken = \"filter"..., 36) = 36
write(3, "\n##\n", 4)                   = 4
read(3, "response = \"token_mismatch\"\nexpe"..., 32) = 32
 | 00000  72 65 73 70 6f 6e 73 65  20 3d 20 22 74 6f 6b 65  response = "toke |
 | 00010  6e 5f 6d 69 73 6d 61 74  63 68 22 0a 65 78 70 65  n_mismatch".expe |

read(3, "cted = \"update in progress\"\nrece"..., 1024) = 147
 | 00000  63 74 65 64 20 3d 20 22  75 70 64 61 74 65 20 69  cted = "update i |
 | 00010  6e 20 70 72 6f 67 72 65  73 73 22 0a 72 65 63 65  n progress".rece |
 | 00020  69 76 65 64 20 3d 20 22  66 69 6c 74 65 72 3a 30  ived = "filter:0 |
 | 00030  22 0a 72 65 61 73 6f 6e  20 3d 20 22 6c 76 6d 65  ".reason = "lvme |
 | 00040  74 61 64 20 63 61 63 68  65 20 69 73 20 69 6e 76  tad cache is inv |
 | 00050  61 6c 69 64 20 64 75 65  20 74 6f 20 61 20 67 6c  alid due to a gl |
 | 00060  6f 62 61 6c 5f 66 69 6c  74 65 72 20 63 68 61 6e  obal_filter chan |
 | 00070  67 65 20 6f 72 20 64 75  65 20 74 6f 20 61 20 72  ge or due to a r |
 | 00080  75 6e 6e 69 6e 67 20 72  65 73 63 61 6e 22 0a 0a  unning rescan".. |
 | 00090  23 23 0a                                           ##.              |

[...]

For your convenience, the message is:

response = "token_mismatch"
expected = "update in progress"
received = "filter:0"
reason = "lvmetad cache is invalid due to a global_filter change or due to a running rescan"
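
For what it's worth, this state can probably also be probed without
strace by talking to lvmetad directly over its control socket; this is a
sketch in which the socket path and the request framing are assumptions
pieced together from the strace output above:

```shell
# Ask lvmetad for its PV list with a deliberately stale token; a daemon
# mid-rescan should answer with the token_mismatch reply seen above.
sock=/run/lvm/lvmetad.socket
if [ -S "$sock" ] && command -v socat >/dev/null 2>&1; then
  reply=$(printf 'request="pv_list"\ntoken = "filter:0"\n##\n' \
    | socat -t 2 - "UNIX-CONNECT:$sock")
else
  reply="skipped: no lvmetad socket or socat on this machine"
fi
echo "$reply"
```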


Then man lvmetad led me to pvscan --cache.

After pvscan --cache:

# pvs
  PV                     VG   Fmt  Attr PSize  PFree
  /dev/mapper/sda2_crypt vg   lvm2 a--  55.62g    0
# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg     1   2   0 wz--n- 55.62g    0
# lvs
  LV   VG   Attr   LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home vg   -wi-ao 36.99g
  root vg   -wi-ao 18.62g

Other system information (before pvscan --cache):

# systemctl status lvm2-lvmetad
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/lib/systemd/system/lvm2-lvmetad.service; disabled; 
vendor preset: enabled)

   Active: active (running) since Mon 2015-07-20 09:27:27 NZST; 38min ago
 Docs: man:lvmetad(8)
 Main PID: 508 (lvmetad)
   CGroup: /system.slice/lvm2-lvmetad.service
   └─508 /sbin/lvmetad -f
[...]

fstab excerpt:
/dev/mapper/vg-root / ext4 noatime,errors=remount-ro 0 1
/dev/sda1 /boot ext4 noatime,errors=remount-ro 0 2
/dev/mapper/vg-home /home ext4 noatime,errors=remount-ro 0 2

# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda  8:00 55.9G  0 disk
├─sda1   8:10  285M  0 part  /boot
└─sda2   8:20 55.6G  0 part
  └─sda2_crypt 254:00 55.6G  0 crypt
├─vg-root  254:10 18.6G  0 lvm   /
└─vg-home  254:20   37G  0 lvm   /home

packages:
dmeventd 2:1.02.99-2 amd64
dmsetup 2:1.02.99-2 amd64
libdevmapper-event1.02.1:amd64 2:1.02.99-2 amd64
libdevmapper1.02.1:amd64 2:1.02.99-2 amd64
liblvm2app2.2:amd64 2.02.122-2 amd64
liblvm2cmd2.02:amd64 2.02.122-2 amd64
lvm2 2.02.122-2 amd64

kernel:
Linux ripley 4.0.0-2-amd64 #1 SMP Debian 4.0.8-1 (2015-07-11) x86_64 
GNU/Linux


Kind regards,

--
Ben Caradoc-Davies b...@transient.nz
Director
Transient Software Limited http://transient.nz/
New Zealand





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-19 Thread Ben Caradoc-Davies
On Mon, 20 Jul 2015 01:16:12 +0200 Stefan Lippers-Hollmann
s@gmx.de wrote:
 Interesting enough, systems using a SSD for the system
 mountpoints usually succeed booting most of the time

Thanks for this observation, Stefan. My successful boots are indeed on a 
system using an SSD. I have not yet had a failed boot with lvm2 2.02.122-2.


Kind regards,

--
Ben Caradoc-Davies b...@transient.nz
Director
Transient Software Limited http://transient.nz/
New Zealand





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-19 Thread Bastian Blank
On Thu, Jul 09, 2015 at 05:16:57AM +0200, Stefan Lippers-Hollmann wrote:
 Upgrading src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting due to
 a new systemd unit dependency failures regarding lvmetad when mounting 
 non-rootfs logical volumes. Jumping to the emergency shell and invoking
 vgchange -ay and mount -a allows booting to finish.

Please provide all information from the system regarding the storage.
This includes:
- /etc/fstab
- /etc/lvm/lvm.conf
- pvs, vgs, lvs
- lsblk
- systemctl status in broken state

 Kernel: Linux 4.1.0-1.slh.3-aptosid-amd64 (SMP w/2 CPU cores; PREEMPT)

Ah, this is no Debian system at all.

Bastian

-- 
Kirk to Enterprise -- beam down yeoman Rand and a six-pack.





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-09 Thread Marcelo Santana
Package: lvm2
Version: 2.02.122-1

Hi there,

I've got the same bug and I've had to downgrade the lvm2 packages too.

Note: My partitions are encrypted.

Regards,
Marcelo

--- System information. ---
Architecture: amd64
Kernel:   Linux 4.0.0-2-amd64

Debian Release: stretch/sid
   40 experimental  ftp.br.debian.org
  100 unstable      ftp.br.debian.org

--- Package information. ---
Depends  (Version) | Installed
==-+-==
libc6(= 2.15) | 
libdevmapper-event1.02.1(= 2:1.02.74) | 
libdevmapper1.02.1  (= 2:1.02.85) | 
libreadline5  (= 5.2) | 
libudev1  (= 183) | 
init-system-helpers (= 1.18~) | 
lsb-base   | 
dmsetup ( 2:1.02.47) | 
initscripts  (= 2.88dsf-13.3) | 


Package's Recommends field is empty.

Suggests (Version) | Installed
==-+-===
thin-provisioning-tools| 





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-08 Thread Stefan Lippers-Hollmann
Package: lvm2
Version: 2.02.122-1
Severity: serious

Hi

Upgrading src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting due to
new systemd unit dependency failures regarding lvmetad when mounting
non-rootfs logical volumes. Jumping to the emergency shell and invoking
vgchange -ay and mount -a allows booting to finish.

Reverting all src:lvm2 packages to 2.02.111-2.2 from testing avoids the 
problem, re-upgrading results in the same error condition again. I can
reproduce the problem on several distinct systems, which share a similar
fstab structure for the basic system paths; in all cases all fstab 
devices exist and can be auto-mounted with src:lvm2 2.02.111-2.2. I have
attached the logs for journalctl -xb (gzipped) and the fstab.

Regards
Stefan Lippers-Hollmann

-- System Information:
Debian Release: stretch/sid
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: amd64 (x86_64)

Kernel: Linux 4.1.0-1.slh.3-aptosid-amd64 (SMP w/2 CPU cores; PREEMPT)
Locale: LANG=de_DE.UTF-8, LC_CTYPE=de_DE.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages lvm2 depends on:
ii  dmeventd  2:1.02.99-1
ii  dmsetup   2:1.02.99-1
ii  init-system-helpers   1.23
ii  initscripts   2.88dsf-59.2
ii  libc6 2.19-18
ii  libdevmapper-event1.02.1  2:1.02.99-1
ii  libdevmapper1.02.12:1.02.99-1
ii  libreadline5  5.2+dfsg-3
ii  libudev1  222-1
ii  lsb-base  4.1+Debian13+nmu1

lvm2 recommends no packages.

Versions of packages lvm2 suggests:
pn  thin-provisioning-tools  none

-- no debconf information


fstab
Description: Binary data


journalctl-xb.log.gz
Description: application/gzip


pgpAP13UAUDAP.pgp
Description: Digitale Signatur von OpenPGP