Public bug reported:

I have two LVM logical volumes (/dev/mapper/raid-btrfs and /dev/mapper
/fast-btrfs) in two different volume groups. I have created a btrfs
(raid1) filesystem on top of them, and that is my root filesystem.
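
For reference, a filesystem like this is created roughly as follows (the
VG/LV names are inferred from the /dev/mapper paths above; the sizes are
placeholders):

  lvcreate -L 100G -n btrfs raid
  lvcreate -L 100G -n btrfs fast
  mkfs.btrfs -d raid1 -m raid1 /dev/mapper/raid-btrfs /dev/mapper/fast-btrfs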

If I specify it by UUID in the root= kernel argument, I just hit bug
#1574333. Forcing my root to "/dev/mapper/fast-btrfs" by defining
GRUB_DEVICE in /etc/default/grub works around that bug.

The problem now is that the initrd activates only the device given as
the root= argument, leaving the other one inactive; consequently the
btrfs mount fails to find its second device and the system fails to
boot, giving up at the initramfs prompt.
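
For the record, the boot can also be continued by hand from that prompt;
assuming the still-inactive volume group is the one named "raid"
(inferred from the device path), something like:

  (initramfs) lvm vgchange -ay raid
  (initramfs) exit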

Manually adding a line at the bottom of
/usr/share/initramfs-tools/scripts/local-top/lvm2 to also activate the
second device, then rebuilding the initramfs, works around this issue
too, but I suppose my modifications will be wiped out by the next
package upgrade.

Here is the resulting tail of the script:
> activate "$ROOT"
> activate "$resume"
> activate "/dev/mapper/raid-btrfs"

Proposed solution:
I understand this is an uncommon setup and that correctly handling
multi-device LVM roots is complicated; please just add a configuration
option to manually define/append the list of volume groups to be
activated at initrd time.
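
To illustrate, a minimal sketch of what such an option could look like,
assuming a hypothetical EXTRA_LVM_DEVICES variable (the name and the
hook are invented here, not existing lvm2/initramfs-tools behaviour):

  # /etc/initramfs-tools/conf.d/lvm2-extra (hypothetical)
  EXTRA_LVM_DEVICES="/dev/mapper/raid-btrfs"

  # ...and near the bottom of scripts/local-top/lvm2:
  for dev in $EXTRA_LVM_DEVICES; do   # hypothetical hook
          activate "$dev"
  done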

** Affects: lvm2 (Ubuntu)
     Importance: Undecided
         Status: New

https://bugs.launchpad.net/bugs/1848180

Title:
  LVM initrd fails to activate btrfs multidevice root
