I've also observed this problem in gutsy: LVM on md (raid1) fails to
boot, and an "lvm vgchange -ay" at the (initramfs) prompt is sufficient
to get it working.
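
For reference, the manual workaround from the initramfs shell is roughly
this (nothing here is specific to my setup beyond the use of LVM on md):

    # at the (initramfs) prompt, activate all volume groups by hand,
    # then let the boot continue
    lvm vgchange -ay
    exit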

My investigations show two parts to the problem.  First, udev's
65-persistent-storage.rules file explicitly excludes md* devices from
analysis.  That means vol_id is never invoked on the md devices once
mdadm assembles them, so ID_FS_TYPE is never set, and the
ID_FS_TYPE=="LVM*" rule in 85-lvm2.rules, which would otherwise activate
the volume groups, never fires.
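
To make the mechanism concrete, the two rules interact roughly as shown
below.  This is paraphrased from memory rather than quoted verbatim, so
the exact blacklist and RUN command in the gutsy packages may differ
slightly:

    # 65-persistent-storage.rules: md* is in the list of devices that
    # skip the whole import/vol_id section, so ID_FS_TYPE is never set
    KERNEL=="fd*|mtd*|nbd*|dm-*|md*", GOTO="persistent_storage_end"

    # 85-lvm2.rules: only activates volume groups if vol_id had already
    # set ID_FS_TYPE on the device
    SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*", \
        RUN+="/sbin/lvm vgchange -a y"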

However, allowing md* devices to be scanned by
65-persistent-storage.rules is not sufficient to fix the problem.  There
appears to be a race: at the time udev receives the 'add' event for the
md device, the array has not yet been set up far enough to be read
properly.  The vol_id invocation therefore fails, and once again
ID_FS_TYPE is never set.  I've attached a dump of the udevd --verbose
output from the initramfs context, which shows vol_id failing on the
newly discovered md0 device (line 32567).
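
A simple way to see that the failure is one of timing rather than of
content (a sketch of what I did, with /dev/md0 as the example device;
vol_id may live in /lib/udev/ rather than on PATH): running vol_id by
hand once the array is fully assembled succeeds, even though the same
call failed while the 'add' event was being processed:

    # from the (initramfs) shell, after mdadm has finished assembly
    mdadm --detail /dev/md0    # confirm the array is active
    vol_id /dev/md0            # now reports ID_FS_TYPE (e.g. LVM2_member)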

** Attachment added: "udevd --verbose output"
   http://launchpadlibrarian.net/7931107/udev.log

-- 
Root fs on LVM fails to boot
https://bugs.launchpad.net/bugs/87745