A similar problem hit me when I installed Ubuntu Server 10.10 and wanted to
confirm that each of the two disks was capable of booting alone, in order to
simulate a disk failure "2 years from now" (previous installations always had
problems with the MBR only being written to /dev/sda).
I do not use LVM on the boot device, so that differs from the original
bug report.
To confirm the bug, I also reproduced it on two different computers
with different hardware architectures.
During the installation of Ubuntu Server 10.10, I used the installer
partitioner to create the following setup:
md0 = /dev/sda1 , /dev/sdb1
md1 = /dev/sda2 , /dev/sdb2
md2 = /dev/sda3 , /dev/sdb3
cryptsetup with LUKS, like this:
md0 => md0_crypt
md1 => md1_crypt
fstab:
md0_crypt => / (ext4)
md1_crypt => swap
md2 => /boot (ext4)
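For reference, a roughly equivalent layout could be built by hand with commands
along these lines (a sketch only; the real setup was done entirely in the
installer, and the device names, metadata versions and filesystem options here
are assumptions):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
cryptsetup luksFormat /dev/md0 && cryptsetup luksOpen /dev/md0 md0_crypt
cryptsetup luksFormat /dev/md1 && cryptsetup luksOpen /dev/md1 md1_crypt
mkfs.ext4 /dev/mapper/md0_crypt    # becomes /
mkswap /dev/mapper/md1_crypt       # becomes swap
mkfs.ext4 /dev/md2                 # becomes /boot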
After installation, if I power off and physically remove EITHER of the two
disks (/dev/sda or /dev/sdb), the bootup fails with very cryptic error
messages in the boot text.
With classic "printf debugging" (echo statements in the initramfs shell
scripts), I concluded that the bug occurs when the RAID1 arrays are assembled:
assembly fails because the arrays are degraded, EVEN THOUGH I selected the
option to boot even when degraded during the installation.
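From the shell that the failed boot drops into, the degraded state can be
confirmed with something like the following (illustrative only; the exact
device names depend on the setup):

cat /proc/mdstat                # arrays listed as inactive or missing a member
mdadm --detail /dev/md0         # reports "degraded" once the array is running
mdadm --assemble --scan --run   # --run starts the arrays even though degraded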
By modifying the initrd image with the following "ugly workaround", I was able
to get past the boot failure:
mkdir /root/initrd-temp
cd /root/initrd-temp/
# keep a backup of the original initrd, then unpack a working copy
cp /boot/initrd.img-2.6.35-28-generic /boot/initrd.img-2.6.35-28-generic.orig
cp /boot/initrd.img-2.6.35-28-generic .
gzip -d < initrd.img-2.6.35-28-generic | cpio --extract --verbose --make-directories --no-absolute-filenames
rm initrd.img-2.6.35-28-generic
################################
vi scripts/init-premount/mdadm
# added this line to the end of the script, just before the exit 0 line:
mdadm --assemble --scan --run
################################
# repack the modified tree and install it as the new initrd
find . | cpio -H newc --create --verbose | gzip -9 > initrd.img-2.6.35-28-generic
mv initrd.img-2.6.35-28-generic /boot/
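Note that a hand-repacked initrd gets overwritten the next time
update-initramfs runs (e.g. on a kernel update). A slightly more durable
variant of the same workaround, still unofficial, is to drop a small local
script into /etc/initramfs-tools/scripts/init-premount/ and let
update-initramfs include it. The file name below is just an example:

cat > /etc/initramfs-tools/scripts/init-premount/force-degraded-raid <<'EOF'
#!/bin/sh
# Example local initramfs script: retry RAID assembly and start degraded
# arrays so the boot can continue with one disk missing.
PREREQ="mdadm"
prereqs() { echo "$PREREQ"; }
case "$1" in
prereqs)
	prereqs
	exit 0
	;;
esac
mdadm --assemble --scan --run || true
exit 0
EOF
chmod +x /etc/initramfs-tools/scripts/init-premount/force-degraded-raid
update-initramfs -u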
During the debugging I also found out that no arguments are passed to the
mdadm script ($1 has no value), so the "mountfail" case at the bottom of the
script can never be triggered:
case $1 in
# get pre-requisites
prereqs)
	prereqs
	exit 0
	;;
mountfail)
	mountroot_fail
	exit 0
	;;
esac
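To see for yourself that $1 is empty, a single debug line near the top of the
script is enough (plain printf debugging; writing to /dev/console shows up on
the screen during the initramfs stage):

echo "init-premount/mdadm called with arguments: '$*'" > /dev/console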
Thus the code in the mountroot_fail function is never executed, regardless of
whether boot-degraded was set to true during the installation:
if [ "$BOOT_DEGRADED" = "true" ]; then
	echo "Attempting to start the RAID in degraded mode..."
	if mdadm --assemble --scan --run; then
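For completeness: the installer's boot-degraded answer does appear to be
recorded for the initramfs (at least on my 10.10 systems; worth double-checking
the path on other releases), so the value is available. The script simply never
reaches the code that reads it:

cat /etc/initramfs-tools/conf.d/mdadm
# BOOT_DEGRADED=true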
Bug: https://bugs.launchpad.net/bugs/659899
Title: Degraded boot fails when using encrypted raid1 with lvm