Today a disk did not come up anymore in the morning, and I just noticed that
the bug is not fixed yet in mdadm-3.2.5-5ubuntu4.1.
In fact, it got worse: "bootdegraded=yes" is now the default (Ubuntu 14.04) and
cannot be disabled anymore, so the system stays in an endless loop of
"mdadm: CREATE group disk not found" messages. The only way to revive the
system was to boot a rescue system. As I didn't have a spare disk, I had to get
the system to boot in degraded mode.

Below are some diagnostics. Please note that I'm not familiar at all with how
the Ubuntu initramfs scripts are assembled from their pieces.

Diagnostic 1) In /usr/share/initramfs-tools/scripts/mdadm-functions

I disabled (commented out) the incremental if-branch
("if mdadm --incremental --run --scan; then") so that only the assemble mdadm
command ran. After re-creating the initramfs and rebooting, the "mdadm: CREATE
group disk not found" message was shown only *once*; the boot then complained
that it couldn't find the root partition and dropped to the busybox shell.
MUCH BETTER!
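
Roughly what I did there (reconstructed from memory; the stock function in
mdadm 3.2.5 may look slightly different):

mountroot_fail()
{
    # disabled: the incremental path is what ends in the endless loop
    #if mdadm --incremental --run --scan; then
    #    message "Incrementally started RAID arrays."
    #    return 0
    #fi
    if mdadm --assemble --scan --run; then
        message "Assembled and started RAID arrays."
        return 0
    else
        message "Could not start RAID arrays in degraded mode."
        return 1
    fi
}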
Investigating in the shell, I noticed that the md devices had been assembled
in degraded mode. Also, running "mdadm --assemble --scan --run" brought up the
same disk group message, so it seems to be a bug in mdadm that it prints this
message and returns an error code even though the arrays are assembled.
After running "vgchange -ay" I could leave the shell and the boot continued.

Diagnostic 2) I now changed several things as we needed this system to
boot up automatically

2.1) I made mountroot_fail *always* execute 'vgchange -ay':

mountroot_fail()
{
    mount_root_res=1
    message "Incrementally starting RAID arrays..."
    if mdadm --incremental --run --scan; then
        message "Incrementally started RAID arrays."
        mount_root_res=0
    else
        if mdadm --assemble --scan --run; then
            message "Assembled and started RAID arrays."
            mount_root_res=0
        else
            message "Could not start RAID arrays in degraded mode."
        fi
    fi

    # Note: anyone copying this should probably change it to
    # 'vgchange -ay || true'
    vgchange -ay
 
    return $mount_root_res
}
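
The unconditional vgchange is there because in my setup the root filesystem
lives on LVM on top of the md devices: even when the arrays come up
(degraded), the root device only appears after the volume group is activated,
so mounting root keeps failing without it.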


2.2) In /usr/share/initramfs-tools/scripts/init-premount/mdadm

The mountfail case now exits with 0, not with the exit code of
mountroot_fail:

case $1 in
# get pre-requisites
prereqs)
        prereqs
        exit 0
        ;;
mountfail)
        mountroot_fail
        exit 0
        ;;
esac

. /scripts/functions
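
For context, my guess at the mechanism (again, I don't really know the
initramfs internals): when mounting the root fails, the initramfs apparently
runs the init-premount scripts with the argument "mountfail", something along
these lines (hypothetical sketch, NOT the actual initramfs-tools code):

for script in /scripts/init-premount/*; do
    [ -x "$script" ] || continue
    # a hook exiting 0 signals "failure handled, try mounting root again"
    "$script" mountfail && break
done

With exit 0 hard-coded, the failure always counts as handled, which together
with the vgchange in mountroot_fail lets the boot proceed.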


I think that is all I changed, and the system now boots up in degraded mode
like a charm.
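
In case someone wants to try the same: after editing the scripts the initramfs
has to be rebuilt, and lsinitramfs can be used to check that the modified
scripts were actually picked up (standard initramfs-tools commands, nothing
mdadm-specific):

update-initramfs -u                                      # rebuild for the running kernel
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm    # confirm the mdadm scripts are inside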

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1077650

Title:
  booting from raid in degraded mode ends in endless loop


