I ran into similar issues, with the added 'bonus' of the RAID5 giving
poor performance (running in degraded mode) and offering no protection
against disk failure (again, because of the degraded mode).

When installing 14.04 LTS Server on an HP Z620 workstation (Intel C602
chipset), the installer detects the RAID5 array (3 disks, freshly
created in the Intel Matrix firmware), assembles it with mdadm and
starts syncing the disks (since this isn't done when creating the array
in the firmware and is left to the operating system).
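While that initial resync is running, its progress shows up in /proc/mdstat. A minimal sketch of extracting the progress figure; the sample text below is an assumption (on a real system you would read /proc/mdstat directly, where IMSM member arrays typically appear as md126 inside an md127 container):

```shell
#!/bin/sh
# Sketch: pull the resync percentage out of /proc/mdstat-style output.
# Sample text is embedded so the snippet is self-contained; replace the
# variable with "$(cat /proc/mdstat)" on a live system.
mdstat='md126 : active raid5 sdc[2] sdb[1] sda[0]
      976767488 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [==>..................]  resync = 12.4% (121212/976767488) finish=80.0min speed=100000K/sec'

# Keep only the line containing "resync = NN.N%" and strip everything
# but the number itself.
progress=$(printf '%s\n' "$mdstat" | sed -n 's/.*resync = \([0-9.]*\)%.*/\1/p')
echo "resync progress: ${progress}%"
```

If the resync has completed (or was never started), the `resync =` line is absent and `progress` comes back empty, which is itself a useful check.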

When the installation finishes and the machine reboots, syncing has not
yet completed; mdadm would resume it after the reboot (if mdadm were
used). Instead, because nomdmonddf and nomdmonisw are set in the
default GRUB options, dmraid gets used instead of mdadm, and the
syncing does not appear to resume. 'dmraid -s' shows status ok for the
array (even though it has never completely synced). If I then shut the
system down and unplug a disk, the Intel firmware shows Failed instead
of Degraded (which it should show if the disks were synced and the
parity complete), and the array is no longer bootable.

My conclusion is that the sync was never completed. I have tested a
similar scenario using mdadm on CentOS 7 and the array did go into
degraded mode and was still bootable when one disk was removed.

I'll try the suggestion in post #3 and see if my array then properly
resyncs and can tolerate losing a single disk (in a 3-disk array).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1318351

Title:
  mdadm doesn't assemble imsm raids during normal boot

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1318351/+subscriptions
