I have a similar problem, but I suspect the issue in my case must be
down to either code in the kernel or options unique to my Ubuntu/kernel
config. /proc/version reports: 3.2.0-30-generic.

In my case, I have 5 disks in the system: 4 are on a backplane
connected directly to the motherboard, and the 5th is connected where
the CD drive would be.

These come up on Linux as /dev/sd[a-d] for the motherboard-attached
disks and /dev/sde for the 5th disk.

I have installed the OS entirely on the 5th disk and configured
grub/fstab to identify all partitions by UUID. fstab does not reference
any of the disks in /dev/sd[a-d].
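For reference, the root entry in my fstab looks roughly like this (the
UUID below is a placeholder, not my real one):

    # /etc/fstab -- root filesystem on the 5th disk, found by UUID,
    # so the kernel's sd[a-e] naming order doesn't matter
    UUID=0a1b2c3d-1111-2222-3333-444455556666  /  ext4  errors=remount-ro  0  1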

The intention is to set up software RAID on the sd[a-d] disks,
presented as /dev/md0.

I created a RAID5 array with 3 disks plus a spare, and one of the disks
died. So I have a legitimately degraded array, which the OS should not
need in order to boot.
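Roughly what I did, from memory (device names are from my box; the
exact command line may have differed slightly):

    # create RAID5 over three disks with one hot spare
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
         --spare-devices=1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # confirm the array state; a degraded 3-device RAID5 shows [UU_]
    cat /proc/mdstat
    sudo mdadm --detail /dev/md0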

However, it won't boot with or without 'bootdegraded=true' on the
kernel command line.
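(For what it's worth, I set the option via /etc/default/grub and re-ran
update-grub, roughly like this; the other flags are just the Ubuntu
defaults:)

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash bootdegraded=true"

    # then regenerate grub.cfg
    sudo update-grub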

I'm not sure editing the mdadm functions will help, as I really don't
want any md functions to run at initramfs time. They can all wait
until after the system has booted.

Any thoughts on how I can turn mdadm off completely in the initramfs?
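The closest thing I've found so far, and I'd welcome corrections, is
overriding the packaged hook: initramfs-tools lets files under
/etc/initramfs-tools take precedence over same-named ones under
/usr/share/initramfs-tools, so a no-op /etc/initramfs-tools/hooks/mdadm
should keep mdadm out of the image. This is an untested sketch:

    #!/bin/sh
    # /etc/initramfs-tools/hooks/mdadm
    # No-op override of /usr/share/initramfs-tools/hooks/mdadm, so the
    # mdadm binary and config never get copied into the initramfs.
    PREREQ=""
    prereqs() { echo "$PREREQ"; }
    case "$1" in
        prereqs) prereqs; exit 0 ;;
    esac
    exit 0

Make it executable and rebuild the initramfs afterwards:

    sudo chmod +x /etc/initramfs-tools/hooks/mdadm
    sudo update-initramfs -u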

