Hey everyone, I was having this exact same problem.

At some point an older array had been defined, and md superblocks were
left behind on one (or more) of the partitions.  Even after I zeroed
them out (using sudo mdadm --zero-superblock /dev/sda1), the superblock
came back after a reboot!  It was very strange.  Regardless of whether
the partitions were of type "linux" or "linux raid autodetect", I was
getting the errors and problems people have already described above.
However, if the partitions were CLEARED completely, the repeating error
did not occur.
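
For anyone else hitting this, the sequence of commands I mean is roughly
the following (the device names /dev/md0 and /dev/sda1 are only examples
from my setup; substitute your own):

  # check whether a partition still carries an md superblock
  sudo mdadm --examine /dev/sda1

  # stop the half-assembled array first, if one exists
  sudo mdadm --stop /dev/md0

  # then wipe the superblock from the member partition
  sudo mdadm --zero-superblock /dev/sda1

As described above, on my machine the superblock kept coming back after
a reboot until I also made the udev change below.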

SOLUTION FOR ME:
In the file /etc/udev/rules.d/85-mdadm.rules, comment out the only line
(it is a single line in the file; it is wrapped here to fit):

  SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*",
  RUN+="watershed -i udev-mdadm /sbin/mdadm -As"
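
One way to comment it out from a shell (just a convenience; editing the
file by hand does exactly the same thing, and the sed pattern assumes
the line begins with SUBSYSTEM as shown above):

  sudo sed -i 's/^SUBSYSTEM/#SUBSYSTEM/' /etc/udev/rules.d/85-mdadm.rules

Then reboot to confirm the repeating error is gone.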

It looks like, for some weird reason, udev is trying to be helpful: the
rule fires for every block device that carries an md superblock and
tries to assemble whatever array it might belong to.  That seems
redundant, because mdadm's own startup already does this.
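
For reference, the "-As" in that rule is short for --assemble --scan,
i.e. assemble whatever arrays mdadm can find based on
/etc/mdadm/mdadm.conf.  You can check which udev rules files invoke
mdadm on your system with something like (paths may differ by release):

  grep -rl mdadm /etc/udev/rules.d /lib/udev/rules.d 2>/dev/null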

HENCE:  It makes sense that we're getting the error above: "md: array
md0 already has disks!" (although I'm not sure why it repeats over and
over again).
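
If anyone wants to confirm they are seeing the same thing, the repeating
message should show up in the kernel log after a reboot, e.g.:

  dmesg | grep "already has disks"

(or look in /var/log/kern.log).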

Anyway, this is what worked for me (thanks to RaceTM for the help!).
Cheers - hopefully these details will help the developers fix this bug.

-- 
mdadm software raid fails to start up on reboot
https://bugs.launchpad.net/bugs/188392