Sorry for the bump, but this may be something to keep in mind for this issue. Some reading up on the internet has put me on a different track regarding the loss of the RAID arrays in my old setup:
It seems that starting with kernels newer than 2.6.24 there may have been a change in the timers mdadm uses to declare a disk dead or alive. This is possibly related to ERC/CCTL/TLER settings: the time a drive is allowed to spend recovering from a read error (e.g. a bad sector). Normal consumer disks have ERC/CCTL/TLER disabled by default (or lack the feature entirely), so they can take up to a minute before they respond to the OS again. With ERC/CCTL/TLER enabled, a disk responds to the OS/RAID driver again after a specified amount of time (e.g. 7 seconds); the RAID driver then sees the drive is still alive and corrects the data using the other drives in the array. Could it be that mdadm's disk timers have changed, causing newer kernels to drop whole groups of disks from the array?
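
For anyone who wants to check whether this mismatch applies to their own disks, here is a rough Python 3 sketch (mine, not from the bug report) that prints the kernel's per-device command timeout next to each drive's SCT ERC state as reported by smartctl. It assumes smartmontools is installed, the array members are plain SATA disks visible as /dev/sdX, and it is run as root; adjust as needed.

import glob
import subprocess

def kernel_timeout(dev):
    # Kernel SCSI command timer in seconds: after this long without a
    # response the kernel's error handler kicks in and the drive can
    # end up being dropped from the md array.
    with open("/sys/block/%s/device/timeout" % dev) as f:
        return int(f.read().strip())

def scterc_report(dev):
    # Raw "smartctl -l scterc" output; "Disabled" means the drive may
    # keep retrying a bad sector far longer than the kernel timer allows.
    result = subprocess.run(["smartctl", "-l", "scterc", "/dev/" + dev],
                            capture_output=True, text=True)
    return result.stdout.strip()

for path in sorted(glob.glob("/sys/block/sd?")):
    dev = path.rsplit("/", 1)[-1]
    print("%s: kernel command timeout = %d s" % (dev, kernel_timeout(dev)))
    print(scterc_report(dev))
    print()

If smartctl reports ERC as disabled and the kernel timer is still the default 30 seconds, a single bad sector can stall the drive past the timer and md may then kick it. Either enabling ERC (e.g. smartctl -l scterc,70,70 /dev/sdX for 7-second read/write recovery) or raising /sys/block/sdX/device/timeout should close that gap; which is appropriate depends on the drive.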
