> Success - I managed to get a raid1 device operating.
> I created the final filesystem by using mkfs.xfs -f /dev/md0, then
> waited for the rebuild to complete before rebooting the system.
>
> It appears to be created successfully. Now I'll try the same sequence
> with sdb and sdc to see if sdc is a good disk. If that works, I'll
> retry a raid5 array tomorrow night.

Hmm - it seems to be a bug in RAID5 creation. I can successfully create a RAID1 array from either /dev/sdb1 and /dev/sdc1 or /dev/sdb1 and /dev/sdd1.
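For reference, the create-then-format sequence described above looks roughly like this. The device names (/dev/sdb1, /dev/sdc1, /dev/md0) are the ones from this thread; the commands are echoed rather than executed here so the sequence can be read safely, since the real ones are destructive and need root:

```shell
# Sketch of the RAID1 sequence from the post above. The run() wrapper
# only prints each command; drop the echo to actually execute them.
run() { echo "+ $*"; }

# Build a two-disk mirror from sdb1 and sdc1
run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Put the final filesystem on the array
run mkfs.xfs -f /dev/md0

# Check this repeatedly and wait for the resync to finish before rebooting
run cat /proc/mdstat
```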
If, however, I try to create a RAID5 array with all three elements, /dev/sdc reports a failure. cat /proc/mdstat shows the following:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sdd1[3](S) sdc1[1](F) sdb1[0]
      2930272256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/1] [U__]

unused devices: <none>

Has anyone else experienced similar problems? Is there an extra diagnostic procedure I can use to validate the sdc drive? Is there something extra I have to do when I go over the 2TB level which could explain this goofy behaviour?
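In that mdstat line, (F) marks a failed member and (S) a spare, so sdc1 has been kicked out of the array. A minimal sketch of picking the failed members out of such a status line (using the quoted line as sample input; on a live system you would read /proc/mdstat itself):

```shell
# Sample /proc/mdstat status line from the report above
mdstat='md0 : active raid5 sdd1[3](S) sdc1[1](F) sdb1[0]'

# Members tagged (F) are failed; strip the [n](F) suffix to get the name
failed=$(printf '%s\n' "$mdstat" | grep -o '[a-z0-9]*\[[0-9]*\](F)' | cut -d'[' -f1)
echo "failed members: $failed"
```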