I have a CentOS 7.9 system with a software RAID 6 root partition. Today something very strange occurred. At 6:45 AM the system crashed. I rebooted, and when the system came up I had multiple emails indicating that 3 out of 6 drives in the root array had failed. Strangely, I was able to boot into the system and everything was working correctly despite

> cat /proc/mdstat

also indicating that 3 out of 6 drives had failed. Since the system was up and running despite the fact that more than two drives had supposedly failed in the root RAID array, I decided to reboot the system. Actually, I shut it down, waited for the drives to spin down, and then restarted. This time when it came back, the 3 missing drives were back in the array, and a cat /proc/mdstat showed all 6 drives in the RAID 6 array again. So a few questions:
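(For what it's worth, the only place I know to look for evidence of what actually happened is the kernel log from around the crash; on CentOS 7 the md messages land in /var/log/messages via rsyslog. Something like the following should show whether md really kicked the drives or whether they just vanished, e.g., after a controller reset. The grep pattern is just my guess at the relevant messages:)

    # md and disk-failure kernel messages from around the 6:45 AM crash
    grep -Ei 'md/raid|md:|kicking|fail' /var/log/messages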

1.) If 3 out of 6 drives of a RAID 6 array supposedly fail, how does the array still function?
2.) Why would a shutdown/restart sequence supposedly fix the array?
3.) My gut suggests that the RAID array was never degraded and that my system (i.e., the cat /proc/mdstat output) was lying to me. Any opinions?

Has anybody else ever seen such strange behavior?
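On question 3, here is roughly how I plan to cross-check the kernel's view next time, assuming the root array is /dev/md0 and the members are /dev/sda1 through /dev/sdf1 (both names are placeholders for whatever the layout actually is):

    # Live kernel view of all md arrays
    cat /proc/mdstat

    # Array-level state and per-member roles (md0 assumed)
    mdadm --detail /dev/md0

    # Per-member superblocks: the Events counters should agree;
    # a member that genuinely dropped out of the array falls behind
    for d in /dev/sd[a-f]1; do
        mdadm --examine "$d" | grep -E 'Events|State'
    done

If all six members show the same event count, the array presumably never actually ran degraded; a member that truly dropped out would lag behind the others.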
--
Paul (ga...@nurdog.com)
Cell: (303)257-5208
