I have (had?) a 4-disk RAID5 array (/dev/md1), consisting of:
/dev/hda1
/dev/hdc1
/dev/hde1
/dev/hdg1
After some months of uptime, I had to reboot the system for an
unrelated issue -- and when it came back up, the array was running
in degraded mode. After investigating further, I found
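(The commands below are generic and only illustrative; they show how the
state of a degraded array and the dropped member are usually inspected.)

cat /proc/mdstat
mdadm --detail /dev/md1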
Neil, thanks very much. I found the answer to my problem in the
archives of this list, after no small amount of searching.
What I did was recreate the array, rewriting all of the superblocks
and leaving out the drive that wasn't fully reconstructed:
mdadm --create /dev/md0 -c32 -l5 -n4
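As a sketch only (the member device names here are illustrative, and the
keyword "missing" stands in for the drive that was left out), a full
invocation of that command looks something like:

mdadm --create /dev/md0 -c32 -l5 -n4 /dev/hda1 /dev/hdc1 /dev/hde1 missing

Created this way the array comes up degraded, so no resync touches the
data; the left-out drive can be re-added later with mdadm --add, which
rebuilds it from parity.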
Carlos Carvalho wrote:
I think demand for a solution to the unclean-array problem is indeed low
because of the small probability of a double failure. Those who want
more reliability can use a spare drive that resyncs automatically, or
raid6 (or both).
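For illustration only (device and array names made up): a disk added to
an array that already has its full set of active members becomes a hot
spare, and md rebuilds onto it automatically if a member fails:

mdadm /dev/md0 --add /dev/hdi1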
A spare disk would help, but note that
On Saturday November 19, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
The other is to use a filesystem that allows the problem to be avoided
by making sure that the only blocks that can be corrupted are dead
blocks.
This could be done with a copy-on-write filesystem that knows about the
Hi all,
I finally solved my problem by booting with Knoppix and recreating the
RAID array from scratch.
From memory, the process went something like this:
boot with Knoppix as single user
mdadm --zero-superblock /dev/hdg2
mdadm --zero-superblock /dev/hdg
mdadm --zero-superblock /dev/hdc2
mdadm
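The remaining steps below are only a sketch, assuming (purely for
illustration) a three-disk array on /dev/hdc2, /dev/hde2 and /dev/hdg2
assembled as /dev/md0 with the default chunk size:

mdadm --zero-superblock /dev/hde2
mdadm --create /dev/md0 -l5 -n3 --assume-clean /dev/hdc2 /dev/hde2 /dev/hdg2
fsck -n /dev/md0    # read-only filesystem check before trusting the array
mount /dev/md0 /mnt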