I am working on building my home file server (as noted in a previous
post asking which IDE controller card to purchase), and I have run into
a minor stumbling block.
Hardware:
1 x IDE hard drive for OS
3 x 250GB IDE hard drives (RAID ARRAY #1 - level 5) hdc hde hdg
3 x 200GB IDE hard drives (RAID ARRAY #2 - level 0) hdd hdf hdh
for roughly 1.1 terabytes of usable disk space in total.
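(The 1.1 TB figure assumes the usual usable-capacity math: RAID 5 loses
one disk to parity, RAID 0 loses none.

RAID 5: (3 - 1) x 250 GB = 500 GB usable
RAID 0:  3      x 200 GB = 600 GB usable
Total:                    ~1.1 TB)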
I am able to create the level 0 array with no problems. I am even able
to create the level 5 array, which appears to work, until I attempt to
simulate a failure.
When I fail/remove hdc and then attempt to stop and restart the array, I
get the following error: "assembled from 1 drive and 1 spare - not
enough to start the array."
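For concreteness, the fail/remove test amounts to something like this
(reproduced from memory, so the exact invocations may not be verbatim):

[EMAIL PROTECTED] ~]# mdadm /dev/md3 --fail /dev/hdc1
[EMAIL PROTECTED] ~]# mdadm /dev/md3 --remove /dev/hdc1
[EMAIL PROTECTED] ~]# mdadm --stop /dev/md3
[EMAIL PROTECTED] ~]# mdadm --assemble /dev/md3 /dev/hde1 /dev/hdg1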
If I rebuild the array and fail/remove hdg instead, I am still able to
start/stop the array, which suggests to me that hdg is being set up as a
spare. When I created the array, I did not ask for a spare. If I am not
mistaken, RAID 5 stripes data and parity across all of the disks, so no
single disk should be less important than the others.
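If it helps with diagnosis, I assume each member's idea of its own role
can be read straight from its superblock with something like:

[EMAIL PROTECTED] ~]# mdadm --examine /dev/hdc1
[EMAIL PROTECTED] ~]# mdadm --examine /dev/hde1
[EMAIL PROTECTED] ~]# mdadm --examine /dev/hdg1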
The array was created using the following:
[EMAIL PROTECTED] ~]# mdadm --create /dev/md3 --verbose --level=raid5 \
    --raid-devices=3 /dev/hd[ceg]1
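I did not pass --spare-devices, which I assume defaults to 0; spelled
out explicitly, I believe the command above is equivalent to:

[EMAIL PROTECTED] ~]# mdadm --create /dev/md3 --verbose --level=raid5 \
    --raid-devices=3 --spare-devices=0 /dev/hd[ceg]1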
I did notice that when I create the array I get a few weird indications,
such as:
[EMAIL PROTECTED] ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md3 : active raid5 hdg1[3] hde1[1] hdc1[0]
240974720 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
The "UU_" would seem to me to indicate that one device is in failure
mode? When I check the details of the array, there is no indication
that there is a failed device:
[EMAIL PROTECTED] ~]# mdadm --detail /dev/md3
/dev/md3:
        Version : 00.90.03
  Creation Time : Fri Nov 24 12:38:19 2006
     Raid Level : raid5
     Array Size : 240974720 (229.81 GiB 246.76 GB)
    Device Size : 120487360 (114.91 GiB 123.38 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 3
    Persistence : Superblock is persistent
    Update Time : Fri Nov 24 12:38:19 2006
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 70b25481:ebbf8e5b:c8c3a366:13d2ddcd
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0      22        1        0      active sync   /dev/hdc1
       1      33        1        1      active sync   /dev/hde1
       0       0        0        0      removed
       3      34        1        3      active sync   /dev/hdg
Also, why is there a "0 0 0 0 removed" line? This should be a clean,
freshly built RAID array.
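In case the array is still doing some kind of initial build that I am
missing, I assume any progress would show up with something like:

[EMAIL PROTECTED] ~]# watch cat /proc/mdstat
[EMAIL PROTECTED] ~]# mdadm --detail /dev/md3 | grep -Ei 'state|rebuild|spare'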
Any thoughts as to why I cannot remove a single disk and keep the array
running in "degraded" mode?
I am running Fedora Core 5, 32-bit:
[EMAIL PROTECTED] ~]# uname -a
Linux fileserver 2.6.18-1.2239.fc5 #1 Fri Nov 10 13:04:06 EST 2006 i686
athlon i386 GNU/Linux
As soon as I can get this and a few other things stable, I intend to
install the 64-bit edition, but if I can't get simple things like this
working, I dare not play with 64-bit.
I also intend to layer LVM on top of the RAID arrays, but that too
depends on getting RAID 5 working reliably first.
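The LVM plan is roughly the following (the volume group and logical
volume names and sizes below are just placeholders):

[EMAIL PROTECTED] ~]# pvcreate /dev/md3
[EMAIL PROTECTED] ~]# vgcreate vg_storage /dev/md3
[EMAIL PROTECTED] ~]# lvcreate -L 400G -n lv_data vg_storage
[EMAIL PROTECTED] ~]# mkfs.ext3 /dev/vg_storage/lv_data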
My other concern is what happens if I reinstall the OS: will I be able
to reassemble the arrays, or will all of my data be lost?
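My assumption is that after a reinstall the arrays could be rediscovered
from the on-disk superblocks with something like the following, but I
would appreciate confirmation before relying on it:

[EMAIL PROTECTED] ~]# mdadm --examine --scan
[EMAIL PROTECTED] ~]# mdadm --assemble --scan
[EMAIL PROTECTED] ~]# mdadm --detail --scan >> /etc/mdadm.conf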
Thanks in advance,
Kenneth