Hi all, Some of you may remember the previous discussion I started regarding RAID setups...
Okay... after Russell (and a few other people) suggested that I try software RAID, I did. Everything *seemed* fine... until I deliberately tried to break the setup. The array has two identical disks in a RAID 1 mirror, both set as master, each on its own cable. I tried to simulate a couple of failure modes I've seen happen before (from experience): 1) a disk that fails to spin up properly at boot time, and 2) disk errors during normal use.

Case 1) I replaced one of the disks with an old drive that has bad blocks and makes strange noises (obviously damaged; it had been dropped at some point). The BIOS detected the drive fine, but at the point where the "lilo:" prompt should appear, nothing came up. The disk just kept grinding and grinding, and eventually the machine asked for a floppy. I was hoping the second, working drive in the array would kick in at any moment, but it never did; everything stalled right there. With the bad drive installed by itself, the BIOS eventually gives up on the disk and tries to boot from floppy. (The two disks are not on the same cable, by the way.) The BIOS lets me set the usual boot order (floppy, then CD-ROM, then hard disk 0, then network; there is no option for hard disk 1, unfortunately), and "Boot other devices" is set to yes.

My question: if this had been hardware RAID 1, would the same thing have happened? Would a hardware RAID controller recognise the problem, pause only briefly, and then try the second disk automatically and transparently?

Case 2) I simulated intermittent errors by connecting a flaky IDE cable to one of the drives. I was hoping the software RAID would either compensate by doing most of its reading from the good drive (the one with the good cable) or mark the flaky drive as failed, but instead the array just slowed down: writes took much longer and strange errors started occurring during writing.

My question: would hardware RAID have handled this situation any better?

As for hardware IDE RAID, which is better, Promise or HighPoint? Promise seems to be better supported in the kernel, but I'm not so sure. What happens when a disk in one of these arrays fails? How do you control the hardware RAID so you can manage a rebuild? And for Promise, HighPoint, etc., what will the devices be called (/dev/hde, or maybe /dev/raid/array1)?

Thanks in advance. (And yes, Google IS my friend: I've been through not only linuxdoc.org (http://www.linuxdoc.org/HOWTO/Software-RAID-HOWTO-3.html) but also http://www.thelinuxgurus.org/raid.shtml, http://linas.org/linux/raid.html, and http://www.linuxdoc.org/HOWTO/mini/DPT-Hardware-RAID.html, but none of them really cover IDE hardware RAID cards in any depth.)

Sincerely,
Jason
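
P.S. In case it helps anyone reproduce the software-RAID side of this, here is a rough sketch of how I understand a failed mirror half gets swapped out and resynced with the raidtools described in the Software-RAID HOWTO. The md device and partition names (/dev/md0, /dev/hdc1) are just placeholders, not necessarily what I actually have:

    # mark one half of the mirror as faulty, then pull it out of the array
    raidsetfaulty /dev/md0 /dev/hdc1
    raidhotremove /dev/md0 /dev/hdc1

    # after replacing the disk/cable, add it back and let the mirror resync
    raidhotadd /dev/md0 /dev/hdc1

    # watch the resync progress
    cat /proc/mdstat

(I gather mdadm can do the same sort of thing with "mdadm /dev/md0 --fail", "--remove" and "--add".) What I'd like to know is whether the Promise/HighPoint cards give you anything equivalent from within Linux, or whether a rebuild has to be done from the card's BIOS.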

