> I've built 10-20 Linux software SCSI RAIDs on 5-10 systems under
> various 2.0.x kernels (but none under 2.2.x and none using IDE).
>
> One of the things I've found is that the hardware has to be *very*
> reliable. A recent system with two RAID-5 arrays and one RAID-1
> took over a month of component swapping before it was solid.
>
I have had 3 systems in production for over a year now: 1 SCSI and
2 IDE. All are RAID-5, though one (an IDE system) started out as
RAID-1. All of them were built on premium motherboards, ASUS and
IWILL (well, almost premium), came up immediately, and have run
ever since. They are all root RAID, with partitions of about
12-14 GB; the IDE arrays are 3-disk and the SCSI array is 4-disk.
Gotta say that I'm still using the 0.42 raidtools and the older
RAID code. I keep hearing crash-and-burn stories, so I'll wait a
few more months before converting PRODUCTION systems to the new
RAID code. These are all 2.0.33 kernels.
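
For anyone running similar setups: a degraded RAID-5 is one disk
failure away from data loss, so it pays to check /proc/mdstat
regularly. Below is a minimal monitoring sketch, not anything that
ships with the raidtools; it assumes the "[n/m]" status-line format
of the later md drivers (the exact /proc/mdstat layout varies across
kernel versions), and the degraded_arrays helper is just an
illustrative name.

import re
import sys

def degraded_arrays(mdstat_path="/proc/mdstat"):
    """Return (array, active, expected) for arrays missing a member."""
    with open(mdstat_path) as f:
        text = f.read()
    bad = []
    # Each stanza looks roughly like:
    #   md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
    #         17767424 blocks level 5, 64k chunk [3/3] [UUU]
    # where [3/3] is "expected members / active members".
    for m in re.finditer(r"^(md\d+) : .*?\[(\d+)/(\d+)\]",
                         text, re.MULTILINE | re.DOTALL):
        name, want, have = m.group(1), int(m.group(2)), int(m.group(3))
        if have < want:
            bad.append((name, have, want))
    return bad

if __name__ == "__main__":
    degraded = degraded_arrays()
    for name, have, want in degraded:
        print("DEGRADED: %s has %d of %d members active"
              % (name, have, want))
    sys.exit(1 if degraded else 0)

Run from cron, a nonzero exit (or the DEGRADED line in the mail it
generates) is the cue to swap a disk before a second failure takes
the whole array down.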
Michael
[EMAIL PROTECTED]