I've built 10-20 Linux software SCSI RAIDs on 5-10 systems under various
2.0.x kernels (but none under 2.2.x and none using IDE).

One of the things I've found is that the hardware has to be *very*
reliable.  A recent system with two RAID-5 arrays and one RAID-1 array
took over a month of component swapping before it was solid.

Typically, for each RAID and available partition, I start the following on
a virtual terminal:

  while true; do mke2fs /dev/...; e2fsck -f /dev/...; done

I also run these loops on the raw partitions before building the RAIDs.
Often a system which passes the manufacturer's tests, runs NT, passes
QAPLUS/FE, and installs Red Hat will still fail this test within an hour.
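
A failure can scroll past unnoticed in the bare loop above, so a variant
along these lines stops and timestamps the first e2fsck problem instead
(the device name /dev/sda1 is only a placeholder, and -y is added so
e2fsck doesn't sit waiting at a prompt):

  DEV=/dev/sda1                  # placeholder; one loop per device under test
  while mke2fs $DEV && e2fsck -f -y $DEV; do
      date                       # timestamp each clean pass
  done
  echo "FAILED on $DEV at `date`"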

After a *lot* of wasted time in the past, we now require that the system
can run this test for a week before proceeding with further system
configuration.
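
To verify after the fact that a box really did run clean for the whole
week, a minimal sketch is to log a timestamp per pass; the device name
and log path below are only examples:

  DEV=/dev/sda1                            # example device
  LOG=/root/burnin.sda1.log                # example log location
  while mke2fs $DEV >>$LOG 2>&1 && e2fsck -f -y $DEV >>$LOG 2>&1; do
      date >>$LOG                          # one line per clean pass
  done
  echo "stopped at `date`" >>$LOG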

The three most common problems we've found, in order:

1) Motherboard.
2) Memory.
3) SCSI cable / termination.

--Mike
----------------------------------------------------------
 Mike Bird          Tel: 209-742-5000   FAX: 209-966-3117
 President          POP: 209-742-5156   PGR: 209-742-9979 
 Iron Mtn Systems         http://member.yosemite.net/
