Hi folks,
I was just searching through the archives for help on a RAID problem and
the posts were helpful, so I decided to "give something back" to the
group.
Chris R. Brown ([EMAIL PROTECTED]) wrote:
> I've just come back to my RAID project and have run into some
> more problems. I was wondering if anyone on the list has kludged
> three promise ultra 33 cards and one onboard IDE controller together
> in one box. I'm still trying to get a big (128 gig) array going, but
> it isn't working.
I tried to set up a RAID array with UDMA-33 disks, and have some advice
to offer based on my experiences.
CASE: One very, very important point you might not consider is what
kind of case to use, because cable length is the bane of UDMA--if you go
much over the 18" spec, you may as well forget about getting UDMA
speeds.
speeds. Most big tower cases have the disks in a cage that sits above
the motherboard compartment which places the upper slots too far from
motherboard IDE connectors, and way too far away from any PCI card.
Your best bet is a mini-tower, or a desktop case with more slots than
usual if you can find one. A rack-mount RAID cage might work if you
don't mind the ridiculous price tag.
I tried to set up a UDMA-33 RAID array in a massive 10-slot case and
found that most of the slots were simply too far away and required
cables longer than 18". I had an onboard Intel PIIX and a Promise
UDMA-33 PCI adapter, and I tried a bunch of kernels from 2.0.36 up to
2.2 series with Alan Cox's patches. Both adapters would often get
kicked out of DMA completely due to errors. When I tried to run the
drives that were too far away in UDMA mode, I got errors and resets on
the IDE adapter, and the result was massive corruption of both the
filesystem and the RAID array--bad enough that reconstruction failed.
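If you want to see whether the kernel has quietly dropped a drive out of
DMA, hdparm is the tool to use (the device name here is just an
example--substitute your own):

```shell
# Check the current DMA setting (1 = on, 0 = the kernel fell
# back to PIO, usually after errors like the ones above)
hdparm -d /dev/hda

# Try to turn DMA back on and run a quick throughput test--if
# the cable is too long, the errors will show up again in dmesg
hdparm -d1 /dev/hda
hdparm -t /dev/hda
```

If the -t numbers are way below what the drive should do, DMA probably
isn't really working no matter what the setting says.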
ONBOARD PIIX CHIPSETS: My experience has been that onboard Intel PIIX
chipsets on motherboards other than Intel's own suck for UDMA, even when I've
tried them with a good 2.2 or 2.3 series kernel with Alan Cox's
patches. I had problems with Asus TX-97E (the motherboard I had in the
UDMA-33 RAID array), Asus TXP4, Tyan 1668 (Dual Titan Pro ATX), and Tyan
1662 (Dual Titan Pro AT).
The best I could ever get on the Tyan 1668 was DMA mode 3--I finally
installed a Promise UDMA-33 PCI card. UDMA works fine on the new Asus
P2B Baby AT motherboards that I have.
UDMA DISKS: So far, I've had the best luck with Maxtor drives. Some WDC
drives don't work at all for UDMA under Linux (they're listed in the
"Blacklist" in the Ultra-DMA HOWTO).
MY CURRENT SETUP: My most important goal was availability--to make sure
I could keep my core network server online in the event of some hardware
failure. One problem that RAID could not solve for me was how to do
upgrades and server maintenance without having to bring down the
server. In the end, I decided to set up 2 servers and mirror them
with rsync--that way, I can do maintenance on one server and leave the
other server up and running. When both servers are up and running, one
server is the "master" and the other is the "slave"--on the slave, rsync
runs periodically as a cron job to sync the files with the master (I
download from the master to the slave so I can use read-only access
rather than potentially exposing the slave to unfriendly write access).
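In case anyone wants to do something similar, the slave's cron entry
boils down to something like the following (the hostname, paths, and
interval are made up--adjust them to your own setup):

```shell
# /etc/crontab fragment on the slave -- pull from the master
# every 15 minutes.  -a preserves permissions, owners, and
# timestamps; -z compresses over the wire; --delete removes
# files on the slave that have been deleted on the master.
*/15 * * * *  root  rsync -az --delete master:/export/ /export/
```

Pulling from the slave side like this means the master only ever has to
grant read access, which is the whole point.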
BASIC SERVER HARDWARE:
motherboard: Asus P2B Baby AT w/Intel 440 BX
IDE chipset: onboard PIIX4
processor: single PIII 450 MHz
memory: 128 MB PC100 SDRAM DIMM
system disk: IBM 4 GB Ultra-2 SCSI
SCSI adapter: Adaptec Ultra-2 SCSI adapter AHA-2940U2
Fast Ethernet: 3Com 100BaseFX card with SC connectors
IDE disks: 3x 18 GB Maxtor UDMA-33 disks
Watchdog: Berkshire Products PCWD
AGP Video: Cheapest I could get--Was it Diamond or ATI?
Out-Of-Band Management: Serial link via fiber driver (I also set up the
serial port in lilo so I can get a bootprompt via the serial
connection).
OS: Red Hat 6.0 + updates, custom 2.3.3ac3+efs1.0b kernel
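The serial bootprompt I mentioned takes two pieces of configuration:
lilo has to listen on the serial port, and the kernel has to put its
console there too. A sketch of the relevant /etc/lilo.conf lines (the
port and speed are just what I'd pick--change them to suit):

```shell
# /etc/lilo.conf fragment -- accept boot prompt input on ttyS0
# (COM1) at 9600 baud, no parity, 8 data bits
serial=0,9600n8

image=/boot/vmlinuz
        label=linux
        # put the kernel console on the same serial port
        append="console=ttyS0,9600"
```

Run lilo again after editing, of course, or the change won't take.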
D. Carlos Knowlton wrote:
> Has SCSI finally met
> its match with inexpensive IDE (U-DMA-66 vs. UW-SCSI-45 (again, how are
> these numbers significant?))?
UDMA vs. SCSI: If you want to have lots of disks, I think SCSI is a much
better way to go vs. having multiple PCI cards soaking up system
resources, plus you can install more drives externally. LVD Ultra-2
SCSI is even better because it supports very long cables and up to 15+1
devices on the SCSI bus.
If you go with Ultra-2 SCSI, you should be aware that there are some
differences in cabling and so forth--e.g. the drives don't support
termination, so for the internal bus you need a separate internal
terminator, which costs about $45. If you only need, say, 4 disks and want
a cheap, fast solution, UDMA-33 or UDMA-66 might do the trick, but you
have to be very careful when you pick out the hardware or UDMA might not
work at all.
Good luck!
--
"Jonathan F. Dill" ([EMAIL PROTECTED])