Wing D Lizard wrote:
> I want to add a RAID-1 to a server (Supermicro X5DAL). Since the
> motherboard doesn't support hw RAID, I'm looking at PCI SATA
> cards.
>
> Intrex has a couple:
>
> CON-SATAR - SiL 3112A chipset
> CON-TX2300 - Promise FastTrak TX2300 (SATA II??)
>
> Has anybody tried either of these? Any problems?
Okay, so I'm like 10 days late on checking my TriLUG mail, and it shows. :) Still, I couldn't help but respond to this posting. There was lots of discussion about the Promise offerings, as well as the Silicon Image (SiL) chipset cards, both onboard and off. Strangely, I happen to have done a lot of side-by-side testing of these very chipsets in the not-too-distant past.

While working for Intrex, I got called out to Duke to help debug a problem with a workstation in one of the labs. In short, they were getting abysmal disk performance on this particular RAID card we had shipped them, compared to the nearly-identical SCSI-based system sitting right next to it. Note that they weren't actually using the RAID functionality of the card, just using it as a disk controller for 4 reasonably-fast SATA disks. They were running RHEL 3, and wanted to improve the performance of their SATA disks, or at least understand why it was so bad in comparison.

Upon close inspection, there were actually two SATA controllers in the box: the Promise card, plus two ports onboard the Intel motherboard, controlled by "a SiL chipset of ill repute". In short, the Promise card was *considerably* faster than the SiL chipset (something on the order of 8x, if my memory is correct), particularly when attempting to write to more than one disk at a time.

Further investigation into this problem turned up some very interesting comments in one of the *BSDs' kernel driver code for the SiL chipset, written by the developer, where he basically explicitly stated that no one should ever use that chipset. :) In short, the hardware itself wasn't able to properly handle reading and writing to more than one drive at a time. Even though the chipset claimed to be capable of it, it basically wigged out when attempting to do so, causing all manner of failures. This resulted in ugly workarounds in the driver, at the expense of performance, to ensure that this didn't happen by accident.
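If anyone wants to reproduce that kind of comparison themselves, here's a rough sketch of a concurrent-write test in Python. The helper name and the temp-file targets are my own inventions, not anything from the original testing; point the paths at files on the two disks under test, and bear in mind that without raw device access the page cache will flatter the numbers somewhat, the fsync notwithstanding.

```python
import os
import tempfile
import threading
import time

BLOCK = b"\0" * (1 << 20)  # write in 1 MiB chunks, roughly like dd bs=1M

def write_target(path, mib=32):
    """Write `mib` MiB to path, then fsync (like dd ... conv=fsync)."""
    with open(path, "wb") as f:
        for _ in range(mib):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())

def concurrent_write_mb_s(paths, mib=32):
    """Write to all paths at once; return aggregate throughput in MiB/s.

    A controller that can't really drive two disks at once will show
    aggregate throughput collapsing toward the single-disk figure.
    """
    threads = [threading.Thread(target=write_target, args=(p, mib))
               for p in paths]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(paths) * mib / (time.monotonic() - start)

if __name__ == "__main__":
    # Temp files stand in for the two disks so the sketch is safe to run.
    with tempfile.TemporaryDirectory() as d:
        targets = [os.path.join(d, "disk0.img"), os.path.join(d, "disk1.img")]
        print("aggregate: %.1f MiB/s" % concurrent_write_mb_s(targets))
```

Run it once per disk individually, then once against both; if the two-disk aggregate is no better than a single disk (or worse), the controller is serializing behind your back, which is exactly what we saw on the SiL chipset.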
I didn't find similar comments in the Linux kernel, but I suspect similar tomfoolery was at work, hence the perceived differences. Even after getting all the drives over on the Promise controller and tuning everything to the nines, I still couldn't get within swinging distance of the performance of the SCSI system, which hadn't been tuned in the slightest; they just installed a stock RHEL 3 and ran with it. I think the SCSI system was something like 40 to 50% faster.

Bear in mind, of course, that the *drives* in the SCSI system were also substantially better: they were 10k RPM drives, vs. the 7200 RPM drives in the SATA system. So it's not entirely the fault of the transfer channel, but a happy accident of the type of hardware available on both platforms. It's also true that things may have changed in the ~2 years since I last banged my head against this problem.

Aaron S. Joyner

-- 
TriLUG mailing list        : http://www.trilug.org/mailman/listinfo/trilug
TriLUG Organizational FAQ  : http://trilug.org/faq/
TriLUG Member Services FAQ : http://members.trilug.org/services_faq/
