Rick DeNatale wrote:
> Just how much performance penalty is there in SW raid? My naive guess
> would be that 1) it's probably not an issue for raid-1, where the
> purpose is redundancy, but might be more so for striped raid
> configurations where the purpose is to achieve parallelism, and 2) the
> SW raid implementation has probably gotten better over time such that
> those who decided years ago that it was a performance problem are
> cruising on an outdated notion.
The penalties really depend on where your bottlenecks are.

Software RAID 1 uses twice as much bandwidth on the PCI bus as hardware RAID 1: with software RAID the OS has to push the same data to two different drives, while with hardware RAID the controller takes care of the duplication.

You would need a REALLY good hardware controller to beat the speed of software RAID 5, although software RAID 5 will have more CPU overhead. Hardware RAID cards don't tend to have a CPU that can compute the parity as fast as your server can. The more disks your RAID 5 array has, the less the extra PCI overhead matters.

RAID 0 shouldn't incur any measurable PCI or CPU overhead; hardware and software RAID 0 should perform very comparably.

RAID 10 has the same sort of overhead as RAID 1. My brain doesn't want to give me a good solution in this case... If you want all the redundancy you can get, then you want each mirrored pair to have its drives on separate controllers. That means you are likely going to do the mirroring in software, which means you may as well do the whole thing in software. I don't usually mind using double the bandwidth in a RAID 1 array; like you said, if you are using RAID 1, performance isn't your primary goal. However, when you are running RAID 10 that is a lot of wasted bandwidth...

Pat
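For the curious, here is a minimal sketch of the per-stripe parity work that makes RAID 5 CPU-bound on weak controllers: parity is just the byte-wise XOR of the data chunks, and any single lost chunk can be rebuilt by XOR-ing the survivors with the parity. The chunk sizes and contents below are made up for illustration; real implementations do this over large blocks with SIMD, but the arithmetic is the same.

```python
def xor_parity(chunks):
    """Byte-wise XOR of equal-length data chunks (RAID 5-style parity)."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

# Three illustrative data chunks in one stripe, plus their parity chunk.
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_parity(data)

# Simulate losing the second disk: rebuild its chunk from the
# surviving data chunks plus the parity chunk.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

Note that every write to the array means recomputing parity, which is why the CPU cost scales with write throughput rather than with the number of disks.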
--
TriLUG mailing list : http://www.trilug.org/mailman/listinfo/trilug
TriLUG Organizational FAQ : http://trilug.org/faq/
TriLUG Member Services FAQ : http://members.trilug.org/services_faq/
