On Sun, 19 Dec 2010, Luke S Crawford wrote:

> So, in the past I have been very anti-hardware raid, mostly because
> the cards I could afford wanted to charge me what another 3 spindles
> would cost, and usually had only 64MiB of battery-backed cache.
> Last time I benchmarked one of those cards, they were largely
> indistinguishable, performance wise and reliability wise, from linux
> md (at the time I was testing the 'half failed' mode that so many
> consumer sata drives fall into. Both the hardware raid I tested
> (an expensive 3ware) and md dealt poorly with the half-failed drive.
> I solved the problem by simply moving to 'enterprise' sata, and
> settled on linux MD and raid 1+0, because I saw no benefit to
> paying for the raid card.)
>
> Anyhow, I'm hearing things about new servers from dell and HP coming
> with RAID cards that have on the order of 1GiB of cache; and better,
> it's flash-based cache, so no battery modules to pay for/worry about.
>
> With that kind of cache, it seems to me like it may be time to
> re-evaluate my prejudices; with enough persistent write cache,
> raid5 can actually give better performance, from what I understand,
> than the raid 1+0 I use, given the same number of spindles, but
> that cache is pretty important.
>
> Anyhow, I was wondering what experiences others have had with this?
> I mean, I'll have to start building larger boxes, I imagine, to
> justify the cost of the card (my current systems are 8 core,
> 32GiB ram 4 disk systems; it probably makes sense for me to
> double or triple that, which is pretty easily doable.)
>
> What I'm wondering, though, is what success other people have had with
> these cards, and with what Linux kernels?
I've been using LSI-based cards for several years, and I also have some of the 3ware cards; they are working well. I've heard good things about Areca cards (from the postgres community), but haven't used them yet. The Dell PERC raid cards have a bad reputation with the postgres community.

In terms of a flash-based cache, I would have some concerns. One is the durability of the flash: with only 1G of flash, heavy writes are going to wear it out much sooner than the same amount of writes to a much larger SSD with wear leveling. Another is the write speed: flash is relatively slow to write to, so I would not expect it to have nearly the performance of battery-backed RAM.

I use hardware raid cards in many places, but also use software raid in some places. Hardware raid (with battery backup) is significantly more durable. With software raid you have the problem that the OS may have updated some of the drives, but not others, at the time you lose power, so the stripe may be in an inconsistent state.

The battery-backed cache can also give you huge performance improvements for workloads that do fsyncs to make sure the data is safe on disk (mail servers and database servers commonly do this), as they are usually faster than doing the same I/O without a cache and without fsync.

The failure modes you run into with hardware raid are also frequently much nicer than with software raid. If the raid card has problems reading a drive, you still get your answer from the other drives at the normal degraded performance. But if the kernel has problems reading a drive, it keeps retrying, which can take quite a while, during which time lots of things just stop.

David Lang

_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/
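
[Editor's note: the fsync cost David describes can be seen with a minimal sketch. This is an illustration, not anything from the thread; the record count and file location are arbitrary. It times a run of small appends with fsync after every write against the same run without, which is roughly the pattern a mail server or database follows and the case where a battery-backed write cache helps most.]

```python
import os
import tempfile
import time

def timed_writes(n, do_fsync):
    """Append n small records to a temp file, optionally calling
    fsync after each one. Returns elapsed seconds. With fsync, every
    record must reach stable storage before the next write begins;
    without it, writes land in the OS page cache and return quickly."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.monotonic()
        for i in range(n):
            os.write(fd, b"record %d\n" % i)
            if do_fsync:
                os.fsync(fd)  # block until the data is on stable storage
        return time.monotonic() - start
    finally:
        os.close(fd)
        os.unlink(path)

buffered = timed_writes(200, do_fsync=False)
synced = timed_writes(200, do_fsync=True)
print("buffered: %.4fs  fsync-per-write: %.4fs" % (buffered, synced))
```

On plain spinning disks the fsync run is typically orders of magnitude slower, since each fsync waits on a platter write; a controller with a battery-backed (or flash-backed) cache can acknowledge the fsync as soon as the data is in its cache, closing most of that gap.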
