James C. Dastrup <[EMAIL PROTECTED]> says:

> Most of the complaints I see around software raid come from people
> with HD tuners.
>
> I've been using software RAID for a few weeks and I'm seeing the
> same problems others are. I have two HD tuners and one PVR150 with
> a 100mbit remote storage of 4 IDE drives in RAID 5. The PVR 150
> recordings are fine, but HD streams are not so great, especially if
> I have auto-commercial detection running or anything else going on
> while recording.
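A quick back-of-the-envelope check of that 100mbit link first. The bitrates here are my assumptions, not measurements: an ATSC HD stream runs up to about 19.4Mbps, and a PVR-150 SD recording somewhere around 6Mbps.

```python
# Sanity check: can a 100Mbit link carry the workload described above?
# Bitrates are assumptions: ATSC HD tops out near 19.4 Mbps, and a
# PVR-150 SD recording is roughly 4-8 Mbps depending on settings.
HD_MBPS = 19.4          # one ATSC high-definition stream
SD_MBPS = 6.0           # one PVR-150 standard-definition recording
LINK_MBPS = 100         # nominal Fast Ethernet line rate
USABLE_FRACTION = 0.80  # protocol overhead, contention, etc. (guess)

def link_headroom(hd_streams, sd_streams, playback_hd=0):
    """Mbps left over after carrying the given mix of streams."""
    demand = (hd_streams + playback_hd) * HD_MBPS + sd_streams * SD_MBPS
    usable = LINK_MBPS * USABLE_FRACTION
    return usable - demand

# Two HD recordings + one SD recording + one HD playback:
print(link_headroom(2, 1, playback_hd=1))  # ~16 Mbps to spare -- thin
```

On those assumptions the setup above has only about 16Mbps of slack, and a third HD stream pushes it underwater, which matches what people are reporting.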
100Mbps = 12.5MB/s = probably <10MB/s in most real-life applications. Recording two HD streams and one SD stream *and* playing back an HD stream simultaneously, as you'd probably want to do, is likely pushing the envelope.

[Note that much of the below I've written to mythtv-users very, very recently. Like, within the past two weeks.]

I have a single Pentium 4 3.0GHz frontend/backend. I found *tuned* NFS unusable (numerous IOBOUND messages in mythbackend.log) for recording one, let alone two, HD streams onto a software RAID 5/LVM2/JFS 2.8TB array, even over gigabit Ethernet. Samba, even relatively untuned (smb.conf takes a bazillion more configuration options than anything you can do with NFS, for good or for ill, and I've far more NFS than Samba experience), proved quite usable for recording two HD streams while playing a third. Adding commercial flagging to the mix did cause some intermittent problems, but I resolved that by only flagging when I'm not at home, and it's entirely possible that swapping (the MythTV box only has 512MB) was responsible for the IOBOUND messages in this scenario.

To deal with some other issues raised in this thread:

* Most cheap "hardware" RAID cards (Promise, RocketRAID, etc.) actually do software RAID through the driver. 3Ware is the cheapest and most popular true hardware RAID solution (although I am using my two 3Ware cards as JBOD-only controllers).

* Software RAID, at least the Linux variety, may not always be as efficient as we'd like, but it is reliable (see below). It is a well-tested beast; a lot of very, very smart people have gone over it with fine-toothed combs over the years. To many others' experiences I can add my own findings over the past 12 months.

* $500 spent on an Intel/AMD CPU buys you a *lot* more horsepower than the equivalent sum spent on a hardware RAID controller card.
Putting it another way, for the amount of money most people are willing to spend here, software RAID is almost certainly the way to go for performance reasons alone, even disregarding its other main advantage: near-guaranteed downstream compatibility. (Again, 3Ware has a pretty good reputation among hardware RAID vendors for supporting customers who upgrade to *its own* later cards, but there are plenty of horror stories about many other vendors' products.)

* Trey Boudreau raises a good point regarding software RAID 5's additional bus bandwidth usage for parity. I will note that 1) as mentioned, most cheap hardware RAID solutions are actually disguised proprietary software RAID, so they have no advantage--and many disadvantages--over Linux software RAID here, and 2) a proper server setup should have more than enough bandwidth for even the largest software RAID 5 setups.

Note I wrote "proper server setup." By this I mean a high-quality motherboard with 64-bit PCI-X or PCI Express slots and multiple buses serving the expansion slots. My RAID 5 array is on a Supermicro X5DAL-G motherboard with two 64-bit 133MHz PCI-X slots and three buses (one for each of the PCI-X slots and a third for the regular PCI slots; there is a reason why 3Ware cards support 64-bit PCI-X). By contrast, simply stuffing controller cards and drives into the old unused consumer-grade system you have sitting in the closet is not ideal. At best, performance will be merely acceptable; at worst, your poor motherboard--never designed for the stress of sending so much data at once over its (likely single) bus--will freak out, causing parity errors.

* With all that said, I am in the process of backing up said RAID 5 array (onto an Infrant 600 with 2.0TB) because of some odd read/write performance (*not* data integrity) issues with my particular setup.
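As a rough sketch of Trey's parity point, here is the pessimistic textbook case. The 4-transfers-per-write figure is the classic RAID 5 small-write (read-modify-write) path, and the stream bitrate is again my assumption.

```python
# Extra host-bus traffic from software RAID 5 small writes. The classic
# read-modify-write path does four transfers per logical write (read old
# data, read old parity, write new data, write new parity), and with
# software RAID every one of them crosses the host bus.
def raid5_bus_mbps(write_mbps, rmw_factor=4):
    """Host-bus Mbps consumed to sustain a given logical write rate."""
    return write_mbps * rmw_factor

HD_MBPS = 19.4             # one ATSC HD stream (assumption)
PCI_BUS_MBPS = 133 * 8     # shared 32-bit/33MHz PCI: ~1064 Mbps peak

traffic = raid5_bus_mbps(2 * HD_MBPS)   # two simultaneous HD recordings
print(traffic, traffic / PCI_BUS_MBPS)  # ~155 Mbps, ~15% of plain PCI's peak
```

Even this worst case fits on a single plain PCI bus for a couple of HD streams. The catch on a consumer board is that the disk controllers, network card, and everything else share that one bus, so you never actually get the peak figure--which is why the multiple-bus "proper server setup" below matters.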
My guess is that the culprit is LVM2, which I've seen people elsewhere blame for halving performance, but I plan to test various RAID (including 3Ware hardware RAID), LVM2, and filesystem combinations. I'd be surprised if, once the performance issues are resolved, NFS does not work fine for my purposes.

* David Bennett's original issue--two recordings starting at the same time and saved onto a RAID array not working properly, while starting them a minute apart is OK--got lost in the shuffle as people (including me, now) took the opportunity raised by his rather melodramatic subject line and message text to discuss various RAID issues of their own. However, my sense is that, as chris wrote, David's issue is not RAID-related per se; rather, some minor RAID-related performance/latency issue simply worsens a race condition (in layman's terms, two things trying to do the same thing at the same time and conflicting) within the Myth scheduler software.

(And, David, I haven't had the chance to benchmark my Infrant for MythTV purposes yet, although I certainly look forward to doing so after my current use for it is done. If it tests well I will likely use it to hold my MythTV recordings. No question that it's by far the best-reviewed "NAS in a box" consumer/SMB-grade solution out there.)

--
Yeechang Lee <[EMAIL PROTECTED]> | +1 650 776 7763 | San Francisco CA US

_______________________________________________
mythtv-users mailing list
[email protected]
http://mythtv.org/cgi-bin/mailman/listinfo/mythtv-users
