On 10/27/05, Brian C. Huffman <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-10-27 at 11:42 -0400, Bryan Halter wrote:
>
> Well, to start with, for RAID you need disks that are all the same
> size (preferably the same model). I believe Linux supports growing
> software RAID volumes, and I'm sure someone will correct me if it
> doesn't. Personally, I'd go out and buy a 4-device SATA RAID controller
> and four 250GB drives; that will give you 750GB of storage and fewer
> headaches, since the RAID array will be seen as any other SCSI disk and
> the card will do the thinking, so you won't take a CPU hit for having
> RAID.
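To answer the "someone will correct me" part first: no correction needed, Linux can grow a software RAID volume. With a new enough kernel and mdadm you add a disk, reshape the array, then grow the filesystem on top. A rough sketch, assuming an existing 4-disk RAID 5 at /dev/md0 with XFS mounted at /video (device names and mount point are placeholders):

    # Add the new disk to the array as a spare
    mdadm /dev/md0 --add /dev/sde1

    # Reshape from 4 active devices to 5 (runs in the background; can take hours)
    mdadm --grow /dev/md0 --raid-devices=5

    # Watch reshape progress
    cat /proc/mdstat

    # When the reshape finishes, grow XFS into the new space (works while mounted)
    xfs_growfs /video

Note that RAID 5 reshape needs kernel support, so check what your kernel can do before counting on this.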
Beyond that, I just wanted to throw in my 2 cents on this 360-degree discussion about RAID and file systems. Nothing is perfect. For data integrity and speed: HW RAID > SW RAID > no RAID.

First I had no RAID, just a single 300GB drive. I found that with 4 tuners going and 3 machines trying to commercial-flag, watching even a single frontend was impossible without stuttering video, choppy fast-forward, "IO Bound" warnings on disk writes, network saturation, or some combination of the above.

So I went to software RAID: XFS (after trying ext3, JFS, XFS, and ReiserFS) over NFS (tried versions 2, 3, and 4) on software RAID 5. All the stuttering and "IO Bound" issues went away and I could get good recordings, but the stability just wasn't there. I fought with that for almost two years. I never lost "all" the data, which is obviously the goal of RAID 5, but I had IDE controllers die, individual drives die, motherboards die, and through multiple kernel and OS upgrades I was always fighting little crashes, kernel panics, etc. I'd put the overall stability at about 90%, which is TERRIBLE for something with a 24/7 duty cycle. I tried VIA-chipset boards, NVIDIA-chipset boards, onboard IDE, onboard SATA, PCI SATA, PCI IDE, and every combination. Just before I made the jump to hardware RAID, I was up to 4x IDE plus 4x SATA running on an Abit N7G2.

Then, about 3 months ago, when I got fed up with taking calls at work from the SO like "Honey, the backend locked up again," I finally bought a hardware RAID card (a 3ware 9500S-12). WOW. I should have done that in the beginning. All the stability issues were gone, INSTANTLY. Period. CPU load drops to 10% under heavy IO load. It took me almost two days to migrate all the data off to a couple of 300GB drives, build a new RAID 5 array, and copy everything back, all while continuing to do daily recordings. But wow.

Yes, there are still plenty of things that could make my life miserable. I dodged a bullet just yesterday: lying in bed, I heard a THUMP. I came into the computer room and the backend was off, and I could smell something in the air... Long story short, the Antec TruePower 550 gave up. I slapped in a new Enermax 600W and we're back up and operating with no data loss. Yes, I realize the dying PSU could have coastered all my drives, and then I would have lost everything, RAID or not.

Still, for me this solution gives exactly what I need: speed, stability, and the ability to sustain a SINGLE drive failure without losing all the data. (Which seems to happen about every 6 months... consumer drives are NOT rated for a 100% duty cycle, but that's another discussion. More on catching that first failure in the P.S. below.)

Looking at the graphs from my switch (SNMP and JFFNMS are a great combo), with commercial flaggers and recorders going I see sustained periods of 300Mbit/s on the gigabit port of my primary backend. All without a hitch. Stable, solid, and days of uptime.

So, my advice, in short? Buy good hardware, set it up in ways known to be stable, and it will serve you well. You get what you pay for.

Hope this helps someone else going through this journey.
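P.S. Since RAID 5 only survives ONE dead drive at a time, the real trick is hearing about the first failure immediately, before a second one takes the array with it. For anyone still on software RAID, mdadm has a monitor mode that mails you when an array degrades; a minimal sketch, with the address and device names as placeholders:

    # Run the md monitor as a daemon and mail alerts on Fail/DegradedArray events
    mdadm --monitor --scan --daemonise --mail=you@example.com

    # Or spot-check array health by hand
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # And watch the drives themselves with smartmontools
    smartctl -H /dev/sda

The hardware cards have their own alerting (3ware ships the 3DM management daemon), so if you go that route, set that up instead.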
