On Mon, Jun 4, 2012 at 11:55 AM, Benjamin D. Fillmore <[email protected]> wrote:
> Andy's configuration is actually much more dangerous than the OP's
> RAID-10. Creating a single 18TB array from 3TB drives is just begging for
> complete system failure and data loss. The statistical probability of
> multiple drive failures grows with the total size of the array and the
> associated rebuild time.

No, it's not. My RAID6 configuration would require 3 simultaneous/overlapping drive failures before the array as a whole would fail. A RAID 10 system using 4 drives would fail if the 2 drives in the same RAID1 branch failed. For a fixed number of drives, the likelihood of 3 simultaneous drive failures (assuming independent drive failure probabilities, i.e. the drives aren't all the same age, etc.) is an order of magnitude lower than the likelihood of 2 drives failing.

The likelihood of simultaneous drive failure increases more or less linearly with the total number of drives (again, assuming independent drive failures). Since I'm using 8 drives, my system is twice as likely to suffer a drive failure as it would be if I were using 4 drives. That said, this doubling in failure likelihood is almost irrelevant compared to the order-of-magnitude difference between the likelihood of 3 simultaneous failures and that of 2 simultaneous failures. Thus, my RAID 6 configuration is far safer than any RAID 10 implementation using only 2 drives per RAID1 branch.

If you increased the number of mirrored drives in each RAID1 branch from 2 to 3, you would create a system that was a little bit safer than my RAID6 system, but you'd only be getting 2 drives' worth of space out of 6 drives total, whereas I'm getting 6 drives' worth of space out of 8 drives total. 2-drive RAID 10 is safer than RAID5, but again, not by as large a margin as you might expect (hence my comment about the common misunderstanding of nested RAID failure rates), and at the added cost of lower efficiency (2 usable drives vs. 3 usable drives, assuming 4 drives total).
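To put rough numbers on the argument above, here's a back-of-the-envelope sketch (mine, not from the thread) comparing the two topologies under the stated independence assumption. The 2% per-drive failure probability during a single rebuild/overlap window is a purely illustrative placeholder, not a measured rate:

```python
from math import comb

# Hypothetical per-drive failure probability during one rebuild/overlap
# window (illustrative placeholder, not a measured value).
p = 0.02

# 8-drive RAID6: the array survives up to 2 drive failures, so it is
# lost only if 3 or more of the 8 drives fail in the same window.
p_raid6 = sum(comb(8, k) * p**k * (1 - p)**(8 - k) for k in range(3, 9))

# 4-drive RAID10 (two 2-drive mirrors): the array is lost if both
# drives of either mirror fail in the same window.
p_mirror = p**2
p_raid10 = 1 - (1 - p_mirror)**2

print(f"RAID6  loss probability (8 drives): {p_raid6:.2e}")
print(f"RAID10 loss probability (4 drives): {p_raid10:.2e}")
```

With these illustrative inputs, the 8-drive RAID6 comes out roughly half as likely to lose data as the 4-drive RAID10, despite having twice as many drives, which is the shape of the trade-off described above.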
Modern 3TB drives have far lower failure rates than the 500GB drives of 10 years ago. While larger drives are more likely to fail than smaller drives, all (high-quality) modern drives are a lot less likely to fail than the drives of the previous generation that we all used in RAID systems for many years without complaint. There's no magic over-2TB barrier where drive failure rates increase dramatically. Drive size contributes a linear increase in failure likelihood that is generally eclipsed by other factors (RAID topology, number of drives, etc.).

Larger drives do also contribute to longer rebuild times, and thus a heightened likelihood of simultaneous failures due to larger overlap windows. That said, using our hardware RAID card, we can do a full rebuild of a degraded array in under 24 hours. On a separate Linux MD server that we run using software RAID and a modern processor, we can rebuild an 18TB array in about 10 hours when properly tuned and bit-mapped for high-speed rebuilds.

For a nice analysis, see:
http://evadman.blogspot.com/2010/08/raid-array-failure-probabilities.html

And for the classic Google paper with more details on low-level drive failure parameters, see:
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf

There are always going to be trade-offs, and as anyone who deploys it correctly will tell you, RAID is not a backup. It's never fail-proof, just fail-resistant to varying degrees and at varying costs. All of these factors will need to be considered when deciding what the right approach for your clients is.

-Andy

> Quite frankly, anything 2TB or greater, I would recommend doing
> nothing more than RAID10. If you're using Linux, you could easily combine
> the multiple RAID sets using LVM into a single logical mountpoint (if you
> REALLY wanted that). However, you never want to create a RAID array that
> takes too long to rebuild in the event of a disk failure.
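As a sanity check on those rebuild times: an MD rebuild has to rewrite one full replacement drive, so the duration is governed by the per-drive sustained rate rather than the total array size. The 85 MB/s rate below is an assumed illustrative figure (actual rates depend on the drives, controller, and tuning), but it lands close to the ~10-hour figure quoted upthread:

```python
# Rough rebuild-time estimate for an MD RAID6 built from 3TB drives.
# A rebuild rewrites one full replacement drive end to end.
drive_bytes = 3e12          # one 3TB member drive
rate_bytes_per_s = 85e6     # ASSUMED sustained rebuild rate, ~85 MB/s

rebuild_hours = drive_bytes / rate_bytes_per_s / 3600
print(f"Estimated rebuild time: {rebuild_hours:.1f} hours")
```

Halve the assumed rate (e.g. an untuned array throttled by concurrent workstation I/O) and the rebuild, and thus the window for an overlapping failure, doubles, which is why the tuning matters.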
> I don't plan on writing an article about this here....there are plenty
> available on the internet already.
>
> On 6/4/2012 12:15 PM, Andy Sayler wrote:
>
> We run dual 18TB file servers using 8 3TB drives in a RAID6 configuration.
> Each server mirrors itself to the other server for redundant backup. These
> servers are attached to our network and serve up the file stores via
> SMB/CIFS (they are Windows-based). Until recently, we were accessing them
> over a 10/100 Mbps network with no issues from our Rivendell workstations.
> The only exception was the server-to-server mirroring, which runs over a
> dedicated direct connection to avoid choking off workstation access
> when the mirroring occurs each night.
>
> Being Windows-based, we use Areca ARC-1220 hardware RAID cards in these
> machines: http://www.newegg.com/Product/Product.aspx?Item=N82E16816131004.
> One of these per machine connects all 8 drives and powers the RAID6. If we
> were using Linux servers (which we would be if we were building from
> scratch again today), we'd probably use Linux software RAID (Linux MD RAID)
> to drive the system. Modern processors are more than fast enough to handle
> this, and the Linux MD RAID implementation is fairly ubiquitous, fast, and
> very stable. Were we to use software RAID, we'd go with a cheaper
> HBA adapter card since we wouldn't need hardware RAID support:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112.
>
> I'd recommend going with RAID 5 or RAID 6 (only a possibility if you
> have more than 4 drives) instead of RAID 10. I've never been a big fan of
> combined RAID levels, as they tend to be harder to expand, harder to
> maintain, and very few people actually have a good understanding of the
> complex failure probabilities they present. With a 4-drive system,
> I'd recommend using RAID 5 over RAID 10.
>
> ZFS is a cool system, but Linux ZFS support has always been spotty.
> It's designed primarily for Solaris, not Linux, and licensing issues have
> prevented its incorporation into the Linux kernel. There is some support via
> FUSE (a non-kernel implementation) and a few other 3rd-party kernel modules,
> but generally Linux does not fully or optimally support ZFS. BTRFS is the
> Linux answer to ZFS, and it also provides built-in RAID-like capabilities.
> That said, BTRFS is still under heavy development, and I probably wouldn't
> yet recommend it for production use.
>
> As Wayne mentioned, there are companies that sell pre-built systems, but
> in this day and age, it's probably just as simple, and generally cheaper,
> to build your own. I think your safest, most stable, and simplest approach
> would be to build a small Linux server with 4 to 6 drives running under
> Linux MD software RAID using RAID5 or 6.
>
> Good luck!
>
> -Andy Sayler
> www.wmfo.org
> www.andysayler.com
>
> On Mon, Jun 4, 2012 at 4:48 AM, Rob Landry <[email protected]> wrote:
>
>> A station I converted to Rivendell back in 2010 has asked me to update
>> the system. In addition to moving from 1.7.2 to 2.x, they want to add more
>> disk storage.
>>
>> They're currently running a RAID 1 array consisting of two 2TB disks,
>> providing about 1.7 TB of available storage. It is set up using Linux
>> software RAID. I'm thinking of going with two systems (for redundancy),
>> each with four 2TB drives configured as a RAID 10 array, effectively
>> doubling the station's current capacity, also using software RAID.
>>
>> My question is: are there ready-made systems available in such a
>> configuration, or will I have to build my own?
>> Also, I've long been averse to playing audio files over the air from a
>> workstation through a network connection to a server, and so I'm thinking
>> of setting these new machines up as wholly self-contained Rivendell
>> workstations, keeping them synchronized using the Perl scripts I developed
>> for another station, where they have been working very well.
>>
>> Any thoughts?
>>
>> Rob
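Stepping back to the capacity trade-offs discussed upthread (2 usable drives vs. 3 on a 4-drive build, 6 usable of 8 for RAID6), here is a small sketch of the standard parity/mirror overheads; the function and the example drive counts/sizes are illustrative, not from the thread:

```python
def usable_tb(level, drives, size_tb):
    """Usable capacity for common RAID levels (simplified, ignoring
    filesystem and metadata overhead)."""
    if level == "raid5":
        return (drives - 1) * size_tb   # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb   # two drives' worth of parity
    if level == "raid10":
        return drives // 2 * size_tb    # half the drives are mirrors
    raise ValueError(level)

# The 4 x 2TB build discussed in this thread:
for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(level, 4, 2), "TB usable")
```

With 4 x 2TB drives this gives 6 TB for RAID5 versus 4 TB for either RAID6 or RAID10, and it reproduces the 18TB figure for the 8 x 3TB RAID6 servers described earlier.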
_______________________________________________
Rivendell-dev mailing list
[email protected]
http://lists.rivendellaudio.org/mailman/listinfo/rivendell-dev
