On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling <richard.ell...@gmail.com> wrote:

> On Aug 21, 2009, at 3:34 PM, Tim Cook wrote:
>
>  On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker <rswwal...@gmail.com> wrote:
>> On Aug 21, 2009, at 5:46 PM, Ron Mexico <no-re...@opensolaris.org> wrote:
>>
>> I'm in the process of setting up a NAS for my company. It's going to be
>> based on Open Solaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs.
>> Each HBA will be connected to a 24 bay Supermicro JBOD chassis. Each chassis
>> will have 12 drives to start out with, giving us room for expansion as
>> needed.
>>
>> Ideally, I'd like to have a mirror of a raidz2 setup, but from the
>> documentation I've read, it looks like I can't do that, and that a stripe of
>> mirrors is the only way to accomplish this.
>>
>> Why?
>>
>> Because some people are paranoid.
>>
>
> cue the Kinks Destroyer :-)
>
>  It uses as many drives as a RAID10, but you lose 1 more drive of usable
>> space than RAID10 and you get less than half the performance.
>>
>> And far more protection.
>>
>
> Yes. With raidz3 even more :-)
> I put together a spreadsheet a while back to help folks make this sort
> of decision.
> http://blogs.sun.com/relling/entry/sample_raidoptimizer_output
>
> I didn't put the outputs for RAID-5+1, but RAIDoptimizer can calculate it.
> It won't calculate raidz+1 because there is no such option.  If there is
> some demand, I can put together a normal RAID (LVM or array) output of
> similar construction.


Good point as well.  I completely spaced on the fact that raidz3 was added not
so long ago.  I don't think it's made it into any officially supported build
yet though, has it?



>
>
>  You might be thinking of a RAID50 which would be multiple raidz vdevs in a
>> zpool, or striped RAID5s.
>>
>> If not then stick with multiple mirror vdevs in a zpool (RAID10).
>>
>> -Ross
>>
>
> My vote is with Ross. KISS wins :-)
> Disclaimer: I'm also a member of BAARF.



My point is, RAIDZx+1 SHOULD be simple.  I don't entirely understand why it
hasn't been implemented.  I can only imagine that, like so many other things,
it's because there hasn't been significant customer demand.  That's unfortunate
if it's as simple as I believe it is to implement.  (No, don't ask me to do it;
I put in my time programming in college and have no desire to do it again :))
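
For what it's worth, here is a rough sketch of how the layouts under discussion
compare on usable space and on the smallest set of drive failures that can take
out the pool.  It's plain Python, the drive counts assume the 24-drive setup
described above, and the layout names/functions are just mine for illustration:

# Back-of-the-envelope comparison of the pool layouts discussed in this thread.
# Assumes 24 identical drives split 12 + 12 across the two JBODs; the numbers
# are illustrative only and ignore hot spares, slog devices, etc.

DRIVES = 24

def striped_mirrors(n):
    """RAID10-style: n/2 two-way mirror vdevs striped together."""
    usable = n // 2
    min_fatal = 2          # the "wrong two" drives: both halves of one mirror
    return usable, min_fatal

def striped_raidz2(n, per_vdev=12):
    """RAID60-style: one raidz2 vdev per JBOD, striped together."""
    vdevs = n // per_vdev
    usable = vdevs * (per_vdev - 2)
    min_fatal = 3          # any one raidz2 vdev dies on its third failure
    return usable, min_fatal

def mirrored_raidz2(n, per_side=12):
    """Hypothetical raidz2+1: two raidz2 vdevs mirrored (not supported by ZFS)."""
    usable = per_side - 2
    min_fatal = 3 + 3      # both sides would each have to lose three drives
    return usable, min_fatal

for name, layout in [("striped mirrors", striped_mirrors),
                     ("striped raidz2 ", striped_raidz2),
                     ("raidz2+1       ", mirrored_raidz2)]:
    usable, fatal = layout(DRIVES)
    print(f"{name}: {usable} drives usable, "
          f"smallest fatal failure set = {fatal} drives")

On those assumptions the hypothetical raidz2+1 gives up capacity, but it takes
six failures in exactly the wrong places before the pool is gone, while striped
mirrors can die on two.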




>
>
>  RAID10 won't provide as much protection.  Raidz2+1, you can lose any 4
>> drives, and up to 14 if it's the right 14.  RAID10, if you lose the wrong
>> two drives, you're done.
>>
>
> One of the reasons I wrote RAIDoptimizer is to help people get a
> handle on the math behind this.  You can see some of that orientation
> in my other blogs on MTTDL. But at the end of the day, you can get a
> pretty good ballpark by saying every level of parity adds about 3 orders
> of magnitude to the MTTDL. No parity is always a loss.  Single parity
> is better. Double parity even better. Eventually, common-cause problems
> dominate.
>  -- richard
>
>
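
Richard's "about three orders of magnitude per level of parity" point is easy
to see with the simple textbook MTTDL model.  A quick sketch, where the
per-drive MTBF and resilver time are assumed numbers and the model deliberately
ignores the unrecoverable-read-error and common-cause failures he mentions:

# Textbook MTTDL approximation for a single n-drive group that tolerates
# `parity` simultaneous failures.  Inputs are illustrative assumptions.

MTBF = 1_000_000   # hours, assumed per-drive mean time between failures
MTTR = 24          # hours, assumed time to resilver a failed drive
N = 12             # drives per raidz vdev (one JBOD in the setup above)

def mttdl(n, parity, mtbf=MTBF, mttr=MTTR):
    result = mtbf / n
    for i in range(1, parity + 1):
        # each extra parity level multiplies MTTDL by roughly mtbf / ((n - i) * mttr)
        result *= mtbf / ((n - i) * mttr)
    return result

for parity, name in enumerate(["stripe (no parity)", "raidz1", "raidz2", "raidz3"]):
    hours = mttdl(N, parity)
    print(f"{name:>18}: MTTDL ~ {hours:.2e} hours ({hours / 8760:.2e} years)")

With those inputs each parity level multiplies MTTDL by roughly
MTBF / ((N - parity) * MTTR), which works out to a few thousand, i.e. a bit
over three orders of magnitude, until the common-cause terms the model leaves
out start to dominate.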
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
