On Tue, Nov 10, 2009 at 4:06 PM, tsuraan <[email protected]> wrote:
>> 3/ The md-raid6 recovery code assumes that there is always at least
>> two good blocks to perform recovery.  That makes the current minimum
>> number of raid6 members 4, not 3.  (small nit the btrfs code calls
>> members 'stripes', in md a stripe of data is a collection of blocks
>> from all members).
>
> Why would you use RAID6 on three drives instead of mirroring across
> all of them?  I agree it's an artificial limitation, but would anybody
> use a RAID6 with fewer than 4 drives?

Here is some text I wrote on a local Linux users' group list a few months
ago, in a thread discussing the cost/reliability trade-off on small arrays.
(It doesn't seem to be in a public archive.)


Let's also consider another configuration:
Raid 0: 4 * 1TB WD RE3s = $640; 4TB; $0.160/GB

The WD1002FBYS (1TB WD RE3) has a spec MTBF of 1.2 million hours. Let's
assume a mean time to replace for each drive of 72 hours; I think
that's a reasonably prompt response for a disk at home.

Raid 0
1.2million_hrs / 4 = 300,000 hrs ≈ 34.22 yrs MTBF
$4.675/TB/MTBF_YEAR

Raid 5
1.2million_hrs * (1.2million_hrs / (4*3*72)) ≈ 190,128 yrs MTBF
$0.00112/TB/MTBF_YEAR

Raid 0+1
(1.2million_hrs * 1.2million_hrs / (2*72)) / 2 ≈ 570,386 yrs MTBF
$0.00056102/TB/MTBF_YEAR

Raid 6
1.2million_hrs * 1.2million_hrs * 1.2million_hrs / (4*3*2*72*72)
≈ 1,584,404,390 yrs MTBF
$0.00000020/TB/MTBF_YEAR

(Hours are converted to years at 8766 hrs/yr, and the cost figures
divide the $640 array cost by each layout's usable capacity: 4TB, 3TB,
2TB, and 2TB respectively.)
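The arithmetic above can be sketched in Python as a sanity check. This is
a rough model, not a rigorous reliability analysis: it assumes independent
drive failures, the 72-hour replacement window above, 8766 hours per year,
and the $640 / usable-TB costing; the variable names are mine.

```python
# Back-of-envelope MTBF model for 4 drives with per-drive MTBF of
# 1.2 million hours and a 72-hour mean time to replace (MTTR).
MTBF = 1.2e6            # per-drive MTBF, hours
MTTR = 72.0             # mean time to replace a failed drive, hours
HOURS_PER_YEAR = 8766.0 # 365.25 days
COST = 640.0            # whole-array cost, dollars

# RAID 0: any one of the 4 drives failing loses the array.
raid0_h = MTBF / 4

# RAID 5: a first failure (any of 4 drives), then a second failure
# among the remaining 3 within the 72-hour replacement window.
raid5_h = MTBF * (MTBF / (4 * 3 * MTTR))

# RAID 0+1 (two mirrored pairs): a pair is lost when the surviving
# mirror dies within the repair window; two pairs halve the MTBF.
raid01_h = (MTBF * MTBF / (2 * MTTR)) / 2

# RAID 6: three failures within overlapping 72-hour repair windows.
raid6_h = MTBF ** 3 / (4 * 3 * 2 * MTTR * MTTR)

for name, hours, usable_tb in [("raid0",   raid0_h,  4),
                               ("raid5",   raid5_h,  3),
                               ("raid0+1", raid01_h, 2),
                               ("raid6",   raid6_h,  2)]:
    years = hours / HOURS_PER_YEAR
    dollars = COST / usable_tb / years
    print(f"{name:7s} {years:16,.0f} yrs MTBF  ${dollars:.8f}/TB/MTBF-yr")
```

Running it reproduces the figures quoted above (34.22 years for raid0,
~190 thousand for raid5, ~570 thousand for raid0+1, ~1.58 billion for
raid6), up to rounding in the last digits.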
