I've used it from 3.8-something to current, and it does not handle drive
failure well at all, which is the whole point of parity raid. I had a
10-disk raid6 array on 4.1.1, and a drive failure put the filesystem in
an irrecoverable state.  Scrub speeds are also an order of magnitude or
more slower in my experience.  The issue isn't filesystem read/write
performance; it's maintenance and operation.
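For context, the scrubs in question were just the standard per-mountpoint
operation; a minimal sketch (the mount point is a placeholder, and the
commands are echoed so they can be reviewed before running for real as
root):

```shell
# Starting and checking a btrfs scrub; /mnt/array is a placeholder.
# Drop the "echo" to actually execute as root.
MNT=/mnt/array

echo btrfs scrub start "$MNT"     # kicks off a scrub in the background
echo btrfs scrub status "$MNT"    # progress and error counts so far
```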

That 10-drive system was rebuilt as raid10 and I haven't had problems
since; it has handled HDD problems reasonably well.

I finally moved away from raid56 yesterday because of the time it took
to scrub.  This was a 4x3TB raid6 array that I only used for backups.
I attempted to just rebalance into raid10, but partway through the
balance the filesystem ran into problems and was forced read-only.  I
tried some things to recover from that, but ultimately just wiped it
out and recreated it as raid10.  I suspect one of the drives may be
failing, so I'm running tests on it now.
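For anyone attempting the same migration, the two paths look roughly
like this.  Mount point and device names are placeholders for my setup;
the commands are echoed so the sketch can be reviewed first -- drop the
"echo" and run as root to actually execute them.

```shell
# Sketch of the raid6 -> raid10 migration described above.
# /mnt/backup and /dev/sd* are placeholders.
MNT=/mnt/backup

# In-place conversion via balance (the step that failed for me):
echo btrfs balance start -dconvert=raid10 -mconvert=raid10 "$MNT"
echo btrfs balance status "$MNT"

# What I ended up doing instead: recreate from scratch and restore
# from backups.
echo mkfs.btrfs -f -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```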

Personally, I would still recommend ZFS on illumos in production,
because it's nearly unshakeable and the creative things you can do to
deal with problems are pretty remarkable.  The unfortunate reality,
though, is that over time your system will probably grow and expand,
and ZFS is very locked into its original configuration.  Adding vdevs
is a poor solution, IMO.
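To illustrate what I mean by adding vdevs (pool and device names below
are placeholders): once a pool is built, the main way to grow it is to
bolt on another complete top-level vdev, which you're then stuck with.
Echoed here rather than executed, so the sketch can be reviewed:

```shell
# Growing a zpool by attaching a second raidz2 vdev; "tank" and the
# c0t*d0 device names are placeholders.  Drop "echo" to run as root.
POOL=tank

echo zpool add "$POOL" raidz2 c0t4d0 c0t5d0 c0t6d0 c0t7d0
echo zpool status "$POOL"
```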

On Wed, Oct 14, 2015 at 3:34 PM, Lionel Bouton
<[email protected]> wrote:
> On 14/10/2015 22:23, Donald Pearson wrote:
>> I would not use Raid56 in production.  I've tried using it a few
>> different ways but have run into trouble with stability and
>> performance.  Raid10 has been working excellently for me.
>
> Hi, could you elaborate on the stability and performance problems you
> had? Which kernels were you using at the time you were testing?
>
> I'm interested because I have some RAID10 installations of 7 disks which
> don't need much write performance (large backup servers with few clients
> and few updates but very large datasets) that I plan to migrate to RAID6
> when they approach their storage capacity (at least theoretically with 7
> disks this will give better read performance and better protection
> against disk failures). 3.19 brought full RAID5/6 support, and from
> what I remember there were some initial quirks, but I'm unaware of any
> big RAID5/6 problem in 4.1+ kernels.
>
> Best regards,
>
> Lionel
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html