On Tue, 11 Oct 2016 17:58:22 -0600
Chris Murphy <li...@colorremedies.com> wrote:

> But consider the identical scenario with md or LVM raid5, or any
> conventional hardware raid5. A scrub check simply reports a mismatch.
> It's unknown whether data or parity is bad, so the bad data strip is
> propagated upward to user space without error. On a scrub repair, the
> data strip is assumed to be good, and good parity is overwritten with
> bad.

That's why I love to use Btrfs on top of mdadm RAID5/6 -- combining a mature
and stable RAID implementation with Btrfs's checksumming as an anti-corruption
"watchdog". In the case you described, no silent corruption will occur:
Btrfs will report an uncorrectable read error, and I can simply restore the
file in question from backups.
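Roughly, the kind of setup I mean (just a sketch -- the member disks,
/dev/md0 and the mount point are placeholders, adjust to your own layout):

  # hypothetical three-disk RAID5 array; btrfs sees it as one device
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  mkfs.btrfs -d single -m dup /dev/md0
  mount /dev/md0 /mnt

  # btrfs scrub verifies data checksums and reports uncorrectable errors
  btrfs scrub start -B /mnt

  # the md-level "check" only counts parity mismatches; it cannot tell
  # which copy is the bad one, which is exactly the problem Chris described
  echo check > /sys/block/md0/md/sync_action
  cat /sys/block/md0/md/mismatch_cnt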


On Wed, 12 Oct 2016 00:37:19 -0400
Zygo Blaxell <ce3g8...@umail.furryterror.org> wrote:

> A btrfs -dsingle -mdup array on a mdadm raid[56] device might have a
> snowball's chance in hell of surviving a disk failure on a live array
> with only data losses.  This would work if mdadm and btrfs successfully
> arrange to have each dup copy of metadata updated separately, and one
> of the copies survives the raid5 write hole.  I've never tested this
> configuration, and I'd test the heck out of it before considering
> using it.

Not sure what you mean here: a non-fatal disk failure (i.e. one that is fully
compensated by the redundancy) is invisible to the upper layers on mdadm arrays.
They do not need to "arrange" anything; on such a failure, from the point of view
of Btrfs nothing whatsoever has happened to the /dev/mdX block device -- it is
still perfectly and correctly readable and writable.
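You can see this for yourself on a test array (again only a sketch, with
placeholder device names):

  # fail and remove one member of the hypothetical array from the sketch above
  mdadm /dev/md0 --fail /dev/sdc
  mdadm /dev/md0 --remove /dev/sdc
  cat /proc/mdstat       # array is degraded, but still fully readable/writable

  # from Btrfs's point of view /dev/md0 is unchanged; a scrub should come back clean
  btrfs scrub start -B /mnt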

-- 
With respect,
Roman
