On Sat, May 17, 2014 at 3:53 AM, Mick <michaelkintz...@gmail.com> wrote:
> I am not clear on one thing:  is the corruption that you show above *because*
> of btrfs, or it would occur silently with any other fs, like e.g. ext4?

That is something I'm curious about as well, having stumbled on this
thread.  I've been running btrfs on a 5-drive array set to raid1 for
both data and metadata for several months now, and I've yet to see a
single error in my weekly scrubs.  This is on a system that is up
24x7, running mysql, mythtv, postfix, and a daily rsync backup -
basically light disk activity at all times, and heavy activity
moderately often.  The only issues I've had with btrfs are ENOSPC when
it manages to allocate all of its chunks (more of a problem on a
smaller ssd running btrfs for /), and panics when I try to remove
several snapshots at once.

I'm not sure how easy it would be to test for silent corruption on
another fs, unless you tried using ZFS instead, or used Tripwire or
some other integrity checker.  Testing the drive itself would be
straightforward if you didn't need to use it in any kind of production
capacity - write known patterns to it and try to read them back in a
few days.
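A minimal sketch of that write-patterns-and-verify idea, in Python.
All the names, paths, and block sizes here are my own invented
examples, not anything from a real test tool - the point is just
writing deterministic blocks, recording their checksums, and
re-reading later to spot silent bit rot:

```python
# Sketch: write seeded deterministic blocks to a spare (NON-production)
# file or device, record SHA-256 digests, and verify them later.
import hashlib


def write_patterns(path, num_blocks=4, block_size=1 << 20, seed=b"pattern-test"):
    """Write num_blocks deterministic blocks; return their SHA-256 digests."""
    digests = []
    with open(path, "wb") as f:
        for i in range(num_blocks):
            # Per-block deterministic filler derived from seed + block index.
            chunk = hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
            block = (chunk * (block_size // len(chunk) + 1))[:block_size]
            f.write(block)
            digests.append(hashlib.sha256(block).hexdigest())
    return digests


def verify_patterns(path, digests, block_size=1 << 20):
    """Re-read each block; return indices of blocks whose digest changed."""
    bad = []
    with open(path, "rb") as f:
        for i, expected in enumerate(digests):
            block = f.read(block_size)
            if hashlib.sha256(block).hexdigest() != expected:
                bad.append(i)
    return bad
```

Run write_patterns once, save the digest list somewhere off the drive
under test, and run verify_patterns a few days later; any index it
returns is a block that came back different from what was written.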

Rich
