Re: [dm-devel] [RFC PATCH] fstests: Check if a fs can survive random (emulated) power loss

2018-03-02 Thread Amir Goldstein
On Thu, Mar 1, 2018 at 1:48 PM, Qu Wenruo  wrote:
>
>
> On 2018年03月01日 19:15, Amir Goldstein wrote:
>> On Thu, Mar 1, 2018 at 11:25 AM, Qu Wenruo  wrote:
>>>
>>>
>>> On 2018年03月01日 16:39, Amir Goldstein wrote:
 On Thu, Mar 1, 2018 at 7:38 AM, Qu Wenruo  wrote:
> This test case was originally designed to expose unexpected corruption
> in btrfs, where there are several reports of serious metadata
> corruption after power loss.
>
> The test case itself triggers heavy fsstress on the fs and uses
> dm-flakey to emulate power loss by dropping all later writes.

 So you are re-posting the test with dm-flakey or converting it to
 dm-log-writes??
>>>
>>> Working on the scripts to allow us to do --find and then replay.
>>>
>>> For xfs and ext4, their fsck reports false alerts just for a dirty
>>> journal.
>>>
>>> I'm adding a new macro to locate the next flush and replay to it, then
>>> mount the fs RW before we call fsck.
>>>
>>> Or do we have options for those fscks to skip a dirty journal?
>>>
>>
>> No, you are much better off doing a mount/umount before fsck.
>> Even though e2fsck can replay a journal, it does that much slower
>> than the kernel does.
>>
>> But why do you need to teach --find to find next flush?
>> You could use a helper script to run every fua with --fsck --check fua.
>> Granted, for fstests context, I agree that --find next fua may look
>> nicer, so I have no objection to this implementation.
>
> The point is, in my opinion fua is not the worst case we need to test.
> Only a flush can lead us to the worst case we really need to test.
>
> In btrfs' case, if we finished the flush but not the fua, we have a
> superblock that points to all the old trees, while all the new trees
> are already written to disk.
>
> At that flush entry, we can reach the worst-case scenario and verify
> that all the btrfs tricks work together to give a completely sane btrfs
> (even all data should be correct).
>
> This should also apply to journal-based filesystems (if I understand
> the journal correctly): even when all journal entries are written but
> the superblock is not updated, we should be completely fine.
> (Although for a journal, we may need to reach the fua entry instead of
> the flush?)
>
> And the other reason we need to find the next flush/fua manually is
> that mount writes new data, so we need to replay the whole sequence up
> to the next flush/fua.
>

OK, but Josef addressed this in his script by using dm snapshots rather
than replaying from the start each time. I guess that is why the script
is called replay-individual-faster.sh. You don't have to do the same, but
I expect the test would run faster if you learn from Josef's experience.

>
> And finally, the reason we need the manual mount is to work around
> e2fsck/xfs_repair, so that they won't report a dirty journal as an
> error. If we have extra options to disable that behavior, I'm
> completely OK with the current --check flush/fua --fsck method.
> (BTW, for my btrfs testing, --check flush --fsck is good enough to
> expose possible free space cache related problems.)
>

What I was suggesting as an alternative is --fsck ./replay-fsck-wrapper.sh,
where the wrapper script does the needed mount/umount. If you also use a
dm snapshot for the mounted volume, you can continue the replay from the
same point and don't need to replay from the start.
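A minimal sketch of such a wrapper, assuming a dm snapshot target and
hypothetical device paths (written to a file here rather than executed,
since it needs root and real block devices):

```shell
# Hypothetical replay-fsck-wrapper.sh. The dm snapshot absorbs the writes
# from mount/umount and fsck, so the replay device itself stays untouched
# and the replay can continue from the same point.
cat > replay-fsck-wrapper.sh <<'EOF'
#!/bin/bash
REPLAY_DEV=/dev/sdb1            # device replay-log is replaying onto (assumed)
COW_DEV=/dev/sdc1               # scratch device for the snapshot COW (assumed)
MNT=/mnt/snap

SIZE=$(blockdev --getsz "$REPLAY_DEV")
# Non-persistent snapshot, 8-sector chunks; all writes land in $COW_DEV.
dmsetup create replay-snap \
    --table "0 $SIZE snapshot $REPLAY_DEV $COW_DEV N 8"

# Mount/umount lets the kernel replay any dirty journal (much faster
# than letting e2fsck do it).
mount /dev/mapper/replay-snap "$MNT"
umount "$MNT"

# The journal is now clean, so any fsck complaint is a real problem.
fsck -n /dev/mapper/replay-snap
ret=$?

dmsetup remove replay-snap
exit $ret
EOF
chmod +x replay-fsck-wrapper.sh
```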

Cheers,
Amir.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

Re: [dm-devel] [RFC PATCH] fstests: Check if a fs can survive random (emulated) power loss

2018-03-01 Thread Amir Goldstein
On Thu, Mar 1, 2018 at 11:25 AM, Qu Wenruo  wrote:
>
>
> On 2018年03月01日 16:39, Amir Goldstein wrote:
>> On Thu, Mar 1, 2018 at 7:38 AM, Qu Wenruo  wrote:
>>> This test case was originally designed to expose unexpected corruption
>>> in btrfs, where there are several reports of serious metadata
>>> corruption after power loss.
>>>
>>> The test case itself triggers heavy fsstress on the fs and uses
>>> dm-flakey to emulate power loss by dropping all later writes.
>>
>> So you are re-posting the test with dm-flakey or converting it to
>> dm-log-writes??
>
> Working on the scripts to allow us to do --find and then replay.
>
> For xfs and ext4, their fsck reports false alerts just for a dirty
> journal.
>
> I'm adding a new macro to locate the next flush and replay to it, then
> mount the fs RW before we call fsck.
>
> Or do we have options for those fscks to skip a dirty journal?
>

No, you are much better off doing a mount/umount before fsck.
Even though e2fsck can replay a journal, it does that much slower
than the kernel does.

But why do you need to teach --find to find next flush?
You could use a helper script to run every fua with --fsck --check fua.
Granted, for fstests context, I agree that --find next fua may look
nicer, so I have no objection to this implementation.
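A sketch of the helper-script approach above: let replay-log (from
xfstests' src/log-writes) stop at every fua entry and invoke the wrapper
itself. Device paths and the wrapper name are assumptions; the function
is only defined here, not run against real devices.

```shell
# Assumed devices: the dm-log-writes log and the replay target.
LOGWRITES_DEV=/dev/mapper/log
REPLAY_DEV=/dev/sdb1

check_every_fua() {
    # replay-log replays the log onto $REPLAY_DEV and runs the --fsck
    # script after every entry matching --check (here, each fua).
    replay-log --log "$LOGWRITES_DEV" --replay "$REPLAY_DEV" \
        --check fua --fsck ./replay-fsck-wrapper.sh
}
```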

Thanks,
Amir.


Re: [dm-devel] [RFC PATCH] fstests: Check if a fs can survive random (emulated) power loss

2018-03-01 Thread Qu Wenruo


On 2018年03月01日 19:15, Amir Goldstein wrote:
> On Thu, Mar 1, 2018 at 11:25 AM, Qu Wenruo  wrote:
>>
>>
>> On 2018年03月01日 16:39, Amir Goldstein wrote:
>>> On Thu, Mar 1, 2018 at 7:38 AM, Qu Wenruo  wrote:
 This test case was originally designed to expose unexpected corruption
 in btrfs, where there are several reports of serious metadata
 corruption after power loss.

 The test case itself triggers heavy fsstress on the fs and uses
 dm-flakey to emulate power loss by dropping all later writes.
>>>
>>> So you are re-posting the test with dm-flakey or converting it to
>>> dm-log-writes??
>>
>> Working on the scripts to allow us to do --find and then replay.
>>
>> For xfs and ext4, their fsck reports false alerts just for a dirty
>> journal.
>>
>> I'm adding a new macro to locate the next flush and replay to it, then
>> mount the fs RW before we call fsck.
>>
>> Or do we have options for those fscks to skip a dirty journal?
>>
> 
> No, you are much better off doing a mount/umount before fsck.
> Even though e2fsck can replay a journal, it does that much slower
> than the kernel does.
> 
> But why do you need to teach --find to find next flush?
> You could use a helper script to run every fua with --fsck --check fua.
> Granted, for fstests context, I agree that --find next fua may look
> nicer, so I have no objection to this implementation.

The point is, in my opinion fua is not the worst case we need to test.
Only a flush can lead us to the worst case we really need to test.

In btrfs' case, if we finished the flush but not the fua, we have a
superblock that points to all the old trees, while all the new trees
are already written to disk.

At that flush entry, we can reach the worst-case scenario and verify
that all the btrfs tricks work together to give a completely sane btrfs
(even all data should be correct).

This should also apply to journal-based filesystems (if I understand
the journal correctly): even when all journal entries are written but
the superblock is not updated, we should be completely fine.
(Although for a journal, we may need to reach the fua entry instead of
the flush?)

And the other reason we need to find the next flush/fua manually is
that mount writes new data, so we need to replay the whole sequence up
to the next flush/fua.


And finally, the reason we need the manual mount is to work around
e2fsck/xfs_repair, so that they won't report a dirty journal as an
error. If we have extra options to disable that behavior, I'm
completely OK with the current --check flush/fua --fsck method.
(BTW, for my btrfs testing, --check flush --fsck is good enough to
expose possible free space cache related problems.)
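The flow described here (replay to the next flush, mount RW so the
kernel settles the journal, then fsck) might look roughly like this.
Devices, the mount point, and the --next-flush option are assumptions
layered on replay-log; the function is only defined, not executed.

```shell
# Assumed devices and mount point.
LOGWRITES_DEV=/dev/mapper/log
REPLAY_DEV=/dev/sdb1
MNT=/mnt/replay

check_all_flush_points() {
    local entry=0
    while :; do
        # Find the next flush entry; --next-flush is assumed to mirror
        # the tool's --next-fua option. Output may be "entry@sector",
        # so keep only the entry number.
        entry=$(replay-log --log "$LOGWRITES_DEV" --find \
            --next-flush --start-entry "$entry") || break
        entry=${entry%%@*}
        # Mount dirties $REPLAY_DEV, so replay from entry 0 each time
        # (a dm snapshot avoids this, at the cost of more setup).
        replay-log --log "$LOGWRITES_DEV" --replay "$REPLAY_DEV" \
            --limit $((entry + 1))
        # RW mount replays any dirty journal; fsck then sees a clean fs.
        mount "$REPLAY_DEV" "$MNT" && umount "$MNT"
        fsck -n "$REPLAY_DEV" || return 1
        entry=$((entry + 1))
    done
}
```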

Thanks,
Qu

> 
> Thanks,
> Amir.
> 




Re: [dm-devel] [RFC PATCH] fstests: Check if a fs can survive random (emulated) power loss

2018-03-01 Thread Qu Wenruo


On 2018年03月01日 16:39, Amir Goldstein wrote:
> On Thu, Mar 1, 2018 at 7:38 AM, Qu Wenruo  wrote:
>> This test case was originally designed to expose unexpected corruption
>> in btrfs, where there are several reports of serious metadata
>> corruption after power loss.
>>
>> The test case itself triggers heavy fsstress on the fs and uses
>> dm-flakey to emulate power loss by dropping all later writes.
> 
> So you are re-posting the test with dm-flakey or converting it to
> dm-log-writes??

Sorry, I only just noticed the date.

I was formatting the patch from the wrong place, with old patches.
Please ignore this patch.

Thanks,
Qu

> 
>>
>> For btrfs, it should be completely fine, as long as superblock write
>> (FUA write) finishes atomically, since with metadata CoW, superblock
>> either points to old trees or new trees, the fs should be as atomic as
>> superblock.
>>
>> For journal based filesystems, each metadata update should be journaled,
>> so metadata operation is as atomic as journal updates.
>>
>> It does show that XFS is doing the best work among the tested
>> filesystems (Btrfs, XFS, ext4), with no kernel or xfs_repair problems at all.
>>
>> For btrfs, although btrfs check doesn't report any problem, kernel
>> reports some data checksum error, which is a little unexpected as data
>> is CoWed by default, which should be as atomic as superblock.
>> (Unfortunately, still not the exact problem I'm chasing for)
>>
>> For EXT4, the kernel is fine, but later e2fsck reports problems, which
>> may indicate there is still something to be improved.
>>
>> Signed-off-by: Qu Wenruo 
>> ---
>>  tests/generic/479 | 109 
>> ++
>>  tests/generic/479.out |   2 +
>>  tests/generic/group   |   1 +
>>  3 files changed, 112 insertions(+)
>>  create mode 100755 tests/generic/479
>>  create mode 100644 tests/generic/479.out
>>
>> diff --git a/tests/generic/479 b/tests/generic/479
>> new file mode 100755
>> index ..ab530231
>> --- /dev/null
>> +++ b/tests/generic/479
>> @@ -0,0 +1,109 @@
>> +#! /bin/bash
>> +# FS QA Test 479
>> +#
>> +# Test if a filesystem can survive emulated powerloss.
>> +#
>> +# No matter what the solution a filesystem uses (journal or CoW),
>> +# it should survive unexpected powerloss, without major metadata
>> +# corruption.
>> +#
>> +#---
>> +# Copyright (c) 2018 SuSE.  All Rights Reserved.
>> +#
>> +# This program is free software; you can redistribute it and/or
>> +# modify it under the terms of the GNU General Public License as
>> +# published by the Free Software Foundation.
>> +#
>> +# This program is distributed in the hope that it would be useful,
>> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
>> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> +# GNU General Public License for more details.
>> +#
>> +# You should have received a copy of the GNU General Public License
>> +# along with this program; if not, write the Free Software Foundation,
>> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
>> +#---
>> +#
>> +
>> +seq=`basename $0`
>> +seqres=$RESULT_DIR/$seq
>> +echo "QA output created by $seq"
>> +
>> +here=`pwd`
>> +tmp=/tmp/$$
>> +status=1   # failure is the default!
>> +trap "_cleanup; exit \$status" 0 1 2 3 15
>> +
>> +_cleanup()
>> +{
>> +   ps -e | grep fsstress > /dev/null 2>&1
>> +   while [ $? -eq 0 ]; do
>> +   $KILLALL_PROG -KILL fsstress > /dev/null 2>&1
>> +   wait > /dev/null 2>&1
>> +   ps -e | grep fsstress > /dev/null 2>&1
>> +   done
>> +   _unmount_flakey &> /dev/null
>> +   _cleanup_flakey
>> +   cd /
>> +   rm -f $tmp.*
>> +}
>> +
>> +# get standard environment, filters and checks
>> +. ./common/rc
>> +. ./common/filter
>> +. ./common/dmflakey
>> +
>> +# remove previous $seqres.full before test
>> +rm -f $seqres.full
>> +
>> +# real QA test starts here
>> +
>> +# Modify as appropriate.
>> +_supported_fs generic
>> +_supported_os Linux
>> +_require_scratch
>> +_require_dm_target flakey
>> +_require_command "$KILLALL_PROG" "killall"
>> +
>> +runtime=$(($TIME_FACTOR * 15))
>> +loops=$(($LOAD_FACTOR * 4))
>> +
>> +for i in $(seq -w $loops); do
>> +   echo "=== Loop $i: $(date) ===" >> $seqres.full
>> +
>> +   _scratch_mkfs >/dev/null 2>&1
>> +   _init_flakey
>> +   _mount_flakey
>> +
>> +   ($FSSTRESS_PROG $FSSTRESS_AVOID -w -d $SCRATCH_MNT -n 100 \
>> +   -p 100 >> $seqres.full &) > /dev/null 2>&1
>> +
>> +   sleep $runtime
>> +
>> +   # Here we only want to drop all writes; no need to umount the fs
>> +   _load_flakey_table $FLAKEY_DROP_WRITES
>> +
>> +   ps -e | grep fsstress > /dev/null 2>&1
>> +   while [ $? -eq 0 ]; do
>> +   $KILLALL_PROG -KILL fsstress > /dev/null 2>&1
>

Re: [dm-devel] [RFC PATCH] fstests: Check if a fs can survive random (emulated) power loss

2018-03-01 Thread Qu Wenruo


On 2018年03月01日 16:39, Amir Goldstein wrote:
> On Thu, Mar 1, 2018 at 7:38 AM, Qu Wenruo  wrote:
>> This test case was originally designed to expose unexpected corruption
>> in btrfs, where there are several reports of serious metadata
>> corruption after power loss.
>>
>> The test case itself triggers heavy fsstress on the fs and uses
>> dm-flakey to emulate power loss by dropping all later writes.
> 
> So you are re-posting the test with dm-flakey or converting it to
> dm-log-writes??

Working on the scripts to allow us to do --find and then replay.

For xfs and ext4, their fsck reports false alerts just for a dirty
journal.

I'm adding a new macro to locate the next flush and replay to it, then
mount the fs RW before we call fsck.

Or do we have options for those fscks to skip a dirty journal?

Thanks,
Qu

> 
>>
>> For btrfs, it should be completely fine, as long as superblock write
>> (FUA write) finishes atomically, since with metadata CoW, superblock
>> either points to old trees or new trees, the fs should be as atomic as
>> superblock.
>>
>> For journal based filesystems, each metadata update should be journaled,
>> so metadata operation is as atomic as journal updates.
>>
>> It does show that XFS is doing the best work among the tested
>> filesystems (Btrfs, XFS, ext4), with no kernel or xfs_repair problems at all.
>>
>> For btrfs, although btrfs check doesn't report any problem, kernel
>> reports some data checksum error, which is a little unexpected as data
>> is CoWed by default, which should be as atomic as superblock.
>> (Unfortunately, still not the exact problem I'm chasing for)
>>
>> For EXT4, the kernel is fine, but later e2fsck reports problems, which
>> may indicate there is still something to be improved.
>>
>> Signed-off-by: Qu Wenruo 
>> ---
>>  tests/generic/479 | 109 
>> ++
>>  tests/generic/479.out |   2 +
>>  tests/generic/group   |   1 +
>>  3 files changed, 112 insertions(+)
>>  create mode 100755 tests/generic/479
>>  create mode 100644 tests/generic/479.out
>>
>> diff --git a/tests/generic/479 b/tests/generic/479
>> new file mode 100755
>> index ..ab530231
>> --- /dev/null
>> +++ b/tests/generic/479
>> @@ -0,0 +1,109 @@
>> +#! /bin/bash
>> +# FS QA Test 479
>> +#
>> +# Test if a filesystem can survive emulated powerloss.
>> +#
>> +# No matter what the solution a filesystem uses (journal or CoW),
>> +# it should survive unexpected powerloss, without major metadata
>> +# corruption.
>> +#
>> +#---
>> +# Copyright (c) 2018 SuSE.  All Rights Reserved.
>> +#
>> +# This program is free software; you can redistribute it and/or
>> +# modify it under the terms of the GNU General Public License as
>> +# published by the Free Software Foundation.
>> +#
>> +# This program is distributed in the hope that it would be useful,
>> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
>> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> +# GNU General Public License for more details.
>> +#
>> +# You should have received a copy of the GNU General Public License
>> +# along with this program; if not, write the Free Software Foundation,
>> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
>> +#---
>> +#
>> +
>> +seq=`basename $0`
>> +seqres=$RESULT_DIR/$seq
>> +echo "QA output created by $seq"
>> +
>> +here=`pwd`
>> +tmp=/tmp/$$
>> +status=1   # failure is the default!
>> +trap "_cleanup; exit \$status" 0 1 2 3 15
>> +
>> +_cleanup()
>> +{
>> +   ps -e | grep fsstress > /dev/null 2>&1
>> +   while [ $? -eq 0 ]; do
>> +   $KILLALL_PROG -KILL fsstress > /dev/null 2>&1
>> +   wait > /dev/null 2>&1
>> +   ps -e | grep fsstress > /dev/null 2>&1
>> +   done
>> +   _unmount_flakey &> /dev/null
>> +   _cleanup_flakey
>> +   cd /
>> +   rm -f $tmp.*
>> +}
>> +
>> +# get standard environment, filters and checks
>> +. ./common/rc
>> +. ./common/filter
>> +. ./common/dmflakey
>> +
>> +# remove previous $seqres.full before test
>> +rm -f $seqres.full
>> +
>> +# real QA test starts here
>> +
>> +# Modify as appropriate.
>> +_supported_fs generic
>> +_supported_os Linux
>> +_require_scratch
>> +_require_dm_target flakey
>> +_require_command "$KILLALL_PROG" "killall"
>> +
>> +runtime=$(($TIME_FACTOR * 15))
>> +loops=$(($LOAD_FACTOR * 4))
>> +
>> +for i in $(seq -w $loops); do
>> +   echo "=== Loop $i: $(date) ===" >> $seqres.full
>> +
>> +   _scratch_mkfs >/dev/null 2>&1
>> +   _init_flakey
>> +   _mount_flakey
>> +
>> +   ($FSSTRESS_PROG $FSSTRESS_AVOID -w -d $SCRATCH_MNT -n 100 \
>> +   -p 100 >> $seqres.full &) > /dev/null 2>&1
>> +
>> +   sleep $runtime
>> +
>> +   # Here we only want to drop all writes; no need to umount the fs
>> +   _load_flakey_ta

Re: [dm-devel] [RFC PATCH] fstests: Check if a fs can survive random (emulated) power loss

2018-03-01 Thread Amir Goldstein
On Thu, Mar 1, 2018 at 7:38 AM, Qu Wenruo  wrote:
> This test case was originally designed to expose unexpected corruption
> in btrfs, where there are several reports of serious metadata
> corruption after power loss.
>
> The test case itself triggers heavy fsstress on the fs and uses
> dm-flakey to emulate power loss by dropping all later writes.

So you are re-posting the test with dm-flakey or converting it to
dm-log-writes??

>
> For btrfs, it should be completely fine, as long as superblock write
> (FUA write) finishes atomically, since with metadata CoW, superblock
> either points to old trees or new trees, the fs should be as atomic as
> superblock.
>
> For journal based filesystems, each metadata update should be journaled,
> so metadata operation is as atomic as journal updates.
>
> It does show that XFS is doing the best work among the tested
> filesystems (Btrfs, XFS, ext4), with no kernel or xfs_repair problems at all.
>
> For btrfs, although btrfs check doesn't report any problem, kernel
> reports some data checksum error, which is a little unexpected as data
> is CoWed by default, which should be as atomic as superblock.
> (Unfortunately, still not the exact problem I'm chasing for)
>
> For EXT4, the kernel is fine, but later e2fsck reports problems, which
> may indicate there is still something to be improved.
>
> Signed-off-by: Qu Wenruo 
> ---
>  tests/generic/479 | 109 
> ++
>  tests/generic/479.out |   2 +
>  tests/generic/group   |   1 +
>  3 files changed, 112 insertions(+)
>  create mode 100755 tests/generic/479
>  create mode 100644 tests/generic/479.out
>
> diff --git a/tests/generic/479 b/tests/generic/479
> new file mode 100755
> index ..ab530231
> --- /dev/null
> +++ b/tests/generic/479
> @@ -0,0 +1,109 @@
> +#! /bin/bash
> +# FS QA Test 479
> +#
> +# Test if a filesystem can survive emulated powerloss.
> +#
> +# No matter what the solution a filesystem uses (journal or CoW),
> +# it should survive unexpected powerloss, without major metadata
> +# corruption.
> +#
> +#---
> +# Copyright (c) 2018 SuSE.  All Rights Reserved.
> +#
> +# This program is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU General Public License as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it would be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write the Free Software Foundation,
> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> +#---
> +#
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1   # failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +   ps -e | grep fsstress > /dev/null 2>&1
> +   while [ $? -eq 0 ]; do
> +   $KILLALL_PROG -KILL fsstress > /dev/null 2>&1
> +   wait > /dev/null 2>&1
> +   ps -e | grep fsstress > /dev/null 2>&1
> +   done
> +   _unmount_flakey &> /dev/null
> +   _cleanup_flakey
> +   cd /
> +   rm -f $tmp.*
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +. ./common/dmflakey
> +
> +# remove previous $seqres.full before test
> +rm -f $seqres.full
> +
> +# real QA test starts here
> +
> +# Modify as appropriate.
> +_supported_fs generic
> +_supported_os Linux
> +_require_scratch
> +_require_dm_target flakey
> +_require_command "$KILLALL_PROG" "killall"
> +
> +runtime=$(($TIME_FACTOR * 15))
> +loops=$(($LOAD_FACTOR * 4))
> +
> +for i in $(seq -w $loops); do
> +   echo "=== Loop $i: $(date) ===" >> $seqres.full
> +
> +   _scratch_mkfs >/dev/null 2>&1
> +   _init_flakey
> +   _mount_flakey
> +
> +   ($FSSTRESS_PROG $FSSTRESS_AVOID -w -d $SCRATCH_MNT -n 100 \
> +   -p 100 >> $seqres.full &) > /dev/null 2>&1
> +
> +   sleep $runtime
> +
> +   # Here we only want to drop all writes; no need to umount the fs
> +   _load_flakey_table $FLAKEY_DROP_WRITES
> +
> +   ps -e | grep fsstress > /dev/null 2>&1
> +   while [ $? -eq 0 ]; do
> +   $KILLALL_PROG -KILL fsstress > /dev/null 2>&1
> +   wait > /dev/null 2>&1
> +   ps -e | grep fsstress > /dev/null 2>&1
> +   done
> +
> +   _unmount_flakey
> +   _cleanup_flakey
> +
> +   # Mount the fs to do proper log replay for journal based fs
> +   # so later check won't report annoying dirty log and only
> +   # report real