On Sun, Aug 09, 2015 at 06:45:58PM +0800, Eryu Guan wrote:
> On Fri, Aug 07, 2015 at 08:21:27AM +1000, Dave Chinner wrote:
> > Seeing as you can reproduce the problem, I encourage you to work out
> > what the minimum number of files need to reproduce the problem is,
> > and update the test to use that so that it runs even faster...
>
> I found that 50000 files per thread are enough for me to reproduce
> the fs corruption, and sometimes WARNINGs. With 20000 or 30000 files
> per thread, only 20% to 33% of runs hit any problem. So this is what
> I'm testing (comments not updated yet):
>
> [root@dhcp-66-87-213 xfstests]# git diff
> diff --git a/tests/generic/038 b/tests/generic/038
> index 3c94a3b..7564c87 100755
> --- a/tests/generic/038
> +++ b/tests/generic/038
> @@ -108,6 +108,7 @@ trim_loop()
> #
> > # Creating 400,000 files sequentially is really slow, so speed it up a bit
> # by doing it concurrently with 4 threads in 4 separate directories.
> +nr_files=$((50000 * LOAD_FACTOR))
> create_files()
> {
> local prefix=$1
> @@ -115,7 +116,7 @@ create_files()
> for ((n = 0; n < 4; n++)); do
> mkdir $SCRATCH_MNT/$n
> (
> - for ((i = 1; i <= 100000; i++)); do
> + for ((i = 1; i <= $nr_files; i++)); do
> $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 3900" \
> $SCRATCH_MNT/$n/"${prefix}_$i" &> /dev/null
> if [ $? -ne 0 ]; then
>
> Would you like a follow-up patch from me, or can you just make this one a v2?

OK, I'll fold that into my original patch, update the comment, and add
this note to the commit message:
[Eryu Guan: reduced the number of files to the minimum needed to
reproduce the btrfs problem reliably, and added $LOAD_FACTOR scaling
for longer runs.]
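
For reference, the folded result should end up looking something like
this (just a sketch with the comment updated to match; the elided loop
body and error handling stay as they are in the current test):

# Creating 200,000 files (4 threads x 50,000 files each) sequentially
# is really slow, so speed it up a bit by doing it concurrently with 4
# threads in 4 separate directories. $LOAD_FACTOR scales the per-thread
# file count up for longer runs.
nr_files=$((50000 * LOAD_FACTOR))
create_files()
{
	local prefix=$1

	for ((n = 0; n < 4; n++)); do
		mkdir $SCRATCH_MNT/$n
		(
			for ((i = 1; i <= $nr_files; i++)); do
				$XFS_IO_PROG -f -c "pwrite -S 0xaa 0 3900" \
					$SCRATCH_MNT/$n/"${prefix}_$i" &> /dev/null
				# ... rest of the loop body unchanged
			done
		) &
	done
	wait
}

If anyone wants to crank the load up further, LOAD_FACTOR should be
picked up from the environment as usual, e.g.:

	LOAD_FACTOR=2 ./check generic/038

which would bump it to 100,000 files per thread.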
Cheers,
Dave.
--
Dave Chinner
[email protected]