Hi all,
I just noticed a mismatch between statfs.f_bfree and statfs.f_bavail, i.e.
(squeeze)fslab2:~# ./statfs /data/fhgfs/storage1/
/data/fhgfs/storage1/: avail: 3162112 free: 801586610176
(with
uint64_t avail = statbuf.f_bavail * statbuf.f_bsize;
uint64_t free = statbuf.f_bfree * statbuf.f_bsize;
)
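For reference, the test program boils down to this (a minimal sketch reconstructed around the two lines above; the usage check and error handling are mine, not from the original):

#include <stdio.h>
#include <stdint.h>
#include <sys/statfs.h>

int main(int argc, char **argv)
{
	struct statfs statbuf;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <path>\n", argv[0]);
		return 1;
	}
	if (statfs(argv[1], &statbuf) < 0) {
		perror("statfs");
		return 1;
	}

	/* both counters are in units of f_bsize blocks; f_bavail is the
	 * space available to unprivileged users, f_bfree the total free
	 * space - normally they differ only by reserved blocks, which is
	 * why a gap this large looks like a bug */
	uint64_t avail = (uint64_t)statbuf.f_bavail * statbuf.f_bsize;
	uint64_t free  = (uint64_t)statbuf.f_bfree  * statbuf.f_bsize;

	printf("%s: avail: %llu free: %llu\n", argv[1],
	       (unsigned long long)avail, (unsigned long long)free);
	return 0;
}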
Hello Chris,
On 05/23/2013 10:33 PM, Chris Mason wrote:
But I was using 8 drives. I'll try with 12.
My benchmarks were on flash, so the rmw I was seeing may not have had as
big an impact.
I played with it a bit further and simply introduced a requeue in
raid56_rmw_stripe() if the rbio is
Hi all,
we got a new test system here and I also tested btrfs raid6 on it.
Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it probably would be much better than either of those
two if it didn't read all the time during the writes. Is this a known issue? This
On 05/23/2013 03:11 PM, Chris Mason wrote:
Quoting Bernd Schubert (2013-05-23 08:55:47)
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it probably would be much better than any
On 05/23/2013 03:41 PM, Bob Marley wrote:
On 23/05/2013 15:22, Bernd Schubert wrote:
Yeah, I know, and I'm using iostat already. md raid6 does not do rmw,
but it does not fill the device queue either; afaik it flushes the
underlying devices quickly, as it does not have barrier support - that is another
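(In case someone wants to reproduce the observation: the extra reads show up per device while a purely sequential write runs, e.g. with something like

iostat -x -m 1 /dev/sd[b-m]

where the device names are of course specific to the test box - a non-zero r/s column on the data disks during a pure write load is the rmw traffic.)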
On 05/23/2013 03:34 PM, Chris Mason wrote:
Quoting Bernd Schubert (2013-05-23 09:22:41)
On 05/23/2013 03:11 PM, Chris Mason wrote:
Quoting Bernd Schubert (2013-05-23 08:55:47)
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly
On 05/23/2013 09:37 PM, Chris Mason wrote:
Quoting Bernd Schubert (2013-05-23 15:33:24)
Btw, any chance to generally use chunksize/chunklen instead of stripe,
as the md layer does? IMHO it is less confusing to use
n-datadisks * chunksize = stripesize.
Definitely, it will become much
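To illustrate the suggested terminology with made-up numbers: with a 64 KiB chunk per data disk and 10 data disks (12 drives in raid6),

stripesize = n_datadisks * chunksize = 10 * 64 KiB = 640 KiB

so any write smaller than 640 KiB cannot be a full-stripe write and forces the rmw path discussed above.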
On 03/27/2013 10:18 AM, Hugo Mills wrote:
On Wed, Mar 27, 2013 at 12:28:23AM +0100, Clemens Eisserer wrote:
I am using a btrfs loopback-mounted file with lzo compression on
Linux 3.7.9, and I ran into "No space left on device" messages,
although df reports that only 55% of the space is used:
# touch
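(For readers who hit the same thing: plain df cannot show btrfs chunk accounting; the filesystem-specific view is

btrfs filesystem df /mnt

which splits the numbers into data, metadata and system chunks - typically one of those is exhausted even though df still shows free space. The mount point here is just an example.)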
On 01/15/2013 02:35 PM, Bernd Schubert wrote:
Hrmm, that bug then seems to cause another bug. After the file system
went read-only, I simply unmounted and mounted it again, and a few
seconds after that my entire system failed. Relevant logs are attached.
Further log attachment:
btrfsck /dev/vg_fuj2
On 08/19/2011 09:36 PM, Josef Bacik wrote:
On 08/19/2011 12:45 PM, Bernd Schubert wrote:
Just for performance tests I run:
./bonnie++ -d /mnt/btrfs -s0 -n 1:256:256:1 -r 0
and this causes an endless number of stack traces. Those seem to
come from:
use_block_rsv()
ret
Just for performance tests I run:
./bonnie++ -d /mnt/btrfs -s0 -n 1:256:256:1 -r 0
and this causes an endless number of stack traces. Those seem to
come from:
use_block_rsv():
	ret = block_rsv_use_bytes(block_rsv, blocksize);
	if (!ret)
		return block_rsv;
I think we should either remove it or replace it with WARN_ON_ONCE().
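If removing it entirely feels too drastic, the WARN_ON_ONCE() variant would be a one-line change in that spot (a sketch against the quoted fragment; the exact surrounding code in use_block_rsv() is abbreviated here):

	ret = block_rsv_use_bytes(block_rsv, blocksize);
	if (!ret)
		return block_rsv;
	/* reservation fallback: still note it in dmesg, but only the
	 * first time, so bonnie++-style workloads can't flood the log */
	WARN_ON_ONCE(1);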
Remove WARN_ON(1) in a common code path
From: Bernd Schubert bernd.schub...@itwm.fraunhofer.de
Something like bonnie++ -d /mnt/btrfs -s0 -n 1:256:256:1 -r 0
will trigger lots of those WARN_ON(1), so let's remove it.
Signed-off