Martin posted on Sun, 29 Sep 2013 22:55:43 +0100 as excerpted:
On 29/09/13 22:29, Martin wrote:
Looking up what's available for Gentoo, the maintainers there look to
be nicely sharp, with multiple versions available all the way up to
kernel 3.11.2...
Cool, another gentooer! =:^)
The crash[1] is found by xfstests/generic/208 with -o compress;
it's not reproduced every time, but it does panic.
The bug is quite interesting: it was actually introduced by a recent commit
(573aecafca1cf7a974231b759197a1aebcf39c2a,
Btrfs: actually limit the size of delalloc range).
On Sep 29, 2013, at 1:13 AM, Fredrik Tolf fred...@dolda2000.com wrote:
Is there any way I can find out what's going on?
For whatever reason, it started out with every drive practically full, in terms
of chunk allocation, e.g. devid 5 size 2.73TB used 2.71TB path /dev/sdh1
On Fri, Sep 27, 2013 at 04:18:14PM -0700, Zach Brown wrote:
On Fri, Sep 27, 2013 at 04:37:46PM -0400, Josef Bacik wrote:
During transaction cleanup after an abort we are just removing roots from the
ordered roots list, which is incorrect. We have a BUG_ON() to make sure that the
root
I was noticing the slab redzone stuff going off every once in a while during
transaction aborts. This was caused by two things:
1) We would walk the pending snapshots and set their error to -ECANCELED. We
don't need to do this; the snapshot stuff waits for a transaction commit, and if
there is a
On Mon, Sep 30, 2013 at 08:39:57PM +0800, Liu Bo wrote:
The crash[1] is found by xfstests/generic/208 with -o compress;
it's not reproduced every time, but it does panic.
The bug is quite interesting: it was actually introduced by a recent commit
(573aecafca1cf7a974231b759197a1aebcf39c2a,
So I _think_ we may need to truncate the ordered range in the inode as well,
but I haven't had a consistent reproducer for this case. I want to leave it
like this for now until I'm sure we don't need the truncate, and then we could
probably just replace this with a test for FS_ERROR in
If we crash with a log, remount and recover that log, and then crash before we
can commit another transaction, we will get transid verify errors on the next
mount. This is because we were not zeroing out the log when we committed the
transaction after recovery. This is ok as long as we commit
On 29 September 2013 15:12, Josef Bacik jba...@fusionio.com wrote:
On Sun, Sep 29, 2013 at 11:22:36AM +0200, Aastha Mehta wrote:
Thank you very much for the reply. That clarifies a lot of things.
I was trying a small test case that opens a file, writes a block of
data, calls fsync and then
On Mon, Sep 30, 2013 at 09:32:54PM +0200, Aastha Mehta wrote:
On 29 September 2013 15:12, Josef Bacik jba...@fusionio.com wrote:
On Sun, Sep 29, 2013 at 11:22:36AM +0200, Aastha Mehta wrote:
Thank you very much for the reply. That clarifies a lot of things.
I was trying a small test case
It's just annoying to have to pass it around everywhere. Thanks,
Signed-off-by: Josef Bacik jba...@fusionio.com
---
cmds-check.c | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/cmds-check.c b/cmds-check.c
index f05c73e..6da35ea 100644
---
On 30 September 2013 22:11, Josef Bacik jba...@fusionio.com wrote:
On Mon, Sep 30, 2013 at 09:32:54PM +0200, Aastha Mehta wrote:
On 29 September 2013 15:12, Josef Bacik jba...@fusionio.com wrote:
On Sun, Sep 29, 2013 at 11:22:36AM +0200, Aastha Mehta wrote:
Thank you very much for the reply.
On Mon, Sep 30, 2013 at 10:30:59PM +0200, Aastha Mehta wrote:
On 30 September 2013 22:11, Josef Bacik jba...@fusionio.com wrote:
On Mon, Sep 30, 2013 at 09:32:54PM +0200, Aastha Mehta wrote:
On 29 September 2013 15:12, Josef Bacik jba...@fusionio.com wrote:
On Sun, Sep 29, 2013 at
On 30 September 2013 22:47, Josef Bacik jba...@fusionio.com wrote:
On Mon, Sep 30, 2013 at 10:30:59PM +0200, Aastha Mehta wrote:
On 30 September 2013 22:11, Josef Bacik jba...@fusionio.com wrote:
On Mon, Sep 30, 2013 at 09:32:54PM +0200, Aastha Mehta wrote:
On 29 September 2013 15:12, Josef
On Mon, Sep 30, 2013 at 11:07:20PM +0200, Aastha Mehta wrote:
On 30 September 2013 22:47, Josef Bacik jba...@fusionio.com wrote:
On Mon, Sep 30, 2013 at 10:30:59PM +0200, Aastha Mehta wrote:
On 30 September 2013 22:11, Josef Bacik jba...@fusionio.com wrote:
On Mon, Sep 30, 2013 at
Hi Zach,
thank you for your answer and clarification. I cannot just unmount and
mount that filesystem, because it is running a busy NFS server right now,
so I will just try it on some testbench server. Would mount -o remount be
sufficient (to avoid stopping the service, unmounting, mounting, and
restarting the service)?
On Tue, Oct 01, 2013 at 12:03:05AM +0200, Ondřej Kunc wrote:
Hi Zach,
thank you for your answer and clarification. I cannot just unmount and
mount that filesystem, because it is running a busy NFS server right now,
so I will just try it on some testbench server. Would mount -o remount be
sufficient
On Sep 30, 2013, at 8:27 AM, Chris Murphy li...@colorremedies.com wrote:
On Sep 29, 2013, at 1:13 AM, Fredrik Tolf fred...@dolda2000.com wrote:
Is there any way I can find out what's going on?
For whatever reason, it started out with every drive practically full, in
terms of chunk
On Mon, Sep 30, 2013 at 01:02:49PM -0400, Josef Bacik wrote:
On Mon, Sep 30, 2013 at 08:39:57PM +0800, Liu Bo wrote:
The crash[1] is found by xfstests/generic/208 with -o compress;
it's not reproduced every time, but it does panic.
The bug is quite interesting: it was actually introduced by
Chris Murphy posted on Mon, 30 Sep 2013 19:05:36 -0600 as excerpted:
It probably seems weird to add drives to remove drives, but sometimes
(always?) Btrfs really gets a bit piggish about allocating a lot more
chunks than there is data. Or maybe it's not deallocating space as
aggressively as
On Sep 30, 2013, at 10:43 PM, Duncan 1i5t5.dun...@cox.net wrote:
Meanwhile, I really do have to question the use case where the risks of a
single dead device killing a raid0 (or for that matter, running still
experimental btrfs) are fine, but spending days doing data maintenance on data