On Tue, Oct 25, 2016 at 08:27:52PM -0400, Dave Jones wrote:
> DaveC: Do these look like real problems, or is this more "looks like
> random memory corruption" ? It's been a while since I did some stress
> testing on XFS, so these might not be new..
>
> XFS: Assertion failed: oldlen > newlen,
While most developers use ctags and cscope with vim, a newer completion tool
like vim-clang_completion is gaining popularity due to its compiler-level
accuracy and its simplicity of use.
Since ctags and cscope are already in .gitignore, I see no reason to
reject .clang_complete.
Signed-off-by: Qu
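For context on what is being gitignored: the clang_complete plugin reads a plain-text `.clang_complete` file at the project root, one compiler flag per line. A hypothetical example for a C project (the specific paths and defines are illustrative, not from the patch):

```
-Iinclude
-Iarch/x86/include
-D__KERNEL__
-std=gnu89
```

Since the file is generated per-developer from local build settings, it makes sense to ignore it alongside `tags` and `cscope.out`.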
At 10/26/2016 07:52 PM, none wrote:
On 2016-10-26 03:43, Qu Wenruo wrote:
Unfortunately, low memory mode is right here.
If btrfs-image dumped the image correctly, your extent tree is really
screwed up.
And how badly is it screwed up?
It only contains the basic block group info.
Almost
Hi,
On 27/10/2016 at 01:54, Lionel Bouton wrote:
>
> I'll post the final result of the btrfs replace later (it's currently at
> 5.6% after 45 minutes).
Result: kernel panic (so 4.8.4 didn't solve my main problem).
Unfortunately I don't have a remote KVM anymore so I couldn't capture
this one.
On Wed, Oct 26, 2016 at 11:11:35AM -0400, Josef Bacik wrote:
> On 10/25/2016 06:01 PM, Dave Chinner wrote:
> >On Tue, Oct 25, 2016 at 02:41:44PM -0400, Josef Bacik wrote:
> >>With anything that populates the inode/dentry cache with a lot of one time
> >>use
> >>inodes we can really put a lot of
On 10/26/2016 05:47 PM, Dave Jones wrote:
On Wed, Oct 26, 2016 at 07:38:08PM -0400, Chris Mason wrote:
> >- hctx->queued++;
> >- data->hctx = hctx;
> >- data->ctx = ctx;
> >+ data->hctx = alloc_data.hctx;
> >+ data->ctx = alloc_data.ctx;
> >+ data->hctx->queued++;
Hi,
On 26/10/2016 at 02:57, Lionel Bouton wrote:
> Hi,
>
> I'm currently trying to recover from a disk failure on a 6-drive Btrfs
> RAID10 filesystem. A "mount -o degraded" auto-resumes a current
> btrfs-replace from a missing dev to a new disk. This eventually triggers
> a kernel panic (and
On Wed, Oct 26, 2016 at 07:38:08PM -0400, Chris Mason wrote:
> >- hctx->queued++;
> >- data->hctx = hctx;
> >- data->ctx = ctx;
> >+ data->hctx = alloc_data.hctx;
> >+ data->ctx = alloc_data.ctx;
> >+ data->hctx->queued++;
> >return rq;
> > }
>
> This made it through
On Wed, Oct 26, 2016 at 05:20:01PM -0600, Jens Axboe wrote:
On 10/26/2016 05:08 PM, Linus Torvalds wrote:
On Wed, Oct 26, 2016 at 4:03 PM, Jens Axboe wrote:
Actually, I think I see what might trigger it. You are on nvme, iirc,
and that has a deep queue.
Yes. I have long since
On 10/26/2016 05:19 PM, Chris Mason wrote:
On Wed, Oct 26, 2016 at 05:03:45PM -0600, Jens Axboe wrote:
On 10/26/2016 04:58 PM, Linus Torvalds wrote:
On Wed, Oct 26, 2016 at 3:51 PM, Linus Torvalds
wrote:
Dave: it might be a good idea to split that
On 10/26/2016 05:08 PM, Linus Torvalds wrote:
On Wed, Oct 26, 2016 at 4:03 PM, Jens Axboe wrote:
Actually, I think I see what might trigger it. You are on nvme, iirc,
and that has a deep queue.
Yes. I have long since moved on from slow disks, so all my systems are
not just
On Wed, Oct 26, 2016 at 05:03:45PM -0600, Jens Axboe wrote:
On 10/26/2016 04:58 PM, Linus Torvalds wrote:
On Wed, Oct 26, 2016 at 3:51 PM, Linus Torvalds
wrote:
Dave: it might be a good idea to split that "WARN_ON_ONCE()" in
blk_mq_merge_queue_io() into two
I
On Wed, Oct 26, 2016 at 03:07:10PM -0700, Linus Torvalds wrote:
On Wed, Oct 26, 2016 at 1:00 PM, Chris Mason wrote:
Today I turned off every CONFIG_DEBUG_* except for list debugging, and
ran dbench 2048:
[ 2759.118711] WARNING: CPU: 2 PID: 31039 at lib/list_debug.c:33
On Wed, Oct 26, 2016 at 4:03 PM, Jens Axboe wrote:
>
> Actually, I think I see what might trigger it. You are on nvme, iirc,
> and that has a deep queue.
Yes. I have long since moved on from slow disks, so all my systems are
not just flash, but m.2 nvme ssd's.
So at least that
On Wed, Oct 26, 2016 at 05:03:45PM -0600, Jens Axboe wrote:
> On 10/26/2016 04:58 PM, Linus Torvalds wrote:
> > On Wed, Oct 26, 2016 at 3:51 PM, Linus Torvalds
> > wrote:
> >>
> >> Dave: it might be a good idea to split that "WARN_ON_ONCE()" in
> >>
On 10/26/2016 05:01 PM, Dave Jones wrote:
On Wed, Oct 26, 2016 at 03:51:01PM -0700, Linus Torvalds wrote:
> Dave: it might be a good idea to split that "WARN_ON_ONCE()" in
> blk_mq_merge_queue_io() into two, since right now it can trigger both
> for the
>
>
On 10/26/2016 04:58 PM, Linus Torvalds wrote:
On Wed, Oct 26, 2016 at 3:51 PM, Linus Torvalds
wrote:
Dave: it might be a good idea to split that "WARN_ON_ONCE()" in
blk_mq_merge_queue_io() into two
I did that myself too, since Dave sees this during boot.
But
On Wed, Oct 26, 2016 at 03:51:01PM -0700, Linus Torvalds wrote:
> Dave: it might be a good idea to split that "WARN_ON_ONCE()" in
> blk_mq_merge_queue_io() into two, since right now it can trigger both
> for the
>
> blk_mq_bio_to_request(rq, bio);
>
> path _and_ for the
>
On Wed, Oct 26, 2016 at 3:51 PM, Linus Torvalds
wrote:
>
> Dave: it might be a good idea to split that "WARN_ON_ONCE()" in
> blk_mq_merge_queue_io() into two
I did that myself too, since Dave sees this during boot.
But I'm not getting the warning ;(
Dave gets it
On 10/26/2016 04:51 PM, Linus Torvalds wrote:
On Wed, Oct 26, 2016 at 3:40 PM, Dave Jones wrote:
I gave it a shot too for shits & giggles.
This falls out during boot.
[9.278420] WARNING: CPU: 0 PID: 1 at block/blk-mq.c:1181
blk_sq_make_request+0x465/0x4a0
Hmm.
On 10/26/2016 04:40 PM, Dave Jones wrote:
On Wed, Oct 26, 2016 at 03:21:53PM -0700, Linus Torvalds wrote:
> Could you try the attached patch? It adds a couple of sanity tests:
>
> - a number of tests to verify that 'rq->queuelist' isn't already on
> some queue when it is added to a queue
On Wed, Oct 26, 2016 at 3:40 PM, Dave Jones wrote:
>
> I gave it a shot too for shits & giggles.
> This falls out during boot.
>
> [9.278420] WARNING: CPU: 0 PID: 1 at block/blk-mq.c:1181
> blk_sq_make_request+0x465/0x4a0
Hmm. That's the
WARN_ON_ONCE(rq->mq_ctx
On Wed, Oct 26, 2016 at 03:21:53PM -0700, Linus Torvalds wrote:
> Could you try the attached patch? It adds a couple of sanity tests:
>
> - a number of tests to verify that 'rq->queuelist' isn't already on
> some queue when it is added to a queue
>
> - one test to verify that rq->mq_ctx
On Wed, Oct 26, 2016 at 2:52 PM, Chris Mason wrote:
>
> This one is special because CONFIG_VMAP_STACK is not set. Btrfs triggers in
> < 10 minutes.
> I've done 30 minutes each with XFS and Ext4 without luck.
Ok, see the email I wrote that crossed yours - if it's really some
list
On Wed, Oct 26, 2016 at 04:03:54PM -0400, Josef Bacik wrote:
> On 10/25/2016 07:36 PM, Dave Chinner wrote:
> >So, 2-way has not improved. If changing referenced behaviour was an
> >obvious win for btrfs, we'd expect to see that here as well.
> >however, because 4-way improved by 20%, I think all
On Wed, Oct 26, 2016 at 1:00 PM, Chris Mason wrote:
>
> Today I turned off every CONFIG_DEBUG_* except for list debugging, and
> ran dbench 2048:
>
> [ 2759.118711] WARNING: CPU: 2 PID: 31039 at lib/list_debug.c:33
> __list_add+0xbe/0xd0
> [ 2759.119652] list_add corruption.
On 10/26/2016 04:00 PM, Chris Mason wrote:
>
>
> On 10/26/2016 03:06 PM, Linus Torvalds wrote:
>> On Wed, Oct 26, 2016 at 11:42 AM, Dave Jones wrote:
>>>
>>> The stacks show nearly all of them are stuck in sync_inodes_sb
>>
>> That's just wb_wait_for_completion(), and
On 10/25/2016 07:36 PM, Dave Chinner wrote:
On Wed, Oct 26, 2016 at 09:01:13AM +1100, Dave Chinner wrote:
On Tue, Oct 25, 2016 at 02:41:44PM -0400, Josef Bacik wrote:
With anything that populates the inode/dentry cache with a lot of one time use
inodes we can really put a lot of pressure on
On 10/26/2016 03:06 PM, Linus Torvalds wrote:
> On Wed, Oct 26, 2016 at 11:42 AM, Dave Jones wrote:
>>
>> The stacks show nearly all of them are stuck in sync_inodes_sb
>
> That's just wb_wait_for_completion(), and it means that some IO isn't
> completing.
>
> There's
On Wed, Oct 26, 2016 at 11:42 AM, Dave Jones wrote:
>
> The stacks show nearly all of them are stuck in sync_inodes_sb
That's just wb_wait_for_completion(), and it means that some IO isn't
completing.
There's also a lot of processes waiting for inode_lock(), and a few
On 2016-10-26 03:43, Qu Wenruo wrote:
Unfortunately, low memory mode is right here.
If btrfs-image dumped the image correctly, your extent tree is really
screwed up.
And how badly is it screwed up?
It only contains the basic block group info.
Almost empty, without any really useful
On Wed, Oct 26, 2016 at 09:48:39AM -0700, Linus Torvalds wrote:
> I know you already had this in some email, but I lost it. I think you
> narrowed it down to a specific set of system calls that seems to
> trigger this best. fallocate and xattrs or something?
So I was about to give that a shot
On Wed, Oct 26, 2016 at 09:48:39AM -0700, Linus Torvalds wrote:
> On Wed, Oct 26, 2016 at 9:30 AM, Dave Jones wrote:
> >
> > I gave this a go last thing last night. It crashed within 5 minutes,
> > but it was one we've already seen (the bad page map trace) with
Hi,
btrfs-progs version 4.8.2 has been released. Build fixes again, plus some small
fixes that arrived. There's one change since -rc1: a mkfs fix introduced by one
of the post-4.8.1 patches.
Changes:
* convert: also convert file attributes
* convert: fix wrong tree block alignment for
On Wed, Oct 26, 2016 at 9:30 AM, Dave Jones wrote:
>
> I gave this a go last thing last night. It crashed within 5 minutes,
> but it was one we've already seen (the bad page map trace) with nothing
> additional that looked interesting.
Did the bad page map trace have any
On Tue, Oct 25, 2016 at 06:39:03PM -0700, Linus Torvalds wrote:
> On Tue, Oct 25, 2016 at 6:33 PM, Linus Torvalds
> wrote:
> >
> > Completely untested. Maybe there's some reason we can't write to the
> > whole thing like that?
>
> That hack boots and seems
Hello, Josef.
On Wed, Oct 26, 2016 at 11:20:16AM -0400, Josef Bacik wrote:
> > > @@ -3701,7 +3703,20 @@ static unsigned long
> > > node_pagecache_reclaimable(struct pglist_data *pgdat)
> > > if (unlikely(delta > nr_pagecache_reclaimable))
> > > delta = nr_pagecache_reclaimable;
> > >
On 10/25/2016 03:50 PM, Tejun Heo wrote:
Hello,
On Tue, Oct 25, 2016 at 02:41:42PM -0400, Josef Bacik wrote:
Btrfs has no bounds except memory on the amount of dirty memory that we have in
use for metadata. Historically we have used a special inode so we could take
advantage of the
On 10/25/2016 06:01 PM, Dave Chinner wrote:
On Tue, Oct 25, 2016 at 02:41:44PM -0400, Josef Bacik wrote:
With anything that populates the inode/dentry cache with a lot of one time use
inodes we can really put a lot of pressure on the system for things we don't
need to keep in cache. It takes
The helpers are not meant to be generic, the name is misleading. Convert
them to static inlines for type checking.
Signed-off-by: David Sterba
---
fs/btrfs/qgroup.c | 35 +--
1 file changed, 21 insertions(+), 14 deletions(-)
diff --git
On Wed, Oct 26, 2016 at 09:07:25AM +0800, Qu Wenruo wrote:
> At 10/25/2016 10:09 PM, David Sterba wrote:
> > On Thu, Oct 13, 2016 at 05:22:26PM +0800, Qu Wenruo wrote:
> >> Kernel clear_cache mount option will only rebuilt free space cache if
> >> used space of that chunk has changed.
> >>
> >> So
This issue was found when I tried to delete a heavily reflinked file. When
deleting such files, other transaction operations will not have a chance to
make progress; for example, start_transaction() will be blocked in
wait_current_trans(root) for a long time, and sometimes it even triggers
soft lockups, and
With btrfs compression enabled, the original code cannot fill the fs
correctly, so here we introduce _fill_fs() in common/rc, which keeps
creating and writing files until an ENOSPC error occurs. Note that _fill_fs
is copied from tests/generic/256, with some minor modifications.
Signed-off-by: Wang
hi,
On 10/11/2016 02:47 PM, Wang Xiaoguang wrote:
If we use the mount option "-o max_inline=sectorsize", say 4096, then even
for a fresh fs, say with a 16k nodesize, we cannot make the first
4k of data completely inline. I found this condition causing the issue:
!compressed_size && (actual_end &
Signed-off-by: Wang Xiaoguang
---
fs/btrfs/extent-tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 9aa6d2c..3c8f0ec 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@