Re: [PATCH] Btrfs: barrier before waitqueue_active

2012-08-05 Thread Arne Jansen
On 08/03/2012 04:43 PM, Mitch Harder wrote:
 On Wed, Aug 1, 2012 at 7:21 PM, Mitch Harder
 mitch.har...@sabayonlinux.org wrote:
 On Wed, Aug 1, 2012 at 3:25 PM, Josef Bacik jba...@fusionio.com wrote:
 We need an smp_mb() before waitqueue_active to avoid missing wakeups.
 Before Mitch was hitting a deadlock between the ordered flushers and the
 transaction commit because the ordered flushers were waiting for more refs
 and were never woken up, so those smp_mb()'s are the most important.
 Everything else I added for correctness sake and to avoid getting bitten by
 this again somewhere else.  Thanks,


 This patch seems to make it tougher to hit a deadlock, but I'm still
 encountering intermittent deadlocks using this patch when running
 multiple rsync threads.

 I've also tested Patch 2, and that has me hitting a deadlock even
 quicker (when starting several copying threads).

 I also found a slight performance hit using this patch.  On a 3.4.6
 kernel (merged with the 3.5_rc for-linus branch), I would typically
 complete my rsync test in ~265 seconds.  Also, I can't recall hitting
 a deadlock on the 3.4.6 kernel (with 3.5_rc for-linus).  When using
 this patch, the test would take ~310 seconds (when it didn't hit a
 deadlock).

 
 I've bisected my deadlock back to:
 Btrfs: hooks for qgroup to record delayed refs (commit 546adb0d).
 

I've got it reproduced here and, I think, nailed it down. I'll send a
patch tomorrow after discussing it with Jan.

-Arne

 This issue may be the same problem Alexander Block is discussing in
 another thread on the Btrfs Mailing List:
 http://article.gmane.org/gmane.comp.file-systems.btrfs/19028
 
 I'm using multiple rsync threads instead of the new send/receive
 function.  But we're both hitting deadlocks that bisect back to the
 same commit.
 



Re: [PATCH] Btrfs: barrier before waitqueue_active

2012-08-02 Thread Liu Bo
On 08/02/2012 04:25 AM, Josef Bacik wrote:
 We need an smp_mb() before waitqueue_active to avoid missing wakeups.
 Before Mitch was hitting a deadlock between the ordered flushers and the
 transaction commit because the ordered flushers were waiting for more refs
 and were never woken up, so those smp_mb()'s are the most important.
 Everything else I added for correctness sake and to avoid getting bitten by
 this again somewhere else.  Thanks,
 

Hi Josef,

I'd appreciate it a lot if you could add a comment for each memory
barrier, because not everyone knows why it is used here and there. :)

thanks,
liubo


Re: [PATCH] Btrfs: barrier before waitqueue_active

2012-08-02 Thread Josef Bacik
On Thu, Aug 02, 2012 at 04:46:44AM -0600, Liu Bo wrote:
 On 08/02/2012 04:25 AM, Josef Bacik wrote:
  We need an smp_mb() before waitqueue_active to avoid missing wakeups.
  Before Mitch was hitting a deadlock between the ordered flushers and the
  transaction commit because the ordered flushers were waiting for more refs
  and were never woken up, so those smp_mb()'s are the most important.
  Everything else I added for correctness sake and to avoid getting bitten by
  this again somewhere else.  Thanks,
  
 
 Hi Josef,
 
 I'd appreciate it a lot if you could add a comment for each memory
 barrier, because not everyone knows why it is used here and there. :)
 

I'm not going to add comments to all those places; you need a memory barrier in
places where you don't have an implicit barrier before you do waitqueue_active,
because otherwise you could miss somebody being added to the waitqueue.  It's
just basic correctness.  Thanks,

Josef
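
For reference, a minimal sketch of the pattern being described here, using
hypothetical names rather than the actual btrfs structures:

    #include <linux/atomic.h>
    #include <linux/sched.h>
    #include <linux/wait.h>

    /* Hypothetical counter and wait queue, for illustration only. */
    static atomic_t items = ATOMIC_INIT(0);
    static DECLARE_WAIT_QUEUE_HEAD(items_wait);

    /* Waiter: prepare_to_wait() puts us on the queue and implies a barrier
     * before we re-check the condition (normally done in a loop). */
    static void wait_for_items(void)
    {
            DEFINE_WAIT(wait);

            prepare_to_wait(&items_wait, &wait, TASK_UNINTERRUPTIBLE);
            if (atomic_read(&items) == 0)
                    schedule();
            finish_wait(&items_wait, &wait);
    }

    /* Waker: without the smp_mb(), the waitqueue_active() load can be
     * reordered before the atomic_inc() store becomes visible, so the
     * waker sees an empty queue while the waiter still sees the old count
     * and goes to sleep: the missed wakeup behind the deadlock above. */
    static void add_item(void)
    {
            atomic_inc(&items);
            smp_mb();
            if (waitqueue_active(&items_wait))
                    wake_up(&items_wait);
    }

The smp_mb() on the waker side pairs with the implicit barrier in
prepare_to_wait(), which is what the patch adds at each call site.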


Re: [PATCH] Btrfs: barrier before waitqueue_active

2012-08-02 Thread cwillu
On Thu, Aug 2, 2012 at 4:46 AM, Liu Bo liub.li...@gmail.com wrote:
 On 08/02/2012 04:25 AM, Josef Bacik wrote:
 We need an smp_mb() before waitqueue_active to avoid missing wakeups.
 Before Mitch was hitting a deadlock between the ordered flushers and the
 transaction commit because the ordered flushers were waiting for more refs
 and were never woken up, so those smp_mb()'s are the most important.
 Everything else I added for correctness sake and to avoid getting bitten by
 this again somewhere else.  Thanks,


 Hi Josef,

 I'd appreciate it a lot if you could add a comment for each memory
 barrier, because not everyone knows why it is used here and there. :)

Everyone who wants to know should read the memory-barriers.txt file
that's hiding in the oddly named Documentation folder of their
kernel tree.  :)


Re: [PATCH] Btrfs: barrier before waitqueue_active

2012-08-02 Thread David Sterba
On Thu, Aug 02, 2012 at 08:11:58AM -0400, Josef Bacik wrote:
 On Thu, Aug 02, 2012 at 04:46:44AM -0600, Liu Bo wrote:
  On 08/02/2012 04:25 AM, Josef Bacik wrote:
   We need an smp_mb() before waitqueue_active to avoid missing wakeups.
   Before Mitch was hitting a deadlock between the ordered flushers and the
   transaction commit because the ordered flushers were waiting for more refs
   and were never woken up, so those smp_mb()'s are the most important.
   Everything else I added for correctness sake and to avoid getting bitten by
   this again somewhere else.  Thanks,
  
  I'd appreciate it a lot if you could add a comment for each memory
  barrier, because not everyone knows why it is used here and there. :)
 
 I'm not going to add comments to all those places; you need a memory barrier in
 places where you don't have an implicit barrier before you do waitqueue_active,
 because otherwise you could miss somebody being added to the waitqueue.  It's
 just basic correctness.  Thanks,

This asks for a helper:

+       smp_mb();
+       if (waitqueue_active(&fs_info->async_submit_wait))
+               wake_up(&fs_info->async_submit_wait);

-

void wake_up_if_active(wait_queue_head_t *wait)
{
        /*
         * the comment
         */
        smp_mb();
        if (waitqueue_active(wait))
                wake_up(wait);
}
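
With that, call sites like the one quoted above would shrink to a single line,
e.g. (assuming the helper keeps the name suggested here):

        wake_up_if_active(&fs_info->async_submit_wait);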


[PATCH] Btrfs: barrier before waitqueue_active V2

2012-08-02 Thread Josef Bacik
We need a barrier before calling waitqueue_active, otherwise we will miss
wakeups.  So in places that do atomic_dec() and then atomic_read(), use
atomic_dec_return(), which implies a memory barrier (see memory-barriers.txt),
and add an explicit memory barrier everywhere else that needs one.
Thanks,

Signed-off-by: Josef Bacik jba...@fusionio.com
---
V1->V2: changed atomic_dec/atomic_read combos to atomic_dec_return() for the
implied barrier.
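
For the delayed-inode call sites, that change amounts to the following
(abridged from the hunks below):

        /* V1: separate atomic_dec() + atomic_read(), explicit smp_mb() */
        atomic_dec(&delayed_root->items);
        if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND) {
                smp_mb();
                if (waitqueue_active(&delayed_root->wait))
                        wake_up(&delayed_root->wait);
        }

        /* V2: atomic_dec_return() already implies a full barrier, so the
         * waitqueue_active() check can follow it directly */
        if (atomic_dec_return(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND &&
            waitqueue_active(&delayed_root->wait))
                wake_up(&delayed_root->wait);
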
 fs/btrfs/compression.c   |1 +
 fs/btrfs/delayed-inode.c |7 +++
 fs/btrfs/delayed-ref.c   |   18 --
 fs/btrfs/disk-io.c   |7 ---
 fs/btrfs/inode.c |4 +---
 fs/btrfs/volumes.c   |3 +--
 6 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 86eff48..43d1c5a 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -818,6 +818,7 @@ static void free_workspace(int type, struct list_head *workspace)
         btrfs_compress_op[idx]->free_workspace(workspace);
         atomic_dec(alloc_workspace);
 wake:
+        smp_mb();
         if (waitqueue_active(workspace_wait))
                 wake_up(workspace_wait);
 }
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 335605c..ee2050c 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -512,8 +512,8 @@ static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item)
 
         rb_erase(&delayed_item->rb_node, root);
         delayed_item->delayed_node->count--;
-        atomic_dec(&delayed_root->items);
-        if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND &&
+        if (atomic_dec_return(&delayed_root->items) <
+            BTRFS_DELAYED_BACKGROUND &&
             waitqueue_active(&delayed_root->wait))
                 wake_up(&delayed_root->wait);
 }
@@ -1055,8 +1055,7 @@ static void btrfs_release_delayed_inode(struct btrfs_delayed_node *delayed_node)
         delayed_node->count--;
 
         delayed_root = delayed_node->root->fs_info->delayed_root;
-        atomic_dec(&delayed_root->items);
-        if (atomic_read(&delayed_root->items) <
+        if (atomic_dec_return(&delayed_root->items) <
             BTRFS_DELAYED_BACKGROUND &&
             waitqueue_active(&delayed_root->wait))
                 wake_up(&delayed_root->wait);
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index da7419e..858ef02 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -662,9 +662,12 @@ int btrfs_add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
         add_delayed_tree_ref(fs_info, trans, &ref->node, bytenr,
                              num_bytes, parent, ref_root, level, action,
                              for_cow);
-        if (!need_ref_seq(for_cow, ref_root) &&
-            waitqueue_active(&fs_info->tree_mod_seq_wait))
-                wake_up(&fs_info->tree_mod_seq_wait);
+        if (!need_ref_seq(for_cow, ref_root)) {
+                smp_mb();
+                if (waitqueue_active(&fs_info->tree_mod_seq_wait))
+                        wake_up(&fs_info->tree_mod_seq_wait);
+        }
+
         spin_unlock(&delayed_refs->lock);
         if (need_ref_seq(for_cow, ref_root))
                 btrfs_qgroup_record_ref(trans, &ref->node, extent_op);
@@ -713,9 +716,11 @@ int btrfs_add_delayed_data_ref(struct btrfs_fs_info *fs_info,
         add_delayed_data_ref(fs_info, trans, &ref->node, bytenr,
                              num_bytes, parent, ref_root, owner, offset,
                              action, for_cow);
-        if (!need_ref_seq(for_cow, ref_root) &&
-            waitqueue_active(&fs_info->tree_mod_seq_wait))
-                wake_up(&fs_info->tree_mod_seq_wait);
+        if (!need_ref_seq(for_cow, ref_root)) {
+                smp_mb();
+                if (waitqueue_active(&fs_info->tree_mod_seq_wait))
+                        wake_up(&fs_info->tree_mod_seq_wait);
+        }
         spin_unlock(&delayed_refs->lock);
         if (need_ref_seq(for_cow, ref_root))
                 btrfs_qgroup_record_ref(trans, &ref->node, extent_op);
@@ -744,6 +749,7 @@ int btrfs_add_delayed_extent_op(struct btrfs_fs_info *fs_info,
                            num_bytes, BTRFS_UPDATE_DELAYED_HEAD,
                            extent_op->is_data);
 
+        smp_mb();
         if (waitqueue_active(&fs_info->tree_mod_seq_wait))
                 wake_up(&fs_info->tree_mod_seq_wait);
         spin_unlock(&delayed_refs->lock);
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 502b20c..2ed81c8 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -754,9 +754,7 @@ static void run_one_async_done(struct btrfs_work *work)
         limit = btrfs_async_submit_limit(fs_info);
         limit = limit * 2 / 3;
 
-        atomic_dec(&fs_info->nr_async_submits);
-
-        if (atomic_read(&fs_info->nr_async_submits) < limit &&
+        if (atomic_dec_return(&fs_info->nr_async_submits) < limit &&
             waitqueue_active(&fs_info->async_submit_wait))

[PATCH] Btrfs: barrier before waitqueue_active

2012-08-01 Thread Josef Bacik
We need an smp_mb() before waitqueue_active to avoid missing wakeups.
Before Mitch was hitting a deadlock between the ordered flushers and the
transaction commit because the ordered flushers were waiting for more refs
and were never woken up, so those smp_mb()'s are the most important.
Everything else I added for correctness sake and to avoid getting bitten by
this again somewhere else.  Thanks,

Signed-off-by: Josef Bacik jba...@fusionio.com
---
 fs/btrfs/compression.c   |1 +
 fs/btrfs/delayed-inode.c |   16 ++--
 fs/btrfs/delayed-ref.c   |   18 --
 fs/btrfs/disk-io.c   |   11 ---
 fs/btrfs/inode.c |8 +---
 fs/btrfs/volumes.c   |8 +---
 6 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 86eff48..43d1c5a 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -818,6 +818,7 @@ static void free_workspace(int type, struct list_head *workspace)
         btrfs_compress_op[idx]->free_workspace(workspace);
         atomic_dec(alloc_workspace);
 wake:
+        smp_mb();
         if (waitqueue_active(workspace_wait))
                 wake_up(workspace_wait);
 }
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 335605c..8cc9b19 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -513,9 +513,11 @@ static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item)
         rb_erase(&delayed_item->rb_node, root);
         delayed_item->delayed_node->count--;
         atomic_dec(&delayed_root->items);
-        if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND &&
-            waitqueue_active(&delayed_root->wait))
-                wake_up(&delayed_root->wait);
+        if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND) {
+                smp_mb();
+                if (waitqueue_active(&delayed_root->wait))
+                        wake_up(&delayed_root->wait);
+        }
 }
 
 static void btrfs_release_delayed_item(struct btrfs_delayed_item *item)
@@ -1057,9 +1059,11 @@ static void btrfs_release_delayed_inode(struct btrfs_delayed_node *delayed_node)
         delayed_root = delayed_node->root->fs_info->delayed_root;
         atomic_dec(&delayed_root->items);
         if (atomic_read(&delayed_root->items) <
-            BTRFS_DELAYED_BACKGROUND &&
-            waitqueue_active(&delayed_root->wait))
-                wake_up(&delayed_root->wait);
+            BTRFS_DELAYED_BACKGROUND) {
+                smp_mb();
+                if (waitqueue_active(&delayed_root->wait))
+                        wake_up(&delayed_root->wait);
+        }
         }
 }
 
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index da7419e..858ef02 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -662,9 +662,12 @@ int btrfs_add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
         add_delayed_tree_ref(fs_info, trans, &ref->node, bytenr,
                              num_bytes, parent, ref_root, level, action,
                              for_cow);
-        if (!need_ref_seq(for_cow, ref_root) &&
-            waitqueue_active(&fs_info->tree_mod_seq_wait))
-                wake_up(&fs_info->tree_mod_seq_wait);
+        if (!need_ref_seq(for_cow, ref_root)) {
+                smp_mb();
+                if (waitqueue_active(&fs_info->tree_mod_seq_wait))
+                        wake_up(&fs_info->tree_mod_seq_wait);
+        }
+
         spin_unlock(&delayed_refs->lock);
         if (need_ref_seq(for_cow, ref_root))
                 btrfs_qgroup_record_ref(trans, &ref->node, extent_op);
@@ -713,9 +716,11 @@ int btrfs_add_delayed_data_ref(struct btrfs_fs_info *fs_info,
         add_delayed_data_ref(fs_info, trans, &ref->node, bytenr,
                              num_bytes, parent, ref_root, owner, offset,
                              action, for_cow);
-        if (!need_ref_seq(for_cow, ref_root) &&
-            waitqueue_active(&fs_info->tree_mod_seq_wait))
-                wake_up(&fs_info->tree_mod_seq_wait);
+        if (!need_ref_seq(for_cow, ref_root)) {
+                smp_mb();
+                if (waitqueue_active(&fs_info->tree_mod_seq_wait))
+                        wake_up(&fs_info->tree_mod_seq_wait);
+        }
         spin_unlock(&delayed_refs->lock);
         if (need_ref_seq(for_cow, ref_root))
                 btrfs_qgroup_record_ref(trans, &ref->node, extent_op);
@@ -744,6 +749,7 @@ int btrfs_add_delayed_extent_op(struct btrfs_fs_info *fs_info,
                            num_bytes, BTRFS_UPDATE_DELAYED_HEAD,
                            extent_op->is_data);
 
+        smp_mb();
         if (waitqueue_active(&fs_info->tree_mod_seq_wait))
                 wake_up(&fs_info->tree_mod_seq_wait);
         spin_unlock(&delayed_refs->lock);
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 502b20c..a355c89 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -756,9

Re: [PATCH] Btrfs: barrier before waitqueue_active

2012-08-01 Thread Mitch Harder
On Wed, Aug 1, 2012 at 3:25 PM, Josef Bacik jba...@fusionio.com wrote:
 We need an smp_mb() before waitqueue_active to avoid missing wakeups.
 Before Mitch was hitting a deadlock between the ordered flushers and the
 transaction commit because the ordered flushers were waiting for more refs
 and were never woken up, so those smp_mb()'s are the most important.
 Everything else I added for correctness sake and to avoid getting bitten by
 this again somewhere else.  Thanks,


This patch seems to make it tougher to hit a deadlock, but I'm still
encountering intermittent deadlocks using this patch when running
multiple rsync threads.

I've also tested Patch 2, and that has me hitting a deadlock even
quicker (when starting several copying threads).

I also found a slight performance hit using this patch.  On a 3.4.6
kernel (merged with the 3.5_rc for-linus branch), I would typically
complete my rsync test in ~265 seconds.  Also, I can't recall hitting
a deadlock on the 3.4.6 kernel (with 3.5_rc for-linus).  When using
this patch, the test would take ~310 seconds (when it didn't hit a
deadlock).

Here's the Delayed Tasks (Ctrl-SysRq-W) when using JUST this patch:

[ 1568.794030] SysRq : Show Blocked State
[ 1568.794101]   taskPC stack   pid father
[ 1568.794123] btrfs-endio-wri D 88012579c000 0  3845  2 0x
[ 1568.794128]  8801254f3c20 0046 8801254f2000
8801241b5a80
[ 1568.794132]  00012280 8801254f3fd8 00012280
4000
[ 1568.794136]  8801254f3fd8 00012280 880129af16a0
8801241b5a80
[ 1568.794140] Call Trace:
[ 1568.794179]  [a0068785] ? memcpy_extent_buffer+0x159/0x17a [btrfs]
[ 1568.794200]  [a0082ab7] ? find_ref_head+0xa3/0xc6 [btrfs]
[ 1568.794220]  [a008343c] ? btrfs_find_ref_cluster+0xdd/0x117 [btrfs]
[ 1568.794225]  [8162d58c] schedule+0x64/0x66
[ 1568.794241]  [a003fc86] btrfs_run_delayed_refs+0x269/0x3f0 [btrfs]
[ 1568.794246]  [8104b10e] ? wake_up_bit+0x2a/0x2a
[ 1568.794265]  [a004fdc4] __btrfs_end_transaction+0xca/0x283 [btrfs]
[ 1568.794283]  [a004ffda] btrfs_end_transaction+0x15/0x17 [btrfs]
[ 1568.794302]  [a00555da] btrfs_finish_ordered_io+0x2e4/0x334 [btrfs]
[ 1568.794306]  [8103b980] ? run_timer_softirq+0x2d4/0x2d4
[ 1568.794325]  [a005563f] finish_ordered_fn+0x15/0x17 [btrfs]
[ 1568.794344]  [a0070ef8] worker_loop+0x188/0x4e0 [btrfs]
[ 1568.794365]  [a0070d70] ? btrfs_queue_worker+0x275/0x275 [btrfs]
[ 1568.794384]  [a0070d70] ? btrfs_queue_worker+0x275/0x275 [btrfs]
[ 1568.794387]  [8104ac37] kthread+0x89/0x91
[ 1568.794391]  [8162fd74] kernel_thread_helper+0x4/0x10
[ 1568.794395]  [8104abae] ? kthread_freezable_should_stop+0x57/0x57
[ 1568.794398]  [8162fd70] ? gs_change+0xb/0xb
[ 1568.794400] btrfs-transacti D 88009912ba50 0  3851  2 0x
[ 1568.794403]  8801241cfc70 0046 8801241ce000
8801248cda80
[ 1568.794407]  00012280 8801241cffd8 00012280
4000
[ 1568.794411]  8801241cffd8 00012280 8801254b8000
8801248cda80
[ 1568.794415] Call Trace:
[ 1568.794436]  [a0066646] ? extent_writepages+0x53/0x5d [btrfs]
[ 1568.794455]  [a005357b] ?
uncompress_inline.clone.33+0x15f/0x15f [btrfs]
[ 1568.794459]  [810c9ada] ? pagevec_lookup_tag+0x24/0x2e
[ 1568.794478]  [a0052e0e] ? btrfs_writepages+0x27/0x29 [btrfs]
[ 1568.794481]  [810c90b1] ? do_writepages+0x20/0x29
[ 1568.794485]  [8162d58c] schedule+0x64/0x66
[ 1568.794505]  [a0061547]
btrfs_start_ordered_extent+0xde/0xfa [btrfs]
[ 1568.794508]  [8104b10e] ? wake_up_bit+0x2a/0x2a
[ 1568.794529]  [a0061984] ?
btrfs_lookup_first_ordered_extent+0x65/0x99 [btrfs]
[ 1568.794549]  [a0061a6a] btrfs_wait_ordered_range+0xb2/0xda [btrfs]
[ 1568.794569]  [a0061bcc]
btrfs_run_ordered_operations+0x13a/0x1c1 [btrfs]
[ 1568.794587]  [a004f5f5]
btrfs_commit_transaction+0x287/0x960 [btrfs]
[ 1568.794606]  [a00502b1] ? start_transaction+0x2d5/0x310 [btrfs]
[ 1568.794609]  [8104b10e] ? wake_up_bit+0x2a/0x2a
[ 1568.794627]  [a004913b] transaction_kthread+0x187/0x258 [btrfs]
[ 1568.794644]  [a0048fb4] ? btrfs_alloc_root+0x42/0x42 [btrfs]
[ 1568.794661]  [a0048fb4] ? btrfs_alloc_root+0x42/0x42 [btrfs]
[ 1568.794664]  [8104ac37] kthread+0x89/0x91
[ 1568.794668]  [8162fd74] kernel_thread_helper+0x4/0x10
[ 1568.794671]  [8104abae] ? kthread_freezable_should_stop+0x57/0x57
[ 1568.794674]  [8162fd70] ? gs_change+0xb/0xb
[ 1568.794676] flush-btrfs-1   D 88012579c000 0  3857  2 0x
[ 1568.794680]  880037125670 0046 880037124000
8801254b8000
[ 1568.794684]  00012280 880037125fd8 00012280
4000
[