Re: Unbreakable loop in fuse_fill_write_pages()

2020-10-15 Thread Qian Cai
On Tue, 2020-10-13 at 14:40 -0400, Vivek Goyal wrote:
> > == the thread is stuck in the loop ==
> > [10813.290694] task:trinity-c33 state:D stack:25888 pid:254219 ppid: 87180 flags:0x4004
> > [10813.292671] Call Trace:
> > [10813.293379]  __schedule+0x71d/0x1b50
> > [10813.294182]  ? __sched_text_start+0x8/0x8
> > [10813.295146]  ? mark_held_locks+0xb0/0x110
> > [10813.296117]  schedule+0xbf/0x270
> > [10813.296782]  ? __lock_page_killable+0x276/0x830
> > [10813.297867]  io_schedule+0x17/0x60
> > [10813.298772]  __lock_page_killable+0x33b/0x830
> 
> This seems to suggest that filemap_fault() is blocked on a page lock and
> is sleeping. For some reason it never wakes up. Not sure why.
> 
> And this will be called from:
> 
> fuse_fill_write_pages()
>    iov_iter_fault_in_readable()
> 
> So the fuse code takes inode_lock(), and then the same process appears to
> be sleeping, waiting on a page lock. The rest of the processes get blocked
> behind the inode lock.
> 
> If we are woken up (while waiting on the page lock), we should make forward
> progress. The question is which page it is and why the entity holding the
> lock is not releasing it.

FYI, it was mentioned that this is likely a deadlock in FUSE:

https://lore.kernel.org/linux-fsdevel/CAHk-=wh9Eu-gNHzqgfvUAAiO=vj+pwnzxkv+tx55xhgpfy+...@mail.gmail.com/





Re: [Virtio-fs] Unbreakable loop in fuse_fill_write_pages()

2020-10-14 Thread Dr. David Alan Gilbert
* Qian Cai (c...@redhat.com) wrote:
> On Tue, 2020-10-13 at 14:58 -0400, Vivek Goyal wrote:
> 
> > I am wondering if virtiofsd is still alive and responding to requests? I
> > see another task which is blocked on getdents() for more than 120s.
> > 
> > [10580.142571][  T348] INFO: task trinity-c36:254165 blocked for more than 123 seconds.
> > [10580.143924][  T348]   Tainted: G   O  5.9.0-next-20201013+ #2
> > [10580.145158][  T348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > [10580.146636][  T348] task:trinity-c36 state:D stack:26704 pid:254165 ppid: 87180 flags:0x0004
> > [10580.148260][  T348] Call Trace:
> > [10580.148789][  T348]  __schedule+0x71d/0x1b50
> > [10580.149532][  T348]  ? __sched_text_start+0x8/0x8
> > [10580.150343][  T348]  schedule+0xbf/0x270
> > [10580.151044][  T348]  schedule_preempt_disabled+0xc/0x20
> > [10580.152006][  T348]  __mutex_lock+0x9f1/0x1360
> > [10580.152777][  T348]  ? __fdget_pos+0x9c/0xb0
> > [10580.153484][  T348]  ? mutex_lock_io_nested+0x1240/0x1240
> > [10580.154432][  T348]  ? find_held_lock+0x33/0x1c0
> > [10580.155220][  T348]  ? __fdget_pos+0x9c/0xb0
> > [10580.155934][  T348]  __fdget_pos+0x9c/0xb0
> > [10580.156660][  T348]  __x64_sys_getdents+0xff/0x230
> > 
> > Maybe virtiofsd crashed and hence no requests are completing, leading
> > to a hard lockup?
> Virtiofsd is still working. Once this happened, I manually created a file on
> the guest (in virtiofs) and could then see its content from the host.

If virtiofsd is still running, attach gdb to it and get a full backtrace:

gdb --pid  whatever

(gdb) t a a bt full

that should show if it's stuck in one particular place.
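
If attaching interactively is inconvenient, an equivalent non-interactive
one-liner (just a convenience; it relies only on gdb's standard --batch,
--pid and -ex options) would be:

gdb --batch --pid  whatever -ex 'thread apply all bt full'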

Dave


-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: Unbreakable loop in fuse_fill_write_pages()

2020-10-13 Thread Qian Cai
On Tue, 2020-10-13 at 15:57 -0400, Vivek Goyal wrote:
> Hmm... so how do I reproduce it? Just run trinity as root and it will
> reproduce after some time?

You only need to run it as an unprivileged user after mounting virtiofs on /tmp
(trinity will need to create and use files there), with as many CPUs as
possible. Also, make sure your guest's memory usage does not exceed the host's
/dev/shm size; otherwise, horrible things could happen.

$ trinity -C 48 --arch 64

It might core dump or exit for some other unrelated reason, so just keep
retrying. It is best to apply your recent patch for the virtiofs false-positive
warning first, so it won't taint the kernel, which would stop trinity. Today, I
was able to reproduce it twice, within half an hour each time.





Re: Unbreakable loop in fuse_fill_write_pages()

2020-10-13 Thread Vivek Goyal
On Tue, Oct 13, 2020 at 03:53:19PM -0400, Qian Cai wrote:
> On Tue, 2020-10-13 at 14:58 -0400, Vivek Goyal wrote:
> 
> > I am wondering if virtiofsd is still alive and responding to requests? I
> > see another task which is blocked on getdents() for more than 120s.
> > 
> > [10580.142571][  T348] INFO: task trinity-c36:254165 blocked for more than 123 seconds.
> > [10580.143924][  T348]   Tainted: G   O  5.9.0-next-20201013+ #2
> > [10580.145158][  T348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > [10580.146636][  T348] task:trinity-c36 state:D stack:26704 pid:254165 ppid: 87180 flags:0x0004
> > [10580.148260][  T348] Call Trace:
> > [10580.148789][  T348]  __schedule+0x71d/0x1b50
> > [10580.149532][  T348]  ? __sched_text_start+0x8/0x8
> > [10580.150343][  T348]  schedule+0xbf/0x270
> > [10580.151044][  T348]  schedule_preempt_disabled+0xc/0x20
> > [10580.152006][  T348]  __mutex_lock+0x9f1/0x1360
> > [10580.152777][  T348]  ? __fdget_pos+0x9c/0xb0
> > [10580.153484][  T348]  ? mutex_lock_io_nested+0x1240/0x1240
> > [10580.154432][  T348]  ? find_held_lock+0x33/0x1c0
> > [10580.155220][  T348]  ? __fdget_pos+0x9c/0xb0
> > [10580.155934][  T348]  __fdget_pos+0x9c/0xb0
> > [10580.156660][  T348]  __x64_sys_getdents+0xff/0x230
> > 
> > Maybe virtiofsd crashed and hence no requests are completing, leading
> > to a hard lockup?
> Virtiofsd is still working. Once this happened, I manually created a file on
> the guest (in virtiofs) and could then see its content from the host.

Hmm... so how do I reproduce it? Just run trinity as root and it will
reproduce after some time?

Vivek



Re: Unbreakable loop in fuse_fill_write_pages()

2020-10-13 Thread Qian Cai
On Tue, 2020-10-13 at 14:58 -0400, Vivek Goyal wrote:

> I am wondering if virtiofsd is still alive and responding to requests? I
> see another task which is blocked on getdents() for more than 120s.
> 
> [10580.142571][  T348] INFO: task trinity-c36:254165 blocked for more than 123 seconds.
> [10580.143924][  T348]   Tainted: G   O  5.9.0-next-20201013+ #2
> [10580.145158][  T348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10580.146636][  T348] task:trinity-c36 state:D stack:26704 pid:254165 ppid: 87180 flags:0x0004
> [10580.148260][  T348] Call Trace:
> [10580.148789][  T348]  __schedule+0x71d/0x1b50
> [10580.149532][  T348]  ? __sched_text_start+0x8/0x8
> [10580.150343][  T348]  schedule+0xbf/0x270
> [10580.151044][  T348]  schedule_preempt_disabled+0xc/0x20
> [10580.152006][  T348]  __mutex_lock+0x9f1/0x1360
> [10580.152777][  T348]  ? __fdget_pos+0x9c/0xb0
> [10580.153484][  T348]  ? mutex_lock_io_nested+0x1240/0x1240
> [10580.154432][  T348]  ? find_held_lock+0x33/0x1c0
> [10580.155220][  T348]  ? __fdget_pos+0x9c/0xb0
> [10580.155934][  T348]  __fdget_pos+0x9c/0xb0
> [10580.156660][  T348]  __x64_sys_getdents+0xff/0x230
> 
> Maybe virtiofsd crashed and hence no requests are completing, leading
> to a hard lockup?
Virtiofsd is still working. Once this happened, I manually created a file on the
guest (in virtiofs) and could then see its content from the host.



Re: Unbreakable loop in fuse_fill_write_pages()

2020-10-13 Thread Qian Cai
On Tue, 2020-10-13 at 14:58 -0400, Vivek Goyal wrote:
> I am wondering if virtiofsd is still alive and responding to requests? I
> see another task which is blocked on getdents() for more than 120s.
> 
> [10580.142571][  T348] INFO: task trinity-c36:254165 blocked for more than 123 seconds.
> [10580.143924][  T348]   Tainted: G   O  5.9.0-next-20201013+ #2
> [10580.145158][  T348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10580.146636][  T348] task:trinity-c36 state:D stack:26704 pid:254165 ppid: 87180 flags:0x0004
> [10580.148260][  T348] Call Trace:
> [10580.148789][  T348]  __schedule+0x71d/0x1b50
> [10580.149532][  T348]  ? __sched_text_start+0x8/0x8
> [10580.150343][  T348]  schedule+0xbf/0x270
> [10580.151044][  T348]  schedule_preempt_disabled+0xc/0x20
> [10580.152006][  T348]  __mutex_lock+0x9f1/0x1360
> [10580.152777][  T348]  ? __fdget_pos+0x9c/0xb0
> [10580.153484][  T348]  ? mutex_lock_io_nested+0x1240/0x1240
> [10580.154432][  T348]  ? find_held_lock+0x33/0x1c0
> [10580.155220][  T348]  ? __fdget_pos+0x9c/0xb0
> [10580.155934][  T348]  __fdget_pos+0x9c/0xb0
> [10580.156660][  T348]  __x64_sys_getdents+0xff/0x230
> 
> Maybe virtiofsd crashed and hence no requests are completing, leading
> to a hard lockup?
No, it did not crash. After I forcibly closed the guest, the virtiofsd daemon
exited normally. However, I can't tell exactly whether the virtiofsd daemon was
still functioning normally. I'll enable debugging and retry to see if there is
anything interesting.



Re: Unbreakable loop in fuse_fill_write_pages()

2020-10-13 Thread Vivek Goyal
On Tue, Oct 13, 2020 at 02:40:26PM -0400, Vivek Goyal wrote:
> On Tue, Oct 13, 2020 at 01:11:05PM -0400, Qian Cai wrote:
> > Running some fuzzing on virtiofs with an unprivileged user on today's 
> > linux-next 
> > could trigger soft-lockups below.
> > 
> > # virtiofsd --socket-path=/tmp/vhostqemu -o source=$TESTDIR -o cache=always 
> > -o no_posix_lock
> > 
> > Basically, everything was blocking on inode_lock(inode) because one thread
> > (trinity-c33) was holding it but stuck in the loop in 
> > fuse_fill_write_pages()
> > and unable to exit for more than 10 minutes before I executed sysrq-t.
> > Afterwards, the system was totally unresponsive:
> > 
> > kernel:NMI watchdog: Watchdog detected hard LOCKUP on cpu 8
> > 
> > To exit the loop, it needs:
> > 
> > iov_iter_advance(ii, tmp) to set "tmp" to non-zero for each iteration.
> > 
> > and
> > 
> > } while (iov_iter_count(ii) && count < fc->max_write &&
> >  ap->num_pages < max_pages && offset == 0);
> > 
> > == the thread is stuck in the loop ==
> > [10813.290694] task:trinity-c33 state:D stack:25888 pid:254219 ppid: 87180 flags:0x4004
> > [10813.292671] Call Trace:
> > [10813.293379]  __schedule+0x71d/0x1b50
> > [10813.294182]  ? __sched_text_start+0x8/0x8
> > [10813.295146]  ? mark_held_locks+0xb0/0x110
> > [10813.296117]  schedule+0xbf/0x270
> > [10813.296782]  ? __lock_page_killable+0x276/0x830
> > [10813.297867]  io_schedule+0x17/0x60
> > [10813.298772]  __lock_page_killable+0x33b/0x830
> 
> This seems to suggest that filemap_fault() is blocked on a page lock and
> is sleeping. For some reason it never wakes up. Not sure why.
> 
> And this will be called from:
> 
> fuse_fill_write_pages()
>    iov_iter_fault_in_readable()
> 
> So the fuse code takes inode_lock(), and then the same process appears to
> be sleeping, waiting on a page lock. The rest of the processes get blocked
> behind the inode lock.
> 
> If we are woken up (while waiting on the page lock), we should make forward
> progress. The question is which page it is and why the entity holding the
> lock is not releasing it.

I am wondering if virtiofsd is still alive and responding to requests? I
see another task which is blocked on getdents() for more than 120s.

[10580.142571][  T348] INFO: task trinity-c36:254165 blocked for more than 123 seconds.
[10580.143924][  T348]   Tainted: G   O  5.9.0-next-20201013+ #2
[10580.145158][  T348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10580.146636][  T348] task:trinity-c36 state:D stack:26704 pid:254165 ppid: 87180 flags:0x0004
[10580.148260][  T348] Call Trace:
[10580.148789][  T348]  __schedule+0x71d/0x1b50
[10580.149532][  T348]  ? __sched_text_start+0x8/0x8
[10580.150343][  T348]  schedule+0xbf/0x270
[10580.151044][  T348]  schedule_preempt_disabled+0xc/0x20
[10580.152006][  T348]  __mutex_lock+0x9f1/0x1360
[10580.152777][  T348]  ? __fdget_pos+0x9c/0xb0
[10580.153484][  T348]  ? mutex_lock_io_nested+0x1240/0x1240
[10580.154432][  T348]  ? find_held_lock+0x33/0x1c0
[10580.155220][  T348]  ? __fdget_pos+0x9c/0xb0
[10580.155934][  T348]  __fdget_pos+0x9c/0xb0
[10580.156660][  T348]  __x64_sys_getdents+0xff/0x230

Maybe virtiofsd crashed and hence no requests are completing, leading
to a hard lockup?

Vivek



Re: Unbreakable loop in fuse_fill_write_pages()

2020-10-13 Thread Vivek Goyal
On Tue, Oct 13, 2020 at 01:11:05PM -0400, Qian Cai wrote:
> Running some fuzzing on virtiofs with an unprivileged user on today's 
> linux-next 
> could trigger soft-lockups below.
> 
> # virtiofsd --socket-path=/tmp/vhostqemu -o source=$TESTDIR -o cache=always 
> -o no_posix_lock
> 
> Basically, everything was blocking on inode_lock(inode) because one thread
> (trinity-c33) was holding it but stuck in the loop in fuse_fill_write_pages()
> and unable to exit for more than 10 minutes before I executed sysrq-t.
> Afterwards, the system was totally unresponsive:
> 
> kernel:NMI watchdog: Watchdog detected hard LOCKUP on cpu 8
> 
> To exit the loop, it needs:
> 
> iov_iter_advance(ii, tmp) to set "tmp" to non-zero for each iteration.
> 
> and
> 
>   } while (iov_iter_count(ii) && count < fc->max_write &&
>ap->num_pages < max_pages && offset == 0);
> 
> == the thread is stuck in the loop ==
> [10813.290694] task:trinity-c33 state:D stack:25888 pid:254219 ppid: 87180 flags:0x4004
> [10813.292671] Call Trace:
> [10813.293379]  __schedule+0x71d/0x1b50
> [10813.294182]  ? __sched_text_start+0x8/0x8
> [10813.295146]  ? mark_held_locks+0xb0/0x110
> [10813.296117]  schedule+0xbf/0x270
> [10813.296782]  ? __lock_page_killable+0x276/0x830
> [10813.297867]  io_schedule+0x17/0x60
> [10813.298772]  __lock_page_killable+0x33b/0x830

This seems to suggest that filemap_fault() is blocked on a page lock and
is sleeping. For some reason it never wakes up. Not sure why.

And this will be called from:

fuse_fill_write_pages()
   iov_iter_fault_in_readable()

So the fuse code takes inode_lock(), and then the same process appears to
be sleeping, waiting on a page lock. The rest of the processes get blocked
behind the inode lock.

If we are woken up (while waiting on the page lock), we should make forward
progress. The question is which page it is and why the entity holding the
lock is not releasing it.
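
In other words, the nesting being described looks roughly like this
(reconstructed from the traces in this thread; a simplified sketch, not the
exact linux-next call chain):

/*
 * fuse_file_write_iter()
 *   inode_lock(inode)                  <- all the other tasks block here
 *   fuse_perform_write()
 *     fuse_fill_write_pages()
 *       iov_iter_fault_in_readable()   <- touches the userspace buffer
 *         ... page fault ...
 *           filemap_fault()
 *             __lock_page_killable()   <- this task sleeps here on a page lock
 */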

Thanks
Vivek

> [10813.299695]  ? wait_on_page_bit+0x710/0x710
> [10813.300609]  ? __lock_page_or_retry+0x3c0/0x3c0
> [10813.301894]  ? up_read+0x1a3/0x730
> [10813.302791]  ? page_cache_free_page.isra.45+0x390/0x390
> [10813.304077]  filemap_fault+0x2bd/0x2040
> [10813.305019]  ? read_cache_page_gfp+0x10/0x10
> [10813.306041]  ? lock_downgrade+0x700/0x700
> [10813.306958]  ? replace_page_cache_page+0x1130/0x1130
> [10813.308124]  __do_fault+0xf5/0x530
> [10813.308968]  handle_mm_fault+0x1c0e/0x25b0
> [10813.309955]  ? copy_page_range+0xfe0/0xfe0
> [10813.310895]  do_user_addr_fault+0x383/0x820
> [10813.312084]  exc_page_fault+0x56/0xb0
> [10813.312979]  asm_exc_page_fault+0x1e/0x30
> [10813.313978] RIP: 0010:iov_iter_fault_in_readable+0x271/0x350
> fault_in_pages_readable at include/linux/pagemap.h:745
> (inlined by) iov_iter_fault_in_readable at lib/iov_iter.c:438
> [10813.315293] Code: 48 39 d7 0f 82 1a ff ff ff 0f 01 cb 0f ae e8 44 89 c0 8a 0a
> 0f 01 ca 88 4c 24 70 85 c0 74 da e9 f8 fe ff ff 0f 01 cb 0f ae e8 <8a> 11 0f 01
> ca 88 54 24 30 85 c0 0f 85 04 ff ff ff 48 29 ee e9 45
> [10813.319196] RSP: 0018:c90017ccf830 EFLAGS: 00050246
> [10813.320446] RAX:  RBX: 192002f99f08 RCX: 7fe284f1004c
> [10813.322202] RDX: 0001 RSI: 1000 RDI: 8887a7664000
> [10813.323729] RBP: 1000 R08:  R09: 
> [10813.325282] R10: c90017ccfd48 R11: ed102789d5ff R12: 8887a7664020
> [10813.326898] R13: c90017ccfd40 R14: dc00 R15: 00e0df6a
> [10813.328456]  ? iov_iter_revert+0x8e0/0x8e0
> [10813.329404]  ? copyin+0x96/0xc0
> [10813.330230]  ? iov_iter_copy_from_user_atomic+0x1f0/0xa40
> [10813.331742]  fuse_perform_write+0x3eb/0xf20 [fuse]
> fuse_fill_write_pages at fs/fuse/file.c:1150
> (inlined by) fuse_perform_write at fs/fuse/file.c:1226
> [10813.332880]  ? fuse_file_fallocate+0x5f0/0x5f0 [fuse]
> [10813.334090]  fuse_file_write_iter+0x6b7/0x900 [fuse]
> [10813.335191]  do_iter_readv_writev+0x42b/0x6d0
> [10813.336161]  ? new_sync_write+0x610/0x610
> [10813.337194]  do_iter_write+0x11f/0x5b0
> [10813.338177]  ? __sb_start_write+0x229/0x2d0
> [10813.339169]  vfs_writev+0x16d/0x2d0
> [10813.339973]  ? vfs_iter_write+0xb0/0xb0
> [10813.340950]  ? __fdget_pos+0x9c/0xb0
> [10813.342039]  ? rcu_read_lock_sched_held+0x9c/0xd0
> [10813.343120]  ? rcu_read_lock_bh_held+0xb0/0xb0
> [10813.344104]  ? find_held_lock+0x33/0x1c0
> [10813.345050]  do_writev+0xfb/0x1e0
> [10813.345920]  ? vfs_writev+0x2d0/0x2d0
> [10813.346802]  ? lockdep_hardirqs_on_prepare+0x27c/0x3d0
> [10813.348026]  ? syscall_enter_from_user_mode+0x1c/0x50
> [10813.349197]  do_syscall_64+0x33/0x40
> [10813.350026]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 



Unbreakable loop in fuse_fill_write_pages()

2020-10-13 Thread Qian Cai
Running some fuzzing on virtiofs with an unprivileged user on today's 
linux-next 
could trigger soft-lockups below.

# virtiofsd --socket-path=/tmp/vhostqemu -o source=$TESTDIR -o cache=always -o 
no_posix_lock

Basically, everything was blocking on inode_lock(inode) because one thread
(trinity-c33) was holding it but stuck in the loop in fuse_fill_write_pages()
and unable to exit for more than 10 minutes before I executed sysrq-t.
Afterwards, the system was totally unresponsive:

kernel:NMI watchdog: Watchdog detected hard LOCKUP on cpu 8

To exit the loop, it needs:

iov_iter_advance(ii, tmp) to set "tmp" to non-zero for each iteration.

and

} while (iov_iter_count(ii) && count < fc->max_write &&
 ap->num_pages < max_pages && offset == 0);
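
For reference, the body of that loop has roughly the following shape (a
simplified sketch of fuse_fill_write_pages() around this kernel version, with
error handling and page accounting trimmed; not the exact linux-next source):

	do {
		size_t tmp;
		struct page *page;
		size_t bytes = min_t(size_t, PAGE_SIZE - offset,
				     iov_iter_count(ii));
 again:
		/* May sleep in filemap_fault(), e.g. on __lock_page_killable(). */
		if (iov_iter_fault_in_readable(ii, bytes))
			break;

		page = grab_cache_page_write_begin(mapping, index, 0);
		if (!page)
			break;

		/* Copies with page faults disabled; returns 0 if it would fault. */
		tmp = iov_iter_copy_from_user_atomic(page, ii, offset, bytes);
		iov_iter_advance(ii, tmp);

		if (!tmp) {
			/* Nothing copied: drop the page and retry the same range. */
			unlock_page(page);
			put_page(page);
			bytes = min(bytes, iov_iter_single_seg_count(ii));
			goto again;
		}

		/* ... record the page in ap->pages[], update count/offset ... */
	} while (iov_iter_count(ii) && count < fc->max_write &&
		 ap->num_pages < max_pages && offset == 0);

In the trace below, the task is asleep inside iov_iter_fault_in_readable(),
in filemap_fault() on __lock_page_killable(), while the caller still holds
the inode lock, which is why every other task backs up behind inode_lock().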

== the thread is stuck in the loop ==
[10813.290694] task:trinity-c33 state:D stack:25888 pid:254219 ppid: 87180 flags:0x4004
[10813.292671] Call Trace:
[10813.293379]  __schedule+0x71d/0x1b50
[10813.294182]  ? __sched_text_start+0x8/0x8
[10813.295146]  ? mark_held_locks+0xb0/0x110
[10813.296117]  schedule+0xbf/0x270
[10813.296782]  ? __lock_page_killable+0x276/0x830
[10813.297867]  io_schedule+0x17/0x60
[10813.298772]  __lock_page_killable+0x33b/0x830
[10813.299695]  ? wait_on_page_bit+0x710/0x710
[10813.300609]  ? __lock_page_or_retry+0x3c0/0x3c0
[10813.301894]  ? up_read+0x1a3/0x730
[10813.302791]  ? page_cache_free_page.isra.45+0x390/0x390
[10813.304077]  filemap_fault+0x2bd/0x2040
[10813.305019]  ? read_cache_page_gfp+0x10/0x10
[10813.306041]  ? lock_downgrade+0x700/0x700
[10813.306958]  ? replace_page_cache_page+0x1130/0x1130
[10813.308124]  __do_fault+0xf5/0x530
[10813.308968]  handle_mm_fault+0x1c0e/0x25b0
[10813.309955]  ? copy_page_range+0xfe0/0xfe0
[10813.310895]  do_user_addr_fault+0x383/0x820
[10813.312084]  exc_page_fault+0x56/0xb0
[10813.312979]  asm_exc_page_fault+0x1e/0x30
[10813.313978] RIP: 0010:iov_iter_fault_in_readable+0x271/0x350
fault_in_pages_readable at include/linux/pagemap.h:745
(inlined by) iov_iter_fault_in_readable at lib/iov_iter.c:438
[10813.315293] Code: 48 39 d7 0f 82 1a ff ff ff 0f 01 cb 0f ae e8 44 89 c0 8a 0a
0f 01 ca 88 4c 24 70 85 c0 74 da e9 f8 fe ff ff 0f 01 cb 0f ae e8 <8a> 11 0f 01
ca 88 54 24 30 85 c0 0f 85 04 ff ff ff 48 29 ee e9 45
[10813.319196] RSP: 0018:c90017ccf830 EFLAGS: 00050246
[10813.320446] RAX:  RBX: 192002f99f08 RCX: 7fe284f1004c
[10813.322202] RDX: 0001 RSI: 1000 RDI: 8887a7664000
[10813.323729] RBP: 1000 R08:  R09: 
[10813.325282] R10: c90017ccfd48 R11: ed102789d5ff R12: 8887a7664020
[10813.326898] R13: c90017ccfd40 R14: dc00 R15: 00e0df6a
[10813.328456]  ? iov_iter_revert+0x8e0/0x8e0
[10813.329404]  ? copyin+0x96/0xc0
[10813.330230]  ? iov_iter_copy_from_user_atomic+0x1f0/0xa40
[10813.331742]  fuse_perform_write+0x3eb/0xf20 [fuse]
fuse_fill_write_pages at fs/fuse/file.c:1150
(inlined by) fuse_perform_write at fs/fuse/file.c:1226
[10813.332880]  ? fuse_file_fallocate+0x5f0/0x5f0 [fuse]
[10813.334090]  fuse_file_write_iter+0x6b7/0x900 [fuse]
[10813.335191]  do_iter_readv_writev+0x42b/0x6d0
[10813.336161]  ? new_sync_write+0x610/0x610
[10813.337194]  do_iter_write+0x11f/0x5b0
[10813.338177]  ? __sb_start_write+0x229/0x2d0
[10813.339169]  vfs_writev+0x16d/0x2d0
[10813.339973]  ? vfs_iter_write+0xb0/0xb0
[10813.340950]  ? __fdget_pos+0x9c/0xb0
[10813.342039]  ? rcu_read_lock_sched_held+0x9c/0xd0
[10813.343120]  ? rcu_read_lock_bh_held+0xb0/0xb0
[10813.344104]  ? find_held_lock+0x33/0x1c0
[10813.345050]  do_writev+0xfb/0x1e0
[10813.345920]  ? vfs_writev+0x2d0/0x2d0
[10813.346802]  ? lockdep_hardirqs_on_prepare+0x27c/0x3d0
[10813.348026]  ? syscall_enter_from_user_mode+0x1c/0x50
[10813.349197]  do_syscall_64+0x33/0x40
[10813.350026]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

== soft-lockups ==
[10579.953730][  T348]   Tainted: G   O  5.9.0-next-20201013+ #2
[10579.955016][  T348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10579.956467][  T348] task:trinity-c25 state:D stack:26704 pid:253906 ppid: 87180 flags:0x4002
[10579.958028][  T348] Call Trace:
[10579.958609][  T348]  __schedule+0x71d/0x1b50
[10579.959309][  T348]  ? __sched_text_start+0x8/0x8
[10579.960144][  T348]  schedule+0xbf/0x270
[10579.960774][  T348]  rwsem_down_write_slowpath+0x8ea/0xf30
[10579.961828][  T348]  ? rwsem_mark_wake+0x8d0/0x8d0
[10579.962675][  T348]  ? lockdep_hardirqs_on_prepare+0x3d0/0x3d0
[10579.963721][  T348]  ? rcu_read_lock_sched_held+0x9c/0xd0
[10579.964658][  T348]  ? lock_acquire+0x1c8/0x820
[10579.965453][  T348]  ? down_write+0x138/0x150
[10579.966237][  T348]  ? down_write+0xb3/0x150
[10579.966994][  T348]  down_write+0x138/0x150
[10579.967787][  T348]  ? down_write_killable_nested+0x170/0x170
[10579.968844][  T348]  fuse_flush+0x1a0/0x500 [fuse]
[10579.969732][  T348]  ? fuse_file_lock+0x190/0x19