Re: v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
On Mon, Feb 26, 2018 at 01:44:55PM +0100, Jan Kara wrote:
> On Mon 26-02-18 11:38:19, Mark Rutland wrote:
> > That seems to be it!
> >
> > With the below patch applied, I can't trigger the bug after ~10 minutes,
> > whereas prior to the patch I can trigger it in ~10 seconds. I'll leave
> > that running for a while just in case there's another part to the
> > problem, but FWIW:
> >
> > Tested-by: Mark Rutland
>
> Thanks for testing! Sent the patch to Jens for inclusion.

Cheers! FWIW, I left my test case running for a day with no issue, so
this looks rock solid.

Mark.
Re: v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
On Mon 26-02-18 11:38:19, Mark Rutland wrote:
> On Mon, Feb 26, 2018 at 11:52:56AM +0100, Jan Kara wrote:
> > On Fri 23-02-18 15:47:36, Mark Rutland wrote:
> > > Hi all,
> > >
> > > While fuzzing arm64/v4.16-rc2 with syzkaller, I simultaneously hit a
> > > number of splats in the block layer:
> > >
> > > * inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
> > >   jbd2_trans_will_send_data_barrier
> > >
> > > * BUG: sleeping function called from invalid context at mm/mempool.c:320
> > >
> > > * WARNING: CPU: 0 PID: 0 at block/blk.h:297
> > >   generic_make_request_checks+0x670/0x750
> > >
> > > ... I've included the full splats at the end of the mail.
> > >
> > > These all happen in the context of the virtio block IRQ handler, so I
> > > wonder if this calls something that doesn't expect to be called from IRQ
> > > context. Is it valid to call blk_mq_complete_request() or
> > > blk_mq_end_request() from an IRQ handler?
> >
> > No, it's likely a bug in the detection of whether IO completion should be
> > deferred to a workqueue or not. Does the attached patch fix the problem? I
> > don't see exactly this being triggered by syzkaller but it's close enough :)
> >
> > 								Honza
>
> That seems to be it!
>
> With the below patch applied, I can't trigger the bug after ~10 minutes,
> whereas prior to the patch I can trigger it in ~10 seconds. I'll leave
> that running for a while just in case there's another part to the
> problem, but FWIW:
>
> Tested-by: Mark Rutland

Thanks for testing! Sent the patch to Jens for inclusion.

								Honza
-- 
Jan Kara
SUSE Labs, CR
Re: v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
On Mon, Feb 26, 2018 at 11:52:56AM +0100, Jan Kara wrote:
> On Fri 23-02-18 15:47:36, Mark Rutland wrote:
> > Hi all,
> >
> > While fuzzing arm64/v4.16-rc2 with syzkaller, I simultaneously hit a
> > number of splats in the block layer:
> >
> > * inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
> >   jbd2_trans_will_send_data_barrier
> >
> > * BUG: sleeping function called from invalid context at mm/mempool.c:320
> >
> > * WARNING: CPU: 0 PID: 0 at block/blk.h:297
> >   generic_make_request_checks+0x670/0x750
> >
> > ... I've included the full splats at the end of the mail.
> >
> > These all happen in the context of the virtio block IRQ handler, so I
> > wonder if this calls something that doesn't expect to be called from IRQ
> > context. Is it valid to call blk_mq_complete_request() or
> > blk_mq_end_request() from an IRQ handler?
>
> No, it's likely a bug in the detection of whether IO completion should be
> deferred to a workqueue or not. Does the attached patch fix the problem? I
> don't see exactly this being triggered by syzkaller but it's close enough :)
>
> 								Honza

That seems to be it!

With the below patch applied, I can't trigger the bug after ~10 minutes,
whereas prior to the patch I can trigger it in ~10 seconds. I'll leave
that running for a while just in case there's another part to the
problem, but FWIW:

Tested-by: Mark Rutland

Thanks,
Mark.

> From 501d97ed88f5020a55a0de4d546df5ad11461cea Mon Sep 17 00:00:00 2001
> From: Jan Kara
> Date: Mon, 26 Feb 2018 11:36:52 +0100
> Subject: [PATCH] direct-io: Fix sleep in atomic due to sync AIO
>
> Commit e864f39569f4 "fs: add RWF_DSYNC aand RWF_SYNC" added an additional
> way for direct IO to become synchronous and thus trigger fsync from the
> IO completion handler. Then commit 9830f4be159b "fs: Use RWF_* flags for
> AIO operations" allowed these flags to be set for AIO as well. However
> that commit forgot to update the condition checking whether the IO
> completion handling should be deferred to a workqueue, and thus AIO DIO
> with RWF_[D]SYNC set will call fsync() from IRQ context, resulting in
> sleep in atomic.
>
> Fix the problem by directly checking the iocb flags (the same way as is
> done in dio_complete()) instead of checking all conditions that could
> lead to the IO being synchronous.
>
> CC: Christoph Hellwig
> CC: Goldwyn Rodrigues
> CC: stable@vger.kernel.org
> Reported-by: Mark Rutland
> Fixes: 9830f4be159b29399d107bffb99e0132bc5aedd4
> Signed-off-by: Jan Kara
> ---
>  fs/direct-io.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/fs/direct-io.c b/fs/direct-io.c
> index a0ca9e48e993..1357ef563893 100644
> --- a/fs/direct-io.c
> +++ b/fs/direct-io.c
> @@ -1274,8 +1274,7 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
>  	 */
>  	if (dio->is_async && iov_iter_rw(iter) == WRITE) {
>  		retval = 0;
> -		if ((iocb->ki_filp->f_flags & O_DSYNC) ||
> -		    IS_SYNC(iocb->ki_filp->f_mapping->host))
> +		if (iocb->ki_flags & IOCB_DSYNC)
>  			retval = dio_set_defer_completion(dio);
>  		else if (!dio->inode->i_sb->s_dio_done_wq) {
>  			/*
> -- 
> 2.13.6
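For context on what the one-liner changes: dio_set_defer_completion() routes
the completion through the superblock's s_dio_done_wq workqueue, so the
eventual generic_write_sync()/fsync runs in process context instead of in the
bio end_io callback. Roughly, as a condensed paraphrase of the fs/direct-io.c
completion path (not verbatim 4.16 source):

/* Paraphrase of the fs/direct-io.c AIO completion path, not verbatim. */
static void dio_bio_end_aio(struct bio *bio)	/* runs in IRQ context */
{
	struct dio *dio = bio->bi_private;
	...
	if (dio->defer_completion) {
		/*
		 * Punt to process context: the work item calls dio_complete(),
		 * which may invoke generic_write_sync() and sleep safely.
		 */
		INIT_WORK(&dio->complete_work, dio_aio_complete_work);
		queue_work(dio->inode->i_sb->s_dio_done_wq,
			   &dio->complete_work);
	} else {
		/*
		 * Complete inline -- must not sleep. Before the patch, an AIO
		 * DIO write whose only sync trigger was RWF_[D]SYNC landed
		 * here and called fsync from IRQ context.
		 */
		dio_complete(dio, 0, DIO_COMPLETE_ASYNC);
	}
}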
Re: v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
On Fri 23-02-18 15:47:36, Mark Rutland wrote:
> Hi all,
>
> While fuzzing arm64/v4.16-rc2 with syzkaller, I simultaneously hit a
> number of splats in the block layer:
>
> * inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
>   jbd2_trans_will_send_data_barrier
>
> * BUG: sleeping function called from invalid context at mm/mempool.c:320
>
> * WARNING: CPU: 0 PID: 0 at block/blk.h:297
>   generic_make_request_checks+0x670/0x750
>
> ... I've included the full splats at the end of the mail.
>
> These all happen in the context of the virtio block IRQ handler, so I
> wonder if this calls something that doesn't expect to be called from IRQ
> context. Is it valid to call blk_mq_complete_request() or
> blk_mq_end_request() from an IRQ handler?

No, it's likely a bug in the detection of whether IO completion should be
deferred to a workqueue or not. Does the attached patch fix the problem? I
don't see exactly this being triggered by syzkaller but it's close enough :)

								Honza

> Syzkaller came up with a minimized reproducer, but it's a bit wacky (the
> fcntl and bpf calls should have no practical effect), and I haven't
> managed to come up with a C reproducer.
>
> Any ideas?
>
> Thanks,
> Mark.
>
>
> Syzkaller reproducer:
> # {Threaded:true Collide:true Repeat:false Procs:1 Sandbox:setuid Fault:false FaultCall:-1 FaultNth:0 EnableTun:true UseTmpDir:true HandleSegv:true WaitRepeat:false Debug:false Repro:false}
> mmap(&(0x7f00/0x24000)=nil, 0x24000, 0x3, 0x32, 0x, 0x0)
> r0 = openat(0xff9c, &(0x7f019000-0x8)='./file0\x00', 0x42, 0x0)
> fcntl$setstatus(r0, 0x4, 0x1)
> ftruncate(r0, 0x400)
> io_setup(0x1f, &(0x7f018000)=0x0)
> io_submit(r1, 0x1, &(0x7f01d000-0x28)=[&(0x7f01b000)={0x0, 0x0, 0x0, 0x1, 0x0, r0, &(0x7f022000-0x1000)="0...000", 0x200, 0x0, 0x0, 0x0, 0x0}])
> bpf$BPF_PROG_ATTACH(0x, &(0x7f01b000)={0x0, 0x0, 0x3, 0x2}, 0x4000)
>
>
> Full splat:
> [ 162.337073] ================================
> [ 162.338055] WARNING: inconsistent lock state
> [ 162.339017] 4.16.0-rc2 #1 Not tainted
> [ 162.339797] --------------------------------
> [ 162.340725] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage.
> [ 162.342030] swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
> [ 162.343061]  (&journal->j_state_lock){+?++}, at: [<3b9c3e4b>] jbd2_trans_will_send_data_barrier+0x44/0xc8
> [ 162.353187] {HARDIRQ-ON-W} state was registered at:
> [ 162.354433]   lock_acquire+0x48/0x68
> [ 162.358640]   _raw_write_lock+0x3c/0x50
> [ 162.360716]   ext4_init_journal_params.isra.6+0x40/0xa0
> [ 162.363445]   ext4_fill_super+0x25cc/0x2e88
> [ 162.364481]   mount_bdev+0x19c/0x1d8
> [ 162.365345]   ext4_mount+0x14/0x20
> [ 162.366130]   mount_fs+0x34/0x160
> [ 162.366790]   vfs_kern_mount.part.8+0x54/0x160
> [ 162.367874]   do_mount+0x540/0xd40
> [ 162.373776]   SyS_mount+0x68/0x100
> [ 162.374467]   mount_block_root+0x11c/0x28c
> [ 162.376558]   mount_root+0x130/0x164
> [ 162.380753]   prepare_namespace+0x138/0x180
> [ 162.381729]   kernel_init_freeable+0x25c/0x280
> [ 162.382625]   kernel_init+0x10/0x100
> [ 162.383337]   ret_from_fork+0x10/0x18
> [ 162.384072] irq event stamp: 3670810
> [ 162.384787] hardirqs last enabled at (3670805): [] arch_cpu_idle+0x14/0x28
> [ 162.386505] hardirqs last disabled at (3670806): [<341112e2>] el1_irq+0x74/0x130
> [ 162.388107] softirqs last enabled at (3670810): [] _local_bh_enable+0x20/0x40
> [ 162.389880] softirqs last disabled at (3670809): [] irq_enter+0x54/0x70
> [ 162.391443]
> [ 162.391443] other info that might help us debug this:
> [ 162.392680]  Possible unsafe locking scenario:
> [ 162.392680]
> [ 162.405967]        CPU0
> [ 162.406513]        ----
> [ 162.407055]   lock(&journal->j_state_lock);
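Spelling out the "detection" referred to above: IOCB_DSYNC can end up in
iocb->ki_flags from two places, and the old check in do_blockdev_direct_IO()
only recomputed the first of them. Paraphrasing the include/linux/fs.h helpers
of this era (condensed, not verbatim source):

/* Paraphrased from include/linux/fs.h, not verbatim. */
static inline int iocb_flags(struct file *file)
{
	int res = 0;
	/* Source 1: file-wide state -- exactly what the old check rebuilt. */
	if ((file->f_flags & O_DSYNC) || IS_SYNC(file->f_mapping->host))
		res |= IOCB_DSYNC;
	...
	return res;
}

static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
{
	/*
	 * Source 2: per-IO RWF_* flags, reachable from io_submit() since
	 * commit 9830f4be159b -- invisible to a check on f_flags alone.
	 */
	if (flags & RWF_DSYNC)
		ki->ki_flags |= IOCB_DSYNC;
	if (flags & RWF_SYNC)
		ki->ki_flags |= (IOCB_DSYNC | IOCB_SYNC);
	...
	return 0;
}

Testing iocb->ki_flags & IOCB_DSYNC, as Jan's patch does, covers both sources.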
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
Hi all,

While fuzzing arm64/v4.16-rc2 with syzkaller, I simultaneously hit a
number of splats in the block layer:

* inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
  jbd2_trans_will_send_data_barrier

* BUG: sleeping function called from invalid context at mm/mempool.c:320

* WARNING: CPU: 0 PID: 0 at block/blk.h:297
  generic_make_request_checks+0x670/0x750

... I've included the full splats at the end of the mail.

These all happen in the context of the virtio block IRQ handler, so I
wonder if this calls something that doesn't expect to be called from IRQ
context. Is it valid to call blk_mq_complete_request() or
blk_mq_end_request() from an IRQ handler?

Syzkaller came up with a minimized reproducer, but it's a bit wacky (the
fcntl and bpf calls should have no practical effect), and I haven't
managed to come up with a C reproducer.

Any ideas?

Thanks,
Mark.


Syzkaller reproducer:
# {Threaded:true Collide:true Repeat:false Procs:1 Sandbox:setuid Fault:false FaultCall:-1 FaultNth:0 EnableTun:true UseTmpDir:true HandleSegv:true WaitRepeat:false Debug:false Repro:false}
mmap(&(0x7f00/0x24000)=nil, 0x24000, 0x3, 0x32, 0x, 0x0)
r0 = openat(0xff9c, &(0x7f019000-0x8)='./file0\x00', 0x42, 0x0)
fcntl$setstatus(r0, 0x4, 0x1)
ftruncate(r0, 0x400)
io_setup(0x1f, &(0x7f018000)=0x0)
io_submit(r1, 0x1, &(0x7f01d000-0x28)=[&(0x7f01b000)={0x0, 0x0, 0x0, 0x1, 0x0, r0, &(0x7f022000-0x1000)="0...000", 0x200, 0x0, 0x0, 0x0, 0x0}])
bpf$BPF_PROG_ATTACH(0x, &(0x7f01b000)={0x0, 0x0, 0x3, 0x2}, 0x4000)


Full splat:
[ 162.337073] ================================
[ 162.338055] WARNING: inconsistent lock state
[ 162.339017] 4.16.0-rc2 #1 Not tainted
[ 162.339797] --------------------------------
[ 162.340725] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage.
[ 162.342030] swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
[ 162.343061]  (&journal->j_state_lock){+?++}, at: [<3b9c3e4b>] jbd2_trans_will_send_data_barrier+0x44/0xc8
[ 162.353187] {HARDIRQ-ON-W} state was registered at:
[ 162.354433]   lock_acquire+0x48/0x68
[ 162.358640]   _raw_write_lock+0x3c/0x50
[ 162.360716]   ext4_init_journal_params.isra.6+0x40/0xa0
[ 162.363445]   ext4_fill_super+0x25cc/0x2e88
[ 162.364481]   mount_bdev+0x19c/0x1d8
[ 162.365345]   ext4_mount+0x14/0x20
[ 162.366130]   mount_fs+0x34/0x160
[ 162.366790]   vfs_kern_mount.part.8+0x54/0x160
[ 162.367874]   do_mount+0x540/0xd40
[ 162.373776]   SyS_mount+0x68/0x100
[ 162.374467]   mount_block_root+0x11c/0x28c
[ 162.376558]   mount_root+0x130/0x164
[ 162.380753]   prepare_namespace+0x138/0x180
[ 162.381729]   kernel_init_freeable+0x25c/0x280
[ 162.382625]   kernel_init+0x10/0x100
[ 162.383337]   ret_from_fork+0x10/0x18
[ 162.384072] irq event stamp: 3670810
[ 162.384787] hardirqs last enabled at (3670805): [] arch_cpu_idle+0x14/0x28
[ 162.386505] hardirqs last disabled at (3670806): [<341112e2>] el1_irq+0x74/0x130
[ 162.388107] softirqs last enabled at (3670810): [] _local_bh_enable+0x20/0x40
[ 162.389880] softirqs last disabled at (3670809): [] irq_enter+0x54/0x70
[ 162.391443]
[ 162.391443] other info that might help us debug this:
[ 162.392680]  Possible unsafe locking scenario:
[ 162.392680]
[ 162.405967]        CPU0
[ 162.406513]        ----
[ 162.407055]   lock(&journal->j_state_lock);
[ 162.407880]   <Interrupt>
[ 162.408400]     lock(&journal->j_state_lock);
[ 162.409287]
[ 162.409287]  *** DEADLOCK ***
[ 162.409287]
[ 162.410447] 2 locks held by swapper/0/0:
[ 162.411248]  #0:  (&(&vblk->vqs[i].lock)->rlock){-.-.}, at: [] virtblk_done+0x50/0xf8
[ 162.413101]  #1:  (rcu_read_lock){}, at: [<2bf2a216>] hctx_lock+0x1c/0xe8
[ 162.414630]
[ 162.414630] stack backtrace:
[ 162.415492] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.16.0-rc2 #1
[ 162.429624] Hardware name: linux,dummy
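The fix makes the trigger conditions clear enough to sketch the C reproducer
that was missing above: an AIO direct-IO write submitted with RWF_DSYNC. The
following is an untested sketch inferred from the syzkaller program and the
patch description, not a known-good reproducer -- O_DIRECT and RWF_DSYNC are
assumptions, './file0' should live on ext4, and it needs uapi headers new
enough to have iocb.aio_rw_flags (renamed from aio_reserved1 by commit
9830f4be159b):

/*
 * Untested sketch: AIO DIO write + RWF_DSYNC, the combination the patch
 * says will call fsync from IRQ context on unfixed kernels.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/aio_abi.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef RWF_DSYNC
#define RWF_DSYNC 0x00000002	/* per-IO O_DSYNC, from uapi linux/fs.h */
#endif

int main(void)
{
	aio_context_t ctx = 0;
	struct iocb cb;
	struct iocb *cbs[1] = { &cb };
	void *buf;

	/* O_DIRECT so the write goes through fs/direct-io.c. */
	int fd = open("./file0", O_RDWR | O_CREAT | O_DIRECT, 0600);

	if (fd < 0 || ftruncate(fd, 1 << 20))
		return 1;

	/* Direct IO wants an aligned buffer. */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 0, 4096);

	if (syscall(__NR_io_setup, 31, &ctx))
		return 1;

	memset(&cb, 0, sizeof(cb));
	cb.aio_lio_opcode = IOCB_CMD_PWRITE;
	cb.aio_fildes = fd;
	cb.aio_buf = (unsigned long)buf;
	cb.aio_nbytes = 4096;
	/* The per-iocb sync flag the old do_blockdev_direct_IO check missed. */
	cb.aio_rw_flags = RWF_DSYNC;

	/* On an unfixed kernel, completion (and the fsync) runs in IRQ context. */
	return syscall(__NR_io_submit, ctx, 1, cbs) != 1;
}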