On Sun, Jun 19, 2022 at 09:05:59AM +0200, Christoph Hellwig wrote:
> When trying to run xfstests on gfs2 (locally, with the lock_nolock
> cluster manager) the first mount already hits this warning in
> inode_to_wb, called from mark_buffer_dirty.  This all seems to be
> standard code from folio_account_dirtied, so I'm not sure what is
> going on there.

I don't think this is new to pagecache/for-next.
https://lore.kernel.org/linux-mm/cf8bc8dd-8e16-3590-a714-51203e6f4...@redhat.com/
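
For context, the warning at include/linux/backing-dev.h:261 is the
lockdep assertion in inode_to_wb(), which __folio_mark_dirty() reaches
via folio_account_dirtied().  Roughly (paraphrased from the 5.19-rc
tree; check the exact tree for the precise form):

    static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
    {
    #ifdef CONFIG_LOCKDEP
    	/* The inode-to-wb association is only stable while one of
    	 * these locks is held, so warn if lockdep sees none of them. */
    	WARN_ON_ONCE(debug_locks &&
    		     (!lockdep_is_held(&inode->i_lock) &&
    		      !lockdep_is_held(&inode->i_mapping->i_pages.xa_lock) &&
    		      !lockdep_is_held(&inode->i_wb->list_lock)));
    #endif
    	return inode->i_wb;
    }

So the WARN fires when inode_to_wb() is called without lockdep seeing
the inode's i_lock, that inode's own i_pages xa_lock, or the wb
list_lock held.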

> 
> [   30.440408] ------------[ cut here ]------------
> [   30.440409] WARNING: CPU: 1 PID: 931 at include/linux/backing-dev.h:261 __folio_mark_dirty+0x2f0/0x380
> [   30.446424] Modules linked in:
> [   30.446828] CPU: 1 PID: 931 Comm: kworker/1:2 Not tainted 5.19.0-rc2+ #1702
> [   30.447714] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
> [   30.448770] Workqueue: gfs_recovery gfs2_recover_func
> [   30.449441] RIP: 0010:__folio_mark_dirty+0x2f0/0x380
> [   30.450113] Code: e8 b5 69 12 01 85 c0 0f 85 6a fe ff ff 48 8b 83 a8 01 00 00 be ff ff ff ff 48 8d 78 2
> [   30.452490] RSP: 0018:ffffc90001b77bd0 EFLAGS: 00010046
> [   30.453141] RAX: 0000000000000000 RBX: ffff8881004a3d00 RCX: 0000000000000001
> [   30.454067] RDX: 0000000000000000 RSI: ffffffff82f592db RDI: ffffffff830380ae
> [   30.454970] RBP: ffffea000455f680 R08: 0000000000000001 R09: ffffffff84747570
> [   30.455921] R10: 0000000000000017 R11: ffff88810260b1c0 R12: 0000000000000282
> [   30.456910] R13: ffff88810dd92170 R14: 0000000000000001 R15: 0000000000000001
> [   30.457871] FS:  0000000000000000(0000) GS:ffff88813bd00000(0000) knlGS:0000000000000000
> [   30.458912] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [   30.459608] CR2: 00007efc1d5adc80 CR3: 0000000116416000 CR4: 00000000000006e0
> [   30.460564] Call Trace:
> [   30.460871]  <TASK>
> [   30.461130]  mark_buffer_dirty+0x173/0x1d0
> [   30.461687]  update_statfs_inode+0x146/0x187
> [   30.462276]  gfs2_recover_func.cold+0x48f/0x864
> [   30.462875]  ? add_lock_to_list+0x8b/0xf0
> [   30.463337]  ? __lock_acquire+0xf7e/0x1e30
> [   30.463812]  ? lock_acquire+0xd4/0x300
> [   30.464267]  ? lock_acquire+0xe4/0x300
> [   30.464715]  ? gfs2_recover_func.cold+0x217/0x864
> [   30.465334]  process_one_work+0x239/0x550
> [   30.465920]  ? process_one_work+0x550/0x550
> [   30.466485]  worker_thread+0x4d/0x3a0
> [   30.466966]  ? process_one_work+0x550/0x550
> [   30.467509]  kthread+0xe2/0x110
> [   30.467941]  ? kthread_complete_and_exit+0x20/0x20
> [   30.468558]  ret_from_fork+0x22/0x30
> [   30.469047]  </TASK>
> [   30.469346] irq event stamp: 36146
> [   30.469796] hardirqs last  enabled at (36145): [<ffffffff8139185c>] folio_memcg_lock+0x8c/0x180
> [   30.470919] hardirqs last disabled at (36146): [<ffffffff82429799>] _raw_spin_lock_irqsave+0x59/0x60
> [   30.472024] softirqs last  enabled at (33630): [<ffffffff81157307>] __irq_exit_rcu+0xd7/0x130
> [   30.473051] softirqs last disabled at (33619): [<ffffffff81157307>] __irq_exit_rcu+0xd7/0x130
> [   30.474107] ---[ end trace 0000000000000000 ]---
> [   30.475367] ------------[ cut here ]------------
