On Wed, Jan 28, 2026 at 02:56:02PM -0800, Eric Biggers wrote:
> On Wed, Jan 28, 2026 at 04:26:20PM +0100, Christoph Hellwig wrote:
> > Currently all reads of the fsverity hashes are kicked off from the data
> > I/O completion handler, leading to needlessly dependent I/O.  This is
> > worked around a bit by performing readahead on the level 0 nodes, but
> > that is still fairly ineffective.
> > 
> > Switch to a model where the ->read_folio and ->readahead methods instead
> > kick off explicit readahead of the fsverity hashes so they are usually
> > available at I/O completion time.
> > 
> > For 64k sequential reads on my test VM this improves read performance
> > from 2.4GB/s - 2.6GB/s to 3.5GB/s - 3.9GB/s.  The improvements for
> > random reads are likely to be even bigger.
> > 
> > Signed-off-by: Christoph Hellwig <[email protected]>
> > Acked-by: David Sterba <[email protected]> [btrfs]
> 
> Unfortunately, this patch causes recursive down_read() of
> address_space::invalidate_lock.  How was this meant to work?

Usually the filesystem calls filemap_invalidate_lock{,_shared} if it
needs to coordinate page cache fills against page removal (e.g. a
fallocate hole punch), as in the sketch below.  That said, there are a
few places where the pagecache itself will take that lock too...
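
A minimal sketch of that filesystem-side pattern (foofs_punch_hole is
made up, just to show the locking):

static int foofs_punch_hole(struct inode *inode, loff_t offset,
		loff_t len)
{
	struct address_space *mapping = inode->i_mapping;

	/*
	 * Exclude page cache fills (readahead, page faults) while we
	 * rip the pages out and punch the blocks from the extent map.
	 */
	filemap_invalidate_lock(mapping);
	truncate_pagecache_range(inode, offset, offset + len - 1);
	/* ... update the on-disk extent map ... */
	filemap_invalidate_unlock(mapping);
	return 0;
}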

> [   20.563185] ============================================
> [   20.564179] WARNING: possible recursive locking detected
> [   20.565170] 6.19.0-rc7-00041-g7bd72c6393ab #2 Not tainted
> [   20.566180] --------------------------------------------
> [   20.567169] cmp/2320 is trying to acquire lock:
> [   20.568019] ffff888108465030 (mapping.invalidate_lock#2){++++}-{4:4}, at: page_cache_ra_unbounded+0x6f/0x280
> [   20.569828] 
> [   20.569828] but task is already holding lock:
> [   20.570914] ffff888108465030 (mapping.invalidate_lock#2){++++}-{4:4}, at: page_cache_ra_unbounded+0x6f/0x280
> [   20.572739] 
> [   20.572739] other info that might help us debug this:
> [   20.573938]  Possible unsafe locking scenario:
> [   20.573938] 
> [   20.575042]        CPU0
> [   20.575522]        ----
> [   20.576003]   lock(mapping.invalidate_lock#2);
> [   20.576849]   lock(mapping.invalidate_lock#2);
> [   20.577698] 
> [   20.577698]  *** DEADLOCK ***
> [   20.577698] 
> [   20.578795]  May be due to missing lock nesting notation
> [   20.578795] 
> [   20.580045] 1 lock held by cmp/2320:
> [   20.580726]  #0: ffff888108465030 (mapping.invalidate_lock#2){++++}-{4:4}, at: page_cache_ra_unbounded+0x6f/0x280
> [   20.582596] 
> [   20.582596] stack backtrace:
> [   20.583428] CPU: 0 UID: 0 PID: 2320 Comm: cmp Not tainted 6.19.0-rc7-00041-g7bd72c6393ab #2 PREEMPT(none)
> [   20.583433] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Arch Linux 1.17.0-2-2 04/01/2014
> [   20.583435] Call Trace:
> [   20.583437]  <TASK>
> [   20.583438]  show_stack+0x48/0x60
> [   20.583446]  dump_stack_lvl+0x75/0xb0
> [   20.583451]  dump_stack+0x14/0x1a
> [   20.583452]  print_deadlock_bug.cold+0xc0/0xca
> [   20.583457]  validate_chain+0x4ca/0x970
> [   20.583463]  __lock_acquire+0x587/0xc40
> [   20.583465]  ? find_held_lock+0x31/0x90
> [   20.583470]  lock_acquire.part.0+0xaf/0x230
> [   20.583472]  ? page_cache_ra_unbounded+0x6f/0x280
> [   20.583474]  ? debug_smp_processor_id+0x1b/0x30
> [   20.583481]  lock_acquire+0x67/0x140
> [   20.583483]  ? page_cache_ra_unbounded+0x6f/0x280
> [   20.583484]  down_read+0x40/0x180
> [   20.583487]  ? page_cache_ra_unbounded+0x6f/0x280
> [   20.583489]  page_cache_ra_unbounded+0x6f/0x280

...and it looks like this is one of those places where the pagecache
takes it for us...
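
Trimmed down to just the locking, mm/readahead.c does roughly:

void page_cache_ra_unbounded(struct readahead_control *ractl,
		unsigned long nr_to_read, unsigned long lookahead_size)
{
	struct address_space *mapping = ractl->mapping;

	/* fend off truncate/hole punch while we add folios */
	filemap_invalidate_lock_shared(mapping);

	/* ... allocate folios and add them to the page cache ... */

	read_pages(ractl);	/* calls ->readahead with the lock held */
	filemap_invalidate_unlock_shared(mapping);
}

...so ->readahead runs with the shared lock already held...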

> [   20.583491]  ? lock_acquire.part.0+0xaf/0x230
> [   20.583492]  ? __this_cpu_preempt_check+0x17/0x20
> [   20.583495]  generic_readahead_merkle_tree+0x133/0x140
> [   20.583501]  ext4_readahead_merkle_tree+0x2a/0x30
> [   20.583507]  fsverity_readahead+0x9d/0xc0
> [   20.583510]  ext4_mpage_readpages+0x194/0x9b0
> [   20.583515]  ? __lock_release.isra.0+0x5e/0x160
> [   20.583517]  ext4_readahead+0x3a/0x40
> [   20.583521]  read_pages+0x84/0x370
> [   20.583523]  page_cache_ra_unbounded+0x16c/0x280

...except that page_cache_ra_unbounded is being called recursively from
an actual file data read (on ext4 the merkle tree lives past EOF in the
same file's mapping, so it really is the same rwsem).  My guess is that
we'd need a flag or something to ask for "unlocked" readahead if we
still want readahead to spur more readahead.  Something like this, say:
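
(Purely illustrative, the flag doesn't exist today:)

struct readahead_control {
	/* ... existing fields ... */
	bool invalidate_lock_held;	/* hypothetical flag */
};

void page_cache_ra_unbounded(struct readahead_control *ractl,
		unsigned long nr_to_read, unsigned long lookahead_size)
{
	struct address_space *mapping = ractl->mapping;

	if (!ractl->invalidate_lock_held)
		filemap_invalidate_lock_shared(mapping);

	/* ... fill the page cache and call read_pages() ... */

	if (!ractl->invalidate_lock_held)
		filemap_invalidate_unlock_shared(mapping);
}

...and then fsverity's merkle tree readahead would set that flag, since
its caller already holds the lock shared.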

--D

> [   20.583525]  page_cache_ra_order+0x10c/0x170
> [   20.583527]  page_cache_sync_ra+0x1a1/0x360
> [   20.583528]  filemap_get_pages+0x141/0x4c0
> [   20.583532]  ? __this_cpu_preempt_check+0x17/0x20
> [   20.583534]  filemap_read+0x11f/0x540
> [   20.583536]  ? __folio_batch_add_and_move+0x7c/0x330
> [   20.583539]  ? __this_cpu_preempt_check+0x17/0x20
> [   20.583541]  generic_file_read_iter+0xc1/0x110
> [   20.583543]  ? do_pte_missing+0x13a/0x450
> [   20.583547]  ext4_file_read_iter+0x51/0x17
> 

