On 03/09, Chao Yu wrote:
> On 2/24/26 09:04, Greg Kroah-Hartman wrote:
> > f2fs_convert_inline_inode() holds the page lock of the inline data page
> > and then calls f2fs_lock_op(), which acquires cp_rwsem in read mode.
> > At the same time, f2fs_write_checkpoint() can acquire cp_rwsem in write
> > mode and then will wait for page locks, like during
> > f2fs_write_node_pages() or data flushing, leading to a deadlock.
> 
> - f2fs_convert_inline_inode           - f2fs_write_checkpoint()
>  - f2fs_grab_cache_folio page #0
>                                        - block_operations
>                                         - f2fs_lock_all
>  - f2fs_lock_op
>                                         - f2fs_sync_node_pages
>                                          - flush_inline_data
>                                           - f2fs_filemap_get_folio page #0

flush_inline_data -> f2fs_filemap_get_folio(FGP_LOCK|FGP_NOWAIT) does not
wait for the lock, right?

> 
> It seems true, although I didn't hit such deadlock.
> 
> To Jaegeuk, what do you think?
> 
> > 
> > Fix this by calling f2fs_lock_op() before locking the page.  This
> > ensures a consistent lock ordering, cp_rwsem before the page lock, and
> > avoids the deadlock.
> > 
> > Cc: Jaegeuk Kim <[email protected]>
> > Cc: Chao Yu <[email protected]>
> > Cc: stable <[email protected]>
> > Assisted-by: gkh_clanker_2000
> > Signed-off-by: Greg Kroah-Hartman <[email protected]>
> > ---
> > 
> > This issue was found by running a tool that compares a past kernel CVE
> > against the existing codebase, to find any potential places that were
> > missed by the original fix.  I do not know if this patch or text
> > really is correct, but the code paths seem sane.
> > 
> > Note that the majority of the changelog text came from an untrusted and
> > experimental LLM model that is known for making crap up.  So it might be
> > totally lying here, and if so, I am very sorry for wasting anyone's time
> > and I'll just go back to running this on code that I actually understand
> > and know how to verify myself, but I figured it was worth at least
> > asking you all about it.
> > 
> >  fs/f2fs/inline.c | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> > 
> > diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
> > index 0a1052d5ee62..98bc920a4f35 100644
> > --- a/fs/f2fs/inline.c
> > +++ b/fs/f2fs/inline.c
> > @@ -232,12 +232,14 @@ int f2fs_convert_inline_inode(struct inode *inode)
> >     if (err)
> >             return err;
> >  
> > -   folio = f2fs_grab_cache_folio(inode->i_mapping, 0, false);
> > -   if (IS_ERR(folio))
> > -           return PTR_ERR(folio);
> > -
> >     f2fs_lock_op(sbi, &lc);
> >  
> > +   folio = f2fs_grab_cache_folio(inode->i_mapping, 0, false);
> > +   if (IS_ERR(folio)) {
> > +           f2fs_unlock_op(sbi, &lc);
> > +           return PTR_ERR(folio);
> > +   }
> > +
> 
> Would it be better to also adjust the unlock order between cp_rwsem and
> the folio lock?
> 
> out:
>       f2fs_folio_put(folio, true);
>       f2fs_unlock_op(sbi, &lc);
> 
> Thanks,
> 
> >     ifolio = f2fs_get_inode_folio(sbi, inode->i_ino);
> >     if (IS_ERR(ifolio)) {
> >             err = PTR_ERR(ifolio);


_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel