Re: [f2fs-dev] [PATCH v4 00/23] Convert to filemap_get_folios_tag()

2022-11-03 Thread Dave Chinner
On Wed, Nov 02, 2022 at 09:10:08AM -0700, Vishal Moola (Oracle) wrote:
> This patch series replaces find_get_pages_range_tag() with
> filemap_get_folios_tag(). This also allows the removal of multiple
> calls to compound_head() throughout.
> It also makes a good chunk of the straightforward conversions to folios,
> and takes the opportunity to introduce a function that grabs a folio
> from the pagecache.
> 
> F2fs and Ceph have quite a lot of work to be done regarding folios, so
> for now those patches only have the changes necessary for the removal of
> find_get_pages_range_tag(), and only support folios of size 1 (which is
> all they use right now anyways).
> 
> I've run xfstests on btrfs, ext4, f2fs, and nilfs2, but more testing may be
> beneficial. The page-writeback and filemap changes implicitly work. Testing
> and review of the other changes (afs, ceph, cifs, gfs2) would be appreciated.

Same question as last time: have you tested this with multipage
folios enabled? If you haven't tested XFS, then I'm guessing the
answer is no, and you haven't fixed the bug I pointed out in
the write_cache_pages() implementation.

-Dave.
-- 
Dave Chinner
da...@fromorbit.com
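
For reference, a minimal sketch of the tagged-lookup conversion the series
performs. The per-entry helpers process_page() and process_folio() are
hypothetical stand-ins for whatever work a caller does on each entry; this
illustrates the pattern and is not code taken from the series.

#include <linux/pagemap.h>
#include <linux/pagevec.h>

/* Before: a tagged lookup fills a pagevec with individual pages. */
static void walk_dirty_pages(struct address_space *mapping, pgoff_t index,
			     pgoff_t end)
{
	struct pagevec pvec;
	unsigned int i, nr;

	pagevec_init(&pvec);
	while ((nr = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
					      PAGECACHE_TAG_DIRTY))) {
		for (i = 0; i < nr; i++)
			process_page(pvec.pages[i]);	/* hypothetical helper */
		pagevec_release(&pvec);
	}
}

/* After: the lookup fills a folio_batch with (possibly multi-page) folios. */
static void walk_dirty_folios(struct address_space *mapping, pgoff_t index,
			      pgoff_t end)
{
	struct folio_batch fbatch;
	unsigned int i, nr;

	folio_batch_init(&fbatch);
	while ((nr = filemap_get_folios_tag(mapping, &index, end,
					    PAGECACHE_TAG_DIRTY, &fbatch))) {
		for (i = 0; i < nr; i++)
			process_folio(fbatch.folios[i]);	/* hypothetical helper */
		folio_batch_release(&fbatch);
	}
}

The folio version also avoids the per-page compound_head() calls the cover
letter mentions, since each folio_batch entry is already a folio.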




[f2fs-dev] [Bug 216050] f2fs_gc occupies 100% cpu

2022-11-03 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=216050

--- Comment #72 from Guido (guido.iod...@gmail.com) ---
I tried kernel 6.0.6; after 2 days the problem recurred :-(



[f2fs-dev] [Bug 216050] f2fs_gc occupies 100% cpu

2022-11-03 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=216050

--- Comment #73 from Guido (guido.iod...@gmail.com) ---
Just to try it out, I will now give background_gc=sync a chance.



Re: [f2fs-dev] [PATCH v4 00/23] Convert to filemap_get_folios_tag()

2022-11-03 Thread Vishal Moola
On Thu, Nov 3, 2022 at 12:08 AM Dave Chinner  wrote:
>
> On Wed, Nov 02, 2022 at 09:10:08AM -0700, Vishal Moola (Oracle) wrote:
> > This patch series replaces find_get_pages_range_tag() with
> > filemap_get_folios_tag(). This also allows the removal of multiple
> > calls to compound_head() throughout.
> > It also makes a good chunk of the straightforward conversions to folios,
> > and takes the opportunity to introduce a function that grabs a folio
> > from the pagecache.
> >
> > F2fs and Ceph have quite a lot of work to be done regarding folios, so
> > for now those patches only have the changes necessary for the removal of
> > find_get_pages_range_tag(), and only support folios of size 1 (which is
> > all they use right now anyways).
> >
> > I've run xfstests on btrfs, ext4, f2fs, and nilfs2, but more testing may be
> > beneficial. The page-writeback and filemap changes implicitly work. Testing
> > and review of the other changes (afs, ceph, cifs, gfs2) would be 
> > appreciated.
>
> Same question as last time: have you tested this with multipage
> folios enabled? If you haven't tested XFS, then I'm guessing the
> answer is no, and you haven't fixed the bug I pointed out in
> the write_cache_pages() implementation.
>

I haven't tested the series with multipage folios or XFS.

I don't seem to have gotten your earlier comments, and I
can't seem to find them on the mailing lists. Could you
please send them again so I can take a look?




[f2fs-dev] [Bug 216050] f2fs_gc occupies 100% cpu

2022-11-03 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=216050

--- Comment #74 from Matteo Croce (rootki...@yahoo.it) ---
Hi all,

The only way to find where the issue lies is to bisect from the latest working
kernel to the first non-working one.



[f2fs-dev] [Bug 216050] f2fs_gc occupies 100% cpu

2022-11-03 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=216050

--- Comment #75 from bogdan.nico...@gmail.com ---
Guido just pointed that out in comment #71: the issue has been present since 5.18.



[f2fs-dev] [Bug 216050] f2fs_gc occupies 100% cpu

2022-11-03 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=216050

--- Comment #76 from Guido (guido.iod...@gmail.com) ---
(In reply to Matteo Croce from comment #74)
> Hi all,
> 
> The only way to find where the issue lies is to bisect from the latest
> working kernel to the first non-working one.

The last working kernel was 5.17.15; the first with the bug is 5.18.

I took a look at the diff of the gc.c file in the kernel; the changes are very few.
Maybe the problem is related to the GC_URGENT_HIGH / GC_URGENT_MID mechanism...



[f2fs-dev] [Bug 216050] f2fs_gc occupies 100% cpu

2022-11-03 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=216050

--- Comment #77 from Matteo Croce (rootki...@yahoo.it) ---
Great. If you do a bisect, you will find the problem in, let's say, 14 steps.
Really worth a try.



Re: [f2fs-dev] [PATCH v4 00/23] Convert to filemap_get_folios_tag()

2022-11-03 Thread Dave Chinner
On Thu, Nov 03, 2022 at 09:38:48AM -0700, Vishal Moola wrote:
> On Thu, Nov 3, 2022 at 12:08 AM Dave Chinner  wrote:
> >
> > On Wed, Nov 02, 2022 at 09:10:08AM -0700, Vishal Moola (Oracle) wrote:
> > > This patch series replaces find_get_pages_range_tag() with
> > > filemap_get_folios_tag(). This also allows the removal of multiple
> > > calls to compound_head() throughout.
> > > It also makes a good chunk of the straightforward conversions to folios,
> > > and takes the opportunity to introduce a function that grabs a folio
> > > from the pagecache.
> > >
> > > F2fs and Ceph have quite a lot of work to be done regarding folios, so
> > > for now those patches only have the changes necessary for the removal of
> > > find_get_pages_range_tag(), and only support folios of size 1 (which is
> > > all they use right now anyways).
> > >
> > > I've run xfstests on btrfs, ext4, f2fs, and nilfs2, but more testing may 
> > > be
> > > beneficial. The page-writeback and filemap changes implicitly work. 
> > > Testing
> > > and review of the other changes (afs, ceph, cifs, gfs2) would be 
> > > appreciated.
> >
> > Same question as last time: have you tested this with multipage
> > folios enabled? If you haven't tested XFS, then I'm guessing the
> > answer is no, and you haven't fixed the bug I pointed out in
> > the write_cache_pages() implementation.
> >
> 
> I haven't tested the series with multipage folios or XFS.
> 
> I don't seem to have gotten your earlier comments, and I
> can't seem to find them on the mailing lists. Could you
> please send them again so I can take a look?

They are in the lore linux-fsdevel archive - no idea why you couldn't
find them:

https://lore.kernel.org/linux-fsdevel/20221018210152.gh2703...@dread.disaster.area/
https://lore.kernel.org/linux-fsdevel/20221018214544.gi2703...@dread.disaster.area/

-Dave.
-- 
Dave Chinner
da...@fromorbit.com




Re: [f2fs-dev] [PATCH 00/23] Convert to filemap_get_folios_tag()

2022-11-03 Thread Vishal Moola
On Wed, Oct 19, 2022 at 08:45:44AM +1100, Dave Chinner wrote:
> On Thu, Sep 01, 2022 at 03:01:15PM -0700, Vishal Moola (Oracle) wrote:
> > This patch series replaces find_get_pages_range_tag() with
> > filemap_get_folios_tag(). This also allows the removal of multiple
> > calls to compound_head() throughout.
> > It also makes a good chunk of the straightforward conversions to folios,
> > and takes the opportunity to introduce a function that grabs a folio
> > from the pagecache.
> > 
> > F2fs and Ceph have quite a lot of work to be done regarding folios, so
> > for now those patches only have the changes necessary for the removal of
> > find_get_pages_range_tag(), and only support folios of size 1 (which is
> > all they use right now anyways).
> > 
> > I've run xfstests on btrfs, ext4, f2fs, and nilfs2, but more testing may be
> > beneficial.
> 
> Well, that answers my question about how filesystems that enable
> multi-page folios were tested: they weren't. 
> 
> I'd suggest that anyone working on further extending the
> filemap/folio infrastructure really needs to be testing XFS as a
> first priority, and then other filesystems as a secondary concern.
> 
> That's because XFS (via the fs/iomap infrastructure) is one of only
> 3 filesystems in the kernel (AFS and tmpfs are the others) that
> interact with the page cache and page cache "pages" solely via folio
> interfaces. As such they are able to support multi-page folios in
> the page cache. All of the tested filesystems still use the fixed
> PAGE_SIZE page interfaces to interact with the page cache, so they
> don't actually exercise interactions with multi-page folios at all.
> 

Thanks for the explanation! That makes perfect sense. I wholeheartedly
agree, and I'll be sure to test any future changes on XFS to try to
ensure multi-page folio functionality. 

I know David ran tests on AFS, so hopefully those hit multipage folios
well enough. But I'm not sure whether it was just for the AFS patch or
with the whole series applied. Regardless, I'll run my own set of tests
on XFS and see if I run into any issues as well.
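
For context on the opt-in described above: whether a mapping can contain
multi-page folios is a per-address_space property that the filesystem sets
when it instantiates the inode, using the mapping_set_large_folios() helper.
A minimal sketch (the function name here is made up for illustration):

#include <linux/pagemap.h>

/*
 * Sketch: a filesystem that wants multi-page folios in its page cache
 * opts in per mapping at inode setup time, as XFS does.
 */
static void example_setup_inode(struct inode *inode)
{
	/* Allow the page cache to allocate folios larger than one page. */
	mapping_set_large_folios(inode->i_mapping);
}

Filesystems that never make this call only ever see order-0 folios, which is
why the f2fs and ceph patches in the series can assume folios of size 1 for
now.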


___
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


Re: [f2fs-dev] [PATCH 04/23] page-writeback: Convert write_cache_pages() to use filemap_get_folios_tag()

2022-11-03 Thread Vishal Moola
On Wed, Oct 19, 2022 at 08:01:52AM +1100, Dave Chinner wrote:
> On Thu, Sep 01, 2022 at 03:01:19PM -0700, Vishal Moola (Oracle) wrote:
> > Converted function to use folios throughout. This is in preparation for
> > the removal of find_get_pages_range_tag().
> > 
> > Signed-off-by: Vishal Moola (Oracle) 
> > ---
> >  mm/page-writeback.c | 44 +++-
> >  1 file changed, 23 insertions(+), 21 deletions(-)
> > 
> > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> > index 032a7bf8d259..087165357a5a 100644
> > --- a/mm/page-writeback.c
> > +++ b/mm/page-writeback.c
> > @@ -2285,15 +2285,15 @@ int write_cache_pages(struct address_space *mapping,
> > int ret = 0;
> > int done = 0;
> > int error;
> > -   struct pagevec pvec;
> > -   int nr_pages;
> > +   struct folio_batch fbatch;
> > +   int nr_folios;
> > pgoff_t index;
> > pgoff_t end;/* Inclusive */
> > pgoff_t done_index;
> > int range_whole = 0;
> > xa_mark_t tag;
> >  
> > -   pagevec_init(&pvec);
> > +   folio_batch_init(&fbatch);
> > if (wbc->range_cyclic) {
> > index = mapping->writeback_index; /* prev offset */
> > end = -1;
> > @@ -2313,17 +2313,18 @@ int write_cache_pages(struct address_space *mapping,
> > while (!done && (index <= end)) {
> > int i;
> >  
> > -   nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
> > -   tag);
> > -   if (nr_pages == 0)
> > +   nr_folios = filemap_get_folios_tag(mapping, &index, end,
> > +   tag, &fbatch);
> 
> This can find and return dirty multi-page folios if the filesystem
> enables them in the mapping at instantiation time, right?

Yup, it will.

> > +
> > +   if (nr_folios == 0)
> > break;
> >  
> > -   for (i = 0; i < nr_pages; i++) {
> > -   struct page *page = pvec.pages[i];
> > +   for (i = 0; i < nr_folios; i++) {
> > +   struct folio *folio = fbatch.folios[i];
> >  
> > -   done_index = page->index;
> > +   done_index = folio->index;
> >  
> > -   lock_page(page);
> > +   folio_lock(folio);
> >  
> > /*
> >  * Page truncated or invalidated. We can freely skip it
> > @@ -2333,30 +2334,30 @@ int write_cache_pages(struct address_space *mapping,
> >  * even if there is now a new, dirty page at the same
> >  * pagecache address.
> >  */
> > -   if (unlikely(page->mapping != mapping)) {
> > +   if (unlikely(folio->mapping != mapping)) {
> >  continue_unlock:
> > -   unlock_page(page);
> > +   folio_unlock(folio);
> > continue;
> > }
> >  
> > -   if (!PageDirty(page)) {
> > +   if (!folio_test_dirty(folio)) {
> > /* someone wrote it for us */
> > goto continue_unlock;
> > }
> >  
> > -   if (PageWriteback(page)) {
> > +   if (folio_test_writeback(folio)) {
> > if (wbc->sync_mode != WB_SYNC_NONE)
> > -   wait_on_page_writeback(page);
> > +   folio_wait_writeback(folio);
> > else
> > goto continue_unlock;
> > }
> >  
> > -   BUG_ON(PageWriteback(page));
> > -   if (!clear_page_dirty_for_io(page))
> > +   BUG_ON(folio_test_writeback(folio));
> > +   if (!folio_clear_dirty_for_io(folio))
> > goto continue_unlock;
> >  
> > trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
> > -   error = (*writepage)(page, wbc, data);
> > +   error = writepage(&folio->page, wbc, data);
> 
> Yet, IIUC, this treats all folios as if they are single page folios.
> i.e. it passes the head page of a multi-page folio to a callback
> that will treat it as a single PAGE_SIZE page, because that's all
> the writepage callbacks are currently expected to be passed...
> 
> So won't this break writeback of dirty multipage folios?

Yes, it appears it would. But it wouldn't be a new breakage, because it's already 'broken'.

The current find_get_pages_range_tag() actually has the exact same
issue. The current code to fill up the pages array is:

	pages[ret] = &folio->page;
	if (++ret == nr_pages) {
		*index = folio->index + folio_nr_pages(folio);
		goto out;
	}

which behaves the same way as the issue you pointed out (both break
large folios). When I spoke to Matthew a
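
To make the multi-page folio concern concrete with hypothetical numbers:
for a dirty 16-page (64KiB) folio at index 32, the lookup returns the single
head page/folio and advances the search index to 48 (folio->index +
folio_nr_pages(folio)), but the loop then hands only &folio->page to the
->writepage callback, so at most PAGE_SIZE bytes are written back and
indices 33-47 are never revisited - the behaviour both the old and new code
share.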

Re: [f2fs-dev] [PATCH 04/23] page-writeback: Convert write_cache_pages() to use filemap_get_folios_tag()

2022-11-03 Thread Dave Chinner
On Thu, Nov 03, 2022 at 03:28:05PM -0700, Vishal Moola wrote:
> On Wed, Oct 19, 2022 at 08:01:52AM +1100, Dave Chinner wrote:
> > On Thu, Sep 01, 2022 at 03:01:19PM -0700, Vishal Moola (Oracle) wrote:
> > > Converted function to use folios throughout. This is in preparation for
> > > the removal of find_get_pages_range_tag().
> > > 
> > > Signed-off-by: Vishal Moola (Oracle) 
> > > ---
> > >  mm/page-writeback.c | 44 +++-
> > >  1 file changed, 23 insertions(+), 21 deletions(-)
> > > 
> > > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> > > index 032a7bf8d259..087165357a5a 100644
> > > --- a/mm/page-writeback.c
> > > +++ b/mm/page-writeback.c
> > > @@ -2285,15 +2285,15 @@ int write_cache_pages(struct address_space 
> > > *mapping,
> > >   int ret = 0;
> > >   int done = 0;
> > >   int error;
> > > - struct pagevec pvec;
> > > - int nr_pages;
> > > + struct folio_batch fbatch;
> > > + int nr_folios;
> > >   pgoff_t index;
> > >   pgoff_t end;/* Inclusive */
> > >   pgoff_t done_index;
> > >   int range_whole = 0;
> > >   xa_mark_t tag;
> > >  
> > > - pagevec_init(&pvec);
> > > + folio_batch_init(&fbatch);
> > >   if (wbc->range_cyclic) {
> > >   index = mapping->writeback_index; /* prev offset */
> > >   end = -1;
> > > @@ -2313,17 +2313,18 @@ int write_cache_pages(struct address_space 
> > > *mapping,
> > >   while (!done && (index <= end)) {
> > >   int i;
> > >  
> > > - nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
> > > - tag);
> > > - if (nr_pages == 0)
> > > + nr_folios = filemap_get_folios_tag(mapping, &index, end,
> > > + tag, &fbatch);
> > 
> > This can find and return dirty multi-page folios if the filesystem
> > enables them in the mapping at instantiation time, right?
> 
> Yup, it will.
> 
> > > +
> > > + if (nr_folios == 0)
> > >   break;
> > >  
> > > - for (i = 0; i < nr_pages; i++) {
> > > - struct page *page = pvec.pages[i];
> > > + for (i = 0; i < nr_folios; i++) {
> > > + struct folio *folio = fbatch.folios[i];
> > >  
> > > - done_index = page->index;
> > > + done_index = folio->index;
> > >  
> > > - lock_page(page);
> > > + folio_lock(folio);
> > >  
> > >   /*
> > >* Page truncated or invalidated. We can freely skip it
> > > @@ -2333,30 +2334,30 @@ int write_cache_pages(struct address_space 
> > > *mapping,
> > >* even if there is now a new, dirty page at the same
> > >* pagecache address.
> > >*/
> > > - if (unlikely(page->mapping != mapping)) {
> > > + if (unlikely(folio->mapping != mapping)) {
> > >  continue_unlock:
> > > - unlock_page(page);
> > > + folio_unlock(folio);
> > >   continue;
> > >   }
> > >  
> > > - if (!PageDirty(page)) {
> > > + if (!folio_test_dirty(folio)) {
> > >   /* someone wrote it for us */
> > >   goto continue_unlock;
> > >   }
> > >  
> > > - if (PageWriteback(page)) {
> > > + if (folio_test_writeback(folio)) {
> > >   if (wbc->sync_mode != WB_SYNC_NONE)
> > > - wait_on_page_writeback(page);
> > > + folio_wait_writeback(folio);
> > >   else
> > >   goto continue_unlock;
> > >   }
> > >  
> > > - BUG_ON(PageWriteback(page));
> > > - if (!clear_page_dirty_for_io(page))
> > > + BUG_ON(folio_test_writeback(folio));
> > > + if (!folio_clear_dirty_for_io(folio))
> > >   goto continue_unlock;
> > >  
> > >   trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
> > > - error = (*writepage)(page, wbc, data);
> > > + error = writepage(&folio->page, wbc, data);
> > 
> > Yet, IIUC, this treats all folios as if they are single page folios.
> > i.e. it passes the head page of a multi-page folio to a callback
> > that will treat it as a single PAGE_SIZE page, because that's all
> > the writepage callbacks are currently expected to be passed...
> > 
> > So won't this break writeback of dirty multipage folios?
> 
> Yes, it appears it would. But it wouldn't be a new breakage, because it's already 'broken'.

It is? Then why isn't XFS broken on existing kernels? Oh, we don't
know because it hasn't been tested?

Seriously - if this really is broken, and this patchset is further
propagating the brokenness, then somebody needs to explain to me why
this is not corrupting data in XFS.

Re: [f2fs-dev] [PATCH 04/23] page-writeback: Convert write_cache_pages() to use filemap_get_folios_tag()

2022-11-03 Thread Darrick J. Wong
On Fri, Nov 04, 2022 at 11:32:35AM +1100, Dave Chinner wrote:
> On Thu, Nov 03, 2022 at 03:28:05PM -0700, Vishal Moola wrote:
> > On Wed, Oct 19, 2022 at 08:01:52AM +1100, Dave Chinner wrote:
> > > On Thu, Sep 01, 2022 at 03:01:19PM -0700, Vishal Moola (Oracle) wrote:
> > > > Converted function to use folios throughout. This is in preparation for
> > > > the removal of find_get_pages_range_tag().
> > > > 
> > > > Signed-off-by: Vishal Moola (Oracle) 
> > > > ---
> > > >  mm/page-writeback.c | 44 +++-
> > > >  1 file changed, 23 insertions(+), 21 deletions(-)
> > > > 
> > > > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> > > > index 032a7bf8d259..087165357a5a 100644
> > > > --- a/mm/page-writeback.c
> > > > +++ b/mm/page-writeback.c
> > > > @@ -2285,15 +2285,15 @@ int write_cache_pages(struct address_space 
> > > > *mapping,
> > > > int ret = 0;
> > > > int done = 0;
> > > > int error;
> > > > -   struct pagevec pvec;
> > > > -   int nr_pages;
> > > > +   struct folio_batch fbatch;
> > > > +   int nr_folios;
> > > > pgoff_t index;
> > > > pgoff_t end;/* Inclusive */
> > > > pgoff_t done_index;
> > > > int range_whole = 0;
> > > > xa_mark_t tag;
> > > >  
> > > > -   pagevec_init(&pvec);
> > > > +   folio_batch_init(&fbatch);
> > > > if (wbc->range_cyclic) {
> > > > index = mapping->writeback_index; /* prev offset */
> > > > end = -1;
> > > > @@ -2313,17 +2313,18 @@ int write_cache_pages(struct address_space 
> > > > *mapping,
> > > > while (!done && (index <= end)) {
> > > > int i;
> > > >  
> > > > -   nr_pages = pagevec_lookup_range_tag(&pvec, mapping, 
> > > > &index, end,
> > > > -   tag);
> > > > -   if (nr_pages == 0)
> > > > +   nr_folios = filemap_get_folios_tag(mapping, &index, end,
> > > > +   tag, &fbatch);
> > > 
> > > This can find and return dirty multi-page folios if the filesystem
> > > enables them in the mapping at instantiation time, right?
> > 
> > Yup, it will.
> > 
> > > > +
> > > > +   if (nr_folios == 0)
> > > > break;
> > > >  
> > > > -   for (i = 0; i < nr_pages; i++) {
> > > > -   struct page *page = pvec.pages[i];
> > > > +   for (i = 0; i < nr_folios; i++) {
> > > > +   struct folio *folio = fbatch.folios[i];
> > > >  
> > > > -   done_index = page->index;
> > > > +   done_index = folio->index;
> > > >  
> > > > -   lock_page(page);
> > > > +   folio_lock(folio);
> > > >  
> > > > /*
> > > >  * Page truncated or invalidated. We can freely 
> > > > skip it
> > > > @@ -2333,30 +2334,30 @@ int write_cache_pages(struct address_space 
> > > > *mapping,
> > > >  * even if there is now a new, dirty page at 
> > > > the same
> > > >  * pagecache address.
> > > >  */
> > > > -   if (unlikely(page->mapping != mapping)) {
> > > > +   if (unlikely(folio->mapping != mapping)) {
> > > >  continue_unlock:
> > > > -   unlock_page(page);
> > > > +   folio_unlock(folio);
> > > > continue;
> > > > }
> > > >  
> > > > -   if (!PageDirty(page)) {
> > > > +   if (!folio_test_dirty(folio)) {
> > > > /* someone wrote it for us */
> > > > goto continue_unlock;
> > > > }
> > > >  
> > > > -   if (PageWriteback(page)) {
> > > > +   if (folio_test_writeback(folio)) {
> > > > if (wbc->sync_mode != WB_SYNC_NONE)
> > > > -   wait_on_page_writeback(page);
> > > > +   folio_wait_writeback(folio);
> > > > else
> > > > goto continue_unlock;
> > > > }
> > > >  
> > > > -   BUG_ON(PageWriteback(page));
> > > > -   if (!clear_page_dirty_for_io(page))
> > > > +   BUG_ON(folio_test_writeback(folio));
> > > > +   if (!folio_clear_dirty_for_io(folio))
> > > > goto continue_unlock;
> > > >  
> > > > trace_wbc_writepage(wbc, 
> > > > inode_to_bdi(mapping->host));
> > > > -   error = (*writepage)(page, wbc, data);
> > > > +   error = writepage(

Re: [f2fs-dev] [PATCH 04/23] page-writeback: Convert write_cache_pages() to use filemap_get_folios_tag()

2022-11-03 Thread Dave Chinner
On Thu, Nov 03, 2022 at 07:45:01PM -0700, Darrick J. Wong wrote:
> On Fri, Nov 04, 2022 at 11:32:35AM +1100, Dave Chinner wrote:
> > On Thu, Nov 03, 2022 at 03:28:05PM -0700, Vishal Moola wrote:
> > > On Wed, Oct 19, 2022 at 08:01:52AM +1100, Dave Chinner wrote:
> > > > On Thu, Sep 01, 2022 at 03:01:19PM -0700, Vishal Moola (Oracle) wrote:
> > > > > - BUG_ON(PageWriteback(page));
> > > > > - if (!clear_page_dirty_for_io(page))
> > > > > + BUG_ON(folio_test_writeback(folio));
> > > > > + if (!folio_clear_dirty_for_io(folio))
> > > > >   goto continue_unlock;
> > > > >  
> > > > >   trace_wbc_writepage(wbc, 
> > > > > inode_to_bdi(mapping->host));
> > > > > - error = (*writepage)(page, wbc, data);
> > > > > + error = writepage(&folio->page, wbc, data);
> > > > 
> > > > Yet, IIUC, this treats all folios as if they are single page folios.
> > > > i.e. it passes the head page of a multi-page folio to a callback
> > > > that will treat it as a single PAGE_SIZE page, because that's all
> > > > the writepage callbacks are currently expected to be passed...
> > > > 
> > > > So won't this break writeback of dirty multipage folios?
> > > 
> > > Yes, it appears it would. But it wouldn't be a new breakage, because it's already 'broken'.
> > 
> > It is? Then why isn't XFS broken on existing kernels? Oh, we don't
> > know because it hasn't been tested?
> > 
> > Seriously - if this really is broken, and this patchset is further
> > propagating the brokenness, then somebody needs to explain to me why
> > this is not corrupting data in XFS.
> 
> It looks like iomap_do_writepage finds the folio size correctly
> 
>   end_pos = folio_pos(folio) + folio_size(folio);
> 
> and iomap_writpage_map will map out the correct number of blocks
> 
>   unsigned nblocks = i_blocks_per_folio(inode, folio);
> 
>   for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
> 
> right?

Yup, that's how I read it, too.

But my recent experience with folios involved being repeatedly
burnt by edge case corruptions due to multipage folios showing up
when and where I least expected them.

Hence doing a 1:1 conversion of page-based code to folio-based code
and just assuming large folios will work without any testing seems
akin to playing Russian roulette with loose cannons that have been
doused with napalm and then set on fire by an air-dropped barrel
bomb...

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com
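
For readers without the iomap source to hand, a simplified sketch of the
folio-aware pattern quoted above: end_pos and nblocks are derived from the
folio itself, so every block of a multi-page folio is considered rather than
just the first PAGE_SIZE worth. The function name is made up for
illustration; this is not the actual iomap_writepage_map() implementation.

#include <linux/fs.h>
#include <linux/pagemap.h>

static void example_write_folio_blocks(struct inode *inode, struct folio *folio)
{
	loff_t pos = folio_pos(folio);
	loff_t end_pos = folio_pos(folio) + folio_size(folio);
	unsigned int len = i_blocksize(inode);
	unsigned int nblocks = i_blocks_per_folio(inode, folio);
	unsigned int i;

	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
		/* map the block at 'pos' and add it to the writeback I/O */
	}
}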

