Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-27 Thread Chao Yu
On 2020/5/28 4:56, Jaegeuk Kim wrote:
> On 05/27, Chao Yu wrote:
>> On 2020/5/26 9:56, Jaegeuk Kim wrote:
>>> On 05/26, Chao Yu wrote:
 On 2020/5/26 9:11, Chao Yu wrote:
> On 2020/5/25 23:06, Jaegeuk Kim wrote:
>> On 05/25, Chao Yu wrote:
>>> On 2020/5/25 11:56, Jaegeuk Kim wrote:
Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages
>>
>> 71.07% 0.01%  kworker/u256:1+  [kernel.kallsyms]  [k] wb_writeback
>> |
>>  --71.06%--wb_writeback
>>    |
>>    |--68.96%--__writeback_inodes_wb
>>    |  |
>>    |   --68.95%--writeback_sb_inodes
>>    |     |
>>    |     |--65.08%--__writeback_single_inode
>>    |     |  |
>>    |     |   --64.35%--do_writepages
>>    |     |     |
>>    |     |     |--59.83%--f2fs_write_node_pages
>>    |     |     |  |
>>    |     |     |   --59.74%--f2fs_sync_node_pages
>>    |     |     |     |
>>    |     |     |     |--27.91%--pagevec_lookup_range_tag
>>    |     |     |     |  |
>>    |     |     |     |   --27.90%--find_get_pages_range_tag
>>
>> Before umount, the kworker will keep holding one core, which does not look
>> reasonable. To avoid that, could we just allow node writes? Since node writes
>> are out-of-place updates and checkpointing is not allowed, we don't need to
>> worry about them affecting data in the previous checkpoint, and it would
>> reduce the memory footprint taken by dirty node pages.
> 
> It can cause some roll-forward recovery?

Yup, recovery should be considered.

A later fsync() will fail due to this check:

int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
{
	if (unlikely(f2fs_cp_error(F2FS_I_SB(file_inode(file)))))
		return -EIO;


And we need to adjust f2fs_fsync_node_pages() to handle in-process fsyncing node
pages as well:

	if (f2fs_cp_error(sbi)) {
		set_fsync_mark(page, 0);
		set_dentry_mark(page, 0);
		if (atomic) {
			/* unlock & put page */
			ret = -EIO;
			break;
		}
	}

	ret = __write_node_page(...);

Thanks,

> 
>>
>> Thanks,
>>
>>>
>>> IMO, for the umount case, we should drop dirty references and dirty pages on
>>> meta/data pages, just as the change does for node pages, to avoid a potential
>>> dead loop...
>>
>> I believe we're already doing that for them. :P
>
> Actually, I mean: do we need to drop dirty meta/data pages explicitly, as below:
>
> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> index 3dc3ac6fe143..4c08fd0a680a 100644
> --- a/fs/f2fs/checkpoint.c
> +++ b/fs/f2fs/checkpoint.c
> @@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,
>
>   trace_f2fs_writepage(page, META);
>
> - if (unlikely(f2fs_cp_error(sbi)))
> + if (unlikely(f2fs_cp_error(sbi))) {
> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> + ClearPageUptodate(page);
> + dec_page_count(sbi, F2FS_DIRTY_META);
> + unlock_page(page);
> + return 0;
> + }
>   goto redirty_out;
> + }
>   if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>   goto redirty_out;
>   if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 48a622b95b76..94b342802513 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
>
>   /* we should bypass data pages to proceed the kworkder jobs */
>   if (unlikely(f2fs_cp_error(sbi))) {
> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> + ClearPageUptodate(page);
> + inode_dec_dirty_pages(inode);
> + unlock_page(page);
> + return 0;
> + }

Oh, I notice that previously we drop non-directory inodes' dirty pages directly;
however, during umount, we'd better drop directory inodes' dirty pages as well, right?
>>>
>>> Hmm, I remember I dropped them before. Need to double check.
>>>


Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-27 Thread Jaegeuk Kim
On 05/27, Chao Yu wrote:
> On 2020/5/26 9:56, Jaegeuk Kim wrote:
> > On 05/26, Chao Yu wrote:
> >> On 2020/5/26 9:11, Chao Yu wrote:
> >>> On 2020/5/25 23:06, Jaegeuk Kim wrote:
>  On 05/25, Chao Yu wrote:
> > On 2020/5/25 11:56, Jaegeuk Kim wrote:
> >> Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages
> 
> 71.07% 0.01%  kworker/u256:1+  [kernel.kallsyms]  [k] wb_writeback
> |
>  --71.06%--wb_writeback
>    |
>    |--68.96%--__writeback_inodes_wb
>    |  |
>    |   --68.95%--writeback_sb_inodes
>    |     |
>    |     |--65.08%--__writeback_single_inode
>    |     |  |
>    |     |   --64.35%--do_writepages
>    |     |     |
>    |     |     |--59.83%--f2fs_write_node_pages
>    |     |     |  |
>    |     |     |   --59.74%--f2fs_sync_node_pages
>    |     |     |     |
>    |     |     |     |--27.91%--pagevec_lookup_range_tag
>    |     |     |     |  |
>    |     |     |     |   --27.90%--find_get_pages_range_tag
> 
> Before umount, the kworker will keep holding one core, which does not look
> reasonable. To avoid that, could we just allow node writes? Since node writes
> are out-of-place updates and checkpointing is not allowed, we don't need to
> worry about them affecting data in the previous checkpoint, and it would
> reduce the memory footprint taken by dirty node pages.

It can cause some roll-forward recovery?

> 
> Thanks,
> 
> >
> > IMO, for the umount case, we should drop dirty references and dirty pages on
> > meta/data pages, just as the change does for node pages, to avoid a potential
> > dead loop...
> 
> I believe we're already doing that for them. :P
> >>>
> >>> Actually, I mean: do we need to drop dirty meta/data pages explicitly, as below:
> >>>
> >>> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> >>> index 3dc3ac6fe143..4c08fd0a680a 100644
> >>> --- a/fs/f2fs/checkpoint.c
> >>> +++ b/fs/f2fs/checkpoint.c
> >>> @@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,
> >>>
> >>>   trace_f2fs_writepage(page, META);
> >>>
> >>> - if (unlikely(f2fs_cp_error(sbi)))
> >>> + if (unlikely(f2fs_cp_error(sbi))) {
> >>> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> >>> + ClearPageUptodate(page);
> >>> + dec_page_count(sbi, F2FS_DIRTY_META);
> >>> + unlock_page(page);
> >>> + return 0;
> >>> + }
> >>>   goto redirty_out;
> >>> + }
> >>>   if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
> >>>   goto redirty_out;
> >>>   if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
> >>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> >>> index 48a622b95b76..94b342802513 100644
> >>> --- a/fs/f2fs/data.c
> >>> +++ b/fs/f2fs/data.c
> >>> @@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
> >>>
> >>>   /* we should bypass data pages to proceed the kworkder jobs */
> >>>   if (unlikely(f2fs_cp_error(sbi))) {
> >>> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> >>> + ClearPageUptodate(page);
> >>> + inode_dec_dirty_pages(inode);
> >>> + unlock_page(page);
> >>> + return 0;
> >>> + }
> >>
> >> Oh, I notice that previously we drop non-directory inodes' dirty pages directly;
> >> however, during umount, we'd better drop directory inodes' dirty pages as well, right?
> > 
> > Hmm, I remember I dropped them before. Need to double check.
> > 
> >>
> >>>   mapping_set_error(page->mapping, -EIO);
> >>>   /*
> >>>* don't drop any dirty dentry pages for keeping lastest
> >>>
> 
> >
> > Thanks,
> >
> >> in an infinite loop. Let's drop dirty pages at umount in that case.
> >>
> >> Signed-off-by: Jaegeuk Kim 
> >> ---
> >> v3:
> >>  - fix wrong unlock
> >>
> >> v2:
> >>  - fix typos
> >>
> >>  fs/f2fs/node.c | 9 -
> >>  1 file changed, 8 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> >> index e632de10aedab..e0bb0f7e0506e 100644
> >> --- a/fs/f2fs/node.c
> >> +++ b/fs/f2fs/node.c
> >> 

Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-26 Thread Chao Yu
On 2020/5/26 9:56, Jaegeuk Kim wrote:
> On 05/26, Chao Yu wrote:
>> On 2020/5/26 9:11, Chao Yu wrote:
>>> On 2020/5/25 23:06, Jaegeuk Kim wrote:
 On 05/25, Chao Yu wrote:
> On 2020/5/25 11:56, Jaegeuk Kim wrote:
>> Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages

71.07% 0.01%  kworker/u256:1+  [kernel.kallsyms]  [k] wb_writeback
|
 --71.06%--wb_writeback
   |
   |--68.96%--__writeback_inodes_wb
   |  |
   |   --68.95%--writeback_sb_inodes
   |     |
   |     |--65.08%--__writeback_single_inode
   |     |  |
   |     |   --64.35%--do_writepages
   |     |     |
   |     |     |--59.83%--f2fs_write_node_pages
   |     |     |  |
   |     |     |   --59.74%--f2fs_sync_node_pages
   |     |     |     |
   |     |     |     |--27.91%--pagevec_lookup_range_tag
   |     |     |     |  |
   |     |     |     |   --27.90%--find_get_pages_range_tag

Before umount, the kworker will keep holding one core, which does not look
reasonable. To avoid that, could we just allow node writes? Since node writes
are out-of-place updates and checkpointing is not allowed, we don't need to
worry about them affecting data in the previous checkpoint, and it would
reduce the memory footprint taken by dirty node pages.

Thanks,

>
> IMO, for the umount case, we should drop dirty references and dirty pages on
> meta/data pages, just as the change does for node pages, to avoid a potential
> dead loop...

I believe we're already doing that for them. :P
>>>
>>> Actually, I mean: do we need to drop dirty meta/data pages explicitly, as below:
>>>
>>> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
>>> index 3dc3ac6fe143..4c08fd0a680a 100644
>>> --- a/fs/f2fs/checkpoint.c
>>> +++ b/fs/f2fs/checkpoint.c
>>> @@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,
>>>
>>> trace_f2fs_writepage(page, META);
>>>
>>> -   if (unlikely(f2fs_cp_error(sbi)))
>>> +   if (unlikely(f2fs_cp_error(sbi))) {
>>> +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>>> +   ClearPageUptodate(page);
>>> +   dec_page_count(sbi, F2FS_DIRTY_META);
>>> +   unlock_page(page);
>>> +   return 0;
>>> +   }
>>> goto redirty_out;
>>> +   }
>>> if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>>> goto redirty_out;
>>> if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 48a622b95b76..94b342802513 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
>>>
>>> /* we should bypass data pages to proceed the kworkder jobs */
>>> if (unlikely(f2fs_cp_error(sbi))) {
>>> +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>>> +   ClearPageUptodate(page);
>>> +   inode_dec_dirty_pages(inode);
>>> +   unlock_page(page);
>>> +   return 0;
>>> +   }
>>
>> Oh, I notice that previously we drop non-directory inodes' dirty pages directly;
>> however, during umount, we'd better drop directory inodes' dirty pages as well, right?
> 
> Hmm, I remember I dropped them before. Need to double check.
> 
>>
>>> mapping_set_error(page->mapping, -EIO);
>>> /*
>>>  * don't drop any dirty dentry pages for keeping lastest
>>>

>
> Thanks,
>
>> in an infinite loop. Let's drop dirty pages at umount in that case.
>>
>> Signed-off-by: Jaegeuk Kim 
>> ---
>> v3:
>>  - fix wrong unlock
>>
>> v2:
>>  - fix typos
>>
>>  fs/f2fs/node.c | 9 -
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>> index e632de10aedab..e0bb0f7e0506e 100644
>> --- a/fs/f2fs/node.c
>> +++ b/fs/f2fs/node.c
>> @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
>>  
>>  trace_f2fs_writepage(page, NODE);
>>  
>> -if (unlikely(f2fs_cp_error(sbi)))
>> +if 

Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-25 Thread Jaegeuk Kim
On 05/26, Chao Yu wrote:
> On 2020/5/26 9:11, Chao Yu wrote:
> > On 2020/5/25 23:06, Jaegeuk Kim wrote:
> >> On 05/25, Chao Yu wrote:
> >>> On 2020/5/25 11:56, Jaegeuk Kim wrote:
> Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages
> >>>
> >>> IMO, for the umount case, we should drop dirty references and dirty pages on
> >>> meta/data pages, just as the change does for node pages, to avoid a potential
> >>> dead loop...
> >>
> >> I believe we're already doing that for them. :P
> > 
> > Actually, I mean: do we need to drop dirty meta/data pages explicitly, as below:
> > 
> > diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> > index 3dc3ac6fe143..4c08fd0a680a 100644
> > --- a/fs/f2fs/checkpoint.c
> > +++ b/fs/f2fs/checkpoint.c
> > @@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,
> > 
> > trace_f2fs_writepage(page, META);
> > 
> > -   if (unlikely(f2fs_cp_error(sbi)))
> > +   if (unlikely(f2fs_cp_error(sbi))) {
> > +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> > +   ClearPageUptodate(page);
> > +   dec_page_count(sbi, F2FS_DIRTY_META);
> > +   unlock_page(page);
> > +   return 0;
> > +   }
> > goto redirty_out;
> > +   }
> > if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
> > goto redirty_out;
> > if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
> > diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> > index 48a622b95b76..94b342802513 100644
> > --- a/fs/f2fs/data.c
> > +++ b/fs/f2fs/data.c
> > @@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
> > 
> > /* we should bypass data pages to proceed the kworkder jobs */
> > if (unlikely(f2fs_cp_error(sbi))) {
> > +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> > +   ClearPageUptodate(page);
> > +   inode_dec_dirty_pages(inode);
> > +   unlock_page(page);
> > +   return 0;
> > +   }
> 
> Oh, I notice that previously we drop non-directory inodes' dirty pages directly;
> however, during umount, we'd better drop directory inodes' dirty pages as well, right?

Hmm, I remember I dropped them before. Need to double check.

> 
> > mapping_set_error(page->mapping, -EIO);
> > /*
> >  * don't drop any dirty dentry pages for keeping lastest
> > 
> >>
> >>>
> >>> Thanks,
> >>>
> in an infinite loop. Let's drop dirty pages at umount in that case.
> 
>  Signed-off-by: Jaegeuk Kim 
>  ---
>  v3:
>   - fix wrong unlock
> 
>  v2:
>   - fix typos
> 
>   fs/f2fs/node.c | 9 -
>   1 file changed, 8 insertions(+), 1 deletion(-)
> 
>  diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>  index e632de10aedab..e0bb0f7e0506e 100644
>  --- a/fs/f2fs/node.c
>  +++ b/fs/f2fs/node.c
> @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
>   
>   trace_f2fs_writepage(page, NODE);
>   
>  -if (unlikely(f2fs_cp_error(sbi)))
>  +if (unlikely(f2fs_cp_error(sbi))) {
>  +if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>  +ClearPageUptodate(page);
>  +dec_page_count(sbi, F2FS_DIRTY_NODES);
>  +unlock_page(page);
>  +return 0;
>  +}
>   goto redirty_out;
>  +}
>   
>   if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>   goto redirty_out;
> 


___
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-25 Thread Chao Yu
On 2020/5/26 9:11, Chao Yu wrote:
> On 2020/5/25 23:06, Jaegeuk Kim wrote:
>> On 05/25, Chao Yu wrote:
>>> On 2020/5/25 11:56, Jaegeuk Kim wrote:
Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages
>>>
>>> IMO, for the umount case, we should drop dirty references and dirty pages on
>>> meta/data pages, just as the change does for node pages, to avoid a potential
>>> dead loop...
>>
>> I believe we're already doing that for them. :P
> 
> Actually, I mean: do we need to drop dirty meta/data pages explicitly, as below:
> 
> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> index 3dc3ac6fe143..4c08fd0a680a 100644
> --- a/fs/f2fs/checkpoint.c
> +++ b/fs/f2fs/checkpoint.c
> @@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,
> 
>   trace_f2fs_writepage(page, META);
> 
> - if (unlikely(f2fs_cp_error(sbi)))
> + if (unlikely(f2fs_cp_error(sbi))) {
> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> + ClearPageUptodate(page);
> + dec_page_count(sbi, F2FS_DIRTY_META);
> + unlock_page(page);
> + return 0;
> + }
>   goto redirty_out;
> + }
>   if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>   goto redirty_out;
>   if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 48a622b95b76..94b342802513 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
> 
>   /* we should bypass data pages to proceed the kworkder jobs */
>   if (unlikely(f2fs_cp_error(sbi))) {
> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> + ClearPageUptodate(page);
> + inode_dec_dirty_pages(inode);
> + unlock_page(page);
> + return 0;
> + }

Oh, I notice that previously we drop non-directory inodes' dirty pages directly;
however, during umount, we'd better drop directory inodes' dirty pages as well, right?

>   mapping_set_error(page->mapping, -EIO);
>   /*
>* don't drop any dirty dentry pages for keeping lastest
> 
>>
>>>
>>> Thanks,
>>>
in an infinite loop. Let's drop dirty pages at umount in that case.

 Signed-off-by: Jaegeuk Kim 
 ---
 v3:
  - fix wrong unlock

 v2:
  - fix typos

  fs/f2fs/node.c | 9 -
  1 file changed, 8 insertions(+), 1 deletion(-)

 diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
 index e632de10aedab..e0bb0f7e0506e 100644
 --- a/fs/f2fs/node.c
 +++ b/fs/f2fs/node.c
@@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
  
trace_f2fs_writepage(page, NODE);
  
 -  if (unlikely(f2fs_cp_error(sbi)))
 +  if (unlikely(f2fs_cp_error(sbi))) {
 +  if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
 +  ClearPageUptodate(page);
 +  dec_page_count(sbi, F2FS_DIRTY_NODES);
 +  unlock_page(page);
 +  return 0;
 +  }
goto redirty_out;
 +  }
  
if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
goto redirty_out;





Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-25 Thread Chao Yu
On 2020/5/25 23:06, Jaegeuk Kim wrote:
> On 05/25, Chao Yu wrote:
>> On 2020/5/25 11:56, Jaegeuk Kim wrote:
>>> Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages
>>
>> IMO, for the umount case, we should drop dirty references and dirty pages on
>> meta/data pages, just as the change does for node pages, to avoid a potential
>> dead loop...
> 
> I believe we're already doing that for them. :P

Actually, I mean: do we need to drop dirty meta/data pages explicitly, as below:

diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 3dc3ac6fe143..4c08fd0a680a 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,

trace_f2fs_writepage(page, META);

-   if (unlikely(f2fs_cp_error(sbi)))
+   if (unlikely(f2fs_cp_error(sbi))) {
+   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
+   ClearPageUptodate(page);
+   dec_page_count(sbi, F2FS_DIRTY_META);
+   unlock_page(page);
+   return 0;
+   }
goto redirty_out;
+   }
if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
goto redirty_out;
if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 48a622b95b76..94b342802513 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,

/* we should bypass data pages to proceed the kworkder jobs */
if (unlikely(f2fs_cp_error(sbi))) {
+   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
+   ClearPageUptodate(page);
+   inode_dec_dirty_pages(inode);
+   unlock_page(page);
+   return 0;
+   }
mapping_set_error(page->mapping, -EIO);
/*
 * don't drop any dirty dentry pages for keeping lastest

> 
>>
>> Thanks,
>>
>>> in an infinite loop. Let's drop dirty pages at umount in that case.
>>>
>>> Signed-off-by: Jaegeuk Kim 
>>> ---
>>> v3:
>>>  - fix wrong unlock
>>>
>>> v2:
>>>  - fix typos
>>>
>>>  fs/f2fs/node.c | 9 -
>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>>> index e632de10aedab..e0bb0f7e0506e 100644
>>> --- a/fs/f2fs/node.c
>>> +++ b/fs/f2fs/node.c
>>> @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
>>>  
>>> trace_f2fs_writepage(page, NODE);
>>>  
>>> -   if (unlikely(f2fs_cp_error(sbi)))
>>> +   if (unlikely(f2fs_cp_error(sbi))) {
>>> +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>>> +   ClearPageUptodate(page);
>>> +   dec_page_count(sbi, F2FS_DIRTY_NODES);
>>> +   unlock_page(page);
>>> +   return 0;
>>> +   }
>>> goto redirty_out;
>>> +   }
>>>  
>>> if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>>> goto redirty_out;
>>>




Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-25 Thread Jaegeuk Kim
On 05/25, Chao Yu wrote:
> On 2020/5/25 11:56, Jaegeuk Kim wrote:
> > Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages
> 
> IMO, for the umount case, we should drop dirty references and dirty pages on
> meta/data pages, just as the change does for node pages, to avoid a potential
> dead loop...

I believe we're already doing that for them. :P

> 
> Thanks,
> 
> > in an infinite loop. Let's drop dirty pages at umount in that case.
> > 
> > Signed-off-by: Jaegeuk Kim 
> > ---
> > v3:
> >  - fix wrong unlock
> > 
> > v2:
> >  - fix typos
> > 
> >  fs/f2fs/node.c | 9 -
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> > index e632de10aedab..e0bb0f7e0506e 100644
> > --- a/fs/f2fs/node.c
> > +++ b/fs/f2fs/node.c
> > @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
> >  
> > trace_f2fs_writepage(page, NODE);
> >  
> > -   if (unlikely(f2fs_cp_error(sbi)))
> > +   if (unlikely(f2fs_cp_error(sbi))) {
> > +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> > +   ClearPageUptodate(page);
> > +   dec_page_count(sbi, F2FS_DIRTY_NODES);
> > +   unlock_page(page);
> > +   return 0;
> > +   }
> > goto redirty_out;
> > +   }
> >  
> > if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
> > goto redirty_out;
> > 




Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-25 Thread Chao Yu
On 2020/5/25 11:56, Jaegeuk Kim wrote:
> Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages

IMO, for the umount case, we should drop dirty references and dirty pages on
meta/data pages, just as the change does for node pages, to avoid a potential
dead loop...

Thanks,

> in an infinite loop. Let's drop dirty pages at umount in that case.
> 
> Signed-off-by: Jaegeuk Kim 
> ---
> v3:
>  - fix wrong unlock
> 
> v2:
>  - fix typos
> 
>  fs/f2fs/node.c | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index e632de10aedab..e0bb0f7e0506e 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
>  
>   trace_f2fs_writepage(page, NODE);
>  
> - if (unlikely(f2fs_cp_error(sbi)))
> + if (unlikely(f2fs_cp_error(sbi))) {
> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> + ClearPageUptodate(page);
> + dec_page_count(sbi, F2FS_DIRTY_NODES);
> + unlock_page(page);
> + return 0;
> + }
>   goto redirty_out;
> + }
>  
>   if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>   goto redirty_out;
> 




Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-24 Thread Jaegeuk Kim
Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages
in an infinite loop. Let's drop dirty pages at umount in that case.

Signed-off-by: Jaegeuk Kim 
---
v3:
 - fix wrong unlock

v2:
 - fix typos

 fs/f2fs/node.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index e632de10aedab..e0bb0f7e0506e 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
 
trace_f2fs_writepage(page, NODE);
 
-   if (unlikely(f2fs_cp_error(sbi)))
+   if (unlikely(f2fs_cp_error(sbi))) {
+   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
+   ClearPageUptodate(page);
+   dec_page_count(sbi, F2FS_DIRTY_NODES);
+   unlock_page(page);
+   return 0;
+   }
goto redirty_out;
+   }
 
if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
goto redirty_out;
-- 
2.27.0.rc0.183.gde8f92d652-goog


