On 2020/5/29 3:00, Jaegeuk Kim wrote:
> On 05/28, Chao Yu wrote:
>> On 2020/5/28 9:26, Jaegeuk Kim wrote:
>>> On 05/28, Chao Yu wrote:
>>>> On 2020/5/28 5:02, Jaegeuk Kim wrote:
>>>>> On 05/27, Chao Yu wrote:
>>>>>> Meta inode pages should be flushed under cp_lock; fix it.
>>>>>
>>>>> It doesn't matter for this case, yes?
>>>>
>>>> It's not related to the discard issue.
>>>
>>> I meant: do we really need this or not? :P
>>
>> Yes, let's keep that rule: flush meta pages under cp_lock; otherwise
>> the checkpoint flush order may be broken due to a race, right? The
>> checkpoint should write the 2nd cp pack block only after flushing all
>> meta pages.
> 
> Well, this is for the shutdown test, and thus we don't need to sync up here.

I'm a little worried about a race condition:

f2fs_write_checkpoint
 do_checkpoint
  ...
                                        shutdown
                                        f2fs_sync_meta_pages
                                        stop_checkpoint
  ...

Though I haven't figured out the potential damage of this race yet.
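
To make the ordering concern concrete, below is a minimal userspace
sketch (plain pthreads, not f2fs code; the function bodies and printfs
are illustrative stand-ins only) of what the hunk below enforces: both
the checkpoint writer and the shutdown-time flusher serialize on the
same mutex, so the meta flush can't interleave with writing the 2nd cp
pack block.

/* build with: gcc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cp_mutex = PTHREAD_MUTEX_INITIALIZER;

/* stand-in for f2fs_sync_meta_pages() */
static void flush_meta_pages(const char *who)
{
	printf("%s: flushing meta pages\n", who);
}

/* models f2fs_write_checkpoint(): flush meta, then commit the cp pack */
static void *checkpoint_thread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&cp_mutex);
	flush_meta_pages("checkpoint");
	printf("checkpoint: writing 2nd cp pack block\n");
	pthread_mutex_unlock(&cp_mutex);
	return NULL;
}

/* models F2FS_GOING_DOWN_METAFLUSH with the proposed locking */
static void *shutdown_thread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&cp_mutex);
	flush_meta_pages("shutdown");
	pthread_mutex_unlock(&cp_mutex);
	printf("shutdown: stopping checkpoint\n");
	return NULL;
}

int main(void)
{
	pthread_t cp, sd;

	pthread_create(&cp, NULL, checkpoint_thread, NULL);
	pthread_create(&sd, NULL, shutdown_thread, NULL);
	pthread_join(cp, NULL);
	pthread_join(sd, NULL);
	return 0;
}

Without taking the mutex in the shutdown path, the flush could land
between the checkpoint's meta writeback and its cp pack commit.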

> 
>>
>>>
>>>>
>>>> Now I've made some progress: I can reproduce that bug occasionally.
>>>>
>>>> Thanks,
>>>>
>>>>>
>>>>>>
>>>>>> Signed-off-by: Chao Yu <yuch...@huawei.com>
>>>>>> ---
>>>>>>  fs/f2fs/file.c | 2 ++
>>>>>>  1 file changed, 2 insertions(+)
>>>>>>
>>>>>> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
>>>>>> index f7de2a1da528..0fcae4d90074 100644
>>>>>> --- a/fs/f2fs/file.c
>>>>>> +++ b/fs/f2fs/file.c
>>>>>> @@ -2260,7 +2260,9 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
>>>>>>                  set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
>>>>>>                  break;
>>>>>>          case F2FS_GOING_DOWN_METAFLUSH:
>>>>>> +                mutex_lock(&sbi->cp_mutex);
>>>>>>                  f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_META_IO);
>>>>>> +                mutex_unlock(&sbi->cp_mutex);
>>>>>>                  f2fs_stop_checkpoint(sbi, false);
>>>>>>                  set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
>>>>>>                  break;
>>>>>> -- 
>>>>>> 2.18.0.rc1
