Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-13 Thread Chao Yu
On 2018/4/13 12:07, Jaegeuk Kim wrote:
> On 04/13, Chao Yu wrote:
>> On 2018/4/13 9:06, Jaegeuk Kim wrote:
>>> On 04/10, Chao Yu wrote:
 On 2018/4/10 12:10, Jaegeuk Kim wrote:
> On 04/10, Chao Yu wrote:
>> On 2018/4/10 2:02, Jaegeuk Kim wrote:
>>> On 04/08, Chao Yu wrote:
 On 2018/4/5 11:51, Jaegeuk Kim wrote:
> On 04/04, Chao Yu wrote:
>> This patch enlarges block plug coverage in __issue_discard_cmd, in
>> order to collect more pending bios before issuing them, to avoid
>> being disturbed by previous discard I/O in IO aware discard mode.
>
> Hmm, then we need to wait for huge discard IO for over 10 secs, which

 We found that total discard latency depends on the number of discards we
 issued in the last round rather than on the range or length the discards
 cover. IMO, if we don't change the .max_requests value, we will not suffer
 longer latency.

> will affect following read/write IOs accordingly. In order to avoid that,
> we actually need to limit the discard size.
>>
>> Do you mean limit discard count or discard length?
>
> Both of them.
>
>>

 If you are worried about I/O interference between discard and rw, I suggest
 decreasing the .max_requests value.
>>>
>>> What do you mean? This will produce more pending requests in the queue?
>>
>> I mean that after applying this patch, we can queue more discard IOs in the
>> plug inside the task; otherwise, previously issued discards in the block
>> layer can make is_idle() return false, which stops the IO-aware issuer from
>> submitting pending discard commands.
>
> Then, unplug will issue lots of discard commands, which affects the
> following rw latencies. My preference would be issuing discard commands
> one by one as much as possible.

 Hmm.. for your concern, could we turn down the IO priority of background
 discard?
>>>
>>> That makes much more sense to me. :P
>>
>> Then, this patch which enlarges plug coverage will no longer be a problem,
>> right? ;)
> 
> This is a different one.

Yup, if there will be no IO interference as you were concerned about before,
can we accept it now?

Thanks,

> 
>>
>> Thanks,
>>
>>>

 Thanks,

>
>>
>> Thanks,
>>
>>>

 Thanks,

>
> Thanks,
>
>>
>> Signed-off-by: Chao Yu 
>> ---
>>  fs/f2fs/segment.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>> index 8f0b5ba46315..4287e208c040 100644
>> --- a/fs/f2fs/segment.c
>> +++ b/fs/f2fs/segment.c
>> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>>  		pend_list = &dcc->pend_list[i];
>>  
>>  		mutex_lock(&dcc->cmd_lock);
>> +
>> +		blk_start_plug(&plug);
>> +
>>  		if (list_empty(pend_list))
>>  			goto next;
>>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
>> -		blk_start_plug(&plug);
>>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
>>  			f2fs_bug_on(sbi, dc->state != D_PREP);
>>  
>> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>>  			if (++iter >= dpolicy->max_requests)
>>  				break;
>>  		}
>> -		blk_finish_plug(&plug);
>>  next:
>> +		blk_finish_plug(&plug);
>> +
>>  		mutex_unlock(&dcc->cmd_lock);
>>  
>>  		if (iter >= dpolicy->max_requests)
>> -- 
>> 2.15.0.55.gc2ece9dc4de6



Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-12 Thread Jaegeuk Kim
On 04/13, Chao Yu wrote:
> On 2018/4/13 9:06, Jaegeuk Kim wrote:
> > On 04/10, Chao Yu wrote:
> >> On 2018/4/10 12:10, Jaegeuk Kim wrote:
> >>> On 04/10, Chao Yu wrote:
>  On 2018/4/10 2:02, Jaegeuk Kim wrote:
> > On 04/08, Chao Yu wrote:
> >> On 2018/4/5 11:51, Jaegeuk Kim wrote:
> >>> On 04/04, Chao Yu wrote:
>  This patch enlarges block plug coverage in __issue_discard_cmd, in
>  order to collect more pending bios before issuing them, to avoid
>  being disturbed by previous discard I/O in IO aware discard mode.
> >>>
> >>> Hmm, then we need to wait for huge discard IO for over 10 secs, which
> >>
> >> We found that total discard latency depends on the number of discards we
> >> issued in the last round rather than on the range or length the discards
> >> cover. IMO, if we don't change the .max_requests value, we will not
> >> suffer longer latency.
> >>
> >>> will affect following read/write IOs accordingly. In order to avoid that,
> >>> we actually need to limit the discard size.
> 
>  Do you mean limit discard count or discard length?
> >>>
> >>> Both of them.
> >>>
> 
> >>
> >> If you are worried about I/O interference between discard and rw, I
> >> suggest decreasing the .max_requests value.
> >
> > What do you mean? This will produce more pending requests in the queue?
> 
>  I mean that after applying this patch, we can queue more discard IOs in the
>  plug inside the task; otherwise, previously issued discards in the block
>  layer can make is_idle() return false, which stops the IO-aware issuer from
>  submitting pending discard commands.
> >>>
> >>> Then, unplug will issue lots of discard commands, which affects the
> >>> following rw latencies. My preference would be issuing discard commands
> >>> one by one as much as possible.
> >>
> >> Hmm.. for your concern, could we turn down the IO priority of background
> >> discard?
> > 
> > That makes much more sense to me. :P
> 
> Then, this patch which enlarges plug coverage will no longer be a problem,
> right? ;)

This is a different one.

> 
> Thanks,
> 
> > 
> >>
> >> Thanks,
> >>
> >>>
> 
>  Thanks,
> 
> >
> >>
> >> Thanks,
> >>
> >>>
> >>> Thanks,
> >>>
> 
>  Signed-off-by: Chao Yu 
>  ---
>   fs/f2fs/segment.c | 7 +++++--
>   1 file changed, 5 insertions(+), 2 deletions(-)
> 
>  diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>  index 8f0b5ba46315..4287e208c040 100644
>  --- a/fs/f2fs/segment.c
>  +++ b/fs/f2fs/segment.c
> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>  		pend_list = &dcc->pend_list[i];
>  
>  		mutex_lock(&dcc->cmd_lock);
> +
> +		blk_start_plug(&plug);
> +
>  		if (list_empty(pend_list))
>  			goto next;
>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
> -		blk_start_plug(&plug);
>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
>  			f2fs_bug_on(sbi, dc->state != D_PREP);
>  
> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>  			if (++iter >= dpolicy->max_requests)
>  				break;
>  		}
> -		blk_finish_plug(&plug);
>  next:
> +		blk_finish_plug(&plug);
> +
>  		mutex_unlock(&dcc->cmd_lock);
>  
>  		if (iter >= dpolicy->max_requests)
>  -- 
>  2.15.0.55.gc2ece9dc4de6


Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-12 Thread Chao Yu
On 2018/4/13 9:06, Jaegeuk Kim wrote:
> On 04/10, Chao Yu wrote:
>> On 2018/4/10 12:10, Jaegeuk Kim wrote:
>>> On 04/10, Chao Yu wrote:
 On 2018/4/10 2:02, Jaegeuk Kim wrote:
> On 04/08, Chao Yu wrote:
>> On 2018/4/5 11:51, Jaegeuk Kim wrote:
>>> On 04/04, Chao Yu wrote:
 This patch enlarges block plug coverage in __issue_discard_cmd, in
 order to collect more pending bios before issuing them, to avoid
 being disturbed by previous discard I/O in IO aware discard mode.
>>>
>>> Hmm, then we need to wait for huge discard IO for over 10 secs, which
>>
>> We found that total discard latency depends on the number of discards we
>> issued in the last round rather than on the range or length the discards
>> cover. IMO, if we don't change the .max_requests value, we will not suffer
>> longer latency.
>>
>>> will affect following read/write IOs accordingly. In order to avoid that,
>>> we actually need to limit the discard size.

 Do you mean limit discard count or discard length?
>>>
>>> Both of them.
>>>

>>
>> If you are worried about I/O interference between discard and rw, I suggest
>> decreasing the .max_requests value.
>
> What do you mean? This will produce more pending requests in the queue?

 I mean that after applying this patch, we can queue more discard IOs in the
 plug inside the task; otherwise, previously issued discards in the block
 layer can make is_idle() return false, which stops the IO-aware issuer from
 submitting pending discard commands.
>>>
>>> Then, unplug will issue lots of discard commands, which affects the
>>> following rw latencies. My preference would be issuing discard commands
>>> one by one as much as possible.
>>
>> Hmm.. for your concern, could we turn down the IO priority of background
>> discard?
> 
> That makes much more sense to me. :P

Then, this patch which enlarges plug coverage will no longer be a problem, right? ;)

Thanks,

> 
>>
>> Thanks,
>>
>>>

 Thanks,

>
>>
>> Thanks,
>>
>>>
>>> Thanks,
>>>

 Signed-off-by: Chao Yu 
 ---
 fs/f2fs/segment.c | 7 +++++--
  1 file changed, 5 insertions(+), 2 deletions(-)

 diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
 index 8f0b5ba46315..4287e208c040 100644
 --- a/fs/f2fs/segment.c
 +++ b/fs/f2fs/segment.c
@@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
 		pend_list = &dcc->pend_list[i];
 
 		mutex_lock(&dcc->cmd_lock);
+
+		blk_start_plug(&plug);
+
 		if (list_empty(pend_list))
 			goto next;
 		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
-		blk_start_plug(&plug);
 		list_for_each_entry_safe(dc, tmp, pend_list, list) {
 			f2fs_bug_on(sbi, dc->state != D_PREP);
 
@@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
 			if (++iter >= dpolicy->max_requests)
 				break;
 		}
-		blk_finish_plug(&plug);
 next:
+		blk_finish_plug(&plug);
+
 		mutex_unlock(&dcc->cmd_lock);
 
 		if (iter >= dpolicy->max_requests)
 -- 
 2.15.0.55.gc2ece9dc4de6



Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-12 Thread Jaegeuk Kim
On 04/10, Chao Yu wrote:
> On 2018/4/10 12:10, Jaegeuk Kim wrote:
> > On 04/10, Chao Yu wrote:
> >> On 2018/4/10 2:02, Jaegeuk Kim wrote:
> >>> On 04/08, Chao Yu wrote:
>  On 2018/4/5 11:51, Jaegeuk Kim wrote:
> > On 04/04, Chao Yu wrote:
> >> This patch enlarges block plug coverage in __issue_discard_cmd, in
> >> order to collect more pending bios before issuing them, to avoid
> >> being disturbed by previous discard I/O in IO aware discard mode.
> >
> > Hmm, then we need to wait for huge discard IO for over 10 secs, which
> 
>  We found that total discard latency depends on the number of discards we
>  issued in the last round rather than on the range or length the discards
>  cover. IMO, if we don't change the .max_requests value, we will not suffer
>  longer latency.
> 
> > will affect following read/write IOs accordingly. In order to avoid that,
> > we actually need to limit the discard size.
> >>
> >> Do you mean limit discard count or discard length?
> > 
> > Both of them.
> > 
> >>
> 
>  If you are worried about I/O interference between discard and rw, I suggest
>  decreasing the .max_requests value.
> >>>
> >>> What do you mean? This will produce more pending requests in the queue?
> >>
> >> I mean that after applying this patch, we can queue more discard IOs in
> >> the plug inside the task; otherwise, previously issued discards in the
> >> block layer can make is_idle() return false, which stops the IO-aware
> >> issuer from submitting pending discard commands.
> > 
> > Then, unplug will issue lots of discard commands, which affects the
> > following rw latencies. My preference would be issuing discard commands
> > one by one as much as possible.
> 
> Hmm.. for your concern, could we turn down the IO priority of background
> discard?

That makes much more sense to me. :P

> 
> Thanks,
> 
> > 
> >>
> >> Thanks,
> >>
> >>>
> 
>  Thanks,
> 
> >
> > Thanks,
> >
> >>
> >> Signed-off-by: Chao Yu 
> >> ---
> >>  fs/f2fs/segment.c | 7 +++++--
> >>  1 file changed, 5 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> >> index 8f0b5ba46315..4287e208c040 100644
> >> --- a/fs/f2fs/segment.c
> >> +++ b/fs/f2fs/segment.c
> >> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
> >>  		pend_list = &dcc->pend_list[i];
> >>  
> >>  		mutex_lock(&dcc->cmd_lock);
> >> +
> >> +		blk_start_plug(&plug);
> >> +
> >>  		if (list_empty(pend_list))
> >>  			goto next;
> >>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
> >> -		blk_start_plug(&plug);
> >>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
> >>  			f2fs_bug_on(sbi, dc->state != D_PREP);
> >>  
> >> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
> >>  			if (++iter >= dpolicy->max_requests)
> >>  				break;
> >>  		}
> >> -		blk_finish_plug(&plug);
> >>  next:
> >> +		blk_finish_plug(&plug);
> >> +
> >>  		mutex_unlock(&dcc->cmd_lock);
> >>  
> >>  		if (iter >= dpolicy->max_requests)
> >> -- 
> >> 2.15.0.55.gc2ece9dc4de6


Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-10 Thread Chao Yu
On 2018/4/10 12:10, Jaegeuk Kim wrote:
> On 04/10, Chao Yu wrote:
>> On 2018/4/10 2:02, Jaegeuk Kim wrote:
>>> On 04/08, Chao Yu wrote:
 On 2018/4/5 11:51, Jaegeuk Kim wrote:
> On 04/04, Chao Yu wrote:
>> This patch enlarges block plug coverage in __issue_discard_cmd, in
>> order to collect more pending bios before issuing them, to avoid
>> being disturbed by previous discard I/O in IO aware discard mode.
>
> Hmm, then we need to wait for huge discard IO for over 10 secs, which

 We found that total discard latency depends on the number of discards we
 issued in the last round rather than on the range or length the discards
 cover. IMO, if we don't change the .max_requests value, we will not suffer
 longer latency.

> will affect following read/write IOs accordingly. In order to avoid that,
> we actually need to limit the discard size.
>>
>> Do you mean limit discard count or discard length?
> 
> Both of them.
> 
>>

 If you are worried about I/O interference between discard and rw, I suggest
 decreasing the .max_requests value.
>>>
>>> What do you mean? This will produce more pending requests in the queue?
>>
>> I mean that after applying this patch, we can queue more discard IOs in the
>> plug inside the task; otherwise, previously issued discards in the block
>> layer can make is_idle() return false, which stops the IO-aware issuer from
>> submitting pending discard commands.
> 
> Then, unplug will issue lots of discard commands, which affects the
> following rw latencies. My preference would be issuing discard commands
> one by one as much as possible.

Hmm.. for your concern, could we turn down the IO priority of background discard?

Thanks,

> 
>>
>> Thanks,
>>
>>>

 Thanks,

>
> Thanks,
>
>>
>> Signed-off-by: Chao Yu 
>> ---
>>  fs/f2fs/segment.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>> index 8f0b5ba46315..4287e208c040 100644
>> --- a/fs/f2fs/segment.c
>> +++ b/fs/f2fs/segment.c
>> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>>  		pend_list = &dcc->pend_list[i];
>>  
>>  		mutex_lock(&dcc->cmd_lock);
>> +
>> +		blk_start_plug(&plug);
>> +
>>  		if (list_empty(pend_list))
>>  			goto next;
>>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
>> -		blk_start_plug(&plug);
>>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
>>  			f2fs_bug_on(sbi, dc->state != D_PREP);
>>  
>> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>>  			if (++iter >= dpolicy->max_requests)
>>  				break;
>>  		}
>> -		blk_finish_plug(&plug);
>>  next:
>> +		blk_finish_plug(&plug);
>> +
>>  		mutex_unlock(&dcc->cmd_lock);
>>  
>>  		if (iter >= dpolicy->max_requests)
>> -- 
>> 2.15.0.55.gc2ece9dc4de6


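As an aside, the idea of lowering discard's IO priority can be sketched from
user space with the ioprio_set() syscall (what ionice uses); the kernel-side
change hinted at above would live inside the f2fs discard thread instead. A
minimal sketch, with the ABI constants redefined locally since older libcs
ship no header for them:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* ioprio ABI values, mirrored from the kernel */
#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_PRIO_VALUE(cls, data)	(((cls) << IOPRIO_CLASS_SHIFT) | (data))
#define IOPRIO_CLASS_IDLE	3	/* served only when the disk is otherwise idle */
#define IOPRIO_WHO_PROCESS	1

int main(void)
{
	/* Demote the calling process (who == 0) so its IO yields to all
	 * foreground readers/writers, i.e. what "ionice -c 3" does. */
	if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
		    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0)) != 0) {
		perror("ioprio_set");
		return 1;
	}
	return 0;
}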

Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-09 Thread Jaegeuk Kim
On 04/10, Chao Yu wrote:
> On 2018/4/10 2:02, Jaegeuk Kim wrote:
> > On 04/08, Chao Yu wrote:
> >> On 2018/4/5 11:51, Jaegeuk Kim wrote:
> >>> On 04/04, Chao Yu wrote:
>  This patch enlarges block plug coverage in __issue_discard_cmd, in
>  order to collect more pending bios before issuing them, to avoid
>  being disturbed by previous discard I/O in IO aware discard mode.
> >>>
> >>> Hmm, then we need to wait for huge discard IO for over 10 secs, which
> >>
> >> We found that total discard latency depends on the number of discards we
> >> issued in the last round rather than on the range or length the discards
> >> cover. IMO, if we don't change the .max_requests value, we will not
> >> suffer longer latency.
> >>
> >>> will affect following read/write IOs accordingly. In order to avoid that,
> >>> we actually need to limit the discard size.
> 
> Do you mean limit discard count or discard length?

Both of them.

> 
> >>
> >> If you are worried about I/O interference between discard and rw, I
> >> suggest decreasing the .max_requests value.
> > 
> > What do you mean? This will produce more pending requests in the queue?
> 
> I mean that after applying this patch, we can queue more discard IOs in the
> plug inside the task; otherwise, previously issued discards in the block
> layer can make is_idle() return false, which stops the IO-aware issuer from
> submitting pending discard commands.

Then, unplug will issue lots of discard commands, which affects the following rw
latencies. My preference would be issuing discard commands one by one as much as
possible.

> 
> Thanks,
> 
> > 
> >>
> >> Thanks,
> >>
> >>>
> >>> Thanks,
> >>>
> 
>  Signed-off-by: Chao Yu 
>  ---
>   fs/f2fs/segment.c | 7 +++++--
>   1 file changed, 5 insertions(+), 2 deletions(-)
> 
>  diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>  index 8f0b5ba46315..4287e208c040 100644
>  --- a/fs/f2fs/segment.c
>  +++ b/fs/f2fs/segment.c
> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>  		pend_list = &dcc->pend_list[i];
>  
>  		mutex_lock(&dcc->cmd_lock);
> +
> +		blk_start_plug(&plug);
> +
>  		if (list_empty(pend_list))
>  			goto next;
>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
> -		blk_start_plug(&plug);
>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
>  			f2fs_bug_on(sbi, dc->state != D_PREP);
>  
> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>  			if (++iter >= dpolicy->max_requests)
>  				break;
>  		}
> -		blk_finish_plug(&plug);
>  next:
> +		blk_finish_plug(&plug);
> +
>  		mutex_unlock(&dcc->cmd_lock);
>  
>  		if (iter >= dpolicy->max_requests)
>  -- 
>  2.15.0.55.gc2ece9dc4de6


Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-09 Thread Chao Yu
On 2018/4/10 2:02, Jaegeuk Kim wrote:
> On 04/08, Chao Yu wrote:
>> On 2018/4/5 11:51, Jaegeuk Kim wrote:
>>> On 04/04, Chao Yu wrote:
 This patch enlarges block plug coverage in __issue_discard_cmd, in
 order to collect more pending bios before issuing them, to avoid
 being disturbed by previous discard I/O in IO aware discard mode.
>>>
>>> Hmm, then we need to wait for huge discard IO for over 10 secs, which
>>
>> We found that total discard latency depends on the number of discards we
>> issued in the last round rather than on the range or length the discards
>> cover. IMO, if we don't change the .max_requests value, we will not suffer
>> longer latency.
>>
>>> will affect following read/write IOs accordingly. In order to avoid that,
>>> we actually need to limit the discard size.

Do you mean limit discard count or discard length?

>>
>> If you are worried about I/O interference between discard and rw, I suggest
>> decreasing the .max_requests value.
> 
> What do you mean? This will produce more pending requests in the queue?

I mean that after applying this patch, we can queue more discard IOs in the
plug inside the task; otherwise, previously issued discards in the block layer
can make is_idle() return false, which stops the IO-aware issuer from
submitting pending discard commands.

Thanks,

> 
>>
>> Thanks,
>>
>>>
>>> Thanks,
>>>

 Signed-off-by: Chao Yu 
 ---
 fs/f2fs/segment.c | 7 +++++--
  1 file changed, 5 insertions(+), 2 deletions(-)

 diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
 index 8f0b5ba46315..4287e208c040 100644
 --- a/fs/f2fs/segment.c
 +++ b/fs/f2fs/segment.c
@@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
 		pend_list = &dcc->pend_list[i];
 
 		mutex_lock(&dcc->cmd_lock);
+
+		blk_start_plug(&plug);
+
 		if (list_empty(pend_list))
 			goto next;
 		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
-		blk_start_plug(&plug);
 		list_for_each_entry_safe(dc, tmp, pend_list, list) {
 			f2fs_bug_on(sbi, dc->state != D_PREP);
 
@@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
 			if (++iter >= dpolicy->max_requests)
 				break;
 		}
-		blk_finish_plug(&plug);
 next:
+		blk_finish_plug(&plug);
+
 		mutex_unlock(&dcc->cmd_lock);
 
 		if (iter >= dpolicy->max_requests)
 -- 
 2.15.0.55.gc2ece9dc4de6


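For reference, the is_idle() check being discussed looks roughly like this in
4.16-era f2fs (paraphrased from fs/f2fs/f2fs.h; details vary by kernel
version). The device only counts as idle while the root request list holds no
sync or async requests, which is why discards already handed to the block
layer keep it false:

static inline bool is_idle(struct f2fs_sb_info *sbi)
{
	struct block_device *bdev = sbi->sb->s_bdev;
	struct request_queue *q = bdev_get_queue(bdev);
	struct request_list *rl = &q->root_rl;

	/* any queued request, sync or async, means "not idle" */
	return !(rl->count[BLK_RW_SYNC] + rl->count[BLK_RW_ASYNC]);
}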

Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-09 Thread Jaegeuk Kim
On 04/08, Chao Yu wrote:
> On 2018/4/5 11:51, Jaegeuk Kim wrote:
> > On 04/04, Chao Yu wrote:
> >> This patch enlarges block plug coverage in __issue_discard_cmd, in
> >> order to collect more pending bios before issuing them, to avoid
> >> being disturbed by previous discard I/O in IO aware discard mode.
> > 
> > Hmm, then we need to wait for huge discard IO for over 10 secs, which
> 
> We found that total discard latency depends on the number of discards we
> issued in the last round rather than on the range or length the discards
> cover. IMO, if we don't change the .max_requests value, we will not suffer
> longer latency.
> 
> > will affect following read/write IOs accordingly. In order to avoid that,
> > we actually need to limit the discard size.
> 
> If you are worried about I/O interference between discard and rw, I suggest
> decreasing the .max_requests value.

What do you mean? This will produce more pending requests in the queue?

> 
> Thanks,
> 
> > 
> > Thanks,
> > 
> >>
> >> Signed-off-by: Chao Yu 
> >> ---
> >>  fs/f2fs/segment.c | 7 +++++--
> >>  1 file changed, 5 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> >> index 8f0b5ba46315..4287e208c040 100644
> >> --- a/fs/f2fs/segment.c
> >> +++ b/fs/f2fs/segment.c
> >> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
> >>  		pend_list = &dcc->pend_list[i];
> >>  
> >>  		mutex_lock(&dcc->cmd_lock);
> >> +
> >> +		blk_start_plug(&plug);
> >> +
> >>  		if (list_empty(pend_list))
> >>  			goto next;
> >>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
> >> -		blk_start_plug(&plug);
> >>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
> >>  			f2fs_bug_on(sbi, dc->state != D_PREP);
> >>  
> >> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
> >>  			if (++iter >= dpolicy->max_requests)
> >>  				break;
> >>  		}
> >> -		blk_finish_plug(&plug);
> >>  next:
> >> +		blk_finish_plug(&plug);
> >> +
> >>  		mutex_unlock(&dcc->cmd_lock);
> >>  
> >>  		if (iter >= dpolicy->max_requests)
> >> -- 
> >> 2.15.0.55.gc2ece9dc4de6


Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-07 Thread Chao Yu
On 2018/4/5 11:51, Jaegeuk Kim wrote:
> On 04/04, Chao Yu wrote:
>> This patch enlarges block plug coverage in __issue_discard_cmd, in
>> order to collect more pending bios before issuing them, to avoid
>> being disturbed by previous discard I/O in IO aware discard mode.
> 
> Hmm, then we need to wait for huge discard IO for over 10 secs, which

We found that total discard latency depends on the number of discards we
issued in the last round rather than on the range or length the discards
cover. IMO, if we don't change the .max_requests value, we will not suffer
longer latency.

> will affect following read/write IOs accordingly. In order to avoid that,
> we actually need to limit the discard size.

If you are worried about I/O interference between discard and rw, I suggest
decreasing the .max_requests value.

Thanks,

> 
> Thanks,
> 
>>
>> Signed-off-by: Chao Yu 
>> ---
>>  fs/f2fs/segment.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>> index 8f0b5ba46315..4287e208c040 100644
>> --- a/fs/f2fs/segment.c
>> +++ b/fs/f2fs/segment.c
>> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>>  		pend_list = &dcc->pend_list[i];
>>  
>>  		mutex_lock(&dcc->cmd_lock);
>> +
>> +		blk_start_plug(&plug);
>> +
>>  		if (list_empty(pend_list))
>>  			goto next;
>>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
>> -		blk_start_plug(&plug);
>>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
>>  			f2fs_bug_on(sbi, dc->state != D_PREP);
>>  
>> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>>  			if (++iter >= dpolicy->max_requests)
>>  				break;
>>  		}
>> -		blk_finish_plug(&plug);
>>  next:
>> +		blk_finish_plug(&plug);
>> +
>>  		mutex_unlock(&dcc->cmd_lock);
>>  
>>  		if (iter >= dpolicy->max_requests)
>> -- 
>> 2.15.0.55.gc2ece9dc4de6


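For reference, the .max_requests knob under discussion sits in f2fs's discard
policy, shown roughly as it appears in 4.16-era fs/f2fs/f2fs.h (paraphrased;
fields and the default may differ across versions). It caps how many discard
commands one __issue_discard_cmd() round may submit:

#define DEF_MAX_DISCARD_REQUEST	8	/* issue 8 discards per round */

struct discard_policy {
	int type;			/* type of discard */
	unsigned int min_interval;	/* wait if candidates exist */
	unsigned int max_interval;	/* wait if no candidates exist */
	unsigned int max_requests;	/* # of discards issued per round */
	unsigned int io_aware_gran;	/* min granularity that ignores is_idle() */
	bool io_aware;			/* issue discard in idle time only */
	bool sync;			/* submit discard with REQ_SYNC */
	unsigned int granularity;	/* discard granularity */
};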

Re: [PATCH] f2fs: enlarge block plug coverage

2018-04-04 Thread Jaegeuk Kim
On 04/04, Chao Yu wrote:
> This patch enlarges block plug coverage in __issue_discard_cmd, in
> order to collect more pending bios before issuing them, to avoid
> being disturbed by previous discard I/O in IO aware discard mode.

Hmm, then we need to wait for huge discard IO for over 10 secs, which
will affect following read/write IOs accordingly. In order to avoid that,
we actually need to limit the discard size.

Thanks,

> 
> Signed-off-by: Chao Yu 
> ---
>  fs/f2fs/segment.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index 8f0b5ba46315..4287e208c040 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>  		pend_list = &dcc->pend_list[i];
>  
>  		mutex_lock(&dcc->cmd_lock);
> +
> +		blk_start_plug(&plug);
> +
>  		if (list_empty(pend_list))
>  			goto next;
>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
> -		blk_start_plug(&plug);
>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
>  			f2fs_bug_on(sbi, dc->state != D_PREP);
>  
> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>  			if (++iter >= dpolicy->max_requests)
>  				break;
>  		}
> -		blk_finish_plug(&plug);
>  next:
> +		blk_finish_plug(&plug);
> +
>  		mutex_unlock(&dcc->cmd_lock);
>  
>  		if (iter >= dpolicy->max_requests)
> -- 
> 2.15.0.55.gc2ece9dc4de6

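For readers following along, this is the plugging pattern the patch widens; a
minimal sketch, where submit_batch() is a hypothetical helper for
illustration, not f2fs code:

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * While a plug is active, submitted bios queue up on a per-task list and
 * only reach the device when the task unplugs (or schedules), so they can
 * be merged and issued as one burst instead of trickling out one by one.
 */
static void submit_batch(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);		/* start batching on this task */
	for (i = 0; i < nr; i++)
		submit_bio(bios[i]);	/* deferred, not yet at the device */
	blk_finish_plug(&plug);		/* flush the whole batch downward */
}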
