Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-24 Thread Paolo Valente


> On 24 Feb 2018, at 13:05, Ming Lei wrote:
> 
> On Sat, Feb 24, 2018 at 08:54:31AM +0100, Paolo Valente wrote:
>> 
>> 
>>> On 23 Feb 2018, at 17:17, Ming Lei wrote:
>>> 
>>> Hi Paolo,
>>> 
>>> On Fri, Feb 23, 2018 at 04:41:36PM +0100, Paolo Valente wrote:
 
 
> On 23 Feb 2018, at 16:07, Ming Lei wrote:
> 
> Hi Paolo,
> 
> On Wed, Feb 07, 2018 at 10:19:20PM +0100, Paolo Valente wrote:
>> Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
>> RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
>> be re-inserted into the active I/O scheduler for that device. As a
> 
> No, this behaviour isn't related to commit a6a252e64914, and
> it has been there since blk_mq_requeue_request() was introduced.
> 
 
 Hi Ming,
 actually, we wrote the above statement after simply following the call
 chain that led to the failure.  In this respect, the change in commit
 a6a252e64914:
 
 static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
+				       bool has_sched,
 				       struct request *rq)
 {
-	if (rq->tag == -1) {
+	/* dispatch flush rq directly */
+	if (rq->rq_flags & RQF_FLUSH_SEQ) {
+		spin_lock(&hctx->lock);
+		list_add(&rq->queuelist, &hctx->dispatch);
+		spin_unlock(&hctx->lock);
+		return true;
+	}
+
+	if (has_sched) {
 		rq->rq_flags |= RQF_SORTED;
-		return false;
+		WARN_ON(rq->tag != -1);
 	}
 
-	/*
-	 * If we already have a real request tag, send directly to
-	 * the dispatch list.
-	 */
-	spin_lock(&hctx->lock);
-	list_add(&rq->queuelist, &hctx->dispatch);
-	spin_unlock(&hctx->lock);
-	return true;
+	return false;
 }
 
 makes blk_mq_sched_bypass_insert return false for all non-flush
 requests.  From that follows the anomaly described in our commit, for
 bfq and for any stateful scheduler that waits for the completion of
 requests that have passed through it.  I elaborate a bit more on this
 in my replies to your next points below.
>>> 
>>> Before a6a252e64914, blk_mq_sched_bypass_insert() read as follows:
>>> 
>>> 	if (rq->tag == -1) {
>>> 		rq->rq_flags |= RQF_SORTED;
>>> 		return false;
>>> 	}
>>> 
>>> 	/*
>>> 	 * If we already have a real request tag, send directly to
>>> 	 * the dispatch list.
>>> 	 */
>>> 	spin_lock(&hctx->lock);
>>> 	list_add(&rq->queuelist, &hctx->dispatch);
>>> 	spin_unlock(&hctx->lock);
>>> 	return true;
>>> 
>>> This function still returns false for all non-flush requests, so nothing
>>> changes wrt. this kind of handling.
>>> 
>> 
>> Yep, Ming.  I don't have the expertise to tell you why, but the failure
>> in the USB case was caused by an rq that is not a flush, but for which
>> rq->tag != -1.  So, the previous version of blk_mq_sched_bypass_insert
>> returned true, and there was no failure, while after commit
>> a6a252e64914 the function returns false and the failure occurs if bfq
>> does not exploit the requeue hook.
>> 
>> You have actually shown it yourself, several months ago, through the
>> simple code instrumentation you made and used to show that bfq was
>> stuck.  And I guess it can still be reproduced very easily, unless
>> something else has changed in blk-mq.
>> 
>> BTW, if you can shed light on this fact, it would be a great
>> learning opportunity for me.
> 
> The difference should be due to commit 923218f6166a84 ("blk-mq: don't
> allocate driver tag upfront for flush rq"), which releases the driver
> tag before requeuing the request; but that is definitely the correct
> thing to do.
> 

Ok.  We just did a bisection, guided by the changes to the function
blk_mq_sched_bypass_insert, as that is the function whose changed
return value led to the hang.  And commit a6a252e64914 apparently is
the first commit after which the hang occurs.

Anyway, what matters is just that, from some commit on, a requeued
request that before that commit was not re-inserted into the scheduler
started to be re-inserted.  And this was the trigger of the failure.
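
To make the trigger concrete, here is a toy model of the resulting
unbalance -- plain userspace C with invented names, not kernel code: a
scheduler that counts dispatches and completions of a request sees two
dispatches but only one completion once the requeued request is
re-inserted and re-dispatched.

/*
 * Toy model of the hang (invented names, not kernel code).  A stateful
 * scheduler counts dispatches of a request and waits for a matching
 * number of completions before serving other queues.
 */
#include <stdio.h>
#include <stdbool.h>

struct toy_rq { int tag; bool in_scheduler; };

static int rq_dispatched;	/* dispatches seen for this rq */
static int rq_completed;	/* completions seen for this rq */

static void sched_insert(struct toy_rq *rq)   { rq->in_scheduler = true; }
static void sched_dispatch(struct toy_rq *rq) { rq->in_scheduler = false; rq_dispatched++; }
static void sched_finish(struct toy_rq *rq)   { (void)rq; rq_completed++; }

int main(void)
{
	struct toy_rq rq = { .tag = -1 };

	sched_insert(&rq);
	sched_dispatch(&rq);	/* the driver cannot service it yet */

	/* Requeue: the request goes back through the scheduler instead
	 * of hctx->dispatch... */
	sched_insert(&rq);
	sched_dispatch(&rq);
	sched_finish(&rq);	/* ...but it completes only once */

	if (rq_dispatched != rq_completed)
		printf("stuck: %d dispatches vs. %d completions\n",
		       rq_dispatched, rq_completed);
	return 0;
}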

>> 
 
 I don't mean that this change is an error; it simply puts a stateful
 scheduler into an inconsistent state, unless the scheduler properly
 handles the requeue that precedes the re-insertion into the
 scheduler.
 
 If this clarifies the situation, but some statement in the commit is
 still misleading, just help me understand better, and I'll be glad to
 rectify it somehow, if possible.
 
 
> And you can see blk_mq_requeue_request() is called by lots of drivers;
> in particular, it is often used in error handlers (see SCSI for an
> example).

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-24 Thread Ming Lei
On Sat, Feb 24, 2018 at 08:54:31AM +0100, Paolo Valente wrote:
> 
> 
> > On 23 Feb 2018, at 17:17, Ming Lei wrote:
> > 
> > Hi Paolo,
> > 
> > On Fri, Feb 23, 2018 at 04:41:36PM +0100, Paolo Valente wrote:
> >> 
> >> 
> >>> On 23 Feb 2018, at 16:07, Ming Lei wrote:
> >>> 
> >>> Hi Paolo,
> >>> 
> >>> On Wed, Feb 07, 2018 at 10:19:20PM +0100, Paolo Valente wrote:
>  Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
>  RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
>  be re-inserted into the active I/O scheduler for that device. As a
> >>> 
> >>> No, this behaviour isn't related to commit a6a252e64914, and
> >>> it has been there since blk_mq_requeue_request() was introduced.
> >>> 
> >> 
> >> Hi Ming,
> >> actually, we wrote the above statement after simply following the call
> >> chain that led to the failure.  In this respect, the change in commit
> >> a6a252e64914:
> >> 
> >>  static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
> >> +				       bool has_sched,
> >>  				       struct request *rq)
> >>  {
> >> -	if (rq->tag == -1) {
> >> +	/* dispatch flush rq directly */
> >> +	if (rq->rq_flags & RQF_FLUSH_SEQ) {
> >> +		spin_lock(&hctx->lock);
> >> +		list_add(&rq->queuelist, &hctx->dispatch);
> >> +		spin_unlock(&hctx->lock);
> >> +		return true;
> >> +	}
> >> +
> >> +	if (has_sched) {
> >>  		rq->rq_flags |= RQF_SORTED;
> >> -		return false;
> >> +		WARN_ON(rq->tag != -1);
> >>  	}
> >> 
> >> -	/*
> >> -	 * If we already have a real request tag, send directly to
> >> -	 * the dispatch list.
> >> -	 */
> >> -	spin_lock(&hctx->lock);
> >> -	list_add(&rq->queuelist, &hctx->dispatch);
> >> -	spin_unlock(&hctx->lock);
> >> -	return true;
> >> +	return false;
> >>  }
> >> 
> >> makes blk_mq_sched_bypass_insert return false for all non-flush
> >> requests.  From that follows the anomaly described in our commit, for
> >> bfq and for any stateful scheduler that waits for the completion of
> >> requests that have passed through it.  I elaborate a bit more on this
> >> in my replies to your next points below.
> > 
> > Before a6a252e64914, blk_mq_sched_bypass_insert() read as follows:
> > 
> > 	if (rq->tag == -1) {
> > 		rq->rq_flags |= RQF_SORTED;
> > 		return false;
> > 	}
> > 
> > 	/*
> > 	 * If we already have a real request tag, send directly to
> > 	 * the dispatch list.
> > 	 */
> > 	spin_lock(&hctx->lock);
> > 	list_add(&rq->queuelist, &hctx->dispatch);
> > 	spin_unlock(&hctx->lock);
> > 	return true;
> > 
> > This function still returns false for all non-flush requests, so nothing
> > changes wrt. this kind of handling.
> > 
> 
> Yep, Ming.  I don't have the expertise to tell you why, but the failure
> in the USB case was caused by an rq that is not a flush, but for which
> rq->tag != -1.  So, the previous version of blk_mq_sched_bypass_insert
> returned true, and there was no failure, while after commit
> a6a252e64914 the function returns false and the failure occurs if bfq
> does not exploit the requeue hook.
> 
> You have actually shown it yourself, several months ago, through the
> simple code instrumentation you made and used to show that bfq was
> stuck.  And I guess it can still be reproduced very easily, unless
> something else has changed in blk-mq.
> 
> BTW, if you can shed light on this fact, it would be a great
> learning opportunity for me.

The difference should be due to commit 923218f6166a84 ("blk-mq: don't
allocate driver tag upfront for flush rq"), which releases the driver
tag before requeuing the request; but that is definitely the correct
thing to do.
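
For readers following the tag detail: the pre-a6a252e64914 bypass
decision tested rq->tag == -1, so whether the driver tag is released
before the requeue decides whether the request bypasses the scheduler
or is re-inserted into it.  A toy model (plain C, invented names, not
the actual blk-mq code):

/*
 * Toy model of how holding vs. releasing the driver tag across a
 * requeue changes the old bypass decision, which tested rq->tag == -1.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_rq { int tag; };		/* -1 means: no driver tag held */

/* Pre-a6a252e64914 logic, simplified: bypass the scheduler whenever
 * the request still owns a driver tag. */
static bool old_bypass_insert(const struct toy_rq *rq)
{
	return rq->tag != -1;	/* true -> straight to hctx->dispatch */
}

static void requeue(struct toy_rq *rq, bool release_tag_first)
{
	if (release_tag_first)
		rq->tag = -1;	/* models "release driver tag before requeue" */
	printf("requeue: bypass scheduler? %s\n",
	       old_bypass_insert(rq) ? "yes" : "no");
}

int main(void)
{
	struct toy_rq rq = { .tag = 7 };	/* driver tag already held */

	requeue(&rq, false);	/* tag kept: bypass, no re-insertion */
	rq.tag = 7;
	requeue(&rq, true);	/* tag released first: re-inserted */
	return 0;
}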

> 
> >> 
> >> I don't mean that this change is an error; it simply puts a stateful
> >> scheduler into an inconsistent state, unless the scheduler properly
> >> handles the requeue that precedes the re-insertion into the
> >> scheduler.
> >> 
> >> If this clarifies the situation, but some statement in the commit is
> >> still misleading, just help me understand better, and I'll be glad
> >> to rectify it somehow, if possible.
> >> 
> >> 
> >>> And you can see blk_mq_requeue_request() is called by lots of drivers;
> >>> in particular, it is often used in error handlers (see SCSI for an example).
> >>> 
>  consequence, I/O schedulers may get the same request inserted again,
>  even several times, without a finish_request invoked on that request
>  before each re-insertion.
>  
> >> 
> >> ...
> >> 
>  @@ -5426,7 +5482,8 @@ static struct elevator_type iosched_bfq_mq = {
>  	.ops.mq = {
>  		.limit_depth		= bfq_limit_depth,
>  		.prepare_request	= bfq_prepare_request,
>  -		.finish_request		= bfq_finish_request,

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-23 Thread Paolo Valente


> On 23 Feb 2018, at 17:17, Ming Lei wrote:
> 
> Hi Paolo,
> 
> On Fri, Feb 23, 2018 at 04:41:36PM +0100, Paolo Valente wrote:
>> 
>> 
>>> On 23 Feb 2018, at 16:07, Ming Lei wrote:
>>> 
>>> Hi Paolo,
>>> 
>>> On Wed, Feb 07, 2018 at 10:19:20PM +0100, Paolo Valente wrote:
 Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
 RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
 be re-inserted into the active I/O scheduler for that device. As a
>>> 
>>> No, this behaviour isn't related to commit a6a252e64914, and
>>> it has been there since blk_mq_requeue_request() was introduced.
>>> 
>> 
>> Hi Ming,
>> actually, we wrote the above statement after simply following the call
>> chain that led to the failure.  In this respect, the change in commit
>> a6a252e64914:
>> 
>>  static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
>> +				       bool has_sched,
>>  				       struct request *rq)
>>  {
>> -	if (rq->tag == -1) {
>> +	/* dispatch flush rq directly */
>> +	if (rq->rq_flags & RQF_FLUSH_SEQ) {
>> +		spin_lock(&hctx->lock);
>> +		list_add(&rq->queuelist, &hctx->dispatch);
>> +		spin_unlock(&hctx->lock);
>> +		return true;
>> +	}
>> +
>> +	if (has_sched) {
>>  		rq->rq_flags |= RQF_SORTED;
>> -		return false;
>> +		WARN_ON(rq->tag != -1);
>>  	}
>> 
>> -	/*
>> -	 * If we already have a real request tag, send directly to
>> -	 * the dispatch list.
>> -	 */
>> -	spin_lock(&hctx->lock);
>> -	list_add(&rq->queuelist, &hctx->dispatch);
>> -	spin_unlock(&hctx->lock);
>> -	return true;
>> +	return false;
>>  }
>> 
>> makes blk_mq_sched_bypass_insert return false for all non-flush
>> requests.  From that follows the anomaly described in our commit, for
>> bfq and for any stateful scheduler that waits for the completion of
>> requests that have passed through it.  I elaborate a bit more on this
>> in my replies to your next points below.
> 
> Before a6a252e64914, blk_mq_sched_bypass_insert() read as follows:
> 
> 	if (rq->tag == -1) {
> 		rq->rq_flags |= RQF_SORTED;
> 		return false;
> 	}
> 
> 	/*
> 	 * If we already have a real request tag, send directly to
> 	 * the dispatch list.
> 	 */
> 	spin_lock(&hctx->lock);
> 	list_add(&rq->queuelist, &hctx->dispatch);
> 	spin_unlock(&hctx->lock);
> 	return true;
> 
> This function still returns false for all non-flush requests, so nothing
> changes wrt. this kind of handling.
> 

Yep, Ming.  I don't have the expertise to tell you why, but the failure
in the USB case was caused by an rq that is not a flush, but for which
rq->tag != -1.  So, the previous version of blk_mq_sched_bypass_insert
returned true, and there was no failure, while after commit
a6a252e64914 the function returns false and the failure occurs if bfq
does not exploit the requeue hook.

You have actually shown it yourself, several months ago, through the
simple code instrumentation you made and used to show that bfq was
stuck.  And I guess it can still be reproduced very easily, unless
something else has changed in blk-mq.

BTW, if you can shed light on this fact, it would be a great
learning opportunity for me.

>> 
>> I don't mean that this change is an error; it simply puts a stateful
>> scheduler into an inconsistent state, unless the scheduler properly
>> handles the requeue that precedes the re-insertion into the
>> scheduler.
>> 
>> If this clarifies the situation, but some statement in the commit is
>> still misleading, just help me understand better, and I'll be glad
>> to rectify it somehow, if possible.
>> 
>> 
>>> And you can see blk_mq_requeue_request() is called by lots of drivers;
>>> in particular, it is often used in error handlers (see SCSI for an example).
>>> 
 consequence, I/O schedulers may get the same request inserted again,
 even several times, without a finish_request invoked on that request
 before each re-insertion.
 
>> 
>> ...
>> 
@@ -5426,7 +5482,8 @@ static struct elevator_type iosched_bfq_mq = {
 	.ops.mq = {
 		.limit_depth		= bfq_limit_depth,
 		.prepare_request	= bfq_prepare_request,
-		.finish_request		= bfq_finish_request,
+		.requeue_request	= bfq_finish_requeue_request,
+		.finish_request		= bfq_finish_requeue_request,
 		.exit_icq		= bfq_exit_icq,
 		.insert_requests	= bfq_insert_requests,
 		.dispatch_request	= bfq_dispatch_request,
>>> 
>>> This way may not be correct since blk_mq_sched_requeue_request() can be
>>> called for a request which won't enter the I/O scheduler.
>>> 
>> 
>> Exactly, there are two cases: requeues that lead to subsequent
>> re-insertions, and requeues that don't.

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-23 Thread Ming Lei
Hi Paolo,

On Fri, Feb 23, 2018 at 04:41:36PM +0100, Paolo Valente wrote:
> 
> 
> > On 23 Feb 2018, at 16:07, Ming Lei wrote:
> > 
> > Hi Paolo,
> > 
> > On Wed, Feb 07, 2018 at 10:19:20PM +0100, Paolo Valente wrote:
> >> Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
> >> RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
> >> be re-inserted into the active I/O scheduler for that device. As a
> > 
> > No, this behaviour isn't related to commit a6a252e64914, and
> > it has been there since blk_mq_requeue_request() was introduced.
> > 
> 
> Hi Ming,
> actually, we wrote the above statement after simply following the call
> chain that led to the failure.  In this respect, the change in commit
> a6a252e64914:
> 
>  static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
> +				       bool has_sched,
>  				       struct request *rq)
>  {
> -	if (rq->tag == -1) {
> +	/* dispatch flush rq directly */
> +	if (rq->rq_flags & RQF_FLUSH_SEQ) {
> +		spin_lock(&hctx->lock);
> +		list_add(&rq->queuelist, &hctx->dispatch);
> +		spin_unlock(&hctx->lock);
> +		return true;
> +	}
> +
> +	if (has_sched) {
>  		rq->rq_flags |= RQF_SORTED;
> -		return false;
> +		WARN_ON(rq->tag != -1);
>  	}
> 
> -	/*
> -	 * If we already have a real request tag, send directly to
> -	 * the dispatch list.
> -	 */
> -	spin_lock(&hctx->lock);
> -	list_add(&rq->queuelist, &hctx->dispatch);
> -	spin_unlock(&hctx->lock);
> -	return true;
> +	return false;
>  }
> 
> makes blk_mq_sched_bypass_insert return false for all non-flush
> requests.  From that follows the anomaly described in our commit, for
> bfq and for any stateful scheduler that waits for the completion of
> requests that have passed through it.  I elaborate a bit more on this
> in my replies to your next points below.

Before a6a252e64914, blk_mq_sched_bypass_insert() read as follows:

	if (rq->tag == -1) {
		rq->rq_flags |= RQF_SORTED;
		return false;
	}

	/*
	 * If we already have a real request tag, send directly to
	 * the dispatch list.
	 */
	spin_lock(&hctx->lock);
	list_add(&rq->queuelist, &hctx->dispatch);
	spin_unlock(&hctx->lock);
	return true;

This function still returns false for all non-flush requests, so nothing
changes wrt. this kind of handling.

> 
> I don't mean that this change is an error; it simply puts a stateful
> scheduler into an inconsistent state, unless the scheduler properly
> handles the requeue that precedes the re-insertion into the
> scheduler.
> 
> If this clarifies the situation, but some statement in the commit is
> still misleading, just help me understand better, and I'll be glad
> to rectify it somehow, if possible.
> 
> 
> > And you can see blk_mq_requeue_request() is called by lots of drivers;
> > in particular, it is often used in error handlers (see SCSI for an example).
> > 
> >> consequence, I/O schedulers may get the same request inserted again,
> >> even several times, without a finish_request invoked on that request
> >> before each re-insertion.
> >> 
> 
> ...
> 
> >> @@ -5426,7 +5482,8 @@ static struct elevator_type iosched_bfq_mq = {
> >> 	.ops.mq = {
> >> 		.limit_depth		= bfq_limit_depth,
> >> 		.prepare_request	= bfq_prepare_request,
> >> -		.finish_request		= bfq_finish_request,
> >> +		.requeue_request	= bfq_finish_requeue_request,
> >> +		.finish_request		= bfq_finish_requeue_request,
> >> 		.exit_icq		= bfq_exit_icq,
> >> 		.insert_requests	= bfq_insert_requests,
> >> 		.dispatch_request	= bfq_dispatch_request,
> > 
> > This way may not be correct since blk_mq_sched_requeue_request() can be
> > called for a request which won't enter the I/O scheduler.
> > 
> 
> Exactly, there are two cases: requeues that lead to subsequent
> re-insertions, and requeues that don't.  The function
> bfq_finish_requeue_request handles both, and both need to be handled,
> to inform bfq that it no longer has to wait for the completion of rq.
> 
> One special case is when bfq_finish_requeue_request gets invoked even
> if rq has nothing to do with any scheduler.  In that case,
> bfq_finish_requeue_request exits immediately.
> 
> 
> > __blk_mq_requeue_request() is called for two cases:
> > 
> > - one is that the requeued request is added to hctx->dispatch, such
> > as blk_mq_dispatch_rq_list()
> 
> yes
> 
> > - another case is that the request is requeued to io scheduler, such as
> > blk_mq_requeue_request().
> > 
> 
> yes
> 
> > For the 1st case, blk_mq_sched_requeue_request() shouldn't be called
> > since it has nothing to do with the scheduler,
> 
> No, if that rq has been inserted and then extracted from the scheduler
> through a dispatch_request, then it has.

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-23 Thread Paolo Valente


> On 23 Feb 2018, at 16:07, Ming Lei wrote:
> 
> Hi Paolo,
> 
> On Wed, Feb 07, 2018 at 10:19:20PM +0100, Paolo Valente wrote:
>> Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
>> RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
>> be re-inserted into the active I/O scheduler for that device. As a
> 
> No, this behaviour isn't related to commit a6a252e64914, and
> it has been there since blk_mq_requeue_request() was introduced.
> 

Hi Ming,
actually, we wrote the above statement after simply following the call
chain that led to the failure.  In this respect, the change in commit
a6a252e64914:

 static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
+				       bool has_sched,
 				       struct request *rq)
 {
-	if (rq->tag == -1) {
+	/* dispatch flush rq directly */
+	if (rq->rq_flags & RQF_FLUSH_SEQ) {
+		spin_lock(&hctx->lock);
+		list_add(&rq->queuelist, &hctx->dispatch);
+		spin_unlock(&hctx->lock);
+		return true;
+	}
+
+	if (has_sched) {
 		rq->rq_flags |= RQF_SORTED;
-		return false;
+		WARN_ON(rq->tag != -1);
 	}
 
-	/*
-	 * If we already have a real request tag, send directly to
-	 * the dispatch list.
-	 */
-	spin_lock(&hctx->lock);
-	list_add(&rq->queuelist, &hctx->dispatch);
-	spin_unlock(&hctx->lock);
-	return true;
+	return false;
 }

makes blk_mq_sched_bypass_insert return false for all non-flush
requests.  From that follows the anomaly described in our commit, for
bfq and for any stateful scheduler that waits for the completion of
requests that have passed through it.  I elaborate a bit more on this
in my replies to your next points below.

I don't mean that this change is an error; it simply puts a stateful
scheduler into an inconsistent state, unless the scheduler properly
handles the requeue that precedes the re-insertion into the
scheduler.

If this clarifies the situation, but some statement in the commit is
still misleading, just help me understand better, and I'll be glad to
rectify it somehow, if possible.


> And you can see blk_mq_requeue_request() is called by lots of drivers;
> in particular, it is often used in error handlers (see SCSI for an example).
> 
>> consequence, I/O schedulers may get the same request inserted again,
>> even several times, without a finish_request invoked on that request
>> before each re-insertion.
>> 

...

>> @@ -5426,7 +5482,8 @@ static struct elevator_type iosched_bfq_mq = {
>> 	.ops.mq = {
>> 		.limit_depth		= bfq_limit_depth,
>> 		.prepare_request	= bfq_prepare_request,
>> -		.finish_request		= bfq_finish_request,
>> +		.requeue_request	= bfq_finish_requeue_request,
>> +		.finish_request		= bfq_finish_requeue_request,
>> 		.exit_icq		= bfq_exit_icq,
>> 		.insert_requests	= bfq_insert_requests,
>> 		.dispatch_request	= bfq_dispatch_request,
> 
> This way may not be correct since blk_mq_sched_requeue_request() can be
> called for a request which won't enter the I/O scheduler.
> 

Exactly, there are two cases: requeues that lead to subsequent
re-insertions, and requeues that don't.  The function
bfq_finish_requeue_request handles both, and both need to be handled,
to inform bfq that it no longer has to wait for the completion of rq.

One special case is when bfq_finish_requeue_request gets invoked even
if rq has nothing to do with any scheduler.  In that case,
bfq_finish_requeue_request exits immediately.
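
The contract described above can be sketched in a few lines of plain C
(invented names; this is not the actual BFQ code): one handler serves
as both finish and requeue hook, drops per-request scheduler state in
both cases, and returns immediately for a request the scheduler never
saw.

#include <stdio.h>
#include <stddef.h>

struct toy_rq {
	void *elv_priv;		/* per-request scheduler state, or NULL */
};

struct toy_sched {
	int rqs_waited_for;	/* requests whose completion we await */
};

/* One handler wired as both .finish_request and .requeue_request. */
static void toy_finish_or_requeue(struct toy_sched *sd, struct toy_rq *rq)
{
	if (!rq->elv_priv)	/* rq never entered this scheduler */
		return;

	sd->rqs_waited_for--;	/* stop waiting: completion or requeue */
	rq->elv_priv = NULL;	/* drop per-request state */
}

int main(void)
{
	struct toy_sched sd = { .rqs_waited_for = 1 };
	struct toy_rq seen = { .elv_priv = &sd };
	struct toy_rq unseen = { .elv_priv = NULL };

	toy_finish_or_requeue(&sd, &unseen);	/* no-op: exits immediately */
	toy_finish_or_requeue(&sd, &seen);	/* cleans up scheduler state */
	printf("still waiting for %d request(s)\n", sd.rqs_waited_for);
	return 0;
}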


> __blk_mq_requeue_request() is called for two cases:
> 
>   - one is that the requeued request is added to hctx->dispatch, such
>   as blk_mq_dispatch_rq_list()

yes

>   - another case is that the request is requeued to io scheduler, such as
>   blk_mq_requeue_request().
> 

yes

> For the 1st case, blk_mq_sched_requeue_request() shouldn't be called
> since it has nothing to do with the scheduler,

No, if that rq has been inserted and then extracted from the scheduler
through a dispatch_request, then it has.  The scheduler is stateful,
and keeps state for rq, because it must do so, until a completion or a
requeue arrives.  In particular, bfq may decide that no other
bfq_queues must be served before rq has been completed, because this
is necessary to preserve its target service guarantees.  If bfq is not
informed, either through its completion or its requeue hook, then it
will wait forever.
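
The wait described here can also be made concrete with a toy sketch
(plain C, invented names, not BFQ internals): after dispatching rq the
scheduler idles, refusing other queues, until it is told about a
completion or a requeue of rq; if neither notification arrives, it
idles forever.

#include <stdio.h>
#include <stddef.h>

struct toy_sched {
	const void *waiting_on;	/* rq we are idling for, or NULL */
};

/* Dispatch rq and, to preserve service guarantees, refuse to serve
 * other queues until rq is completed -- or requeued. */
static void toy_dispatch(struct toy_sched *sd, const void *rq)
{
	sd->waiting_on = rq;
}

static const void *toy_pick_next(struct toy_sched *sd, const void *other_rq)
{
	return sd->waiting_on ? NULL : other_rq;	/* NULL: keep idling */
}

/* Both the completion path and the requeue path must run this,
 * otherwise toy_pick_next() returns NULL forever. */
static void toy_completed_or_requeued(struct toy_sched *sd, const void *rq)
{
	if (sd->waiting_on == rq)
		sd->waiting_on = NULL;
}

int main(void)
{
	struct toy_sched sd = { NULL };
	int rq = 1, other = 2;

	toy_dispatch(&sd, &rq);
	printf("before notification: %p\n", (void *)toy_pick_next(&sd, &other));
	toy_completed_or_requeued(&sd, &rq);	/* a requeue counts too */
	printf("after notification:  %p\n", (void *)toy_pick_next(&sd, &other));
	return 0;
}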

> it seems we only need to do that
> for the 2nd case.
> 

> So it looks like we need the following patch:
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 23de7fd8099a..a216f3c3c3ce 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -712,7 +714,6 @@ static void __blk_mq_requeue_request(struct request *rq)
> 

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-23 Thread Ming Lei
Hi Paolo,

On Wed, Feb 07, 2018 at 10:19:20PM +0100, Paolo Valente wrote:
> Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
> RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
> be re-inserted into the active I/O scheduler for that device. As a

No, this behaviour isn't related to commit a6a252e64914, and
it has been there since blk_mq_requeue_request() was introduced.

And you can see blk_mq_requeue_request() is called by lots of drivers;
in particular, it is often used in error handlers (see SCSI for an example).

> consequence, I/O schedulers may get the same request inserted again,
> even several times, without a finish_request invoked on that request
> before each re-insertion.
> 
> This fact is the cause of the failure reported in [1]. For an I/O
> scheduler, every re-insertion of the same re-prepared request is
> equivalent to the insertion of a new request. For schedulers like
> mq-deadline or kyber, this fact causes no harm. In contrast, it
> confuses a stateful scheduler like BFQ, which keeps state for an I/O
> request, until the finish_request hook is invoked on the request. In
> particular, BFQ may get stuck, waiting forever for the number of
> request dispatches, of the same request, to be balanced by an equal
> number of request completions (while there will be one completion for
> that request). In this state, BFQ may refuse to serve I/O requests
> from other bfq_queues. The hang reported in [1] then follows.
> 
> However, the above re-prepared requests undergo a requeue, thus the
> requeue_request hook of the active elevator is invoked for these
> requests, if set. This commit then addresses the above issue by
> properly implementing the hook requeue_request in BFQ.
> 
> [1] https://marc.info/?l=linux-block&m=15127608676
> 
> Reported-by: Ivan Kozik 
> Reported-by: Alban Browaeys 
> Tested-by: Mike Galbraith 
> Signed-off-by: Paolo Valente 
> Signed-off-by: Serena Ziviani 
> ---
> V2: contains fix to bug reported in [2]
> V3: implements the improvement suggested in [3]
> 
> [2] https://lkml.org/lkml/2018/2/5/599
> [3] https://lkml.org/lkml/2018/2/7/532
> 
>  block/bfq-iosched.c | 107 
>  1 file changed, 82 insertions(+), 25 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 47e6ec7427c4..aeca22d91101 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -3823,24 +3823,26 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
> 		}
> 
> 		/*
> -		 * We exploit the bfq_finish_request hook to decrement
> -		 * rq_in_driver, but bfq_finish_request will not be
> -		 * invoked on this request. So, to avoid unbalance,
> -		 * just start this request, without incrementing
> -		 * rq_in_driver. As a negative consequence,
> -		 * rq_in_driver is deceptively lower than it should be
> -		 * while this request is in service. This may cause
> -		 * bfq_schedule_dispatch to be invoked uselessly.
> +		 * We exploit the bfq_finish_requeue_request hook to
> +		 * decrement rq_in_driver, but
> +		 * bfq_finish_requeue_request will not be invoked on
> +		 * this request. So, to avoid unbalance, just start
> +		 * this request, without incrementing rq_in_driver. As
> +		 * a negative consequence, rq_in_driver is deceptively
> +		 * lower than it should be while this request is in
> +		 * service. This may cause bfq_schedule_dispatch to be
> +		 * invoked uselessly.
> 		 *
> 		 * As for implementing an exact solution, the
> -		 * bfq_finish_request hook, if defined, is probably
> -		 * invoked also on this request. So, by exploiting
> -		 * this hook, we could 1) increment rq_in_driver here,
> -		 * and 2) decrement it in bfq_finish_request. Such a
> -		 * solution would let the value of the counter be
> -		 * always accurate, but it would entail using an extra
> -		 * interface function. This cost seems higher than the
> -		 * benefit, being the frequency of non-elevator-private
> +		 * bfq_finish_requeue_request hook, if defined, is
> +		 * probably invoked also on this request. So, by
> +		 * exploiting this hook, we could 1) increment
> +		 * rq_in_driver here, and 2) decrement it in
> +		 * bfq_finish_requeue_request. Such a solution would
> +		 * let the value of the counter be always accurate,
> +		 * but it would entail using an extra interface
> +		 * function. This cost seems higher than the benefit,
> +		 * being the frequency of non-elevator-private

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-11 Thread Paolo Valente


> On 10 Feb 2018, at 09:29, Oleksandr Natalenko wrote:
> 
> Hi.
> 
> On Friday, 9 February 2018 at 18:29:39 CET, Mike Galbraith wrote:
>> On Fri, 2018-02-09 at 14:21 +0100, Oleksandr Natalenko wrote:
>>> In addition to this, I think it would be worth considering CC'ing Greg
>>> to pull this fix into the 4.15 stable tree.
>> 
>> This isn't one he can cherry-pick, some munging required, in which case
>> he usually wants a properly tested backport.
>> 
>>  -Mike
> 
> Maybe this could be a good opportunity to push all the pending BFQ
> patches into the stable 4.15 branch?  Because IIUC BFQ in 4.15 is
> currently just unusable.
> 
> Paolo?
> 

Of course it's ok for me, and thanks, Oleksandr, for proposing this.  These
commits should apply cleanly on 4.15, and FWIW they have been tested, by me
and by BFQ users, on 4.15 too over these months.

Thanks,
Paolo

> ---
> 
> block, bfq: add requeue-request hook
> bfq-iosched: don't call bfqg_and_blkg_put for !CONFIG_BFQ_GROUP_IOSCHED
> block, bfq: release oom-queue ref to root group on exit
> block, bfq: put async queues for root bfq groups too
> block, bfq: limit sectors served with interactive weight raising
> block, bfq: limit tags for writes and async I/O
> block, bfq: increase threshold to deem I/O as random
> block, bfq: remove superfluous check in queue-merging setup
> block, bfq: let a queue be merged only shortly after starting I/O
> block, bfq: check low_latency flag in bfq_bfqq_save_state()
> block, bfq: add missing rq_pos_tree update on rq removal
> block, bfq: fix occurrences of request finish method's old name
> block, bfq: consider also past I/O in soft real-time detection
> block, bfq: remove batches of confusing ifdefs
> 
> 



Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-10 Thread Oleksandr Natalenko
Hi.

On Friday, 9 February 2018 at 18:29:39 CET, Mike Galbraith wrote:
> On Fri, 2018-02-09 at 14:21 +0100, Oleksandr Natalenko wrote:
> > In addition to this, I think it would be worth considering CC'ing Greg
> > to pull this fix into the 4.15 stable tree.
> 
> This isn't one he can cherry-pick, some munging required, in which case
> he usually wants a properly tested backport.
> 
>   -Mike

Maybe this could be a good opportunity to push all the pending BFQ
patches into the stable 4.15 branch?  Because IIUC BFQ in 4.15 is
currently just unusable.

Paolo?

---

block, bfq: add requeue-request hook
bfq-iosched: don't call bfqg_and_blkg_put for !CONFIG_BFQ_GROUP_IOSCHED
block, bfq: release oom-queue ref to root group on exit
block, bfq: put async queues for root bfq groups too
block, bfq: limit sectors served with interactive weight raising
block, bfq: limit tags for writes and async I/O
block, bfq: increase threshold to deem I/O as random
block, bfq: remove superfluous check in queue-merging setup
block, bfq: let a queue be merged only shortly after starting I/O
block, bfq: check low_latency flag in bfq_bfqq_save_state()
block, bfq: add missing rq_pos_tree update on rq removal
block, bfq: fix occurrences of request finish method's old name
block, bfq: consider also past I/O in soft real-time detection
block, bfq: remove batches of confusing ifdefs




Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-09 Thread Mike Galbraith
On Fri, 2018-02-09 at 14:21 +0100, Oleksandr Natalenko wrote:
> 
> In addition to this, I think it would be worth considering CC'ing Greg
> to pull this fix into the 4.15 stable tree.

This isn't one he can cherry-pick, some munging required, in which case
he usually wants a properly tested backport.

-Mike


Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-09 Thread Jens Axboe
On 2/9/18 6:21 AM, Oleksandr Natalenko wrote:
> Hi.
> 
> 08.02.2018 08:16, Paolo Valente wrote:
>>> On 7 Feb 2018, at 23:18, Jens Axboe wrote:
>>>
>>> On 2/7/18 2:19 PM, Paolo Valente wrote:
 Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
 RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
 be re-inserted into the active I/O scheduler for that device. As a
 consequence, I/O schedulers may get the same request inserted again,
 even several times, without a finish_request invoked on that request
 before each re-insertion.

 This fact is the cause of the failure reported in [1]. For an I/O
 scheduler, every re-insertion of the same re-prepared request is
 equivalent to the insertion of a new request. For schedulers like
 mq-deadline or kyber, this fact causes no harm. In contrast, it
 confuses a stateful scheduler like BFQ, which keeps state for an I/O
 request, until the finish_request hook is invoked on the request. In
 particular, BFQ may get stuck, waiting forever for the number of
 request dispatches, of the same request, to be balanced by an equal
 number of request completions (while there will be one completion for
 that request). In this state, BFQ may refuse to serve I/O requests
 from other bfq_queues. The hang reported in [1] then follows.

 However, the above re-prepared requests undergo a requeue, thus the
 requeue_request hook of the active elevator is invoked for these
 requests, if set. This commit then addresses the above issue by
 properly implementing the hook requeue_request in BFQ.
>>>
>>> Thanks, applied.
>>>
>>
>> Hi Jens,
>> I forgot to add
>> Tested-by: Oleksandr Natalenko 
>> in the patch.
>>
>> Is it still possible to add it?
>>
> 
> In addition to this, I think it would be worth considering CC'ing Greg
> to pull this fix into the 4.15 stable tree.

I can't add the tested-by anymore, but it's easy enough to target for
stable after-the-fact.


-- 
Jens Axboe



Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-09 Thread Oleksandr Natalenko

Hi.

08.02.2018 08:16, Paolo Valente wrote:
On 7 Feb 2018, at 23:18, Jens Axboe wrote:


On 2/7/18 2:19 PM, Paolo Valente wrote:
Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
be re-inserted into the active I/O scheduler for that device. As a
consequence, I/O schedulers may get the same request inserted again,
even several times, without a finish_request invoked on that request
before each re-insertion.

This fact is the cause of the failure reported in [1]. For an I/O
scheduler, every re-insertion of the same re-prepared request is
equivalent to the insertion of a new request. For schedulers like
mq-deadline or kyber, this fact causes no harm. In contrast, it
confuses a stateful scheduler like BFQ, which keeps state for an I/O
request, until the finish_request hook is invoked on the request. In
particular, BFQ may get stuck, waiting forever for the number of
request dispatches, of the same request, to be balanced by an equal
number of request completions (while there will be one completion for
that request). In this state, BFQ may refuse to serve I/O requests
from other bfq_queues. The hang reported in [1] then follows.

However, the above re-prepared requests undergo a requeue, thus the
requeue_request hook of the active elevator is invoked for these
requests, if set. This commit then addresses the above issue by
properly implementing the hook requeue_request in BFQ.


Thanks, applied.



Hi Jens,
I forgot to add
Tested-by: Oleksandr Natalenko 
in the patch.

Is it still possible to add it?



In addition to this, I think it would be worth considering CC'ing Greg
to pull this fix into the 4.15 stable tree.


Oleksandr


Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-07 Thread Paolo Valente


> On 7 Feb 2018, at 23:18, Jens Axboe wrote:
> 
> On 2/7/18 2:19 PM, Paolo Valente wrote:
>> Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
>> RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
>> be re-inserted into the active I/O scheduler for that device. As a
>> consequence, I/O schedulers may get the same request inserted again,
>> even several times, without a finish_request invoked on that request
>> before each re-insertion.
>> 
>> This fact is the cause of the failure reported in [1]. For an I/O
>> scheduler, every re-insertion of the same re-prepared request is
>> equivalent to the insertion of a new request. For schedulers like
>> mq-deadline or kyber, this fact causes no harm. In contrast, it
>> confuses a stateful scheduler like BFQ, which keeps state for an I/O
>> request, until the finish_request hook is invoked on the request. In
>> particular, BFQ may get stuck, waiting forever for the number of
>> request dispatches, of the same request, to be balanced by an equal
>> number of request completions (while there will be one completion for
>> that request). In this state, BFQ may refuse to serve I/O requests
>> from other bfq_queues. The hang reported in [1] then follows.
>> 
>> However, the above re-prepared requests undergo a requeue, thus the
>> requeue_request hook of the active elevator is invoked for these
>> requests, if set. This commit then addresses the above issue by
>> properly implementing the hook requeue_request in BFQ.
> 
> Thanks, applied.
> 

Hi Jens,
I forgot to add
Tested-by: Oleksandr Natalenko 
in the patch.

Is it still possible to add it?

Thanks,
Paolo

> -- 
> Jens Axboe



[PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-07 Thread Paolo Valente
Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
be re-inserted into the active I/O scheduler for that device. As a
consequence, I/O schedulers may get the same request inserted again,
even several times, without a finish_request invoked on that request
before each re-insertion.

This fact is the cause of the failure reported in [1]. For an I/O
scheduler, every re-insertion of the same re-prepared request is
equivalent to the insertion of a new request. For schedulers like
mq-deadline or kyber, this fact causes no harm. In contrast, it
confuses a stateful scheduler like BFQ, which keeps state for an I/O
request, until the finish_request hook is invoked on the request. In
particular, BFQ may get stuck, waiting forever for the number of
request dispatches, of the same request, to be balanced by an equal
number of request completions (while there will be one completion for
that request). In this state, BFQ may refuse to serve I/O requests
from other bfq_queues. The hang reported in [1] then follows.

However, the above re-prepared requests undergo a requeue, thus the
requeue_request hook of the active elevator is invoked for these
requests, if set. This commit then addresses the above issue by
properly implementing the hook requeue_request in BFQ.
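
In outline, the fix amounts to pointing both hooks at one cleanup
handler.  A toy sketch of that wiring (invented names -- toy_sched_ops
stands in for the real elevator_mq_ops -- not the actual BFQ code,
which is in the diff below):

#include <stdio.h>

struct toy_rq;

/* Toy ops table; the kernel's counterpart is elevator_mq_ops. */
struct toy_sched_ops {
	void (*finish_request)(struct toy_rq *rq);
	void (*requeue_request)(struct toy_rq *rq);
};

/* Both events funnel into the same cleanup, as the patch does for BFQ
 * with bfq_finish_requeue_request(). */
static void toy_finish_or_requeue(struct toy_rq *rq)
{
	(void)rq;
	printf("undo per-request scheduler state\n");
}

static const struct toy_sched_ops toy_ops = {
	.finish_request  = toy_finish_or_requeue,
	.requeue_request = toy_finish_or_requeue,
};

int main(void)
{
	toy_ops.requeue_request(NULL);	/* a requeue must clean up too */
	toy_ops.finish_request(NULL);	/* so must a completion */
	return 0;
}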

[1] https://marc.info/?l=linux-block&m=15127608676

Reported-by: Ivan Kozik 
Reported-by: Alban Browaeys 
Tested-by: Mike Galbraith 
Signed-off-by: Paolo Valente 
Signed-off-by: Serena Ziviani 
---
V2: contains fix to bug reported in [2]
V3: implements the improvement suggested in [3]

[2] https://lkml.org/lkml/2018/2/5/599
[3] https://lkml.org/lkml/2018/2/7/532

 block/bfq-iosched.c | 107 
 1 file changed, 82 insertions(+), 25 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 47e6ec7427c4..aeca22d91101 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -3823,24 +3823,26 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 		}
 
 		/*
-		 * We exploit the bfq_finish_request hook to decrement
-		 * rq_in_driver, but bfq_finish_request will not be
-		 * invoked on this request. So, to avoid unbalance,
-		 * just start this request, without incrementing
-		 * rq_in_driver. As a negative consequence,
-		 * rq_in_driver is deceptively lower than it should be
-		 * while this request is in service. This may cause
-		 * bfq_schedule_dispatch to be invoked uselessly.
+		 * We exploit the bfq_finish_requeue_request hook to
+		 * decrement rq_in_driver, but
+		 * bfq_finish_requeue_request will not be invoked on
+		 * this request. So, to avoid unbalance, just start
+		 * this request, without incrementing rq_in_driver. As
+		 * a negative consequence, rq_in_driver is deceptively
+		 * lower than it should be while this request is in
+		 * service. This may cause bfq_schedule_dispatch to be
+		 * invoked uselessly.
 		 *
 		 * As for implementing an exact solution, the
-		 * bfq_finish_request hook, if defined, is probably
-		 * invoked also on this request. So, by exploiting
-		 * this hook, we could 1) increment rq_in_driver here,
-		 * and 2) decrement it in bfq_finish_request. Such a
-		 * solution would let the value of the counter be
-		 * always accurate, but it would entail using an extra
-		 * interface function. This cost seems higher than the
-		 * benefit, being the frequency of non-elevator-private
+		 * bfq_finish_requeue_request hook, if defined, is
+		 * probably invoked also on this request. So, by
+		 * exploiting this hook, we could 1) increment
+		 * rq_in_driver here, and 2) decrement it in
+		 * bfq_finish_requeue_request. Such a solution would
+		 * let the value of the counter be always accurate,
+		 * but it would entail using an extra interface
+		 * function. This cost seems higher than the benefit,
+		 * being the frequency of non-elevator-private
 		 * requests very low.
 		 */
 		goto start_rq;
@@ -4515,6 +4517,8 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
 					   unsigned int cmd_flags) {}
 #endif
 
+static void bfq_prepare_request(struct request *rq, struct bio *bio);
+
 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,