Re: [PATCH 7/8] blk-mq: use bd->last == true for list inserts

2018-11-27, Jens Axboe
On 11/27/18 6:49 PM, Ming Lei wrote:
> On Mon, Nov 26, 2018 at 09:35:55AM -0700, Jens Axboe wrote:
>> If we are issuing a list of requests, we know if we're at the last one.
>> If we fail issuing, ensure that we call ->commit_rqs() to flush any
>> potential previous requests.
>>
>> Signed-off-by: Jens Axboe 
>> ---
>>  block/blk-core.c |  2 +-
>>  block/blk-mq.c   | 32 ++++++++++++++++++++++++--------
>>  block/blk-mq.h   |  2 +-
>>  3 files changed, 26 insertions(+), 10 deletions(-)
>>
>> diff --git a/block/blk-core.c b/block/blk-core.c
>> index c9758d185357..808a65d23f1a 100644
>> --- a/block/blk-core.c
>> +++ b/block/blk-core.c
>> @@ -1334,7 +1334,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
>>   * bypass a potential scheduler on the bottom device for
>>   * insert.
>>   */
>> -return blk_mq_request_issue_directly(rq);
>> +return blk_mq_request_issue_directly(rq, true);
>>  }
>>  EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
>>  
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index 6a249bf6ed00..0a12cec0b426 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -1260,6 +1260,14 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
>>  if (!list_empty(list)) {
>>  bool needs_restart;
>>  
>> +/*
>> + * If we didn't flush the entire list, we could have told
>> + * the driver there was more coming, but that turned out to
>> + * be a lie.
>> + */
>> +if (q->mq_ops->commit_rqs)
>> +q->mq_ops->commit_rqs(hctx);
>> +
> 
> Looks like you missed doing it for blk_mq_do_dispatch_sched() and
> blk_mq_do_dispatch_ctx(), in which only one request is added to
> the rq_list.

blk_mq_do_dispatch_sched() is handled through blk_mq_dispatch_rq_list();
it doesn't need to do this on its own, since it's only working on
single requests.

Ditto for blk_mq_do_dispatch_ctx().
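
[Editor's note: for reference, a condensed sketch of that scheduler dispatch
loop, simplified from block/blk-mq-sched.c of this era (has_work and restart
handling omitted). Each iteration hands blk_mq_dispatch_rq_list() a one-entry
list, so the commit_rqs() hunk in this patch covers that path as well:]

static void sketch_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
{
	struct request_queue *q = hctx->queue;
	struct elevator_queue *e = q->elevator;
	LIST_HEAD(rq_list);

	do {
		struct request *rq;

		if (!blk_mq_get_dispatch_budget(hctx))
			break;

		rq = e->type->ops.dispatch_request(hctx);
		if (!rq) {
			blk_mq_put_dispatch_budget(hctx);
			break;
		}

		/* a one-entry list every time around the loop */
		list_add(&rq->queuelist, &rq_list);
	} while (blk_mq_dispatch_rq_list(q, &rq_list, true));
}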

> Maybe we can call the .commit_rqs(hctx) just at the end of
> blk_mq_sched_dispatch_requests() to cover all non-direct-issue
> cases.

I think you're missing the point of ->commit_rqs() - it's only meant
to catch the case where we set bd->last == false and never got to
the end, like not getting a driver tag, for instance. It's not meant
to be a general "kick things into gear" mechanism; it's more
efficient to use bd->last for the common case, as most drivers
require some sort of locking for the submission.
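
[Editor's note: to make that contract concrete, here is a minimal,
hypothetical driver sketch. The sketch_* names and the ring_doorbell()
helper are made up for illustration; only .queue_rq, .commit_rqs, bd->last,
and hctx->driver_data are real blk-mq interfaces. The doorbell write is
deferred while bd->last is false, and ->commit_rqs() exists solely so the
core can flush a deferred doorbell when the batch ends early:]

#include <linux/blk-mq.h>

/*
 * Hypothetical per-hctx driver state; ring_doorbell() stands in for
 * whatever MMIO write tells the hardware to scan the submission ring.
 */
struct sketch_queue {
	void (*ring_doorbell)(struct sketch_queue *sq);
};

static blk_status_t sketch_queue_rq(struct blk_mq_hw_ctx *hctx,
				    const struct blk_mq_queue_data *bd)
{
	struct sketch_queue *sq = hctx->driver_data;

	/* ... write bd->rq into the hardware submission ring here ... */

	/*
	 * Only ring the doorbell on the last request of a batch. If the
	 * core fails to issue the rest of the batch after telling us
	 * more was coming, it calls ->commit_rqs() instead.
	 */
	if (bd->last)
		sq->ring_doorbell(sq);
	return BLK_STS_OK;
}

static void sketch_commit_rqs(struct blk_mq_hw_ctx *hctx)
{
	struct sketch_queue *sq = hctx->driver_data;

	/* Flush requests queued with bd->last == false, never committed. */
	sq->ring_doorbell(sq);
}

static const struct blk_mq_ops sketch_mq_ops = {
	.queue_rq	= sketch_queue_rq,
	.commit_rqs	= sketch_commit_rqs,
};

[In the common case the doorbell write rides along with the bd->last request
for free; ->commit_rqs() only runs on the failure path, which is why making
it the general kick would be a pessimization.]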

-- 
Jens Axboe



Re: [PATCH 7/8] blk-mq: use bd->last == true for list inserts

2018-11-27, Ming Lei
On Mon, Nov 26, 2018 at 09:35:55AM -0700, Jens Axboe wrote:
> If we are issuing a list of requests, we know if we're at the last one.
> If we fail issuing, ensure that we call ->commit_rqs() to flush any
> potential previous requests.
> 
> Signed-off-by: Jens Axboe 
> ---
>  block/blk-core.c |  2 +-
>  block/blk-mq.c   | 32 ++++++++++++++++++++++++--------
>  block/blk-mq.h   |  2 +-
>  3 files changed, 26 insertions(+), 10 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index c9758d185357..808a65d23f1a 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1334,7 +1334,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
>* bypass a potential scheduler on the bottom device for
>* insert.
>*/
> - return blk_mq_request_issue_directly(rq);
> + return blk_mq_request_issue_directly(rq, true);
>  }
>  EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
>  
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 6a249bf6ed00..0a12cec0b426 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1260,6 +1260,14 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
>   if (!list_empty(list)) {
>   bool needs_restart;
>  
> + /*
> +  * If we didn't flush the entire list, we could have told
> +  * the driver there was more coming, but that turned out to
> +  * be a lie.
> +  */
> + if (q->mq_ops->commit_rqs)
> + q->mq_ops->commit_rqs(hctx);
> +

Looks like you missed doing it for blk_mq_do_dispatch_sched() and
blk_mq_do_dispatch_ctx(), in which only one request is added to
the rq_list.

Maybe we can call the .commit_rqs(hctx) just at the end of
blk_mq_sched_dispatch_requests() to cover all non-direct-issue
cases.
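
[Editor's note: a rough sketch of that suggested placement - hypothetical,
this is not what the posted patch does, and sketch_sched_dispatch_requests()
is an illustrative stand-in for blk_mq_sched_dispatch_requests():]

/*
 * Hypothetical variant of the suggestion: kick the driver once at the
 * end of scheduler dispatch, covering every non-direct-issue path.
 */
static void sketch_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
{
	struct request_queue *q = hctx->queue;

	/* ... existing dispatch of hctx->dispatch, ctx and elevator lists ... */

	if (q->mq_ops->commit_rqs)
		q->mq_ops->commit_rqs(hctx);
}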

Thanks,
Ming


Re: [PATCH 7/8] blk-mq: use bd->last == true for list inserts

2018-11-27, Jens Axboe
On 11/27/18 4:49 PM, Omar Sandoval wrote:
> On Mon, Nov 26, 2018 at 09:35:55AM -0700, Jens Axboe wrote:
>> If we are issuing a list of requests, we know if we're at the last one.
>> If we fail issuing, ensure that we call ->commit_rqs() to flush any
>> potential previous requests.
> 
> One comment below, otherwise
> 
> Reviewed-by: Omar Sandoval 

>> @@ -1260,6 +1260,14 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
>>  if (!list_empty(list)) {
>>  bool needs_restart;
>>  
>> +/*
>> + * If we didn't flush the entire list, we could have told
>> + * the driver there was more coming, but that turned out to
>> + * be a lie.
>> + */
>> +if (q->mq_ops->commit_rqs)
>> +q->mq_ops->commit_rqs(hctx);
>> +
> 
> This hunk seems like it should go with the patch adding commit_rqs.

Agree, that would be better, since that also makes that patch fix an
actual issue instead of just being a prep patch. I'll shuffle that hunk
to that patch.

-- 
Jens Axboe



Re: [PATCH 7/8] blk-mq: use bd->last == true for list inserts

2018-11-27, Omar Sandoval
On Mon, Nov 26, 2018 at 09:35:55AM -0700, Jens Axboe wrote:
> If we are issuing a list of requests, we know if we're at the last one.
> If we fail issuing, ensure that we call ->commit_rqs() to flush any
> potential previous requests.

One comment below, otherwise

Reviewed-by: Omar Sandoval 

> Signed-off-by: Jens Axboe 
> ---
>  block/blk-core.c |  2 +-
>  block/blk-mq.c   | 32 ++++++++++++++++++++++++--------
>  block/blk-mq.h   |  2 +-
>  3 files changed, 26 insertions(+), 10 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index c9758d185357..808a65d23f1a 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1334,7 +1334,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
>* bypass a potential scheduler on the bottom device for
>* insert.
>*/
> - return blk_mq_request_issue_directly(rq);
> + return blk_mq_request_issue_directly(rq, true);
>  }
>  EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
>  
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 6a249bf6ed00..0a12cec0b426 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1260,6 +1260,14 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
>   if (!list_empty(list)) {
>   bool needs_restart;
>  
> + /*
> +  * If we didn't flush the entire list, we could have told
> +  * the driver there was more coming, but that turned out to
> +  * be a lie.
> +  */
> + if (q->mq_ops->commit_rqs)
> + q->mq_ops->commit_rqs(hctx);
> +

This hunk seems like it should go with the patch adding commit_rqs.

>   spin_lock(&hctx->lock);
>   list_splice_init(list, &hctx->dispatch);
>   spin_unlock(&hctx->lock);
> @@ -1736,12 +1744,12 @@ static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq)
>  
>  static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
>   struct request *rq,
> - blk_qc_t *cookie)
> + blk_qc_t *cookie, bool last)
>  {
>   struct request_queue *q = rq->q;
>   struct blk_mq_queue_data bd = {
>   .rq = rq,
> - .last = true,
> + .last = last,
>   };
>   blk_qc_t new_cookie;
>   blk_status_t ret;
> @@ -1776,7 +1784,7 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
>  static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>   struct request *rq,
>   blk_qc_t *cookie,
> - bool bypass_insert)
> + bool bypass_insert, bool last)
>  {
>   struct request_queue *q = rq->q;
>   bool run_queue = true;
> @@ -1805,7 +1813,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>   goto insert;
>   }
>  
> - return __blk_mq_issue_directly(hctx, rq, cookie);
> + return __blk_mq_issue_directly(hctx, rq, cookie, last);
>  insert:
>   if (bypass_insert)
>   return BLK_STS_RESOURCE;
> @@ -1824,7 +1832,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>  
>   hctx_lock(hctx, &srcu_idx);
>  
> - ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
> + ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
>   if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
>   blk_mq_sched_insert_request(rq, false, true, false);
>   else if (ret != BLK_STS_OK)
> @@ -1833,7 +1841,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>   hctx_unlock(hctx, srcu_idx);
>  }
>  
> -blk_status_t blk_mq_request_issue_directly(struct request *rq)
> +blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
>  {
>   blk_status_t ret;
>   int srcu_idx;
> @@ -1841,7 +1849,7 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq)
>   struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
>  
>   hctx_lock(hctx, &srcu_idx);
> - ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true);
> + ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
>   hctx_unlock(hctx, srcu_idx);
>  
>   return ret;
> @@ -1856,7 +1864,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
>   struct request *rq = list_first_entry(list, struct request,
>   queuelist);
>  
>   list_del_init(&rq->queuelist);
> - ret = blk_mq_request_issue_directly(rq);
> + ret = blk_mq_request_issue_directly(rq, list_empty(list));
>   if (ret != BLK_STS_OK) {
>   if (ret == BLK_STS_RESOURCE ||
>   ret == BLK_STS_DEV_RESOURCE) {
> @@ -1866,6 

[PATCH 7/8] blk-mq: use bd->last == true for list inserts

2018-11-26, Jens Axboe
If we are issuing a list of requests, we know if we're at the last one.
If we fail issuing, ensure that we call ->commit_rqs() to flush any
potential previous requests.

Signed-off-by: Jens Axboe 
---
 block/blk-core.c |  2 +-
 block/blk-mq.c   | 32 ++++++++++++++++++++++++--------
 block/blk-mq.h   |  2 +-
 3 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c9758d185357..808a65d23f1a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1334,7 +1334,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 * bypass a potential scheduler on the bottom device for
 * insert.
 */
-   return blk_mq_request_issue_directly(rq);
+   return blk_mq_request_issue_directly(rq, true);
 }
 EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6a249bf6ed00..0a12cec0b426 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1260,6 +1260,14 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
if (!list_empty(list)) {
bool needs_restart;
 
+   /*
+* If we didn't flush the entire list, we could have told
+* the driver there was more coming, but that turned out to
+* be a lie.
+*/
+   if (q->mq_ops->commit_rqs)
+   q->mq_ops->commit_rqs(hctx);
+
spin_lock(&hctx->lock);
list_splice_init(list, &hctx->dispatch);
spin_unlock(&hctx->lock);
@@ -1736,12 +1744,12 @@ static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq)
 
 static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq,
-   blk_qc_t *cookie)
+   blk_qc_t *cookie, bool last)
 {
struct request_queue *q = rq->q;
struct blk_mq_queue_data bd = {
.rq = rq,
-   .last = true,
+   .last = last,
};
blk_qc_t new_cookie;
blk_status_t ret;
@@ -1776,7 +1784,7 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq,
blk_qc_t *cookie,
-   bool bypass_insert)
+   bool bypass_insert, bool last)
 {
struct request_queue *q = rq->q;
bool run_queue = true;
@@ -1805,7 +1813,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
goto insert;
}
 
-   return __blk_mq_issue_directly(hctx, rq, cookie);
+   return __blk_mq_issue_directly(hctx, rq, cookie, last);
 insert:
if (bypass_insert)
return BLK_STS_RESOURCE;
@@ -1824,7 +1832,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 
hctx_lock(hctx, &srcu_idx);
 
-   ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
+   ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
blk_mq_sched_insert_request(rq, false, true, false);
else if (ret != BLK_STS_OK)
@@ -1833,7 +1841,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
hctx_unlock(hctx, srcu_idx);
 }
 
-blk_status_t blk_mq_request_issue_directly(struct request *rq)
+blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
 {
blk_status_t ret;
int srcu_idx;
@@ -1841,7 +1849,7 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq)
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
hctx_lock(hctx, &srcu_idx);
-   ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true);
+   ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
hctx_unlock(hctx, srcu_idx);
 
return ret;
@@ -1856,7 +1864,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq = list_first_entry(list, struct request,
queuelist);
 
list_del_init(&rq->queuelist);
-   ret = blk_mq_request_issue_directly(rq);
+   ret = blk_mq_request_issue_directly(rq, list_empty(list));
if (ret != BLK_STS_OK) {
if (ret == BLK_STS_RESOURCE ||
ret == BLK_STS_DEV_RESOURCE) {
@@ -1866,6 +1874,14 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
blk_mq_end_request(rq, ret);
}
}
+
+   /*
+* If we didn't flush the entire list, we could have told
+* the driver there was more coming, but that turned out to
+* be a lie.
+*/
+   if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs)
+   hctx->queue->mq_ops->commit_rqs(hctx);
 }
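
[Editor's note: the list_empty(list) argument above is how the patch derives
bd->last - once a request has been unlinked, an empty list means it was the
final one. A self-contained toy illustration of the pattern; issue_all() and
struct item are made up for illustration:]

#include <linux/list.h>
#include <linux/printk.h>

struct item {
	struct list_head node;
};

/*
 * Pop entries front to back; after list_del_init(), list_empty() on the
 * head is true exactly when the entry just removed was the last one,
 * which is what gets passed through as bd->last in the real code.
 */
static void issue_all(struct list_head *head)
{
	while (!list_empty(head)) {
		struct item *it = list_first_entry(head, struct item, node);

		list_del_init(&it->node);
		if (list_empty(head))
			pr_debug("issuing final entry %p of the batch\n", it);
	}
}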