blk-mq: make sure that correct hctx->dispatch_from is set

Author: huhai 
Date:   Fri May 18 17:09:56 2018 +0800

blk-mq: make sure that correct hctx->dispatch_from is set

When the number of hardware queues is changed, drivers call
blk_mq_update_nr_hw_queues() to remap the hardware queues, and the set of
ctxs mapped to each hctx changes as well. The current code forgets to
reset hctx->dispatch_from, so it may be left pointing at a ctx that no
longer belongs to the current hctx.

Signed-off-by: huhai 

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2545081..55d8a3d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2214,6 +2214,8 @@ static void blk_mq_map_swqueue(struct request_queue *q)
hctx->tags = set->tags[i];
WARN_ON(!hctx->tags);
 
+   hctx->dispatch_from = NULL;
+
/*
 * Set the map size to the number of mapped software queues.
 * This is more accurate and more efficient than looping
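
For context, hctx->dispatch_from is the round-robin cursor that the
scheduler dispatch path resumes from on its next run. A simplified sketch,
paraphrased from the 4.17-era blk_mq_do_dispatch_ctx() in
block/blk-mq-sched.c (budget and ctx_map checks omitted), shows how a
stale cursor would be consumed:

static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
{
        struct request_queue *q = hctx->queue;
        LIST_HEAD(rq_list);
        /* Resume from wherever the previous dispatch run stopped. */
        struct blk_mq_ctx *ctx = READ_ONCE(hctx->dispatch_from);

        do {
                struct request *rq;

                /* Pull one request from the ctx the cursor points at. */
                rq = blk_mq_dequeue_from_ctx(hctx, ctx);
                if (!rq)
                        break;
                list_add(&rq->queuelist, &rq_list);

                /* Round robin: advance to the next ctx on this hctx. */
                ctx = blk_mq_next_ctx(hctx, rq->mq_ctx);
        } while (blk_mq_dispatch_rq_list(q, &rq_list, true));

        /* Save the cursor; after a remap this could be a foreign ctx. */
        WRITE_ONCE(hctx->dispatch_from, ctx);
}

Clearing hctx->dispatch_from in blk_mq_map_swqueue() guarantees that the
next dispatch run starts from a ctx that belongs to the remapped hctx.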

blk-mq: for sync case, whether it is mq or sq make_request instances, we should send the request directly

Author: huhai 
Date:   Wed May 16 10:34:22 2018 +0800

blk-mq: for sync case, whether it is mq or sq make_request instances, we should send the request directly

For sq make_request instances, a sync request should be issued directly
as well; otherwise the semantics of sync requests are broken, since the
current logic submits synchronous requests asynchronously.

Signed-off-by: huhai 

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5629f18..fcf2f16 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1771,7 +1771,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
blk_mq_try_issue_directly(data.hctx, same_queue_rq,
				&cookie);
}
-   } else if (q->nr_hw_queues > 1 && is_sync) {
+   } else if (is_sync) {
blk_mq_put_ctx(data.ctx);
blk_mq_bio_to_request(rq, bio);
	blk_mq_try_issue_directly(data.hctx, rq, &cookie);
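
Direct issue is safe for single-queue devices because it degrades
gracefully: when the driver cannot take the request, it falls back to the
normal insert path. A condensed sketch, paraphrased from the 4.17-era
blk_mq_try_issue_directly() (SRCU/blocking details compressed):

static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
                                      struct request *rq, blk_qc_t *cookie)
{
        blk_status_t ret;
        int srcu_idx;

        hctx_lock(hctx, &srcu_idx);

        /* Hand the request straight to the driver's ->queue_rq(). */
        ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
        if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
                /* Driver busy: fall back to the regular insert path. */
                blk_mq_sched_insert_request(rq, false, true, false);
        else if (ret != BLK_STS_OK)
                /* Hard error: complete the request with that status. */
                blk_mq_end_request(rq, ret);

        hctx_unlock(hctx, &srcu_idx);
}

So the nr_hw_queues > 1 check only delayed sync submission on
single-queue devices; it never made direct issue unsafe there.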

blk-mq: remove unnecessary judgement from blk_mq_make_request

Author: huhai 
Date:   Tue May 15 15:15:06 2018 +0800

blk-mq: remove unnecessary judgement from blk_mq_make_request

Whether q->elevator is set or not, blk_mq_sched_insert_request() can
complete the work, so the separate non-elevator branch is unnecessary.

Signed-off-by: huhai 

diff --git a/block/blk-mq.c b/block/blk-mq.c
index fcf2f16..2545081 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1580,15 +1580,6 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
blk_account_io_start(rq, true);
 }
 
-static inline void blk_mq_queue_io(struct blk_mq_hw_ctx *hctx,
-				   struct blk_mq_ctx *ctx,
-				   struct request *rq)
-{
-	spin_lock(&ctx->lock);
-	__blk_mq_insert_request(hctx, rq, false);
-	spin_unlock(&ctx->lock);
-}
-
 static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq)
 {
if (rq->tag != -1)
@@ -1775,15 +1766,10 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
blk_mq_put_ctx(data.ctx);
blk_mq_bio_to_request(rq, bio);
	blk_mq_try_issue_directly(data.hctx, rq, &cookie);
-   } else if (q->elevator) {
+   } else {
blk_mq_put_ctx(data.ctx);
blk_mq_bio_to_request(rq, bio);
	blk_mq_sched_insert_request(rq, false, true, true);
-   } else {
-   blk_mq_put_ctx(data.ctx);
-   blk_mq_bio_to_request(rq, bio);
-   blk_mq_queue_io(data.hctx, data.ctx, rq);
-   blk_mq_run_hw_queue(data.hctx, true);
}
 
return cookie;
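
Why the removed else branch was redundant: blk_mq_sched_insert_request()
already handles the no-elevator case by inserting into the software
queue, which is exactly what blk_mq_queue_io() did. A simplified sketch,
paraphrased from the 4.17-era block/blk-mq-sched.c (flush and bypass
special cases omitted):

void blk_mq_sched_insert_request(struct request *rq, bool at_head,
                                 bool run_queue, bool async)
{
        struct request_queue *q = rq->q;
        struct elevator_queue *e = q->elevator;
        struct blk_mq_ctx *ctx = rq->mq_ctx;
        struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);

        if (e && e->type->ops.mq.insert_requests) {
                LIST_HEAD(list);

                /* Elevator present: hand the request to the scheduler. */
                list_add(&rq->queuelist, &list);
                e->type->ops.mq.insert_requests(hctx, &list, false);
        } else {
                /* No elevator: insert into the software queue directly,
                 * the same work the removed blk_mq_queue_io() did. */
                spin_lock(&ctx->lock);
                __blk_mq_insert_request(hctx, rq, at_head);
                spin_unlock(&ctx->lock);
        }

        if (run_queue)
                blk_mq_run_hw_queue(hctx, async);
}

The run_queue/async arguments also subsume the explicit
blk_mq_run_hw_queue(data.hctx, true) call from the removed branch.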