Hi Neil
Thanks so much for your suggestion.
On 2019/8/23 9:04, NeilBrown wrote:
>> I have checked
>> v3.10.108
>> v3.18.140
>> v4.1.49
>> but there seems to be no fix for it.
>>
>> And maybe it was not fixed until
>> 8ae126660fddbeebb9251a174e6fa45b6ad8f932
>> ("block: kill merge_bvec_fn() completely")
Would anyone please give some comments here?
Should we discard the merge_bvec_fn for raid5 and backport the bio split
code there?
Thanks in advance.
Jianchao
On 2019/8/21 19:42, Jianchao Wang wrote:
Hi dear all
This is a question in older kernel versions.
We are using a 3.10 series kernel in our production environment, and we
encountered the issue below.
When adding a page into a bio, .merge_bvec_fn is invoked all the way down
to the bottom device, and bio->bi_rw is saved into bvec_merge_data.bi_rw as the follo
ue of (32776 + 8) is not expected.
Suggested-by: Jens Axboe
Signed-off-by: Jianchao Wang
---
V2:
- refactor the code based on Jens' suggestion
block/blk-mq.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8f5b533..9437
o check and clear the RESTART flag.
Fixes: bd166ef1 ("blk-mq-sched: add framework for MQ capable IO schedulers")
Reported-by: Florian Stecker
Tested-by: Florian Stecker
Signed-off-by: Jianchao Wang
---
block/blk-flush.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/bl
Swap REQ_NOWAIT and REQ_NOUNMAP and add REQ_HIPRI.
Signed-off-by: Jianchao Wang
---
block/blk-mq-debugfs.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 90d6876..f812083 100644
--- a/block/blk-mq-debugfs.c
+++ b/block
hctx type, like,
ctx->hctxs[type]
Signed-off-by: Jianchao Wang
---
block/blk-mq-sched.c | 2 +-
block/blk-mq-tag.c | 2 +-
block/blk-mq.c | 4 ++--
block/blk-mq.h | 7 ---
block/blk.h | 2 +-
5 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/block/blk-
the poll is enabled or not, because
the caller would clear the REQ_HIPRI in that case.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 9 -
block/blk-mq.h | 13 +
2 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 445d0a2..8a8
Hi Jens
These two patches are a small optimization for accessing the queue mapping
in the hot path. They save the queue mapping results into blk_mq_ctx directly,
so we needn't do the complicated bounce through queue_hw_ctx[], map[] and
mq_map[].
Jianchao Wang (2)
blk-mq: save queue mapping result int
Replace blk_mq_request_issue_directly with blk_mq_try_issue_directly
in blk_insert_cloned_request, and kill the former as nobody uses it any more.
Signed-off-by: Jianchao Wang
---
block/blk-core.c | 4 +++-
block/blk-mq.c | 9 +
block/blk-mq.h | 6 --
3 files changed, 8 insertions(+), 11
. It introduces a new decision result which indicates that the request
should be inserted with blk_mq_request_bypass_insert.
- Modify the code to adapt to the new patch 1.
V2:
- Add 1st and 2nd patch to refactor the code.
Jianchao Wang (3)
blk-mq: refactor the code of issue request directly
blk-mq: issue directly
d will harm the merging. We just need to do that for
the requests that have been through .queue_rq. This patch also
fixes this.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 103 ++---
1 file changed, 54 insertions(+), 49 deletions(-)
diff --
patch check,
because blk_mq_try_issue_directly can handle it well. If a request
fails to be issued directly, insert the rest.
Signed-off-by: Jianchao Wang
---
block/blk-mq-sched.c | 8 +++-
block/blk-mq.c | 20 +---
2 files changed, 12 insertions(+), 16 deletions(-)
diff --git a/block/bl
mq_make_request is introduced
to decide insert, end or just return based on the return value of .queue_rq
and bypass_insert (1/4)
- Add the 2nd patch. It introduces a new decision result which indicates
that the request should be inserted with blk_mq_request_bypass_insert.
- Modify the code to adapt the ne
hen the caller needn't do any other handling any more, and the code
can be cleaned up.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 103 ++---
1 file changed, 54 insertions(+), 49 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.
serting the non-read-write request into the hctx dispatch
list to avoid involving merge and the io scheduler when bypass_insert
is true.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 18 --
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.
patch check,
because blk_mq_try_issue_directly can handle it well.
With respect to the commit_rqs hook, we only need to care about the last
request's result: if it is inserted, invoke commit_rqs. We identify
the actual result of blk_mq_try_issue_directly via the returned cookie.
Signed-off-by: Jianchao Wang
---
block
1.
V2:
- Add 1st and 2nd patch to refactor the code.
Jianchao Wang (4)
blk-mq: insert to hctx dispatch list when
blk-mq: refactor the code of issue request directly
blk-mq: issue directly with bypass 'false' in
blk-mq: replace and kill blk_mq_request_issue_directly
block/blk-core.c
It is not necessary for blk_mq_sched_insert_requests to issue requests
directly with bypass 'true' and handle the non-issued requests
itself. Just set bypass to 'false' and let blk_mq_try_issue_directly
handle them entirely.
Signed-off-by: Jianchao Wang
---
block
hen the caller needn't do any other
handling any more.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 89 --
1 file changed, 43 insertions(+), 46 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 411be60..1b57449 100644
--- a
id to pass through the underlying
path's io scheduler.
To fix it, use blk_mq_request_bypass_insert to insert the request
into hctx->dispatch when we cannot go through the io scheduler but
have to insert.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 7 +--
1 file changed, 5 insertio
ode to adapt the new patch 1.
V2:
- Add 1st and 2nd patch to refactor the code.
Jianchao Wang(4)
blk-mq: refactor the code of issue request directly
blk-mq: fix issue directly case when q is stopped or quiesced
blk-mq: issue directly with bypass 'false' in
forcibly.
- invoke __blk_mq_issue_directly with preemption disabled.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 11c52bb..58f15cc 100644
--- a/block/blk-mq.c
+++ b
id to pass through the underlying
path's io scheduler.
To fix it, add a new mq_issue_decision entry, MQ_ISSUE_INSERT_DISPATCH,
for the above case where the request needs to be inserted forcibly,
and use blk_mq_request_bypass_insert to insert the request into
hctx->dispatch directly.
Signed-off
hen the caller needn't do any other
handling any more.
To make the code clearer, introduce the new helpers enum mq_issue_decision
and blk_mq_make_decision to decide how to handle the non-issued
requests.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 108 +
Replace blk_mq_request_issue_directly with blk_mq_try_issue_directly
in blk_insert_cloned_request and remove blk_mq_request_issue_directly
as nobody uses it.
Signed-off-by: Jianchao Wang
---
block/blk-core.c | 2 +-
block/blk-mq.c | 7 +--
block/blk-mq.h | 6 --
3 files changed, 6
Make __blk_mq_issue_directly be able to accept a NULL cookie pointer
and remove the dummy unused_cookie in blk_mq_request_issue_directly.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/block/blk-mq.c b/block/blk
It is not necessary for blk_mq_sched_insert_requests to issue requests
directly with bypass 'true' and insert the non-issued requests
itself. Just set bypass to 'false' and let blk_mq_try_issue_directly
handle them entirely.
Signed-off-by: Jianchao Wang
---
block
2:
- Add 1st and 2nd patch to refactor the code.
Jianchao Wang(6)
blk-mq: make __blk_mq_issue_directly be able to accept NULL cookie pointer
blk-mq: refactor the code of issue request directly
blk-mq: fix issue directly case when q is stopped or quiesced
blk-mq: ensure hctx to be ran on mapped cp
.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 104 +
1 file changed, 61 insertions(+), 43 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index af5b591..962fdfc 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1729,78 +1729,96
f .queue_rq
and bypass_insert (1/4)
- Add the 2nd patch. It introduces a new decision result which indicates
that the request should be inserted with blk_mq_request_bypass_insert.
- Modify the code to adapt the new patch 1.
V2:
- Add 1st and 2nd patch
Jianchao Wang (5)
blk-mq: make __blk_mq_issue_directly be abl
__blk_mq_issue_directly under preemption
disabled.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index bf8b144..4450eb6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1771,6 +1771,17 @@ static
Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
into one interface that is able to handle the return value from
the .queue_rq callback. To make the code clearer, introduce the new
helpers blk_mq_make_decision and enum mq_decision.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 104
It is not necessary for blk_mq_sched_insert_requests to issue requests
directly with bypass_insert 'true' and insert the non-issued
requests itself. Just set bypass_insert to 'false' and let
blk_mq_try_issue_directly handle them.
Signed-off-by: Jianchao Wang
---
block
ould avoid passing through the underlying
paths' io scheduler.
To fix it, use blk_mq_request_bypass_insert to insert the request
into hctx->dispatch directly.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 17 -
1 file changed, 12 insertions(+), 5 deletions(-)
diff
- Correct the code for the case where bypass_insert is true and an io
scheduler is attached. The request still needs to be issued in the case
above. (1/4)
- Refactor the code to make it clearer. (1/4)
- Add the 2nd patch.
- Modify the code to adapt the new patch 1.
V2:
- Add 1st and 2nd patch
Jianc
consider contiguity
for DISCARD for the case max_discard_segments > 1, and cannot merge
contiguous DISCARDs for the case max_discard_segments == 1, because
rq_attempt_discard_merge always returns false in this case.
This patch fixes both cases above.
Signed-off-by: Jianchao Wang
---
V5:
-
Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
into one interface that is able to handle the return value from
the .queue_rq callback. Since we can only issue directly without an
io scheduler, remove the blk_mq_get_driver_tag.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 109
bypass "false" instead of "true"; then we needn't handle the non-issued
requests any more.
The 3rd patch ensures the hctx is run on its mapped cpu in the issue
directly path.
V2:
- Add 1st and 2nd patch.
Jianchao Wang(3)
blk-mq: refactor the code of issue request d
The discard command supports multiple ranges of blocks, so we needn't
check position contiguity when merging. Let's do the same thing
in attempt_merge as in blk_try_merge.
Signed-off-by: Jianchao Wang
---
block/blk-merge.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
di
Updating hw queues uses bio retrieval to drain the queues. Unquiescing
the queues before that is not needed and will cause requests to be issued
to dead hw queues. So move unquiescing the queues, as well as the freeze
wait, to after updating the hw queues.
Signed-off-by: Jianchao Wang
---
drivers/nvme/host/pci.c
to drain request_queue.
Signed-off-by: Jianchao Wang
---
block/blk-core.c | 2 ++
block/blk-mq-sched.c | 88 ++
block/blk-mq.c | 42
include/linux/blk-mq.h | 4 +++
include/linux/blkdev.h | 2 ++
5 fi
retrieve bios of all requests on the queue to drain requests, so we
needn't depend on the storage device to drain the queue any more.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index f7
V2:
- clear BIO_QUEUE_ENTERED of requeued bios (1/3)
- sync bio_requeue_work in blk_cleanup_queue (1/3)
- discard the unnecessary synchronize_sched (2/3)
- discard the 4th patch which is wrong
- some misc comment changes
Jianchao Wang(3)
blk-mq: introduce bio retrieve mechanism
blk-mq
.
- allocate kcqs per khd
Jens Axboe (1)
0001-blk-mq-abstract-out-blk-mq-sched-rq-list-iteration-b.patch
Jianchao Wang (1)
0002-block-kyber-make-kyber-more-friendly-with-merging.patch
block/blk-mq-sched.c | 34 ++---
block/kyber-iosched.c | 197
each 1662MB/s and 425k
on my platform.
Signed-off-by: Jianchao Wang
Tested-by: Holger Hoffstätte
---
block/kyber-iosched.c | 197 +-
1 file changed, 162 insertions(+), 35 deletions(-)
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
i
From: Jens Axboe
No functional changes in this patch, just a prep patch for utilizing
this in an IO scheduler.
Signed-off-by: Jens Axboe
---
block/blk-mq-sched.c | 34 --
include/linux/blk-mq.h | 3 ++-
2 files changed, 26 insertions(+), 11 deletions(-)
diff
| w/ | 1083/616 | 277k/154k | 4.93/6.95 | 1830.62/3279.95 | 223k/3k |
(benchmark table truncated; header and baseline row lost)
When numjobs is set to 16, the bw and iops can reach 1662MB/s and 425k
on my platform.
Signed-off-by: Jianchao Wang
---
block/kyber-iosch
There is no plug trace event for multiple hw queues. This is
confusing when checking the block trace event log and finding an
unplug event there without a matching plug. Add a plug trace event
when a request is added to an empty plug list.
Signed-off-by: Jianchao Wang
---
block/blk-mq.c | 3 +++
1 file changed, 3 insertions(+)
diff
.
Cc: Bart Van Assche
Cc: Tejun Heo
Cc: Ming Lei
Cc: Martin Steigerwald
Cc: sta...@vger.kernel.org
Signed-off-by: Jianchao Wang
---
block/blk-core.c | 4
block/blk-mq.c | 7 +++
2 files changed, 11 insertions(+)
diff --git a/block/blk-core.c b/block/blk-core.c
index abcb868..ce626
When getting the budget fails, blk_mq_sched_dispatch_requests does not
do anything to ensure the hctx is restarted. We can survive this,
because only scsi implements .get_budget, and it always runs the hctx
queues when a request is completed.
Signed-off-by: Jianchao Wang
---
block/blk-mq