On Tue, 2017-08-01 at 00:51 +0800, Ming Lei wrote:
> During dispatch, we move all requests from hctx->dispatch to
> one temporary list, then dispatch them one by one from this list.
> Unfortunately, during this period, a run queue from another context
> may think the queue is idle and start to
On Tue, 2017-08-01 at 00:51 +0800, Ming Lei wrote:
> SCSI devices use a host-wide tagset, and the shared
> driver tag space is often quite big. Meanwhile,
> there is also a queue depth for each LUN (.cmd_per_lun),
> which is often small.
>
> So lots of requests may stay in the sw queue, and we
> always
On Tue, 2017-08-01 at 00:51 +0800, Ming Lei wrote:
> @@ -810,7 +810,11 @@ static void blk_mq_timeout_work(struct work_struct *work)
>
> struct ctx_iter_data {
> struct blk_mq_hw_ctx *hctx;
> - struct list_head *list;
> +
> + union {
> + struct list_head *list;
> +
On Tue, 2017-08-01 at 00:50 +0800, Ming Lei wrote:
> The following patch needs to reuse this data structure,
> so rename it with a more generic name.
Hello Ming,
Please drop this patch (see also my comments on the next patch).
Thanks,
Bart.
On Tue, 2017-08-01 at 00:50 +0800, Ming Lei wrote:
> When the hw queue is busy, we shouldn't take requests from the
> scheduler queue any more, otherwise IO merging will be
> difficult to do.
>
> This patch fixes the awful IO performance on some
> SCSI devices (lpfc, qla2xxx, ...) when mq-deadline/kyber
>
On Mon, 24 Jul 2017 12:52:58 +0200
Thomas Gleixner wrote:
> The BNX2I module init/exit code installs/removes the hotplug callbacks with
> the cpu hotplug lock held. This worked with the old CPU locking
> implementation which allowed recursive locking, but with the new percpu
From: Michael Hernandez
This patch fixes a system hang/crash when a firmware dump is attempted
with Block MQ enabled in the qla2xxx driver. The fix is to remove the
check in fw dump template entries for existing request and response
queues so that the full buffer size is calculated
A SCSI device often has a per-request_queue queue depth
(.cmd_per_lun), which is in fact applied across all hw queues;
this patchset calls this the shared queue depth.
One principle of I/O scheduling is that we shouldn't dequeue a
request from the sw/scheduler queue and dispatch it to the
driver while the low-level device is busy.
We need to support a per-request_queue dispatch list to avoid
early dispatch in the case of a shared queue depth; a sketch of
the idea follows the diffstat below.
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 6 +++---
block/blk-mq.h | 15 +--
2 files changed, 12 insertions(+), 9 deletions(-)
diff
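A minimal sketch of that idea, assuming one dispatch list shared by
all hw queues of a request_queue (the struct and helper names are
illustrative, not the patch's own; kernel context with
<linux/spinlock.h> and <linux/list.h> is assumed):

	/*
	 * Hypothetical per-request_queue dispatch state: any context can
	 * tell whether an earlier dispatch already hit the shared limit.
	 */
	struct shared_dispatch {
		spinlock_t		lock;	/* protects @list */
		struct list_head	list;	/* held back after BLK_STS_RESOURCE */
	};

	static bool shared_dispatch_busy(struct shared_dispatch *sd)
	{
		bool busy;

		spin_lock_irq(&sd->lock);
		busy = !list_empty(&sd->list);	/* leftovers mean "device busy" */
		spin_unlock_irq(&sd->lock);

		return busy;
	}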
Prepare to support a per-request-queue dispatch list by
introducing a dispatch lock and list, so that a runtime
check can be avoided; see the sketch after the diffstat.
Signed-off-by: Ming Lei
---
block/blk-mq-debugfs.c | 10 +-
block/blk-mq.c | 7 +--
block/blk-mq.h | 26
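What such once-initialized indirection might look like (the pointer
members below are assumptions about the patch, not mainline fields):

	/*
	 * Hypothetical: hctx->dispatch_lock and hctx->dispatch_list are set
	 * up once at init time to point either at the hctx's own lock/list
	 * or at per-request_queue ones, so the hot path needs no runtime
	 * mode check.
	 */
	static inline void blk_mq_dispatch_lock(struct blk_mq_hw_ctx *hctx)
	{
		spin_lock(hctx->dispatch_lock);
	}

	static inline void blk_mq_dispatch_unlock(struct blk_mq_hw_ctx *hctx)
	{
		spin_unlock(hctx->dispatch_lock);
	}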
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 19 +++
block/blk-mq.c | 18 +++---
block/blk-mq.h | 44
3 files changed, 58 insertions(+), 23 deletions(-)
diff --git
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 6 +++---
block/blk-mq.h | 15 +++
2 files changed, 18 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 07ff53187617..112270961af0 100644
---
SCSI devices often provide a per-request_queue depth via
q->queue_depth (.cmd_per_lun), which is a global limit on all
hw queues. After the pending I/O submitted to one request queue
reaches this limit, BLK_STS_RESOURCE will be returned on every
dispatch path. That means when one hw queue is
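For concreteness, a toy version of how a shared per-LUN depth turns
into BLK_STS_RESOURCE on every hw queue (the counter and function are
made up for illustration, not the SCSI midlayer's actual code):

	/* One counter shared by all hw queues of the request_queue. */
	static blk_status_t lun_depth_check(struct request_queue *q,
					    atomic_t *pending)
	{
		if (atomic_inc_return(pending) > q->queue_depth) {
			atomic_dec(pending);
			return BLK_STS_RESOURCE;	/* every hctx hits this */
		}
		return BLK_STS_OK;
	}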
This patch splits blk_mq_sched_dispatch_requests()
into two parts (sketched below):
1) the first part checks whether the queue is busy and
handles the busy case
2) the second part is moved to __blk_mq_sched_dispatch_requests(),
which focuses on dispatching from the sw queue or scheduler queue.
Signed-off-by: Ming Lei
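In rough outline, the described split looks like this (only the
two-function shape comes from the commit message; the busy check
shown is a placeholder):

	/* 2nd part: the actual dequeue/dispatch work. */
	static void __blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
	{
		/* flush hctx->dispatch, then pull from sw or scheduler queue */
	}

	/* 1st part: detect and handle the busy case. */
	static void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
	{
		if (!list_empty_careful(&hctx->dispatch)) {
			/* placeholder: e.g. kick a requeue instead */
			return;
		}

		__blk_mq_sched_dispatch_requests(hctx);
	}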
SCSI sets q->queue_depth from shost->cmd_per_lun. q->queue_depth
is per request_queue and is more closely related to the
scheduler queue than to the hw queue depth, which can be
shared between queues (e.g. TAG_SHARED).
This patch tries to use q->queue_depth as a hint for computing
q->nr_requests, which should
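A guess at the shape of such a hint (the doubling factor is
illustrative only):

	/* Prefer the per-request_queue depth when the driver set one. */
	static unsigned int sched_default_nr_requests(struct request_queue *q)
	{
		if (q->queue_depth)
			return 2 * q->queue_depth;	/* shared-depth hint */

		return 2 * q->tag_set->queue_depth;	/* hw queue depth */
	}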
The following patch will propose some hints for figuring out a
default queue depth for the scheduler queue, so introduce the
blk_mq_sched_queue_depth() helper for this purpose (a sketch
follows the diffstat).
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 8 +---
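Presumably the helper just wraps blk-mq's existing default; a sketch
(the body is an assumption, not the patch text):

	static inline unsigned int blk_mq_sched_queue_depth(struct request_queue *q)
	{
		/* default: twice the hw queue depth, capped at BLKDEV_MAX_RQ */
		return 2 * min_t(unsigned int, q->tag_set->queue_depth,
				 BLKDEV_MAX_RQ);
	}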
During dispatch, we move all requests from hctx->dispatch to
one temporary list, then dispatch them one by one from this list.
Unfortunately, during this period, a run queue from another context
may think the queue is idle and start to dequeue from the
sw/scheduler queue and try to dispatch, because
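One hedged sketch of closing such a race window (the state bit is made
up for illustration; the actual series may use a different mechanism):

	/* Made-up flag; mainline uses its own BLK_MQ_S_* bits. */
	#define HCTX_STATE_DISPATCHING	31

	static void dispatch_from_temp_list(struct blk_mq_hw_ctx *hctx,
					    struct list_head *tmp)
	{
		/* mark the hctx so concurrent run-queue paths don't see "idle" */
		set_bit(HCTX_STATE_DISPATCHING, &hctx->state);
		/* ... dispatch each request on @tmp to the driver ... */
		clear_bit(HCTX_STATE_DISPATCHING, &hctx->state);
	}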
SCSI devices use a host-wide tagset, and the shared
driver tag space is often quite big. Meanwhile,
there is also a queue depth for each LUN (.cmd_per_lun),
which is often small.
So lots of requests may stay in the sw queue, and we
always flush all of those belonging to the same hw queue and
dispatch them all to the driver,
This function is introduced to pick up one request
from a sw queue so that we can dispatch it in the scheduler's way.
More importantly, for some SCSI devices, driver
tags are host wide, and their number is quite big,
but each LUN has a very limited queue depth. This
function is introduced to avoid taking
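A simplified version of what such a helper might do (name and locking
details are assumptions):

	/* Take a single request off one sw queue instead of flushing it all. */
	static struct request *pick_one_from_ctx(struct blk_mq_ctx *ctx)
	{
		struct request *rq = NULL;

		spin_lock(&ctx->lock);
		if (!list_empty(&ctx->rq_list)) {
			rq = list_first_entry(&ctx->rq_list, struct request,
					      queuelist);
			list_del_init(&rq->queuelist);
		}
		spin_unlock(&ctx->lock);

		return rq;
	}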
The following patch needs to reuse this data structure,
so rename it with a more generic name.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b70a4ad78b63..94818f78c099 100644
When the hw queue is busy, we shouldn't take requests from the
scheduler queue any more, otherwise IO merging will be
difficult to do.
This patch fixes the awful IO performance on some
SCSI devices (lpfc, qla2xxx, ...) when mq-deadline/kyber
is used, by not taking requests when the hw queue is busy.
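In sketch form the policy is just a guard before dequeuing (the helper
below is illustrative):

	/*
	 * Only pull from the scheduler when the previous dispatch left
	 * nothing behind, so requests keep merging while the LLD is busy.
	 */
	static bool may_dequeue_from_sched(struct blk_mq_hw_ctx *hctx)
	{
		return list_empty_careful(&hctx->dispatch);
	}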
In a Red Hat internal storage test of the blk-mq scheduler, we
found that its performance is quite bad, especially
for sequential I/O on some multi-queue SCSI devices.
It turns out one big issue causes the performance regression: requests
are still dequeued from the sw queue/scheduler queue even when
Fail probe if FCoE capability is not enabled in the
firmware.
Signed-off-by: Varun Prakash
---
drivers/scsi/csiostor/csio_hw.c | 4 +++-
drivers/scsi/csiostor/csio_init.c | 12
2 files changed, 11 insertions(+), 5 deletions(-)
diff --git
On Mon, Jul 31, 2017 at 02:23:11PM +0530, Arvind Yadav wrote:
> Yes, we can add all of them in a single patch, but another maintainer
> wants individual patches; that's why I have sent 29 patches. :(
Ultimately it's up to Martin and James, but I don't see a huge benefit in
having it all in a separate
On Monday 31 July 2017 01:26 PM, Johannes Thumshirn wrote:
On Sun, Jul 30, 2017 at 02:07:09PM +0530, Arvind Yadav wrote:
pci_device_id are not supposed to change at runtime. All functions
working with pci_device_id provided by <linux/pci.h> work with
const pci_device_id. So mark the non-const structs as const.
On Sun, Jul 30, 2017 at 10:37 AM, Arvind Yadav wrote:
> pci_device_id are not supposed to change at runtime. All functions
> working with pci_device_id provided by <linux/pci.h> work with
> const pci_device_id. So mark the non-const structs as const.
>
> Signed-off-by: Arvind Yadav
On Sun, Jul 30, 2017 at 02:07:09PM +0530, Arvind Yadav wrote:
> pci_device_id are not supposed to change at runtime. All functions
> working with pci_device_id provided by <linux/pci.h> work with
> const pci_device_id. So mark the non-const structs as const.
Can't this go all in one patch instead of
https://bugzilla.kernel.org/show_bug.cgi?id=196543
--- Comment #1 from james.bottom...@hansenpartnership.com ---
On Mon, 2017-07-31 at 02:26 +, bugzilla-dae...@bugzilla.kernel.org
wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=196543
>
> Bug ID: 196543
> Summary: