On Thu, Nov 02, 2017 at 11:24:31PM +0800, Ming Lei wrote:
> Hi Jens,
>
> This patchset avoids allocating a driver tag beforehand for the flush rq
> when an I/O scheduler is in use, so the flush rq is no longer treated
> specially wrt. getting/putting the driver tag, and the code is cleaned up
> a lot; for example, reorder_tags_to_front() is
On Sat, Nov 04, 2017 at 12:47:31AM +0800, Ming Lei wrote:
> On Fri, Nov 03, 2017 at 12:13:09PM -0400, Laurence Oberman wrote:
> > Hi
> > I had it working some time back. I am off today to take my son to the
> > doctor.
> > I will get Bart's test working again this weekend.
>
> Hello Laurence and Bart,
It is very expensive to atomic_inc/atomic_dec the host-wide counter
host->busy_count, and that should be avoided via blk-mq's driver-tag
mechanism, which uses the more efficient sbitmap queue.
Also, we don't check atomic_read(&sdev->device_busy) in scsi_mq_get_budget()
and
Use the sgl_alloc() and sgl_free() functions instead of open coding
these functions.
Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Reviewed-by: Hannes Reinecke
Cc: Keith Busch
Cc: Christoph
Hello Jens,
As you know there are multiple drivers that both allocate a scatter/gather
list and populate that list with pages. This patch series moves the code for
allocating and freeing such scatterlists from these drivers into
lib/scatterlist.c. Please consider this patch series for kernel
Many kernel drivers contain code that allocates and frees both a
scatterlist and the pages that populate that scatterlist.
Introduce functions in lib/scatterlist.c that perform these tasks
instead of duplicating this functionality in multiple drivers.
Only include these functions in the build if
Use the sgl_alloc_order() and sgl_free_order() functions instead
of open coding these functions.
Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Cc: linux-s...@vger.kernel.org
Cc: Martin K. Petersen
Cc: Anil
Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Cc: linux-s...@vger.kernel.org
Cc: Martin K. Petersen
Cc: Anil Ravindranath
---
drivers/scsi/pmcraid.h | 1 -
1 file
Use the sgl_alloc_order() and sgl_free_order() functions instead
of open coding these functions.
Signed-off-by: Bart Van Assche
Acked-by: Brian King
Reviewed-by: Johannes Thumshirn
Reviewed-by: Hannes Reinecke
Use the sgl_alloc() and sgl_free() functions instead of open coding
these functions.
Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Reviewed-by: Hannes Reinecke
Cc: Keith Busch
Cc: Christoph
Use the sgl_alloc() and sgl_free() functions instead of open coding
these functions.
Signed-off-by: Bart Van Assche
Cc: Ard Biesheuvel
Cc: Herbert Xu
---
crypto/Kconfig | 1 +
crypto/scompress.c | 51
Use the sgl_alloc_order() and sgl_free() functions instead of open
coding these functions.
Signed-off-by: Bart Van Assche
Cc: Nicholas A. Bellinger
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Sagi Grimberg
On 11/03/2017 12:31 PM, Bart Van Assche wrote:
> On Mon, 2017-10-16 at 16:32 -0700, Bart Van Assche wrote:
>> blk_mq_get_tag() can modify data->ctx. This means that in the
>> error path of blk_mq_get_request() data->ctx should be passed to
>> blk_mq_put_ctx() instead of local_ctx. Note: since
On Mon, 2017-10-16 at 16:32 -0700, Bart Van Assche wrote:
> blk_mq_get_tag() can modify data->ctx. This means that in the
> error path of blk_mq_get_request() data->ctx should be passed to
> blk_mq_put_ctx() instead of local_ctx. Note: since blk_mq_put_ctx()
> ignores its argument, this patch does
MD's rdev_set_badblocks() expects badblocks_set() to return 1 if
badblocks are disabled; otherwise rdev_set_badblocks() records
superblock changes and returns success, and md fails to report an
IO error when it should.
This bug has existed since badblocks were introduced
On 09/22/2017 09:36 AM, weiping zhang wrote:
> If blk-mq uses the "none" io scheduler, nr_requests gets a wrong value
> when a number > tag_set->queue_depth is written: blk_mq_tag_update_depth
> will take the smaller value min(nr, set->queue_depth), but q->nr_requests
> still ends up with the wrong (unclamped) value.
>
> Reproduce:
>
On Fri, Sep 22, 2017 at 11:36:28PM +0800, weiping zhang wrote:
> If blk-mq uses the "none" io scheduler, nr_requests gets a wrong value
> when a number > tag_set->queue_depth is written: blk_mq_tag_update_depth
> will take the smaller value min(nr, set->queue_depth), but q->nr_requests
> still ends up with the wrong (unclamped) value.
>
On Fri, Nov 03, 2017 at 10:13:38AM -0600, Liu Bo wrote:
> Hi Shaohua,
>
> Given it's related to md, can you please take this thru your tree?
Yes, the patch makes sense. Can you resend the patch to me? I can't find it in
my inbox
Thanks,
Shaohua
> Thanks,
>
> -liubo
>
> On Wed, Sep 27, 2017
Hi Shaohua,
Given it's related to md, can you please take this thru your tree?
Thanks,
-liubo
On Wed, Sep 27, 2017 at 04:13:17PM -0600, Liu Bo wrote:
> MD's rdev_set_badblocks() expects badblocks_set() to return 1 if
> badblocks are disabled; otherwise rdev_set_badblocks() will record
>
On Fri, Nov 03, 2017 at 12:13:09PM -0400, Laurence Oberman wrote:
> Hi
> I had it working some time back. I am off today to take my son to the
> doctor.
> I will get Bart's test working again this weekend.
Hello Laurence and Bart,
Just found srp-test starts to work now with v4.14-rc4 kernel, and
On 11/02/2017 12:29 PM, Christoph Hellwig wrote:
> Hi Jens,
>
> these patches add the block layer helpers / tweaks for NVMe multipath
> support. Can you review them for inclusion?
>
> There have been no functional changes to the versions posted with
> previous nvme multipath patchset.
I've
On 11/03/2017 06:17 AM, Christoph Hellwig wrote:
> Hi Jens,
>
> below are the currently queued nvme updates for Linux 4.15. There are
> a few more things that could make it for this merge window, but I'd
> like to get things into linux-next, especially for the unlikely case
> that Linus decided
On Fri, 2017-11-03 at 23:47 +0800, Ming Lei wrote:
> Forgot to mention, there is a failure when running 'make' under srp-test
> because the shellcheck package is missing in RHEL7. Can that be the cause
> of test failure? If yes, could you provide a special version of srp-test
> which doesn't depend on
On Fri, 2017-11-03 at 23:18 +0800, Ming Lei wrote:
> BTW, Laurence found there is kernel crash in his IB/SRP test when running
> for-next branch of block tree, so we just test v4.14-rc4 w/wo my blk-mq
> patches.
One fix for a *sporadic* initiator crash has been queued for the v4.15 merge
window.
On Fri, Nov 03, 2017 at 03:23:14PM +, Bart Van Assche wrote:
> On Fri, 2017-11-03 at 11:50 +0800, Ming Lei wrote:
> > On Fri, Nov 03, 2017 at 02:42:50AM +, Bart Van Assche wrote:
> > > On Fri, 2017-11-03 at 10:12 +0800, Ming Lei wrote:
> > > > [root@ibclient srp-test]# ./run_tests
> > > >
On Fri, 2017-11-03 at 23:18 +0800, Ming Lei wrote:
> Subject: [PATCH] SCSI_MQ: fix IO hang in case of queue busy
>
> We have to insert the rq back before checking .device_busy;
> otherwise, when IO completes just after the check and before
> this req is added to hctx->dispatch, this queue may
On Fri, 2017-11-03 at 11:50 +0800, Ming Lei wrote:
> On Fri, Nov 03, 2017 at 02:42:50AM +, Bart Van Assche wrote:
> > On Fri, 2017-11-03 at 10:12 +0800, Ming Lei wrote:
> > > [root@ibclient srp-test]# ./run_tests
> > > modprobe: FATAL: Module target_core_mod is in use.
> >
> > LIO must be
On Fri, Nov 03, 2017 at 02:42:50AM +, Bart Van Assche wrote:
> On Fri, 2017-11-03 at 10:12 +0800, Ming Lei wrote:
> > [root@ibclient srp-test]# ./run_tests
> > modprobe: FATAL: Module target_core_mod is in use.
>
> LIO must be unloaded before srp-test software is started.
Hi Bart,
Even with
On Fri, Nov 03, 2017 at 01:55:16PM +0100, Christoph Hellwig wrote:
> On Fri, Nov 03, 2017 at 11:02:50AM +0100, Javier González wrote:
> > Signed-off-by: Javier González
> > ---
> > drivers/nvme/host/core.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> >
On Fri, Nov 03, 2017 at 01:53:40PM +0100, Christoph Hellwig wrote:
> > - if (ns && ns->ms &&
> > + if (ns->ms &&
> > (!ns->pi_type || ns->ms != sizeof(struct t10_pi_tuple)) &&
> > !blk_integrity_rq(req) && !blk_rq_is_passthrough(req))
> > return BLK_STS_NOTSUPP;
>
Add error-handling comments to explain what would also be done for blk-mq
if it used the legacy error-handling.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 36 +++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff
Until mmc has blk-mq support fully implemented and tested, add a
parameter use_blk_mq, defaulting to false unless the config option
MMC_MQ_DEFAULT is selected.
Signed-off-by: Adrian Hunter
---
drivers/mmc/Kconfig | 11 +++
drivers/mmc/core/core.c | 7 +++
From: Venkat Gopalakrishnan
This patch adds CMDQ support for command-queue compatible
hosts.
A command queue was added in the eMMC 5.1 specification. This
enables the controller to process up to 32 requests at
a time.
Adrian Hunter contributed renaming to cqhci, recovery,
For blk-mq, add support for completing requests directly in the ->done
callback. That means that error handling and urgent background operations
must be handled by recovery_work in that case.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 100
Add CQHCI initialization and implement CQHCI operations for Intel GLK.
Signed-off-by: Adrian Hunter
---
drivers/mmc/host/Kconfig | 1 +
drivers/mmc/host/sdhci-pci-core.c | 155 +-
2 files changed, 155 insertions(+), 1
card_busy_detect() doesn't set a correct timeout, and it doesn't take care
of error status bits. Stop using it for blk-mq.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 117 +++
1 file changed, 109 insertions(+),
Recovery is simpler to understand if it is only used for errors. Create a
separate function for card polling.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 27 ++-
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git
There are only a few things the recovery needs to do. Primarily, it just
needs to:
Determine the number of bytes transferred
Get the card back to transfer state
Determine whether to retry
There are also a couple of additional features:
Reset the card before the
Add CQE support to the block driver, including:
- optionally using DCMD for flush requests
- "manually" issuing discard requests
- issuing read / write requests to the CQE
- supporting block-layer timeouts
- handling recovery
- supporting re-tuning
CQE offers 25% - 50%
Define and use a blk-mq queue. Discards and flushes are processed
synchronously, but reads and writes asynchronously. In order to support
slow DMA unmapping, DMA unmapping is not done until after the next request
is started. That means the request is not completed until then. If there is
no next
Hi
Here is V13 of the hardware command queue patches without the software
command queue patches, now using blk-mq and now with blk-mq support for
non-CQE I/O.
HW CMDQ offers 25% - 50% better random multi-threaded I/O. I see a slight
2% drop in sequential read speed but no change to sequential
> On 3 Nov 2017, at 13.53, Christoph Hellwig wrote:
>
>> -if (ns && ns->ms &&
>> +if (ns->ms &&
>> (!ns->pi_type || ns->ms != sizeof(struct t10_pi_tuple)) &&
>> !blk_integrity_rq(req) && !blk_rq_is_passthrough(req))
>> return BLK_STS_NOTSUPP;
>
> On 3 Nov 2017, at 13.54, Christoph Hellwig wrote:
>
> On Fri, Nov 03, 2017 at 11:02:49AM +0100, Javier González wrote:
>> Compare subnqns using NVMF_NQN_SIZE as it is < 256
>>
>> Signed-off-by: Javier González
>> ---
>> drivers/nvme/host/core.c | 2 +-
>> 1
On Fri, Nov 03, 2017 at 11:02:50AM +0100, Javier González wrote:
> Signed-off-by: Javier González
> ---
> drivers/nvme/host/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index
On Fri, Nov 03, 2017 at 11:02:49AM +0100, Javier González wrote:
> Compare subnqns using NVMF_NQN_SIZE as it is < 256
>
> Signed-off-by: Javier González
> ---
> drivers/nvme/host/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git
Hi Jens,
below are the currently queued nvme updates for Linux 4.15. There are
a few more things that could make it for this merge window, but I'd
like to get things into linux-next, especially for the unlikely case
that Linus decided to cut -rc8.
Highlights:
- support for SGLs in the PCIe
On the rw path, the ns is assumed to be set. However, a check is still
done, inherited from the time the code resided in nvme_queue_rq().
Eliminate this check, which also eliminates a smatch complaint about not
doing proper NULL checks on ns.
Signed-off-by: Javier González
---
Compare subnqns using NVMF_NQN_SIZE as it is < 256
Signed-off-by: Javier González
---
drivers/nvme/host/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index bd1d5ff911c9..ae8ab0a1ef0d 100644
---
Signed-off-by: Javier González
---
drivers/nvme/host/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ae8ab0a1ef0d..f05c81774abf 100644
--- a/drivers/nvme/host/core.c
+++
Fix a number of small things reported by smatch on the nvme driver
Javier González (3):
nvme: do not check for ns on rw path
nvme: compare NQN string with right size
nvme: fix eui_show() print format
drivers/nvme/host/core.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--