the structure member indentation
Christoph Hellwig (1):
nvmet: mark nvmet_genctr static
Hannes Reinecke (1):
nvme: add a numa_node field to struct nvme_ctrl
Israel Rukshin (3):
nvme: Remove unused forward declaration
nvmet-rdma: Add unlikely for response allocated check
nvme
The following changes since commit ba7aeae5539c7a74cf07a2bc61281a93c50e:
block, bfq: fix decrement of num_active_groups (2018-12-07 07:40:07 -0700)
are available in the Git repository at:
git://git.infradead.org/nvme.git nvme-4.20
for you to fetch changes up to
On Thu, Dec 06, 2018 at 02:48:12PM -0200, Thadeu Lima de Souza Cascardo wrote:
> Without this exposure, lsblk will fail as it tries to find out the
> device's dev_t numbers. This causes a real problem for nvme multipath
> devices, as their slaves are hidden.
>
> Exposing them fixes the problem,
On Thu, Dec 06, 2018 at 12:14:29PM -0700, Jens Axboe wrote:
> On 12/6/18 12:11 PM, Jeff Moyer wrote:
> > Jens Axboe writes:
> >
> >> From: Christoph Hellwig
> >>
> >> Just call blk_poll on the iocb cookie, we can derive the block device
> >>
On Thu, Dec 06, 2018 at 08:06:16AM -0800, Bart Van Assche wrote:
> On Wed, 2018-12-05 at 22:59 +0800, Weiping Zhang wrote:
> > Weiping Zhang wrote on Wednesday, December 5, 2018 at 10:49 PM:
> > > Christoph Hellwig wrote on Wednesday, December 5, 2018 at 10:40 PM:
> > > > Can you please also send a pa
On Wed, Nov 28, 2018 at 06:38:17AM -0200, Thadeu Lima de Souza Cascardo wrote:
> I will send a followup that includes your two patches, fixing up this one,
> plus another two, because that is simply reverting the revert of Hannes'
> patch. Which means it still has the lsblk bug.
>
> Then, we
being invalid:
Looks good as the first quick fix:
Reviewed-by: Christoph Hellwig
But as we discussed I think we need to be even more careful, including
the read- and write-only check below in the thread or some higher-level
approach.
On Tue, Dec 04, 2018 at 04:02:36PM +0100, Ard Biesheuvel wrote:
> On Tue, 4 Dec 2018 at 16:01, Christoph Hellwig wrote:
> >
> > Why does this go to linux-block?
>
> Because xor_blocks() is part of the RAID driver?
The only caller of xor_blocks() seems btrfs. And the RAID
On Mon, Dec 03, 2018 at 05:00:59PM -0800, Sagi Grimberg wrote:
>
>> @@ -2428,7 +2426,8 @@ static void nvme_dev_disable(struct nvme_dev *dev,
>> bool shutdown)
>> nvme_stop_queues(&dev->ctrl);
>> if (!dead && dev->ctrl.queue_count > 0) {
>> -nvme_disable_io_queues(dev);
>> +
On Mon, Dec 03, 2018 at 05:11:43PM -0800, Sagi Grimberg wrote:
>> If it really becomes an issue we
>> should rework the nvme code to also skip the multipath code for any
>> private namespace, even if that could mean some trouble when rescanning.
>>
>
> This requires some explanation? skip the
On Mon, Dec 03, 2018 at 04:58:25PM -0800, Sagi Grimberg wrote:
>
>> +static int nvme_poll_irqdisable(struct nvme_queue *nvmeq, unsigned int tag)
>
> Do we still need to carry the tag around?
Yes, the timeout handler polls for a specific tag.
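The point can be sketched in a few lines: the tag is carried around because the timeout handler is looking for one specific completion among whatever a poll pass drains. A minimal userspace illustration (function and variable names are hypothetical, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

/* Scan the completion tags a poll pass collected and report whether the
 * one tag the timeout handler cares about was among them. */
static bool poll_found_tag(const unsigned short *cqe_tags, int n,
                           unsigned short tag)
{
    for (int i = 0; i < n; i++)
        if (cqe_tags[i] == tag)
            return true;
    return false;
}
```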
On Mon, Dec 03, 2018 at 04:54:15PM -0800, Sagi Grimberg wrote:
>
>> @@ -2173,6 +2157,8 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
>> if (nr_io_queues == 0)
>> return 0;
>> +
>> +clear_bit(NVMEQ_ENABLED, &nvmeq->flags);
>>
>
> This is a change of behavior, looks
Why does this go to linux-block?
On Mon, Dec 03, 2018 at 04:49:56PM -0800, Sagi Grimberg wrote:
>> @@ -103,12 +101,17 @@ static inline struct blk_mq_hw_ctx
>> *blk_mq_map_queue(struct request_queue *q,
>> unsigned int flags,
>>
On Fri, Nov 30, 2018 at 07:21:02PM +, Al Viro wrote:
> On Fri, Nov 30, 2018 at 09:56:43AM -0700, Jens Axboe wrote:
> > For an ITER_KVEC, we can just iterate the iov and add the pages
> > to the bio directly.
>
> > + page = virt_to_page(kv->iov_base);
> > + size =
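The quoted patch maps an ITER_KVEC straight onto bio pages via virt_to_page(). The page-splitting arithmetic that involves can be modeled in userspace as follows (PAGE_SIZE value and the helper name are assumptions for the sketch, not the kernel code):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL  /* assumption: 4K pages */

/* Count how many page-sized segments cover a kernel-virtual range
 * [base, base + len), i.e. how many page entries one kvec element
 * would expand to when added to a bio. */
static size_t kvec_page_segments(unsigned long base, size_t len)
{
    size_t nsegs = 0;

    while (len) {
        unsigned long off = base & (PAGE_SIZE - 1);
        size_t seg = PAGE_SIZE - off;

        if (seg > len)
            seg = len;
        base += seg;
        len -= seg;
        nsegs++;
    }
    return nsegs;
}
```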
On Fri, Nov 30, 2018 at 09:56:31AM -0700, Jens Axboe wrote:
> Replace the percpu_ref_put() + kmem_cache_free() with a call to
> iocb_put() instead.
>
> Signed-off-by: Jens Axboe
Looks good,
Reviewed-by: Christoph Hellwig
On Fri, Nov 30, 2018 at 09:56:30AM -0700, Jens Axboe wrote:
> Plugging is meant to optimize submission of a string of IOs, if we don't
> have more than 2 being submitted, don't bother setting up a plug.
>
> Signed-off-by: Jens Axboe
Looks fine,
Reviewed-by: Christoph Hellwig
> - req->ki_ctx = ctx;
> return req;
Why the reformatting? Otherwise this looks fine to me:
Reviewed-by: Christoph Hellwig
On Fri, Nov 30, 2018 at 10:17:49AM -0700, Jens Axboe wrote:
> > Setting REQ_NOWAIT from inside the block layer will make the code that
> > submits requests harder to review. Have you considered to make this code
> > fail I/O if REQ_NOWAIT has not been set and to require that the context
> > that
On Fri, Nov 30, 2018 at 10:14:31AM -0700, Jens Axboe wrote:
> On 11/30/18 10:13 AM, Christoph Hellwig wrote:
> > I think we'll need to queue this up for 4.21 ASAP independent of the
> > rest, given that with separate poll queues userspace could otherwise
> > submit I/O that
This was intended to support users like nvme multipath, but is just
getting in the way and adding another indirect call.
Signed-off-by: Christoph Hellwig
---
block/blk-core.c | 23 ---
block/blk-mq.c | 24 +++-
include/linux/blkdev.h | 2
If the user did set up polling in the driver we should not require
another knob in the block layer to enable it.
Signed-off-by: Christoph Hellwig
---
block/blk-mq.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9c90c5038d07..a550a00ac00c 100644
interrupts.
With that we can stop taking the cq_lock for normal queues.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
---
drivers/nvme/host/pci.c | 37 ++---
1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/pci.c b
This is the last place outside of nvme_irq that handles CQEs from
interrupt context, and thus is in the way of removing the cq_lock for
normal queues, and avoiding lockdep warnings on the poll queues, for
which we already take it without IRQ disabling.
Signed-off-by: Christoph Hellwig
If it really becomes an issue we should rework the nvme code to also skip
the multipath code for any private namespace, even if that could mean some
trouble when rescanning.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/multipath.c | 16
1 file changed, 16 deletions(-)
diff --git a/drivers/nvme/host/multipath.
The code was always a bit of a hack that digs far too much into
RDMA core internals. Let's kick it out and reimplement proper
dedicated poll queues as needed.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/rdma.c | 24
1 file changed, 24 deletions(-)
diff --git
This avoids having to have different mq_ops for different setups
with or without poll queues.
Signed-off-by: Christoph Hellwig
---
block/blk-sysfs.c | 2 +-
drivers/nvme/host/pci.c | 29 +
2 files changed, 10 insertions(+), 21 deletions(-)
diff --git a/block
Pass the opcode for the delete SQ/CQ command as an argument instead of
the somewhat confusing pass loop.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
---
drivers/nvme/host/pci.c | 41 -
1 file changed, 20 insertions(+), 21 deletions(-)
diff
This gets rid of all the messing with cq_vector and the ->polled field
by using an atomic bitop to mark the queue enabled or not.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
---
drivers/nvme/host/pci.c | 43 ++---
1 file changed, 15 inserti
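The atomic-bitop approach this changelog describes can be modeled in userspace C11, with atomic_fetch_or/atomic_fetch_and standing in for the kernel's test_and_set_bit/clear_bit (struct, helper names, and the bit index are illustrative only):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NVMEQ_ENABLED 0  /* bit index, name borrowed from the patch */

struct model_nvmeq {
    atomic_ulong flags;
};

/* Returns true if this caller transitioned the queue to enabled,
 * mirroring a test_and_set_bit-style enable. */
static bool model_queue_enable(struct model_nvmeq *q)
{
    unsigned long old = atomic_fetch_or(&q->flags, 1UL << NVMEQ_ENABLED);

    return !(old & (1UL << NVMEQ_ENABLED));
}

static void model_queue_disable(struct model_nvmeq *q)
{
    atomic_fetch_and(&q->flags, ~(1UL << NVMEQ_ENABLED));
}

static bool model_queue_enabled(struct model_nvmeq *q)
{
    return atomic_load(&q->flags) & (1UL << NVMEQ_ENABLED);
}
```

The single bit replaces juggling two separate fields (cq_vector and ->polled) to answer the one question the code actually asks: is this queue usable?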
queue for everything not explicitly
marked, the optional ones are read and poll queues.
Signed-off-by: Christoph Hellwig
---
block/blk-mq-sysfs.c| 9 +-
block/blk-mq.h | 21 +++--
drivers/nvme/host/pci.c | 68 +++--
include/linux/blk-mq.h
We have three places that can poll for I/O completions on a normal
interrupt-enabled queue. All of them are in slow path code, so
consolidate them to a single helper that uses spin_lock_irqsave and
removes the fast path cqe_pending check.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host
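A rough userspace model of the consolidation described above: take the completion lock unconditionally (a plain flag stands in for spin_lock_irqsave here), drain whatever is pending, and report whether anything was found. There is deliberately no fast-path cqe_pending peek before the lock. All names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

struct model_queue {
    int cq_locked;     /* stands in for the irq-safe spinlock */
    int cqe_pending;   /* completions waiting to be reaped */
    int processed;     /* completions handled so far */
};

/* Slow-path poll helper: lock, drain, unlock, report. */
static bool model_poll_irqdisable(struct model_queue *q)
{
    bool found;

    q->cq_locked = 1;              /* spin_lock_irqsave(...) */
    found = q->cqe_pending > 0;
    q->processed += q->cqe_pending;
    q->cqe_pending = 0;
    q->cq_locked = 0;              /* spin_unlock_irqrestore(...) */
    return found;
}
```

Since all three callers are slow path, paying the unconditional lock is a fair trade for having one helper instead of three open-coded variants.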
This will allow us to simplify both the regular NVMe interrupt handler
and the upcoming aio poll code. In addition to that the separate
queues are generally a good idea for performance reasons.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 18 +-
1 file changed
Hi all,
this series optimizes a few bits in the block layer and nvme code
related to polling.
It starts by moving the queue types recently introduced entirely into
the block layer instead of requiring an indirect call for them.
It then switches nvme and the block layer to only allow polling
with
Use a bit flag to mark if the SQ was allocated from the CMB, and clean
up the surrounding code a bit.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
---
drivers/nvme/host/pci.c | 33 +++--
1 file changed, 15 insertions(+), 18 deletions(-)
diff --git
On Fri, Nov 30, 2018 at 01:36:09PM -0700, Jens Axboe wrote:
> On 11/30/18 1:26 PM, Keith Busch wrote:
> > A driver may wish to iterate every tagged request, not just ones that
> > satisfy blk_mq_request_started(). The intended use is so a driver may
> > terminate entered requests on quiesced
Looks good,
Reviewed-by: Christoph Hellwig
On Fri, Nov 30, 2018 at 08:56:25AM -0700, Jens Axboe wrote:
> We only need the request fields and the end_io time if we have stats
> enabled, or if we have a scheduler attached as those may use it for
> completion time stats.
>
> Signed-off-by: Jens Axboe
> ---
> block/blk-mq.c | 13
I think we'll need to queue this up for 4.21 ASAP independent of the
rest, given that with separate poll queues userspace could otherwise
submit I/O that will never get polled for anywhere.
On Fri, Nov 30, 2018 at 08:26:24AM -0700, Jens Axboe wrote:
> On 11/30/18 8:24 AM, Christoph Hellwig wrote:
> > Various fixlets all over, including throwing in a 'default y' for the
> > multipath code, given that we want people to actually enable it for full
> > functiona
)
Christoph Hellwig (2):
nvme: enable multipathing by default
nvme: warn when finding multi-port subsystems without multipathing enabled
Ewan D. Milne (1):
nvme-fc: initialize nvme_req(rq)->ctrl after calling
__nvme_fc_init_requ
On Fri, Nov 30, 2018 at 03:20:51PM +, Jens Axboe wrote:
> Thanks - are you going to post a v3? Would like to get this staged.
Yes, will do. Either late tonight or over the weekend.
:
---
From 923ee8e2358de04037ba6f7269aaf321f7b2e173 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Fri, 30 Nov 2018 09:23:48 +0100
Subject: block: avoid extra bio reference for async O_DIRECT
The bio referencing has a trick that doesn't do any actual atomic
inc/dec on the reference co
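The trick alluded to can be modeled in userspace: the reference count starts as an implicit 1 and is only touched atomically once a second reference is actually taken (a "reffed" flag records that). The sole-owner put then frees directly with no atomic op. This is a sketch of the general idea, not the kernel's bio code; all names are illustrative:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct model_bio {
    bool reffed;        /* set once an extra reference exists */
    atomic_int cnt;     /* only consulted when reffed */
    int freed;          /* observable effect for the sketch */
};

static void model_bio_get(struct model_bio *bio)
{
    if (!bio->reffed) {
        bio->reffed = true;
        atomic_store(&bio->cnt, 1);  /* account the implicit ref */
    }
    atomic_fetch_add(&bio->cnt, 1);
}

static void model_bio_put(struct model_bio *bio)
{
    if (!bio->reffed) {
        bio->freed = 1;              /* sole owner: no atomic dec */
        return;
    }
    if (atomic_fetch_sub(&bio->cnt, 1) == 1)
        bio->freed = 1;
}
```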
On Thu, Nov 29, 2018 at 02:08:40PM -0700, Keith Busch wrote:
> On Thu, Nov 29, 2018 at 08:13:05PM +0100, Christoph Hellwig wrote:
> > @@ -1050,12 +1051,16 @@ static irqreturn_t nvme_irq(int irq, void *data)
> > irqreturn_t ret = IRQ_NONE;
> > u16 start, end;
>
On Thu, Nov 29, 2018 at 01:36:32PM -0700, Keith Busch wrote:
> On Thu, Nov 29, 2018 at 08:13:04PM +0100, Christoph Hellwig wrote:
> > This is the last place outside of nvme_irq that handles CQEs from
> > interrupt context, and thus is in the way of removing the cq_lock for
>
On Thu, Nov 29, 2018 at 01:19:14PM -0700, Keith Busch wrote:
> On Thu, Nov 29, 2018 at 08:12:58PM +0100, Christoph Hellwig wrote:
> > +enum hctx_type {
> > + HCTX_TYPE_DEFAULT, /* all I/O not otherwise accounted for */
> > + HCTX_TYPE_READ,
On Thu, Nov 29, 2018 at 07:50:09PM +, Jens Axboe wrote:
> > in our post-spectre world. Also having too many queue type is just
> > going to create confusion, so I'd rather manage them centrally.
> >
> > Note that the queue type naming and ordering changes a bit - the
> > first index now is
This was intended to support users like nvme multipath, but is just
getting in the way and adding another indirect call.
Signed-off-by: Christoph Hellwig
---
block/blk-core.c | 23 ---
block/blk-mq.c | 24 +++-
include/linux/blkdev.h | 2
This will allow us to simplify both the regular NVMe interrupt handler
and the upcoming aio poll code. In addition to that the separate
queues are generally a good idea for performance reasons.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 18 +-
1 file changed
This avoids having to have different mq_ops for different setups
with or without poll queues.
Signed-off-by: Christoph Hellwig
---
block/blk-sysfs.c | 2 +-
drivers/nvme/host/pci.c | 29 +
2 files changed, 10 insertions(+), 21 deletions(-)
diff --git a/block
Pass the opcode for the delete SQ/CQ command as an argument instead of
the somewhat confusing pass loop.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 41 -
1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/drivers/nvme/host
interrupts.
With that we can stop taking the cq_lock for normal queues.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 37 ++---
1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index
If the user did set up polling in the driver we should not require
another knob in the block layer to enable it.
Signed-off-by: Christoph Hellwig
---
block/blk-mq.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9c90c5038d07..a550a00ac00c 100644
This is the last place outside of nvme_irq that handles CQEs from
interrupt context, and thus is in the way of removing the cq_lock for
normal queues, and avoiding lockdep warnings on the poll queues, for
which we already take it without IRQ disabling.
Signed-off-by: Christoph Hellwig
If it really becomes an issue we should rework the nvme code to also skip
the multipath code for any private namespace, even if that could mean some
trouble when rescanning.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/multipath.c | 16
1 file changed, 16 deletions(-)
diff --git a/drivers/nvme/host/multipath.
The code was always a bit of a hack that digs far too much into
RDMA core internals. Let's kick it out and reimplement proper
dedicated poll queues as needed.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/rdma.c | 24
1 file changed, 24 deletions(-)
diff --git
queue for everything not explicitly
marked, the optional ones are read and poll queues.
Signed-off-by: Christoph Hellwig
---
block/blk-mq.h | 21 +++--
drivers/nvme/host/pci.c | 68 +++--
include/linux/blk-mq.h | 15 -
3 files changed, 43
This gets rid of all the messing with cq_vector and the ->polled field
by using an atomic bitop to mark the queue enabled or not.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 43 ++---
1 file changed, 15 insertions(+), 28 deletions(-)
d
Hi all,
this series optimizes a few bits in the block layer and nvme code
related to polling.
It starts by moving the queue types recently introduced entirely into
the block layer instead of requiring an indirect call for them.
It then switches nvme and the block layer to only allow polling
with
Use a bit flag to mark if the SQ was allocated from the CMB, and clean
up the surrounding code a bit.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 33 +++--
1 file changed, 15 insertions(+), 18 deletions(-)
diff --git a/drivers/nvme/host/pci.c b
We have three places that can poll for I/O completions on a normal
interrupt-enabled queue. All of them are in slow path code, so
consolidate them to a single helper that uses spin_lock_irqsave and
removes the fast path cqe_pending check.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host
On Thu, Nov 29, 2018 at 10:02:25AM -0700, Jens Axboe wrote:
> On 11/29/18 8:47 AM, Christoph Hellwig wrote:
> >> +static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16
> >> index)
> >> +{
> >> + if (++index == nvmeq->q_depth)
> >&g
I think we need a check for the presence of a timeout method and
only show this attribute if the driver actually supports block level
timeouts.
On Thu, Nov 29, 2018 at 07:49:59AM -0800, Christoph Hellwig wrote:
> > + /*
> > +* Use plugging if we have a ->commit_rqs() hook as well,
> > +* as we know the driver uses bd->last in a smart
> > +* fashion.
> > +
> + /*
> + * Use plugging if we have a ->commit_rqs() hook as well,
> + * as we know the driver uses bd->last in a smart
> + * fashion.
> + */
Nitpick: this could flow on just two lines:
/*
* Use
Signed-off-by: Jens Axboe
Looks good,
Reviewed-by: Christoph Hellwig
On Wed, Nov 28, 2018 at 06:35:36AM -0700, Jens Axboe wrote:
> We need this for blk-mq to kick things into gear, if we told it that
> we had more IO coming, but then failed to deliver on that promise.
>
> Reviewed-by: Omar Sandoval
> Signed-off-by: Jens Axboe
Looks good,
Reviewe
> +static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16 index)
> +{
> + if (++index == nvmeq->q_depth)
> + return 0;
> +
> + return index;
> +}
This is unused now.
Also what about this little cleanup on top?
diff --git a/drivers/nvme/host/pci.c
responsible for flushing pending requests, if it uses bd->last to
> optimize that part. This works like before, no changes there.
>
> Reviewed-by: Omar Sandoval
> Reviewed-by: Ming Lei
> Reviewed-by: Christoph Hellwig
I don't think I actually reviewed it before in this form.
> + } else if (plug && q->mq_ops->commit_rqs) {
> + /*
> + * If we have a ->commit_rqs(), then we know the driver can
> + * batch submission doorbell updates. Add rq to plug list,
> + * and flush if we exceed the plug count only.
> +
On Mon, Nov 26, 2018 at 09:35:54AM -0700, Jens Axboe wrote:
> We need this for blk-mq to kick things into gear, if we told it that
> we had more IO coming, but then failed to deliver on that promise.
queue_rq also calls finish_fdc under atari_disable_irq( IRQ_MFP_FDC ),
do we need that here as
On Mon, Nov 26, 2018 at 09:35:53AM -0700, Jens Axboe wrote:
> We need this for blk-mq to kick things into gear, if we told it that
> we had more IO coming, but then failed to deliver on that promise.
>
> Signed-off-by: Jens Axboe
Looks fine,
Reviewed-by: Christoph Hellwig
> +static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16 index)
> +{
> + if (++index == nvmeq->q_depth)
> + return 0;
> +
> + return index;
Can you please just drop this helper? It makes the code not only
less readable but also longer.
Otherwise the change
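For reference, the wrap-around the quoted helper performs is just this (shown standalone for illustration; the review above asks for it to be open-coded at the call site rather than kept as a helper):

```c
#include <assert.h>

/* Advance a ring index, wrapping back to 0 at q_depth; equivalent to
 * the quoted nvme_next_ring_index(). */
static unsigned int ring_next(unsigned int index, unsigned int q_depth)
{
    if (++index == q_depth)
        return 0;
    return index;
}
```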
responsible for flushing pending requests, if it uses bd->last to
> optimize that part. This works like before, no changes there.
>
> Signed-off-by: Jens Axboe
This looks fine, but normally I would only add the method together with
the actual user..
Reviewed-by: Christoph Hellwig
Thanks,
applied to nvme-4.20 with slight tweaks to the changelog.
pensive list loop for this.
>
> Signed-off-by: Jens Axboe
Looks good,
Reviewed-by: Christoph Hellwig
On Fri, Nov 23, 2018 at 11:34:11AM -0700, Jens Axboe wrote:
> It's pointless to do so, we are by definition on the CPU we want/need
> to be, as that's the one waiting for a completion event.
>
> Signed-off-by: Jens Axboe
Looks good,
Reviewed-by: Christoph Hellwig
> -int blk_poll(struct request_queue *q, blk_qc_t cookie)
> +int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
The parameter will need some documentation.
everything has settled.
Reviewed-by: Christoph Hellwig
> -bool blk_poll(struct request_queue *q, blk_qc_t cookie)
> +int blk_poll(struct request_queue *q, blk_qc_t cookie)
Can you add a comment explaining the return value?
> {
> if (!q->poll_fn || !blk_qc_t_valid(cookie))
> return false;
And false certainly isn't an integer.
If the user did set up polling in the driver we should not require
another knob in the block layer to enable it.
Signed-off-by: Christoph Hellwig
---
block/blk-mq.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 32e43bea36e3..9847e9d2fa7b 100644
Pass the opcode for the delete SQ/CQ command as an argument instead of
the somewhat confusing pass loop.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 41 -
1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/drivers/nvme/host
This avoids having to have different mq_ops for different setups
with or without poll queues.
Signed-off-by: Christoph Hellwig
---
block/blk-sysfs.c | 2 +-
drivers/nvme/host/pci.c | 27 ---
2 files changed, 9 insertions(+), 20 deletions(-)
diff --git a/block/blk
queue for everything not explicitly
marked, the optional ones are read and poll queues.
Signed-off-by: Christoph Hellwig
---
block/blk-mq.h | 21 +++--
drivers/nvme/host/pci.c | 68 +++--
include/linux/blk-mq.h | 15 -
3 files changed, 43
interrupts.
With that we can stop taking the cq_lock for normal queues.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 37 ++---
1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index
This was intended to support users like nvme multipath, but is just
getting in the way and adding another indirect call.
Signed-off-by: Christoph Hellwig
---
block/blk-core.c | 11 ---
block/blk-mq.c | 12 +++-
include/linux/blkdev.h | 2 --
3 files changed, 7
Hi all,
this series optimizes a few bits in the block layer and nvme code
related to polling.
It starts by moving the queue types recently introduced entirely into
the block layer instead of requiring an indirect call for them.
It then switches nvme and the block layer to only allow polling
with
The code was always a bit of a hack that digs far too much into
RDMA core internals. Let's kick it out and reimplement proper
dedicated poll queues as needed.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/rdma.c | 24
1 file changed, 24 deletions(-)
diff --git
This is the last place outside of nvme_irq that handles CQEs from
interrupt context, and thus is in the way of removing the cq_lock for
normal queues, and avoiding lockdep warnings on the poll queues, for
which we already take it without IRQ disabling.
Signed-off-by: Christoph Hellwig
This will allow us to simplify both the regular NVMe interrupt handler
and the upcoming aio poll code. In addition to that the separate
queues are generally a good idea for performance reasons.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 18 +-
1 file changed
If it really becomes an issue we should rework the nvme code to also skip
the multipath code for any private namespace, even if that could mean some
trouble when rescanning.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/multipath.c | 16
1 file changed, 16 deletions(-)
diff --git a/drivers/nvme/host/multipath.
This gets rid of all the messing with cq_vector and the ->polled field
by using an atomic bitop to mark the queue enabled or not.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 43 ++---
1 file changed, 15 insertions(+), 28 deletions(-)
d
We have three places that can poll for I/O completions on a normal
interrupt-enabled queue. All of them are in slow path code, so
consolidate them to a single helper that uses spin_lock_irqsave and
removes the fast path cqe_pending check.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host
Use a bit flag to mark if the SQ was allocated from the CMB, and clean
up the surrounding code a bit.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 33 +++--
1 file changed, 15 insertions(+), 18 deletions(-)
diff --git a/drivers/nvme/host/pci.c b
The following changes since commit 8dc765d438f1e42b3e8227b3b09fad7d73f4ec9a:
SCSI: fix queue cleanup race before queue initialization is done (2018-11-14
08:19:10 -0700)
are available in the Git repository at:
git://git.infradead.org/nvme.git nvme-4.20
for you to fetch changes up to
ail, that is it is on top of what you send to the list
plus my first two patches.
Completely untested again of course..
---
From cf9fd90d13a025d53b26ba54202c2898ba4bf0ef Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Sun, 18 Nov 2018 17:17:55 +0100
Subject: change aio poll list management
MIME-Versio
On Fri, Nov 16, 2018 at 02:28:02PM -0500, Mike Snitzer wrote:
> You rejected the idea of allowing fine-grained control over whether
> native NVMe multipathing is enabled or not on a per-namespace basis.
> All we have is the coarse-grained nvme_core.multipath=N knob. Now
> you're forecasting
On Thu, Nov 15, 2018 at 10:58:20AM -0700, Keith Busch wrote:
> There are no more users relying on blk-mq request states to prevent
> double completions, so replace the relatively expensive cmpxchg operation
> with WRITE_ONCE.
>
> Signed-off-by: Keith Busch
Looks fine,
Reviewe
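The change being reviewed can be modeled in userspace: once double completions are impossible, the IN_FLIGHT to COMPLETE transition no longer needs a compare-and-swap; a tear-free plain store suffices. A relaxed atomic store plays the role of the kernel's WRITE_ONCE here; the struct is a stand-in, not blk-mq's request:

```c
#include <assert.h>
#include <stdatomic.h>

enum mq_rq_state { MQ_RQ_IDLE, MQ_RQ_IN_FLIGHT, MQ_RQ_COMPLETE };

struct model_request {
    _Atomic int state;
};

static void model_complete(struct model_request *rq)
{
    /* was: cmpxchg(&rq->state, MQ_RQ_IN_FLIGHT, MQ_RQ_COMPLETE),
     * which pays for a full atomic RMW; a plain tear-free store is
     * enough when no one else can race to complete the request. */
    atomic_store_explicit(&rq->state, MQ_RQ_COMPLETE,
                          memory_order_relaxed);
}
```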
> index 5d83a162d03b..c1d5e4e36125 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -1635,8 +1635,11 @@ static blk_status_t scsi_mq_prep_fn(struct request
> *req)
>
> static void scsi_mq_done(struct scsi_cmnd *cmd)
> {
> + if
Signed-off-by: Keith Busch
Looks fine,
Reviewed-by: Christoph Hellwig
On Mon, Nov 19, 2018 at 04:19:24PM +0800, Ming Lei wrote:
> On Fri, Nov 16, 2018 at 02:38:45PM +0100, Christoph Hellwig wrote:
> > On Thu, Nov 15, 2018 at 04:52:55PM +0800, Ming Lei wrote:
> > > BTRFS is the only user of this helper, so move this helper into
> > >