RE: [PATCH 9/9] [RFC] nvme: Fix a race condition

2016-09-27 Thread Steve Wise
> On 09/27/2016 09:31 AM, Steve Wise wrote:
> >> @@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
> >> void nvme_stop_queues(struct nvme_ctrl *ctrl)
> >> {
> >> 	struct nvme_ns *ns;
> >> +	struct request_que

RE: [PATCH 9/9] [RFC] nvme: Fix a race condition

2016-09-27 Thread Steve Wise
> @@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
> void nvme_stop_queues(struct nvme_ctrl *ctrl)
> {
> 	struct nvme_ns *ns;
> +	struct request_queue *q;
>
> 	mutex_lock(&ctrl->namespaces_mutex);
> 	list_for_each_entry(ns, &ctrl->namespaces, list) {
> -
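The hunk above is truncated by the archive. Pieced together from the visible lines and the follow-up discussion, the reworked helper has roughly the shape below; blk_quiesce_queue() and blk_resume_queue() are the RFC-era names referenced later in this thread, and the body is a sketch rather than the applied hunk.

void nvme_stop_queues(struct nvme_ctrl *ctrl)
{
	struct nvme_ns *ns;
	struct request_queue *q;

	mutex_lock(&ctrl->namespaces_mutex);
	list_for_each_entry(ns, &ctrl->namespaces, list) {
		q = ns->queue;
		/* Wait until every nvme_queue_rq() call already running
		 * against this queue has finished ... */
		blk_quiesce_queue(q);
		/* ... then stop the hardware queues ... */
		blk_mq_stop_hw_queues(q);
		/* ... and let new submitters in again; they now see the
		 * stopped hw queues instead of racing with the teardown. */
		blk_resume_queue(q);
	}
	mutex_unlock(&ctrl->namespaces_mutex);
}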

RE: [PATCH 9/9] [RFC] nvme: Fix a race condition

2016-09-28 Thread Steve Wise
> Hello James and Steve,
>
> I will add a comment.
>
> Please note that the above patch does not change the behavior of
> nvme_stop_queues() except that it causes nvme_stop_queues() to wait
> until any ongoing nvme_queue_rq() calls have finished.
> blk_resume_queue() does not affect the value
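The practical effect of that wait shows up on the teardown paths: once nvme_stop_queues() returns, no ->queue_rq() invocation is still running and none will start until the queues are restarted. A caller-side sketch, where the teardown helper name is hypothetical and not taken from this series:

	nvme_stop_queues(ctrl);
	/* No nvme_queue_rq() is running past this point, so freeing
	 * queues, QPs or other transport state below cannot race with
	 * request submission. */
	nvme_rdma_teardown_io_resources(ctrl);	/* hypothetical teardown step */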

RE: A question regarding "multiple SGL"

2016-10-27 Thread Steve Wise
> > Hi Robert,
>
> Hey Robert, Christoph,
>
> > please explain your use cases that isn't handled. The one and only
> > reason to set MSDBD to 1 is to make the code a lot simpler given that
> > there is no real use case for supporting more.
> >
> > RDMA uses memory registrations to register
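For context on why a single descriptor suffices: an NVMe over Fabrics keyed SGL data block descriptor is 16 bytes carrying an address, a 3-byte length and a 4-byte key, and the RDMA memory registration behind that key already maps the host's entire scatterlist. The layout below follows include/linux/nvme.h as recalled from memory, so treat it as illustrative rather than authoritative.

/* One keyed SGL data block descriptor (MSDBD=1) describes the whole
 * transfer, because the registration behind 'key' covers all of the
 * host pages no matter how scattered they are.
 */
struct nvme_keyed_sgl_desc {
	__le64	addr;		/* remote address covered by the registration */
	__u8	length[3];	/* transfer length in bytes */
	__u8	key[4];		/* RDMA rkey of the memory registration */
	__u8	type;		/* SGL descriptor type/subtype */
};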

RE: [RFC PATCH 00/28] INFINIBAND NETWORK BLOCK DEVICE (IBNBD)

2017-03-24 Thread Steve Wise
> From: Jack Wang
>
> This series introduces IBNBD/IBTRS kernel modules.
>
> IBNBD (InfiniBand network block device) allows for an RDMA transfer of block IO
> over InfiniBand network. The driver presents itself as a block device on client
> side and transmits the

Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma

2017-04-10 Thread Steve Wise
On 4/2/2017 8:41 AM, Sagi Grimberg wrote:
This patch set is aiming to automatically find the optimal queue <-> irq
multi-queue assignments in storage ULPs (demonstrated on nvme-rdma) based
on the underlying rdma device irq affinity settings. First two patches
modify mlx5 core driver to use
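The ULP-visible part of the series is small: the block layer asks the RDMA core for each completion vector's IRQ affinity mask and builds the CPU-to-queue map from that instead of guessing. A sketch of the nvme-rdma hookup, with the helper signatures written from memory and possibly off in detail:

/* Map each hw context onto the CPUs that the matching RDMA completion
 * vector's IRQ is affine to (signatures recalled from memory). */
static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
{
	struct nvme_rdma_ctrl *ctrl = set->driver_data;

	return blk_mq_rdma_map_queues(set, ctrl->device->dev, 0);
}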

RE: [PATCH v2] block: fix rdma queue mapping

2018-08-25 Thread Steve Wise
> > I guess this way still can't fix the request allocation crash issue
> > triggered by using blk_mq_alloc_request_hctx(), in which one hw queue
> > may not be mapped from any online CPU.
>
> Not really. I guess we will need to simply skip queues that are
> mapped to an offline cpu.
>
> >
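The "skip queues that are mapped to an offline cpu" idea boils down to a fragment like the one below inside blk_mq_alloc_request_hctx(): pick the serving CPU only from the online subset of the hctx's mask and fail cleanly when that subset is empty. A sketch, not a patch from this thread; local variable names are approximate.

	/* Run the allocation on a CPU that is both mapped to this hctx
	 * and online; bail out if no such CPU exists. */
	cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		return ERR_PTR(-EXDEV);
	alloc_data.ctx = __blk_mq_get_ctx(q, cpu);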

RE: [PATCH v2] block: fix rdma queue mapping

2018-10-23 Thread Steve Wise
> >> Christoph, Sagi: it seems you think /proc/irq/$IRQ/smp_affinity
> >> shouldn't be allowed if drivers support managed affinity. Is that correct?
> >
> > Not just shouldn't, but simply can't.
> >
> >> But as it stands, things are just plain borked if an rdma driver
> >> supports
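On the "simply can't" point: the genirq core rejects userspace affinity writes for managed interrupts before any driver sees them, so the mask chosen through managed affinity cannot be overridden via procfs. Conceptually the /proc write path does something like the check below; helper names are recalled from kernel/irq/ and should be treated as approximate.

	/* Managed interrupts (and kernel-enforced affinity in general)
	 * refuse the write outright. */
	if (!irq_can_set_affinity_usr(irq) || no_irq_affinity)
		return -EIO;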

RE: [PATCH v2] block: fix rdma queue mapping

2018-10-23 Thread Steve Wise
> -----Original Message-----
> From: Sagi Grimberg
> Sent: Tuesday, October 23, 2018 4:25 PM
> To: Steve Wise ; 'Christoph Hellwig'
>
> Cc: linux-block@vger.kernel.org; linux-r...@vger.kernel.org; linux-
> n...@lists.infradead.org; 'Max Gurtovoy'
> Subject: Re: [P

Re: [PATCH v2] block: fix rdma queue mapping

2018-10-03 Thread Steve Wise
On 8/24/2018 9:17 PM, Sagi Grimberg wrote:
>
>>> nvme-rdma attempts to map queues based on irq vector affinity.
>>> However, for some devices, completion vector irq affinity is
>>> configurable by the user which can break the existing assumption
>>> that irq vectors are optimally arranged over
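For reference, the mapping under discussion assigns every CPU in a completion vector's affinity mask to the corresponding hw queue and falls back to the default spread when the device cannot report affinity. The shape is roughly as below (reconstructed from memory as a stand-in for blk_mq_rdma_map_queues(), not quoted from the patch); once a user retargets a vector's IRQ these masks change, some CPUs can end up with no queue, and that is the breakage this patch tries to address.

static int map_queues_by_affinity(struct blk_mq_tag_set *set,
				  struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		/* Affinity of the IRQ behind completion vector 'queue' */
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}
	return 0;

fallback:
	/* Device can't report per-vector affinity: default spread */
	return blk_mq_map_queues(set);
}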