> On 09/27/2016 09:31 AM, Steve Wise wrote:
> @@ -2079,11 +2075,15 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
> void nvme_stop_queues(struct nvme_ctrl *ctrl)
> {
> struct nvme_ns *ns;
> + struct request_queue *q;
>
> 	mutex_lock(&ctrl->namespaces_mutex);
> 	list_for_each_entry(ns, &ctrl->namespaces, list) {
> -
>
> Hello James and Steve,
>
> I will add a comment.
>
> Please note that the above patch does not change the behavior of
> nvme_stop_queues() except that it causes nvme_stop_queues() to wait
> until any ongoing nvme_queue_rq() calls have finished.
> blk_resume_queue() does not affect the value
> > Hi Robert,
>
> Hey Robert, Christoph,
>
> > please explain your use cases that isn't handled. The one and only
> > reason to set MSDBD to 1 is to make the code a lot simpler given that
> > there is no real use case for supporting more.
> >
> > RDMA uses memory registrations to register
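For background on the MSDBD discussion: with MSDBD = 1 every command capsule carries a single keyed SGL data block descriptor, i.e. one registered memory region (address, length, rkey) per command, which is what keeps the host-side mapping code simple. A sketch of that 16-byte layout, following the NVMe over Fabrics keyed SGL descriptor format (the struct name and user-space types here are mine):

```c
/* Model of the 16-byte keyed SGL data block descriptor carried
 * in-capsule by an NVMe-over-Fabrics command.  Field layout follows
 * the spec; the identifiers are invented for this sketch. */
#include <assert.h>
#include <stdint.h>

struct keyed_sgl_desc {
    uint64_t addr;       /* bytes 0-7:  remote buffer address */
    uint8_t  length[3];  /* bytes 8-10: 24-bit transfer length */
    uint8_t  key[4];     /* bytes 11-14: RDMA remote key (rkey) */
    uint8_t  type;       /* byte 15: SGL descriptor type/subtype */
} __attribute__((packed));
```

Supporting MSDBD > 1 would mean parsing and registering a variable-length list of these per command, which is the complexity the quoted reply argues against.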
>
> From: Jack Wang
>
> This series introduces IBNBD/IBTRS kernel modules.
>
> IBNBD (InfiniBand network block device) allows for an RDMA transfer of block IO
> over InfiniBand network. The driver presents itself as a block device on client
> side and transmits the
On 4/2/2017 8:41 AM, Sagi Grimberg wrote:
This patch set is aiming to automatically find the optimal
queue <-> irq multi-queue assignments in storage ULPs (demonstrated
on nvme-rdma) based on the underlying rdma device irq affinity
settings.
First two patches modify mlx5 core driver to use
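The idea of deriving queue <-> CPU assignments from the device's irq affinity can be illustrated with a toy mapper. All masks, counts, and names below are invented for illustration; the real series reads the affinity masks reported by the RDMA device.

```c
/* Toy model of affinity-based queue mapping: each completion vector
 * has a CPU affinity mask, and every CPU should be served by a queue
 * whose vector's irq may fire on that CPU. */
#include <assert.h>

#define NR_CPUS   8
#define NR_QUEUES 4

/* vector_mask[q] is a bitmask of CPUs the irq of queue q may fire on. */
static const unsigned int vector_mask[NR_QUEUES] = {
    0x03, /* queue 0 -> CPUs 0,1 */
    0x0c, /* queue 1 -> CPUs 2,3 */
    0x30, /* queue 2 -> CPUs 4,5 */
    0xc0, /* queue 3 -> CPUs 6,7 */
};

/* Assign each CPU the first queue whose vector affinity includes it;
 * fall back to round-robin when no vector covers the CPU. */
static void map_queues(int cpu_to_queue[NR_CPUS])
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++) {
        cpu_to_queue[cpu] = cpu % NR_QUEUES;   /* default mapping */
        for (int q = 0; q < NR_QUEUES; q++) {
            if (vector_mask[q] & (1u << cpu)) {
                cpu_to_queue[cpu] = q;
                break;
            }
        }
    }
}
```

When the masks partition the CPUs as above, completions for a queue are always handled on a CPU local to its submitters, which is the optimal arrangement the series is after.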
> > I guess this way still can't fix the request allocation crash issue
> > triggered by using blk_mq_alloc_request_hctx(), in which one hw queue may
> > not be mapped from any online CPU.
>
> Not really. I guess we will need to simply skip queues that are
> mapped to an offline cpu.
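The "skip queues that are mapped to an offline cpu" idea can be sketched as follows. This is a hypothetical user-space model; the hctx layout, online mask, and function names are all made up for illustration.

```c
/* Sketch of the proposed fix: when every CPU mapped to a hardware
 * queue is offline, skip that queue instead of allocating a request
 * on an unserviceable context. */
#include <assert.h>

#define NR_CPUS 4
#define NR_HCTX 2

/* Which CPUs each hw queue is mapped to (invented layout). */
static const int hctx_cpus[NR_HCTX][2] = { {0, 1}, {2, 3} };

/* Return 1 if at least one CPU mapped to the hw queue is online. */
static int hctx_usable(int hctx, const int online[NR_CPUS])
{
    for (int i = 0; i < 2; i++)
        if (online[hctx_cpus[hctx][i]])
            return 1;
    return 0;
}

/* Pick the requested queue if usable, otherwise the first usable one;
 * -1 means no queue can service the request at all. */
static int pick_hctx(int wanted, const int online[NR_CPUS])
{
    if (hctx_usable(wanted, online))
        return wanted;
    for (int q = 0; q < NR_HCTX; q++)
        if (hctx_usable(q, online))
            return q;
    return -1;
}
```

The crash described above corresponds to using `wanted` unconditionally; the fallback scan is the "skip" behavior being proposed.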
>
> >
> >> Christoph, Sagi: it seems you think /proc/irq/$IRQ/smp_affinity
> >> shouldn't be allowed if drivers support managed affinity. Is that correct?
> >
> > Not just shouldn't, but simply can't.
> >
> >> But as it stands, things are just plain borked if an rdma driver
> >> supports
> -----Original Message-----
> From: Sagi Grimberg
> Sent: Tuesday, October 23, 2018 4:25 PM
> To: Steve Wise ; 'Christoph Hellwig'
> Cc: linux-block@vger.kernel.org; linux-r...@vger.kernel.org; linux-n...@lists.infradead.org; 'Max Gurtovoy'
> Subject: Re: [P
On 8/24/2018 9:17 PM, Sagi Grimberg wrote:
>
>>> nvme-rdma attempts to map queues based on irq vector affinity.
>>> However, for some devices, completion vector irq affinity is
>>> configurable by the user which can break the existing assumption
>>> that irq vectors are optimally arranged over
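One defensive response to the broken assumption is to validate the user-visible affinity before trusting it: if the per-vector masks no longer cover every CPU, fall back to a default mapping instead of the affinity-based one. A minimal sketch with invented masks and names; this is one possible policy, not the driver's actual code.

```c
/* Check whether user-tuned completion-vector affinity still forms a
 * usable spread: every CPU must appear in at least one vector's mask.
 * If not, the caller should use a default round-robin mapping. */
#include <assert.h>

#define NR_CPUS 4

/* Return 1 if every CPU appears in at least one vector's mask. */
static int affinity_covers_all(const unsigned int *masks, int nvec)
{
    unsigned int covered = 0;

    for (int v = 0; v < nvec; v++)
        covered |= masks[v];
    return covered == (1u << NR_CPUS) - 1;
}

/* Use the affinity-derived mapping only when it is complete. */
static int use_affinity_mapping(const unsigned int *masks, int nvec)
{
    return affinity_covers_all(masks, nvec);
}
```

With this guard, a user rewriting a vector's smp_affinity can degrade queue locality but can no longer leave some CPUs without any queue mapped to them.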