Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma

2017-04-12 Thread Christoph Hellwig
On Mon, Apr 10, 2017 at 01:05:50PM -0500, Steve Wise wrote:
> I'll test cxgb4 if you convert it. :)

That will take a lot of work. The problem with cxgb4 is that it allocates
all the interrupts at device enable time, but then only allocates them to
ULDs when they attach, while this scheme
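(For context, a minimal sketch of the managed-affinity allocation model the
series moves drivers to. This is illustrative code only, not the actual
cxgb4 or mlx5 patches; the function name and vector counts are invented.
With pci_alloc_irq_vectors_affinity() the kernel spreads the vector
affinity masks across CPUs at allocation time, so a driver that hands
vectors to ULDs only when they attach cannot easily describe that split up
front.)

#include <linux/pci.h>
#include <linux/interrupt.h>

/*
 * Illustrative only: allocate managed MSI-X vectors whose affinity
 * masks are spread over the online CPUs by the core at allocation
 * time.  The spreading is fixed here and cannot be re-partitioned
 * later when a ULD attaches.
 */
static int example_enable_msix(struct pci_dev *pdev, int nr_io_vectors)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* one non-spread control vector */
	};

	return pci_alloc_irq_vectors_affinity(pdev, 2, nr_io_vectors + 1,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &affd);
}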

Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma

2017-04-10 Thread Steve Wise
On 4/2/2017 8:41 AM, Sagi Grimberg wrote:
> This patch set aims to automatically find the optimal queue <-> irq
> multi-queue assignments in storage ULPs (demonstrated on nvme-rdma)
> based on the underlying rdma device irq affinity settings. The first
> two patches modify the mlx5 core driver to use

Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma

2017-04-06 Thread Sagi Grimberg
> Hi Sagi,

Hey Max,

> the patchset looks good and of course we can add support for more
> drivers in the future. Have you run some performance testing with the
> nvmf initiator?

I'm limited by the target machine in terms of IOPS, but the host shows
~10% CPU usage decrease, and latency improves

Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma

2017-04-04 Thread Max Gurtovoy
> Any feedback is welcome.

Hi Sagi,

the patchset looks good and of course we can add support for more
drivers in the future. Have you run some performance testing with the
nvmf initiator?

> Sagi Grimberg (6):
>   mlx5: convert to generic pci_alloc_irq_vectors
>   mlx5: move affinity hints

[PATCH rfc 0/6] Automatic affinity settings for nvme over rdma

2017-04-02 Thread Sagi Grimberg
This patch set aims to automatically find the optimal queue <-> irq
multi-queue assignments in storage ULPs (demonstrated on nvme-rdma) based
on the underlying rdma device irq affinity settings. The first two patches
modify the mlx5 core driver to use the generic API to allocate an array of
irq vectors
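(To make the queue <-> irq idea concrete, here is a hedged sketch of how a
consumer could pick a completion vector for a queue from the
kernel-managed affinity masks. This is not the code from these patches and
the helper name is invented; pci_irq_get_affinity() simply returns the CPU
mask the core assigned to a given vector when the device was set up with
PCI_IRQ_AFFINITY.)

#include <linux/pci.h>
#include <linux/cpumask.h>

/*
 * Illustrative helper: return the first vector whose kernel-assigned
 * affinity mask contains @cpu, falling back to vector 0.  A ULP could
 * use something like this to map each hardware queue to the vector
 * "closest" to the CPUs that submit on that queue.
 */
static int example_vector_for_cpu(struct pci_dev *pdev, int nvecs, int cpu)
{
	int vec;

	for (vec = 0; vec < nvecs; vec++) {
		const struct cpumask *mask = pci_irq_get_affinity(pdev, vec);

		if (mask && cpumask_test_cpu(cpu, mask))
			return vec;
	}
	return 0;
}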