RE: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-17 Thread Steve Wise
> > Hey Sagi,
> >
> > The patch works allowing connections for the various affinity mappings
> > below:
> >
> > One comp_vector per core across all cores, starting with numa-local cores:
>
> Thanks Steve, is this your "Tested by:" tag?

Sure:

Tested-by: Steve Wise

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-17 Thread Sagi Grimberg
Hi Jason,

> The new patchworks doesn't grab patches inlined in messages, so you
> will need to resend it.

Yes, just wanted to add Steve's Tested-by, as it's going to lists that
did not follow this thread.

> Also, can someone remind me what the outcome is here? Does it
> supersede Leon's patch: …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-17 Thread Jason Gunthorpe
On Fri, Aug 17, 2018 at 01:03:20PM -0700, Sagi Grimberg wrote:
> > Hey Sagi,
> >
> > The patch works allowing connections for the various affinity mappings
> > below:
> >
> > One comp_vector per core across all cores, starting with numa-local cores:
>
> Thanks Steve, is this your "Tested …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-17 Thread Sagi Grimberg
> Hey Sagi,
>
> The patch works allowing connections for the various affinity mappings
> below:
>
> One comp_vector per core across all cores, starting with numa-local cores:

Thanks Steve, is this your "Tested by:" tag?

RE: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-17 Thread Steve Wise
> On 8/16/2018 1:26 PM, Sagi Grimberg wrote:
> >> Let me know if you want me to try this or any particular fix.
> >
> > Steve, can you test this one?
>
> Yes! I'll try it out tomorrow.
>
> Stevo

Hey Sagi,

The patch works allowing connections for the various affinity mappings
below: …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-16 Thread Steve Wise
On 8/16/2018 1:26 PM, Sagi Grimberg wrote:
>> Let me know if you want me to try this or any particular fix.
>
> Steve, can you test this one?

Yes! I'll try it out tomorrow.

Stevo

> --
> [PATCH rfc] block: fix rdma queue mapping
>
> nvme-rdma attempts to map queues based on irq vector …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-16 Thread Sagi Grimberg
> Let me know if you want me to try this or any particular fix.

Steve, can you test this one?

--
[PATCH rfc] block: fix rdma queue mapping

nvme-rdma attempts to map queues based on irq vector affinity. However,
for some devices, completion vector irq affinity is configurable by the
user, which …
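
For context, the mapping being fixed assigns each CPU to the hw queue
whose completion vector's affinity mask contains it. The following is a
simplified reconstruction (from memory, not the literal blk-mq-rdma
source nor the RFC itself) of that pre-fix logic, showing why
user-retuned masks can leave a queue with no CPUs:

#include <linux/blk-mq.h>
#include <rdma/ib_verbs.h>

/*
 * Simplified sketch, not verbatim kernel source: each hw queue claims
 * the CPUs in its completion vector's affinity mask. With user-tuned
 * masks, later queues can overwrite earlier assignments, so a queue
 * may end up owning no CPU at all, and connecting it then fails.
 */
static int sketch_rdma_map_queues(struct blk_mq_tag_set *set,
				  struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			goto fallback;

		/* Last writer wins: overlaps silently unmap older queues. */
		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}
	return 0;

fallback:
	/* No affinity hint from the device: naive spread instead. */
	return blk_mq_map_queues(set);
}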

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-15 Thread Leon Romanovsky
On Mon, Aug 06, 2018 at 02:20:37PM -0500, Steve Wise wrote:
> On 8/1/2018 9:27 AM, Max Gurtovoy wrote:
> > On 8/1/2018 8:12 AM, Sagi Grimberg wrote:
> >> Hi Max,
> >
> > Hi,
> >
> >>> Yes, since nvmf is the only user of this function.
> >>> Still waiting for comments on the …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-06 Thread Steve Wise
On 8/1/2018 9:27 AM, Max Gurtovoy wrote:
> On 8/1/2018 8:12 AM, Sagi Grimberg wrote:
>> Hi Max,
>
> Hi,
>
>>> Yes, since nvmf is the only user of this function.
>>> Still waiting for comments on the suggested patch :)
>>
>> Sorry for the late response (but I'm on vacation so I have …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-08-01 Thread Max Gurtovoy
On 8/1/2018 8:12 AM, Sagi Grimberg wrote:
> Hi Max,

Hi,

>> Yes, since nvmf is the only user of this function.
>> Still waiting for comments on the suggested patch :)
>
> Sorry for the late response (but I'm on vacation so I have an excuse ;))

NP :)

currently the code works..

> I'm thinking …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-31 Thread Sagi Grimberg
Hi Max,

> Yes, since nvmf is the only user of this function.
> Still waiting for comments on the suggested patch :)

Sorry for the late response (but I'm on vacation so I have an excuse ;))

I'm thinking that we should avoid trying to find an assignment when
stuff like irqbalance daemon is …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-31 Thread Max Gurtovoy
On 7/30/2018 6:47 PM, Steve Wise wrote:
> On 7/23/2018 11:53 AM, Max Gurtovoy wrote:
>> On 7/23/2018 7:49 PM, Jason Gunthorpe wrote:
>>> On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
>>>>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>>>
>>>> queue 9 is not mapped (overlap).
>>>> please try …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-30 Thread Steve Wise
On 7/23/2018 11:53 AM, Max Gurtovoy wrote:
> On 7/23/2018 7:49 PM, Jason Gunthorpe wrote:
>> On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
>>>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>>
>>> queue 9 is not mapped (overlap).
>>> please try …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-24 Thread Steve Wise
On 7/24/2018 10:24 AM, Steve Wise wrote:
> On 7/19/2018 8:25 PM, Max Gurtovoy wrote:
>>>>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>>>
>>>> queue 9 is not mapped (overlap).
>>>> please try the below:
>>>
>>> This seems to work.  Here are three mapping cases:  each vector on …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-24 Thread Steve Wise
On 7/19/2018 8:25 PM, Max Gurtovoy wrote:
>>>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>>
>>> queue 9 is not mapped (overlap).
>>> please try the below:
>>
>> This seems to work.  Here are three mapping cases:  each vector on its
>> own cpu, each vector on 1 cpu …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-23 Thread Max Gurtovoy
On 7/23/2018 7:49 PM, Jason Gunthorpe wrote:
> On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
>>>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>>
>>> queue 9 is not mapped (overlap).
>>> please try the below:
>>
>> This seems to work.  Here are three mapping cases:  each …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-23 Thread Jason Gunthorpe
On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
> >>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
> >>
> >> queue 9 is not mapped (overlap).
> >> please try the below:
> >
> > This seems to work.  Here are three mapping cases:  each vector on its
> > own cpu, …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-19 Thread Max Gurtovoy
>>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>
>> queue 9 is not mapped (overlap).
>> please try the below:
>
> This seems to work.  Here are three mapping cases:  each vector on its
> own cpu, each vector on 1 cpu within the local numa node, and each
> vector having all cpus in its numa …
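
To make the overlap failure concrete, here is a self-contained toy
model (plain userspace C, made-up CPU and queue counts) of the
last-writer-wins assignment: queue 0's CPUs are fully reclaimed by
queue 1, so queue 0 ends up unmapped, just like queue 9 above:

#include <stdio.h>

#define NCPUS	4
#define NQUEUES	3

int main(void)
{
	/* Hypothetical per-queue affinity masks, one bit per CPU.
	 * Queue 1 fully overlaps queue 0, mimicking a user-tuned setup.
	 */
	unsigned int masks[NQUEUES] = { 0x3, 0x3, 0xc };
	int mq_map[NCPUS];
	int q, cpu;

	for (cpu = 0; cpu < NCPUS; cpu++)
		mq_map[cpu] = -1;	/* no queue assigned yet */

	for (q = 0; q < NQUEUES; q++)
		for (cpu = 0; cpu < NCPUS; cpu++)
			if (masks[q] & (1u << cpu))
				mq_map[cpu] = q;	/* last writer wins */

	for (q = 0; q < NQUEUES; q++) {
		int mapped = 0;

		for (cpu = 0; cpu < NCPUS; cpu++)
			if (mq_map[cpu] == q)
				mapped = 1;
		if (!mapped)
			printf("queue %d is not mapped (overlap)\n", q);
	}
	return 0;
}

Running it prints "queue 0 is not mapped (overlap)": the mapping loop
itself never fails, but the first connect attempt on the orphaned queue
does.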

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-19 Thread Steve Wise
On 7/19/2018 9:50 AM, Max Gurtovoy wrote:
> On 7/18/2018 10:29 PM, Steve Wise wrote:
>>> On 7/18/2018 2:38 PM, Sagi Grimberg wrote:
>>>>>> IMO we must fulfil the user wish to connect to N queues and not
>>>>>> reduce it because of affinity overlaps. So in order to push …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-19 Thread Max Gurtovoy
On 7/18/2018 10:29 PM, Steve Wise wrote:
>> On 7/18/2018 2:38 PM, Sagi Grimberg wrote:
>>>>> IMO we must fulfil the user wish to connect to N queues and not
>>>>> reduce it because of affinity overlaps. So in order to push Leon's
>>>>> patch we must also fix the blk_mq_rdma_map_queues to do a best
>>>>> effort …

RE: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-18 Thread Steve Wise
> On 7/18/2018 2:38 PM, Sagi Grimberg wrote:
> >>> IMO we must fulfil the user wish to connect to N queues and not reduce
> >>> it because of affinity overlaps. So in order to push Leon's patch we
> >>> must also fix the blk_mq_rdma_map_queues to do a best effort mapping
> >>> according …

RE: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-18 Thread Steve Wise
> Subject: Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity
> mask
>
> On 7/18/2018 2:38 PM, Sagi Grimberg wrote:
> >>> IMO we must fulfil the user wish to connect to N queues and not reduce
> >>> it because of affinity overlaps. So in order to …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-18 Thread Max Gurtovoy
On 7/18/2018 2:38 PM, Sagi Grimberg wrote:
>> IMO we must fulfil the user wish to connect to N queues and not reduce
>> it because of affinity overlaps. So in order to push Leon's patch we
>> must also fix the blk_mq_rdma_map_queues to do a best effort mapping
>> according to the affinity and map the rest …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-18 Thread Sagi Grimberg
> IMO we must fulfil the user wish to connect to N queues and not reduce
> it because of affinity overlaps. So in order to push Leon's patch we
> must also fix the blk_mq_rdma_map_queues to do a best effort mapping
> according to the affinity and map the rest in a naive way (in that way
> we will *always* …
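
One way to read this best-effort idea: honor the affinity hints first,
then round-robin any CPU that no vector claimed, so the requested queue
count always survives. Below is a hypothetical sketch against the
2018-era mq_map layout (the helper name is invented; this is not a
posted patch):

#include <linux/blk-mq.h>
#include <linux/kernel.h>
#include <rdma/ib_verbs.h>

static int sketch_best_effort_map(struct blk_mq_tag_set *set,
				  struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu, q = 0;

	/* Mark every CPU unmapped before the affinity pass. */
	for_each_possible_cpu(cpu)
		set->mq_map[cpu] = UINT_MAX;

	/* Pass 1: honor each completion vector's affinity hint. */
	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			return blk_mq_map_queues(set);
		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}

	/* Pass 2: naively spread whatever the hints did not cover, so
	 * overlapping or incomplete masks never fail the mapping.
	 * (A fuller version would also make sure every queue keeps at
	 * least one CPU.)
	 */
	for_each_possible_cpu(cpu)
		if (set->mq_map[cpu] == UINT_MAX)
			set->mq_map[cpu] = q++ % set->nr_hw_queues;
	return 0;
}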

RE: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-17 Thread Steve Wise
> On 7/16/2018 8:08 PM, Steve Wise wrote:
> > Hey Max:
>
> Hey,
>
> > On 7/16/2018 11:46 AM, Max Gurtovoy wrote:
> >> On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
> >>>> Hi,
> >>>> I've tested this patch and it seems problematic at this moment.
> >>>
> >>> Problematic how? …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-17 Thread Max Gurtovoy
On 7/17/2018 11:58 AM, Leon Romanovsky wrote:
> On Tue, Jul 17, 2018 at 11:46:40AM +0300, Max Gurtovoy wrote:
>> On 7/16/2018 8:08 PM, Steve Wise wrote:
>>> Hey Max:
>>
>> Hey,
>>
>>> On 7/16/2018 11:46 AM, Max Gurtovoy wrote:
>>>> On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
>>>>>> Hi,
>>>>>> I've tested this patch …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-17 Thread Leon Romanovsky
On Tue, Jul 17, 2018 at 11:46:40AM +0300, Max Gurtovoy wrote:
> On 7/16/2018 8:08 PM, Steve Wise wrote:
> > Hey Max:
>
> Hey,
>
> > On 7/16/2018 11:46 AM, Max Gurtovoy wrote:
> > > On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
> > > > > Hi,
> > > > > I've tested this …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-17 Thread Max Gurtovoy
On 7/16/2018 8:08 PM, Steve Wise wrote:
> Hey Max:

Hey,

> On 7/16/2018 11:46 AM, Max Gurtovoy wrote:
>> On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
>>>> Hi,
>>>> I've tested this patch and it seems problematic at this moment.
>>>
>>> Problematic how? What are you seeing?
>>
>> Connection failures and same error …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Steve Wise
Hey Max:

On 7/16/2018 11:46 AM, Max Gurtovoy wrote:
> On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
>>> Hi,
>>> I've tested this patch and it seems problematic at this moment.
>>
>> Problematic how? What are you seeing?
>
> Connection failures and same error Steve saw:
>
> [Mon Jul 16 16:19:11 …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Max Gurtovoy
On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
>> Hi,
>> I've tested this patch and it seems problematic at this moment.
>
> Problematic how? What are you seeing?

Connection failures and same error Steve saw:

[Mon Jul 16 16:19:11 2018] nvme nvme0: Connect command failed, error
wo/DNR bit: -16402
[Mon …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Sagi Grimberg
> Hi,
> I've tested this patch and it seems problematic at this moment.

Problematic how? What are you seeing?

> maybe this is because of the bug that Steve mentioned on the NVMe
> mailing list. Sagi mentioned that we should fix it in the NVMe/RDMA
> initiator and I'll run his suggestion as well.

Is …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Max Gurtovoy
Hi,
I've tested this patch and it seems problematic at this moment. Maybe
this is because of the bug that Steve mentioned on the NVMe mailing
list. Sagi mentioned that we should fix it in the NVMe/RDMA initiator,
and I'll run his suggestion as well.

BTW, when I run blk_mq_map_queues it works …

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Leon Romanovsky
On Mon, Jul 16, 2018 at 01:23:24PM +0300, Sagi Grimberg wrote:
> Leon, I'd like to see a tested-by tag for this (at least
> until I get some time to test it).

Of course. Thanks

> The patch itself looks fine to me.

Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Sagi Grimberg
Leon, I'd like to see a tested-by tag for this (at least until I get some time to test it). The patch itself looks fine to me.

[PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

2018-07-16 Thread Leon Romanovsky
From: Leon Romanovsky

The IRQ affinity mask is managed by mlx5_core; however, any
user-triggered updates through /proc/irq/<N>/smp_affinity were not
reflected in mlx5_ib_get_vector_affinity(). Drop the attempt to use the
cached version of the affinity mask in favour of the value managed by
the PCI core.

Fixes: …
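
The direction stated here, querying the live value rather than a
snapshot, might look roughly like the following. This is a sketch of
the idea only, not the patch itself, and the vec_base offset from
comp_vector to PCI IRQ vector is an assumption:

#include <linux/pci.h>

/*
 * Sketch of the commit message's idea: ask the PCI core for the
 * current affinity of the vector each time, instead of returning a
 * mask cached by the driver at init. 'vec_base' stands in for the
 * device-specific offset of the first completion vector (assumption).
 */
static const struct cpumask *
sketch_get_vector_affinity(struct pci_dev *pdev, int vec_base,
			   int comp_vector)
{
	return pci_irq_get_affinity(pdev, vec_base + comp_vector);
}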