try to be better than the naive mapping I suggested in
the previous email.
From 007d773af7b65a1f1ca543f031ca58b3afa5b7d9 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy
Date: Thu, 19 Jul 2018 12:42:00 +
Subject: [PATCH 1/1] blk-mq: fix RDMA queue/cpu mappings assignments for mq
Signed-off-by: Max
On 7/30/2018 6:47 PM, Steve Wise wrote:
On 7/23/2018 11:53 AM, Max Gurtovoy wrote:
On 7/23/2018 7:49 PM, Jason Gunthorpe wrote:
On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
[ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
queue 9 is not mapped (overlap).
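(To see why an overlap leaves a queue unmapped: below is a rough userspace model of a
strict affinity-only cpu <-> queue assignment. The CPU count, queue count and masks are
invented for illustration and this is not the kernel helper itself; it only shows that
when two vectors report the same affinity mask, one of the two queues ends up with no
CPU at all, which is the "queue X is not mapped" case and presumably why the connect for
that queue fails with ret=-18, i.e. likely -EXDEV.)

#include <stdio.h>

#define NR_CPUS    8
#define NR_QUEUES  4

int main(void)
{
        /* Hypothetical per-vector affinity masks (bit i == CPU i). */
        unsigned int vec_affinity[NR_QUEUES] = {
                0x03,   /* queue 0 <- CPUs 0,1 */
                0x0c,   /* queue 1 <- CPUs 2,3 */
                0x30,   /* queue 2 <- CPUs 4,5 */
                0x30,   /* queue 3 <- CPUs 4,5 again: overlaps queue 2 */
        };
        int mq_map[NR_CPUS];
        int queue, cpu, found;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                mq_map[cpu] = -1;

        /* Strict mapping: a CPU ends up owned by the last queue whose
         * affinity mask contains it. */
        for (queue = 0; queue < NR_QUEUES; queue++)
                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        if (vec_affinity[queue] & (1u << cpu))
                                mq_map[cpu] = queue;

        /* Queue 2 has lost both of its CPUs to queue 3, so there is no
         * CPU left to drive queue 2 and its connect would fail. */
        for (queue = 0; queue < NR_QUEUES; queue++) {
                found = 0;
                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        if (mq_map[cpu] == queue)
                                found = 1;
                if (!found)
                        printf("queue %d is not mapped (overlap)\n", queue);
        }
        return 0;
}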
please try the below:
This seems to work. Here are three mapping cases: each
Max,
From 6f7b98f1c43252f459772390c178fc3ad043fc82 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy
Date: Thu, 19 Jul 2018 12:42:00 +
Subject: [PATCH 1/1] blk-mq: fix RDMA queue/cpu mappings assignments for mq
In order to fulfil the block layer cpu <-> queue mapping, all the
allocated queues a
On 7/18/2018 10:29 PM, Steve Wise wrote:
On 7/18/2018 2:38 PM, Sagi Grimberg wrote:
IMO we must fulfil the user's wish to connect to N queues and not reduce
it because of affinity overlaps. So in order to push Leon's patch we
must also fix blk_mq_rdma_map_queues to do a best-effort mapping
according to the affinity and map the rest
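To make the best-effort idea concrete, here is a minimal userspace sketch (my own
illustration with made-up sizes; not Max's patch and not the actual kernel
implementation): follow the affinity masks first, then hand unclaimed CPUs to queues
that got nothing, and finally spread whatever is left, so that every CPU is mapped and
no queue is left empty.

#include <stdio.h>

#define NR_CPUS    8
#define NR_QUEUES  4

static void best_effort_map(const unsigned int *vec_affinity, int *mq_map)
{
        int cpus_per_queue[NR_QUEUES] = { 0 };
        int cpu, q, c, next = 0;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                mq_map[cpu] = -1;

        /* Pass 1: follow the vector affinity masks; the first queue whose
         * mask contains a CPU keeps it, so a later overlapping mask cannot
         * empty it again. */
        for (q = 0; q < NR_QUEUES; q++)
                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        if ((vec_affinity[q] & (1u << cpu)) && mq_map[cpu] == -1) {
                                mq_map[cpu] = q;
                                cpus_per_queue[q]++;
                        }

        /* Pass 2: every queue that is still empty gets a CPU, preferring
         * CPUs that no mask claimed, otherwise borrowing one from the most
         * loaded queue, so the connect for that queue cannot fail. */
        for (q = 0; q < NR_QUEUES; q++) {
                int donor = 0;

                if (cpus_per_queue[q] > 0)
                        continue;
                for (cpu = 0; cpu < NR_CPUS && mq_map[cpu] != -1; cpu++)
                        ;
                if (cpu == NR_CPUS) {
                        for (c = 0; c < NR_QUEUES; c++)
                                if (cpus_per_queue[c] > cpus_per_queue[donor])
                                        donor = c;
                        if (cpus_per_queue[donor] <= 1)
                                continue;       /* nothing left to borrow */
                        for (cpu = 0; mq_map[cpu] != donor; cpu++)
                                ;
                        cpus_per_queue[donor]--;
                }
                mq_map[cpu] = q;
                cpus_per_queue[q]++;
        }

        /* Pass 3: spread any CPUs that are still unmapped round-robin. */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                if (mq_map[cpu] == -1)
                        mq_map[cpu] = next++ % NR_QUEUES;
}

int main(void)
{
        /* Same hypothetical overlapping masks as in the model above. */
        unsigned int vec_affinity[NR_QUEUES] = { 0x03, 0x0c, 0x30, 0x30 };
        int mq_map[NR_CPUS];
        int cpu;

        best_effort_map(vec_affinity, mq_map);
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu %d -> queue %d\n", cpu, mq_map[cpu]);
        return 0;
}

With the overlapping masks above, every queue keeps at least one CPU and all CPUs are
assigned, which is the behaviour the thread is asking for instead of failing the connect.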
On 7/17/2018 11:58 AM, Leon Romanovsky wrote:
On Tue, Jul 17, 2018 at 11:46:40AM +0300, Max Gurtovoy wrote:
On 7/16/2018 8:08 PM, Steve Wise wrote:
Hey Max:
Hey,
On 7/16/2018 11:46 AM, Max Gurtovoy wrote:
On 7/16/2018 5:59 PM, Sagi Grimberg wrote:
Hi,
I've tested this patch and it seems problematic at this moment.
Problematic how? What are you seeing?
Connection failures and same error Steve saw:
[Mon Jul 16 16:19:11 2018] nvme nvme0: Connect command failed, error
wo/DNR bit: -16402
[Mon
Maybe this is because of the bug that Steve mentioned on the NVMe
mailing list. Sagi mentioned that we should fix it in the NVMe/RDMA
initiator, and I'll run his suggestion as well.
BTW, when I run the blk_mq_map_queues it works
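For context, a naive fallback of that kind simply slices the CPU space across the
queues and ignores the device's IRQ affinity, so no queue can be left unmapped as long
as there are at least as many CPUs as queues. A toy model of such a spread follows
(illustrative only; the real blk_mq_map_queues is more careful about CPU topology):

#include <stdio.h>

#define NR_CPUS    8
#define NR_QUEUES  4

int main(void)
{
        int mq_map[NR_CPUS];
        int cpu;

        /* Naive spread: ignore IRQ affinity and give every queue a
         * contiguous share of the CPU space. */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                mq_map[cpu] = cpu * NR_QUEUES / NR_CPUS;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu %d -> queue %d\n", cpu, mq_map[cpu]);
        return 0;
}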
On 2/1/2018 10:21 AM, Greg KH wrote:
On Tue, Jan 30, 2018 at 10:12:51AM +0100, Marta Rybczynska wrote:
Hello Mellanox maintainers,
I'd like to ask you to OK backporting two patches in the mlx5 driver to the 4.9 stable
tree (they've been in master for some time already).
We have multiple deployments in 4.9
Any feedback is welcome.
Hi Sagi,
The patchset looks good, and of course we can add support for more
drivers in the future.
Have you run any performance testing with the nvmf initiator?
Sagi Grimberg (6):
mlx5: convert to generic pci_alloc_irq_vectors
mlx5: move affinity hints
q_rdma_map_queues);
Otherwise, looks good.
Reviewed-by: Max Gurtovoy <m...@mellanox.com>