Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
On 07/02/17 21:34, Christoph Hellwig wrote:
On Tue, Feb 07, 2017 at 01:34:54PM -0500, Keith Busch wrote:
On Tue, Feb 07, 2017 at 05:46:58PM +0100, Christoph Hellwig wrote:
@@ -1233,6 +1243,8 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
if (ctrl->vwc &
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Hi all,
this series adds support for merging discontiguous discard bios into a
single request if the driver supports it. This reduces the number of
discards sent to the device by about a factor of 5-6 for typical
workloads on NVMe, and for slower devices that use I/O scheduling
the number
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
@@ -1014,9 +1013,9 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue
*queue,
}
Christoph, a little above here we still look at blk_rq_bytes(),
shouldn't that look at blk_rq_payload_bytes() too?
The check is ok for now as it's just zero vs non-zero. It's somewhat
broken for
Christoph suggested to me once that we could take a hybrid
approach where we consume a small number of completions (say 4)
right away from the interrupt handler, and if we have more
we schedule irq-poll to reap the rest. But back then it
didn't perform better, which is not aligned with my observations
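A minimal userspace sketch of that hybrid scheme (the budget of 4 and all names here are illustrative, not the actual nvme interrupt path):

```c
#include <stdbool.h>

#define IRQ_BUDGET 4  /* completions reaped directly in hard-irq context */

struct cq {
	int pending;          /* completions currently posted by the device */
	bool poll_scheduled;  /* stands in for irq_poll_sched() */
};

/* Reap up to 'max' completions; returns how many were consumed. */
static int reap_completions(struct cq *cq, int max)
{
	int done = 0;

	while (cq->pending > 0 && done < max) {
		cq->pending--;
		done++;
	}
	return done;
}

/*
 * Hybrid interrupt handler: consume a small budget inline, and only
 * when there is more work defer the remainder to irq-poll, so the
 * common one-completion case never pays the softirq scheduling cost.
 */
static void hybrid_irq(struct cq *cq)
{
	reap_completions(cq, IRQ_BUDGET);
	if (cq->pending > 0)
		cq->poll_scheduled = true;
}
```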
--
[1]
queue = b'nvme0q1'
     usecs               : count     distribution
         0 -> 1          : 7310     |****************************************|
         2 -> 3          : 11       |                                        |
         4 -> 7          : 10       |                                        |
         8 -> 15         : 20       |                                        |
So it looks like we are super inefficient because most of the
time we catch 1 completion per interrupt, and the whole point is
that we need to find more! This fio is single threaded with QD=32
so I'd expect that we'd be somewhere in 8-31 almost all
the time... I also
Hey, so I made some initial analysis of what's going on with
irq-poll.
First, I sampled how much time it takes before we
get the interrupt in nvme_irq and the initial visit
to nvme_irqpoll_handler. I ran a single threaded fio
with QD=32 of 4K reads. This is two displays of a
histogram of the
if (count == 1) {
- if (rq_data_dir(rq) == WRITE &&
-
Hey Josef,
I'm going to use it the same way loop does, there will be a
/dev/nbd-control where you can say ADD, REMOVE, and GET_NEXT. I need the
search functionality to see if we are adding something that already
exists, and to see what is the next unused device that can be used for a
So what you say is you saw a consumed == 1 [1] most of the time?
[1] from
http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836
Exactly. By processing 1 completion per interrupt it makes perfect sense
why this performs poorly, it's not worth paying the
Hannes just spotted this:
static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
		const struct blk_mq_queue_data *bd)
{
	[...]
	__nvme_submit_cmd(nvmeq, &cmnd);
	nvme_process_cq(nvmeq);
	spin_unlock_irq(&nvmeq->q_lock);
	return BLK_MQ_RQ_QUEUE_OK;
Your report provided this stats with one-completion dominance for the
single-threaded case. Does it also hold if you run multiple fio
threads per core?
It's useless to run more threads on that core, it's already fully
utilized. That single thread is already posting a fair amount of
On 24/02/17 02:36, Keith Busch wrote:
If the block layer has entered requests and gets a CPU hot plug event
prior to the resume event, it will wait for those requests to exit. If
the nvme driver is shutting down, it will not start the queues back up,
preventing forward progress.
To fix that,
Hey Jens,
I'm getting a regression in nvme-rdma/nvme-loop with for-linus [1]
with a small script to trigger it.
The reason seems to be that the sched_tags does not take into account
the tag_set reserved tags.
This solves it for me, any objections on this?
--
diff --git a/block/blk-mq-sched.c
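The sizing bug can be shown with a tiny sketch (numbers illustrative): the scheduler tag table has to be deep enough for nr_requests plus the tag_set's reserved tags, or a reserved tag (e.g. the fabrics connect command) indexes past it:

```c
#include <stdbool.h>

/* Returns true if 'tag' fits inside a tag table of the given depth. */
static bool tag_fits(unsigned int table_depth, unsigned int tag)
{
	return tag < table_depth;
}

/*
 * Highest tag number a queue can hand out when the tag_set carries
 * reserved tags: tags run from 0 to nr_requests + nr_reserved - 1.
 */
static unsigned int highest_tag(unsigned int nr_requests,
				unsigned int nr_reserved)
{
	return nr_requests + nr_reserved - 1;
}
```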
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
OK, I think we can get it for fabrics too; need to figure out how to
handle it there.
Do you have a reproducer?
To repro, I have to run a buffered writer workload then put the system into S3.
This fio job seems to reproduce for me:
fio --name=global --filename=/dev/nvme0n1
Hm, this may fix the crash, but I'm not sure it'll work as intended.
When we allocate the request, we'll get a reserved scheduler tag, but
then when we go to dispatch the request and call
blk_mq_get_driver_tag(), we'll be competing with all of the normal
requests for a regular driver tag. So
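The concern above can be modeled in a few lines of userspace C (all names illustrative): even if the request got a reserved scheduler tag at allocation, dispatch still draws its driver tag from the one shared pool:

```c
#include <stdbool.h>

struct tag_pool {
	int free;  /* driver tags left in the shared pool */
};

struct request {
	bool reserved;        /* got a reserved scheduler tag at allocation */
	bool has_driver_tag;
};

/*
 * blk_mq_get_driver_tag() analogue: the shared driver-tag pool does
 * not distinguish reserved requests, so they compete with normal I/O.
 */
static bool get_driver_tag(struct tag_pool *pool, struct request *rq)
{
	if (pool->free == 0)
		return false;
	pool->free--;
	rq->has_driver_tag = true;
	return true;
}
```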
Now I'm getting a NULL deref with nvme-rdma [1].
For some reason blk_mq_tag_to_rq() is returning NULL on
tag 0x0 which is io queue connect.
I'll try to see where this is coming from.
This does not happen with loop though...
That's because the loop driver does not rely on the
cqe.command_id
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
block/blk-mq-sched.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 98c7b061781e..46ca965fff5c 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -454,7
Otherwise we won't be able to retrieve the request from
the tag.
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
block/blk-mq.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d84c66fb37b7..9611cd9920e9 100644
--- a/block/blk-mq.c
+++ b/blo
Hey Jens,
I'm getting a regression in nvme-rdma/nvme-loop with for-linus [1]
with a small script to trigger it.
The reason seems to be that the sched_tags does not take into account
the tag_set reserved tags.
This solves it for me, any objections on this?
--
diff --git a/block/blk-mq-sched.c
Can't we just not go through the scheduler for reserved tags? Obviously
there is no point in scheduling them...
Right, that would be possible. But I'd rather not treat any requests
differently; it's a huge pain in the ass that flush requests currently
insert with a driver tag already
[adding linux-nvme to Cc as the patch changes the nvme driver, despite
the subject line]
On Sat, Feb 25, 2017 at 08:16:04PM +0100, Matias Bjørling wrote:
On 02/25/2017 07:21 PM, Christoph Hellwig wrote:
On Fri, Feb 24, 2017 at 06:16:48PM +0100, Matias Bjørling wrote:
More implementations of
Thanks. Sagi, I updated your first patch as follows:
http://git.kernel.dk/cgit/linux-block/commit/?h=for-linus&id=d06f713e5d200959cdb445a0104e71d9e6070c51
and this is now head of for-linus.
thanks.
If the block layer has entered requests and gets a CPU hot plug event
prior to the resume event, it will wait for those requests to exit. If
the nvme driver is shutting down, it will not start the queues back up,
preventing forward progress.
To fix that, this patch freezes the request queues
n't
apply on top of the reserved tag patch.
Yup, I had those based on Sagi's original patches for some reason. I
fat-fingered send-email, sent as a reply to the original patch 1 instead
of this email.
I got it, applied all 3, thanks Omar!
FWIW, you can add my:
Tested-by: Sagi Grimberg
blk_mq_alloc_request_hctx() allocates a driver request directly, unlike
its blk_mq_alloc_request() counterpart. It also crashes because it
doesn't update the tags->rqs map.
Fix it by making it allocate a scheduler request.
Reported-by: Sagi Grimberg <s...@grimberg.me>
Signed-off
e
when scheduling internally, no need to duplicate it.
Looks good too,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Question (while we're on the subject):
Do consumers have a way to restrict blk-mq to block on
lack of tags? I'm thinking in the context of nvme-target
that can do more useful
Hey Al,
What happens if we feed it a 3-element iovec array, one page in each?
AFAICS, bio_add_pc_page() is called for each of those pages, even if
the previous calls have failed - break is only out of the inner loop.
Sure, failure due to exceeded request size means that everything after
that
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
--
To unsubscribe from this list: send the line "unsubscribe linux-block" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 14/09/16 07:18, Christoph Hellwig wrote:
Use the new helper to automatically select the right interrupt type, as
well as to use the automatic interrupt affinity assignment.
Patch title and the change description are a little short IMO to
describe what is going on here (need the blk-mq side
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks fine,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks fine,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks fine,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks fine,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
@@ -1908,33 +1909,36 @@ static void blk_mq_realloc_hw_ctxs(struct
blk_mq_tag_set *set,
if (node == NUMA_NO_NODE)
node = set->numa_node;
- hctxs[i] = kzalloc_node(sizeof(struct blk_mq_hw_ctx),
-
+static void srp_mq_wait_for_queuecommand(struct Scsi_Host *shost)
+{
+ struct scsi_device *sdev;
+ struct request_queue *q;
+
+ shost_for_each_device(sdev, shost) {
+ q = sdev->request_queue;
+
+ blk_mq_quiesce_queue(q);
+
Make nvme_requeue_req() check BLK_MQ_S_STOPPED instead of
QUEUE_FLAG_STOPPED. Remove the QUEUE_FLAG_STOPPED manipulations
that became superfluous because of this change. This patch fixes
a race condition: using queue_flag_clear_unlocked() is not safe
if any other function that manipulates the
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
This library was moved to the generic area and was
renamed to irq-poll. Hence, update proc/softirqs output accordingly.
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
kernel/softirq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/softirq.c b/kernel/sof
You do realise that local filesystems can silently change the
location of file data at any point in time, so there is no such
thing as a "stable mapping" of file data to block device addresses
in userspace?
If you want remote access to the blocks owned and controlled by a
filesystem, then you
+static void nvmet_execute_write_zeroes(struct nvmet_req *req)
+{
+	struct nvme_write_zeroes_cmd *write_zeroes = &req->cmd->write_zeroes;
+	struct bio *bio = NULL;
+	u16 status = NVME_SC_SUCCESS;
+	sector_t sector;
+	sector_t nr_sector;
+
+	sector =
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
splice to a local list (and splice back when done) so we won't
need to enable/disable local_irq in each iteration.
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
lib/irq_poll.c | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git
Some useful patches I came up with when working
on nvme irq-poll conversion (which still needs some
work).
Sagi Grimberg (3):
irq-poll: Remove redundant include
irq-poll: micro optimize some conditions
irq-poll: Reduce local_irq_save/restore operations in irq_poll_softirq
lib/irq_poll.c
Are they really that unlikely? I don't like these annotations unless
it's clearly an error path or they have a high, demonstrable benefit.
IRQ_POLL_F_DISABLE is set when disabling the iop (in the end of the
world). IRQ_POLL_F_SCHED is set on irq_poll_sched() itself so this cond
would match
+ while (!list_empty(&list)) {
Maybe do a list_first_entry_or_null here if you're touching the list
iteration anyway?
I can do that.
+ local_irq_disable();
+ list_splice_tail_init(iop_list, &list);
+ list_splice(&list, iop_list);
+
if (rearm)
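The splice-to-a-local-list idea above can be modeled in userspace (counters stand in for the interrupt toggling; all names illustrative):

```c
/*
 * Toy model of the optimization: instead of toggling interrupts
 * around every list entry, splice the shared list onto a private one
 * in a single critical section and give leftovers back once.
 */
static int irq_toggles;  /* counts local_irq_disable() calls */

static void fake_local_irq_disable(void) { irq_toggles++; }
static void fake_local_irq_enable(void)  { }

/* Old scheme: one disable/enable pair per processed entry. */
static int process_per_entry(int nr_entries)
{
	irq_toggles = 0;
	for (int i = 0; i < nr_entries; i++) {
		fake_local_irq_disable();  /* pop one entry off the shared list */
		fake_local_irq_enable();
		/* iop->poll() would run here with interrupts enabled */
	}
	return irq_toggles;
}

/* Spliced scheme: grab everything once, splice leftovers back once. */
static int process_spliced(int nr_entries)
{
	irq_toggles = 0;
	fake_local_irq_disable();  /* list_splice_tail_init(iop_list, &list) */
	fake_local_irq_enable();
	for (int i = 0; i < nr_entries; i++)
		;  /* poll entries on the private list, no toggling needed */
	fake_local_irq_disable();  /* list_splice(&list, iop_list) */
	fake_local_irq_enable();
	return irq_toggles;
}
```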
+struct sed_cb_data {
+	sec_cb		*cb;
+	void		*cb_data;
+	struct nvme_command cmd;
+};
+
+static void sec_submit_endio(struct request *req, int error)
+{
+	struct sed_cb_data *sed_data = req->end_io_data;
+
+	if (sed_data->cb)
+		sed_data->cb(error,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks useful,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Again,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
and again,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Hi Robert,
Hey Robert, Christoph,
please explain your use cases that isn't handled. The one and only
reason to set MSDBD to 1 is to make the code a lot simpler given that
there is no real use case for supporting more.
RDMA uses memory registrations to register large and possibly
Looks fine,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Thanks for moving it,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
NBD can become contended on its single connection. We have to serialize all
writes and we can only process one read response at a time. Fix this by
allowing userspace to provide multiple connections to a single nbd device. This
coupled with block-mq drastically increases performance in
If hardware queues are stopped for some event, like the device has been
suspended by power management, requests allocated on that hardware queue
are indefinitely stuck causing a queue freeze to wait forever.
I have a problem with this patch. IMO, this is a general issue, so
why do we tie a
This looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
The offline CPUs need to be assigned to something in case they come online
later, otherwise anyone using the mapping for things other than affinity
will have blank entries for that CPU.
I don't really like the idea behind it. Back when we came up with
this code I had some discussion with
We need to leave the block queues stopped if we're changing the tagset's
number of queues.
Umm, Don't we need to fail these requests? What am I missing?
Won't these requests block until timeout expiration and will
trigger error recovery again?
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Add a helper to calculate the actual data transfer size for special
payload requests.
Signed-off-by: Christoph Hellwig
---
include/linux/blkdev.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index
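A userspace sketch of what such a helper boils down to, built against minimal stand-in types rather than the real kernel structures:

```c
/* Minimal stand-ins; only the fields the helper reads are modeled. */
#define RQF_SPECIAL_PAYLOAD (1u << 0)

struct bio_vec {
	unsigned int bv_len;
};

struct request {
	unsigned int rq_flags;
	unsigned int data_len;       /* what blk_rq_bytes() would report */
	struct bio_vec special_vec;  /* discard/write-same payload */
};

static unsigned int blk_rq_bytes(const struct request *rq)
{
	return rq->data_len;
}

/*
 * For special-payload requests (discard, write same) the actual data
 * transferred is described by special_vec, not by the request's
 * nominal data length.
 */
static unsigned int blk_rq_payload_bytes(const struct request *rq)
{
	if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
		return rq->special_vec.bv_len;
	return blk_rq_bytes(rq);
}
```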
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
);
set->nr_hw_queues = nr_hw_queues;
+ if (set->ops->map_queues)
+ set->ops->map_queues(set);
+ else
+ blk_mq_map_queues(set);
+
Makes sense,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Hey Josef,
Since we are in the memory reclaim path we need our recv work to be on a
workqueue that has WQ_MEM_RECLAIM set so we can avoid deadlocks. Also
set WQ_HIGHPRI since we are in the completion path for IO.
Really a workqueue per device?? Did this really give performance
advantage? Can
Hey Josef,
To prepare for dynamically adding new nbd devices to the system switch
from using an array for the nbd devices and instead use an idr. This
copies what loop does for keeping track of its devices.
I think ida_simple_* is simpler and sufficient here isn't it?
I'd like to attend LSF/MM and would like to discuss polling for block
drivers.
Currently there is blk-iopoll, but it is not as widely used as NAPI in
the networking field, and according to Sagi's findings in [1] performance
with polling is not on par with IRQ usage.
On LSF/MM I'd like to
Hi all,
I'd like to attend LSF/MM and would like to discuss polling for block drivers.
Currently there is blk-iopoll, but it is not as widely used as NAPI in the
networking field, and according to Sagi's findings in [1] performance with
polling is not on par with IRQ usage.
On LSF/MM I'd
I agree with Jens that we'll need some analysis if we want the
discussion to be effective, and I can spend some time on this if I
can find volunteers with high-end nvme devices (I only have access
to client nvme devices).
I have a P3700 but somehow burned the FW. Let me see if I can bring it back
Hi Folks,
I would like to propose a general discussion on Storage stack and device driver
testing.
I think it's very useful and needed.
Purpose:
The main objective of this discussion is to address the need for
a Unified Test Automation Framework which can be used by
Hey Coly,
Also I receive reports from users that raid1 performance is desired when
it is built on NVMe SSDs as a cache (maybe bcache or dm-cache). I am
working on some raid1 performance improvement (e.g. new raid1 I/O
barrier and lockless raid1 I/O submit), and have some more ideas to discuss.
**Note: when I ran multiple threads on more cpus the performance
degradation phenomenon disappeared, but I tested on a VM with
qemu emulation backed by null_blk so I figured I had some other
bottleneck somewhere (that's why I asked for some more testing).
That could be because of the vmexits
Hey Josef,
To prepare for dynamically adding new nbd devices to the system switch
from using an array for the nbd devices and instead use an idr. This
copies what loop does for keeping track of its devices.
I think ida_simple_* is simpler and sufficient here isn't it?
I use more of the
On 26/03/17 05:18, sba...@raithlin.com wrote:
From: Stephen Bates
In order to bucket IO for the polling algorithm we use a sysfs entry
to set the filter value. It is signed and we will use that as follows:
0 : No filtering. All IO are considered in stat generation
This series introduces IBNBD/IBTRS kernel modules.
IBNBD (InfiniBand network block device) allows for an RDMA transfer of block IO
over InfiniBand network. The driver presents itself as a block device on client
side and transmits the block requests in a zero-copy fashion to the server-side
via
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
You don't mention what you are running? But I'm assuming it was my 4.12
branch.
Ehh, details...
If so, this is fixed in a later revision of it. If you pull an
update, it should go away.
Will try, thanks Jens.
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
block/blk-mq-pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
index 966c2169762e..0c3354cf3552 100644
--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -23,7 +23,7 @@
*
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Hey Sagi,
Hey Nic
Let's make 'batch' into a backend specific attribute so it can be
changed on-the-fly per device, instead of a hard-coded value.
Here's a quick patch to that end. Feel free to fold it into your
series.
I will, thanks!
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
).
The affinity assignments should match what mlx5 tried to
do earlier but now we do not set affinity to async, cmd
and pages dedicated vectors.
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 3 +-
drivers/net/ethernet/mellanox/mlx5/core/
Now that we have generic code to allocate an array
of irq vectors and even correctly spread their affinity,
correctly handle cpu hotplug events and more, we're much
better off using it.
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
Simply refer to the generic affinity mask helper.
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
drivers/infiniband/hw/mlx5/main.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/infiniband/hw/mlx5/main.c
b/drivers/infiniband/hw/mlx5/main.c
index 4dc0a8