On 4/11/2021 10:26 PM, Javier González wrote:
On 11.04.2021 12:10, Max Gurtovoy wrote:
On 4/10/2021 9:32 AM, Javier González wrote:
On 10 Apr 2021, at 02.30, Chaitanya Kulkarni
wrote:
On 4/9/21 17:22, Max Gurtovoy wrote:
On 2/19/2021 2:45 PM, SelvaKumar S wrote:
This patchset tries
On 4/6/2021 2:53 PM, Jason Gunthorpe wrote:
On Tue, Apr 06, 2021 at 08:09:43AM +0300, Leon Romanovsky wrote:
On Tue, Apr 06, 2021 at 10:37:38AM +0800, Honggang LI wrote:
On Mon, Apr 05, 2021 at 08:23:54AM +0300, Leon Romanovsky wrote:
From: Leon Romanovsky
From Avihai,
Relaxed Ordering
On 2/19/2021 2:45 PM, SelvaKumar S wrote:
This patchset tries to add support for TP4065a ("Simple Copy Command"),
v2020.05.04 ("Ratified")
The Specification can be found in following link.
https://nvmexpress.org/wp-content/uploads/NVM-Express-1.4-Ratified-TPs-1.zip
Simple copy command is a
buf(isert_conn)
by the isert_conn->device->ib_device statement. This patch
frees the device in the correct order.
Signed-off-by: Lv Yunlong
---
drivers/infiniband/ulp/isert/ib_isert.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
looks good,
Reviewed-by: Max Gurtovoy
On 3/11/2021 1:37 PM, Christoph Hellwig wrote:
On Wed, Mar 10, 2021 at 08:31:27AM -0400, Jason Gunthorpe wrote:
Yes, that needs more refactoring. I'm viewing this series as a
"statement of intent" and once we commit to doing this we can go
through the bigger effort to split up vfio_pci_core
On 3/11/2021 9:54 AM, Alexey Kardashevskiy wrote:
On 11/03/2021 13:00, Jason Gunthorpe wrote:
On Thu, Mar 11, 2021 at 12:42:56PM +1100, Alexey Kardashevskiy wrote:
btw can the id list have only vendor ids and not have device ids?
The PCI matcher is quite flexable, see the other patch
On 3/10/2021 4:19 PM, Alexey Kardashevskiy wrote:
On 10/03/2021 23:57, Max Gurtovoy wrote:
On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
On 09/03/2021 19:33, Max Gurtovoy wrote:
The new drivers introduced are nvlink2gpu_vfio_pci.ko and
npu2_vfio_pci.ko.
The first
On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
On 09/03/2021 19:33, Max Gurtovoy wrote:
The new drivers introduced are nvlink2gpu_vfio_pci.ko and
npu2_vfio_pci.ko.
The first will be responsible for providing special extensions for
NVIDIA GPUs with NVLINK2 support for P9 platform
Create a new driver igd_vfio_pci.ko that will be responsible for
providing special extensions for INTEL Graphics card (GVT-d).
Also preserve backward compatibility with vfio_pci.ko vendor specific
extensions.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig | 5
).
Also, preserve backward compatibility for users that were binding
NVLINK2 devices to vfio_pci.ko. Hopefully this compatibility layer will
be dropped in the future
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig | 28 +++-
drivers/vfio/pci/Makefile
This is a preparation for moving vendor specific code from
vfio_pci_core to vendor specific vfio_pci drivers. The next step will be
creating a dedicated module to NVIDIA NVLINK2 devices with P9 extensions
and a dedicated module for Power9 NPU NVLink2 HBAs.
Signed-off-by: Max Gurtovoy
and nvlink2 to a
dedicated module instead of managing their vendor specific extensions in
the core driver.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/vfio_pci_core.c | 1 +
drivers/vfio/pci/vfio_pci_core.h | 5 +
2 files changed, 6 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci_core.c
of vfio_device_ops and will be able
to use container_of mechanism as well (instead of passing void pointers
around the stack).
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/vfio_pci.c | 31 +
drivers/vfio/pci/vfio_pci_core.c | 39 +---
drivers
generic PCI functionality exported from it.
Additionally it will add the needed vendor specific logic for HW
specific features such as Live Migration. Same for the igd_vfio_pci that
will add special extensions for Intel Graphics cards (GVT-d).
Signed-off-by: Max Gurtovoy
---
drivers/vfio/p
of the generic vfio_pci driver.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/vfio_pci_config.c | 68 +++---
drivers/vfio/pci/vfio_pci_core.c| 90 ++---
drivers/vfio/pci/vfio_pci_core.h| 76
drivers/vfio/pci
This is a preparation patch for separating the vfio_pci driver to a
subsystem driver and a generic pci driver. This patch doesn't change
any logic.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/vfio_pci_config.c | 2 +-
drivers/vfio/pci/vfio_pci_core.c
This is a preparation patch for separating the vfio_pci driver to a
subsystem driver and a generic pci driver. This patch doesn't change any
logic.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Makefile| 2 +-
drivers/vfio/pci/{vfio_pci.c => vfio_pci_core.c} | 0
patch 3/9 to emphasize the needed extension for LM feature (From
Cornelia)
- take/release refcount for the pci module during core open/release
- update nvlink, igd and zdev to PowerNV, X86 and s390 extensions for
vfio-pci core
- fix segfault bugs in current vfio-pci zdev code
Max Gurtovoy (9
On 2/2/2021 7:10 PM, Jason Gunthorpe wrote:
On Tue, Feb 02, 2021 at 05:06:59PM +0100, Cornelia Huck wrote:
On the other side, we have the zdev support, which both requires s390
and applies to any pci device on s390.
Is there a reason why CONFIG_VFIO_PCI_ZDEV exists? Why not just always
On 2/5/2021 2:42 AM, Alexey Kardashevskiy wrote:
On 04/02/2021 23:51, Jason Gunthorpe wrote:
On Thu, Feb 04, 2021 at 12:05:22PM +1100, Alexey Kardashevskiy wrote:
It is system firmware (==bios) which puts stuff in the device tree. The
stuff is:
1. emulated pci devices (custom pci
On 2/3/2021 5:24 AM, Alexey Kardashevskiy wrote:
On 03/02/2021 04:41, Max Gurtovoy wrote:
On 2/2/2021 6:06 PM, Cornelia Huck wrote:
On Mon, 1 Feb 2021 11:42:30 -0700
Alex Williamson wrote:
On Mon, 1 Feb 2021 12:49:12 -0500
Matthew Rosato wrote:
On 2/1/21 12:14 PM, Cornelia Huck
On 2/2/2021 10:44 PM, Jason Gunthorpe wrote:
On Tue, Feb 02, 2021 at 12:37:23PM -0700, Alex Williamson wrote:
For the most part, this explicit bind interface is redundant to
driver_override, which already avoids the duplicate ID issue.
No, the point here is to have the ID tables in the PCI
On 2/2/2021 6:06 PM, Cornelia Huck wrote:
On Mon, 1 Feb 2021 11:42:30 -0700
Alex Williamson wrote:
On Mon, 1 Feb 2021 12:49:12 -0500
Matthew Rosato wrote:
On 2/1/21 12:14 PM, Cornelia Huck wrote:
On Mon, 1 Feb 2021 16:28:27 +
Max Gurtovoy wrote:
This patch doesn't change any
vfio-pci driver.
For now, powernv extensions will include only nvlink2.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig| 6 --
drivers/vfio/pci/Makefile | 2 +-
drivers/vfio/pci/vfio_pci_core.c
In case allocation fails, we must behave correctly and exit with error.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/vfio_pci_zdev.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci_zdev.c b/drivers/vfio/pci/vfio_pci_zdev.c
index 175096fcd902..e9ef4239ef7a
vfio-pci driver.
For now, x86 extensions will include only igd.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig| 13 ++---
drivers/vfio/pci/Makefile | 2 +-
drivers/vfio/pci/vfio_pci_core.c| 2
driver.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig | 4 ++--
drivers/vfio/pci/Makefile | 2 +-
drivers/vfio/pci/vfio_pci_core.c | 2 +-
drivers/vfio/pci/vfio_pci_private.h | 2
Zdev static functions do not use the vdev argument. Remove it.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/vfio_pci_zdev.c | 20
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_zdev.c b/drivers/vfio/pci/vfio_pci_zdev.c
index
to the generic vfio_pci.ko.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig | 10 ++
drivers/vfio/pci/Makefile| 3 +
drivers/vfio/pci/mlx5_vfio_pci.c | 253 +++
include/linux/mlx5/vfio_pci.h| 36 +
4 files changed, 302 insertions
and nvlink to a
dedicated module instead of managing their functionality in the core
driver.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/vfio_pci_core.c| 12 +-
drivers/vfio/pci/vfio_pci_core.h| 28
drivers/vfio/pci/vfio_pci_igd.c | 16
i will use vfio_pci_core to register to vfio
subsystem and also use the generic PCI functionality exported from it.
Additionally it will add the needed vendor specific logic for HW
specific features such as Live Migration.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig| 24 +
This is a preparation patch for separating the vfio_pci driver to a
subsystem driver and a generic pci driver. This patch doesn't change any
logic.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Makefile| 2 +-
drivers/vfio/pci/{vfio_pci.c => vfio_pci_core.c} | 0
We've decided to split the submission for now.
Max Gurtovoy (9):
vfio-pci: rename vfio_pci.c to vfio_pci_core.c
vfio-pci: introduce vfio_pci_core subsystem driver
vfio-pci-core: export vfio_pci_register_dev_region function
mlx5-vfio-pci: add new vfio_pci driver for mlx5 devices
vfio-pci/zdev: re
On 2/1/2021 6:32 AM, Alex Williamson wrote:
On Sun, 31 Jan 2021 20:46:40 +0200
Max Gurtovoy wrote:
On 1/28/2021 11:02 PM, Alex Williamson wrote:
On Thu, 28 Jan 2021 17:29:30 +0100
Cornelia Huck wrote:
On Tue, 26 Jan 2021 15:27:43 +0200
Max Gurtovoy wrote:
On 1/26/2021 5:34 AM, Alex
On 1/28/2021 11:02 PM, Alex Williamson wrote:
On Thu, 28 Jan 2021 17:29:30 +0100
Cornelia Huck wrote:
On Tue, 26 Jan 2021 15:27:43 +0200
Max Gurtovoy wrote:
On 1/26/2021 5:34 AM, Alex Williamson wrote:
On Mon, 25 Jan 2021 20:45:22 -0400
Jason Gunthorpe wrote:
On Mon, Jan 25, 2021
On 1/28/2021 6:29 PM, Cornelia Huck wrote:
On Tue, 26 Jan 2021 15:27:43 +0200
Max Gurtovoy wrote:
Hi Alex, Cornelia and Jason,
thanks for reviewing this.
On 1/26/2021 5:34 AM, Alex Williamson wrote:
On Mon, 25 Jan 2021 20:45:22 -0400
Jason Gunthorpe wrote:
On Mon, Jan 25, 2021
On 1/28/2021 4:41 PM, Stefano Garzarella wrote:
From: Max Gurtovoy
This will allow running vDPA for virtio block protocol.
Signed-off-by: Max Gurtovoy
[sgarzare: various cleanups/fixes]
Signed-off-by: Stefano Garzarella
---
v2:
- rebased on top of other changes (dev_attr, get_config
Hi Alex, Cornelia and Jason,
thanks for reviewing this.
On 1/26/2021 5:34 AM, Alex Williamson wrote:
On Mon, 25 Jan 2021 20:45:22 -0400
Jason Gunthorpe wrote:
On Mon, Jan 25, 2021 at 04:31:51PM -0700, Alex Williamson wrote:
We're supposed to be enlightened by a vendor driver that does
i will use vfio_pci_core to register to vfio
subsystem and also use the generic PCI functionality exported from it.
Additionally it will add the needed vendor specific logic for HW
specific features such as Live Migration.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig| 12 +
to the generic vfio_pci.ko.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Kconfig | 10 ++
drivers/vfio/pci/Makefile| 3 +
drivers/vfio/pci/mlx5_vfio_pci.c | 253 +++
include/linux/mlx5/vfio_pci.h| 36 +
4 files changed, 302 insertions
This is a preparation patch for separating the vfio_pci driver to a
subsystem driver and a generic pci driver. This patch doesn't change any
logic.
Signed-off-by: Max Gurtovoy
---
drivers/vfio/pci/Makefile| 2 +-
drivers/vfio/pci/{vfio_pci.c => vfio_pci_core.c} | 0
or adding vendor extension to vfio-pci devices. As the
changes to the subsystem must be defined as a pre-condition for
this work, we've decided to split the submission for now.
Max Gurtovoy (3):
vfio-pci: rename vfio_pci.c to vfio_pci_core.c
vfio-pci: introduce vfio_pci_core subsyst
out in small patches
- left batch_mapping module parameter in the core [Jason]
Max Gurtovoy (2):
vdpa_sim: remove hard-coded virtq count
vdpa: split vdpasim to core and net modules
Stefano Garzarella (15):
vdpa: remove unnecessary 'default n' in Kconfig entries
vdpa_sim: remove
On 10/25/2020 1:51 PM, zhenwei pi wrote:
Hit a kernel warning:
refcount_t: underflow; use-after-free.
WARNING: CPU: 0 PID: 0 at lib/refcount.c:28
RIP: 0010:refcount_warn_saturate+0xd9/0xe0
Call Trace:
nvme_rdma_recv_done+0xf3/0x280 [nvme_rdma]
__ib_process_cq+0x76/0x150 [ib_core]
...
Thanks Mauro, small fix for iser
On 10/23/2020 7:33 PM, Mauro Carvalho Chehab wrote:
Some functions have different names between their prototypes
and the kernel-doc markup.
Others need to be fixed, as kernel-doc markups should use this format:
identifier - description
Signed-off-by:
l_gendisk(). However, that seems unnecessary, since as nvme_alloc_ns()
is currently written, we know that device_add_disk() does not need to be
negated.
drivers/nvme/host/core.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
Looks good,
Reviewed-by: Max Gurtovoy
On 6/3/2020 2:32 AM, Jason Gunthorpe wrote:
On Wed, Jun 03, 2020 at 01:40:51AM +0300, Max Gurtovoy wrote:
On 6/3/2020 12:37 AM, Jens Axboe wrote:
On 6/2/20 1:09 PM, Jason Gunthorpe wrote:
On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
On 6/2/20 1:01 PM, Jason Gunthorpe wrote
On 6/3/2020 12:37 AM, Jens Axboe wrote:
On 6/2/20 1:09 PM, Jason Gunthorpe wrote:
On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
On 6/2/2020 5:56 AM, Stephen Rothwell wrote
On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
Hi all,
Hi,
This looks good to me.
Can you share a pointer to the tree so we'll test it in our labs ?
need to re-test:
1. srq per core
2. srq per core + T10-PI
And both will run with shared CQ.
Today's linux-next merge of the block tree
Looks good,
Reviewed-by: Max Gurtovoy
Looks fine,
Reviewed-by: Max Gurtovoy
On 7/10/2019 12:29 AM, Christoph Hellwig wrote:
On Sat, Jul 06, 2019 at 01:06:44PM +0300, Max Gurtovoy wrote:
+ /* check if multipath is enabled and we have the capability */
+ if (!multipath)
+ return 0;
+ if (!ctrl->subsys || ((ctrl->subsys->cmic
On 7/5/2019 5:05 PM, Marta Rybczynska wrote:
Fix a crash with multipath activated. It happens when ANA log
page is larger than MDTS and because of that ANA is disabled.
The driver then tries to access unallocated buffer when connecting
to a nvme target. The signature is as follows:
[
ort after it's freed.
+*/
+ flush_workqueue(nvme_delete_wq);
}
static const struct nvmet_fabrics_ops nvme_loop_ops = {
Looks good:
Reviewed-by: Max Gurtovoy
On 7/5/2019 12:01 AM, Logan Gunthorpe wrote:
On 2019-07-04 3:00 p.m., Max Gurtovoy wrote:
Hi Logan,
On 7/4/2019 2:03 AM, Logan Gunthorpe wrote:
When a port is removed through configfs, any connected controllers
are still active and can still send commands. This causes a
use-after-free bug
Hi Logan,
On 7/4/2019 2:03 AM, Logan Gunthorpe wrote:
When a port is removed through configfs, any connected controllers
are still active and can still send commands. This causes a
use-after-free bug which is detected by KASAN for any admin command
that dereferences req->port (like in
On 6/17/2019 3:19 PM, Christoph Hellwig wrote:
This ensures all proper DMA layer handling is taken care of by the
SCSI midlayer.
Signed-off-by: Christoph Hellwig
Looks good,
Reviewed-by: Max Gurtovoy
On 5/3/2019 3:29 PM, Christoph Hellwig wrote:
On Thu, May 02, 2019 at 02:47:57PM +0300, Maxim Levitsky wrote:
If the mdev device driver also sets the
NVME_F_MDEV_DMA_SUPPORTED, the mdev core will
dma map all the guest memory into the nvme device,
so that nvme device driver can use dma
the return -ENOSYS with a break
for the default case and returning -ENOSYS at the end of the
function. This allows len to be removed. Also remove redundant
break that follows a return statement.
Signed-off-by: Colin Ian King
Looks good,
Reviewed-by: Max Gurtovoy
On 2/22/2019 7:55 AM, Johannes Thumshirn wrote:
On 22/02/2019 01:41, Chaitanya Kulkarni wrote:
[...]
As per specified in the patch, this is only useful for testing, then we
should modify the test scripts so that on creation of the ctrl we switch
to the buffered I/O before running fio.
Or on
| (key ? 1 << 3 : 0);
return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release);
}
Looks good,
Reviewed-by: Max Gurtovoy <m...@mellanox.com>
Hi Jianchao,
On 5/10/2018 12:42 PM, Jianchao Wang wrote:
BUG: KASAN: double-free or invalid-free in nvme_rdma_free_queue+0xf6/0x110
[nvme_rdma]
Workqueue: nvme-reset-wq nvme_rdma_reset_ctrl_work [nvme_rdma]
Call Trace:
dump_stack+0x91/0xeb
print_address_description+0x6b/0x290
and invoke
nvme_rdma_stop_queue in all the failed cases after nvme_rdma_start_queue
Looks good,
Reviewed-by: Max Gurtovoy <m...@mellanox.com>
Hi Sasha,
please consider taking a small fix for this one (also useful for 4.15):
commit d3b9e8ad425cfd5b9116732e057f1b48e4d3bcb8
Author: Max Gurtovoy <m...@mellanox.com>
Date: Mon Mar 5 20:09:48 2018 +0200
RDMA/core: Reduce poll batch for direct cq polling
Fix warning limit for kernel stack
On 4/2/2018 8:38 PM, Keith Busch wrote:
Thanks, I've applied the patch with a simpler changelog explaining
the bug.
Thanks Rodrigo and Keith, I've tested with/w.o the patch and it works
well (with the fix only).
-Max.
Linux-nvme mailing list
On 2/28/2018 8:55 PM, Doug Ledford wrote:
On Wed, 2018-02-28 at 11:50 +0200, Max Gurtovoy wrote:
On 2/28/2018 2:21 AM, Bart Van Assche wrote:
On 02/27/18 14:15, Max Gurtovoy wrote:
-static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc
*poll_wc)
+static int __ib_process_cq
On 2/28/2018 2:21 AM, Bart Van Assche wrote:
On 02/27/18 14:15, Max Gurtovoy wrote:
-static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc
*poll_wc)
+static int __ib_process_cq(struct ib_cq *cq, int budget, struct ib_wc
*poll_wc,
+ int batch
On 2/28/2018 12:09 AM, Jason Gunthorpe wrote:
On Thu, Feb 22, 2018 at 05:39:09PM +0200, Sagi Grimberg wrote:
The only reason why I added this array on-stack was to allow consumers
that did not use ib_alloc_cq api to call it, but that seems like a
wrong decision when thinking it over again
On 2/21/2018 3:44 PM, Sagi Grimberg wrote:
On Tue, 2018-02-20 at 21:59 +0100, Arnd Bergmann wrote:
/* # of WCs to poll for with a single call to ib_poll_cq */
-#define IB_POLL_BATCH 16
+#define IB_POLL_BATCH 8
The purpose of batch polling is to minimize contention on
On 2/20/2018 11:47 PM, Chuck Lever wrote:
On Feb 20, 2018, at 4:14 PM, Bart Van Assche wrote:
On Tue, 2018-02-20 at 21:59 +0100, Arnd Bergmann wrote:
/* # of WCs to poll for with a single call to ib_poll_cq */
-#define IB_POLL_BATCH 16
+#define IB_POLL_BATCH
On 2/6/2018 11:48 AM, Sagi Grimberg wrote:
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
I'll pick this one up unless someone thinks I shouldn't..
Looks good to me (I can imagine what scenario failed this :) ),
Reviewed-by: Max Gurtovoy <m...@mellanox.com>
On 2/1/2018 10:21 AM, Greg KH wrote:
On Tue, Jan 30, 2018 at 10:12:51AM +0100, Marta Rybczynska wrote:
Hello Mellanox maintainers,
I'd like to ask you to OK backporting two patches in mlx5 driver to 4.9 stable
tree (they're in master for some time already).
We have multiple deployment in 4.9
to me,
Reviewed-by: Max Gurtovoy <m...@mellanox.com>
()
Delete an unnecessary variable initialisation in iser_send_data_out()
Combine substrings for three messages
drivers/infiniband/ulp/iser/iser_initiator.c | 16 ++--
1 file changed, 6 insertions(+), 10 deletions(-)
This series looks good to me,
Reviewed-by: Max Gurtovoy
);
ret = -EINVAL;
+ kfree(p);
goto out;
}
+ kfree(p);
break;
case NVMF_OPT_DUP_CONNECT:
opts->duplicate_connect = true;
Looks good,
Reviewed-by: Max Gurtovoy
On 1/18/2018 12:10 PM, Jianchao Wang wrote:
After Sagi's commit (nvme-rdma: fix concurrent reset and reconnect),
both nvme-fc/rdma have following pattern:
RESETTING- quiesce blk-mq queues, teardown and delete queues/
connections, clear out outstanding IO requests...
hi Jianchao Wang,
On 1/17/2018 6:54 AM, Jianchao Wang wrote:
Currently, the ctrl->state will be changed to NVME_CTRL_RESETTING
before queue the reset work. This is not so strict. There could be
a big gap before the reset_work callback is invoked. In addition,
there is some disable work in the
On 1/15/2018 3:28 PM, Max Gurtovoy wrote:
On 1/14/2018 11:48 AM, Sagi Grimberg wrote:
Currently, the ctrl->state will be changed to NVME_CTRL_RESETTING
before queue the reset work. This is not so strict. There could be
a big gap before the reset_work callback is invoked. In addit
On 1/14/2018 11:48 AM, Sagi Grimberg wrote:
Currently, the ctrl->state will be changed to NVME_CTRL_RESETTING
before queue the reset work. This is not so strict. There could be
a big gap before the reset_work callback is invoked. In addition,
there is some disable work in the reset_work
Hi Greg/Bjorn,
On 1/2/2018 9:27 PM, Greg Kroah-Hartman wrote:
On Tue, Jan 02, 2018 at 01:00:03PM -0600, Bjorn Helgaas wrote:
[+cc Greg, linux-kernel]
Hi Max,
Thanks for the report!
On Tue, Jan 02, 2018 at 01:50:23AM +0200, Max Gurtovoy wrote:
hi all,
I encountered a strange phenomena using
1 - 100 of 127 matches